\section{Introduction} The discovery of new molecules with desired properties is critical in many fields, such as drug and material design~\citep{hajduk2007decade,mandal2009rational,kang2006electrodes,pyzer2015high}. However, brute-force search in the overwhelming molecular space is extremely challenging. Recently, inverse molecular design~\citep{zunger2018inverse} has provided an efficient way to explore the molecular space by directly predicting promising molecules that exhibit desired properties. A natural way to do inverse molecular design is to train a conditional generative model~\citep{sanchez2018inverse}. Formally, it learns a distribution of molecules conditioned on certain properties from data, and new molecules are predicted by sampling from the distribution with the condition set to desired properties. Among them, \textit{equivariant diffusion models} (EDM)~\citep{hoogeboom2022equivariant} leverage current state-of-the-art diffusion models~\citep{ho2020denoising}, which involve a forward process to perturb data and a reverse process to generate 3D molecules conditionally or unconditionally. While EDM generates stable and valid 3D molecules, we argue that a single conditional generative model is insufficient for generating accurate molecules that exhibit desired properties (see Table~\ref{tab:variant} and Table~\ref{tab:multi} for an empirical verification). In this work, we propose \textit{equivariant energy-guided stochastic differential equations} (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE formalizes the generation process as an equivariant stochastic differential equation and plugs in energy functions to improve the controllability of generation. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. We apply EEGSDE to various applications by carefully designing task-specific energy functions. When targeted to quantum properties, EEGSDE is able to generate more accurate molecules than EDM, e.g., reducing the mean absolute error by more than 30\% on the dipole moment property. When targeted to specific molecular structures, EEGSDE captures the structure information in molecules better than EDM, e.g., improving the similarity to target structures by more than 10\%. Furthermore, EEGSDE is able to generate molecules targeted to multiple properties by combining the corresponding energy functions linearly. These results demonstrate that our EEGSDE enables a flexible and controllable generation of molecules, providing a smart way to explore the chemical space. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{imgs/eegsde.pdf} \vspace{-.8cm} \caption{Overview of our EEGSDE. EEGSDE iteratively generates molecules with desired properties (represented by the condition $c$) by adopting the guidance of energy functions in each step. As the energy function is invariant to the rotational transformation ${\bm{R}}$, its gradient (i.e., the energy guidance) is equivariant to ${\bm{R}}$, and therefore the distribution of generated samples is invariant to ${\bm{R}}$.} \label{fig:eegsde} \end{figure} \section{Related Work} \textbf{Diffusion models} were initially proposed by \cite{sohl2015deep}. Recently, they have been better understood in theory by connecting them to score matching and stochastic differential equations (SDEs)~\citep{ho2020denoising,song2020score}.
After that, diffusion models have shown strong empirical performance in many applications~\citep{dhariwal2021diffusion,ramesh2022hierarchical,chen2020wavegrad,kong2020diffwave}. There are also variants proposed to improve or accelerate diffusion models~\citep{nichol2021improved,vahdat2021score,dockhorn2021score,bao2022analytic,bao2022estimating,salimans2022progressive,lu2022dpm}. \textbf{Guidance} is a technique to control the generation process of diffusion models. Initially, \cite{song2020score,dhariwal2021diffusion} use classifier guidance to generate samples belonging to a class. Then, the guidance is extended to CLIP~\citep{radford2021learning} for text-to-image generation, and to semantic-aware energy~\citep{zhao2022egsde} for image-to-image translation. Prior guidance methods focus on image data and are nontrivial to apply to molecules, since they do not consider the geometric symmetry. In contrast, our work proposes a general guidance framework for 3D molecules, which leverages the geometric symmetry of molecules. \textbf{Molecule generation.} Several works attempt to model molecules as 3D objects via deep generative models~\citep{nesterov20203dmolnet,gebauer2019symmetry,satorras2021n,hoffmann2019generating,hoogeboom2022equivariant}. Among them, the most relevant one is the equivariant diffusion model (EDM)~\citep{hoogeboom2022equivariant}, which generates molecules in an iterative denoising manner. Benefiting from recent advances in diffusion models, EDM is stable to train and is able to generate high-quality molecules. We provide a formal description of EDM in Section~\ref{sec:background}. Some other methods generate simplified representations of molecules, such as 1D SMILES strings~\citep{weininger1988smiles} and 2D graphs of molecules. These include variational autoencoders~\citep{kusner2017grammar,dai2018syntax,jin2018junction,simonovsky2018graphvae,liu2018constrained}, normalizing flows~\citep{madhawa2019graphnvp,zang2020moflow,luo2021graphdf}, generative adversarial networks~\citep{bian2019deep,assouel2018defactor}, and autoregressive models~\citep{popova2019molecularrnn,flam2021keeping}. \textbf{Inverse molecular design.} Generative models have been applied to inverse molecular design. For example, conditional autoregressive models~\citep{gebauer2022inverse} and EDM~\citep{hoogeboom2022equivariant} directly generate 3D molecules with desired quantum properties. \cite{gebauer2019symmetry} also finetune pretrained generative models on a biased subset to generate 3D molecules with small HOMO-LUMO gaps. In contrast to these conditional generative models, our work further proposes a guidance method, a flexible way to control the generation process of molecules. Some other methods apply optimization methods to search for molecules with desired properties, such as reinforcement learning~\citep{zhou2019optimization,you2018graph} and genetic algorithms~\citep{jensen2019graph,nigam2019augmenting}. These optimization methods generally consider the 1D SMILES strings or 2D graphs of molecules, and the 3D information is not provided. \section{Background} \label{sec:background} \textbf{3D representation of molecules.} Suppose a molecule has $M$ atoms and let ${\bm{x}}^i \in {\mathbb{R}}^n$ ($n=3$ in general) be the coordinate of the $i$th atom. The collection of coordinates ${{\bm{x}}} = ({\bm{x}}^1, \dots, {\bm{x}}^M) \in {\mathbb{R}}^{Mn}$ determines the \textit{conformation} of the molecule. In addition to the coordinate, each atom is also associated with an atom feature, e.g., the atom type.
We use ${\bm{h}}^i \in {\mathbb{R}}^d$ to represent the atom feature of the $i$th atom, and use ${\bm{h}} = ({\bm{h}}^1,\dots,{\bm{h}}^M) \in {\mathbb{R}}^{Md}$ to represent the collection of atom features in a molecule. We use a tuple ${\bm{z}} = ({\bm{x}}, {\bm{h}})$ to represent a molecule, which contains both the 3D geometry information and the atom feature information. \textbf{Equivariance and invariance.} Suppose ${\bm{R}}$ is a transformation. A distribution $p({\bm{x}}, {\bm{h}})$ is said to be \textit{invariant} to ${\bm{R}}$ if $p({\bm{x}}, {\bm{h}}) = p({\bm{R}} {\bm{x}}, {\bm{h}})$ holds for all ${\bm{x}}$ and ${\bm{h}}$. Here ${\bm{R}} {\bm{x}} = ( {\bm{R}} {\bm{x}}^1, \dots, {\bm{R}} {\bm{x}}^M )$ is applied to each coordinate. A function $({\bm{a}}^x, {\bm{a}}^h) = {\bm{f}}({\bm{x}}, {\bm{h}})$ that has two components ${\bm{a}}^x, {\bm{a}}^h$ in its output is said to be \textit{equivariant} to ${\bm{R}}$ if ${\bm{f}}({\bm{R}} {\bm{x}}, {\bm{h}}) = ({\bm{R}} {\bm{a}}^x, {\bm{a}}^h)$ holds for all ${\bm{x}}$ and ${\bm{h}}$. A function ${\bm{f}}({\bm{x}}, {\bm{h}})$ is said to be \textit{invariant} to ${\bm{R}}$ if ${\bm{f}}({\bm{R}} {\bm{x}}, {\bm{h}}) = {\bm{f}}({\bm{x}}, {\bm{h}})$ holds for all ${\bm{x}}$ and ${\bm{h}}$. \textbf{Zero CoM subspace.} It has been shown that the invariance to translational and rotational transformations is an important factor for the success of 3D molecule modeling~\citep{kohler2020equivariant,xu2022geodiff}. However, translational invariance is impossible for a distribution in the full space ${\mathbb{R}}^{Mn}$~\citep{satorras2021n}. Nevertheless, we can view two collections of coordinates ${\bm{x}}$ and ${\bm{y}}$ as equivalent if ${\bm{x}}$ can be obtained from ${\bm{y}}$ by a translation, since a translation does not change the identity of a molecule. Such an equivalence relation partitions the whole space ${\mathbb{R}}^{Mn}$ into disjoint equivalence classes. Indeed, all elements in the same equivalence class represent the same conformation, and we can use the element with zero center of mass (CoM), i.e., $\frac{1}{M}\sum_{i=1}^M {\bm{x}}^i = {\bm{0}}$, as the specific representation. These elements collectively form the \textit{zero CoM linear subspace} ${X}$~\citep{xu2022geodiff,hoogeboom2022equivariant}, and the rest of the paper always uses elements in ${X}$ to represent conformations. \textbf{Equivariant graph neural network.} \cite{satorras2021egnn} propose \textit{equivariant graph neural networks} (EGNNs), which incorporate the equivariance inductive bias into neural networks. Specifically, $({\bm{a}}^x, {\bm{a}}^h) = {\mathrm{EGNN}}({\bm{x}}, {\bm{h}})$ is a composition of $L$ \textit{equivariant convolutional layers}.
The $l$-th layer takes the tuple $({\bm{x}}_l, {\bm{h}}_l)$ as the input and outputs an updated version $({\bm{x}}_{l+1}, {\bm{h}}_{l+1})$, as follows: \begin{align*} & {\bm{m}}^{ij} = \Phi_m ({\bm{h}}^i_l, {\bm{h}}^j_l, \|{\bm{x}}^i_l - {\bm{x}}^j_l \|_2^2, e^{ij}; {\bm{\theta}}_m), \ w^{ij} = \Phi_w({\bm{m}}^{ij}; {\bm{\theta}}_w), \ {\bm{h}}^i_{l+1} = \Phi_h ({\bm{h}}^i_l, \sum_{j \neq i} w^{ij} {\bm{m}}^{ij}; {\bm{\theta}}_h), \\ & {\bm{x}}^i_{l+1} = {\bm{x}}^i_l + \sum_{j\neq i} \frac{{\bm{x}}^i_l - {\bm{x}}^j_l}{\|{\bm{x}}^i_l - {\bm{x}}^j_l \|_2 + 1} \Phi_x({\bm{h}}^i_l, {\bm{h}}^j_l, \|{\bm{x}}^i_l - {\bm{x}}^j_l \|_2^2, e^{ij}; {\bm{\theta}}_x), \end{align*} where $\Phi_m, \Phi_w, \Phi_h, \Phi_x$ are parameterized by fully connected neural networks with parameters ${\bm{\theta}}_m, {\bm{\theta}}_w, {\bm{\theta}}_h, {\bm{\theta}}_x$ respectively, and $e^{ij}$ are optional feature attributes. We can verify that these layers are equivariant to orthogonal transformations, which include rotational transformations as special cases. As their composition, the EGNN is also equivariant to orthogonal transformations. Furthermore, let ${\mathrm{EGNN}}^h({\bm{x}}, {\bm{h}}) = {\bm{a}}^h$, i.e., the second component in the output of the EGNN. Then ${\mathrm{EGNN}}^h({\bm{x}}, {\bm{h}})$ is invariant to orthogonal transformations. \textbf{Equivariant diffusion models (EDM)}~\citep{hoogeboom2022equivariant} are a variant of diffusion models~\citep{ho2020denoising} for molecule data. EDMs gradually inject noise into the molecule data ${\bm{z}} = ({\bm{x}}, {\bm{h}})$ via a forward process \begin{align} \label{eq:edm_f} \! q({\bm{z}}_{1:N}|{\bm{z}}_0) \!=\! \prod_{n=1}^N q({\bm{z}}_n|{\bm{z}}_{n-1}), \ q({\bm{z}}_n|{\bm{z}}_{n-1}) = {\mathcal{N}}_X({\bm{x}}_n|\sqrt{\alpha_n} {\bm{x}}_{n-1}, \beta_n) {\mathcal{N}}({\bm{h}}_n|\sqrt{\alpha_n} {\bm{h}}_{n-1}, \beta_n), \end{align} where $\alpha_n$ and $\beta_n$ represent the noise schedule and satisfy $\alpha_n+\beta_n=1$, and ${\mathcal{N}}_X$ represents the Gaussian distribution in the zero CoM subspace $X$ (see its formal definition in Appendix~\ref{sec:rsde_proof}). Let $\overline{\alpha}_n = \alpha_1\alpha_2\cdots\alpha_n$, $\overline{\beta}_n = 1-\overline{\alpha}_n$ and $\tilde{\beta}_n = \beta_n \overline{\beta}_{n-1} / \overline{\beta}_n$. To generate samples, the forward process is reversed using a Markov chain: \begin{align} \label{eq:edm_r} \! p({\bm{z}}_{0:N}) \!=\! p({\bm{z}}_N) \! \prod_{n=1}^N \! p({\bm{z}}_{n-1}|{\bm{z}}_n), p({\bm{z}}_{n-1}|{\bm{z}}_n) \!=\! {\mathcal{N}}_X \! ({\bm{x}}_{n-1}|{\bm{\mu}}_n^x({\bm{z}}_n), \tilde{\beta}_n) {\mathcal{N}}\!({\bm{h}}_{n-1}|{\bm{\mu}}_n^h({\bm{z}}_n), \tilde{\beta}_n). \! \end{align} Here $p({\bm{z}}_N) = {\mathcal{N}}_X({\bm{x}}_N|{\bm{0}}, 1) {\mathcal{N}}({\bm{h}}_N|{\bm{0}}, 1)$.
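As a concrete illustration of the forward process in Eq.~{(\ref{eq:edm_f})}, the following is a minimal NumPy sketch (our own toy code, not the implementation of \cite{hoogeboom2022equivariant}) of one noising step ${\bm{z}}_{n-1}\rightarrow{\bm{z}}_n$; sampling from ${\mathcal{N}}_X$ amounts to projecting a standard Gaussian sample onto the zero CoM subspace:
\begin{verbatim}
import numpy as np

def zero_com(x):
    # project coordinates of shape (M, n) onto the zero CoM subspace X
    return x - x.mean(axis=0, keepdims=True)

def forward_step(x, h, alpha_n, beta_n, rng):
    """One step z_{n-1} -> z_n of the forward process q(z_n | z_{n-1})."""
    eps_x = zero_com(rng.standard_normal(x.shape))  # noise in X, i.e. from N_X
    eps_h = rng.standard_normal(h.shape)            # ordinary Gaussian noise
    x_n = np.sqrt(alpha_n) * x + np.sqrt(beta_n) * eps_x
    h_n = np.sqrt(alpha_n) * h + np.sqrt(beta_n) * eps_h
    return x_n, h_n
\end{verbatim}
Note that if ${\bm{x}}$ starts in $X$, the updated coordinates remain in $X$, since both the scaling and the projected noise preserve the zero CoM constraint.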
The mean ${\bm{\mu}}_n({\bm{z}}_n) = ({\bm{\mu}}_n^x({\bm{z}}_n), {\bm{\mu}}_n^h({\bm{z}}_n))$ is parameterized by a noise prediction network ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_n, n)$, which is trained using an MSE loss, as follows: \begin{align*} {\bm{\mu}}_n({\bm{z}}_n) = \frac{1}{\sqrt{\alpha_n}}({\bm{z}}_n - \frac{\beta_n}{\sqrt{\overline{\beta}_n}} {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_n, n) ),\quad \min_{\bm{\theta}} \mathbb{E}_n \mathbb{E}_{q({\bm{z}}_0, {\bm{z}}_n)} w(n) \| {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_n, n) - {\bm{\epsilon}}_n \|^2, \end{align*} where ${\bm{\epsilon}}_n = \frac{{\bm{z}}_n - \sqrt{\overline{\alpha}_n} {\bm{z}}_0}{\sqrt{\overline{\beta}_n}}$ is the standard Gaussian noise injected into ${\bm{z}}_0$ and $w(n)$ is a weighting term. \cite{hoogeboom2022equivariant} show that the distribution of generated samples $p({\bm{z}}_0)$ is invariant to rotational transformations if the noise prediction network is equivariant to orthogonal transformations. In Section~\ref{sec:equi_sde}, we extend this proposition to the SDE formulation of molecular diffusion modeling. \cite{hoogeboom2022equivariant} also present a conditional version of EDM for inverse molecular design by adding an extra input of the condition to the noise prediction network. \section{Equivariant Energy-Guided SDE} In this part, we introduce our equivariant energy-guided SDE (EEGSDE), which is illustrated in Figure~\ref{fig:eegsde}. EEGSDE is based on the SDE formulation of molecular diffusion modeling, which is described in Section~\ref{sec:sde_prod} and Section~\ref{sec:equi_sde}. Then, we formally present our EEGSDE in Section~\ref{sec:eegsde}. We provide proofs and derivations in Appendix~\ref{sec:derivations}. \subsection{SDE in the Product Space} \label{sec:sde_prod} Recall that a molecule is represented as a tuple ${\bm{z}} = ({\bm{x}}, {\bm{h}})$, where ${\bm{x}} = ({\bm{x}}^1, \dots, {\bm{x}}^M) \in {X}$ represents the conformation and ${\bm{h}} = ({\bm{h}}^1, \dots, {\bm{h}}^M) \in {\mathbb{R}}^{M d}$ represents atom features. Here ${X} = \{{\bm{x}} \in {\mathbb{R}}^{Mn}: \frac{1}{M} \sum_{i=1}^M {\bm{x}}^i = {\bm{0}} \}$ is the zero CoM subspace as mentioned in Section~\ref{sec:background}, and $d$ is the dimension of features. We first introduce a continuous-time diffusion process $\{{\bm{z}}_t\}_{0\leq t \leq T}$ in the product space ${X} \times {\mathbb{R}}^{Md}$, which gradually adds noise to both ${\bm{x}}$ and ${\bm{h}}$. This can be described by the following forward SDE: \begin{align} \label{eq:sde} \mathrm{d} {\bm{z}} = f(t) {\bm{z}} \mathrm{d} t + g(t) \mathrm{d} ({\bm{w}}_x, {\bm{w}}_h), \quad {\bm{z}}_0 \sim q({\bm{z}}_0), \end{align} where $f(t)$ and $g(t)$ are two scalar functions, ${\bm{w}}_x$ and ${\bm{w}}_h$ are independent standard Wiener processes in ${X}$ and ${\mathbb{R}}^{Md}$ respectively, and the SDE starts from the data distribution $q({\bm{z}}_0)$. Note that ${\bm{w}}_x$ can be constructed by subtracting the CoM of a standard Wiener process ${\bm{w}}$ in ${\mathbb{R}}^{Mn}$, i.e., ${\bm{w}}_x = {\bm{w}} - \overline{{\bm{w}}}$, where $\overline{{\bm{w}}} = \frac{1}{M} \sum_{i=1}^M {\bm{w}}^i$ is the CoM of ${\bm{w}} = ({\bm{w}}^1, \dots, {\bm{w}}^M)$. It can be shown that the SDE has a linear Gaussian transition kernel $q({\bm{z}}_t|{\bm{z}}_s) = q({\bm{x}}_t|{\bm{x}}_s) q({\bm{h}}_t|{\bm{h}}_s)$ from ${\bm{z}}_s$ to ${\bm{z}}_t$, where $0 \leq s < t \leq T$.
Specifically, there exist two scalars $\alpha_{t|s}$ and $\beta_{t|s}$, s.t., $q({\bm{x}}_t|{\bm{x}}_s) = {\mathcal{N}}_X({\bm{x}}_t|\sqrt{\alpha_{t|s}} {\bm{x}}_s, \beta_{t|s})$ and $q({\bm{h}}_t|{\bm{h}}_s) = {\mathcal{N}}({\bm{h}}_t|\sqrt{\alpha_{t|s}} {\bm{h}}_s, \beta_{t|s})$. Here ${\mathcal{N}}_X$ denotes the Gaussian distribution in the subspace ${X}$; see Appendix~\ref{sec:rsde_proof} for its formal definition. Indeed, the forward process of EDM in Eq.~{(\ref{eq:edm_f})} is a discretization of the forward SDE in Eq.~{(\ref{eq:sde})}. To generate molecules, we reverse the diffusion process in Eq.~{(\ref{eq:sde})} from $T$ to $0$. Such a time reversal forms another SDE, which can be represented in both the \textit{score function form} and the \textit{noise prediction form}: \begin{align} \mathrm{d} {\bm{z}} = & [f(t) {\bm{z}} - g(t)^2 \underbrace{(\nabla_{\bm{x}} \log q_t({\bm{z}}) - \overline{\nabla_{\bm{x}} \log q_t({\bm{z}})}, \nabla_{\bm{h}} \log q_t({\bm{z}}))}_{\text{\normalsize{score function form}}}] \mathrm{d} t + g(t) \mathrm{d} (\tilde{{\bm{w}}}_x, \tilde{{\bm{w}}}_h), \nonumber \\ = & [f(t) {\bm{z}} + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q({\bm{z}}_0|{\bm{z}}_t)} {\bm{\epsilon}}_t}_{\text{\normalsize{noise prediction form}}}] \mathrm{d} t + g(t) \mathrm{d} (\tilde{{\bm{w}}}_x, \tilde{{\bm{w}}}_h), \quad {\bm{z}}_T \sim q_T({\bm{z}}_T). \label{eq:rsde} \end{align} Here $q_t({\bm{z}})$ is the marginal distribution of ${\bm{z}}_t$, $\nabla_{\bm{x}} \log q_t ({\bm{z}})$ is the gradient of $\log q_t ({\bm{z}})$ w.r.t. ${\bm{x}}$\footnote{While $q_t ({\bm{z}})$ is defined in ${X} \times {\mathbb{R}}^{Md}$, its domain can be extended to ${\mathbb{R}}^{Mn}\times {\mathbb{R}}^{Md}$ and the gradient is valid. See Remark~\ref{re:marginal} in Appendix~\ref{sec:rsde_proof} for details.}, $\overline{\nabla_{\bm{x}} \log q_t ({\bm{z}})} = \frac{1}{M} \sum_{i=1}^M \nabla_{{\bm{x}}^i} \log q_t({\bm{z}})$ is the CoM of $\nabla_{\bm{x}} \log q_t ({\bm{z}})$, $\mathrm{d} t$ is the infinitesimal negative timestep, $\tilde{{\bm{w}}}_x$ and $\tilde{{\bm{w}}}_h$ are independent reverse-time standard Wiener processes in ${X}$ and ${\mathbb{R}}^{Md}$ respectively, and ${\bm{\epsilon}}_t = \frac{{\bm{z}}_t - \sqrt{\alpha_{t|0}} {\bm{z}}_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected into ${\bm{z}}_0$. Compared to the original SDE introduced by \cite{song2020score}, our reverse SDE in Eq.~{(\ref{eq:rsde})} additionally subtracts the CoM of $\nabla_{\bm{x}} \log q_t({\bm{z}})$. This ensures ${\bm{x}}_t$ always stays in the zero CoM subspace as time flows back. To sample from the reverse SDE in Eq.~{(\ref{eq:rsde})}, we use a noise prediction network ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$ to estimate $\mathbb{E}_{q({\bm{z}}_0|{\bm{z}}_t)} {\bm{\epsilon}}_t$ by minimizing the following MSE loss \begin{align*} \min_{\bm{\theta}} \mathbb{E}_t \mathbb{E}_{q({\bm{z}}_0, {\bm{z}}_t)} w(t) \|{\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t) - {\bm{\epsilon}}_t \|^2, \end{align*} where $t$ is uniformly sampled from $[0, T]$, and $w(t)$ controls the weight of the loss term at time $t$. Note that the noise ${\bm{\epsilon}}_t$ is in the product space ${X} \times {\mathbb{R}}^{Md}$, so we subtract the CoM of the predicted noise of ${\bm{x}}_t$ to ensure ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$ is also in the product space.
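To make the training objective concrete, here is a minimal NumPy-style sketch (our own illustrative code; \texttt{eps\_theta} is a placeholder for the EGNN-based noise prediction network, and \texttt{alpha\_bar}, \texttt{beta\_bar} stand for $\alpha_{t|0}$, $\beta_{t|0}$): it draws a time step, forms ${\bm{z}}_t$ from ${\bm{z}}_0$ via the transition kernel, and computes the MSE between the injected noise and the CoM-projected prediction.
\begin{verbatim}
import numpy as np

def training_loss(x0, h0, eps_theta, alpha_bar, beta_bar, w, T, rng):
    """One Monte Carlo sample of the MSE loss; alpha_bar(t), beta_bar(t)
    are the coefficients of the transition kernel q(z_t | z_0)."""
    com = lambda v: v.mean(axis=0, keepdims=True)
    t = rng.uniform(0.0, T)
    eps_x = rng.standard_normal(x0.shape); eps_x -= com(eps_x)  # noise in X
    eps_h = rng.standard_normal(h0.shape)
    x_t = np.sqrt(alpha_bar(t)) * x0 + np.sqrt(beta_bar(t)) * eps_x
    h_t = np.sqrt(alpha_bar(t)) * h0 + np.sqrt(beta_bar(t)) * eps_h
    pred_x, pred_h = eps_theta((x_t, h_t), t)
    pred_x = pred_x - com(pred_x)  # keep the prediction in the product space
    return w(t) * (np.sum((pred_x - eps_x) ** 2)
                   + np.sum((pred_h - eps_h) ** 2))
\end{verbatim}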
Substituting ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$ into Eq.~{(\ref{eq:rsde})}, we get an approximate reverse-time SDE parameterized by ${\bm{\theta}}$: \begin{align} \label{eq:rsde_approx} \mathrm{d} {\bm{z}} = [f(t) {\bm{z}} + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}, t)] \mathrm{d} t + g(t) \mathrm{d} (\tilde{{\bm{w}}}_x, \tilde{{\bm{w}}}_h), \quad {\bm{z}}_T \sim p_T({\bm{z}}_T), \end{align} where $p_T({\bm{z}}_T) = {\mathcal{N}}_X({\bm{x}}_T|{\bm{0}}, 1) {\mathcal{N}}({\bm{h}}_T|{\bm{0}}, 1)$ is a Gaussian prior in the product space that approximates $q_T({\bm{z}}_T)$. We define $p_{\bm{\theta}}({\bm{z}}_0)$ as the marginal distribution of Eq.~{(\ref{eq:rsde_approx})} at time $t=0$, which is the distribution of our generated samples. Similarly to the forward process, the reverse process of EDM in Eq.~{(\ref{eq:edm_r})} is a discretization of the reverse SDE in Eq.~{(\ref{eq:rsde_approx})}. \subsection{Equivariant SDE} \label{sec:equi_sde} To leverage the geometric symmetry in 3D molecular conformation, $p_{\bm{\theta}}({\bm{z}}_0)$ should be invariant to translational and rotational transformations. As mentioned in Section~\ref{sec:background}, the translational invariance of $p_{\bm{\theta}}({\bm{z}}_0)$ is already satisfied by considering the zero CoM subspace. The rotational invariance can be satisfied if the noise prediction network is equivariant to orthogonal transformations, as summarized in the following theorem: \begin{restatable}{theorem}{equisde} \label{thm:equisde} Let $({\bm{\epsilon}}_{\bm{\theta}}^x({\bm{z}}_t, t), {\bm{\epsilon}}_{\bm{\theta}}^h({\bm{z}}_t, t)) = {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$, where ${\bm{\epsilon}}_{\bm{\theta}}^x({\bm{z}}_t, t)$ and ${\bm{\epsilon}}_{\bm{\theta}}^h({\bm{z}}_t, t)$ are the predicted noise of ${\bm{x}}_t$ and ${\bm{h}}_t$ respectively. If for any orthogonal transformation ${\bm{R}} \in {\mathbb{R}}^{n\times n}$, ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$ is equivariant to ${\bm{R}}$, i.e., ${\bm{\epsilon}}_{\bm{\theta}}({\bm{R}} {\bm{x}}_t, {\bm{h}}_t, t) = ({\bm{R}} {\bm{\epsilon}}_{\bm{\theta}}^x({\bm{x}}_t, {\bm{h}}_t, t), {\bm{\epsilon}}_{\bm{\theta}}^h({\bm{x}}_t, {\bm{h}}_t, t))$, and $p_T({\bm{z}}_T)$ is invariant to ${\bm{R}}$, i.e., $p_T({\bm{R}} {\bm{x}}_T, {\bm{h}}_T) = p_T({\bm{x}}_T, {\bm{h}}_T)$, then $p_{\bm{\theta}}({\bm{z}}_0)$ is invariant to any rotational transformation. \end{restatable} As mentioned in Section~\ref{sec:background}, the EGNN satisfies the equivariance constraint, and we parameterize ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}_t, t)$ using an EGNN following \cite{hoogeboom2022equivariant}. See details in Appendix~\ref{sec:npn}. 
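To illustrate Theorem~\ref{thm:equisde}, the following toy NumPy sketch implements a single equivariant convolutional layer from Section~\ref{sec:background}, with random linear maps standing in for the networks $\Phi_m, \Phi_w, \Phi_h, \Phi_x$ (all names and shapes are our own assumptions, not the trained architecture of \cite{satorras2021egnn}), and numerically checks its equivariance to a random orthogonal transformation ${\bm{R}}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, n, d, dm = 5, 3, 4, 8
# random weights standing in for Phi_m, Phi_w, Phi_h, Phi_x
Wm = rng.standard_normal((2 * d + 1, dm))
Ww = rng.standard_normal((dm, 1))
Wh = rng.standard_normal((d + dm, d))
Wx = rng.standard_normal((2 * d + 1, 1))

def layer(x, h):
    x_new, h_new = x.copy(), np.empty_like(h)
    for i in range(M):
        agg = np.zeros(dm)
        for j in range(M):
            if j == i:
                continue
            diff = x[i] - x[j]
            dist2 = diff @ diff          # invariant pairwise input
            feat = np.concatenate([h[i], h[j], [dist2]])
            m_ij = np.tanh(feat @ Wm)                    # Phi_m
            w_ij = 1.0 / (1.0 + np.exp(-(m_ij @ Ww)))    # Phi_w (sigmoid)
            agg += w_ij * m_ij
            # coordinate update: invariant scalar times equivariant direction
            x_new[i] += diff / (np.sqrt(dist2) + 1.0) * np.tanh(feat @ Wx)
        h_new[i] = np.tanh(np.concatenate([h[i], agg]) @ Wh)  # Phi_h
    return x_new, h_new

x, h = rng.standard_normal((M, n)), rng.standard_normal((M, d))
R, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
x1, h1 = layer(x, h)
x2, h2 = layer(x @ R.T, h)
assert np.allclose(x2, x1 @ R.T) and np.allclose(h2, h1)
\end{verbatim}
The check passes because all inputs to the scalar networks (features and squared distances) are invariant to ${\bm{R}}$, while the coordinate update is a sum of directions ${\bm{x}}^i-{\bm{x}}^j$ that transform equivariantly.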
\subsection{Equivariant Energy-Guided SDE} \label{sec:eegsde} \begin{algorithm}[t] \caption{Sample from EEGSDE using the Euler-Maruyama method} \label{alg:sample} \begin{algorithmic} \REQUIRE Number of steps $N$ \STATE $\Delta t = \frac{T}{N}$ \STATE ${\bm{z}} \leftarrow ({\bm{x}} - \overline{{\bm{x}}}, {\bm{h}})$, where ${\bm{x}} \sim {\mathcal{N}}({\bm{0}}, 1)$, ${\bm{h}} \sim {\mathcal{N}}({\bm{0}}, 1)$ \hfill \COMMENT{Sample from the prior $p_T({\bm{z}}_T)$} \FOR{$i = N$ to $1$} \STATE $t \leftarrow i \Delta t$ \STATE ${\bm{g}}_x \leftarrow \nabla_{\bm{x}} E({\bm{z}}, c, t)$, ${\bm{g}}_h \leftarrow \nabla_{\bm{h}} E({\bm{z}}, c, t)$ \hfill\COMMENT{Calculate the gradient of the energy function} \STATE ${\bm{g}} \leftarrow ({\bm{g}}_x - \overline{{\bm{g}}_x}, {\bm{g}}_h)$ \hfill\COMMENT{Subtract the CoM of the gradient} \STATE ${\bm{F}} \leftarrow f(t) {\bm{z}} + g(t)^2 (\frac{1}{\sqrt{\beta_{t|0}}} {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}, t) + {\bm{g}})$ \STATE ${\bm{\epsilon}} \leftarrow ({\bm{\epsilon}}_x - \overline{{\bm{\epsilon}}_x}, {\bm{\epsilon}}_h)$, where ${\bm{\epsilon}}_x \sim {\mathcal{N}}({\bm{0}}, 1)$, ${\bm{\epsilon}}_h \sim {\mathcal{N}}({\bm{0}}, 1)$ \STATE ${\bm{z}} \leftarrow {\bm{z}} - {\bm{F}} \Delta t + g(t) \sqrt{\Delta t} {\bm{\epsilon}}$ \hfill \COMMENT{Update ${\bm{z}}$ according to Eq.~{(\ref{eq:eegsde})}} \ENDFOR \RETURN ${\bm{z}}$ \end{algorithmic} \end{algorithm} Now we describe \textit{equivariant energy-guided SDE} (EEGSDE), which guides the generated molecules of Eq.~{(\ref{eq:rsde_approx})} towards desired properties $c$ by leveraging a time-dependent energy function $E({\bm{z}}, c, t)$: \begin{align} \label{eq:eegsde} \mathrm{d} {\bm{z}} = [& f(t) {\bm{z}} + g(t)^2 (\frac{1}{\sqrt{\beta_{t|0}}} {\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}, t) \nonumber \\ & \! + \underbrace{(\nabla_{\bm{x}} E({\bm{z}}, c, t) - \overline{\nabla_{\bm{x}} E({\bm{z}}, c, t)}, \nabla_{\bm{h}} E({\bm{z}}, c, t))}_{\text{\normalsize{energy gradient taken in the product space}}} )] \mathrm{d} t + g(t) \mathrm{d} (\tilde{{\bm{w}}}_x, \tilde{{\bm{w}}}_h), \ {\bm{z}}_T \sim p_T({\bm{z}}_T), \end{align} which defines a distribution $p_{\bm{\theta}}({\bm{z}}_0|c)$ conditioned on the property $c$. Here the CoM $\overline{\nabla_{\bm{x}} E({\bm{z}}, c, t)}$ of the gradient is subtracted to keep the SDE in the product space, which ensures the translational invariance of $p_{\bm{\theta}}({\bm{z}}_0|c)$. Besides, the rotational invariance is satisfied by using an energy function invariant to orthogonal transformations, as summarized in the following theorem: \begin{restatable}{theorem}{equiegsde} \label{thm:equiegsde} Suppose the assumptions in Theorem~\ref{thm:equisde} hold and $E({\bm{z}}, c, t)$ is invariant to any orthogonal transformation ${\bm{R}}$, i.e., $E({\bm{R}} {\bm{x}}, {\bm{h}}, c, t) = E({\bm{x}}, {\bm{h}}, c, t)$. Then $p_{\bm{\theta}}({\bm{z}}_0|c)$ is invariant to any rotational transformation. \end{restatable} Note that we can also use a conditional model ${\bm{\epsilon}}_{\bm{\theta}}({\bm{z}}, c, t)$ in Eq.~{(\ref{eq:eegsde})}. See Appendix~\ref{sec:cnpn} for details. To sample from $p_{\bm{\theta}}({\bm{z}}_0|c)$, various solvers can be used for Eq.~{(\ref{eq:eegsde})}, such as the Euler-Maruyama method~\citep{song2020score} and the Analytic-DPM sampler~\citep{bao2022analytic,bao2022estimating}. In Algorithm~\ref{alg:sample}, we present the Euler-Maruyama method as an example.
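For readers who prefer code, the following is a minimal NumPy transcription of Algorithm~\ref{alg:sample} (our own sketch; \texttt{eps\_theta} and \texttt{grad\_energy} are placeholders for the trained noise prediction network and the gradient of the energy function, and \texttt{beta} stands for $\beta_{t|0}$):
\begin{verbatim}
import numpy as np

def sample_eegsde(eps_theta, grad_energy, f, g, beta, T, N, M, n, d, c, rng):
    """Euler-Maruyama sampling from EEGSDE (Algorithm 1).
    eps_theta(z, t) -> (eps_x, eps_h): noise prediction network
    grad_energy(z, c, t) -> (g_x, g_h): gradient of the energy E(z, c, t)
    f, g: scalar drift/diffusion coefficients of the SDE."""
    com = lambda v: v.mean(axis=0, keepdims=True)
    x = rng.standard_normal((M, n)); x -= com(x)  # sample prior p_T in X
    h = rng.standard_normal((M, d))
    dt = T / N
    for i in range(N, 0, -1):
        t = i * dt
        gx, gh = grad_energy((x, h), c, t)
        gx = gx - com(gx)                         # keep the guidance in X
        ex, eh = eps_theta((x, h), t)
        Fx = f(t) * x + g(t) ** 2 * (ex / np.sqrt(beta(t)) + gx)
        Fh = f(t) * h + g(t) ** 2 * (eh / np.sqrt(beta(t)) + gh)
        nx = rng.standard_normal((M, n)); nx -= com(nx)
        nh = rng.standard_normal((M, d))
        x = x - Fx * dt + g(t) * np.sqrt(dt) * nx  # update per Eq. (6)
        h = h - Fh * dt + g(t) * np.sqrt(dt) * nh
    return x, h
\end{verbatim}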
Compared with Eq.~{(\ref{eq:rsde_approx})}, Eq.~{(\ref{eq:eegsde})} additionally performs a gradient descent on the energy function. This encourages generated molecules to have a low energy. Thus, we can design the energy function according to the consistency between the molecule ${\bm{z}}$ and the property $c$. In the rest of the paper, we specify the choice of energy functions, and show that these energies improve controllable molecule generation targeted to quantum properties, molecular structures, and even a combination of them. \section{Generating Molecules with Desired Quantum Properties} \label{sec:q} \begin{figure}[t] \begin{minipage}[t]{\textwidth} \captionof{table}{Generation performance targeted to a single quantum property (with its unit indicated in parentheses). We report the mean absolute error (MAE), as well as the novelty, the atom stability (AS), and the molecule stability (MS). We reproduce the L-bound and EDM results, which are consistent with \cite{hoogeboom2022equivariant} (see Appendix~\ref{sec:reproduce}). ``\#Atoms'' uses public results from \cite{hoogeboom2022equivariant}.} \vspace{-.1cm} \label{tab:variant} \renewcommand\arraystretch{1.2} \begin{center} \scalebox{0.82}{ \begin{tabular}{lrrrrlrrrr} \toprule Method & MAE$\downarrow$ & Novelty$\uparrow$ & AS$\uparrow$ & MS$\uparrow$ & Method & MAE$\downarrow$ & Novelty$\uparrow$ & AS$\uparrow$ & MS$\uparrow$ \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} \multicolumn{5}{c}{$C_v$ ({$\frac{\mathrm{cal}}{{\mathrm{mol}}}\mathrm{K}$})} & \multicolumn{5}{c}{$\mu$ (D)} \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} U-bound & 6.873 & - & - & - & U-bound & 1.611 & - & - & - \\ \#Atoms & 1.971 & - & - & - & \#Atoms & 1.053 & - & - & - \\ EDM & 1.066 & 83.64 & \textbf{98.25} & 80.50 & EDM & 1.138 & 84.04 & 98.14 & 80.04 \\ EEGSDE ($s$=1) & 1.038 & 83.71 & 98.21 & \textbf{80.61} & EEGSDE ($s$=0.5) & 0.935 & 84.06 & \textbf{98.20} & \textbf{80.05} \\ EEGSDE ($s$=5) & 0.981 & 83.75 & 98.12 & 79.84 & EEGSDE ($s$=1) & 0.853 & \textbf{84.65} & 98.13 & 79.66 \\ EEGSDE ($s$=10) & \textbf{0.939} & \textbf{83.92} & 98.06 & 79.21 & EEGSDE ($s$=2) & \textbf{0.772} & \textbf{84.65} & 98.08 & 79.07 \\ L-bound & 0.040 & - & - & - & L-bound & 0.043 & - & - & - \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} \multicolumn{5}{c}{$\Delta \varepsilon$ (meV)} & \multicolumn{5}{c}{$\varepsilon_{{\mathrm{HOMO}}}$ (meV)} \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} U-bound & 1460 & - & - & - & U-bound & 644 & - & - & - \\ \#Atoms & 866 & - & - & - & \#Atoms & 426 & - & - & - \\ EDM & 666 & 84.44 & \textbf{98.27} & \textbf{81.83} & EDM & 373 & 84.38 & \textbf{98.24} & \textbf{79.97} \\ EEGSDE ($s$=0.5) & 577 & 83.80 & 98.11 & 80.73 & EEGSDE ($s$=0.1) & 362 & \textbf{84.82} & 98.23 & 79.88 \\ EEGSDE ($s$=1) & 544 & 84.22 & 98.05 & 79.65 & EEGSDE ($s$=0.5) & 319 & 84.38 & 98.15 & 79.37 \\ EEGSDE ($s$=3) & \textbf{485} & \textbf{85.87} & 97.72 & 77.01 & EEGSDE ($s$=1) & \textbf{302} & 84.68 & 98.10 & 79.20 \\ L-bound & 65 & - & - & - & L-bound & 39 & - & - & - \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} \multicolumn{5}{c}{$\alpha$ ({$\mathrm{Bohr}^3$})} & \multicolumn{5}{c}{$\varepsilon_{{\mathrm{LUMO}}}$ (meV)} \\ \cmidrule(lr){1-5} \cmidrule(lr){6-10} U-bound & 8.98 & - & - & - & U-bound & 1451 & - & - & - \\ \#Atoms & 3.86 & - & - & - & \#Atoms & 813 & - & - & - \\ EDM & 2.74 & 84.30 & 98.13 & 79.32 & EDM & 595 & 84.57 & 98.21 & 81.01 \\ EEGSDE ($s$=0.5) & 2.65 & 84.01 & \textbf{98.21} & 80.46 & EEGSDE ($s$=0.5) & 525 & 84.03 & \textbf{98.29} & 81.44 \\ EEGSDE ($s$=1) & 2.61 & \textbf{84.83} & 98.15 & 80.09 & EEGSDE ($s$=1) & 496 & 84.40 & \textbf{98.29} & \textbf{81.49} \\
EEGSDE ($s$=3) & \textbf{2.50} & 83.91 & \textbf{98.21} & \textbf{80.79} & EEGSDE ($s$=3) & \textbf{442} & \textbf{85.08} & 98.14 & 80.20 \\ L-bound & 0.09 & - & - & - & L-bound & 36 & - & - & - \\ \bottomrule \end{tabular} } \end{center} \end{minipage}\vspace{.4cm} \\ \begin{minipage}[t]{0.68\textwidth} \captionof{table}{Generation performance with multiple quantum properties.} \vspace{-.2cm} \label{tab:multi} \begin{center} \scalebox{0.82}{ \begin{tabular}{llllll} \toprule Method & MAE1$\downarrow$ & MAE2$\downarrow$ & Novelty$\uparrow$ & AS$\uparrow$ & MS$\uparrow$ \\ \midrule \multicolumn{6}{c}{$C_v$ ({$\frac{\mathrm{cal}}{{\mathrm{mol}}}\mathrm{K}$}), \ $\mu$ (D)} \\ \midrule EDM & 1.075 & 1.163 & 85.45 & \textbf{97.99} & \textbf{77.02}\\ EEGSDE ($s_1$=10, $s_2$=1) & \textbf{0.982} & \textbf{0.916} & \textbf{85.51} & 97.58 & 73.94\\ \midrule \multicolumn{6}{c}{$\Delta \varepsilon$ (meV), \ $\mu$ (D)} \\ \midrule EDM & 682 & 1.138 & 85.04 & \textbf{97.96} & \textbf{75.95}\\ EEGSDE ($s_1$=$s_2$=1) & \textbf{566} & \textbf{0.866} & \textbf{85.98} & 97.65 & 72.91\\ \midrule \multicolumn{6}{c}{$\alpha$ ({$\mathrm{Bohr}^3$}), \ $\mu$ (D)} \\ \midrule EDM & 2.76 & 1.161 & 85.47 & \textbf{98.03} & \textbf{77.83}\\ EEGSDE ($s_1$=$s_2$=1.5) & \textbf{2.56} & \textbf{0.859} & \textbf{85.50} & 97.97 & 77.14\\ \bottomrule \end{tabular}} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \captionof{table}{Generation performance targeted to structures from the test set of the QM9 dataset.} \label{tab:fingerprint} \begin{center} \scalebox{0.82}{ \begin{tabular}{lc} \toprule Method & Similarity$\uparrow$ \\ \midrule EDM & 0.670 \\ EEGSDE ($s$=0.1) & 0.695 \\ EEGSDE ($s$=0.5) & 0.738 \\ EEGSDE ($s$=1) & \textbf{0.748} \\ \bottomrule \end{tabular}} \end{center} \end{minipage} \vspace{-.4cm} \end{figure} Let $c \in {\mathbb{R}}$ be a certain quantum property. To generate molecules with the desired property, we set the energy function as the squared error between the predicted property and the desired property: \begin{align} \label{eq:l2} E({\bm{z}}_t, c, t) = s | g({\bm{z}}_t, t) - c |^2, \end{align} where $g({\bm{z}}_t, t)$ is a time-dependent property prediction model, and $s$ is the scaling factor controlling the strength of the guidance. Specifically, we parameterize $g({\bm{z}}_t, t)$ using the second component in the output of the EGNN (see Section~\ref{sec:background}) followed by a decoder (Dec): \begin{align} \label{eq:g} g ({\bm{z}}_t, t) = {\mathrm{Dec}}({\mathrm{EGNN}}^h({\bm{x}}_t, {\bm{h}}_t')), \quad {\bm{h}}_t' = \mathrm{concatenate}({\bm{h}}_t, t), \end{align} where the concatenation is performed on each atom feature, and the decoder consists of a node-wise multilayer perceptron (MLP), a sum pooling layer, and an MLP for the final property prediction, following \cite{satorras2021egnn}. The training objective of $g({\bm{z}}_t, t)$ is available in Appendix~\ref{sec:energy_train}. This parameterization ensures that the energy function $E({\bm{z}}_t, c, t)$ is invariant to orthogonal transformations, and thus the distribution of generated samples is also invariant according to Theorem~\ref{thm:equiegsde}. We evaluate on the QM9 dataset~\citep{ramakrishnan2014quantum}, which contains quantum properties and coordinates of $\sim$130k organic molecules with up to nine heavy atoms (C, N, O, F). We follow \cite{hoogeboom2022equivariant} for the training and evaluation settings. \textbf{Data split:} We split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples, respectively.
The training set is further divided equally into two non-overlapping halves $D_a$ and $D_b$. \textbf{Training setting:} We train the noise prediction network and the energy function on $D_b$. \textbf{Evaluation setting:} We report the mean absolute error (MAE), which is evaluated by a property classifier network $\phi_c$~\citep{satorras2021egnn} trained on the other half $D_a$. We report the loss of $\phi_c$ on $D_b$ as the lower bound (L-bound)~\citep{hoogeboom2022equivariant} of the MAE. We also report the novelty of generated molecules, as well as the atom stability (AS) and molecule stability (MS), which measure the validity of generated molecules. Since the noise prediction network and the energy function are trained on $D_b$, the novelty is also evaluated on $D_b$, which leads to a higher value than that evaluated on the whole dataset~\citep{hoogeboom2022equivariant}. We provide additional experimental settings in Appendix~\ref{sec:detail_quant}. By default, EEGSDE uses a conditional noise prediction network in Eq.~{(\ref{eq:eegsde})}, since we find it generates more accurate molecules than using an unconditional one (see Appendix~\ref{sec:ablation_np} for an ablation study). We compare our EEGSDE to EDM, which only adopts a conditional noise prediction network. We also compare with two additional baselines, ``U-bound'' and ``\#Atoms'', from \cite{hoogeboom2022equivariant} (see Appendix~\ref{sec:baseline} for details). Following \cite{hoogeboom2022equivariant}, we consider six quantum properties in QM9: polarizability $\alpha$, highest occupied molecular orbital energy $\varepsilon_{\mathrm{HOMO}}$, lowest unoccupied molecular orbital energy $\varepsilon_{\mathrm{LUMO}}$, HOMO-LUMO gap $\Delta \varepsilon$, dipole moment $\mu$ and heat capacity $C_v$. Firstly, we generate molecules targeted to one of these six properties. As shown in Table~\ref{tab:variant}, with the energy guidance, our EEGSDE has a significantly better MAE than EDM on all properties. Remarkably, with a proper scaling factor $s$, the MAE of EEGSDE is reduced by more than 25\% compared to EDM on the properties $\Delta \varepsilon$ and $\varepsilon_{\mathrm{HOMO}}$, and by more than 30\% on $\mu$. Note that the energy guidance does not affect the novelty, the atom stability, or the molecule stability much (always within 6\% compared to EDM). We further generate molecules targeted to multiple quantum properties by combining energies linearly. As shown in Table~\ref{tab:multi}, our EEGSDE still has a significantly better MAE than EDM, and does not affect the novelty, the atom stability, or the molecule stability much. As a result, our EEGSDE is able to explore the chemical space in a guided way to generate promising molecules, which may benefit applications such as drug and material discovery. \section{Generating Molecules with Target Structures} \label{sec:fp} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{imgs/fingerprint1.pdf} \vspace{-.8cm} \caption{Examples of generated molecules targeted to specific structures (unseen during training). Compared to EDM, the molecular structures of EEGSDE align better with the target structures.} \label{fig:fp1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{imgs/fingerprint2.pdf} \vspace{-.4cm} \caption{Visualization of the effect of the scaling factor.} \label{fig:fp1_sf} \vspace{-.2cm} \end{figure} Following \cite{gebauer2022inverse}, we use the molecular fingerprint to encode the structure information of a molecule.
The molecular fingerprint $c=(c_1,\dots, c_L)$ is a series of bits that capture the presence or absence of substructures in the molecule. To generate molecules with a specific structure, we set the energy function as the squared error $E({\bm{z}}_t, c, t) = s \| m({\bm{z}}_t, t) - c \|^2$ between a time-dependent multi-label classifier $m({\bm{z}}_t, t)$ and the corresponding fingerprint $c$. Here $s$ is the scaling factor. In initial experiments, we also try a binary cross-entropy loss for the energy, but we find it makes the generation process unstable. The multi-label classifier is parameterized by an EGNN as \begin{align*} m ({\bm{z}}_t, t) = \sigma({\mathrm{Dec}}({\mathrm{EGNN}}^h({\bm{x}}_t, {\bm{h}}_t'))), \quad {\bm{h}}_t' = \mathrm{concatenate}({\bm{h}}_t, t). \end{align*} The multi-label classifier has the same backbone as the property prediction model in Eq.~{(\ref{eq:g})}, except that the decoder outputs a vector of dimension $L$, and the sigmoid function $\sigma$ is adopted for multi-label classification. The training objective of $m ({\bm{z}}_t, t)$ is available in Appendix~\ref{sec:energy_train}. Similarly to Eq.~{(\ref{eq:g})}, the EGNN in the multi-label classifier guarantees the invariance of the distribution of generated samples according to Theorem~\ref{thm:equiegsde}. We train the noise prediction network and the energy function on the whole training set of QM9. We evaluate the Tanimoto similarity~\citep{gebauer2022inverse} between unseen target structures from the test set and the structures of generated molecules, which is computed by comparing the fingerprints of two structures. We provide more experimental details in Appendix~\ref{sec:detail_fp}. As shown in Table~\ref{tab:fingerprint}, EEGSDE significantly improves the similarity between target structures and generated structures compared to EDM. Also note that, within a proper range, a larger scaling factor results in a better similarity, and EEGSDE with $s$=1 improves the similarity by more than 10\% compared to EDM. In Figure~\ref{fig:fp1}, we plot generated molecules of EDM and EEGSDE ($s$=1) targeted to specific structures, where our EEGSDE aligns better with them. We further visualize the effect of the scaling factor in Figure~\ref{fig:fp1_sf}, where the generated structures align better as the scaling factor grows. These results demonstrate that our EEGSDE captures the structure information in molecules well. We also perform experiments on the more challenging GEOM-Drug~\citep{axelrod2022geom} dataset, where we train the conditional EDM baseline using the default setting of \cite{hoogeboom2022equivariant}. We find the EDM baseline has a similarity of 0.166, which is much lower than on QM9. We hypothesize this is because molecules in GEOM-Drug have many more atoms than those in QM9 and more complex structures, and the default setting in \cite{hoogeboom2022equivariant} is suboptimal. For example, the EDM on GEOM-Drug has a smaller number of parameters than the EDM on QM9 (15M vs.\ 26M), which is insufficient to capture the structure information. Nevertheless, our EEGSDE still improves the similarity by $\sim$17\%. We provide more results on GEOM-Drug in Appendix~\ref{sec:drug}.
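For concreteness, the quantum-property energy of Eq.~{(\ref{eq:l2})}, the fingerprint-based energy of this section, and their linear combination (used for the multi-property generation below) can be sketched as follows. This is illustrative Python only: \texttt{g\_pred} and \texttt{m\_clf} stand for the trained time-dependent property predictor and multi-label classifier, and all function names are our own.
\begin{verbatim}
import numpy as np

def quantum_energy(z, c, t, s, g_pred):
    """E(z, c, t) = s * |g(z, t) - c|^2 for a scalar quantum property."""
    return s * (g_pred(z, t) - c) ** 2

def structure_energy(z, c_fp, t, s, m_clf):
    """E(z, c, t) = s * ||m(z, t) - c||^2 for a target fingerprint c_fp."""
    diff = m_clf(z, t) - c_fp
    return s * np.dot(diff, diff)

def combined_energy(z, t, c_fp, c_alpha, s1, s2, g_pred, m_clf):
    """Linear combination: s1 scales the structure term, s2 the
    polarizability term, matching the roles of s1, s2 below."""
    return (structure_energy(z, c_fp, t, s1, m_clf)
            + quantum_energy(z, c_alpha, t, s2, g_pred))
\end{verbatim}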
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{imgs/2d.pdf} \vspace{-.6cm} \caption{Generating molecules targeted to both the quantum property $\alpha$ and the molecular structure.} \label{fig:2d} \vspace{-.4cm} \end{figure} Finally, we demonstrate that our EEGSDE is a flexible framework to generate molecules targeted to multiple properties, which is often the practical case. We additionally target the quantum property $\alpha$ (polarizability) by combining the energy function for molecular structures in this section and the energy function for quantum properties in Section~\ref{sec:q}. Here we choose $\alpha=100$ {$\mathrm{Bohr}^3$}, which is a relatively large value, and we expect it to encourage less isometrically shaped structures. As shown in Figure~\ref{fig:2d}, the generated molecule aligns better with the target structure as the scaling factor $s_1$ grows, and meanwhile a ring substructure in the generated molecule vanishes as the scaling factor for polarizability $s_2$ grows, leading to a less isometrically shaped structure, as expected. \section{Conclusion} \vspace{-.2cm} This work presents equivariant energy-guided SDE (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. EEGSDE significantly improves the EDM baseline in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
\section{Introduction} Mirror symmetry was originally stated as a duality between Calabi--Yau manifolds. Mirror symmetry predicts that the symplectic geometry (or the complex geometry) of a Calabi--Yau manifold is equivalent to the complex geometry (or the symplectic geometry, respectively) of the mirror Calabi--Yau manifold. The mirror duality has been generalized to Fano varieties and, more generally, to log Calabi--Yau pairs. The mirror of a smooth log Calabi--Yau pair $(X,D)$ is a Landau--Ginzburg model, which is a variety $X^\vee$ together with a proper map $W:X^\vee \rightarrow \mathbb C$, called the superpotential. One further expects that the generic fiber of the superpotential $W$ is mirror to a smooth anticanonical divisor $D$ of $X$. We call the mirror duality between a smooth log Calabi--Yau pair $(X,D)$ and a proper Landau--Ginzburg model \emph{relative mirror symmetry}. To construct the mirror of the smooth log Calabi--Yau pair $(X,D)$, one needs to construct the variety $X^\vee$ and the proper Landau--Ginzburg potential $W$. The variety $X^\vee$ is considered as the mirror of the complement $X\setminus D$. A general construction of the variety $X^\vee$ is through intrinsic mirror symmetry \cite{GS19} in the Gross--Siebert program. One considers a maximally unipotent degeneration $g: Y\rightarrow S$ of the pair $(X,D)$, where $S$ is an affine curve. The mirror is constructed as the projective spectrum of the degree zero part of the relative quantum cohomology $QH^0_{\log}(Y,D^\prime)$ of $(Y,D^\prime)$, where $D^\prime$ is a certain divisor that contains $g^{-1}(0)$. It remains to compute the proper Landau--Ginzburg potential $W$. Following the Gross--Siebert program, the Landau--Ginzburg potentials are given by the theta functions. The theta functions are usually difficult to compute. Recently, \cite{GRZ} computed the proper Landau--Ginzburg potentials for toric del Pezzo surfaces. They considered a toric degeneration of the smooth pair $(X,D)$ and then applied the tropical view of the Landau--Ginzburg models \cite{CPS}. The theta function in \cite{GRZ} was defined tropically. By proving a tropical correspondence theorem in \cite{Graefnitz2022}, they showed that the theta function can be written as a generating function of two-point relative invariants. The idea of computing two-point relative invariants in \cite{GRZ} was to relate these two-point relative invariants to one-point relative invariants of a blow-up $\tilde {X}$, and then to use the local-relative correspondence of \cite{vGGR} to relate these invariants to local invariants of the Calabi--Yau threefold $K_{\tilde {X}}$. By the open-closed duality of \cite{LLW11}, these local invariants are open invariants of the local Calabi--Yau threefold $K_X$, which form the open mirror map. Therefore, \cite{GRZ} showed that, for toric del Pezzo surfaces, the proper Landau--Ginzburg potentials are the open mirror maps. In this paper, we study the Landau--Ginzburg model from the intrinsic mirror symmetry construction in \cite{GS19}. The goal of this paper is to generalize the result of \cite{GRZ} to all dimensions via a direct computation of two-point relative Gromov--Witten invariants. The computation is based on the relative mirror theorem of \cite{FTY}, where we only need to assume that $D$ is nef. The variety $X$ is not necessarily toric or Fano.
\subsection{Intrinsic mirror symmetry and theta functions} Besides the tropical view of the Landau--Ginzburg model \cite{CPS}, the proper Landau--Ginzburg model can also be constructed through intrinsic mirror symmetry. We learnt about the following construction from Mark Gross. Given a smooth log Calabi--Yau pair $(X,D)$, we recall the maximally unipotent degeneration $g:Y\rightarrow S$ and the pair $(Y,D^\prime)$ from the construction of $X^\vee$. The theta functions in $QH^0_{\log}(Y,D^\prime)$ form a graded ring. The degree zero part of the ring agrees with $QH^0_{\log}(X,D)$. The base of the Landau--Ginzburg mirror of $(X,D)$ is $\on{Spec}QH^0_{\log}(X,D)=\mathbb A^1$ and the superpotential is $W=\vartheta_{1}$, the unique primitive theta function of $QH^0_{\log}(X,D)$. We claim that the theta functions of $QH^0_{\log}(X,D)$ are generating functions of two-point relative Gromov--Witten invariants, as follows. \begin{definition}[=Definition \ref{def-theta-func}]\label{intro-def-theta} For $p\geq 1$, the theta function is \begin{align}\label{intro-theta-func-def} \vartheta_p=x^{-p}+\sum_{n=1}^{\infty}nN_{n,p}t^{n+p}x^n, \end{align} where \[ N_{n,p}=\sum_{\beta} \langle [\on{pt}]_n,[1]_p\rangle_{0,2,\beta}^{(X,D)} \] is the sum of two-point relative Gromov--Witten invariants with the first marking having contact order $n$ along with a point constraint and the second marking having contact order $p$. \end{definition} By \cite{GS19}, theta functions should satisfy the following product rule: \begin{align}\label{intro-theta-func-multi} \vartheta_{p_1}\star \vartheta_{p_2}=\sum_{r\geq 0, \beta}N_{p_1,p_2,-r}^{\beta} \vartheta_r, \end{align} where the structure constants $N_{p_1,p_2,-r}^{\beta}$ are punctured invariants with two positive contacts and one negative contact. In Proposition \ref{prop-struc-const}, we show that the structure constants can be written in terms of two-point relative invariants. In other words, we reduce relative invariants with two positive contacts and one negative contact to relative invariants with two positive contacts. Then we show that the theta functions in Definition \ref{intro-def-theta} indeed satisfy the product rule (\ref{intro-theta-func-multi}) with the correct structure constants $N_{p_1,p_2,-r}^{\beta}$. In particular, in Proposition \ref{prop-wdvv}, we prove an identity of two-point relative invariants generalizing \cite{GRZ}*{Lemma 5.3} and show that it follows from the WDVV equation. \begin{remark} During the preparation of our paper, we learnt that Yu Wang~\cite{Wang} also obtained the same formula as in Proposition \ref{prop-struc-const}, but using the punctured invariants of \cite{ACGS}. Some formulas for two-point relative invariants are also obtained in \cite{Wang} via a different method. \end{remark} \subsection{Relative mirror maps} The relative mirror theorem of \cite{FTY} states that, under the assumption that $D$ is nef, a genus zero generating function of relative Gromov--Witten invariants (the $J$-function) can be identified with the relative periods (the $I$-function) via a change of variables called the relative mirror map. This provides a powerful tool to compute genus zero relative Gromov--Witten invariants. Our computation of these two-point relative invariants is straightforward in principle but complicated in practice. It is straightforward to see that these invariants can be extracted from the relative $J$-function after taking derivatives.
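To make the extraction concrete, here is a schematic computation in our notation (with the relative $J$-function as recalled in the next subsection): expanding
\[
\frac{1}{z-\bar{\psi}}=\sum_{k\geq 0}\frac{\bar{\psi}^k}{z^{k+1}},
\]
the part of the $1/z$-coefficient of $J_{(X,D)}(\tau,z)$ that is linear in $\tau$ equals
\[
\sum_{\beta\in \on{NE}(X)}\sum_{\alpha} q^{\beta}\langle \phi_\alpha, \tau\rangle_{0,2,\beta}^{(X,D)}\phi^{\alpha},
\]
so differentiating once in $\tau$ and pairing with suitable insertions isolates the two-point invariants, such as $\langle [\on{pt}]_n,[1]_p\rangle_{0,2,\beta}^{(X,D)}$ in Definition \ref{intro-def-theta}.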
On the other hand, although such a computation is well known for (one-point) absolute invariants, the computation of two-point relative invariants is much more complicated, for the following reasons. First of all, we need to compute two-point relative invariants instead of one-point invariants. For one-point relative invariants, one can also use the local-relative correspondence of \cite{vGGR} (see also \cite{TY20b}) to reduce the computation to local invariants when the divisor $D$ is nef. To compute these two-point invariants, one needs to consider the so-called extended relative $I$-function, instead of the much easier non-extended relative $I$-function. Secondly, the relative mirror map has never been studied systematically. There have been some explicit computations of relative invariants when the relative mirror maps are trivial; see, for example, \cite{TY20b}. When the relative mirror maps are not trivial, more complicated invariants will appear. One of the important consequences of this paper is to provide a systematic analysis of these invariants and set up the foundation for future applications of the relative mirror theorem. We would like to point out a related computation in \cite{You20}, where we computed one-point relative invariants of some partial compactifications of toric Calabi--Yau orbifolds. The computation is much easier in \cite{You20} because of the two reasons that we just mentioned. First of all, one can apply the local-relative correspondence to compute these invariants, although we did not use it in \cite{You20}. Secondly, although the mirror map is not trivial, it essentially comes from the absolute Gromov--Witten theory of the partial compactifications. The relative theory in \cite{You20} does not contribute to the non-trivial mirror map. Therefore we were able to avoid all these complexities. We would also like to point out that the computation in \cite{You20} is restricted to the toric case. In this paper, we work beyond the toric setting. In order to apply the relative mirror theorem of \cite{FTY}, we need to study the relative mirror map carefully. Let $X$ be a smooth projective variety and let $D$ be a smooth nef divisor. We recall that the $J$-function for the pair $(X,D)$ is defined as \[ J_{(X,D)}(\tau,z)=z+\tau+\sum_{\substack{(\beta,l)\neq (0,0), (0,1)\\ \beta\in \on{NE(X)}}}\sum_{\alpha}\frac{q^{\beta}}{l!}\left\langle \frac{\phi_\alpha}{z-\bar{\psi}},\tau,\ldots, \tau\right\rangle_{0,1+l, \beta}^{(X,D)}\phi^{\alpha}, \] and the (non-extended) $I$-function of the smooth pair $(X,D)$ is \[ I_{(X,D)}(y,z)=\sum_{\beta\in \on{NE(X)}}J_{X,\beta}(\tau_{0,2},z)y^\beta\left(\prod_{0<a\leq D\cdot \beta-1}(D+az)\right)[1]_{-D\cdot \beta}. \] We refer to Definition \ref{def-relative-J-function} and Definition \ref{def-relative-I-function} for the precise meaning of the notation. The extended $I$-function $I(y,x_1,z)$ takes a more complicated form than the non-extended $I$-function $I(y,z)$. We refer to Definition \ref{def-relative-I-function-extended} for the precise definition of the extended $I$-function. We further assume that $-K_X-D$ is nef. The extended relative mirror map is given by the $z^0$-coefficient of the extended $I$-function: \begin{align*} \tau(y,x_1)=\sum_{i=1}^r p_i\log y_i+x_1[1]_{1}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}.
\end{align*} The (non-extended) relative mirror map is given by $\tau(y,0)$, denoted by $\tau(y)$. The relative mirror theorem of \cite{FTY} states that \[ J(\tau(y,x_1),z)=I(y,x_1,z). \] Therefore, from the expression of $\tau(y,x_1)$, we can see that relative invariants with several negative contact orders will appear when the relative mirror map is not trivial. We obtain the following identity, which shows that the negative contact insertion $[1]_{-k}$ behaves like the insertion of a divisor class $[D]_0$. \begin{proposition}[=Proposition \ref{prop-several-neg-1}] \begin{align*} \langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \bar{\psi}^a\rangle_{0,l+1,\beta}^{(X,D)}=\langle [D]_0,\cdots, [D]_0, [\gamma]_{D\cdot \beta} \bar{\psi}^a\rangle_{0,l+1,\beta}^{(X,D)}, \end{align*} where $\gamma \in H^*(D)$, the $k_i$ are positive integers, and \[ D\cdot \beta=k_{l+1}-\sum_{i=1}^l k_i\geq 0. \] \end{proposition} Since we need to compute invariants with the insertion $[1]_1$, we also prove similar propositions when there are also insertions of $[1]_1$. We refer to Proposition \ref{prop-several-neg-2} and Proposition \ref{prop-several-neg-3} for the precise formulas. We also compute degree zero relative invariants with two positive contact orders and several negative contact orders in Proposition \ref{prop-degree-zero}. \begin{remark} We would like to point out that a key point of the computation of the proper potential is to observe the subtle (but vital) difference between the above-mentioned formulas for invariants with two positive contacts and the formulas for invariants with one positive contact. \end{remark} \begin{remark} The proofs of Propositions \ref{prop-several-neg-1}, \ref{prop-several-neg-2}, \ref{prop-several-neg-3}, and \ref{prop-degree-zero} make essential use of the (both orbifold and graph sum) definitions of relative Gromov--Witten invariants with negative contact orders in \cite{FWY}. Since these invariants reduce to relative invariants with two positive contact orders, we do not need to assume a general relation between the punctured invariants of \cite{ACGS} and \cite{FWY}. To match with the intrinsic mirror symmetry in the Gross--Siebert program, we only need to assume that punctured invariants of a smooth pair with one negative contact order defined in \cite{ACGS} coincide with the ones in \cite{FWY}. A more general comparison result is an upcoming work of \cite{BNR22}, so we do not attempt to give a proof of this special case. \end{remark} With all these preparations, we are able to express the invariants in $J(\tau(y,x_1),z)$ in terms of relative invariants without negative contact orders, and the relative mirror map can be written as the following change of variables: \begin{align}\label{intro-relative-mirror-map} \sum_{i=1}^r p_i\log q_i=\sum_{i=1}^r p_i\log y_i+g(y)D. \end{align} \subsection{The proper Landau--Ginzburg potential} The main result of the paper relates the proper Landau--Ginzburg potential to the relative mirror map. \begin{theorem}[=Theorem \ref{thm-main}]\label{intro-thm-main} Let $X$ be a smooth projective variety with a smooth nef anticanonical divisor $D$. Let $W:=\vartheta_1$ be the mirror proper Landau--Ginzburg potential. Set $q^\beta=t^{D\cdot \beta}x^{D\cdot\beta}$. Then \[ W=x^{-1}\exp\left(g(y(q))\right), \] where \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)!
\] and $y=y(q)$ is the inverse of the relative mirror map (\ref{intro-relative-mirror-map}). \end{theorem} \begin{remark} This is a natural expectation from the point of view of relative mirror symmetry. Recall that the proper Landau--Ginzburg model $(X^\vee,W)$ is mirror to the smooth log Calabi--Yau pair $(X,D)$. The proper Landau--Ginzburg potential $W$ should encode the instanton corrections. On the other hand, the relative mirror theorem relates relative Gromov--Witten invariants to relative periods (relative $I$-functions) via the relative mirror map. In order to have a mirror construction with a trivial mirror map, the instanton corrections should be given by the inverse relative mirror map. This provides an enumerative meaning of the relative mirror map. \end{remark} In \cite{GRZ}, the authors conjectured that the proper Landau--Ginzburg potential is the open mirror map. We also have a natural explanation for this. The open mirror map of \cite{GRZ} is given by open Gromov--Witten invariants of the local Calabi--Yau $\mathcal O_X(-D)$. These open invariants encode the instanton corrections and are expected to give the inverse mirror map of the local Gromov--Witten theory of $\mathcal O_X(-D)$. We observe that the relative mirror map (\ref{intro-relative-mirror-map}) and the local mirror map coincide up to a sign. So we claim that the proper Landau--Ginzburg potential is also the open mirror map. When $X$ is a toric variety, we prove the conjecture of \cite{GRZ}. \begin{theorem}[=Theorem \ref{thm-toric-open}]\label{intro-thm-toric-open} Let $(X,D)$ be a smooth log Calabi--Yau pair, such that $X$ is toric and $D$ is nef. The proper Landau--Ginzburg potential of $(X,D)$ is the open mirror map of the local Calabi--Yau manifold $\mathcal O_X(-D)$. \end{theorem} In general, Theorem \ref{intro-thm-toric-open} holds as long as the open-closed duality (e.g. \cite{CLT} and \cite{CCLT}) between open Gromov--Witten invariants of $K_X$ and closed Gromov--Witten invariants of $P(\mathcal O(-D)\oplus \mathcal O)$ holds. Therefore, we have the following. \begin{corollary} The open-closed duality implies that the proper Landau--Ginzburg potential is the open mirror map. \end{corollary} \begin{remark} That the relative mirror map is the same as the local mirror map can also be seen from the local-relative correspondence of \cite{vGGR} and \cite{TY20b}. It has already been observed in \cite{TY20b} that the local and (non-extended) relative $I$-functions can be identified, and this identification has been used to prove the local-relative correspondence for some invariants in \cite{TY20b}. \end{remark} Theorem \ref{intro-thm-main} provides explicit formulas for the proper potentials whenever the relevant genus zero absolute Gromov--Witten invariants of $X$ are computable. These absolute invariants can be extracted from the $J$-function of the absolute Gromov--Witten theory of $X$. Therefore, we have explicit formulas for the proper Landau--Ginzburg potentials whenever a Givental-style mirror theorem holds. Givental-style mirror theorems have been proved in many cases beyond the toric setting (e.g. \cite{CFKS} for non-abelian quotients via the abelian/non-abelian correspondence). Therefore, we have explicit formulas for the proper Landau--Ginzburg potentials for large classes of examples. Note that there may be non-trivial mirror maps for the absolute Gromov--Witten theory of $X$.
Note that there may be non-trivial mirror maps for the absolute Gromov--Witten theory of $X$. If we replace the absolute invariants in $g(y)$ by the corresponding coefficients of the absolute $I$-function, we also need to plug in the inverse of the absolute mirror map. This can be seen in the case of toric varieties in Section \ref{sec-toric-semi-Fano}. For Fano varieties, the invariants in $g(y)$ are usually easier to compute. We observe that $g(y)$ is closely related to the regularized quantum periods in the Fano search program \cite{CCGGK}.
\begin{theorem}\label{intro-thm-quantum-period}
The function $g(y)$ coincides with the anti-derivative of the regularized quantum period.
\end{theorem}
By mirror symmetry, it is expected that the regularized quantum periods of Fano varieties coincide with the classical periods of their mirror Laurent polynomials. Therefore, as long as one knows the mirror Laurent polynomials, one can compute the proper Landau--Ginzburg potentials. For example, the proper Landau--Ginzburg potentials for all Fano threefolds can be explicitly computed using \cite{CCGK}. More generally, Theorem \ref{intro-thm-quantum-period} allows one to use the large databases \cite{CK22} of quantum periods for Fano manifolds to compute the proper Landau--Ginzburg potentials. Interestingly, the Laurent polynomials are regarded as mirrors of Fano varieties with maximal boundary (or as potentials for the weak, non-proper, Landau--Ginzburg models of \cite{Prz07}, \cite{Prz13}). Therefore, we have an explicit relation between the proper and non-proper Landau--Ginzburg potentials.
\subsection{Acknowledgement}
The author would like to thank Mark Gross for explaining the construction of Landau--Ginzburg potentials from intrinsic mirror symmetry. The author would also like to thank Yu Wang for illuminating discussions regarding Section \ref{sec:intrinsic}. This project has received funding from the Research Council of Norway grant no. 202277 and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement 101025386.
\section{Relative Gromov--Witten invariants with negative contact orders}
\subsection{General theory}\label{sec-rel-general}
We follow the presentation of \cite{FWY} for the definition of genus zero relative Gromov--Witten theory with negative contact orders. Let $X$ be a smooth projective variety and $D\subset X$ be a smooth divisor. We consider a topological type (also called an admissible graph)
\[
\Gamma=(0,n,\beta,\rho,\vec \mu)
\]
with
\[
\vec \mu=(\mu_1, \ldots, \mu_\rho)\in (\mathbb Z^*)^{\rho}
\]
and
\[
\sum_{i=1}^\rho \mu_i=\int_\beta D.
\]
\begin{defn}[\cite{FWY}, Definition 2.4]
A rubber graph $\Gamma'$ is an admissible graph whose roots have two different types. There are
\begin{enumerate}
\item $0$-roots (whose weights will be denoted by $\mu^0_1,\ldots,\mu^0_{\rho_0}$), and
\item $\infty$-roots (whose weights will be denoted by $\mu^\infty_1,\ldots,\mu^\infty_{\rho_\infty}$).
\end{enumerate}
The map $b$ maps $V(\Gamma')$ to $H_2(D,\mathbb Z)$.
\end{defn}
\begin{defn}[\cite{FWY}, Definition 4.1]\label{def:admgraph0}
\emph{A (connected) graph of type $0$} is a weighted graph $\Gamma^0$ consisting of a single vertex, no edges, and the following data:
\begin{enumerate}
\item {$0$-roots},
\item {$\infty$-roots of node type},
\item {$\infty$-roots of marking type},
\item {Legs}.
\end{enumerate}
$0$-roots are weighted by positive integers, and $\infty$-roots are weighted by negative integers. The vertex is associated with a tuple $(g,\beta)$ where $g\geq 0$ and $\beta\in H_2(D,\mathbb Z)$.
\end{defn}
A graph $\Gamma^\infty$ of type $\infty$ is an admissible graph such that the roots are distinguished by node type and marking type.
\begin{defn}[\cite{FWY}, Definition 4.8]\label{defn:locgraph}
\emph{An admissible bipartite graph} $\mathfrak G$ is a tuple $(\mathfrak S_0,\Gamma^\infty,I,E,g,b)$, where
\begin{enumerate}
\item $\mathfrak S_0=\{\Gamma_i^0\}$ is a set of graphs of type $0$; $\Gamma^\infty$ is a (possibly disconnected) graph of type $\infty$.
\item $E$ is a set of edges.
\item $I$ is the set of markings.
\item $g$ and $b$ represent the genus and the degree respectively.
\end{enumerate}
Moreover, an admissible bipartite graph must satisfy the conditions described in \cite{FWY}*{Definition 4.8}, to which we refer for more details.
\end{defn}
Let $\mathcal B_\Gamma$ be the set of connected admissible bipartite graphs of topological type $\Gamma$. Given a bipartite graph $\mathfrak G\in \mathcal B_\Gamma$, we consider
\[
\overline{\mathcal M}_{\mathfrak G}=\prod_{\Gamma_i^0\in \mathfrak S_0}\overline{\mathcal M}_{\Gamma_i^0}^\sim (D) \times_{D^{|E|}}\overline{\mathcal M}_{\Gamma^\infty}^{\bullet}(X,D),
\]
where
\begin{itemize}
\item $\overline{\mathcal M}_{\Gamma_i^0}^\sim (D)$ is the moduli space of relative stable maps to rubber target over $D$ of type $\Gamma_i^0$;
\item $\overline{\mathcal M}_{\Gamma^\infty}^{\bullet}(X,D)$ is the moduli space of relative stable maps of type $\Gamma^\infty$;
\item $\times_{D^{|E|}}$ is the fiber product identifying evaluation maps according to edges.
\end{itemize}
We have the following diagram:
\begin{equation}\label{eqn:diag}
\xymatrix{
\overline{\mathcal M}_{\mathfrak G} \ar[r]^{} \ar[d]^{\iota} & D^{|E|} \ar[d]^{\Delta} \\
\prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}^\sim_{\Gamma^0_i}(D) \times \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D) \ar[r]^{} & D^{|E|}\times D^{|E|}.
}
\end{equation}
There is a natural virtual class
\[
[\overline{\mathcal M}_{\mathfrak G}]^{\on{vir}}=\Delta^![\prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}^\sim_{\Gamma^0_i}(D) \times \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D)]^{\on{vir}},
\]
where $\Delta^!$ is the Gysin map. For each $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$, we have a stabilization map $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D) \rightarrow \overline{\mathcal M}_{0,n_i+\rho_i}(D,\beta_i)$, where $n_i$ is the number of legs, $\rho_i$ is the number of $0$-roots plus the number of $\infty$-roots of marking type, and $\beta_i$ is the curve class of $\Gamma_i^0$. Hence, we have a map
\[
\overline{\mathcal M}_{\mathfrak G}=\prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}^\sim_{\Gamma^0_i}(D) \times_{D^{|E|}} \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D) \rightarrow \prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}_{0,n_i+\rho_i}(D,\beta_i) \times_{D^{|E|}} \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D).
\]
Composing with the boundary map
\[
\prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}_{0,n_i+\rho_i}(D,\beta_i) \times_{D^{|E|}} \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D) \rightarrow \overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho},
\]
we obtain a map
\[
\mathfrak t_{\mathfrak G}:\overline{\mathcal M}_{\mathfrak G}\rightarrow \overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}.
\]
Following \cite{FWY}, we now introduce the relative Gromov--Witten cycle of the pair $(X,D)$ of topological type $\Gamma$.
Let $t$ be a formal parameter. Given a $\Gamma^\infty$, we define
\begin{align}\label{neg-rel-infty}
C_{\Gamma^\infty}(t)=\frac{t}{t+\Psi}\in A^*(\overline{\mathcal M}^\bullet_{\Gamma^\infty}(X,D))[t^{-1}].
\end{align}
Then we consider $\Gamma_i^0$. Define
\[
c(l)=\Psi_\infty^l-\Psi_\infty^{l-1}\sigma_1+\ldots+(-1)^l\sigma_l,
\]
where $\Psi_\infty$ is the divisor corresponding to the cotangent line bundle determined by the relative divisor on the $\infty$ side, and the classes $\sigma_k$ are defined by
\[
\sigma_k=\sum\limits_{\{e_1,\ldots,e_k\}\subset \on{HE}_{m,n}(\Gamma_i^0)} \prod\limits_{j=1}^{k} (d_{e_j}\bar\psi_{e_j}-\ev_{e_j}^*D),
\]
where $d_{e_j}$ is the absolute value of the weight at the root $e_j$. For each $\Gamma_i^0$, define
\begin{align}\label{neg-rel-0}
C_{\Gamma_i^0}(t)= \frac{\sum_{l\geq 0}c(l) t^{\rho_\infty(i)-1-l}}{\prod\limits_{e\in \on{HE}_{n}(\Gamma_i^0)} \big(\frac{t+\ev_e^*D}{d_e}-\bar\psi_e\big) } \in A^*(\overline{\mathcal M}^\sim_{\Gamma_i^0}(D))[t,t^{-1}],
\end{align}
where $\rho_\infty(i)$ is the number of $\infty$-roots (of both types) associated with $\Gamma_i^0$. For each $\mathfrak G$, we write
\begin{equation}\label{eqn:cg}
C_{\mathfrak G}=\left[ p_{\Gamma^\infty}^*C_{\Gamma^\infty}(t)\prod\limits_{\Gamma_i^0\in \mathfrak S_0} p_{\Gamma_i^0}^*C_{\Gamma_i^0}(t) \right]_{t^{0}},
\end{equation}
where $[\cdot]_{t^{0}}$ means taking the constant term, and $p_{\Gamma^\infty}, p_{\Gamma_i^0}$ are the projections from $\prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}^\sim_{\Gamma^0_i}(D) \times \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D)$ to the corresponding factors. Recall that
\[
\iota: \overline{\mathcal M}_{\mathfrak G}\rightarrow \prod\limits_{\Gamma^0_i\in \mathfrak S_0}\overline{\mathcal M}^\sim_{\Gamma^0_i}(D) \times \overline{\mathcal M}^{\bullet}_{\Gamma^\infty}(X,D)
\]
is the closed immersion from diagram \eqref{eqn:diag}.
\begin{defn}[\cite{FWY}*{Definition 5.3}]\label{def-rel-cycle}
The relative Gromov--Witten cycle of the pair $(X,D)$ of topological type $\Gamma$ is defined to be
\[
\mathfrak c_\Gamma(X/D) = \sum\limits_{\mathfrak G \in \mathcal B_\Gamma} \dfrac{1}{|\on{Aut}(\mathfrak G)|}(\mathfrak t_{\mathfrak G})_* ({\iota}^* C_{\mathfrak G} \cap [\overline{\mathcal M}_{\mathfrak G}]^{\on{vir}}) \in A_*(\overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}),
\]
where $\iota$ is the vertical arrow in Diagram \eqref{eqn:diag}.
\end{defn}
\begin{proposition}[=\cite{FWY}, Proposition 3.4]
\[\mathfrak c_\Gamma(X/D) \in A_{d}(\overline{M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}),\]
where
\[d=\dim_{\mathbb C}X-3+\int_{\beta} c_1(T_X(-\log D)) + n + \rho_+,
\]
and $\rho_+$ is the number of relative markings with positive contact order.
\end{proposition}
Let
\begin{align*}
\alpha_i\in H^*(X), \text{ and } a_i\in \mathbb Z_{\geq 0} \text{ for } i\in \{1,\ldots, n\};
\end{align*}
\begin{align*}
\epsilon_j\in H^*(D), \text{ and } b_j\in \mathbb Z_{\geq 0} \text{ for } j\in \{1,\ldots, \rho\}.
\end{align*}
We have evaluation maps
\begin{align*}
\ev_{X,i}:\overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}&\rightarrow X, \text{ for } i\in\{1,\ldots,n\};\\
\ev_{D,j}:\overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}&\rightarrow D, \text{ for } j\in \{1,\ldots, \rho\}.
\end{align*}
\begin{defn}[\cite{FWY}, Definition 5.7]\label{rel-inv-neg}
The relative Gromov--Witten invariant of topological type $\Gamma$ is
\[
\langle \prod_{i=1}^n \tau_{a_i}(\alpha_i) \mid \prod_{j=1}^\rho\tau_{b_j}(\epsilon_j) \rangle_{\Gamma}^{(X,D)} = \displaystyle\int_{\mathfrak c_\Gamma(X/D)} \prod\limits_{j=1}^{\rho} \bar{\psi}_{D,j}^{b_j}\ev_{D,j}^*\epsilon_j\prod\limits_{i=1}^n \bar{\psi}_{X,i}^{a_i}\ev_{X,i}^*\alpha_i,
\]
where $\bar\psi_{D,j}, \bar\psi_{X,i}$ are pullbacks of $\psi$-classes from $\overline{\mathcal M}_{0,n+\rho}(X,\beta)$ to $\overline{\mathcal M}_{0,n+\rho}(X,\beta)\times_{X^{\rho}} D^{\rho}$ corresponding to the markings.
\end{defn}
\begin{remark}
In \cite{FWY}, relative Gromov--Witten invariants of $(X,D)$ with negative contact orders are also defined as a limit of the corresponding orbifold Gromov--Witten invariants of the $r$-th root stack $X_{D,r}$ with large ages:
\[
\langle \prod_{i=1}^{n+\rho} \tau_{a_i}(\gamma_i)\rangle_{0,n+\rho,\beta}^{(X,D)}:=r^{\rho_-}\langle \prod_{i=1}^{n+\rho} \tau_{a_i}(\gamma_i)\rangle_{0,n+\rho,\beta}^{X_{D,r}},
\]
where $r$ is sufficiently large; $\rho_-$ is the number of orbifold markings with large ages (=relative markings with negative contact orders); $\gamma_i$ are cohomology classes of $X$ or $D$, depending on whether the markings are interior or orbifold/relative. We refer to \cite{FWY}*{Section 3} for more details.
\end{remark}
We recall the topological recursion relation and the WDVV equation, which will be used later in the paper.
\begin{prop}[\cite{FWY}*{Proposition 7.4}]\label{prop:TRR}
Relative Gromov--Witten theory satisfies the topological recursion relation:
\begin{align*}
&\langle\bar\psi^{a_1+1}[\alpha_1]_{i_1}, \ldots, \bar\psi^{a_n}[\alpha_n]_{i_n}\rangle_{0,\beta,n}^{(X,D)} \\
=&\sum \langle\bar\psi^{a_1}[\alpha_1]_{i_1}, \prod\limits_{j\in S_1} \bar\psi^{a_j}[\alpha_j]_{i_j}, \widetilde T_{i,k}\rangle_{0,\beta_1,1+|S_1|}^{(X,D)} \langle\widetilde T_{-i}^k, \bar\psi^{a_2}[\alpha_2]_{i_2}, \bar\psi^{a_3}[\alpha_3]_{i_3}, \prod\limits_{j\in S_2} \bar\psi^{a_j}[\alpha_j]_{i_j}\rangle_{0,\beta_2,2+|S_2|}^{(X,D)},
\end{align*}
where the sum is over all $\beta_1+\beta_2=\beta$, all indices $i,k$ of the basis, and disjoint sets $S_1, S_2$ with $S_1\cup S_2=\{4,\ldots,n\}$.
\end{prop}
\begin{prop}[\cite{FWY}*{Proposition 7.5}]\label{prop:WDVV}
Relative Gromov--Witten theory satisfies the WDVV equation:
\begin{align*}
&\sum \langle\bar\psi^{a_1}[\alpha_1]_{i_1}, \bar\psi^{a_2}[\alpha_2]_{i_2}, \prod\limits_{j\in S_1} \bar\psi^{a_j}[\alpha_j]_{i_j}, \widetilde T_{i,k}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)} \\
& \quad \cdot \langle\widetilde T_{-i}^k, \bar\psi^{a_3}[\alpha_3]_{i_3}, \bar\psi^{a_4}[\alpha_4]_{i_4}, \prod\limits_{j\in S_2} \bar\psi^{a_j}[\alpha_j]_{i_j}\rangle_{0,\beta_2,2+|S_2|}^{(X,D)} \\
=&\sum \langle\bar\psi^{a_1}[\alpha_1]_{i_1}, \bar\psi^{a_3}[\alpha_3]_{i_3}, \prod\limits_{j\in S_1} \bar\psi^{a_j}[\alpha_j]_{i_j}, \widetilde T_{i,k}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)} \\
& \quad \cdot \langle\widetilde T_{-i}^k, \bar\psi^{a_2}[\alpha_2]_{i_2}, \bar\psi^{a_4}[\alpha_4]_{i_4}, \prod\limits_{j\in S_2} \bar\psi^{a_j}[\alpha_j]_{i_j}\rangle_{0,\beta_2,2+|S_2|}^{(X,D)},
\end{align*}
where each sum is over all $\beta_1+\beta_2=\beta$, all indices $i,k$ of the basis, and disjoint sets $S_1, S_2$ with $S_1\cup S_2=\{5,\ldots,n\}$.
\end{prop}
\subsection{A special case}
As explained in \cite{FWY}*{Example 5.5}, relative Gromov--Witten invariants with one negative contact order can be written down in a simpler form.
In this case, we only have graphs $\mathfrak G$ such that $\mathfrak S_0=\{\Gamma_i^0\}$ consists of a single element (denoted by $\Gamma^0$). Denote the set of such graphs by $\mathcal B_\Gamma'$. The relative Gromov--Witten cycle of topological type $\Gamma$ is simply
\[
\mathfrak c_\Gamma(X/D) = \sum\limits_{\mathfrak G \in \mathcal B_\Gamma'} \dfrac{\prod_{e\in \on{HE}_{n}(\Gamma^0)}d_e}{|\on{Aut}(\mathfrak G)|}(\mathfrak t_{\mathfrak G})_* \big([\overline{\mathcal M}_{\mathfrak G}]^{\on{vir}}\big).
\]
Note that
\[
[\overline{\mathcal M}_{\mathfrak G}]^{\on{vir}}=\Delta^![\overline{\mathcal M}^\sim_{\Gamma^0}(D)\times \overline{\mathcal M}^\bullet_{\Gamma^\infty}(X,D)]^{\on{vir}}.
\]
Let
\begin{align*}
\alpha_i\in H^*(X), \text{ and } a_i\in \mathbb Z_{\geq 0} \text{ for } i\in \{1,\ldots, n\};
\end{align*}
\begin{align*}
\epsilon_j\in H^*(D), \text{ and } b_j\in \mathbb Z_{\geq 0} \text{ for } j\in \{1,\ldots, \rho\}.
\end{align*}
Without loss of generality, we assume that $\epsilon_1$ is the insertion that corresponds to the unique negative contact marking. Then the relative invariant with one negative contact order can be written as
\begin{align*}
&\langle \prod_{i=1}^n \tau_{a_i}(\alpha_i) \mid \prod_{j=1}^\rho\tau_{b_j}(\epsilon_j) \rangle_{\Gamma}^{(X,D)}\\
=& \sum_{\mathfrak G\in \mathcal B_\Gamma'} \frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum \langle \prod_{j\in S_{\epsilon,1}}\tau_{b_j}(\epsilon_j) | \prod_{i\in S_{\alpha,1}}\tau_{a_i}(\alpha_i) | \eta, \tau_{b_1}(\epsilon_1)\rangle^\sim_{\Gamma^0}\langle \check{\eta}, \prod_{j\in S_{\epsilon,2}}\tau_{b_j}(\epsilon_j) | \prod_{i\in S_{\alpha,2}}\tau_{a_i}(\alpha_i) \rangle_{\Gamma^\infty}^{\bullet, (X,D)},
\end{align*}
where $\on{Aut}(E)$ is the permutation group of the set $\{d_1,\ldots, d_{|E|}\}$; $\check{\eta}$ is defined by taking the Poincar\'e duals of the cohomology weights of the cohomology weighted partition $\eta$; the second sum is over all splittings
\[
\{1,\ldots, n\}=S_{\alpha,1}\sqcup S_{\alpha,2}, \quad \{2,\ldots, \rho\}=S_{\epsilon,1}\sqcup S_{\epsilon,2}
\]
and all intermediate cohomology weighted partitions $\eta$. The following comparison theorem between the punctured invariants of \cite{ACGS} and the relative invariants with one negative contact order of \cite{FWY} for smooth pairs will appear in the upcoming work \cite{BNR22b}.
\begin{theorem}\label{theorem-puncture-relative}
Given a smooth projective variety $X$ and a smooth divisor $D\subset X$, the punctured Gromov--Witten invariants of $(X,D)$ and the relative Gromov--Witten invariants of $(X,D)$ with one negative contact order coincide.
\end{theorem}
\begin{remark}
\cite{BNR22b} studies the comparison between punctured invariants of \cite{ACGS} and relative invariants with several negative contact orders. For the purpose of this paper, we only need the case with one negative contact order. In this case, the comparison is significantly simpler, because we have the simple graphs described above and the class $\mathfrak c_\Gamma(X/D)$ is trivial. Since the general comparison is obtained in \cite{BNR22b}, we do not attempt to give a proof for this special case. Theorem \ref{theorem-puncture-relative} is sufficient for us to fit our result into the Gross--Siebert program, as theta functions and structure constants in \cite{GS19} and \cite{GS21} only involve punctured invariants with one punctured marking. Note that relative invariants with several negative contact orders will also appear in this paper.
However, the general comparison theorem is not necessary, because we will reduce the relevant invariants with several negative contact orders to invariants without negative contact orders.
\end{remark}
\section{The proper Landau--Ginzburg potential from intrinsic mirror symmetry}\label{sec:intrinsic}
\subsection{The Landau--Ginzburg potential}
A tropical view of the Landau--Ginzburg potential is given in \cite{CPS} using the toric degeneration approach to mirror symmetry. We consider the intrinsic mirror symmetry construction instead and focus on the case when the Landau--Ginzburg potential is proper. Following intrinsic mirror symmetry \cite{GS19}, one considers a maximally unipotent degeneration $g:Y\rightarrow S$ of the smooth pair $(X,D)$. The mirror of $X\setminus D$ is constructed as the projective spectrum of the degree zero part of the relative quantum cohomology of $(Y,D^\prime)$, where $D^\prime$ is a certain divisor of $Y$ that includes $g^{-1}(0)$. Let $(B^\prime,\mathscr P, \varphi)$ be the dual intersection complex, or the fan picture, of the degeneration. Recall that $B^\prime$ is an integral affine manifold with a finite polyhedral decomposition $\mathscr P$ and a multi-valued strictly convex piecewise linear function $\varphi$. An \emph{asymptotic direction} is an integral tangent vector of a one-dimensional unbounded cell in $(B^\prime,\mathscr P, \varphi)$ that points in the unbounded direction.
\begin{definition}
The dual intersection complex $(B^\prime, \mathscr P)$ is asymptotically cylindrical if
\begin{itemize}
\item $B^\prime$ is non-compact.
\item For every polyhedron $\sigma$ in $\mathscr P$, all of the unbounded one-faces of $\sigma$ are parallel with respect to the affine structure on $\sigma$.
\end{itemize}
\end{definition}
We consider the case when $D$ is smooth. Hence, $(B^\prime, \mathscr P)$ is asymptotically cylindrical and $B^\prime$ has one unbounded direction $m_{\on{out}}$. We choose $\varphi$ such that $\varphi(m_{\on{out}})=1$ on all unbounded cells. For the smooth pair $(X,D)$, one can also consider its relative quantum cohomology. Let $\on{QH}^0_{\on{log}}(X,D)$ be the degree zero subalgebra of the relative quantum cohomology ring $\on{QH}^*_{\on{log}}(X,D)$ of the pair $(X,D)$. Let $S$ be the dual intersection complex of $D$. Let $B$ be the cone over $S$ and $B(\mathbb Z)$ be the set of integer points of $B$. Since $D$ is smooth, $B(\mathbb Z)$ is the set of nonnegative integers. The set
\[
\{\vartheta_p\}, p\in B(\mathbb Z)
\]
of theta functions forms a canonical basis of $\on{QH}^0_{\on{log}}(X,D)$. Moreover, theta functions satisfy the following multiplication rule:
\begin{align}\label{theta-func-multi}
\vartheta_{p_1}\star \vartheta_{p_2}=\sum_{r\geq 0, \beta}N_{p_1,p_2,-r}^{\beta} \vartheta_r.
\end{align}
Recall that the structure constants $N^{\beta}_{p_1,p_2,-r}$ are defined as the invariants of $(X,D)$ with two ``inputs'' with positive contact orders given by $p_1, p_2\in B(\mathbb Z)$, one ``output'' with negative contact order given by $-r$ such that $r\in B(\mathbb Z)$, and a point constraint for the punctured point.
Namely,
\begin{align}\label{def-stru-const}
N^{\beta}_{p_1,p_2,-r}=\langle [1]_{p_1},[1]_{p_2},[\on{pt}]_{-r}\rangle_{0,3,\beta}^{(X,D)}.
\end{align}
We learnt about the following intrinsic mirror symmetry construction of the proper Landau--Ginzburg potential from Mark Gross.
\begin{construction}
We recall the maximally unipotent degeneration $g:Y\rightarrow S$ and the pair $(Y,D^\prime)$ from the intrinsic mirror construction of the mirror $X^\vee$. The degree zero part of the graded ring of theta functions in $\on{QH}^0_{\on{log}}(Y,D^\prime)$ agrees with $\on{QH}^0_{\on{log}}(X,D)$. The base of the Landau--Ginzburg mirror of $(X,D)$ is $\on{Spec}\on{QH}^0_{\on{log}}(X,D)=\mathbb A^1$ and the superpotential is $W=\vartheta_{1}$, the unique primitive theta function of $\on{QH}^0_{\on{log}}(X,D)$.
\end{construction}
Under this construction, to compute the proper Landau--Ginzburg potential, we just need to compute the theta function $\vartheta_1$. We will compute the structure constants $N^{\beta}_{p_1,p_2,-r}$ and then define the theta functions in terms of two-point relative invariants of $(X,D)$ so that they satisfy the multiplication rule (\ref{theta-func-multi}). The notion of broken lines will not be needed here.
\subsection{Structure constants}
We first express the structure constants in terms of two-point relative invariants.
\begin{prop}\label{prop-struc-const}
Let $(X,D)$ be a smooth log Calabi--Yau pair. Without loss of generality, we assume that $p_1\leq p_2$. Then the structure constants $N^{\beta}_{p_1,p_2,-r}$ can be written in terms of two-point relative invariants (without negative contact orders):
\begin{align}\label{equ-punctured-2}
N^{\beta}_{p_1,p_2,-r}=\left\{ \begin{array}{cc}
(p_1-r)\langle [\on{pt}]_{p_1-r}, [1]_{p_2}\rangle_{0,2,\beta}^{(X,D)}+ (p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }0\leq r<p_1; \\
(p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }p_1\leq r<p_2;\\
0 & \text{ if } r\geq p_2, r\neq p_1+p_2;\\
1 & \text{ if } r=p_1+p_2.
\end{array}
\right.
\end{align}
\end{prop}
\begin{proof}
We divide the proof into different cases.
\begin{enumerate}
\item $0<r<p_1:$ We use the definition of relative Gromov--Witten invariants with negative contact orders in \cite{FWY}. Recall that $N^{\beta}_{p_1,p_2,-r}$ (\ref{def-stru-const}) is a relative invariant with one negative contact order. It can be written as
\begin{align}\label{def-one-negative}
\sum_{\mathfrak G\in \mathcal B_\Gamma}\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum \langle \prod_{j\in S_1}\epsilon_j, \, |\,|[\on{pt}]_r,\eta\rangle^{\sim}_{\Gamma^0} \langle \check{\eta},\prod_{j\in S_2}\epsilon_j\rangle^{\bullet, (X,D)}_{\Gamma_\infty},
\end{align}
where $\{\epsilon_j\}_{j=1,2}=\{[1]_{p_1},[1]_{p_2}\}$, and the second sum is over cohomology weighted partitions $\eta$ and splittings $S_1\sqcup S_2=\{1,2\}$.
\begin{itemize}
\item [(I) $S_1=\emptyset$:] Then, by the virtual dimension constraint on $\langle \, |\,|[\on{pt}]_r,\eta\rangle^{\sim}_{\Gamma^0}$, $\eta$ must contain at least one element with insertion $[1]_k$ for some integer $k>0$. Let $\pi:P:=\mathbb P_D(\mathcal O_D\oplus N_D)\rightarrow D$ be the projection map and let $D_0$ and $D_\infty$ be the zero and infinity divisors of $P$. Let
\[
p: \overline{\mathcal M}_{\Gamma^0}^\sim(P,D_0\cup D_\infty)\rightarrow \overline{\mathcal M}_{0,m}(D,\pi_*(\beta_1))
\]
be the natural map obtained by projecting rubber maps to $D$ and contracting the resulting unstable components.
By \cite{JPPZ18}*{Theorem 2}, we have
\[
p_*[\overline{\mathcal M}_{\Gamma^0}^\sim(P,D_0\cup D_\infty)]^{\on{vir}}=[\overline{\mathcal M}_{0,m}(D,\pi_*(\beta_1))]^{\on{vir}}.
\]
The marking $[1]_k$ becomes the identity class $1\in H^*(D)$. Applying the string equation shows that the rubber invariant $\langle \, |\,|[\on{pt}]_r,\eta\rangle^{\sim}_{\Gamma^0}$ vanishes unless $\pi_*(\beta_1)=0$ and $m=3$. However, $\pi_*(\beta_1)=0$ implies that $D_0\cdot \beta_1=D_\infty\cdot \beta_1$. On the other hand, there is no relative marking at $D_0$, therefore $D_0\cdot \beta_1=0$. We also know that $D_\infty\cdot \beta_1>0$ because $\eta$ is not empty. This is a contradiction. Therefore, we cannot have $S_1=\emptyset$.
\item [(II) $S_1\neq \emptyset$:] In this case, for $\langle \prod_{j\in S_1}\epsilon_j, \, |\,|[\on{pt}]_r,\eta\rangle^{\sim}_{\Gamma^0} $, the relative insertion $\prod_{j\in S_1}\epsilon_j$ at $D_0$ is not empty. That is, it must contain at least one of $[1]_{p_1},[1]_{p_2}$. Again, we consider the natural projection
\[
p: \overline{\mathcal M}_{\Gamma^0}^\sim(P,D_0\cup D_\infty)\rightarrow \overline{\mathcal M}_{0,m}(D,\pi_*(\beta_1)).
\]
Since $\prod_{j\in S_1}\epsilon_j$ must contain at least one of $[1]_{p_1},[1]_{p_2}$, the projection formula and the string equation imply that
\[
\langle \prod_{j\in S_1}\epsilon_j, \, |\,|[\on{pt}]_r,\eta\rangle^{\sim}_{\Gamma^0}
\]
vanishes unless $\pi_*(\beta_1)=0$ and $m=3$. Note that $\pi_*(\beta_1)=0$ implies $D_0\cdot \beta_1=D_\infty\cdot \beta_1$. Recall that we assume $r<p_1\leq p_2$; therefore $\eta$ must contain at least one marking with positive contact order. Then $\prod_{j\in S_1}\epsilon_j$ must contain exactly one of $[1]_{p_1},[1]_{p_2}$ when $r<p_1$. Therefore, $\eta$ contains exactly one element, namely $[1]_{p_1-r}$ or $[1]_{p_2-r}$ respectively. Hence, (\ref{def-one-negative}) is the sum of the following two invariants
\[
(p_1-r)\langle [\on{pt}]_{p_1-r}, [1]_{p_2}\rangle_{0,2,\beta}^{(X,D)}, \quad \text{and} \quad (p_2-r)\langle [\on{pt}]_{p_2-r},[1]_{p_1}\rangle_{0,2,\beta}^{(X,D)},
\]
which are exactly the invariants that appear on the RHS of (\ref{equ-punctured-2}) when $r<p_1$.
\end{itemize}
\item $r=0$: In this case, there are no negative contact orders. We can require that the marking with the point insertion $[\on{pt}]_0$ map to $D$. Consider the degeneration to the normal cone of $D$ and apply the degeneration formula. After applying the rigidification lemma \cite{MP}*{Lemma 2}, we again obtain the formula (\ref{def-one-negative}), now with $r=0$. The rest of the proof is the same as in the case $0<r<p_1$.
\item $p_1\leq r< p_2$: We again have the formula (\ref{def-one-negative}) as in the first case. The difference is that we cannot have $\prod_{j\in S_1}\epsilon_j=[1]_{p_1}$, because this would imply that $\eta$ contains an element $[1]_{p_1-r}$ of non-positive contact order. Therefore, we are left with
\[
(p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)}.
\]
\item $r\geq p_2$ and $r\neq p_1+p_2$: Similar to the previous case, we cannot have $\prod_{j\in S_1}\epsilon_j=[1]_{p_1}$ or $\prod_{j\in S_1}\epsilon_j=[1]_{p_2}$. The invariant is $0$.
\item $p_1+p_2=r$: In this case, $\eta$ can be empty. Then there is no $\Gamma^\infty$ and the curves lie entirely in $D$. Therefore, there is only one rubber integral. The invariant is just $1$.
\end{enumerate}
\end{proof}
\begin{remark}
A special case of Proposition \ref{prop-struc-const} also appears in \cite{Graefnitz2022}*{Theorem 2} for del Pezzo surfaces via tropical correspondence. Our result uses the definition of \cite{FWY} for punctured invariants; it holds in all dimensions, and $X$ is not necessarily Fano.
\end{remark}
Later, we will also need to consider invariants of the following form:
\[
\langle [1]_{p_1},[\on{pt}]_{p_2},[1]_{-r}\rangle_{0,3,\beta}^{(X,D)}.
\]
The proof of the following identity is similar to the proof of Proposition \ref{prop-struc-const}.
\begin{prop}\label{prop-theta-2}
Let $(X,D)$ be a smooth log Calabi--Yau pair. If $p_1\leq p_2$, then
\begin{align*}
&\langle [1]_{p_1},[\on{pt}]_{p_2},[1]_{-r}\rangle_{0,3,\beta}^{(X,D)}\\
=&\left\{ \begin{array}{cc}
(p_1-r)\langle [\on{pt}]_{p_2}, [1]_{p_1-r}\rangle_{0,2,\beta}^{(X,D)}+ (p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }0\leq r<p_1; \\
(p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }p_1\leq r<p_2;\\
0 & \text{ if } r\geq p_2, r\neq p_1+p_2;\\
1 &\text{ if } r=p_1+p_2.
\end{array}
\right.
\end{align*}
If $p_2\leq p_1$, then
\begin{align*}
&\langle [1]_{p_1},[\on{pt}]_{p_2},[1]_{-r}\rangle_{0,3,\beta}^{(X,D)}\\
=&\left\{ \begin{array}{cc}
(p_1-r)\langle [\on{pt}]_{p_2}, [1]_{p_1-r}\rangle_{0,2,\beta}^{(X,D)}+ (p_2-r)\langle [\on{pt}]_{p_2-r}, [1]_{p_1}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }0\leq r<p_2; \\
(p_1-r)\langle [\on{pt}]_{p_2}, [1]_{p_1-r}\rangle_{0,2,\beta}^{(X,D)} & \text{ if }p_2\leq r<p_1;\\
0 & \text{ if } r\geq p_1, r\neq p_1+p_2;\\
1 & \text{ if } r=p_1+p_2.
\end{array}
\right.
\end{align*}
\end{prop}
\subsection{Theta functions}
Now we define the theta functions in terms of two-point relative Gromov--Witten invariants of $(X,D)$.
\begin{definition}\label{def-theta-func}
Write $x=z^{(-m_{\on{out}},-1)}$ and $t=z^{(0,1)}$. For $p\geq 1$, the theta function is
\begin{align}\label{theta-func-def}
\vartheta_p:=x^{-p}+\sum_{n=1}^{\infty}nN_{n,p}t^{n+p}x^n,
\end{align}
where
\[
N_{n,p}=\sum_{\beta} \langle [\on{pt}]_n,[1]_p\rangle_{0,2,\beta}^{(X,D)}.
\]
\end{definition}
We also write
\[
N_{p_1,p_2,-r}:=\sum_\beta N_{p_1,p_2,-r}^{\beta}.
\]
To justify this definition of the theta functions, we need to show that it satisfies the multiplication rule (\ref{theta-func-multi}). Plugging $(\ref{theta-func-def})$ into $\vartheta_{p_1}\star \vartheta_{p_2}$, we have
\begin{align}\label{theta-p-q}
\notag \vartheta_{p_1}\star \vartheta_{p_2}=&(x^{-p_1}+\sum_{m=1}^{\infty}mN_{m,p_1}t^{m+p_1}x^m)(x^{-p_2}+\sum_{n=1}^{\infty}nN_{n,p_2}t^{n+p_2}x^n)\\
=&x^{-(p_1+p_2)}+\sum_{n=1}^{\infty}nN_{n,p_2}t^{n+p_2}x^{n-p_1}+\sum_{m=1}^{\infty}mN_{m,p_1}t^{m+p_1}x^{m-p_2}\\
\notag &+\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}mnN_{m,p_1}N_{n,p_2}t^{m+p_1+n+p_2}x^{m+n}.
\end{align}
On the other hand, we have
\begin{align}\label{theta-r}
\notag \sum_{r\geq 0, \beta}N_{p_1,p_2,-r}^{\beta}t^\beta \vartheta_r&=\sum_{r\geq 0}N_{p_1,p_2,-r}t^{p_1+p_2-r}\vartheta_r\\
&=\sum_{r\geq 0}N_{p_1,p_2,-r}t^{p_1+p_2-r}(x^{-r}+\sum_{k=1}^{\infty}kN_{k,r}t^{k+r}x^k),
\end{align}
where the second line follows from (\ref{theta-func-def}). Note that $N_{k,r}=0$ when $r=0$ by the string equation. By Proposition \ref{prop-struc-const}, it is straightforward to check that the coefficients of $x^k$, for $k\leq 0$, in (\ref{theta-p-q}) and (\ref{theta-r}) are the same. Without loss of generality, we assume that $p_1\leq p_2$; then we have the following cases.
\begin{itemize}
\item $k\leq -p_2$, and $k\neq -p_1-p_2$: the coefficient of $x^k$ in (\ref{theta-p-q}) is zero. The corresponding coefficient $N_{p_1,p_2,k}$ in (\ref{theta-r}) is also zero by Proposition \ref{prop-struc-const}.
\item $k=-p_1-p_2$: the coefficient of $x^k$ in (\ref{theta-p-q}) is $1$. The corresponding coefficient $N_{p_1,p_2,k}$ in (\ref{theta-r}) is also $1$ by Proposition \ref{prop-struc-const}.
\item $-p_2<k\leq -p_1$: the coefficient of $x^k$ in (\ref{theta-p-q}) is $(p_2+k)N_{p_2+k,p_1}$. By Proposition \ref{prop-struc-const}, the corresponding coefficient $N_{p_1,p_2,k}$ in (\ref{theta-r}) is:
\[
N_{p_1,p_2,k}=(p_2+k)N_{p_2+k,p_1}.
\]
\item $-p_1<k\leq 0$: the coefficient of $x^k$ in (\ref{theta-p-q}) is
\[
(p_1+k)N_{p_1+k,p_2}+(p_2+k)N_{p_2+k,p_1}.
\]
This coincides with the corresponding coefficient $N_{p_1,p_2,k}$ by Proposition \ref{prop-struc-const}.
\end{itemize}
The coefficients of $x^k$ for $k>0$ also match, because of the following result.
\begin{prop}\label{prop-wdvv}
Let $(X,D)$ be a smooth log Calabi--Yau pair. We have
\begin{align}\label{identity-wdvv}
(k+p_1)N_{k+p_1,p_2}+(k+p_2)N_{k+p_2,p_1}+\sum_{m,n>0, m+n=k}mnN_{m,p_1}N_{n,p_2}=\sum_{r>0}N_{p_1,p_2,-r}kN_{k,r}.
\end{align}
\end{prop}
\begin{proof}
We prove it using the WDVV equation of relative Gromov--Witten theory in \cite{FWY}*{Proposition 7.5}. Set
\[
[\alpha_1]_{i_1}=[1]_{p_1}, [\alpha_2]_{i_2}=[1]_{p_2}, [\alpha_3]_{i_3}=[\on{pt}]_{k+p_1+p_2}, [\alpha_4]_{i_4}=[1]_{-p_1-p_2}.
\]
Then the WDVV equation states that
\begin{align}\label{wdvv}
\sum\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[\gamma]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\gamma^\vee]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)} \\
\notag = \sum\langle [1]_{p_1},[1]_{p_2},[\gamma]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\gamma^\vee]_{i}, [\on{pt}]_{k+p_1+p_2},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)} ,
\end{align}
where each sum is over the curve classes $\beta$ such that $D\cdot \beta=p_1+p_2+k$, all splittings $\beta_1+\beta_2=\beta$, and the dual bases $\{[\gamma]_{-i}\}$ and $\{[\gamma^\vee]_{i}\}$ of $\mathfrak H$.
\begin{enumerate}
\item[\textbf{(I)}] We first consider the LHS of the WDVV equation (\ref{wdvv}):
\begin{align}\label{wdvv-lhs}
\sum\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[\gamma]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\gamma^\vee]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}.
\end{align}
We analyze the invariant
\[
\langle [\gamma^\vee]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}
\]
in (\ref{wdvv-lhs}).
\begin{itemize}
\item[(i) $i<0$:] We claim that the invariant vanishes. By the virtual dimension constraint, $\deg(\gamma^\vee)=\dim_{\mathbb C}X-2$. We apply the definition of relative Gromov--Witten invariants with negative contact orders in Section \ref{sec-rel-general}. The marking with negative contact order $[1]_{-p_1-p_2}$ is distributed to a rubber space, where it becomes a relative marking at $D_\infty$ with insertion $[1]_{p_1+p_2}$. We further divide into two cases.
\begin{itemize}
\item [(Case 1):] the first marking and the third marking are distributed to different rubber spaces. Then the class $\mathfrak c_\Gamma$ is trivial.
We consider the rubber moduli space $\overline{\mathcal M}_{\Gamma^0_v}^\sim(P,D_0\cup D_\infty)$ to which the third marking is distributed, and push it forward to the moduli space $\overline{M}_{0,m}(D,\pi_*\beta_v)$ of stable maps to $D$. The marking with $[1]_{p_1+p_2}$ becomes the identity class $1\in H^*(D)$. Applying the string equation, we see that the rubber invariant vanishes unless $\pi_*(\beta_v)=0$ and $m=3$. However, $\pi_*(\beta_v)=0$ implies that $D_0\cdot \beta_v=D_\infty\cdot \beta_v$. This is not possible because, based on the insertions of the markings, we must have $D_\infty\cdot \beta_v\geq p_1+p_2>D_0\cdot \beta_v$.
\item [(Case 2):] the first marking and the third marking are distributed to the same rubber space. The class $\mathfrak c_\Gamma$ is a sum of descendant classes of degree one. By the virtual dimension constraint, $\eta$ must contain at least one element with insertion $[1]_k$ for some positive integer $k$. Pushing forward to the moduli space of stable maps to $D$ and applying the string equation twice, we again conclude that the invariant vanishes, as in (Case 1).
\end{itemize}
\item [(ii) $i\geq 0$:] The invariants in (\ref{wdvv-lhs}) are genus zero $3$-point relative invariants of $(X,D)$ with one negative contact order. Therefore, the virtual dimensions of the moduli spaces are $(\dim_{\mathbb C} X-1)$. By the virtual dimension constraint, we must have
\[
[\gamma]_{-i}=[1]_{-i}, \quad [\gamma^\vee]_{i}=[\on{pt}]_{i}.
\]
By Proposition \ref{prop-theta-2}, $ \langle [\on{pt}]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)} $ vanishes unless $i>p_1+p_2$ or $i=p_2$.
\begin{itemize}
\item When $i=p_2$, we have the term
\[
\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[1]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\on{pt}]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}=(k+p_1)\langle [\on{pt}]_{k+p_1}, [1]_{p_2}\rangle_{0,\beta,2}^{(X,D)}
\]
in (\ref{wdvv-lhs}), by Proposition \ref{prop-theta-2}.
\item When $i>p_1+p_2$, we have
\[
\langle [\on{pt}]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}=(i-p_1-p_2)\langle [\on{pt}]_{i-p_1-p_2}, [1]_{p_1}\rangle_{0,\beta_2,2}^{(X,D)},
\]
by Proposition \ref{prop-theta-2}.
\end{itemize}
\end{itemize}
Similarly, by Proposition \ref{prop-theta-2}, $\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[1]_{-i}\rangle_{0,\beta_1,3}^{(X,D)}$ vanishes unless $i<k+p_1+p_2$ or $i=k+p_1+2p_2$.
\begin{itemize}
\item When $i=k+p_1+2p_2$, we have the term
\[
\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[1]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\on{pt}]_{i}, [1]_{p_1},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}=(k+p_2)\langle [\on{pt}]_{k+p_2}, [1]_{p_1}\rangle_{0,\beta,2}^{(X,D)},
\]
in (\ref{wdvv-lhs}), by Proposition \ref{prop-theta-2}.
\item When $i<k+p_1+p_2$, we have
\[
\langle [1]_{p_2},[\on{pt}]_{k+p_1+p_2},[1]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} =(k+p_1+p_2-i)\langle [\on{pt}]_{k+p_1+p_2-i}, [1]_{p_2}\rangle_{0,\beta_2,2}^{(X,D)},
\]
by Proposition \ref{prop-theta-2}.
\end{itemize}
We summarize the above analysis of (\ref{wdvv-lhs}) in terms of $i$:
\begin{itemize}
\item $i<0$: the summand is $0$.
\item $i=p_2$: the summand is
\[
(k+p_1)\langle [\on{pt}]_{k+p_1}, [1]_{p_2}\rangle_{0,\beta,2}^{(X,D)}.
\]
\item $i=k+p_1+2p_2$: the summand is
\[
(k+p_2)\langle [\on{pt}]_{k+p_2}, [1]_{p_1}\rangle_{0,\beta,2}^{(X,D)}.
\]
\item $p_1+p_2 <i<k+p_1+p_2$: the summand is
\[
(k+p_1+p_2-i)\langle [\on{pt}]_{k+p_1+p_2-i}, [1]_{p_2}\rangle_{0,\beta_2,2}^{(X,D)}\cdot (i-p_1-p_2)\langle [\on{pt}]_{i-p_1-p_2}, [1]_{p_1}\rangle_{0,\beta_2,2}^{(X,D)}.
\]
\end{itemize}
Set $m:=i-p_1-p_2$ and $n:=k+p_1+p_2-i$. Then (\ref{wdvv-lhs}) becomes
\[
(k+p_1)N_{k+p_1,p_2}+(k+p_2)N_{k+p_2,p_1}+\sum_{m,n>0, m+n=k}mnN_{m,p_1}N_{n,p_2}.
\]
\item[\textbf{(II)}] Now we look at the RHS of the WDVV equation (\ref{wdvv}):
\begin{align}\label{wdvv-rhs}
\sum\langle [1]_{p_1},[1]_{p_2},[\gamma]_{-i}\rangle_{0,\beta_1,3}^{(X,D)} \langle [\gamma^\vee]_{i}, [\on{pt}]_{k+p_1+p_2},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}.
\end{align}
\begin{itemize}
\item If $i<0$, then the invariant $\langle [\gamma^\vee]_{i}, [\on{pt}]_{k+p_1+p_2},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}$ is a genus zero three-point relative invariant with two negative contact orders. The virtual dimension of the moduli space is $\dim_{\mathbb C}X-2$. On the other hand, the second marking has a point insertion:
\[
\deg([\on{pt}]_{k+p_1+p_2})=\dim_{\mathbb C}X-1>\dim_{\mathbb C}X-2.
\]
This is a contradiction.
\item If $i=0$, by the virtual dimension constraint, we must have
\[
[\gamma^\vee]_{i}=[1]_0.
\]
The string equation implies that the invariant is zero.
\item If $i>0$, we must have
\[
[\gamma^\vee]_{i}=[1]_{r}, \quad \text{and } [\gamma]_{-i}=[\on{pt}]_{-r} \quad \text{for } r:=i>0.
\]
We have
\[
\langle [1]_{p_1},[1]_{p_2},[\gamma]_{-i}\rangle_{0,\beta_1,3}^{(X,D)}=\langle [1]_{p_1},[1]_{p_2},[\on{pt}]_{-r}\rangle_{0,\beta_1,3}^{(X,D)}.
\]
By Proposition \ref{prop-theta-2},
\[
\langle [1]_{r}, [\on{pt}]_{k+p_1+p_2},[1]_{-p_1-p_2} \rangle_{0,\beta_2,3}^{(X,D)}=k\langle [1]_{r}, [\on{pt}]_{k} \rangle_{0,\beta_2,2}^{(X,D)}.
\]
\end{itemize}
Therefore, (\ref{wdvv-rhs}) becomes
\[
\sum_{r>0}N_{p_1,p_2,-r}kN_{k,r}.
\]
\end{enumerate}
This completes the proof.
\end{proof}
\begin{remark}
We can consider the special case when $p_1=1$. Then the LHS of (\ref{identity-wdvv}) is
\[
(k+1)N_{k+1,p_2}+(k+p_2)N_{k+p_2,1}+\sum_{m,n>0, m+n=k}mnN_{m,1}N_{n,p_2}
\]
and the RHS of (\ref{identity-wdvv}) is
\begin{align*}
&\sum_{r>0}N_{1,p_2,-r}kN_{k,r}\\
=& kN_{k,p_2+1}+ \sum_{r=1}^{p_2-1}(p_2-r)N_{p_2-r,1}kN_{k,r},
\end{align*}
by Proposition \ref{prop-struc-const}. Identity (\ref{identity-wdvv}) becomes
\[
(k+1)N_{k+1,p_2}+(k+p_2)N_{k+p_2,1}+\sum_{m,n>0, m+n=k}mnN_{m,1}N_{n,p_2} =kN_{k,p_2+1}+ \sum_{r=1}^{p_2-1}(p_2-r)N_{p_2-r,1}kN_{k,r}.
\]
If we further specialize to the case of toric del Pezzo surfaces with smooth divisors, we recover \cite{GRZ}*{Lemma 5.3}. Here, we give a direct explanation of Identity (\ref{identity-wdvv}) in terms of the WDVV equation of relative Gromov--Witten theory in \cite{FWY}.
\end{remark}
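As a minimal instance (a purely formal check, obtained by combining Identity (\ref{identity-wdvv}) with Proposition \ref{prop-struc-const}), take $p_1=p_2=k=1$. The sum over $m+n=1$ with $m,n>0$ is empty, and the only non-vanishing structure constant with $r>0$ is $N_{1,1,-2}=1$, so Identity (\ref{identity-wdvv}) collapses to
\[
2N_{2,1}+2N_{2,1}=N_{1,2}, \quad \text{i.e. } N_{1,2}=4N_{2,1},
\]
a relation between the two-point invariants of Definition \ref{def-theta-func} in the classes $\beta$ with $D\cdot \beta=3$.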
\section{A mirror theorem for smooth pairs}
\subsection{Relative mirror theorem}
Let $X$ be a smooth projective variety and let $\{p_i\}_{i=1}^r$ be an integral, nef basis of $H^2(X)$. For the rest of the paper, we assume that $D$ is nef. Recall that the $J$-function for the absolute Gromov--Witten theory of $X$ is
\[
J_{X}(\tau,z)=z+\tau+\sum_{\substack{(\beta,l)\neq (0,0), (0,1)\\ \beta\in \on{NE}(X)}}\sum_{\alpha}\frac{q^{\beta}}{l!}\left\langle \frac{\phi_\alpha}{z-\psi},\tau,\ldots, \tau\right\rangle_{0,1+l, \beta}^{X}\phi^{\alpha},
\]
where $\tau=\tau_{0,2}+\tau^\prime\in H^*(X)$; $\tau_{0,2}=\sum_{i=1}^r p_i \log q_i\in H^2(X)$; $\tau^\prime\in H^*(X)\setminus H^2(X)$; $\on{NE}(X)$ is the cone of effective curve classes in $X$; $\{\phi_\alpha\}$ is a basis of $H^*(X)$; $\{\phi^\alpha\}$ is the dual basis under the Poincar\'e pairing. We can decompose the $J$-function as follows:
\[
J_{X}(\tau,z)=\sum_{\beta\in \on{NE}(X)}J_{X,\beta}(\tau,z)q^\beta.
\]
The $J$-function of the smooth pair $(X,D)$ is defined similarly. We first define
\[
\mathfrak H_0:=H^*(X) \text{ and }\mathfrak H_i:=H^*(D) \text{ if }i\in \mathbb Z \setminus \{0\}.
\]
The ring of insertions (state space) of relative Gromov--Witten theory is defined as
\[
\mathfrak H:=\bigoplus\limits_{i\in\mathbb Z}\mathfrak H_i.
\]
Each $\mathfrak H_i$ naturally embeds into $\mathfrak H$. For an element $\gamma\in \mathfrak H_i$, we denote its image in $\mathfrak H$ by $[\gamma]_i$. Define a pairing on $\mathfrak H$ by the following:
\begin{equation}\label{eqn:pairing}
\begin{split}
([\gamma]_i,[\delta]_j) =
\begin{cases}
0, &\text{if } i+j\neq 0,\\
\int_X \gamma\cup\delta, &\text{if } i=j=0, \\
\int_D \gamma\cup\delta, &\text{if } i+j=0, i,j\neq 0.
\end{cases}
\end{split}
\end{equation}
The pairing is extended to the remaining classes by linearity.
\begin{defn}\label{def-relative-J-function}
Let $X$ be a smooth projective variety and let $D$ be a smooth nef divisor. The $J$-function of the pair $(X,D)$ is defined as
\[
J_{(X,D)}(\tau,z)=z+\tau+\sum_{\substack{(\beta,l)\neq (0,0), (0,1)\\ \beta\in \on{NE}(X)}}\sum_{\alpha}\frac{q^{\beta}}{l!}\left\langle \frac{\phi_\alpha}{z-\bar{\psi}},\tau,\ldots, \tau\right\rangle_{0,1+l, \beta}^{(X,D)}\phi^{\alpha},
\]
where $\tau=\tau_{0,2}+\tau^\prime\in \mathfrak H$; $\tau_{0,2}=\sum_{i=1}^r p_i \log q_i\in H^2(X)$; $\tau^\prime\in \mathfrak H\setminus H^2(X)$; $\{\phi_\alpha\}$ is a basis of the ambient part of $\mathfrak H$; $\{\phi^\alpha\}$ is the dual basis under the Poincar\'e pairing.
\end{defn}
\begin{defn}\label{def-relative-I-function}
The (non-extended) $I$-function of the smooth pair $(X,D)$ is
\[
I_{(X,D)}(y,z)=\sum_{\beta\in \on{NE}(X)}J_{X,\beta}(\tau_{0,2},z)y^\beta\left(\prod_{0<a\leq D\cdot \beta-1}(D+az)\right)[1]_{-D\cdot \beta},
\]
where $\tau_{0,2}\in H^2(X)$.
\end{defn}
\begin{theorem}[\cite{FTY}, Theorem 1.4]\label{thm:rel-mirror}
Let $X$ be a smooth projective variety and $D$ be a smooth nef divisor such that $-K_X-D$ is nef. Then the $I$-function $I_{(X,D)}(y,z)$ coincides with the $J$-function $J_{(X,D)}(q,z)$ via a change of variables called the relative mirror map.
\end{theorem}
The relative mirror theorem also holds for the extended $I$-function of the smooth pair $(X,D)$. For the purpose of this paper, we only write down the simplest case, when the extended data $S$ is the following:
\[
S:=\{1\}.
\]
\begin{defn}\label{def-relative-I-function-extended}
The $S$-extended $I$-function of $(X,D)$ is defined as follows.
\[
I_{(X,D)}^{S}(y,x_1,z)=I_++I_-,
\]
where
\begin{align*}
I_+:=&\sum_{\substack{\beta\in \on{NE}(X),k\in \mathbb Z_{\geq 0}\\ k<D\cdot \beta} }J_{X, \beta}(\tau_{0,2},z)y^{\beta}\frac{ x_1^{k}}{z^{k}k!}\frac{\prod_{0<a\leq D\cdot \beta}(D+az)}{D+(D\cdot \beta-k)z}[{1}]_{-D\cdot \beta+k},
\end{align*}
and
\begin{align*}
I_-:=&\sum_{\substack{\beta\in \on{NE}(X),k\in \mathbb Z_{\geq 0}\\ k\geq D\cdot \beta} }J_{X, \beta}(\tau_{0,2},z)y^{\beta}\frac{ x_1^{k}}{z^{k}k!}\left(\prod_{0<a\leq D\cdot \beta}(D+az)\right)[{ 1}]_{-D\cdot \beta+k}.
\end{align*}
\end{defn}
\begin{theorem}[\cite{FTY}, Theorem 1.5]\label{thm:rel-mirror-extended}
Let $X$ be a smooth projective variety and $D$ be a smooth nef divisor such that $-K_X-D$ is nef. Then the $S$-extended $I$-function $I^S_{(X,D)}(y,x_1,z)$ coincides with the $J$-function $J_{(X,D)}(q,z)$ via a change of variables called the relative mirror map.
\end{theorem}
Although the relative mirror theorem of \cite{FTY} has been used in the literature several times, the relative mirror map has never been studied in detail. We provide a detailed description of the relative mirror map here. We consider the extended $I$-function in Definition \ref{def-relative-I-function-extended} under the assumption that $D$ is nef and $-K_X-D$ is nef. The extended $I$-function can be expanded as follows:
\begin{align*}
&I^S_{(X,D)}(y,x_1,z)\\
=&z+\sum_{i=1}^r p_i\log y_i+x_1[1]_{1}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}+\sum_{k=1}^{\infty}I_{-k}z^{-k},
\end{align*}
where the coefficient of $z^0$, denoted by $\tau(y,x_1)$, is the relative mirror map:
\begin{align}\label{rel-mirror-map}
\tau(y,x_1)=\sum_{i=1}^r p_i\log y_i+x_1[1]_{1}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}.
\end{align}
The relative mirror theorem of \cite{FTY} states that
\[
J(\tau(y,x_1),z)=I(y,x_1,z).
\]
The function $\tau(y,x_1)$ is the mirror map computed from the extended $I$-function. We will refer to $\tau(y,x_1)$ as the \emph{extended relative mirror map}. The relative mirror map for the non-extended $I$-function is $\tau(y,0)$. We will refer to it as the \emph{relative mirror map} and denote it by $\tau(y)$. To be able to compute invariants from the relative mirror theorem, we need to understand the invariants that appear in $J(\tau(y,x_1),z)$. In particular, we need to understand the following invariants in order to compute the theta function $\vartheta_1$.
\begin{itemize}
\item Relative invariants with one positive contact order and several negative contact orders:
\[
\langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \bar{\psi}^a\rangle_{0,l+1,\beta}^{(X,D)},
\]
where
\[
D\cdot \beta=k_{l+1}-\sum_{i=1}^l k_i\geq 0, \text{ and } k_i>0.
\]
These are needed to understand the relative mirror map.
\item Relative invariants with two positive contact orders and several negative contact orders of the following form:
\[
\langle [1]_1, [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \rangle_{0,l+2,\beta}^{(X,D)},
\]
where
\[
D\cdot \beta-1=k_{l+1}-\sum_{i=1}^l k_i\geq 0, \text{ and } k_i>0.
\]
\item Degree zero relative invariants with two positive contact orders and several negative contact orders of the following form:
\[
\langle [1]_1,[1]_{-k_1},\cdots, [1]_{-k_l},[\on{pt}]_{k_{l+1}}\rangle_{0,l+2,0}^{(X,D)}.
\]
\end{itemize}
We will compute these invariants in the following sections.
\subsection{Relative invariants with several negative contact orders}
Based on the expression (\ref{rel-mirror-map}) of the relative mirror map, to be able to compute relative invariants from the relative mirror theorem, we first need to study relative invariants with several insertions of $[1]_{-i}$ for $i\in \mathbb Z_{>0}$. We start with the case when $x_1=0$. That is, there is only one marking with positive contact order and no marking with insertion $[1]_1$. We claim that the insertion $[1]_{-i}$ behaves like the divisor class $D$, in the sense that there is an analogue of the divisor equation, as follows.
\begin{proposition}\label{prop-several-neg-1}
Given a curve class $\beta$, let $k_i\in \mathbb Z_{>0}$ for $i\in \{1,\ldots, l+1\}$ be such that
\[
D\cdot \beta=k_{l+1}-\sum_{i=1}^l k_i\geq 0.
\]
Then we have the following relation:
\begin{align}\label{identity-several-neg-1}
\langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \bar{\psi}^a\rangle_{0,l+1,\beta}^{(X,D)}=\langle [D]_0,\cdots, [D]_0, [\gamma]_{D\cdot \beta} \bar{\psi}^a\rangle_{0,l+1,\beta}^{(X,D)},
\end{align}
where $\gamma \in H^*(D)$.
\end{proposition}
\begin{proof}
\textbf{The base case I: $a=0$.} In this case, there are no descendant classes. Then the identity becomes
\begin{align}\label{identity-several-neg-1-0}
\langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \rangle_{0,l+1,\beta}^{(X,D)}=\langle [D]_0,\cdots, [D]_0, [\gamma]_{D\cdot \beta} \rangle_{0,l+1,\beta}^{(X,D)}.
\end{align}
By Section \ref{sec-rel-general}, relative Gromov--Witten theory is defined via graph sums, by gluing moduli spaces of relative stable maps with moduli spaces of rubber maps using fiber products. When there is more than one negative contact order, the invariants are usually complicated and involve summation over the different graphs described in Section \ref{sec-rel-general}. But for the invariants on the LHS of (\ref{identity-several-neg-1-0}), the situation simplifies significantly. Every negative contact marking must be distributed to a rubber moduli space $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$ labelled by $\Gamma_i^0$. Since $D$ is a nef divisor in $X$, we have
\[
\int_{\beta_D}c_1(N_{D/X})\geq 0,
\]
for every effective curve class $\beta_D$ in $D$. Let $\beta_v$ be the curve class associated to a vertex $v$ in $\Gamma_i^0$. Then we must have
\[
D_0\cdot \beta_v-D_\infty\cdot \beta_v=\int_{\pi_*(\beta_v)}c_1(N_{D/X})\geq 0,
\]
where $\pi:P\rightarrow D$ is the projection map. Therefore, the nefness of $D$ implies that, for each $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$, the relative insertion at $D_0$ cannot be empty. Hence, at least one of the positive contact markings on the LHS of (\ref{identity-several-neg-1-0}) must be distributed to $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$. Since there is only one positive contact marking, there can only be one rubber moduli space, denoted by $\overline{\mathcal M}^\sim_{\Gamma^0}(D)$. Therefore, all the negative contact markings, as well as the positive contact marking, are distributed to $\overline{\mathcal M}^\sim_{\Gamma^0}(D)$.
The invariant can be written as
\[
\sum_{\mathfrak G\in \mathcal B_\Gamma}\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum_\eta \langle \check{\eta}\rangle^{\bullet,\mathfrak c_\Gamma, (X,D)}_{\Gamma^\infty}\langle \eta,[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0},
\]
where the superscript $\mathfrak c_\Gamma$ means capping with the class $\mathfrak c_\Gamma(X/D)$ of Definition \ref{def-rel-cycle}. Let
\[
p: \overline{\mathcal M}_{\Gamma^0}^\sim(P,D_0\cup D_\infty)\rightarrow \overline{\mathcal M}_{0,m}(D,\pi_*(\beta_1))
\]
be the natural map obtained by projecting rubber maps to $D$ and contracting the resulting unstable components. By \cite{JPPZ18}*{Theorem 2}, we have
\[
p_*[\overline{\mathcal M}_{\Gamma^0}^\sim(P,D_0\cup D_\infty)]^{\on{vir}}=[\overline{\mathcal M}_{0,m}(D,\pi_*(\beta_1))]^{\on{vir}},
\]
where $\pi:P\rightarrow D$ is the projection map. Note that there are $l$ identity classes $[1]$ and the degree of the class $\mathfrak c_\Gamma(X/D)$ is less than or equal to $l-1$. We can therefore apply the string equation $l$ times. Then the invariant
\[
\langle \eta,[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0}
\]
vanishes unless $\pi_*(\beta_1)=0$ and $\eta$ contains exactly one element. Moreover, $\eta$ needs to be $[\check{\gamma}]_{D\cdot \beta}$ and $\check{\eta}$ needs to be $[\gamma]_{D\cdot \beta}$. Therefore,
\[
\langle \check{\eta}\rangle^{\bullet,\mathfrak c_\Gamma, (X,D)}_{\Gamma^\infty}=\langle [\gamma]_{D\cdot \beta}\rangle_{0,1,\beta}^{(X,D)}.
\]
There is only one edge, hence
\[
\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}=D\cdot \beta.
\]
It remains to compute
\[
\langle [\check{\gamma}]_{D\cdot \beta},[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0}
\]
with $\pi_*(\beta_1)=0$. This is the same as the rubber invariant with the base being a point. Set $d=D\cdot \beta$. We claim that it coincides with the following relative Gromov--Witten invariant of $(\mathbb P^1,0\cup\infty)$ with negative contact orders:
\begin{align}\label{rel-inv-P-1}
\langle [1]_{d}, [1]_{-k_1},\cdots, [1]_{-k_l}, | \, | [1]_{k_{l+1}}\rangle_{0,l+2,d}^{(\mathbb P^1,0\cup\infty)}.
\end{align}
This is because one can run the above computation of the LHS of (\ref{identity-several-neg-1-0}) for the invariant (\ref{rel-inv-P-1}) as well: we see that (\ref{rel-inv-P-1}) equals
\[
d\langle [1]_d | \, | [1]_d \rangle_{0,2,d}^{(\mathbb P^1,0\cup\infty)} \langle [\check{\gamma}]_{D\cdot \beta},[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0}.
\]
It is straightforward to compute that
\[
\langle [1]_d | \, | [1]_d \rangle_{0,2,d}^{(\mathbb P^1,0\cup\infty)}=\frac{1}{d}.
\]
This proves the claim that
\[
\langle [\check{\gamma}]_{D\cdot \beta},[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0}
\]
with $\pi_*(\beta_1)=0$ equals (\ref{rel-inv-P-1}). The invariant (\ref{rel-inv-P-1}) has already been computed in \cite{KW}*{Proposition B.2}:
\[
\langle [1]_{d}, [1]_{-k_1},\cdots, [1]_{-k_l}, | \, | [1]_{k_{l+1}}\rangle_{0,l+2,d}^{(\mathbb P^1,0\cup\infty)}=d^{l-1}.
\]
Therefore the LHS of (\ref{identity-several-neg-1-0}) is
\[
(D\cdot \beta)^l \langle [\gamma]_{D\cdot \beta}\rangle_{0,1,\beta}^{(X,D)}=\langle [D]_0,\cdots, [D]_0, [\gamma]_{D\cdot \beta} \rangle_{0,l+1,\beta}^{(X,D)}
\]
by the divisor equation.
\textbf{The base case II: $l=1$.} Then the LHS of (\ref{identity-several-neg-1}) is a relative invariant with one negative contact order. Similar to the proof of Proposition \ref{prop-struc-const}, the invariant is of the form
\begin{align*}
\sum_{\mathfrak G\in \mathcal B_\Gamma}\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum \langle [\gamma]_{k_2}\psi^a |\,|[1]_{k_1},\eta\rangle^{\sim}_{\Gamma^0} \langle \check{\eta}\rangle^{\bullet, (X,D)}_{\Gamma_\infty}.
\end{align*}
For the RHS of (\ref{identity-several-neg-1}), we consider the degeneration to the normal cone of $D$ and apply the degeneration formula. After applying the rigidification lemma \cite{MP}*{Lemma 2}, the invariant is of the form
\begin{align*}
\sum_{\mathfrak G\in \mathcal B_\Gamma}\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum \langle [\gamma]_{k_2-k_1}\psi^a |\,|[1]_{0},\eta\rangle^{\sim}_{\Gamma^0} \langle \check{\eta}\rangle^{\bullet, (X,D)}_{\Gamma_\infty}.
\end{align*}
The only difference between the LHS of (\ref{identity-several-neg-1}) and the RHS of (\ref{identity-several-neg-1}) is the contact orders of two markings (contact orders $k_2$ and $k_1$ for the LHS, and contact orders $k_2-k_1$ and $0$ for the RHS) in the rubber invariants. We push forward the rubber moduli spaces to the moduli space $\overline{M}(D)$ of stable maps to $D$. Since the genus zero double ramification cycle is trivial, it does not depend on the contact orders. Therefore, the LHS of (\ref{identity-several-neg-1}) and the RHS of (\ref{identity-several-neg-1}) are the same.

\textbf{Induction:} Now we use induction to prove the case when $a>0$ and $l>1$. Suppose Identity (\ref{identity-several-neg-1}) is true when $a=N\geq 0$. When $a=N+1$, we apply the topological recursion relation:
\begin{align*}
&\langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \bar{\psi}^{N+1}\rangle_{0,l+1,\beta}^{(X,D)}\\
=&\sum \langle [\gamma]_{k_{l+1}} \bar{\psi}^{N}, \prod_{j\in S_1}[1]_{-k_j},\tilde {T}_{i,k}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)} \langle \tilde {T}_{-i}^k, \prod_{j\in S_2}[1]_{-k_j},[1]_{-k_1},[1]_{-k_2} \rangle_{0,\beta_2,3+|S_2|}^{(X,D)},
\end{align*}
where the sum is over all $\beta_1+\beta_2=\beta$, all indices $i,k$ of the basis, and disjoint sets $S_1, S_2$ with $S_1\cup S_2=\{3,\ldots,l\}$. The nefness of the divisor $D$ implies that
\[
\tilde {T}_{-i}^k=[\alpha]_{b} \text{ and } \tilde {T}_{i,k}=[\check{\alpha}]_{-b},
\]
for some positive integer $b\geq k_1$. Note that
\[
\langle [\alpha]_{b}, \prod_{j\in S_2}[1]_{-k_j}, [1]_{-k_1},[1]_{-k_2} \rangle_{0,\beta_2,3+|S_2|}^{(X,D)}=\langle [\alpha]_{b-k_1-k_2},\prod_{j\in S_2}[1]_{-k_j}, [D]_0,[D]_0 \rangle_{0,\beta_2,3+|S_2|}^{(X,D)}
\]
follows from the base case. On the other hand, we have
\[
\langle [\gamma]_{k_{l+1}} \bar{\psi}^{N}, \prod_{j\in S_1}[1]_{-k_j}, [\check{\alpha}]_{-b}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)}=\langle [\gamma]_{k_{l+1}-k_1} \bar{\psi}^{N}, \prod_{j\in S_1}[1]_{-k_j}, [\check{\alpha}]_{-b+k_1}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)}.
\]
This is because, for these invariants, the graph sum in the definition of the relative invariants has only one rubber space, and all the markings are in the rubber space. Moreover, the class $\mathfrak c_\Gamma$ does not depend on the values of $k_{l+1}$ and $b$. Therefore, we have the identity.
Therefore, we have \begin{align*} &\sum \langle [\gamma]_{k_{l+1}} \bar{\psi}^{N}, \prod_{j\in S_1}[1]_{-k_j}, \tilde {T}_{i,k}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)} \langle \tilde {T}_{-i}^k, \prod_{j\in S_2}[1]_{-k_j}, [1]_{-k_1},[1]_{-k_2} \rangle_{0,\beta_2,3+|S_2|}^{(X,D)}\\ =&\sum \langle [\gamma]_{k_{l+1}-k_1-k_2} \bar{\psi}^{N}, \prod_{j\in S_1}[1]_{-k_j}, [\check{\alpha}]_{-b+k_1+k_2}\rangle_{0,\beta_1,2+|S_1|}^{(X,D)} \langle [\alpha]_{b-k_1-k_2},\prod_{j\in S_2}[1]_{-k_j}, [D]_0,[D]_0 \rangle_{0,\beta_2,3+|S_2|}^{(X,D)}\\ =&\langle [D]_0, [D]_0, \prod_{j\in\{3,\ldots,l\}}[1]_{-k_j},[\gamma]_{k_{l+1}-k_1-k_2} \bar{\psi}^{N+1}\rangle_{0,l+1,\beta}^{(X,D)}, \end{align*} where the third line is the topological recursion relation. We have the identity \begin{align*} \langle [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \bar{\psi}^{N+1}\rangle_{0,l+1,\beta}^{(X,D)} =\langle [D]_0, [D]_0,\prod_{j\in\{3,\ldots,l\}}[1]_{-k_j}, [\gamma]_{k_{l+1}-k_{1}-k_2} \bar{\psi}^{N+1}\rangle_{0,l+1,\beta}^{(X,D)}. \end{align*} Running the above argument multiple times, we trade markings with negative contact orders for markings with insertion $[D]_0$. We end up with either one negative contact order or no negative contact order: the former case is the base case II with $l=1$; the latter case is just Identity (\ref{identity-several-neg-1}). \end{proof} The identity will be slightly different if we add an insertion of $[1]_1$. For the purpose of this paper, we only consider the case when there are no descendant classes. There is also a (more complicated) identity for descendant invariants, but we do not plan to discuss it here. \begin{proposition}\label{prop-several-neg-2} Given a curve class $\beta$, let $k_i\in \mathbb Z_{>0}$ for $i\in \{1,\ldots, l+1\}$ such that \[ D\cdot \beta-1=k_{l+1}-\sum_{i=1}^l k_i\geq 0. \] Then we have the following relation: \begin{align}\label{identity-several-neg-2} \langle [1]_1, [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \rangle_{0,l+2,\beta}^{(X,D)}=(D\cdot \beta-1)^l\langle [1]_1, [\gamma]_{D\cdot \beta} \rangle_{0,2,\beta}^{(X,D)}, \end{align} where $\gamma \in H^*(D)$. \end{proposition} \begin{proof} The proof is similar to the proof of Proposition \ref{prop-several-neg-1}. We first consider the LHS of (\ref{identity-several-neg-2}). By definition, every negative contact marking must be in a rubber moduli space $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$ labelled by $\Gamma_i^0$, and each rubber moduli space $\overline{\mathcal M}^\sim_{\Gamma_i^0}(D)$ has at least one negative contact marking distributed to it.
Similar to the proof of Proposition \ref{prop-several-neg-1}, the nefness of $D$ implies that the last marking (with insertion $[\gamma]_{k_{l+1}}$) has to be distributed to the rubber space. Now we examine the first marking. Since the contact order of the first marking is $1$, the nefness of $D$ implies that the first marking and the last marking cannot be in different rubber spaces. On the other hand, if the first marking and the last marking (both with positive contact orders) are in the same rubber space, then we claim the invariant vanishes. This is because, after pushing forward to $\overline{M}_{g,n}(D, \pi_*(\beta))$, there are $(l+1)$ identity classes $[1]$ and the degree of the class $\mathfrak c_\Gamma$ is $l-1$. Applying the string equation $(l+1)$ times implies that the invariant is zero. Therefore, there is only one rubber moduli space, labelled by $\Gamma^0$, and the first marking cannot be distributed to it. The LHS of (\ref{identity-several-neg-2}) is of the following form \[ \sum_{\mathfrak G\in \mathcal B_\Gamma}\frac{\prod_{e\in E}d_e}{|\on{Aut}(E)|}\sum_\eta \langle [1]_1,\check{\eta}\rangle^{\bullet,\mathfrak c_\Gamma, (X,D)}_{\Gamma^\infty}\langle \eta,[1]_{k_1},\cdots, [1]_{k_l},| \, | [\gamma]_{k_{l+1}} \rangle^{\sim,\mathfrak c_{\Gamma}}_{\Gamma^0}. \] The rest of the proof follows the proof of Proposition \ref{prop-several-neg-1}, showing that Equation (\ref{identity-several-neg-2}) holds. Note that the contact order of the unique marking in $\eta$ is $D\cdot \beta-1$ instead of $D\cdot \beta$. Therefore, we have the factor $(D\cdot \beta-1)^l$ instead of $(D\cdot \beta)^l$. \end{proof} One can add more insertions of $[1]_1$; then we have a similar identity, as follows. \begin{prop}\label{prop-several-neg-3} Given a curve class $\beta$, let $k_i\in \mathbb Z_{>0}$ for $i\in \{1,\ldots, l+1\}$ such that \[ D\cdot \beta-a=k_{l+1}-\sum_{i=1}^l k_i\geq 0. \] Then we have the following relation: \begin{align}\label{identity-several-neg-3} \langle [1]_1, \ldots, [1]_1, [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \rangle_{0,a+l+1,\beta}^{(X,D)}=(D\cdot \beta-a)^l\langle [1]_1,\ldots, [1]_1, [\gamma]_{D\cdot \beta} \rangle_{0,a+1,\beta}^{(X,D)}, \end{align} where $\gamma \in H^*(D)$. \end{prop} \begin{proof} The proof is similar to the proofs of Proposition \ref{prop-several-neg-1} and Proposition \ref{prop-several-neg-2}. We do not repeat the details. \end{proof} \subsection{Degree zero relative invariants}\label{sec-deg-0} The following invariants will also appear in the $J$-function when plugging in the mirror map: \[ \langle [1]_1, [1]_{-k_1},\cdots, [1]_{-k_l}, [\gamma]_{k_{l+1}} \rangle_{0,l+2,\beta}^{(X,D)}, \text{ with } D\cdot \beta=0. \] We again apply the definition of relative Gromov--Witten invariants with negative contact orders in Section \ref{sec-rel-general} and then push forward the rubber moduli to $\overline{M}_{0,l+2}(D,\pi_*(\beta))$. Applying the string equation $(l+1)$ times, the invariants vanish unless $\beta=0$. When $l=1$, the invariants have two markings with positive contact orders and one marking with negative contact order. By direct computation, we have \[ \langle [1]_1,[1]_{-k_1},[\on{pt}]_{k_{2}}\rangle_{0,3,0}^{(X,D)}=1. \] It remains to compute degree zero, genus zero relative invariants with two positive contact orders and several negative contact orders.
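Note that the value computed above for $l=1$ is consistent with Proposition \ref{prop-degree-zero} below: there the constraint reads $1+k_{2}=k_1$ when $l=1$, and the predicted value is $(-1)^{1-1}=1$.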
By the definition of relative invariants with negative contact orders in Section \ref{sec-rel-general}, the bipartite graph simplifies to a single vertex of type $0$ and the moduli space is simply the product $\overline{M}_{0,n}\times D$. We have the following result. \begin{prop}\label{prop-degree-zero} \begin{align}\label{identity-zero-several-neg} \langle [1]_1,[1]_{-k_1},\cdots, [1]_{-k_l},[\on{pt}]_{k_{l+1}}\rangle_{0,l+2,0}^{(X,D)}=(-1)^{l-1}, \end{align} where $k_1, \ldots, k_l$ are positive integers and \[ 1+k_{l+1}=k_1+\cdots +k_l. \] \end{prop} These invariants can be computed from the $I$-function, which is a limit of the $I$-function for the twisted invariants of $B\mathbb Z_r$; it is also the restriction of the $I$-function for $(X,D)$ to the degree zero case. Before computing the corresponding orbifold invariants, we briefly recall the relative-orbifold correspondence of \cite{FWY}: \[ r^{m_-}\langle \prod_{i=1}^m \tau_{a_i}(\gamma_i)\rangle_{0,m,\beta}^{X_{D,r}}=\langle \prod_{i=1}^m \tau_{a_i}(\gamma_i)\rangle_{0,m,\beta}^{(X,D)}, \] where there are $m$ markings in total and $m_-$ of them are markings with negative contact orders; $a_i\in \mathbb Z_{\geq 0}$; $\gamma_i$ are cohomology classes of $X$ (or $D$) when the marking is interior (or relative/orbifold, respectively). We point out that it is important to keep track of the factor $r^{m_-}$. We need to consider the $S$-extended $I$-function for the twisted Gromov--Witten invariants of $B\mathbb Z_r$ for sufficiently large $r$, with extended data \[ S=\{1,-k_1,\ldots,-k_l\}. \] The $I$-function is \begin{align}\label{I-func-BZ} z\sum \frac{x_1^{a}\prod_{i=1}^l x_{-k_i}^{a_{i}}}{z^{a+\sum_{i=1}^l a_i}a!\prod_{i=1}^l a_i!}\frac{\prod_{b\leq 0, \langle b\rangle=\langle -\frac{a}{r}-\sum_{i=1}^l a_i(1-\frac{k_i}{r})\rangle} (bz) }{\prod_{b\leq -\frac{a}{r}-\sum_{i=1}^l a_i(1-\frac{k_i}{r}), \langle b\rangle=\langle -\frac{a}{r}-\sum_{i=1}^l a_i(1-\frac{k_i}{r})\rangle} (bz) } [1]_{\langle \frac{a}{r}-\sum_{i=1}^l \frac{a_ik_i}{r}\rangle}, \end{align} where $\langle b \rangle$ is the fractional part of the rational number $b$. The orbifold mirror map, the $z^0$-coefficient of the $I$-function, is \[ x_1[1]_{\frac{1}{r}}+\sum_{\{i_1,\ldots, i_j\}\subset \{1,\ldots, l\}} x_{-k_{i_1}}\cdots x_{-k_{i_j}} \prod_{b=1}^{j-1}\left(\frac{ k_{i_1}+\cdots+ k_{i_j}}{r}-b\right)[1]_{\langle-\frac{k_{i_1}+\cdots+ k_{i_j}}{r}\rangle}. \] Since the expressions of the $I$-function and the mirror map look quite complicated for general $l$, we first carry out the computation for the case $l=2$ to better explain the idea. \subsubsection{Computation for $l=2$} When $l=2$, the $I$-function becomes \[ z\sum \frac{x_1^{a} x_{-k_1}^{a_{1}}x_{-k_2}^{a_{2}}}{z^{a+a_1+a_2}a! a_1!a_2!}\frac{\prod_{b\leq 0, \langle b\rangle=\langle -\frac{a}{r}-\sum_{i=1}^2 a_i(1-\frac{k_i}{r})\rangle} (bz) }{\prod_{b\leq -\frac{a}{r}-\sum_{i=1}^2 a_i(1-\frac{k_i}{r}), \langle b\rangle=\langle -\frac{a}{r}-\sum_{i=1}^2 a_i(1-\frac{k_i}{r})\rangle} (bz) } [1]_{\langle \frac{a}{r}-\sum_{i=1}^2 \frac{a_ik_i}{r}\rangle}. \] The orbifold mirror map is \[ x_1[1]_{\frac{1}{r}}+x_{-k_1}[1]_{1-\frac{k_1}{r}}+x_{-k_2}[1]_{1-\frac{k_2}{r}}+x_{-k_1}x_{-k_2}\left(\frac{k_1+k_2}{r}-1\right)[1]_{1-\frac{k_1+k_2}{r}}. \] We would like to take the coefficient of $x_1x_{-k_1}x_{-k_2}z^{-1}[1]_{\langle-\frac{k_1+k_2-1}{r}\rangle}$ of the $I$-function and the $J$-function. The coefficient of the $I$-function is \[ \frac{k_1+k_2-1}{r}-1.
\] By the mirror theorem, this coefficient of the $I$-function coincides with the corresponding coefficient of the $J$-function: \begin{align*} &\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1}{r}}, [1]_{1-\frac{k_2}{r}}, r[1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,4}^{B \mathbb Z_r, tw}\\ +&\left(\frac{k_1+k_2}{r}-1\right)\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1+k_2}{r}}, r[1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,3}^{B \mathbb Z_r, tw}. \end{align*} Note that the invariant \[ \langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1+k_2}{r}}, r[1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,3}^{B \mathbb Z_r, tw} \] coincides with the degree zero relative invariant with one negative contact order, whose value is $1$ by direct computation. Therefore, we have \[ \frac{k_1+k_2-1}{r}-1=\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1}{r}}, [1]_{1-\frac{k_2}{r}}, r[1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,4}^{B \mathbb Z_r, tw}+\left(\frac{k_1+k_2}{r}-1\right). \] Hence, we have \[ r\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1}{r}}, [1]_{1-\frac{k_2}{r}}, [1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,4}^{B \mathbb Z_r, tw}=-\frac{1}{r}. \] We conclude that the degree zero relative invariant with two negative contact orders is \begin{align*} &\langle [1]_1,[1]_{-k_1}, [1]_{-k_2},[\on{pt}]_{k_1+k_2-1}\rangle^{(X,D)}_{0,4,0}\\ =& r^2\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{k_1}{r}}, [1]_{1-\frac{k_2}{r}}, [1]_{\frac{k_1+k_2-1}{r}}\rangle_{0,4}^{B \mathbb Z_r, tw}\\ =&-1. \end{align*} \subsubsection{Computation for general $l$} \begin{proof}[Proof of Proposition \ref{prop-degree-zero}] We proceed with induction on $l$. Suppose Identity (\ref{identity-zero-several-neg}) is true for $l=N>0$. For $l=N+1$, extracting the coefficient of $x_1\prod_{i=1}^{N+1} x_{-k_i}z^{-1}[1]_{\langle\frac{1-\sum_{i=1}^{N+1} k_i}{r}\rangle}$ of the $I$-function (\ref{I-func-BZ}), we have \[ \prod_{b=1}^{N} \left( \frac{-1+\sum_{i=1}^{N+1} k_i }{r}-b\right). \] The corresponding coefficient of the $J$-function is \begin{align*} &\langle [1]_{\frac{1}{r}}, \prod_{i=1}^{N+1} [1]_{1-\frac{k_i}{r}}, r[1]_{\frac{-1+\sum_{i=1}^{N+1}k_i}{r}}\rangle_{0,N+3}^{B \mathbb Z_r, tw}\\ +&\sum_{\{i_1,i_2\}\subset \{1,\ldots,N+1\}}\left(\frac{k_{i_1}+k_{i_2}}{r}-1\right)\langle [1]_{\frac 1 r}, \prod_{i\in\{1,\ldots, N+1\}\setminus \{i_1,i_2\} }[1]_{1-\frac{k_i}{r}}, [1]_{1-\frac{k_{i_1}+k_{i_2}}{r}}, r[1]_{\frac{-1+\sum_{i=1}^{N+1} k_i}{r}}\rangle_{0,N+2}^{B \mathbb Z_r, tw}\\ +& \cdots\\ +& \prod_{b=1}^{N} \left( \frac{\sum_{i=1}^{N+1} k_i }{r}-b\right)\langle [1]_{\frac{1}{r}}, [1]_{1-\frac{\sum_{i=1}^{N+1}k_i}{r}}, r[1]_{\frac{-1+\sum_{i=1}^{N+1} k_i}{r}}\rangle_{0,3}^{B \mathbb Z_r, tw}, \end{align*} where the omitted terms correspond to the remaining ways of grouping the variables $x_{-k_i}$ in the mirror map. We further multiply the coefficients of both the $I$-function and the $J$-function by $r^{N}$, then take the coefficient of the lowest power of $r$ (the constant term in $r$). This coefficient of the $I$-function is \[ \left( -1+\sum_{i=1}^{N+1} k_i \right)^N. \] Applying the induction hypothesis, the coefficient of the $J$-function is \begin{align*} & r^{N+1}\langle [1]_{\frac{1}{r}}, \prod_{i=1}^{N+1} [1]_{1-\frac{k_i}{r}}, [1]_{\frac{-1+\sum_{i=1}^{N+1}k_i}{r}}\rangle_{0,N+3}^{B \mathbb Z_r, tw}\\ +& \sum_{\{i_1,i_2\}\subset \{1,\ldots,N+1\}}\left(k_{i_1}+k_{i_2}\right)(-1)^{N-1}\\ +& \sum_{\{i_1,i_2,i_3\}\subset \{1,\ldots,N+1\}}\left(k_{i_1}+k_{i_2}+k_{i_3}\right)^2(-1)^{N-2}\\ +& \cdots\\ +& \left( \sum_{i=1}^{N+1} k_i \right)^N.
\end{align*} The coefficient of the $J$-function can be simplified to \begin{align*} & r^{N+1}\langle [1]_{\frac{1}{r}}, \prod_{i=1}^{N+1} [1]_{1-\frac{k_i}{r}}, [1]_{\frac{-1+\sum_{i=1}^{N+1}k_i}{r}}\rangle_{0,N+3}^{B \mathbb Z_r, tw}\\ +& N\left(\sum_{i=1}^{N+1} k_i\right)(-1)^{N-1}+ {N\choose 2} \left(\sum_{i=1}^{N+1} k_i\right)^2(-1)^{N-2}+\cdots + \left( \sum_{i=1}^{N+1} k_i \right)^N. \end{align*} Therefore, the identity between the coefficients of the $I$-function and the $J$-function is \begin{align*} \left( -1+\sum_{i=1}^{N+1} k_i \right)^N=& r^{N+1}\langle [1]_{\frac{1}{r}}, \prod_{i=1}^{N+1} [1]_{1-\frac{k_i}{r}}, [1]_{\frac{-1+\sum_{i=1}^{N+1}k_i}{r}}\rangle_{0,N+3}^{B \mathbb Z_r, tw}\\ +& N\left(\sum_{i=1}^{N+1} k_i\right)(-1)^{N-1}+ {N\choose 2} \left(\sum_{i=1}^{N+1} k_i\right)^2(-1)^{N-2}+\cdots + \left( \sum_{i=1}^{N+1} k_i \right)^N. \end{align*} The binomial theorem implies that \[ (-1)^N=r^{N+1}\langle [1]_{\frac{1}{r}}, \prod_{i=1}^{N+1} [1]_{1-\frac{k_i}{r}}, [1]_{\frac{-1+\sum_{i=1}^{N+1}k_i}{r}}\rangle_{0,N+3}^{B \mathbb Z_r, tw}. \] The orbifold definition of the relative Gromov--Witten invariants with negative contact orders implies Identity (\ref{identity-zero-several-neg}). \end{proof} \subsection{Relative mirror map}\label{sec-rel-mirror-map} We recall that the extended relative mirror map (\ref{rel-mirror-map}) is \begin{align*} \tau(y,x_1)=\sum_{i=1}^r p_i\log y_i+x_1[1]_{1}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}. \end{align*} Let \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)!. \] Let $\iota: D\hookrightarrow X$ be the inclusion map and $\iota_!:H^*(D)\rightarrow H^*(X)$ be the Gysin pushforward. Recall that \[ \mathfrak H:=\bigoplus_{i\in \mathbb Z}\mathfrak H_i, \] where $\mathfrak H_0=H^*(X)$ and $\mathfrak H_i=H^*(D)$ for $i\neq 0$. We also write $\iota_!:\mathfrak H\rightarrow H^*(X)$ for the map that is the identity on the sector $\mathfrak H_0$ and the Gysin pushforward on the sectors $\mathfrak H_i$ with $i\neq 0$. We first let $x_1=0$; then \[ \iota_!J_{(X,D)}(\tau(y),z)=e^{(\sum_{i=1}^r p_i\log y_i+g(y)D)/z}\left(z+\sum_{ \beta\in \on{NE(X)}}\sum_{\alpha}y^{\beta}e^{g(y)(D\cdot \beta)}\left\langle \frac{\phi_\alpha}{z-\bar{\psi}}\right\rangle_{0,1, \beta}^{(X,D)}D\cup \phi^{\alpha}\right). \] This has the same effect as the change of variables \begin{align}\label{relative-mirror-map} \sum_{i=1}^r p_i\log q_i=\sum_{i=1}^r p_i\log y_i+g(y)D, \end{align} or, \[ q^\beta=e^{g(y)D\cdot \beta}y^\beta. \] We will also refer to the change of variables (\ref{relative-mirror-map}) as the \emph{relative mirror map}. In particular, the relative mirror map coincides with the local mirror map of $\mathcal O_X(-D)$ after a change of variables $y\mapsto -y$. When $x_1\neq 0$, we will be able to compute invariants with more than one positive contact order. We will consider this in the following section.
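For later use, we note that the relative mirror map can be inverted order by order: writing $D=\sum_{i=1}^r m_ip_i$ with $m_i\in \mathbb Z_{\geq 0}$, the change of variables reads $q_i=y_ie^{m_ig(y)}$ coordinate-wise, so the inverse mirror map $y=y(q)$ satisfies \[ y_i=q_ie^{-m_ig(y(q))}=q_i\left(1-m_ig(q)+\cdots\right), \] where the omitted terms are of higher order in the Novikov variables.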
\section{Theta function computation via relative mirror theorem} \begin{theorem}\label{thm-main} Let $X$ be a smooth projective variety with a smooth nef anticanonical divisor $D$. Let $W:=\vartheta_1$ be the mirror proper Landau--Ginzburg potential. Set $q^\beta=t^{D\cdot \beta}x^{D\cdot\beta}$. Then \[ W=x^{-1}\exp\left(g(y(q))\right), \] where \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)! \] and $y=y(q)$ is the inverse of the relative mirror map (\ref{relative-mirror-map}). \end{theorem} We will prove Theorem \ref{thm-main} in this section through the mirror theorem for the smooth pair $(X,D)$ proved in \cite{FTY}. For the purpose of computing the theta function \[ x\vartheta_1=1+\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta, \] we only need to consider the $S$-extended $I$-function with \[ S=\{1\}. \] Recall that the $S$-extended $I$-function of $(X,D)$ is defined as follows: \[ I_{(X,D)}^{S}(y,x_1,z)=I_++I_-, \] where \begin{align*} I_+:=&\sum_{\substack{\beta\in \on{NE}(X),k\in \mathbb Z_{\geq 0}\\ k<D\cdot \beta} }J_{X, \beta}(\tau_{0,2},z)y^{\beta}\frac{ x_1^{k}}{z^{k}k!}\frac{\prod_{0<a\leq D\cdot \beta}(D+az)}{D+(D\cdot \beta-k)z}[{1}]_{-D\cdot \beta+k}, \end{align*} and \begin{align*} I_-:=&\sum_{\substack{\beta\in \on{NE}(X),k\in \mathbb Z_{\geq 0}\\ k\geq D\cdot \beta} }J_{X, \beta}(\tau_{0,2},z)y^{\beta}\frac{ x_1^{k}}{z^{k}k!}\left(\prod_{0<a\leq D\cdot \beta}(D+az)\right)[{ 1}]_{-D\cdot \beta+k}. \end{align*} \subsection{Extracting the coefficient of the $J$-function} We consider the $J$-function \[ J(\tau(y,x_1),z), \] where \[ J_{(X,D)}(\tau,z)=z+\tau+\sum_{\substack{(\beta,l)\neq (0,0), (0,1)\\ \beta\in \on{NE(X)}}}\sum_{\alpha}\frac{q^{\beta}}{l!}\left\langle \frac{\phi_\alpha}{z-\bar{\psi}},\tau,\ldots, \tau\right\rangle_{0,1+l, \beta}^{(X,D)}\phi^{\alpha}, \] and \begin{align*} \tau(y,x_1)=\sum_{i=1}^r p_i\log y_i+x_1[1]_{1}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}. \end{align*} The sum of the coefficients of $x_1z^{-1}$ of $J(\tau(y,x_1),z)$ that take values in $[1]_{-n}$, for $n\geq 1$, is the following: \begin{align}\label{coeff-J} [J(\tau(y,x_1),z)]_{x_1z^{-1}}=& \sum_{\beta: D\cdot \beta\geq 1}\sum_{n\geq 1, k\geq 0}\langle [1]_1,\tau(y),\cdots, \tau(y),[\on{pt}]_{n}\rangle_{0,\beta,k+2}^{(X,D)}q^\beta \\ \notag & +\sum_{\beta: D\cdot \beta=0}\sum_{n\geq 1, k>0}\langle[1]_1,\tau(y),\cdots,\tau(y),[\on{pt}]_n \rangle_{0,\beta,k+2}^{(X,D)}. \end{align} By Proposition \ref{prop-several-neg-2}, we have \begin{align}\label{identity-deg-geq-1} & \sum_{\beta: D\cdot \beta \geq 1}\sum_{n\geq 1, k\geq 0}\langle [1]_1,\tau(y),\cdots, \tau(y),[\on{pt}]_{n}\rangle_{0,\beta,k+2}^{(X,D)}q^\beta\\ \notag = &\exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1,n\geq 1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta, \end{align} where \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)! \] and \[ q^\beta=e^{g(y)D\cdot \beta}y^\beta.
\] When $D\cdot \beta=0$, the invariants are studied in Section \ref{sec-deg-0}. As mentioned at the beginning of Section \ref{sec-deg-0}, we need to have $\beta=0$. The degree zero invariants are computed in Proposition \ref{prop-degree-zero}. Therefore, we have \begin{align}\label{identity-deg-0} &\sum_{\beta: D\cdot \beta=0}\sum_{n\geq 1, k>0}\langle[1]_1,\tau(y),\cdots,\tau(y),[\on{pt}]_n \rangle_{0,\beta,k+2}^{(X,D)}\\ \notag =& g(y)+\sum_{l\geq 2} \frac{g(y)^l}{l!}(-1)^{l-1}\\ \notag =& -\exp\left(-g(y)\right)+1. \end{align} Therefore, (\ref{identity-deg-geq-1}) and (\ref{identity-deg-0}) imply that (\ref{coeff-J}) is \begin{align}\label{coeff-J-1} & [J(\tau(y,x_1),z)]_{x_1z^{-1}}\\ \notag =& \exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1,n\geq 1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta -\exp\left(-g(y)\right)+1. \end{align} Note that (\ref{coeff-J-1}) is not exactly the generating function of relative invariants in the theta function $\vartheta_1$. We want to compute $\sum_{\beta: D\cdot \beta=n+1,n\geq 1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta$ instead of $\sum_{\beta: D\cdot \beta=n+1,n\geq 1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta$. Write \[ D=\sum_{i=1}^r m_ip_i \] for some $m_i\in \mathbb Z_{\geq 0}$. In order to compute $\vartheta_1$, we apply the operator $\Delta_D=\sum_{i=1}^r m_iy_i\frac{\partial}{\partial y_i}-1$ to the $J$-function $J(\tau(y,x_1),z)$. Then (\ref{coeff-J-1}) becomes \begin{align}\label{coeff-J-der} & \left(-\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta \\ \notag +& \exp\left(-g(y)\right)\sum_{i=1}^r m_iy_i\sum_{j=1}^r\frac{\partial q_j}{\partial y_i}\frac{\partial}{\partial q_j}\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta \\ \notag -& \exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta+ \left(1+\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp(-g(y))-1. \end{align} We compute the partial derivatives \[ \frac{\partial q_j}{\partial y_i}=\left\{ \begin{array}{cc} y_je^{m_jg(y)}m_j\frac{\partial g(y)}{\partial y_i} & j\neq i; \\ e^{m_j g(y)}+y_je^{m_jg(y)}m_j\frac{\partial g(y)}{\partial y_j} & j=i. \end{array} \right. \] Therefore, \begin{align*} &\sum_{i=1}^r m_iy_i\sum_{j=1}^r\frac{\partial q_j}{\partial y_i}\frac{\partial}{\partial q_j}\\ =&\sum_{j=1}^r \left(m_jy_je^{m_jg(y)}+\sum_{i=1}^r m_iy_i y_je^{m_jg(y)}m_j\frac{\partial g(y)}{\partial y_i} \right)\frac{\partial}{\partial q_j}\\ =&\sum_{j=1}^r\left(1+\sum_{i=1}^r m_iy_i\frac{\partial g(y)}{\partial y_i}\right)m_jq_j\frac{\partial}{\partial q_j}\\ =&\left(1+\sum_{i=1}^r m_iy_i\frac{\partial g(y)}{\partial y_i}\right)\sum_{j=1}^rm_jq_j\frac{\partial}{\partial q_j}.
\end{align*} Hence, (\ref{coeff-J-der}) is \begin{align*} &\left(-\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta \\ & +\exp\left(-g(y)\right)\left(1+\sum_{i=1}^r m_iy_i\frac{\partial g(y)}{\partial y_i}\right)\sum_{j=1}^rm_jq_j\frac{\partial}{\partial q_j}\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta \\ & -\exp\left(-g(y)\right)\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta + \left(1+\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp(-g(y))-1\\ = &\left(-1-\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp\left(-g(y)\right)\left(\sum_{\beta: D\cdot \beta=n+1}\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta \right)\\ & +\exp\left(-g(y)\right)\left(1+\sum_{i=1}^r m_iy_i\frac{\partial g(y)}{\partial y_i}\right)\left(\sum_{\beta: D\cdot \beta=n+1}(n+1)\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta +1\right)-1\\ =& \left(1+\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp\left(-g(y)\right)\left(\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta +1\right)-1. \end{align*} \subsection{Extracting the coefficient of the $I$-function} Recall that, when $\beta=0$, we have \[ J_{X, 0}(\tau_{0,2},z)=z. \] When $\beta\neq 0$, we have \[ J_{X, \beta}(\tau_{0,2},z)=e^{\tau_{0,2}/z}\sum_\alpha\left\langle\psi^{m-2}\phi_\alpha\right\rangle_{0,1,\beta}^X\phi^\alpha\left(\frac{1}{z}\right)^{m-1}, \] where \[ m=\dim_{\mathbb C} X+D\cdot \beta-\deg (\phi_\alpha)\geq D\cdot\beta. \] The $I$-function can be expanded as \[ I=z+x_1[1]_1 +\tau_{0,2}+\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)![1]_{-D\cdot \beta}+\sum_{k=1}^{\infty}I_{-k}z^{-k}. \] We sum over the coefficients of $x_1z^{-1}$ of the $I$-function that take values in $[1]_{-n}$ for $n\geq 1$. By direct computation, the sum of the coefficients is \begin{align}\label{coeff-I} [I(y,z)]_{x_1z^{-1}}=\sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta \frac{(n+1)!}{n}. \end{align} We apply the operator \[ \Delta_D=\sum_{i=1}^r m_iy_i\frac{\partial}{\partial y_i}-1 \] to the $I$-function $I(y,z)$. Then (\ref{coeff-I}) becomes \[ \sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta (n+1)!. \] \subsection{Matching} The relative mirror theorem of \cite{FTY} states that the $I$-function and the $J$-function coincide after the change of variables given by the mirror map, so the corresponding coefficients agree. Therefore, we have \begin{align}\label{identity-coeff-I-J} & \sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta (n+1)!\\ \notag = & \left(1+\sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i}\right)\exp\left(-g(y)\right)\left(\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta +1\right)-1. \end{align} Recall that \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)!.
\] Therefore, \begin{align*} \sum_{i=1}^r m_iy_i\frac{\partial (g(y))}{\partial y_i} =&\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta)!\\ =& \sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta (n+1)!, \end{align*} and (\ref{identity-coeff-I-J}) becomes \begin{align*} & 1+ \sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta (n+1)!\\ \notag = & \left(1+\sum_{\beta: D\cdot \beta =n+1, n\geq 1}\langle [\on{pt}]\psi^{n-1}\rangle_{0,1,\beta}^Xy^\beta (n+1)!\right)\exp\left(-g(y)\right)\left(\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta +1\right). \end{align*} Therefore, \[ 1=\exp\left(-g(y)\right)\left(\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta +1\right). \] We have \begin{align} 1+\sum_{\beta: D\cdot \beta=n+1}n\langle [1]_1,[\on{pt}]_{n}\rangle_{0,\beta,2}^{(X,D)}q^\beta=\exp\left(g(y(q))\right), \end{align} where $y=y(q)$ is the inverse mirror map. This concludes the proof of Theorem \ref{thm-main}. \section{Toric varieties and the open mirror map} In this section, we specialize our result to the toric case. The proper Landau--Ginzburg potential can be computed explicitly. Let $X$ be a toric variety with a smooth, nef anticanonical divisor $D$. Recall that the small $J$-function for the absolute Gromov--Witten theory of $X$ is \[ J_{X}(z)=e^{\sum_{i=1}^r p_i\log q_i/z}\left(z+\sum_{\substack{\beta\neq 0\\ \beta\in \on{NE}(X)}}\sum_{\alpha}q^{\beta}\left\langle \frac{\phi_\alpha}{z-\psi}\right\rangle_{0,1, \beta}^{X}\phi^{\alpha}\right), \] where $\tau_{0,2}=\sum_{i=1}^r p_i \log q_i\in H^2(X)$; $\{\phi_\alpha\}$ is a basis of $H^*(X)$; $\{\phi^\alpha\}$ is the dual basis under the Poincar\'e pairing. By \cite{Givental98}, the $I$-function for a toric variety $X$ is \[ I_{X}(y,z) = ze^{t/z}\sum_{\beta\in \on{NE}(X)}y^{\beta}\left(\prod_{i=1}^{m}\frac{\prod_{a\leq 0}(\bar{D}_i+az)}{\prod_{a \leq D_i\cdot \beta}(\bar{D}_i+az)}\right), \] where $t=\sum_{a=1}^r \bar p_a \log y_a$, $y^\beta=y_1^{p_1\cdot \beta}\cdots y_r^{p_r\cdot \beta}$ and the $\bar{D}_i$ are the toric divisors. The $I$-function can be expanded as \[ z+\sum_{a=1}^r \bar p_a \log y_a+\sum_{j}\bar{D}_j\sum_{\substack{c_1(X)\cdot \beta=0,D_j\cdot \beta<0\\ D_i\cdot \beta\geq 0, \forall i\neq j}} (-1)^{D_j\cdot \beta}\frac{(-D_j\cdot \beta-1)!}{\prod_{i=1, i\neq j}^m (D_i\cdot \beta)!}y^\beta+O(z^{-1}). \] The $J$-function and the $I$-function are related by the following change of variables, called the (absolute) mirror map: \[ \sum_{i=1}^r p_i\log q_i=\sum_{i=1}^r p_i\log y_i+\sum_{j}\bar{D}_j\sum_{\substack{c_1(X)\cdot \beta=0,D_j\cdot \beta<0\\ D_i\cdot \beta\geq 0, \forall i\neq j}} (-1)^{D_j\cdot \beta}\frac{(-D_j\cdot \beta-1)!}{\prod_{i=1, i\neq j}^m (D_i\cdot \beta)!}y^\beta. \] Setting $z=1$, the coefficient of $1\in H^0(X)$ in the $J$-function is \[ \sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^X q^\beta. \] The corresponding coefficient of the $I$-function is \begin{align*} &\sum_{\substack{\beta\neq 0\\ D_i\cdot \beta\geq 0, \forall i}} \frac{1}{\prod_{i=1}^m (D_i\cdot \beta)!}y^\beta\\ =&\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}} \frac{1}{\prod_{i=1}^m (D_i\cdot \beta)!}y^\beta. \end{align*} \subsection{Toric Fano varieties} When $X$ is Fano, the absolute mirror map is trivial.
Then we have \[ g(y)=\sum_{\substack{\beta: D\cdot \beta\geq 2\\ D_i\cdot \beta\geq 0, \forall i}} \frac{ (D\cdot \beta-1)!}{\prod_{i=1}^m (D_i\cdot \beta)!}y^\beta, \] and Theorem \ref{thm-main} specializes to \[ W=x^{-1}\exp\left(\sum_{\substack{\beta: D\cdot \beta\geq 2\\ D_i\cdot \beta\geq 0, \forall i}} \frac{ (D\cdot \beta-1)!}{\prod_{i=1}^m (D_i\cdot \beta)!}y(q)^\beta \right), \] where $y(q)$ is the inverse of the relative mirror map \begin{align*} \sum_{i=1}^r p_i\log q_i=\sum_{i=1}^r p_i\log y_i+g(y)D. \end{align*} If we further specialize the result to the dimension $2$ case, we recover the main result of \cite{GRZ}. \subsection{Toric varieties with a smooth, nef anticanonical divisor}\label{sec-toric-semi-Fano} Theorem \ref{thm-main} specializes to \[ W=x^{-1}\exp\left(\sum_{\substack{\beta: D\cdot \beta\geq 2\\ D_i\cdot \beta\geq 0, \forall i}} \frac{ (D\cdot \beta-1)!}{\prod_{i=1}^m (D_i\cdot \beta)!}y(q)^\beta \right), \] where $y(q)$ is the inverse of the relative mirror map \begin{align*} \sum_{i=1}^r p_i\log q_i=\sum_{i=1}^r p_i\log y_i+g(y)D + \sum_{j}\bar{D}_j\sum_{\substack{c_1(X)\cdot \beta=0,D_j\cdot \beta<0\\ D_i\cdot \beta\geq 0, \forall i\neq j}} (-1)^{D_j\cdot \beta}\frac{(-D_j\cdot \beta-1)!}{\prod_{i=1, i\neq j}^m (D_i\cdot \beta)!}y^\beta. \end{align*} Let $X$ be a Fano variety with a smooth anticanonical divisor $D$. In \cite{GRZ}, the authors proposed that the mirror proper Landau--Ginzburg potential is the open mirror map of the local Calabi--Yau $\mathcal O_X(-D)$. When $X$ is toric, the open mirror map for the toric Calabi--Yau $\mathcal O_X(-D)$ has been computed in \cite{CCLT} and \cite{CLT} (see also \cite{You20} for the computation in terms of relative Gromov--Witten invariants). The SYZ mirror construction for a toric Calabi--Yau manifold $Y$, following \cite{Auroux07} and \cite{Auroux09}, was carried out in \cite{CCLT} and \cite{CLT}: the SYZ mirror of $Y$ is modified by instanton corrections, which are given by genus zero open Gromov--Witten invariants. We specialize to the case $Y=\mathcal O_X(-D)$, where $D$ is a smooth, nef anticanonical divisor of $X$; note that we do not need to assume that $X$ is Fano. These open Gromov--Witten invariants are virtual counts of holomorphic disks in $Y$ bounded by fibers of the Gross fibration. It was shown in \cite{CCLT} and \cite{CLT} that the generating function of these open invariants is the inverse of the mirror map for $Y$. This generating function is referred to as the open mirror map in \cite{GRZ}. Comparing the open mirror map of \cite{CCLT} and \cite{CLT} with our relative mirror map, we directly have the following. \begin{theorem}\label{thm-toric-open} Let $(X,D)$ be a smooth log Calabi--Yau pair, such that $X$ is toric and $D$ is nef. The proper Landau--Ginzburg potential of $(X,D)$ is the open mirror map of the local Calabi--Yau manifold $\mathcal O_X(-D)$. \end{theorem} \subsection{Beyond the toric case} Beyond the toric setting, the conjecture of \cite{GRZ} is also expected to be true as long as we assume the following principle (open-closed duality) in mirror symmetry. \begin{conjecture}\label{conj-open-closed} The instanton corrections of a local Calabi--Yau manifold $\mathcal O_X(-D)$ are given by the inverse mirror map of the local mirror theorem that relates local Gromov--Witten invariants to periods. \end{conjecture} Open Gromov--Witten invariants have not been defined in the general setting. Moreover, they are more difficult to compute.
On the other hand, the local mirror map can usually be computed. As we have seen in Section \ref{sec-rel-mirror-map}, the local mirror map and the relative mirror map coincide. Therefore, we have the following result. \begin{corollary} Assuming Conjecture \ref{conj-open-closed}, the proper Landau--Ginzburg potential is the open mirror map. \end{corollary} \section{Fano varieties and quantum periods} For a Fano variety $X$, the function $g(y)$ is closely related to the quantum period of $X$. In fact, we have \begin{theorem}\label{thm-quantum-period} The function $g(y)$ coincides with the anti-derivative of the regularized quantum period. \end{theorem} \begin{proof} We recall that \[ g(y)=\sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xy^\beta (D\cdot \beta-1)!. \] We consider the change of variables \[ y^\beta=t^{D\cdot \beta}. \] Then applying the operator $t\frac{d}{dt}$ to $g(t)$ gives \[ \sum_{\substack{\beta\in \on{NE}(X)\\ D\cdot \beta \geq 2}}\langle [\on{pt}]\psi^{D\cdot \beta-2}\rangle_{0,1,\beta}^Xt^{D\cdot\beta} (D\cdot \beta)!, \] which is precisely the regularized quantum period in \cite{CCGGK}, up to the constant term. \end{proof} Following the Fanosearch program, the regularized quantum period of a Fano variety coincides with the classical period of its mirror Laurent polynomial. A version of the relation between the quantum period and the classical period was obtained in \cite{TY20b} using the formal orbifold invariants of infinite root stacks \cite{TY20c}. Combining with the Fanosearch program, one can explicitly compute the proper Landau--Ginzburg potential of a Fano variety as long as one knows its mirror Laurent polynomial. In particular, we have found explicit expressions of the proper Landau--Ginzburg potentials for all Fano threefolds using the expressions of the quantum periods in \cite{CCGK}. \begin{example} We consider the Fano threefold $V_{10}$ in \cite{CCGK}*{Section 12}. It is a Fano threefold with Picard rank 1, Fano index 1, and degree $10$. It can be considered as a complete intersection in the Grassmannian $\on{Gr}(2,5)$. Following \cite{CCGK}, the quantum period is \[ G_{V_{10}}(y)=e^{-6y}\sum_{l=0}^\infty\sum_{m=0}^\infty (-1)^{l+m}y^{l+m}\frac{((l+m)!)^2(2l+2m)!}{(l!)^5(m!)^5}(1-5(m-l)H_m), \] where $H_m$ is the $m$-th harmonic number. Therefore, \[ g_{V_{10}}(y)=e^{-6y}\sum_{l=0}^\infty\sum_{m=0}^\infty (-1)^{l+m}y^{l+m}\frac{((l+m)!)^2(2l+2m)!}{(l!)^5(m!)^5}(1-5(m-l)H_m)(l+m-1)!. \] The proper Landau--Ginzburg potential is \[ W=x^{-1}\exp\left(g_{V_{10}}(y(tx))\right), \] where $y(tx)$ is the inverse of \[ tx=y\exp\left(g_{V_{10}}(y)\right). \] \end{example} Similarly, one can compute the proper Landau--Ginzburg potential for all Fano threefolds using the quantum periods in \cite{CCGK}. Moreover, there are large databases \cite{CK22} of quantum periods for Fano manifolds which can be used to compute the proper Landau--Ginzburg potential. \begin{remark} We noticed that H. Ruddat \cite{Ruddat} has been working on the relation between the proper Landau--Ginzburg potential and the classical period. This can also be seen from Theorem \ref{thm-quantum-period}, because it is expected from mirror symmetry that the regularized quantum period of a Fano variety equals the classical period of the mirror Laurent polynomial. The Laurent polynomials are considered as the potentials for the weak (non-proper) Landau--Ginzburg models of \cite{Prz07}, \cite{Prz13}.
Therefore, Theorem \ref{thm-quantum-period} provides an explicit relation between the proper and non-proper Landau--Ginzburg potentials. \end{remark} \bibliographystyle{amsxport}
\section{Introduction} \label{sec:Introduction} Reliable proximity operations on orbit are critical for modern space applications like on-orbit servicing, assembly, and manufacturing. Proximity operations are characterized by a safe approach under constraints and the subsequent interaction and manipulation of target objects. Due to complex contact dynamics and sensitive responses in a micro-gravity environment, the on-ground verification of system behaviour, reliability and safety in various on-orbit operations is very important. The latter is addressed using reliable ground-based simulation, where one of the critical challenges is to reproduce the multi-body kinematics, dynamics and interaction of micro-gravity environments. High-fidelity numerical models may be used to analyze the effect of multi-body motion and interaction in micro-gravity, although they tend to ignore non-deterministic artefacts of a physical system. Physical simulations may also be conducted with full-scale space hardware, although this is often intractable, cost-ineffective and unnecessary. Hardware-in-the-loop (HIL) simulation provides a hybrid approach where subsystem elements or a subset of all system components may be utilized, while other parts of the problem are appropriately ignored or compensated with accurate numerical simulation. HIL simulation facilities and frameworks are an important element of space systems development. Typically, the HIL test facilities relevant here enable the recreation of up to 6-DOF motion in Cartesian space, using one or more linear actuators/tracks, gimbals, robotic manipulators and suspension cables. For instance, NASA's rendezvous docking simulator that supported verification of docking and proximity operations for the Gemini and Apollo missions used suspension cables to allow translational motion and a suspended gimbal, on which the satellite was mounted, to allow rotation \cite{hatch1967dynamic}. Large-scale HIL facilities have enabled consistent advancement in rendezvous and proximity operations for the International Space Station, wherein large full-scale satellites can be tested in representative visual and dynamic conditions. Examples of this are NASA's docking overhead target simulator \cite{roe2004automated}, the JAXA RDOTS based on a Cartesian motion table, the Lockheed space operations simulation center (SOSC) \cite{milenkovic2012space}, and DLR's European Proximity Operations Simulator (EPOS-1) \cite{boge2010epos}. The latter employs 6-DOF Cartesian motion with manipulators mounted on linear tracks with appropriate pan/tilt mechanisms. While these facilities enable 6-DOF simulation, it can be difficult to isolate the dynamics of the ground environment and the testbed apparatus. The latter is important when testing actuators or force-based contact/interaction, where it can be challenging to reproduce accurate dynamic responses in a multi-body system. For accurate reproduction of dynamic responses, air-bearing setups \cite{schlotterer2010testbed, lemaster2006experimental,kolvenbach2016recent, cho20095}, drop towers and parabolic flights \cite{menon2005free,watanabe1998microgravity}, neutral buoyancy \cite{bolender2009humans,strauss2005extravehicular} and even orbital test beds \cite{kong2004spheres,miller2000spheres} may be used. Recent facilities have used kinemo-dynamic environment simulation using robotic manipulators.
As robotic manipulators form essential components of most HIL facilities, the accuracy of HIL simulations depends on how accurately a manipulator can execute its motion. In this regard, finding a solution to the robot's inverse kinematics (IK) problem is essential for close-to-real HIL emulations. Often, IK solvers need to find solutions to sparsely sampled target poses, which is a problem at the joint level, as the actuators operate in a continuous domain and require smooth transitions between joint states. Although the solutions for these discretely sampled poses might be individually correct, they are not continuous at the joint level, causing sudden jumps in robot motion. This problem can be addressed by interpolating between target poses in task space, but that still leads to decoupled solutions in joint space. A forward dynamics approach to Cartesian motion control based on a virtual robot model solves both these problems by introducing a conditioned mass matrix that enables a smooth transition between sparse targets \cite{scherzinger2019inverse}. This work proposes a novel ROS-based framework for HIL emulation using position-controlled manipulators by combining a Virtual Forward Dynamics Model (VFDM) for Cartesian motion control with an Orbital Dynamics Simulator (ODS) based on the Clohessy-Wiltshire model. The approximate dynamics of the satellites are modelled, and together with the generalized external forces acting on these satellites, a variety of motions can be realized for different orbital operations. Experiments are performed using two UR10e robots of the ZeroG Lab ROS-based HIL facility at the University of Luxembourg \cite{ZeroG_CVI}, shown in Fig. \ref{fig:ZeroG}. Two scenarios are emulated: free-floating behaviour and a free-floating interaction (a collision between two satellites). \begin{figure}[h!] \centering \adjincludegraphics[width=0.89\linewidth,trim={0cm 0cm 0cm 0cm},clip]{Figures/zeroGLab.png} \caption{The Zero-G lab facility at the University of Luxembourg.} \label{fig:ZeroG} \end{figure} This paper is organised as follows: Section \ref{sec:vfd_cm} presents the general concept of Cartesian motion control using Virtual Forward Dynamics Method based inverse kinematics. Section \ref{sec:System Setup} describes the experimental setup for the Hardware-in-the-loop (HIL) tests. Section \ref{sec:expr} presents the experiments conducted along with the results. Finally, Section \ref{sec:disc_conc} briefly discusses the findings, conclusions, and directions for future work. \section{Virtual Forward Dynamics based Cartesian Motion Control}\label{sec:vfd_cm} A forward dynamics model describes the effect of forces and torques on the motion of a body. Consider a ``virtual'' robotic manipulator with kinematics identical to the real robot, described by the following dynamic equation: \begin{equation} \tau = H(q)\ddot{q} + C(q,\dot{q}) + G(q) \label{eqn:generalized_external_forces} \end{equation} where the positive definite inertia matrix is denoted by $H(q)$, $C(q,\dot{q})$ denotes the Coriolis and centrifugal terms, $G(q)$ denotes the gravitational terms, and $\tau$ represents the torques in the joints. Finally, $\ddot{q},\dot{q},q$ are the joint space accelerations, velocities and positions, respectively. The mapping between end-effector force and joint torques for any force $f$ acting on the robot's end-effector is given by \begin{equation} \tau = J^T f \label{eqn:torque_force_static_relation} \end{equation} where $J$ is the Jacobian of the virtual model.
Assuming that the virtual model does not need to account for gravity and accelerates from rest in each control cycle, $G$ and $C$ can be neglected. Moreover, for brevity, the dependence on $q$ is dropped from the notation. Rearranging \eqref{eqn:generalized_external_forces} to solve for the acceleration, we have: \begin{equation} \ddot{q} = H^{-1}J^T f \label{eqn:simplified_fd_eqn} \end{equation} A closed-loop control is formulated around this virtual model by defining the error term as $\mathbf{e} = x_d - x $, i.e., the difference between the desired target $x_d$ and the current end-effector pose $x$ obtained via a forward kinematics (FK) routine. The control input $f$ to the forward model is expressed as a proportional-derivative (PD) control law: \begin{equation} f = K_P \mathbf{e} + K_D \dot{\mathbf{e}} \label{eqn:pd_control} \end{equation} where $\mathbf{e}$ is a vector with translational and rotational components. This closed-loop control scheme is outlined in Fig. \ref{fig:OverallBD} by either of the green dashed rectangles. Equation \eqref{eqn:simplified_fd_eqn} relates the Cartesian space forces to instantaneous joint space accelerations. The relation between Cartesian space velocities and joint space velocities, and their derivatives, is given by: \begin{equation} \dot{x} = J \dot{q} \; \; \to \;\; \ddot{x} = \dot{J}\dot{q} + J\ddot{q} \label{eqn:accel_relation} \end{equation} Considering instantaneous accelerations from rest in each cycle, the term $\dot{J}\dot{q} = 0$. Substituting \eqref{eqn:accel_relation} in \eqref{eqn:simplified_fd_eqn} yields: \begin{equation} \ddot{x} = JH^{-1}J^Tf = M^{-1}f \label{eqn:decoupled_f_n_a} \end{equation} where the quantity $JH^{-1}J^T$ is the inverse of the virtual model's operational space inertia matrix $M$. Scherzinger et al. further propose the idea of conditioning the virtual model to render the operational space inertia $M$ less dependent on joint configurations, making $M^{-1}$ a diagonal time-invariant matrix across joint configurations \cite{scherzinger2019inverse,scherzinger2020virtual}. Therefore, for the virtual model illustrated in Fig. \ref{fig:OverallBD}, the mapping between forces and accelerations in Cartesian space (using \eqref{eqn:decoupled_f_n_a}) is decoupled, providing a suitable virtual system for closed-loop Cartesian control. This virtual forward dynamics model (see green dashed rectangle(s) in Fig. \ref{fig:OverallBD}) serves as an IK solver, as the output of this model (i.e., joint positions) becomes the set-points for the robot's internal position controller. \begin{figure*}[h!] \centering \adjincludegraphics[width=0.6\linewidth]{Figures/collision_BD_modified2.png} \caption{Overall HIL system block diagram: the Orbital Dynamics Simulator imparts the motion necessary for robotic emulation of satellites in orbit based on on-the-fly force/torque and state measurements.} \label{fig:OverallBD} \end{figure*} \section{System Setup}\label{sec:System Setup} \subsection{Hardware Setup}\label{sec:hw_setup} The HIL simulation facility at the University of Luxembourg, the ZeroG Lab, was used for these experiments (see Fig. \ref{fig:ZeroG}). The ZeroG facility has two robotic rails with two mounted 6-DOF UR10e robotic arms. These robots incorporate an inbuilt flange-mounted F/T sensor to obtain force/torque measurements. The rail on which each arm is mounted provides an additional degree of freedom to help emulate complex orbital scenarios in the lab. Mock-up satellites were mounted on the flanges of both robotic arms.
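To make the control scheme of Section \ref{sec:vfd_cm} concrete, the following is a minimal sketch of the VFDM control cycle, illustrated on a toy planar 3R arm with point-mass links. It is only an illustration of the idea under stated assumptions: the kinematics, masses and gains below are placeholders, not the UR10e model or the actual controller implementation used in the Zero-G lab.
\begin{verbatim}
# Minimal VFDM-style Cartesian control sketch on a planar 3R arm.
# Illustrative only: not the UR10e model used in the experiments.
import numpy as np

L = np.array([0.5, 0.4, 0.3])  # toy link lengths [m]

def fk(q):
    """End-effector pose [x, y, yaw] of the planar chain."""
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s)), s[-1]])

def jacobian(q):
    """Geometric Jacobian (3x3) of the planar chain."""
    s, J = np.cumsum(q), np.zeros((3, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))  # dx/dq_i
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))   # dy/dq_i
        J[2, i] = 1.0                             # dyaw/dq_i
    return J

def virtual_inertia(q, masses=(0.01, 0.01, 1.0), inertias=(1e-6, 1e-6, 1.0)):
    """Joint-space inertia H of the conditioned virtual model: a heavy
    last link and nearly massless remaining links (cf. Table I),
    modelled here with point masses at the link tips."""
    s, H = np.cumsum(q), 1e-9 * np.eye(3)
    for k in range(3):
        Jp = np.zeros((2, 3))              # positional Jacobian of tip k
        for i in range(k + 1):
            Jp[0, i] = -np.sum(L[i:k + 1] * np.sin(s[i:k + 1]))
            Jp[1, i] = np.sum(L[i:k + 1] * np.cos(s[i:k + 1]))
        w = np.zeros(3); w[:k + 1] = 1.0   # planar angular Jacobian
        H += masses[k] * Jp.T @ Jp + inertias[k] * np.outer(w, w)
    return H

def control_cycle(q, x_des, dt, Kp=10.0):
    """One VFDM cycle: PD wrench -> virtual accelerations -> set-points.
    The virtual model restarts from rest, so Coriolis terms vanish."""
    f = Kp * (x_des - fk(q))                             # Eq. (4), K_D = 0
    J = jacobian(q)
    qdd = np.linalg.solve(virtual_inertia(q), J.T @ f)   # Eq. (3)
    return q + qdd * dt * dt             # integrate twice from rest

q, target = np.array([0.3, -0.6, 0.2]), np.array([0.8, 0.5, 0.1])
for _ in range(5000):                    # iterate the control cycle
    q = control_cycle(q, target, dt=0.01)
print(np.round(fk(q) - target, 3))       # residual task-space error
\end{verbatim}
Iterating the cycle drives the end-effector smoothly toward a sparsely sampled target pose, which is the behaviour the framework relies on.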
\subsection{Software/ROS setup}\label{sec:ros_setup} The robots and the rails are connected over a ROS network with multiple machines running ROS Melodic (see Fig. \ref{fig:ZeroG_HIL}). The Cartesian motion control algorithm for the robots and the Orbital Dynamics Simulator (ODS) are run as ROS nodes on two separate machines, each with a 64-bit Linux-based operating system; however, the control computer for the robots has a real-time patched Linux kernel installed. This enables operating the robots at a control frequency of 500 Hz. The ODS ROS node subscribes to the F/T sensor data and publishes satellite poses by evaluating the relative orbital motion using the Clohessy-Wiltshire model, as described in Section \ref{subsec: Orbit_dyn}. \begin{figure}[h!] \centering \adjincludegraphics[width=0.8\linewidth,trim={0cm 0cm 0cm 0cm},clip]{Figures/HIL_ROS_Network_Setup.png} \vspace{-0.1cm} \caption{Network setup of the ROS-based HIL emulation facility.} \vspace{-1.9mm} \label{fig:ZeroG_HIL} \end{figure} \subsection{Guided Orbital Simulation} \label{subsec: Orbit_dyn} In this investigation, the satellite motion is defined by time-invariant two-body dynamics following Kepler's laws of planetary motion. Reproducing the free-floating motion of a satellite relative to an inertial frame centered on the Earth is not feasible in a laboratory; the same holds for observing interactions between two satellites in the inertial frame. This challenge is overcome by observing the satellite motion from the perspective of a virtual observer in a rotating frame $\underline{R}$, as demonstrated in Fig. \ref{fig:CW_Frame}. The satellite of interest may be in an elliptical orbit, but under the assumption that the virtual observer is in a nearby circular orbit, the relative orbital motion is approximated by the simplified Clohessy-Wiltshire (CW) model \cite{clohessy1960terminal}. For a satellite position defined by the vector $\bar{\rho}$ relative to the $\underline{R}$ frame, the equations of motion that govern the satellite motion are given by \vspace{-0.5cm} \begin{align} \ddot{x} &= 3\Omega^2 x + 2\Omega\dot{y} + \dfrac{F_1}{m}\\ \ddot{y} &= - 2\Omega\dot{x} + \dfrac{F_2}{m}\\ \ddot{z} &= -\Omega^2 z + \dfrac{F_3}{m} \end{align} where $\Omega = \sqrt{\mu/a^3} $ is the orbital angular velocity of the virtual observer in an orbit with radius $a$, and $\mu$ is the standard gravitational parameter of the Earth. Note that $\bar{\rho} = [x,y,z]^\text{T}$. The quantity $m$ is the mass of the satellite, while $F_1$, $F_2$ and $F_3$ are external forces that the satellite may experience along the $\hat{x}$, $\hat{y}$, and $\hat{z}$ directions, respectively.
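As an illustration of how an ODS-style node could propagate these translational CW equations numerically, consider the following minimal sketch. It is an assumption-laden toy, with an RK4 integrator and placeholder values; the actual ODS node, its ROS interfaces and its parameters are not shown.
\begin{verbatim}
# Illustrative propagation of the translational CW equations.
# Not the actual ODS implementation; values are placeholders.
import numpy as np

MU = 3.986004418e14           # Earth gravitational parameter [m^3/s^2]
A = 6371e3 + 800e3            # observer orbit radius: 800 km LEO [m]
OMEGA = np.sqrt(MU / A**3)    # orbital angular velocity of the observer

def cw_rates(state, force, mass):
    """Time derivative of state [x, y, z, vx, vy, vz] in frame R."""
    x, y, z, vx, vy, vz = state
    ax = 3 * OMEGA**2 * x + 2 * OMEGA * vy + force[0] / mass
    ay = -2 * OMEGA * vx + force[1] / mass
    az = -OMEGA**2 * z + force[2] / mass
    return np.array([vx, vy, vz, ax, ay, az])

def propagate(state, force, mass, dt):
    """One RK4 step of the relative orbital dynamics."""
    k1 = cw_rates(state, force, mass)
    k2 = cw_rates(state + 0.5 * dt * k1, force, mass)
    k3 = cw_rates(state + 0.5 * dt * k2, force, mass)
    k4 = cw_rates(state + dt * k3, force, mass)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: free drift from rest with a 1 m radial offset, no force.
state = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(500):          # 1 s of motion at 500 Hz
    state = propagate(state, np.zeros(3), mass=1.0, dt=0.002)
print(state[:3])              # waypoint: slow drift in the R frame
\end{verbatim}
In the framework, each such propagated state becomes a pose waypoint for the Cartesian motion controller.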
The satellites' approximate attitude dynamics are modelled so that the natural gravity torque due to the Earth and any external torques influence the orientation \cite{muralidharan2022ieee}. The angular velocities, $\omega_i$, and orientation quaternions, $\quart_i$, evolve as \begin{align} \dot{\omega_1} &= \dfrac{ I_3 - I_2}{ I_1} \left(3\mathbb{C}_{12}\mathbb{C}_{13}\Omega^2 - \omega_2 \omega_3 \right) +\dfrac{T_1}{I_1}\\ \dot{\omega_2} &= \dfrac{ I_1 - I_3}{ I_2} \left(3\mathbb{C}_{11}\mathbb{C}_{13}\Omega^2 - \omega_1 \omega_3 \right) +\dfrac{T_2}{I_2}\\ \dot{\omega_3} &= \dfrac{ I_2 - I_1}{ I_3} \left(3\mathbb{C}_{11}\mathbb{C}_{12}\Omega^2 - \omega_1 \omega_2 \right) +\dfrac{T_3}{I_3} \end{align} \begin{equation} \begin{bmatrix} \dot{\quart_x} \\ \dot{\quart_y} \\ \dot{\quart_z} \\ \dot{\quart_w} \end{bmatrix} = \dfrac{1}{2} \begin{bmatrix} \quart_w & -\quart_z & \quart_y & \quart_x \\ \quart_z & \quart_w & -\quart_x & \quart_y \\ -\quart_y & \quart_x & \quart_w & \quart_z \\ -\quart_x & -\quart_y & -\quart_z & \quart_w \end{bmatrix} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ 0 \end{bmatrix} \end{equation} where $\mathbb{C}$ is the rotation matrix corresponding to the orientation of the satellite body frame relative to the rotating coordinate frame $\underline{R}$, and it identifies the effects of gravity torques along each of the body frame axes. The quantities $\omega_i$ and $\quart_i$ indicate the evolution of the satellite body relative to the inertial frame. Moreover, $I_1$, $I_2$ and $I_3$ are the moments of inertia about the principal axes of the satellite body. Finally, $T_1$, $T_2$ and $T_3$ are external torques acting on the satellite. \begin{figure}[h!] \centering \adjincludegraphics[width=0.65\linewidth,trim={6cm 6cm 0.5cm 0.2cm},clip]{Figures/CW.png} \caption{Clohessy-Wiltshire model with a virtual observer defined at the origin of the rotating coordinate frame, $\underline{R}$. Direction $\hat{x}$ points radially away from the Earth, $\hat{y}$ points in the direction of the orbit velocity and $\hat{z}$ is in the direction of the positive angular momentum vector.\label{fig:CW_Frame}} \end{figure} One or more satellites in the neighbourhood may be modelled with the dynamics described above. A variety of satellite motions can be achieved with specific initial configurations for one or more satellites, including gravity-stabilized orientation, rendezvous, collisions, interactions, and free-floating and free-flying behaviour. \subsection{Scenarios}\label{subsec:Scenarios} \subsubsection{Scenario 1 - Basic Free-floating Satellite Motion} Assume that only satellite 1 in Fig. \ref{fig:CW_Frame} is present near the observer, which is in a low Earth orbit (LEO) at an altitude of 800 km above the surface of the Earth. The motion of satellite 1 is expressed relative to the rotating coordinate frame $\underline{R}$. For convenience, the initial velocity of the satellite relative to the frame $\underline{R}$ is considered zero. Additionally, it is assumed that no net generalized forces expressed in the frame $\underline{R}$ are acting on the satellite. Any subsequent change in the satellite states in terms of velocities or forces then results in free-floating motion with respect to the frame $\underline{R}$. \subsubsection{Scenario 2 - Free-Floating Interaction (Collision)} An interaction between two satellites is emulated by considering the simultaneous existence of satellites 1 and 2 in Fig. \ref{fig:CW_Frame}. In this scenario, the rotating observer is considered to be in a circular low Earth orbit (LEO) at an altitude of 800 km above the surface of the Earth. The density of space debris is exceptionally high at this altitude \cite{liou2011active}.
The satellites are placed sufficiently close to each other. Opposing initial velocities are deliberately imparted to generate a collision. External forces are absent on both satellite mock-ups at the initial time. A sponge satellite was mounted on one of the robots' flanges to ensure safety during impact; an inelastic collision is achieved, with the sponge damping the kinetic energy. \section{Experiments and Results}\label{sec:expr} Appropriate frames and transformations are defined to conduct the experiments. Fig. \ref{fig:Lab_view_frames} describes the setup for the scenarios discussed prior. For scenario 1, only the ceiling-mounted robot is used. For scenario 2, both robots are utilized. The common characteristics of both these experiments are mentioned briefly hereafter. Each robot had a mock-up satellite mounted at the flange. The centre-of-mass (COM) of the mock-up is selected as the tool-centre-point (TCP) of the robot to ensure that the robot TCP motion expressed in frame $\underline{R}$ coincides with the satellite motion expressed in $\underline{R}$. The robot payload is calibrated according to the weights of the mock-up satellites. The inertial parameters of the mock-ups are specified in the ODS node. For convenience, the origin of the $\underline{R}$ frame is placed within the limits of the ZeroG lab space, as indicated in Fig. \ref{fig:Lab_view_frames}. \begin{figure}[h!] \centering \adjincludegraphics[width=0.8\linewidth]{Figures/Lab_view_frame.png} \vspace{-4mm} \caption{Schematic of the laboratory setup with associated frames.} \label{fig:Lab_view_frames} \end{figure} The F/T sensor mounted on the robot flange measures the forces and torques applied during contact (by the human operator in the first scenario and by the mock-up collision in the second scenario). Force transformations are performed to obtain the generalized forces expressed in the $\underline{R}$ frame. These generalized forces are delivered to the ODS ROS node along with the initial conditions of the mock-up (the robot's TCP position and orientation) to assign the initial states of the satellites in the ODS environment based on the ground truth. Other important parameters, such as initial translational and rotational velocities and other scaling factors, are also set up. The measured torque values, including noise from the F/T sensors, would drive the satellite orientations beyond the range of motion that the robots can deliver before reaching the safety stop limits. The torque values are thus scaled down by a factor of 2000. The ODS uses the force/torque information along with the current state of the mock-up to generate motion waypoints during the flight, similar to the approach exercised in \cite{muralidharan2022_hitl_iac}. These waypoints serve as input set-points to the Cartesian motion controller, and the robot TCP moves along a trajectory identical to that of a floating object in space, including the response to applied external forces. Table \ref{table:param_table} lists the choice of controller gains for the PD controller in \eqref{eqn:pd_control} and the inertia matrix $H$ in \eqref{eqn:simplified_fd_eqn}, where $m_e$ and $I_e$ are the mass and inertia of the last link of the virtual model, while $m_l$ and $I_l$ are the mass and inertia values assigned to the remaining links in the kinematic chain. The dynamics of the satellites are calculated based on a 4U CubeSat (2x2 configuration) with 1 kg mass and uniform mass density. \begin{table}[h!]
\caption{Controller and Virtual Model Parameters} \label{table:param_table} \centering \begin{tabular}{ C{3.5cm} C{4.0cm} } \hline \hline $\quad$ $P_{x,y,z},\; D_{x,y,z}$ & 10.0, 0.0\\ $\quad$ $P_{R_x,R_y,R_z},\; D_{R_x,R_y,R_z}$ & 1.0, 0.0\\ $m_e, m_l$ & $ 1\;\mathrm{kg},\; 0.01\;\mathrm{kg} $\\ $I_e$ & $\mathrm{diag}[1, 1, 1] \; \mathrm{kg\,m^2}$\\ $I_l$ & $10^{-6}I_e$ \\ \hline \end{tabular} \vspace{-1mm} \end{table} \subsection{Basic Free-Floating Satellite Motion} In this experiment, the mock-up satellite experiences intermittent external forces exerted by the operator. These forces alter the satellite flight path; the response is illustrated in Fig. \ref{fig:Free_floating} with a sequence of pictures demonstrating the free-floating motion of the mock-up in response to an external force. The pictures, from left to right, are snapshots of the position and orientation of the satellite at different times. The time history of the force inputs imparted by the operator along the different axes during the experiment is presented in Fig. \ref{fig:free_floating_profile} (top). The changes in velocity and position resulting from these external forces are illustrated in the center and bottom plots of Fig. \ref{fig:free_floating_profile}, respectively. \begin{figure}[h!] \centering \adjincludegraphics[width=0.8\linewidth,trim={0cm 0cm 0cm 0cm},clip]{Figures/Interaction.png} \caption{Snapshots of satellite states at different times under free-floating behavior with externally applied forces by human touch.} \label{fig:Free_floating} \vspace{-2mm} \end{figure} \begin{figure}[h!] \vspace{-2mm} \centering \adjincludegraphics[width=0.75\linewidth,trim={0.2cm 0.9cm 1.6cm 1.0cm}, clip]{Figures/Col5_single_profile.png} \caption{Free-floating behavior of a satellite under externally applied forces by human touch. The applied force (top) and the resulting changes in velocity (center) and position (bottom) are illustrated.} \label{fig:free_floating_profile} \vspace{-2mm} \end{figure} \subsection{Free-Floating Interaction (Collision)} The simultaneous motion of two satellites caused by the force transfer during their collision is demonstrated in this experiment. Fig. \ref{fig:Lab_view_collision} shows the assembly of mock-up satellites 1 and 2 at the moment of impact during the free-floating collision experiment. The satellites, however, start at initial states sufficiently distant from each other. An initial velocity is specified for both mock-ups, causing them to move toward each other. The ODS generates the necessary waypoints for the Cartesian motion controller once the initial velocity is specified. Due to the setup, an inelastic collision occurs between the satellites during their free-floating motion. Such an impact ensures safety, at the cost that the non-conservative forces arising during these experiments are not precisely reproducible. Nevertheless, the satellites' overall behaviour depends only on the force values measured at the sensor on the tooltip. The sequence of satellite motions throughout the experiment can be followed in Fig. \ref{fig:collision_process}, where sequence 1 describes the initial states, sequence 2 is the collision, and the rest characterize the motion post-collision. Sequence 6 shows the final states of the mock-ups before the ``safety limit stop'' enforced by the robot's internal controller to avoid self-collision. \begin{figure}[h!]
\centering \adjincludegraphics[width=0.70\linewidth,trim={3cm 1cm 3cm 1cm},clip]{Figures/Lab_view_collision_modified.png} \caption{Satellite mock-ups in the ZeroG lab during the impact/collision.} \label{fig:Lab_view_collision} \vspace{-2mm} \end{figure} \begin{figure}[h!] \centering \adjincludegraphics[width=0.8\linewidth,trim={0.0cm 0cm 0cm 0cm},clip]{Figures/Collision_process.png} \caption{Sequence of satellite states during the process of impact. (1) Before impact, (2) During impact, (3)-(6) After impact.} \label{fig:collision_process} \end{figure} A force transfer occurs during the impact, resulting in an exchange of equal but opposite forces on the satellites, as evident in Fig. \ref{fig:Force_impact}. These forces are expressed in the $\underline{R}$ observer frame. The highest impact occurred along the $x$-axis. As the robot motion is not strictly decoupled, lighter impact forces were detected along the other axes. The evolution of the velocity and position of satellite 2 is plotted in Figs. \ref{fig:velocity_actual_vs_desired_sat2} and \ref{fig:actual_vs_desired} (bottom), respectively, corresponding to the force profile in Fig. \ref{fig:Force_impact} (bottom). Post impact, the direction of motion of satellite 2 (and likewise satellite 1) changes, and consequently so do the velocity and position. The ODS determines the change in motion in response to the impact generated during the collision. \begin{figure}[h!] \centering \adjincludegraphics[width=0.75\linewidth,trim={0cm 0cm 0cm 0cm},clip]{Figures/Col4_force_impact.png} \caption{Force transfer between two satellites (top: satellite 1; bottom: satellite 2) during orbital interactions. Force values are measured along the CW directions.} \label{fig:Force_impact} \end{figure} The consistency between the simulated and HIL velocity profiles (i.e., desired and actual), measured for satellite 2, is evident in Fig. \ref{fig:velocity_actual_vs_desired_sat2}. Of course, the actual readings have inherent noise. A similar agreement in velocity profiles between simulation and HIL tests, demonstrated with a torque-controlled robot, is available in the literature \cite{aghili2009scaling}. The motion tracking accuracy of the VFDM-based Cartesian controller is evident from both Figs. \ref{fig:actual_vs_desired} and \ref{fig:error}. Before and after the impact, the satellite mock-ups track the motion generated by the ODS with errors of less than 0.01 m during the entire experiment. \begin{figure}[h!] \centering \adjincludegraphics[width=0.8\linewidth,trim={0.2cm 0.0cm 0cm 0cm},clip]{Figures/Vel_2_actual_vs_desired.png} \vspace{-2mm} \caption{Motion tracking accuracy. Comparison of the desired satellite velocities from the ODS and the actual velocity recorded by the HIL system for satellite 2.} \label{fig:velocity_actual_vs_desired_sat2} \end{figure} \begin{figure}[h!] \centering \adjincludegraphics[width=0.75\linewidth,trim={0.5cm 0.4cm 1.4cm 0.9cm},clip]{Figures/Actual_vs_desired.png} \caption{Desired satellite states from the ODS and actual path executed by the robot for satellite 1 (top) and satellite 2 (bottom). Motion of satellite 1 is restricted beyond the safety limit stop.} \label{fig:actual_vs_desired} \end{figure} \begin{figure}[h!] \centering \adjincludegraphics[width=0.75\linewidth,trim={0cm 0.0cm 0.0cm 0.0cm},clip]{Figures/Error1.png} \caption{Isochronous error in satellite states between the actual executed trajectory and the desired path.
Motion of satellite 1 is restricted beyond the safety limit stop.} \label{fig:error} \vspace{-3mm} \end{figure} \section{Discussion and Conclusion}\label{sec:disc_conc} This work presented a novel framework for Hardware-in-the-Loop emulation of on-orbit applications in micro-gravity. The approach combines Cartesian motion control based on a virtual forward dynamic model with an Orbital Dynamics Simulator implementing the Clohessy-Wiltshire model. Two experiments were conducted: one emulating the basic free-floating motion of a satellite under externally applied forces/torques, and the other emulating a collision during free-floating motion. Changes in the velocity and position profiles generated by the ODS in response to the force/torque inputs measured on the fly establish reactive free-floating motion. The low position-tracking error between the ODS-generated profile and the mock-up motion validates the feasibility of the VFDM for HIL applications. An appropriate choice of controller gains can further reduce the tracking error. The close match between the ODS and robot-driven mock-up velocities resembles the behaviour observed in similar experiments performed using a torque-controlled robot. The latter indicates that the proposed approach could provide an alternative to HIL test beds that use torque-controlled manipulators, which could be especially useful given that position-controlled robots are lower in cost than torque-controlled ones, albeit more limited in their capabilities. Precise sensing of the force and torque acting on the satellite body, rather than through moment conversion and frame transformations, could further enhance the fidelity of these HIL tests. For future research, it is intended to explore this idea further by incorporating appropriate sensing modalities on the satellite. Furthermore, the HIL concept presented will be extended and validated for other on-orbit operations such as rendezvous in a constrained environment, controlled-contact space debris capture/removal and other in-orbit assembly operations. \bibliographystyle{ieeetr}
\section{Introduction} After initial proposals of silicon-based quantum computer architectures \cite{KaneN1998,VrijenPRA2000,HollenbergPRB2004}, and subsequent efforts \cite{OBrienPRB2001,JarrydN2013,DehollainPRL2014,GonzalezNL2014}, atomically precise phosphorus donor placement in silicon became achievable \cite{FuechsleNN2012,BuchNC2013,WyrickAFM2019,WangCP2020,ZwanenburgRoMP2013,AlipourJVS2022}. This development finally made it possible to demonstrate a quantum simulator of the extended Hubbard model based on a 2-D donor array \cite{Wang2021}. In parallel to these technological and experimental advances, the development of a theoretical description of donor levels \cite{KohnPR1955} continues, with the effective mass theory (EMT) \cite{KlymenkoJPCM2014,SaraivaJPCM2015,GamblePRB2015} and atomistic tight-binding \cite{MencheroPRB1999,MartinsPRB2005,KlimeckCMiES2002,RahmanNano2011,TankasalaPRB2022} approaches giving results well conforming to the experimental findings. Recently, EMT allowed for a detailed calculation of excited two-electron states of a donor pair and their optical spectra \cite{WuPRB2021}. Apart from the correct prediction of donor orbital energies \cite{JagannathPRB1981,MayurPRB1993}, detailed modeling of wave functions has been achieved \cite{SalfiNC2014,GamblePRB2015}, which made possible the calculation of parameters \cite{KoillerPRL2001,XuPRB2005,WellardPRB2005,QiuziPRB2010,LePRB2017a,DuskoNPJQI2018} such as hopping and exchange integrals, on-site energy, and Coulomb repulsion, i.e., matrix elements of one- and two-body operators. These together allow for a two-way correspondence between donor chains or arrays and lattice models of the Fermi-Hubbard type \cite{DuskoNPJQI2018}. With this, an interesting path of observing and describing the onset of collective effects in small many-body systems opens \cite{TownsendPRB2021}. The most basic parameter of such models is the hopping integral (tunnel coupling) $t$. It is also the one that is not easily accessible experimentally, which makes its modeling of great importance. The tunnel coupling can be found as half of the splitting between the bonding and antibonding states of a donor pair. The most accurate but also computationally challenging way of evaluating it would be to directly calculate the two-donor states in a multi-orbital model, separately for every donor displacement. For a more feasible calculation, the H\"uckel tight-binding theory \cite{AshcroftBook1976} can be successfully utilized to determine $t$ based on the known single donor ground state and $1/r$ donor Coulomb potential \cite{LePRB2017a}. Here, we propose another method and evaluate the hopping integrals utilizing Bardeen's transfer Hamiltonian theory, originally derived for the problem of electron tunneling between many-body eigenstates of two regions separated by a potential barrier \cite{BardeenPRL1961}. It provides an expression for the matrix element involving only the initial and final eigenstates from the two regions. Most importantly, no specific knowledge about the potential barrier is needed except that it is wide enough to fulfill the assumptions of the theory. This formulation made it particularly suitable for calculating tunneling currents in various types of junctions. 
After adaptation \cite{TersoffPRL1983,TersoffPRB1985} and further development \cite{ChenPRB1990}, it became a standard calculation method in scanning tunneling microscopy (STM), both for theoretical simulation \cite{DrakovaRPP2001} and for the translation of the measured current into atomistically resolved scans \cite{TsukadaASS1994}, i.e., for the interpretation of STM data. Using this method to calculate hopping between donors, we take advantage of the fact that it relies only on wave functions. Thanks to this, it may be applied, based on known or postulated wave functions, \newtext{to various systems, e.g., arrays of quantum dots, other defect systems, or systems of nanoscale thickness with dielectric mismatch \cite{RyuNanoscale2013}: a single solution of the one-site problem is followed by a computationally cheap application of the method to pairs or other arrangements of sites. The method can thus be used} without explicitly knowing or expressing the potential of the donor (site) and the surrounding lattice. \newtext{In the case of donors, where the ionic $1/r$ potential has to be augmented with a central cell correction, a purely wave-function-based method turns out to have practical advantages. We show that for the standard method applied to the given system, there are cases in which a pure $1/r$ potential may give falsely vanishing results, while the explicit inclusion of the correction potential in the integration involved turns out to be numerically troublesome due to its small spatial extent. In contrast, the integration of wave functions that intrinsically contain central-cell-correction effects in the proposed method is not so problematic.} \newtext{For donor arrays with lattice spacings below $\sim6$~nm, the need to consider at least a second orbital in Hubbard-like models arises due to orbital mixing \cite{SaraivaJPCM2015,GamblePRB2015,TankasalaPRB2022}. This may allow studying phenomena like the orbital-selective Mott transition \cite{LiebschPRL2005}. While full solutions of the two-donor problem \cite{GamblePRB2015,TankasalaPRB2022} are available, in this range of distances the splitting of the two lowest eigenstates from such a solution loses its meaning as a hopping integral. Only the lowest-orbital ``constituent'' hoppings can be found in the literature \cite{LePRB2017a}, and those among different orbitals cannot be extracted from a direct solution of the problem.} Thus, we calculate here all nonvanishing hopping integrals between the six ground-state orbitals at each of the donors, i.e., both intra- and interorbital tunnel couplings. In the case of vanishing integrals, we give a symmetry-based justification. For comparison, we also use the standard method. In most cases, the results are in good agreement, which we consider a mutual confirmation since both methods are intrinsically approximate. Next, we use all these integrals to form a two-donor Hamiltonian and find its eigenstates. These are in good agreement with a full two-donor EMT calculation \cite{GamblePRB2015}. We characterize the eigenstates, showing their composition in terms of valleys and single-donor orbitals. \newtext{As the original derivation of Bardeen's formula for the matrix element assumes non-overlapping potentials, which is not fulfilled here, we present a derivation that does not rely on this assumption.
Moreover, while rederiving the expression, we find a correction that extends the range of its applicability at short distances and hence for significant wave-function overlaps.} Finally, we evaluate the computational cost of both methods, which turns out to be similar. Based on this and on the conceptual and practical advantages of the proposed method, we find it to be prospective for the calculation of hopping integrals based on known or postulated wave functions in various physical systems. In the following, we first introduce the system and its theoretical model in Sec.~\ref{sec:system}, then present and discuss the general results in Sec.~\ref{sec:results}. \newtext{Next, in Sec.~\ref{sec:corr}, we derive a correction to Bardeen's formula and show its results.} Finally, we conclude the study in Sec.~\ref{sec:conclusions}. \newtext{In Appendix~\ref{sec:huckel}, we describe the standard method used by us for comparison.} More technical calculation details are given in Appendix~\ref{sec:details}, while in Appendix~\ref{sec:calctime}, we estimate the computational cost of the calculation. \section{System and theoretical model}\label{sec:system} \begin{figure}[tb] \includegraphics[width=0.9\linewidth]{scheme.pdf} % \caption{\label{fig:scheme}(Color online) Schemes. (a) A schematic view of the system with the integration plane drawn. (b) Schematic view of donor levels with relevant intra- (solid blue) and interorbital (dashed red) tunnel couplings marked with arrows. Note that energy splittings in (b) are not to scale, i.e., vertical distances do not reflect real splittings; in particular, artificial spacing in degenerate $T_2$ and $E$ multiplets is introduced to show the number of levels.} % \end{figure} We deal with a system composed of two substitutional phosphorus donors in bulk silicon separated by displacement $\bm{d}$, so located at a distance $d=\abs{\bm{d}}$ apart. In \subfigref{scheme}{a}, we schematically present the system. Additionally, we show the integration plane and its exemplary element, which will be helpful later. Our aim is to calculate tunnel couplings between various pairs of orbitals localized at two different donors. The ground-state manifold contains six states named after their symmetry: $A_1$, three degenerate $\cramped{T_2^{(x)}}$, $T_2^{(y)}$, $T_2^{(z)}$, and two degenerate $E^{(xy)}$, $E^{(z)}$. Note that the labels for $E$ orbitals are shorthand, as they transform as $x^2-y^2$, $2z^2-x^2-y^2$, respectively. We denote the hopping integrals among pairs of orbitals by $t$ with various indices. In the subscript, a single label means a same-type orbital hopping, while two labels are given for a matrix element between different orbital types. Orbital indices for degenerate multiplets are given in the superscript, where $i,j\in\LR{x,y,z}$ and $\alpha,\beta\in\LR{xy,z}$ relate to the $T_2$ and $E$ orbitals, respectively. Thus, for instance, $t_{AT}^{(x)} = \matrixel{A_1}{H_{\mathrm{t}}}{\,T_2^{(x)}}$, and $t_{TE}^{(x,z)} = \matrixel{T_2^{(x)}}{H_{\mathrm{t}}}{\,E^{(z)}}$, where $H_{\mathrm{t}}$ is the tunnel coupling Hamiltonian. The ladder of orbital levels is shown in \subfigref{scheme}{b} with all unique variants of calculated couplings schematically marked with arrows and labels, which can make the naming scheme easier to follow. 
The system is described by the Hamiltonian \begin{equation}\label{eq:hamiltonian} H = H_{\mathrm{a}} + H_{\mathrm{b}} + H_{\mathrm{t}}, \end{equation} where $H_{\mathrm{a}}$ and $H_{\mathrm{b}}$ are the energies of the isolated donors (identical except for their different positions), while the already announced tunnel (or transfer) Hamiltonian $H_{\mathrm{t}}$ is the part responsible for the coupling between donors. According to Bardeen's theory, matrix elements of $H_{\mathrm{t}}$ can be calculated by integrating the transition probability current operator \begin{equation}\label{eq:trans-prob-curr} J_{ij}\mkern-1mu\lr{\bm{r}} = \psi_i^{*}\lr*{\bm{r}}\frac{\hbar^2\nabla}{2m}\psi_j\lr*{\bm{r}} - \psi_j\lr*{\bm{r}}\frac{\hbar^2\nabla}{2m}\psi_i^{*}\lr*{\bm{r}} \end{equation} over an arbitrary surface $S$ separating the sites, provided the surface lies in the potential-barrier region [see \subfigref{scheme}{a}], \begin{align}\label{eq:bardeen} \matrixel{i}{H_{\mathrm{t}}}{j} ={}& \frac{\hbar^2}{2} \int_{S} \mathrm{d}\bm{S} \, \Lr*{ \psi_i^{*}\lr*{\bm{r}}\frac{\nabla}{m}\psi_j\lr*{\bm{r}} - \psi_j\lr*{\bm{r}}\frac{\nabla}{m}\psi_i^{*}\lr*{\bm{r}} } \nonumber\\ &- \lr*{E_i-E_j} \, \int_{V_+} \mathrm{d}^3\bm{r} \, \psi_i^{*}\lr*{\bm{r}}\psi_j\lr*{\bm{r}}, \end{align} where $\mathrm{d}\bm{S}$ is the surface element, $\psi_i\lr*{\bm{r}}=\braket{\bm{r}}{i}$ is the wave function of the $i$-th donor level, and $E_i$ is its energy. The second term on the right-hand side is a correction for the case of nondegenerate levels \cite{ReittuAJoP1995}, where $V_+$ denotes the volume on one side of $S$. In \eqref{bardeen}, we deliberately keep the mass $m$ inside the expression, as in the silicon matrix we need to take into account the anisotropy and valley dependence of the effective mass operator. At this point, we may notice that by choosing $S$ to be a plane perpendicular to the donor displacement, $S\perp\bm{d}$, we can simplify the calculation of the first term \begin{multline}\label{eq:gradtodx} \int_{S} \mathrm{d}\bm{S} \, \Lr*{ \psi_i^{*}\lr*{\bm{r}}\frac{\nabla}{m}\psi_j\lr*{\bm{r}} - \psi_j\lr*{\bm{r}}\frac{\nabla}{m}\psi_i^{*}\lr*{\bm{r}} } \\ \stackrel{ S\perp\bm{d}}{=} \int_S \mathrm{d}s \, \Lr*{ \psi_i^{*}\lr*{\bm{r}}\frac{1}{m} \frac{\partial\psi_j\lr*{\bm{r}}}{\partial x} - \psi_j\lr*{\bm{r}}\frac{1}{m} \frac{\partial\psi_i^{*}\lr*{\bm{r}} }{\partial x} }, \end{multline} where the coordinate axes are chosen such that $x$ is along the displacement, $\widehat{\bm{x}}\parallel\bm{d}$, and $\mathrm{d}s = \mathrm{d}y\mathrm{d}z$. \newtext{The general results in the next section are obtained with \eqref{bardeen} and are in satisfactory agreement with the standard method. However, in Sec.~\ref{sec:corr}, we show that the formula can be further corrected to be more accurate at short distances (for high wave-function overlaps).} To evaluate the above integrals, we use the orbital wave functions provided in \oncite{GamblePRB2015}. They are calculated within the multivalley effective mass theory \cite{KohnPR1955,LuttingerPR1995,ShindoJPSJ1976} with a symmetry-adapted central cell correction \cite{CastnerPRB2009,GreenmanPRB2013} and were shown to reproduce orbital energies from the experiment \cite{JagannathPRB1981,MayurPRB1993} and lead to position-dependent results quantitatively consistent with those of the atomistic tight-binding method \cite{MencheroPRB1999,MartinsPRB2005,KlimeckCMiES2002}. For the latter, in turn, the agreement with measured wave functions was demonstrated \cite{SalfiNC2014}.
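Before turning to these wave functions, the structure of \eqref{bardeen} can be illustrated on a minimal one-dimensional toy model: two attractive $\delta$-wells at $x=\pm d/2$, whose bound states $\psi_{1,2}(x)=\sqrt{\kappa}\,e^{-\kappa\abs{x \mp d/2}}$ have the known asymptotic half-splitting $t\simeq(\hbar^2\kappa^2/m)\,e^{-\kappa d}$. A short Python sketch (in units $\hbar=m=1$; the parameter values are our own illustrative choice) evaluates the surface term of \eqref{bardeen}, which in one dimension reduces to a single point, and recovers this result:
\begin{verbatim}
import numpy as np

kappa, d = 1.0, 6.0       # decay constant and well separation (illustrative)
x0 = 0.0                  # dividing "surface" at the midpoint
h = 1e-5                  # step for numerical differentiation

def psi(x, xc):
    # normalized bound state of a 1D delta well centered at xc
    return np.sqrt(kappa) * np.exp(-kappa * np.abs(x - xc))

def dpsi(x, xc):
    return (psi(x + h, xc) - psi(x - h, xc)) / (2 * h)

# Bardeen matrix element: (hbar^2/2m) [psi1 psi2' - psi2 psi1'] at x0
t_bardeen = 0.5 * (psi(x0, -d/2) * dpsi(x0, d/2)
                   - psi(x0, d/2) * dpsi(x0, -d/2))

# both print ~ kappa^2 exp(-kappa d), the known asymptotic half-splitting
print(t_bardeen, kappa**2 * np.exp(-kappa * d))

# overlap factor R = 1 - rho_1 - rho_2 of Sec. IV (here R = 1 - e^{-kappa d})
R = 1.0 - np.exp(-kappa * d)
print(t_bardeen / R)      # overlap-corrected hopping
\end{verbatim}
In the full three-dimensional problem, the same structure is evaluated over the plane $S$ with the anisotropic, valley-dependent mass, using the donor wave functions discussed above.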
As argued above, the wave functions of \oncite{GamblePRB2015} provide a good basis for reliable calculations. The wave function $\psi\lr{\bm{r}}$ is expressed as \begin{equation} \psi\lr{\bm{r}} = \sum_{\mu} F_{\mu}\lr{\bm{r}} \, \phi_{\mu}\lr{\bm{r}}, \end{equation} where $\mu\in\LR{-x,+x,-y,+y,-z,+z}$ runs over the six $\bm{k}$-space valleys at $\bm{k}_{\mu} = 0.84\times(2 \pi /a) \,\widehat{\mu}$. Here, $\widehat{\mu}$ are the corresponding Cartesian unit vectors, $\widehat{\mu}\in\LR{[100],~[010],~[001]}$, $a=0.54307$~nm is the Si lattice constant, and \begin{equation} \phi_{\mu}\lr{\bm{r}} = u_{\bm{k}_{\mu}}\lr{\bm{r}} \, \mr{e}^{i\bm{k}_\mu\!\cdot\bm{r}} = \mr{e}^{i\bm{k}_\mu\!\cdot\bm{r}} \sum_{\bm{G}} A^{(\mu)}_{\bm{G}} \mr{e}^{i\bm{G}\cdot\bm{r}} \end{equation} are the Bloch functions for the respective valley minima with the periodic part $u_{\bm{k}_{\mu}}\lr{\bm{r}}$ expanded in plane waves, with $A^{(\mu)}_{\bm{G}}$ being the coefficients and $\bm{G}$ the reciprocal lattice vectors. The Bloch functions are weighted by slowly varying envelopes $F_{\mu}\lr{\bm{r}}$, \begin{equation} F_{\mu}\lr{\bm{r}} = \sum_i B_{\mu,i} F_{\mu,i}\lr{\bm{r}} \end{equation} that are in turn expanded in a basis of Gaussian envelopes (identical for all valleys upon coordinate permutation) with coefficients $B_{\mu,i}$. Details on the calculation of the wave functions and data allowing for their reproduction can be found in \oncite{GamblePRB2015}. Let us now focus on the derivative of $\psi\lr{\bm{r}}$, \begin{multline} \!\!\!\!\!\lr*{ \frac{1}{m}\frac{\partial}{\partial x} } \,\psi\lr*{\bm{r}}\\ ~~= \sum_{\mu} \Lr*{ \phi_{\mu}\lr{\bm{r}}\, \lr*{ \frac{1}{m}\frac{\partial}{\partial x} }\, F_{\mu}\lr{\bm{r}} + F_{\mu}\lr{\bm{r}} \, \lr*{ \frac{1}{m}\frac{\partial}{\partial x} }\, \phi_{\mu}\lr{\bm{r}} }, \end{multline} where the coordinate $x$ defined along the displacement may be expressed by $x_i$ denoting the coordinates along the [100], [010], and [001] crystallographic axes, $x = \sum_i c_i x_i$. At this point, the reason for keeping $m$ inside the expression becomes evident: under the sum over valleys, the operator composed of the derivative and the inverse mass acquires a valley-dependent form, acting on the valley-$\mu$ component as \begin{equation} \lr*{\frac{1}{m}\frac{\partial}{\partial x}}_{\!\mu} = \sum_i c_i \, \frac{1}{m_{i,\mu}} \, \frac{\partial}{\partial x_i}. \end{equation} In the calculation, we use the following values for the effective mass (in units of the free-electron mass): $m_\perp = 0.191$ and $m_\parallel = 0.916$, where $m_{i,\mu} = m_\parallel$ applies if $\widehat{x}_i = \widehat{\mu}$, i.e., valley $\mu$ is oriented along $x_i$, and $m_{i,\mu} = m_\perp$ otherwise. For donor displacements in directions where valley interference leads to oscillatory behavior \cite{KoillerPRL2001}, we observe a significant variation of the results with respect to the location of the integration plane. To circumvent this problem, we introduce averaging over the position of $S$.
This may be done in a mathematically elegant manner by transforming \eqref{gradtodx} into a volume integral with a Dirac delta constraint \begin{equation}\label{eq:dstod3r} \int_S \mathrm{d}S \, f\lr*{\bm{r}} = \int_V \mathrm{d}^3\bm{r} \, \delta\lr*{x-x_S}\, f\lr*{\bm{r}}, \end{equation} where $f\lr*{\bm{r}}$ is the integrand, $x_S = d/2$ is the nominal position of $S$, and $V$ is the volume of the system (calculation box), and then replacing $\delta\lr*{x}$ with its broadened representation with a finite support, like \cite{PazPSSB2006} \begin{align}\label{eq:deltabroad} {\delta}(x) &\simeq \frac{15}{16\sigma}\Lr*{1-\lr*{ \frac{x}{\sigma} }^2 }^2 \, \theta\lr*{ \sigma - \abs{x} } \nonumber\\ &\equiv \widetilde{\delta}(x) \, \theta\lr*{ \sigma - \abs{x} }. \end{align} Here, $\sigma$ is the broadening and $\theta(x)$ is the Heaviside step function. Substituting \eqref{deltabroad} into \eqref{dstod3r}, we obtain \begin{multline} \int_V \mathrm{d}^3\bm{r} \, f\lr*{\bm{r}}\,\delta\lr*{x-x_S} \simeq \iint_{-\infty}^{\infty} \mathrm{d}y\mathrm{d}z \int_{x_S-\sigma}^{x_S+\sigma} \mathrm{d}x \, f\lr*{\bm{r}}\,\widetilde{\delta}\lr*{x-x_S}, \end{multline} which is the integral we evaluate numerically in a finite box. Details are given in Appendix~\ref{sec:details}. While it is a triple integral, the smoothly decaying $\widetilde{\delta}$ constraint with $\sigma$ not exceeding a few lattice constants makes the increase of the numerical cost insignificant. The estimation of the computational cost of the calculation is described in Appendix~\ref{sec:calctime}. We find the time needed for satisfactory convergence of the results to be comparable to that of the standard method. \section{General results and discussion}\label{sec:results} In this section, we present the evaluated hopping integrals between pairs of identical (Sec.~\ref{sec:intra}) and different (Sec.~\ref{sec:inter}) orbitals on the two sites for a range of distances along relevant crystallographic directions, discuss in detail an exemplary case where the standard method encounters problems related to the central cell correction (Sec.~\ref{sec:pxpy}), and finally present and characterize the calculated eigenstates of donor pairs (Sec.~\ref{sec:full}). \subsection{Intraorbital hopping}\label{sec:intra} \begin{figure}[t] \includegraphics[width=1.0\linewidth]{hopsame.pdf} % \caption{\label{fig:hopsame}(Color online) Intraorbital hopping. Same-orbital hopping integrals (full symbols; {\color{cbblue}$\bullet$}) calculated as a function of donor distance along the [100] (left column) and [110] (right column) crystallographic directions. Each row of panels is for one of the six ground-state orbitals, as marked. For comparison, empty squares ({\color{cblgreen!90!black}$\bm{\square}$}) mark results obtained using the H\"uckel theory; in the first row, empty triangles ({\color{cblred!90!black}$\bm{\triangle}$}) show results from \oncite{LePRB2017a}. Lines are to guide the eye.} % \end{figure} We begin the presentation of the results by showing in \figref{hopsame} the calculated hopping integrals between identical orbitals at the two donors. The values are shown as a function of the displacement: along [100] on the left, and along [110] on the right. To verify the accuracy of our calculation, we also plot with empty symbols the results of a standard computation employing the H\"uckel (tight-binding) theory \cite{AshcroftBook1976}.
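Returning briefly to the numerical implementation, the plane averaging of \eqref{deltabroad} reduces in practice to a weighted one-dimensional quadrature around $x_S$. A minimal Python sketch (with a hypothetical oscillatory integrand standing in for the actual one) reads:
\begin{verbatim}
import numpy as np

def delta_tilde(x, sigma):
    # polynomial broadened delta with finite support, cf. Eq. (deltabroad)
    out = (15.0 / (16.0 * sigma)) * (1.0 - (x / sigma)**2)**2
    return np.where(np.abs(x) < sigma, out, 0.0)

sigma, xS = 1.0, 3.0
x = np.linspace(xS - sigma, xS + sigma, 2001)

# sanity check: the broadened delta carries unit weight
print(np.trapz(delta_tilde(x - xS, sigma), x))           # ~1.0

# plane averaging of a toy oscillatory integrand f(x) around xS
f = np.cos(4.0 * x)
print(np.trapz(f * delta_tilde(x - xS, sigma), x))       # smoothed f near xS
\end{verbatim}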
\newtext{We refer to the H\"uckel approach as the ``standard'' method throughout the paper and describe it in Appendix~\ref{sec:huckel}.} In the top row, where hopping in the ground $A_1$ orbital is considered, we also show with triangles the available data from the literature \cite{LePRB2017a} (we acquired the values by digitizing the linear-scale plot from the original paper, which could lead to some degree of inaccuracy). We find the results to be in overall very good agreement with these datasets. In the following panels, where tunneling between higher-energy orbitals is considered, we can compare only to the H\"uckel theory results obtained by us. Also here, the agreement is satisfactory. In particular, the oscillations in the hoppings along the [110] displacement are well reproduced. Apart from the irregular oscillation arising from valley interference and present in $t_A$ and $t_E^{(z)}$, we also find a regular one with a longer period in $t_T^{(x)}$, $t_T^{(y)}$, and $t_E^{(xy)}$. It results from the in-plane excited-state character (i.e., the presence of a node) of the wave-function envelopes in these orbitals. The main discrepancies between the two methods are that the hoppings calculated using Bardeen's theory are generally slightly lower at short distances while remaining qualitatively similar, and that only the distance dependence of $t_{T}^{(z)}$ differs somewhat. At this point, we need to underline that both methods are approximate, and there are currently no experimental data we could compare to. \newtext{On the one hand, Bardeen's theory originally assumes a low-overlap system. On the other, it involves only the wave function and thus is free of the problems that arise when the potential is used directly. Moreover, in Sec.~\ref{sec:corr}, we derive a correction to Bardeen's formula that enhances its accuracy at short distances. The main discrepancy noticed above is then removed.} In the H\"uckel tight-binding approach, one neglects the background potential and considers only the potentials of the two sites. An additional complication arises for a donor system, as the exact form of the donor potential is unknown, and a regular $1/r$ Coulomb potential augmented by central cell corrections is used. While this approach yields correct energy levels with wave functions having all expected properties, the physicality of the phenomenologically introduced central cell corrections \cite{NingPRB1971,PantelidesPRB1974} is disputable. For this reason, the correction is excluded in the calculation of hopping integrals within the H\"uckel model \cite{LePRB2017a}. However, using the pure $1/r$ potential does not have to be exact either, as it has too high a symmetry and implicitly assumes a uniform medium characterized by a dielectric constant. This approximation is not obviously valid on length scales of a few lattice constants. As we show below on a specific example, using the pure $1/r$ potential in the standard method can lead to qualitatively incorrect results, while the inclusion of the central cell correction turns out to be numerically challenging. Thus, we treat the agreement of the results obtained within these two approaches as a mutual confirmation rather than a benchmark against a reference. \subsection{Interorbital hopping}\label{sec:inter} \begin{figure}[tb] \includegraphics[width=1.0\linewidth]{hopdiffa.pdf} % \caption{\label{fig:hopdiffa}(Color online) Degenerate interorbital hopping.
Hopping integrals between pairs of different degenerate orbitals within the $T_2$ and $E$ manifolds (full symbols; {\color{cbblue}$\bullet$}) calculated as a function of donor distance along the [100] (left column) and [110] (right column) crystallographic directions. Each row of panels is for a different orbital pair, as marked. For comparison, empty symbols ({\color{cblgreen!90!black}$\bm{\square}$}) mark results obtained using the H\"uckel theory. Lines are to guide the eye. For vanishing cases, the underlying symmetry of the wave functions is schematically shown; the two colors mark the sign of the wave-function envelopes.} % \end{figure} Having established this, we proceed to the evaluation of hopping between pairs of different orbitals. First, we consider degenerate pairs within the $T_2$ and $E$ manifolds. Here, most of the integrals vanish due to symmetry. In terms of \eqref{bardeen} and \eqref{gradtodx}, this happens when the product of the two orbitals is antisymmetric in the plane perpendicular to the displacement, i.e., when they differ in their in-plane parity. Note that the differentiation in \eqref{gradtodx} changes the parity only along the displacement direction, which is irrelevant in this regard. As we consider (001)-plane displacements, $t_{T}^{(x,z)}$ and $t_{T}^{(y,z)}$ vanish for any direction since the orbitals differ in the $z$-axis parity. On the other hand, $t_{T}^{(x,y)}$ may be nonzero for any displacement direction other than [100] and [010], for which it vanishes. The pair of orbitals from the $E$ manifold gives a nonzero hopping $t_{E}^{(xy,z)}$ for all directions except the diagonal ones: [110] and [1$\bar{1}$0]. Thus, we calculate the two nonvanishing hopping integrals: $t_{T}^{(x,y)}$ for $\bm{d}\parallel$[110] and $t_{E}^{(xy,z)}$ for $\bm{d}$ along [100]. The calculation proceeds as previously, and the results are presented in \figref{hopdiffa}. Again, results for the displacement along [100] are shown on the left and those for [110] on the right, while each row is for a different orbital pair. In the vanishing cases, instead of plots, we show schematic diagrams visualizing the difference in parity that makes the coupling forbidden. For $t_{E}^{(xy,z)}$ along [100], we find good agreement with the standard method, including the non-monotonic behavior at short distances. Notably, the values are significant and comparable with the same-orbital hopping. In the case of $t_{T}^{(x,y)}$ along [110] (top right panel in \figref{hopdiffa}), we face an issue. To obtain nonvanishing hopping values using the standard method, we need to explicitly take into account the central cell correction in the on-site potential. This applies to the entire range of distances, including those much larger than the spatial extent of the correction potential. As the latter is very local compared to the integration domain, the need for its inclusion creates a great computational challenge. We find the computed values to be significantly sensitive to the integration volume and other computational details. Thus, we cannot confirm that the obtained strongly oscillating result is quantitatively correct. On the other hand, using Bardeen's theory, we get a well-converged result without any additional treatment. The integral is considerably smaller than the others, but it exhibits a meaningful slow oscillation similar to those observed above in the same-orbital couplings for $p$-type orbital envelopes. We discuss the calculation of this integral in more detail in the following subsection.
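The parity-based selection rules above can also be verified directly in a toy setting: if two in-plane envelopes differ in parity within the separation plane, the integrand of \eqref{bardeen} is odd and its surface integral vanishes. A minimal Python sketch (with hypothetical Gaussian envelopes, not the actual donor orbitals):
\begin{verbatim}
import numpy as np

y = np.linspace(-10, 10, 401)
z = np.linspace(-10, 10, 401)
Y, Z = np.meshgrid(y, z)

env = np.exp(-(Y**2 + Z**2) / 8.0)   # common even in-plane envelope
psi_even = env                        # even in z
psi_odd = Z * env                     # odd in z (p_z-like)

# the derivative along x in Eq. (gradtodx) does not change in-plane parity,
# so the in-plane integrand is odd and the surface integral vanishes
integrand = psi_even * psi_odd
print(np.trapz(np.trapz(integrand, z, axis=0), y))   # ~0 up to numerical noise
\end{verbatim}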
\begin{figure}[tb] \includegraphics[width=1.0\linewidth]{hopdiffb.pdf} % \caption{\label{fig:hopdiffb}(Color online) Nondegenerate interorbital hopping. As in \figref{hopdiffa}, but for hopping integrals between pairs of orbitals from different manifolds: from $A_1$ to $T_2$ and $E$.} % \end{figure} \begin{figure}[tb] \includegraphics[width=1.0\linewidth]{hopdiffc.pdf} % \caption{\label{fig:hopdiffc}(Color online) Nondegenerate interorbital hopping. As in \figref{hopdiffa}, but for hopping integrals between pairs of orbitals from different manifolds: from $T_2$ to $E$.} % \end{figure} We are left with the calculation of hopping integrals connecting different manifolds. Also in this case, a number of couplings vanish due to symmetry. For those that may be nonzero, as nondegenerate orbital pairs are considered, we also need to evaluate the correction given in the second term of \eqref{bardeen}. Technically, for the problem in question, it amounts to half the wave-function overlap multiplied by the energy splitting. The results, presented in \figref{hopdiffb} and \figref{hopdiffc} for the two displacement directions as previously, show similarly good overall agreement with the standard method as found for the same-orbital and degenerate interorbital hoppings (except for $t_{T}^{(x,y)}$ along [110]). The main difference between the results that may be found in \figref{hopdiffb} is the large-distance behavior of the couplings between $T_2$ and $E$ orbitals: the standard method predicts a faster decay. A weaker opposite difference is also noticeable in $t_{AT}^{(x)}$ along [100]. In \figref{hopdiffc}, only a minor shift in the oscillation phase is present in some of the hoppings. Again, we notice that all nonvanishing hopping integrals are comparable not only with the same-orbital couplings but, at short distances, also with the orbital splitting. This explains the observed transition to the strong coupling regime below $\sim6$~nm \cite{KlymenkoJPCM2014,SaraivaJPCM2015,GamblePRB2015}. \subsection{Calculation of $t_{T}^{(x,y)}$ along [110]}\label{sec:pxpy} \begin{figure}[tb] \includegraphics[width=0.75\linewidth]{pxpy_scheme.pdf} % \caption{\label{fig:pxpy_scheme}(Color online) Schematic presentation of the expansion of $t_{T}^{(x,y)}$ into even and odd contributions with respect to the plane normal to $\bm{d}\parallel$[110]. Vanishing contributions are marked.} % \end{figure} \begin{figure}[tb] \includegraphics[width=0.9\linewidth]{pxpy110.pdf} % \caption{\label{fig:pxpy110}(Color online) Evaluation of $t_{T}^{(x,y)}$ for separation along [110]. Hopping integrals $t_{T}^{(+,+)}$ and $t_{T}^{(-,-)}$ for a fixed donor distance along the [110] crystallographic direction as a function of the number of integrand evaluations (integration precision). The top panel shows the results obtained using Bardeen's theory; the middle and bottom panels are for the H\"uckel method with and without the central cell correction (CCC), respectively.} % \end{figure} Let us go back to the calculation of $t_{T}^{(x,y)}$ along [110], where an issue arises. Trying to converge the calculation using the standard method, we obtain results that seem to tend to zero in a weak manner (weakly decreasing values comparable to their uncertainty -- a behavior different from that of hoppings vanishing due to symmetry). As we use Monte Carlo integration, and the integrand is highly oscillatory, this could mean that the integral vanishes.
On the other hand, using Bardeen's theory, we get well-converged values, which are small compared to the other integrals, but certainly do not vanish. As both methods are approximate, we cannot readily decide which of the results is correct. There is no evident symmetry-based argument for the vanishing of the given hopping. To get more insight, we define the mixed states \begin{equation}\label{eq:tpm} \ket[\big]{\,T_2^{(\pm)}} = \frac{1}{\sqrt{2}}\lr*{ \ket[\big]{\,T_2^{(x)}} \pm \ket[\big]{\,T_2^{(y)}} } \end{equation} with well-defined parity in the separation plane normal to $\bm{d}\parallel$[110], even and odd, respectively. Using the inverse transformation, \begin{equation}\label{eq:txy} \ket[\big]{\,T_2^{(x/y)}} = \frac{1}{\sqrt{2}}\lr*{ \ket[\big]{\,T_2^{(+)}} \pm \ket[\big]{\,T_2^{(-)}} }, \end{equation} we may rewrite the hopping integral in question as \begin{align}\label{eq:txypm} t_{T}^{(x,y)} &= \frac12 \lr*{ t_{T}^{(+,+)} - \,t_{T}^{(-,-)} - \,t_{T}^{(+,-)} + \,t_{T}^{(-,+)} } \nonumber\\ &= \frac12 \lr*{ t_{T}^{(+,+)} - \,t_{T}^{(-,-)} }, \end{align} where the last two terms, $t_{T}^{(+,-)}$ and $t_{T}^{(-,+)}$, vanish as they couple states of different parity. Thus, the transformation allows us to explicitly remove two vanishing contributions, and express $t_{T}^{(x,y)}$ as a difference of two manifestly finite integrals, as schematically shown in \figref{pxpy_scheme}. We expect the integral to be small, as it is given by a difference of two similar terms. However, there is no reason why it should vanish. In \figref{pxpy110}, we plot the two contributions, $t_{T}^{(+,+)}$ and $t_{T}^{(-,-)}$, as a function of the number of integrand evaluations, i.e., we show how they converge. The bottom panel shows the results obtained for the standard method with the $1/r$ potential used. The two contributions tend to the same value, and hence $t_{T}^{(x,y)}$, given by their difference, vanishes. On the other hand, Bardeen's theory gives us a finite difference and thus a nonvanishing hopping integral, as shown in the top panel. Looking for the reason for this discrepancy, we focus on the differences between the two methods. In both cases, we use the same wave functions. In Bardeen's theory, they are the only ingredient for the calculation, while in the standard H\"uckel method, the on-site potential is integrated between these functions. Here, a subtle difference arises in the symmetry of the problem in the two methods. The wave functions implicitly inherit the tetrahedral symmetry of the full donor potential (including the central cell correction), while the symmetry of the $1/r$ potential is higher. To verify if this is the source of the problem, we repeat the H\"uckel-method calculation, this time adding the central cell correction to the integrated potential. The result is shown in the middle panel, where the two contributions indeed show a finite difference, like those calculated using Bardeen's theory. We need to emphasize that, in this specific case, the central cell correction influences the results in the entire range of donor distances. The lack of this symmetry-breaking correction changes the result qualitatively, giving an artificially vanishing hopping. In general, the solution should be straightforward: one just needs to take into account the central cell correction. However, its inclusion in numerical integration is challenging.
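The decomposition \eqref{txypm} used above is purely algebraic and can be checked independently of the donor problem. A minimal Python sketch with a toy real-symmetric coupling block in the $\{T_2^{(x)},T_2^{(y)}\}$ subspace (our own construction, not the actual matrix elements):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Ht = rng.normal(size=(2, 2))
Ht = 0.5 * (Ht + Ht.T)            # toy Hermitian (real-symmetric) block

ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # |T2^(x)>, |T2^(y)>
ep = (ex + ey) / np.sqrt(2)       # |T2^(+)>, even in the separation plane
em = (ex - ey) / np.sqrt(2)       # |T2^(-)>, odd in the separation plane

t_xy = ex @ Ht @ ey
t_pp = ep @ Ht @ ep
t_mm = em @ Ht @ em
print(t_xy, 0.5 * (t_pp - t_mm))  # equal, cf. Eq. (txypm)
\end{verbatim}
For a real-symmetric block the two cross terms $t_{T}^{(+,-)}$ and $t_{T}^{(-,+)}$ cancel identically; the parity argument of the text makes each of them vanish separately.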
The explicit form of the correction potential for a donor is \cite{GamblePRB2015} \begin{equation} U_{\mathrm{cc}}\lr{\bm{r}} = A_0 e^{-r^2/(2a^2)}+A_1\sum_{i=1}^4 e^{-\abs{\bm{r}-b\bm{t}_i}^2/(2c^2)}, \end{equation} where $\bm{t}_i\in\LR{ (1,1,1),(-1,1,-1),(1,-1,-1),(-1,-1,1)}$ are the tetrahedral directions of the bonds, $A_0=-1.2837$~meV, $A_1=-2642.0$~meV are the amplitudes, $a=0.12857$~nm is the spatial extent of the symmetric part, while $b=0.21163$~nm and $c=0.09467$~nm are the displacement and extent of the non-spherical parts. The last parameter shows that the symmetry-breaking contributions are very local, as their spatial extent is tiny compared to the volume of the two-donor system, hence also to the integration domain. Thus, obtaining quantitatively correct results when they critically depend on this potential is at least challenging. We observe a substantial variation of the calculated values with the integration domain size (in the range where it should already be large enough) as well as with the specific integration algorithm used. Because of this, we are unable to confirm the quantitative accuracy of these specific H\"uckel-method results. Thus, while in general correct results can be obtained in the standard method if one is careful about the potential used, this may turn out to be computationally unfeasible. In contrast, an advantage of Bardeen's theory is revealed here, resulting from its dependence on the barrier-region parts of the wave functions only. \subsection{Donor pair eigenstates}\label{sec:full} \begin{figure}[tb] \includegraphics[width=1.0\linewidth]{hopfull.pdf} % \caption{\label{fig:hopfull}(Color online) Donor pair excited-ground state splitting. Energy splitting between the two lowest-energy eigenstates of a donor pair (full symbols; {\color{cbblue}$\bullet$}) calculated as a function of donor distance along the [100] (left column) and [110] (right column) crystallographic directions. For comparison, empty triangles ({\color{cblred!90!black}$\bm{\triangle}$}) show the result of a full calculation from \oncite{GamblePRB2015}. Lines are to guide the eye only.} % \end{figure} Having all the hopping integrals calculated and knowing the single-donor orbital energies, we may finally construct the total Hamiltonian from \eqref{hamiltonian}. By diagonalizing it, we obtain the energy spectrum of a donor pair. In \figref{hopfull}, we show the energy splitting between the ground and first excited states calculated as a function of donor distance in the [100] and [110] directions. In this case, we may benchmark our results against a full two-donor EMT calculation from \oncite{GamblePRB2015}, shown with empty triangles. We find the result to be in very good qualitative agreement, with some minor quantitative differences, mainly in the medium-distance regime. The transition to the strong coupling regime, visible for the [100] displacement as a kink at $d\simeq \SI{6}{\nano\metre}$, is reproduced correctly. Also, the oscillations for the [110] displacement are in phase over the entire distance range. This agreement confirms the suitability of the proposed method for calculating the eigenstates of pairs and clusters of dopants based only on the wave functions of a single donor. \begin{figure}[tb] \includegraphics[width=1.0\linewidth]{fulleig.pdf} % \caption{\label{fig:fulleig}(Color online) Donor pair energy spectra.
Energy (with respect to the Si conduction band edge) of the twelve lowest eigenstates of a donor pair calculated as a function of donor distance along the [100] (left column) and [110] (right column) crystallographic directions. The red, green, and blue components of the point color (additive RGB model) show the contributions of $\pm x$, $\pm y$, and $\pm z$ valleys (top panels) or $A_1$, $T_2$, and $E$ orbitals (bottom), respectively. The apparently random ordering of blue and green points in the top left panel is due to the degeneracy of the $T_2^{(y)}$ and $T_2^{(z)}$ levels.} % \end{figure} Next, in \figref{fulleig}, we plot the entire calculated spectra, i.e., the twelve donor-pair eigenstates, again as a function of the [100] and [110] donor separation. To characterize the eigenstates, we color-code the information on their composition: in the top row of panels, the valley contributions are shown with the red, green, and blue color components, while in the bottom row, we similarly present the contribution of the orbital types. When the tunnel coupling is relatively weak, i.e., at larger distances $d>6$~nm, there is no evident orbital mixing, and each of the orbitals forms its own bonding and antibonding eigenstates. The two lowest-energy levels are then such states composed mainly of the lowest $A_1$ orbitals. Their splitting equals twice a quantity that may be regarded as an effective ground-state tunnel coupling $t_{\mr{eff}}$. For strong coupling, when the hopping integrals are comparable to or greater than the orbital splittings, the ordering of states is affected, and the antibonding state with dominant $A_1$ contribution is no longer the first excited state \cite{KlymenkoJPCM2014,SaraivaJPCM2015}. This transition is reflected in the kink seen in \figref{hopfull}. Additionally, the antibonding state mixes considerably with other orbitals. Consequently, determining the proper value of the effective hopping $t_{\mr{eff}}$ for $d<6$~nm requires more detailed research. Moreover, the very issue of the applicability of single-band models for closely spaced dopant arrays also requires consideration. These issues will be addressed in our upcoming work. \newtext{ \section{Applicability of the method to overlapping potentials}\label{sec:corr} In this section, we rederive the formula for the hopping without assuming non-overlapping potentials, to ensure its applicability to long-range potentials such as those studied here. Additionally, we show that \eqref{bardeen} can be corrected to better describe the hopping at short distances. Bardeen's theory aims to calculate the tunneling current (or tunneling transition rate) between initial and final states that are eigenstates of two different potentials. The result for the rate is perturbative and has the form of Fermi's golden rule, which can also be obtained via standard time-dependent perturbation theory \cite{ReittuAJoP1995}. In this respect, one deals with the standard limitations: the matrix element has to be a small perturbation, and the calculated rate is valid at long enough time scales. Here, we do not study time-dependent phenomena, as we are only interested in the tunneling Hamiltonian. Thus, we exploit only the part of Bardeen's derivation showing that the matrix element can be calculated as a surface integral of the transition probability current $J_{ij}\mkern-1mu\lr{\bm{r}}$. For this, the conditions of the perturbation theory do not apply. An additional assumption is made in Bardeen's theory that the two potentials do not overlap.
For long-range potentials like $1/r$ considered here, this is not fulfilled. Here, we rederive the formula without this assumption and show that the surface integral from \eqref{bardeen} is, in fact, generally valid for the calculation of tunnel coupling. For non-overlapping potentials, it additionally conforms to the transfer Hamiltonian matrix element as defined by Bardeen, which allows then for calculation of the transition rate. Let $H_1 = T + U_1$ and $H_2 = T + U_2$ be the Hamiltonians of the two isolated parts of the system, where $T = -\hbar^2\nabla^2/2m$ is the kinetic energy, and $U_i$ is the $i$th potential. $H_1$ differs from $H_2$ by the position at which its potential is centered. Their ground states are $H_i\psi_i=E\psi_i$. Let us now assume that we are in the range of distances at which the hopping integral is well defined. This means that the lowest-energy eigenstates of $H=T+U_1+U_2$, i.e., for the pair of potentials (sites), are given by the bonding and antibonding superpositions of single-site ground states \begin{equation}\label{eq:app-hopp-def} H \, \frac{\psi_1\pm\psi_2}{\sqrt{2}} = \lr{ E\pm t } \, \frac{\psi_1\pm\psi_2}{\sqrt{2}}, \end{equation} which are split by twice the hopping integral $t$. From this, by adding/subtracting by sides, we get \begin{subequations}\label{eq:app-hopp-eqns} \begin{align} H\psi_1 = E\psi_1 + t\psi_2, \\ H\psi_2 = E\psi_2 + t\psi_1. \end{align} \end{subequations} Now, we left-multiply the first equation by $-\psi_2^{*}$, conjugate the second one and multiply it by $\psi_1$, add equations by sides and integrate over half-space $x>x_0$. This yields \begin{align}\label{eq:app-hopp-diff} \int_{x_0}^\infty \!\!\mathrm{d}x \, \psi_1\lr{x} H \psi_2^{*}\lr{x} &{} - \int_{x_0}^\infty \!\!\mathrm{d}x \, \psi_2^{*}\lr{x} H \psi_1\lr{x} \nonumber \\ = \int_{x_0}^\infty \!\!\mathrm{d}x &{}\Big[ t\abs*{\psi_1\lr{x}}^2 - t\abs*{\psi_2\lr{x}}^2 \\ &{}+ E\psi_1\lr{x}\psi_2^{*}\lr{x} - E\psi_2^{*}\lr{x}\psi_1\lr{x} \Big],\nonumber \end{align} where terms in the last line are identical and cancel out. On the left-hand side, potential terms from the two integrals also cancel out, and, for the kinetic-energy part, by integrating one of the terms by parts twice, we get \begin{equation} \int_{x_0}^\infty \!\!\mathrm{d}x \, \psi_1\lr{x} T \psi_2^{*}\lr{x} - \int_{x_0}^\infty \!\!\mathrm{d}x \, \psi_2^{*}\lr{x} T \psi_1\lr{x} = J_{ij}\mkern-1mu\lr{x_0}, \end{equation} where $J_{ij}$ is the transition probability current density from Bardeen's theory, given in \eqref{trans-prob-curr}. Finally, we get \begin{align}\label{eq:bardeen-corr} t &{}= - J_{ij}\mkern-1mu\lr{x_0} \LR*{ \int_{x_0}^\infty \mathrm{d}x \Lr*{ \abs*{\psi_2\lr{x}}^2 - \abs*{\psi_1\lr{x}}^2 } }^{-1} \nonumber\\ &{}\equiv - J_{ij}\mkern-1mu\lr{x_0} \, R^{-1}, \end{align} where the minus sign comes from the fact we treated $t$ as positive, and we defined $R = 1-\rho_1-\rho_2$, with $\rho_i$ being the tails of the two probability densities on the sides of the division point opposite to the location of the given site. Thus, the hopping integral is given by the matrix element from Bardeen's theory up to a multiplicative factor that tends towards unity for a low-overlap system. } \newtext{We can repeat this reasoning for the case of no degeneracy, in which the eigenstates are unequal superpositions of the initial states. 
Assuming the orbital splitting $\Delta E=E_2-E_1$ to be small and thus keeping only linear terms in $\Delta E$, we write \begin{align}\label{eq:app-hopp-def2} H \, \lr*{\alpha\psi_1+\beta\psi_2} &{}= \phantom{-} t \, \lr*{\alpha\psi_1+\beta\psi_2}, \nonumber \\ H \, \lr*{-\beta\psi_1+\alpha\psi_2} &{}= -t \, \lr*{-\beta\psi_1+\alpha\psi_2}, \end{align} where $\alpha = \cos\lr{\theta/2}$, $\beta = \sin\lr{\theta/2}$ are the superposition coefficients (real for real $t$) with the mixing angle $\theta=\mr{atan}\lr{2t/\Delta E}$, and we put the mean energy to zero. By multiplying the first equation by $\alpha$ and the second one by $\beta$ and subtracting by sides (and vice versa, followed by adding), we get \begin{subequations}\label{eq:app-hopp-eqns2} \begin{align} H\psi_1 = \phantom{-}t\,\lr*{\alpha^2-\beta^2}\,\psi_1 + 2\alpha\beta\,t\,\psi_2, \\ H\psi_2 = -t\,\lr*{\alpha^2-\beta^2}\,\psi_2 + 2\alpha\beta\,t\,\psi_1. \end{align} \end{subequations} As previously, we multiply the first equation by $-\psi_2^{*}$, conjugate the second one and multiply it by $\psi_1$, add by sides and integrate over half-space $x>x_0$, which gives \begin{equation} \!\!\!J_{ij}\lr*{x_0} = - t\left\lbrack 2\,\alpha\beta \, R +2\lr*{\alpha^2-\beta^2}\int_{x_0}^{\infty} \!\!\mathrm{d}x \, \psi_2^{*}\lr{x}\psi_1\lr{x} \right\rbrack.\!\! \end{equation} Keeping up to linear terms in $\Delta E$, we have $\alpha(\beta) \simeq 1/\sqrt{2} \mp \Delta E /4\sqrt{2}t$, and thus $\alpha\beta\simeq 1/2$ and $\alpha^2-\beta^2\simeq-\Delta E/2t$. With this, we arrive at the result for the hopping \begin{equation}\label{eq:bardeen-nondeg-corr} t = - \Lr*{ J_{ij}\mkern-1mu\lr{x_0} - \lr*{E_1-E_2} \int_{x_0}^{\infty} \!\!\mathrm{d}x \, \psi_2^{*}\lr{x}\psi_1\lr{x} }\,R^{-1}, \end{equation} where the second term on the right-hand side reproduces the correction for nondegeneracy from \eqref{bardeen}. The generalization of the above derivations to three dimensions is straightforward. It yields an integral of $J_{ij}\mkern-1mu\lr{\bm{r}}$ over the division surface $S$ and a volume integral over the $V_{+}$ region in the second term, as in \eqref{bardeen}, and in the expression for $R$. } \newtext{In this way, we have reproduced Bardeen's result without using the assumption of non-overlapping potentials. The overlap of wave functions is still not treated strictly, but we have obtained a correction accounting for it in the form of the $R$ factor. The latter depends on wave functions only, so the augmented method remains potential-free in the sense that one does not need to use donor/site potentials in the calculation explicitly.} \begin{figure}[tb] \includegraphics[width=\linewidth]{overlap-corr.pdf} % \caption{\label{fig:overlap-corr}(Color online) Results corrected for the overlap. Selected same-orbital [100]-axis hopping integrals calculated with (empty pentagons; {\color{cborange}$\boldsymbol{\pentagon}$}) and without (full circles; {\color{cbblue}$\bullet$}) the $R$ factor from \eqref{bardeen-corr}. For comparison, empty squares ({\color{cblgreen!90!black}$\bm{\square}$}) mark results obtained using the H\"uckel theory; in the first row, empty triangles ({\color{cblred!90!black}$\bm{\triangle}$}) show results from \oncite{LePRB2017a}. The shaded area shows the range of distances not presented in previous plots. 
Lines are to guide the eye.} % \end{figure} \newtext{In \figref{overlap-corr}, we show for selected cases that introducing this correction (empty pentagons) brings our results into even better agreement with the H\"uckel tight-binding results (empty squares); however, the difference it introduces is not significant overall down to distances of $d\simeq4$~nm. At around $d\simeq2.5$~nm, our corrected results start to diverge from those of the H\"uckel method. This may be due to the lack of the central cell correction in our H\"uckel-method calculation, but it could also be a sign of reaching the limit of applicability of our method.} \section{Conclusions}\label{sec:conclusions} We have shown that hopping integrals (tunnel couplings) between phosphorus donors in silicon can be calculated with satisfactory accuracy using Bardeen's transfer-Hamiltonian method when the orbital wave functions are known. \newtext{We have calculated both inter- and intraorbital tunnel matrix elements, which are essential for constructing multi-orbital lattice models of donor arrays.} We have also used these hoppings to form and diagonalize the two-donor Hamiltonian. With this, we have obtained the ladder of eigenstates and characterized their orbital and valley composition. In contrast to the commonly used H\"uckel theory, the method we use does not involve integration over donor or lattice potentials. Instead, the matrix element is evaluated purely from the barrier-region parts of the wave functions of the two states in question. \newtext{This turns out to be practically advantageous}, as we show that neglecting the central cell correction in the standard method may lead to qualitatively incorrect results, while its inclusion in the integration is computationally troublesome. \newtext{In contrast, wave functions obtained with the correction do not cause such problems in the proposed method.} \newtext{As the original derivation for the matrix element in Bardeen's theory exploits the assumption of non-overlapping potentials, which is not fulfilled for $1/r$ ones, we present a derivation that does not rely on this assumption. Additionally, we find a correction to the original expression, which extends the applicability of the method to shorter distances (higher wave-function overlaps).} Using the available wave functions for the six orbitals forming the ground-state manifold of a Si:P donor, we have calculated tunneling matrix elements both for identical and for different pairs of orbitals. The results are close to those obtained in the standard way, and, where available, we have compared them with data from the literature. While additional averaging is needed for crystallographic directions where valley interference occurs, the presented method turns out to have a computational cost comparable to that of the standard one. Given this and the conceptual advantages it offers, we find the method competitive with the commonly used H\"uckel theory. Our work thus benchmarks the method with a positive outcome and indicates its suitability for evaluating hopping integrals for lattice models when orbital wave functions are known or postulated. \acknowledgments We acknowledge support from the National Science Centre (Poland) under Grant No. 2015/18/E/ST3/00583.
\section{Introduction}\label{Introduction} \label{intro} A charge produces radial electric field lines in accordance with the Gauss law. The divergence of the electric field vector is singular at the position of this charge. The magnetic analogue has attracted a lot of attention through the quest for magnetic monopoles \cite{Curie,Dirac}, showing interesting developments \cite{Smooth} related to quantum physics \cite{AharonovBohm,Berry}. Engineering such radial magnetic fields on the surface of a Poincar\' e, Riemann, Bloch sphere for a spin-$\frac{1}{2}$ particle is now possible in quantum systems, allowing one to relate the flux produced by the Berry curvature \cite{Berry}, acting as the analogue of the magnetic field, to the first Chern number \cite{Chern} counting the number of monopoles \cite{Roushan,Boulder,Henriet,HH}. A quantized topological number equal to unity can be measured when driving from north to south poles \cite{Roushan,Boulder}. These developments show the realization of magnetic monopoles inside atomic, mesoscopic planets with good control over the phase space of parameters and over the Hilbert space. Dirac magnetic monopoles are also realized in Bose-Einstein spinor condensates \cite{Ray} and analogues are found in condensed-matter systems such as spin ices \cite{Moessner,Bramwell}. A topological sphere can be equivalently defined in terms of the Euler characteristic $\chi=2-2g=0$ with $g$ corresponding to the topological charge. Identifying the topological charge with a handle or hole, the sphere acquires the same topology as a cup, torus, donut. From Stokes' theorem, the topological sphere is also equivalent to two circles, one on top of the other. When the charge leaks out of the sphere, from the Poincar\' e-Hopf theorem, the Euler characteristic turns to $\chi=2$. Interestingly, the same topological number can describe topological lattice models related to the quantum Hall effect, the quantum anomalous Hall effect and the Haldane model on the honeycomb lattice in two dimensions \cite{Haldane}. The quantum Hall effect was observed by K. von Klitzing, G. Dorda and M. Pepper \cite{Hall} in MOSFETs, together with theoretical developments \cite{Ando}, a century after the discovery of the Hall effect \cite{HallEdwin}. The topological system reveals a bulk-edge correspondence showing a protected one-dimensional chiral flow around the sample with a quantized conductance $\frac{e^2}{h}$, where $e$ is the charge of an electron and $h$ is the Planck constant \cite{Halperin,Buttiker}. Topological properties are measured from the quantum Hall conductivity \cite{Thouless} related to the dynamics of these edge states \cite{Hatsugai}, and also through a quantized circular dichroism of light \cite{Goldman,Hamburg}. The Berry curvature can be measured in ultra-cold atoms through a correspondence with the Bloch sphere \cite{Weitenberg,Hauke}. The Karplus-Luttinger velocity has also engendered important developments, since 1954, towards the understanding of the anomalous Hall effect in materials \cite{KarplusLuttinger,Luttinger,Nozieres,Nagaosa}. The quantum anomalous Hall effect and the Haldane model in two dimensions find applications in materials \cite{Liu}, cold atoms \cite{Jotzu} and light systems \cite{HaldaneRaghu,Joannopoulos,Ozawa,KLHlightnetworks}.
Topological phenomena in the presence of interactions can yet reveal other intriguing effects such as the presence of fractional charges and statistics as observed in the fractional quantum Hall effect \cite{Stormer,Heiblum,Saminadayar,Kapfer,Bartolomei,TalKaryn} related to Laughlin phases \cite{Laughlin}. The sphere is also appropriate to describe topological aspects of the Laughlin states \cite{Haldanesphere}. The geometrical foundations and Berry phase effects play a key role in the understanding of the topological properties of band theory \cite{Niu,Haldanegeometry,RMPColloquium,Book}. This review starts from the observation that the formalism can define a common language (`{\it fil conducteur}') between electromagnetism of planets, quantum physics and topological matter with crystalline structures through a quantum topometry or geometrical approach. Here, we elaborate on smooth fields related to the vector potential in classical physics and the Berry connection in quantum mechanics allowing us to define local markers of global topological properties from the poles of the sphere down to specific points in the Brillouin zone of the topological lattice model such as the Dirac $K$ and $K'$ points of graphene \cite{graphene,Vozmediano} and also the $M$ point \cite{C2}. We present applications of this `topometry' for transport properties in time, from a Newtonian approach in curved space related to the Parseval-Plancherel theorem in quantum mechanics \cite{HH}, and also for the responses to circular polarizations of light locally in the reciprocal space of the lattice model \cite{Klein,C2}. We introduce a relation between the smooth fields and Fourier series through a dynamical protocol to describe the topological response. The robustness of the local topological responses can be shown from a smooth deformation of a sphere into an ellipse and into a cylinder where we can show analytically the presence of edge modes at the boundaries with the top and bottom disks. The formalism reveals a relation \cite{C2} between Berry curvatures, quantum distance \cite{Ryu,BlochMetric} and light response which becomes equivalent to the square of the global topological number at the poles of the sphere. The metric close to the poles of the sphere is effectively flat and can be interpreted as a vacuum for the gravitational field in the sense of the Einstein field equations assuming a pure state. The recent work \cite{BlochMetric} also identifies a possible relation between stress-energy, entropy, Bloch bands and gravitational potential from the reciprocal space. The (global) topological characterization from the poles of the sphere is now encoded in the definition of the smooth fields, which implies that information on the topological charge is transported in a thin Dirac cylinder from the equatorial plane to each pole. We also draw a parallel between the light responses in the quantum Hall effect in graphene and in the topological Haldane model on the honeycomb lattice from the Dirac points. A circularly polarized electric field on a sphere can produce a topological phase either at the classical level through a skin effect or at the quantum level, showing the possibility to induce a topological phase on a lattice through the light-matter interaction. Such protocols are commonly applied in cold atoms in optical lattices and topological light systems to implement artificial gauge fields within a Floquet approach \cite{Monika,GoldmanDalibard}.
In this way, graphene can also turn into a quantum anomalous Hall state through circularly polarized light \cite{McIver,Sato}. Topological insulators are phases of matter characterized through a $\mathbb{Z}_2$ topological number as a result of the spin-orbit coupling \cite{RMPColloquium,QiZhang,Book}. In two dimensions, topological insulators equivalently refer to the quantum spin Hall effect \cite{BernevigZhang,Murakami} and the Kane-Mele model on the honeycomb lattice \cite{KaneMele1} where the $\mathbb{Z}_2$ topological order reveals a helical Luttinger liquid at the edges \cite{WuC}. In three dimensions, the system develops metallic surface states. Topological insulators find applications in two and three dimensions starting from mercury-based \cite{Konig} and bismuth-based materials \cite{HsiehTI,RMPColloquium}, respectively. These states of matter have engendered interesting mathematical developments on symmetries related to the Pfaffian and the Bloch theorem \cite{KaneMele2,FuKane,MooreBalents}. From the smooth fields, we show that we can measure the topological spin Chern number \cite{Sheng} at specific points in the Brillouin zone from the responses to circularly polarized light with a correspondence towards a $\mathbb{Z}_2$ spin pump \cite{C2}. We develop an application of the smooth fields in topological superconducting wires \cite{Kitaev} and the $p+ip$ superconducting state on a square lattice \cite{ReadGreen} which reveal Majorana fermions \cite{Wilczekclass,WilczekMajorana} through the Bogoliubov-de Gennes Hamiltonian \cite{Bogoliubov,deGennes,Tinkham} and the Bardeen-Cooper-Schrieffer (BCS) theory \cite{Cooper,BardeenCooperSchrieffer}. The stability of topological phases towards weak interaction effects has been shown through various approaches such as renormalization group methods \cite{KaneMele1}, perturbative, mean-field and gauge theories \cite{PesinBalents,Mott}. At the same time, strong interactions give rise to Mott physics which can be quantitatively studied through numerical tools \cite{Varney,WuQSH,Hohenadler,Zheng,Cocks}. Here, we present a way to include interaction effects within the smooth-fields topological description from a recently developed stochastic mean-field variational approach \cite{Klein,QSHstoch} with a simple estimate of the Mott transition line through the ground-state energy. Strong interactions can also reveal other interesting states of matter such as topological Mott phases \cite{PesinBalents}, fractional topological insulators \cite{MaciejkoFiete} and fractionalized quantum spin Hall states \cite{Kallin}. Within this review, introducing interaction effects between two Bloch spheres, we elaborate on the possibility of fractional $\frac{1}{2}$ topological numbers \cite{HH} in the presence of an Einstein-Podolsky-Rosen (EPR) entangled wavefunction or Bell pair \cite{Hagley,Aspect} at one pole. For two spheres, we introduce one-half topological numbers as the superposition of two geometries on each sphere, a halved surface radiating the flux produced by the magnetic monopole or topological charge inside the sphere and a halved surface participating in the spooky non-local correlations between the two spheres defining the quantum entangled geometry \cite{HH}. We develop the mathematical relation between topological properties, the quantum entangled wavefunction at one pole and a correspondence towards Majorana fermions.
From Stokes' theorem, the topological properties of each sphere are equivalent in the equatorial plane to a circle surrounding the topological charge on top of a disk acting as a mirror such that only half of the surface radiates the Berry curvature vector lines. In this sense, the $\frac{1}{2}$ topological number can equivalently be interpreted as an Euler characteristic $\chi=0+1$. Writing $\chi=2-2g_{eff}$ would then lead to an effective topological number $g_{eff}=\frac{1}{2}$ for the whole surface. This model may then link with meron physics covering half a unit sphere \cite{Meron}. The $\frac{1}{2}$ can also be viewed as a half thin cylinder joining one pole to the topological charge in the equatorial plane. Here, it is also interesting to mention developments in black-hole physics and the Gauss-Bonnet theorem in the presence of a boundary in space-time, reporting also the possibility of an Euler characteristic $\chi=1$ \cite{Gibbons}. We introduce a realization of this two-spheres model in a mesoscopic circuit, and this model can in fact be engineered with existing capabilities \cite{Roushan}. As an application, this two-spheres model may be used for quantum cryptography and key distribution where the presence of identically polarized spins at the north pole may result in an additional security check for Bob and Alice \cite{BennettBrassard}. At the south pole, the two-spheres model would then produce the entangled pairs in this protocol. Related to topological lattice models, we present applications of the two spheres in two-dimensional topological semi-metals, in graphene and also in bilayer honeycomb systems \cite{HH,bilayerQSH,Semimetal}. We develop further the formalism for topological coupled-planes models in a cube showing a relation between $\frac{1}{2}$ topological numbers, Ramanujan alternating infinite series and also surface states of three-dimensional topological insulators \cite{SekineNomura}. Interestingly, the one-half topological numbers of the two-spheres model also find a correspondence in the physics of two interacting topological superconducting wires \cite{Herviou,delPozo}. Finally, we also elaborate on the smooth-fields formalism applied to generalized resonating valence bond states \cite{HH}. The organization of the review is as follows. In the preliminaries of Sec. \ref{preliminaries}, we introduce the smooth fields in Eq. (\ref{polesA}) from the electrodynamics of planets. We also show how a time-dependent circularly polarized electric field can induce a Dirac magnetic monopole. In Sec. \ref{quantumphysics}, we introduce the quantum analogues of the smooth fields on the Bloch sphere of a spin-$\frac{1}{2}$ particle and show their relevance for the description of the global topological properties locally from the poles of the sphere. We describe transport properties and energetics, the correspondence with Fourier series and the responses to circularly polarized light from these smooth fields, showing a topologically quantized photo-electric effect \cite{C2}. We also show how circularly polarized light can produce a topological Bloch sphere. Slightly deforming the sphere onto a cylinder geometry reveals the occurrence of edge modes. The stability towards a dissipative environment is addressed in relation to quantum phase transitions, and a sphere in a bath may also find applications as a quantum dynamo \cite{Henriet, EphraimCyril}.
Then, we introduce the fractional entangled geometry \cite{HH} leading to $\frac{1}{2}$ topological numbers and the mesoscopic circuit implementation. The stability of the formalism towards a gentle deformation of the surface is also discussed. In Sec. \ref{anomalous}, we apply the sphere model to the Haldane model on the honeycomb lattice, referring to the quantum anomalous Hall effect, and discuss interaction and disorder effects with the stochastic variational approach \cite{Klein}. In Sec. \ref{Observables}, the formalism is developed in relation to this topological lattice model where we show the link between smooth fields, the quantum Hall conductivity, the responses to circularly polarized light and the ${\cal I}(\theta)$ function \cite{C2}, related to the quantum metric \cite{Ryu}, which reveals the square of the topological invariant at the poles of the spheres and equivalently at the Dirac points on the lattice. The light response at the $M$ point is also quantized in units of half of the (integer) topological invariant from symmetries applied to the tight-binding model. Here, we also address the parallel with the light response in the quantum Hall regime of graphene \cite{Zhang,Novoselov}, addressing also recent developments on fractional quantum Hall physics. The formalism can reveal the topological transition induced by a Semenoff mass \cite{Semenoff}. Related to the developed methodology, we describe the photovoltaic effect in graphene \cite{OkaAoki1,OkaAoki2,MoessnerCayssol} and the photogalvanic effect in Weyl semimetals in three dimensions \cite{Juan,Orenstein}. In Sec. \ref{quantumspinHall}, we develop the formalism in the situation of the quantum spin Hall effect and two-dimensional topological insulators with $\mathbb{Z}_2$ symmetry, showing a relation, through the light responses \cite{C2}, between the ${\cal I}(\theta)$ function, the topological spin Chern number \cite{Sheng} and the zeros of the Pfaffian as formulated by Kane and Mele \cite{KaneMele2}. The correspondence with the cylinder geometry establishes an analogy with a spin pump. We also address interaction effects towards the Mott transition \cite{Mott,WuQSH,QSHstoch,Plekhanov}. In Sec. \ref{proximityeffect}, we introduce bilayer systems from the angle of a proximity effect between graphene and a topological material described by a Haldane model \cite{bilayerQSH}. Then, we elaborate on the correspondence between the half-topological number per sphere and a topological semimetal with applications in bilayer honeycomb systems \cite{HH} and also in a one-layer Fermi-liquid graphene model \cite{Semimetal}. In Sec. \ref{Planks}, we build a topological coupled-planes model in a cube and show a correspondence between the $\frac{1}{2}$ topological number and the Ramanujan infinite alternating series with applications in transport and circularly polarized light. We also show a correspondence with the quantum Hall effect developing on the surface of three-dimensional topological insulators and axion electrodynamics \cite{Wilczek,QiZhang}. In Sec. \ref{further}, we develop the formalism for topological superconducting systems with a relation between the Nambu representation, the Anderson pseudospin \cite{Anderson} and the Bloch sphere \cite{SatoAndo}. We propose an analogue of the ${\cal I}(\theta)$ function \cite{C2} and a protocol implementation related to the $\mathbb{Z}_2$ symmetry associated with the Bardeen-Cooper-Schrieffer theory of the Kitaev superconducting wire \cite{Kitaev}.
We develop a Majorana-fermion representation of the one-half topological number and address an analogy with two superconducting wires \cite{Herviou,delPozo}. Through the topological $p+ip$ superconductor on the square lattice, we show a link between smooth fields and a topological marker introduced by Wang and Zhang through the Green's function at zero frequency \cite{Wang}. In Sec. \ref{GRVBT}, we develop the formalism of smooth fields for small spin arrays showing the possibility of other fractional topological numbers \cite{HH}. In Sec. \ref{Summary}, we summarize the main findings related to this research. In Appendix \ref{Berrycurvature}, we elaborate on the Berry curvature, the quantum distance, the metric and the ${\cal I}(\theta)$ function. In Appendix \ref{lightconductivity}, we develop the correspondence between the light responses and the photo-induced currents. In Appendix \ref{timereversal}, we introduce the time-reversal symmetry from quantum mechanics to topological insulators. In Appendix \ref{GeometryCube}, we develop the geometry related to the Green and divergence theorems in the thermodynamical limit for an assemblage of topological planks. In Appendix \ref{interactions}, we describe the stability of the topological superconducting phase for one wire regarding interactions within the Luttinger liquid formalism \cite{HaldaneLuttinger}. \section{Preliminaries: Electromagnetism and Smooth Fields} \label{preliminaries} \subsection{Magnetism and Vector Potential} \label{potential} Here, we introduce the smooth vector fields from classical electromagnetism in the presence of a radial magnetic field ${\bf B}=\bm{\nabla}\times{\bf A}=B{\bf e}_r$ with ${\bf e}_r$ being the radial unit vector. We study the surface of a sphere with a fixed radius $r$ such that $B(r)=B$. These radial magnetic fields can be produced by a Dirac magnetic monopole at the origin via the equation $\bm{\nabla}\cdot{\bf B}=q_m\delta({\bf r})$ \cite{Dirac}. Although this step may be perceived as a simple calculation related to electromagnetism, we find it useful to precisely define Eq. (\ref{tildefields}) through vector fields $\tilde{\bf A}$ defined smoothly on the whole surface. An analogous situation in quantum physics can be produced on the Bloch sphere of a spin-$\frac{1}{2}$ particle related to topological lattice models where the Dirac monopole will describe the topological charge associated with the model. Therefore, this Section aims at defining the local geometry from the magnetic flux on the surface of the sphere. The smooth fields will form the foundations of the mathematical language and at the same time will show that topological properties can be measured only from the poles of the sphere. The form of the vector potential will link with the smooth fields and with the topological properties of the system including quantum physics, from the observation that the Berry connection \cite{Berry} is analogous to a momentum in quantum mechanics and therefore should have, from symmetries, properties similar to those of a classical vector potential. We also mention here that the discussion below can be adapted for an electric field produced by a charge $q_e$ at the origin introducing the vector ${\bf A}_e$ such that ${\bf E}=\bm{\nabla}\times{\bf A}_e=E{\bf e}_r$. It is also interesting to observe that a similar formalism has recently allowed the identification of half magnetic monopoles through considerations on the Berry phase and magnetic flux in the presence of Cooper pairs \cite{DeguchiFujikawa}.
In spherical coordinates, the radial component of the curl operator $\bm{\nabla}\times{\bf A}$ leads to the equation \begin{equation} \frac{1}{r\sin\theta}\left(\frac{\partial}{\partial\theta}(A_{\varphi}\sin\theta) - \frac{\partial A_{\theta}}{\partial\varphi}\right) = B. \end{equation} Here, $\theta$ refers to the polar angle and $\varphi$ to the azimuthal angle. A solution of this equation in agreement with the Stokes theorem and the geometry admits the form $\frac{\partial A_{\theta}}{\partial\varphi}=0$; we can also set $A_{\theta}=0$ for this solution. Therefore, this leads to \begin{equation} \label{differential} \frac{\partial}{\partial\theta}(A_{\varphi}\sin\theta)=Br\sin\theta. \end{equation} To solve this equation, we can redefine \begin{equation} \label{smoothclassical} {A}'_{\varphi} = A_{\varphi}\sin\theta. \end{equation} At the north and south poles, we must have ${A}'_{\varphi}=0$. If we introduce smooth ${\bf A}$ fields from the geometry then this supposes that ${\bf A}$ does not diverge at the poles and consequently that ${A}'_{\varphi}=0$ at the poles. Then, Eq. (\ref{differential}) simplifies to \begin{equation} \partial_{\theta} {A}'_{\varphi} = Br \sin\theta. \end{equation} Then, if we integrate this equation continuously from an angle $\theta=0$ up to $\theta$ then the solution at an angle $\theta=\pi$ would not satisfy ${A}'_{\varphi}=0$. Therefore, it is justified to introduce two regions joining at an angle $\theta_c$; see the discussion in the book of Nakahara \cite{Nakahara}. We can integrate this equation on the north (with $0\leq\theta<\theta_c$) and south (with $\theta_c<\theta\leq\pi$) regions, which then results in \begin{eqnarray} \label{vecpot} {A}'_{\varphi}(\theta<\theta_c) &=& -Br(\cos\theta-1) = 2Br\sin^2\frac{\theta}{2} \\ \nonumber {A}'_{\varphi}(\theta>\theta_c) &=& -Br(\cos\theta+1) = -2Br\cos^2\frac{\theta}{2}. \end{eqnarray} Eqs. (\ref{vecpot}) have been developed in relation to topological properties, the Aharonov-Bohm effect and the Berry phase \cite{Smooth}. In this way, we define solutions such that the smooth fields ${A}'_{\varphi}=0$ at the two poles. These smooth fields also have the property that \begin{equation} \label{Acircles} {A}'_{\varphi}(\theta<\theta_c) - {A}'_{\varphi}(\theta>\theta_c)=2Br. \end{equation} Here, we precisely introduce $\tilde{A}_{\varphi}(\theta)=-Br\cos\theta$ such that on the two regions we have \begin{eqnarray} \label{tildefields} {A}'_{\varphi}(\theta<\theta_c) &=& \tilde{A}_{\varphi}(\theta) -\tilde{A}_{\varphi}(0) \\ \nonumber {A}'_{\varphi}(\theta>\theta_c) &=& \tilde{A}_{\varphi}(\theta) -\tilde{A}_{\varphi}(\pi), \end{eqnarray} and such that \begin{equation} \label{polesA} {A}'_{\varphi}(\theta<\theta_c) - {A}'_{\varphi}(\theta>\theta_c) = \tilde{A}_{\varphi}(\pi) - \tilde{A}_{\varphi}(0). \end{equation} From Eq. (\ref{Acircles}), the last identity measured from the poles is fixed to $2Br$. Here, we find it important to emphasize that the $\tilde{A}_{\varphi}(\theta)$ function introduced in Eq. (\ref{tildefields}) is defined smoothly on the whole surface. This will play the role of the Berry connection field or quantum vector potential in the next Sections. This will also allow us to introduce a Dirac string or a thin cylinder in the topological quantum formalism and to show that topological properties can be revealed from the poles of the sphere (only) through fields $\tilde{A}_{\varphi}$ defined smoothly on the whole surface.
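The two patches in Eqs. (\ref{vecpot})-(\ref{polesA}) can be checked symbolically. The following minimal sketch (in Python with sympy, assuming a constant amplitude $B$ and a fixed radius $r$, both illustrative) verifies that each patch satisfies Eq. (\ref{differential}), that the mismatch between the patches equals $2Br$ as in Eq. (\ref{Acircles}), and that each smooth field vanishes at its pole:
\begin{verbatim}
# Symbolic check of the two-patch solution (illustrative sketch).
import sympy as sp

theta, B, r = sp.symbols('theta B r', positive=True)
A_north = 2*B*r*sp.sin(theta/2)**2    # A'_phi for theta < theta_c
A_south = -2*B*r*sp.cos(theta/2)**2   # A'_phi for theta > theta_c

# Both patches obey d/dtheta A'_phi = B r sin(theta), Eq. (differential)
assert sp.simplify(sp.diff(A_north, theta) - B*r*sp.sin(theta)) == 0
assert sp.simplify(sp.diff(A_south, theta) - B*r*sp.sin(theta)) == 0

# Patch mismatch at any matching angle equals 2 B r, Eq. (Acircles)
assert sp.simplify(A_north - A_south - 2*B*r) == 0

# Each smooth field vanishes at its own pole
assert A_north.subs(theta, 0) == 0 and A_south.subs(theta, sp.pi) == 0
\end{verbatim}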
For our purposes, it is useful to introduce the Gauss law on the surface of the sphere $(S^2)$ through the smooth fields ${A}'_{\varphi}$ which will then allow for a simple re-formulation in a flat metric. On the one hand, with the area measure in spherical coordinates $r^2\sin\theta d\theta d\varphi$, we have \begin{equation} \label{Phiclassical} \Phi = Br^2\left(\int_0^{2\pi} d\varphi\right)\left(\int_0^{\pi} \sin\theta d\theta \right) = 4\pi r^2 B(r). \end{equation} From $\bm{\nabla}\cdot{\bf B}=0$ for $r\neq 0$, we verify that $4\pi r^2 B(r)$ is independent of the radius such that we can re-interpret $B=\frac{q_m}{2 r^2}=B(r)$ with the encircled magnetic charge $q_m=\frac{\Phi}{2\pi}$. On the other hand, it is also useful to re-interpret Eq. (\ref{Phiclassical}) in flat space $(\theta,\varphi)$ with the area measure $r^2d\theta d\varphi$ through the introduction of $F_{\theta\varphi}=\frac{1}{r}\partial_{\theta}{A}'_{\varphi} = B\sin\theta$. For a sphere with a radius $r$, we then identify \begin{equation} \label{Fclassical} \Phi = r^2\left(\int_0^{2\pi} d\varphi \right)\left(\int_0^{\pi} F_{\theta\varphi} d\theta \right) = B(r)(4\pi r^2). \end{equation} This is also equivalent to \begin{equation} \label{Phi} \Phi = 2\pi r^2\left(\int_0^{\theta_c^-} F_{\theta\varphi} d\theta + \int_{\theta_c^+}^{\pi} F_{\theta\varphi} d\theta \right). \end{equation} Therefore, \begin{equation} \label{Phi'} \Phi = 2\pi r\left({A}'_{\varphi}(\theta_c^-) - {A}'_{\varphi}(0) + {A}'_{\varphi}(\pi) - {A}'_{\varphi}(\theta_c^+)\right). \end{equation} The smooth fields are defined to be zero at the two poles, such that \begin{equation} \label{circles} \Phi = 2\pi r\left({A}'_{\varphi}(\theta_c^-) - {A}'_{\varphi}(\theta_c^+)\right) = 2\pi r(\tilde{A}_{\varphi}(\pi) - \tilde{A}_{\varphi}(0)). \end{equation} The last identity measures the magnetic flux from the poles of the sphere. The last step in this equation reveals that topological properties can be measured from the poles (only) with a smooth field defined on the whole surface. This will play a key role in the developments of the next Sections related to quantum physics and topological band structures. In particular, this proof can be generalized to multiple topological spheres from geometry as in Sec. \ref{smooth}. We can also interpret Eq. (\ref{Fclassical}) as \begin{equation} \label{curl} \Phi = \iint_{S^2} {\bf F} \cdot d^2{\bf s} \end{equation} with $d^2{\bf s}=d\varphi d\theta {\bf e}_r$ and ${\bf F}=\bm{\nabla}\times {\bf A}'$. As mentioned above, through the application of the Gauss theorem, the double integral should be thought of as $\oiint=\iint_{S^2}$. In that sense, Eq. (\ref{circles}) can then be viewed as a consequence of Stokes' theorem on each hemisphere where ${A}'_{\varphi}$ is independent of $\varphi$ (and $A'_{\theta}=0$ or $\partial_{\varphi} A'_{\theta}=0$). We underline here that Eq. (\ref{smoothclassical}) allows a re-interpretation of the curved space into a flat metric similarly to Cartesian coordinates. In spherical coordinates, we have the line element defined through $dl^2 = dx^2+dy^2+dz^2 = r^2 d\theta^2 + r^2\sin^2\theta d\varphi^2$. This is equivalent to defining the (differential) vector \begin{equation} \label{geodesic} d{\bf l} = r d\theta {\bf e}_{\theta} + r\sin\theta d\varphi {\bf e}_{\varphi}, \end{equation} with ${\bf e}_{\theta}$ and ${\bf e}_{\varphi}$ being unit vectors along the polar and azimuthal angle directions.
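Before re-expressing these relations along circles parallel to the equator, we note that Eqs. (\ref{Fclassical})-(\ref{circles}) can be illustrated numerically; a short sketch with the illustrative choice $B=r=1$:
\begin{verbatim}
# Flux from the surface vs. flux from the poles (B = r = 1 assumed).
import numpy as np

B, r = 1.0, 1.0
theta = np.linspace(0.0, np.pi, 20001)
F = B*np.sin(theta)                               # F_{theta phi}
# trapezoidal quadrature of 2*pi*r^2 * int F dtheta
flux_surface = 2*np.pi*r**2*np.sum(0.5*(F[1:] + F[:-1])*np.diff(theta))

A_tilde = lambda th: -B*r*np.cos(th)              # smooth field on the whole sphere
flux_poles = 2*np.pi*r*(A_tilde(np.pi) - A_tilde(0.0))

print(flux_surface, flux_poles, 4*np.pi*r**2*B)   # all approx 12.566
\end{verbatim}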
From Eq. (\ref{geodesic}), for a sphere of radius unity, we have \begin{equation} {\bf A}\cdot d{\bf l} = A_{\theta}d\theta + {A}'_{\varphi} d\varphi, \end{equation} which is another way to visualize the first equality in Eq. (\ref{circles}) through circles parallel to the equator. As a remark, we observe that we also have \begin{eqnarray} A_{\varphi}(\theta<\theta_c) = Br\tan\frac{\theta}{2} \end{eqnarray} and \begin{eqnarray} A_{\varphi}(\theta>\theta_c)=-\frac{Br}{\tan\frac{\theta}{2}}. \end{eqnarray} Then, we verify that these functions go to zero at the poles of their respective regions. We show below that the fields ${A}'_{\varphi}$ and Eqs. (\ref{circles}) from the curved space are useful for understanding the physical properties of quantum and topological lattice models. \subsection{Radial Magnetic Field from Time-Dependent Electric Field and Skin Effect} \label{electricfield} Here, we discuss a realization of such a radial magnetic field in the vicinity of a metallic surface on a sphere produced by a time-dependent electric field, as a skin effect. Waves with circular polarizations will induce a time-dependent momentum boost for a charge on the surface parallel to the azimuthal angle. Here, the analysis of the charged particle and of the electromagnetism is purely classical assuming that the sphere is sufficiently macroscopic. This Section then shows that a `mean' magnetic flux can be produced on the surface of the sphere through a Floquet perturbation periodic in time already at a classical level. These waves will also intervene in the quantum situation with a Bloch sphere when coupling an electric dipole to circularly polarized light or a magnetic spin-$\frac{1}{2}$ to a rotating magnetic field. We suppose a circularly polarized wave propagating in the $z$ direction from the point of view of Arago and Fresnel. We describe here the skin effect produced by an incoming electromagnetic wave on a particle with charge $e$ and mass $m$ such that ${\bf p}=m{\bf v}$. For simplicity, we study the response in the equatorial plane of the sphere where the interaction between matter and the electromagnetic wave is the most prominent, meaning that the response in the equatorial plane grows with the perimeter of the sphere, related to the typical number of charged particles on the surface. From polar coordinates in the equatorial plane corresponding to $z=0$, in the case of a right-moving $(+)$ or left-moving $(-)$ wave moving with the rotating frame this corresponds to setting $\varphi=\mp \omega t$. Then, we have the correspondence of vectors $-i e^{-i\omega t}({\bf e}_x \mp i {\bf e}_y) = \mp {\bf e}_{\varphi} - i {\bf e}_r$. Here, ${\bf e}_x$ and ${\bf e}_y$ refer to unit vectors in Cartesian coordinates in the plane. A vector potential ${\bf A}_{\pm}(t)=A_0e^{-i\omega t}({\bf e}_x\mp i {\bf e}_y)$ for $z=0$ would then correspond to a static electric field in the moving frame ${\bf E}_{\pm}=-i E_0e^{-i\omega t}({\bf e}_x \mp i {\bf e}_y) = E_0(\mp {\bf e}_{\varphi} - i {\bf e}_r)$ with a real component along the azimuthal angle $\mp E_0 {\bf e}_{\varphi}$. Here, we suppose that on the surface of the sphere we have a time-dependent electric field ${\bf E}(t)=E_0 e^{-i\omega t} e^{i k z}{\bf e}_{\varphi}$ with $E_0$ fixed, corresponding to an $AC$ perturbation in time in the rotating frame. Within this time-dependent Floquet perturbation we will then derive an effective real radial magnetic field from an average over a time period.
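Before proceeding, the rotating-frame identity used above for the polarization vectors can be verified symbolically; a minimal sketch covering both signs of the circular polarization, with $\varphi=\mp\omega t$:
\begin{verbatim}
# Check: -i e^{-i w t}(e_x -/+ i e_y) = -/+ e_phi - i e_r with phi = -/+ w t.
import sympy as sp

w, t = sp.symbols('omega t', real=True)
for sign in (+1, -1):                  # upper/lower sign of the (+/-) wave
    phi = -sign*w*t
    e_x, e_y = sp.Matrix([1, 0]), sp.Matrix([0, 1])
    e_r = sp.Matrix([sp.cos(phi), sp.sin(phi)])
    e_phi = sp.Matrix([-sp.sin(phi), sp.cos(phi)])
    lhs = -sp.I*sp.exp(-sp.I*w*t)*(e_x - sign*sp.I*e_y)
    rhs = -sign*e_phi - sp.I*e_r
    diff = (lhs - rhs).applyfunc(lambda z: sp.expand(sp.expand_complex(z)))
    assert diff == sp.zeros(2, 1)
\end{verbatim}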
Newton's equation for a charge $e$ with $r=1$ on the surface of the sphere then results in \begin{equation} \dot{p}_{\varphi}(t) = eE_0 e^{-i\omega t} e^{i k z}. \end{equation} We will analyse the response of a charge-$e$ particle in the equatorial plane with $z\rightarrow 0$ such that we can equally use cylindrical coordinates $(r,\varphi,z)$ and the polar angle also corresponds to the azimuthal angle in spherical coordinates. Integrating this equation, we obtain \begin{equation} p_{\varphi}(t) = \frac{eE_0}{(-i\omega)}\left( e^{-i\omega t} -1 \right) e^{ikz}. \end{equation} If we develop the response with $z\rightarrow 0$, then $p_{\varphi}(t)$ acquires both real and imaginary parts \begin{equation} \hbox{Re} p_{\varphi}(t) = \frac{e E_0}{\omega}\sin(\omega t) \end{equation} \begin{equation} \hbox{Im} p_{\varphi}(t) = -\frac{e E_0}{\omega} 2\sin^2\frac{\omega t}{2}, \end{equation} reflecting an oscillating response. The real part produces a sinusoidal current associated with a charge $e$. The $\hbox{Im} p_{\varphi}(t)$ component is important to induce a real radial magnetic field when averaging over a Floquet time period. Since we define the incident wave propagating in the $z$ direction, we can equally work in cylindrical coordinates $(r,\varphi,z)$. From the Maxwell-Faraday equation $\bm{\nabla}\times {\bf E} = -\frac{\partial {\bf B}}{\partial t}$, we obtain an induced radial magnetic field on the surface such that \begin{equation} B_r(t) = + \frac{ik}{e} p_{\varphi}(t). \end{equation} At time $t=0$, we fix $B_r=0$ such that we study the magnetic response in the presence of the light-matter interaction related to $p_{\varphi}(t)$ for $t>0$. Keeping the factor $e^{ikz}$, we also identify a relation between $\bm{\nabla}\times{\bf B}$ and the moving particle \begin{equation} \bm{\nabla}\times{\bm B} = -\frac{k^2}{e}p_{\varphi} {\bm e}_{\varphi}. \end{equation} Inserting the form of $p_{\varphi}(t)$, in the presence of the light-matter interaction we can then interpret Amp\`ere's law as: \begin{equation} \bm{\nabla}\times{\bf B} = \frac{1}{c^2}\frac{\partial {\bf E}}{\partial t} + \mu_0 \bar{\bf{J}} \end{equation} with \begin{equation} \mu_0\bar{\bf{J}} = \frac{ik}{c}E_0 e^{ikz}{\bf {e}}_{\varphi}. \end{equation} This corresponds to a current contribution per unit area. Here, $c$ refers to the speed of light. From the cylindrical coordinates, on the surface of the sphere around the equatorial plane, we have $\bm{\nabla}\cdot{\bf B}=0$. The real part of $B$ then evolves as \begin{equation} \hbox{Re} B_r(t) = -\frac{k}{e}\hbox{Im} p_{\varphi}(t) = \frac{2 E_0 k}{\omega}\sin^2\frac{\omega t}{2}. \end{equation} The electric field is periodic in time with period $T=\frac{2\pi}{\omega}$. We can then define an effective averaged magnetic field over this time period \begin{equation} \bar{B}_r=\frac{1}{T}\int_0^T \hbox{Re} B_r(t) dt = \frac{E_0 k}{\omega} = \frac{E_0}{c}. \end{equation} In this way, we produce classically an effective (mean) magnetic flux on the surface of the sphere from the horizon. The mean energy stored in this magnetic field $\frac{1}{2}\mu_0^{-1} \bar{B}_r \bar{B}_r = \frac{1}{2}\epsilon_0 |E_0|^2$ is then the symmetric entity related to the electric energy density $\frac{1}{2}\epsilon_0|{\bf E}(t)|^2=\frac{1}{2}\epsilon_0 |E_0|^2$. The energy in the fluctuations related to $B_r B_r^*$ is associated with the mean kinetic energy of the particle(s).
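The Floquet average leading to $\bar{B}_r=E_0/c$ can be reproduced symbolically; a minimal sketch (with $k=\omega/c$ assumed):
\begin{verbatim}
# Time average of Re B_r(t) over one period T = 2*pi/omega.
import sympy as sp

t, w, E0, c = sp.symbols('t omega E_0 c', positive=True)
k = w/c
ReB = 2*E0*k/w*sp.sin(w*t/2)**2
T = 2*sp.pi/w
avg = sp.integrate(ReB, (t, 0, T))/T
print(sp.simplify(avg))                # -> E_0/c
\end{verbatim}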
In addition to the radial magnetic field, the Maxwell-Faraday equation in cylindrical coordinates also produces a magnetic field along the $z$ direction. Introducing the polar angle $z=\cos\theta$ on the unit sphere, we have \begin{equation} B_z = \frac{E_0}{i\omega}(e^{-i\omega t} - 1)e^{ik\cos\theta}. \end{equation} From the equatorial plane, this then gives rise to \begin{equation} \hbox{Re} B_z = -\frac{E_0}{\omega}\sin(\omega t)-\frac{2kE_0}{\omega}\sin^2\frac{\omega t}{2}\cos\theta, \end{equation} producing a finite value \begin{equation} \bar{B}_z = -\frac{E_0}{c}\cos\theta \end{equation} if we average the response over a time period related to a loop of current in the equatorial plane. The averaged (sinusoidal) current on the loop is zero, but we identify a finite real part of the magnetic field along the $z$ direction from the wave propagation factor $e^{i kz}=e^{ik\cos\theta}$ developed from the equatorial plane (where we have assumed that the number of particles would be related to the perimeter of the sphere). Assuming that the measured magnetic flux is real, we can then neglect the effect of the induced imaginary component $\hbox{Im}B_z=\frac{2E_0}{\omega}\sin^2\frac{\omega t}{2}$. Rotating the sphere such that the south pole becomes the north pole and the north pole the south pole through the modification $\theta\rightarrow \theta+\pi$, we then identify the (real component of the) magnetic field as \begin{equation} \hbox{Re}{\bf B} = \frac{E_0}{c}(\sin\theta\cos\varphi, \sin\theta\sin\varphi,\cos\theta), \end{equation} which precisely corresponds to a radial magnetic field on a unit sphere with a positive amplitude. This radial magnetic field then produces a flux on the surface according to Eq. (\ref{Fclassical}). Hereafter, we introduce applications of such a radial magnetic field to quantum physics and spin degrees of freedom through the same smooth-fields formalism. \section{Quantum Physics, Smooth Fields and Topological Aspects} \label{quantumphysics} \subsection{Topological Spin-$\frac{1}{2}$} \label{spin1/2} Here, we introduce a quantum analogue of the smooth field $\tilde{A}_{\varphi}$, the Berry connection or vector field defined as \cite{Berry} \begin{equation} \label{A} A_{\varphi}=-i\langle \psi| \frac{\partial}{\partial\varphi} |\psi\rangle. \end{equation} (The $-$ sign is chosen to have a precise link with the momentum or wave-vector if $\varphi$ were to refer to a position variable.) Since the vector potential plays a role similar to the momentum, here we show a simple correspondence between the classical and quantum formalisms on the sphere. For quantum mechanics, the sphere here refers to the Bloch sphere of a spin-$\frac{1}{2}$ particle. Suppose we start with a radial magnetic field in quantum mechanics such that the Hamiltonian reads \begin{equation} {H}=-{\bf d}\cdot\mathbfit{\sigma} \end{equation} with \begin{equation} \label{dvector} {\bf d}(\varphi,\theta) = d(\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta) = (d_x,d_y,d_z). \end{equation} We define the Hilbert space with $\{|+\rangle_z; |-\rangle_z\}$ corresponding to the eigenstates in the $z$ direction and we re-write these states in terms of two-dimensional orthogonal unit vectors.
In this way, the eigenstates take the simple form \begin{equation} \label{eigenstates} |\psi_+\rangle = \left( \begin{array}{lcl} \cos\frac{\theta}{2}e^{-i\frac{\varphi}{2}} \\ \sin\frac{\theta}{2} e^{i\frac{\varphi}{2}} \end{array} \right), \hskip 0.25cm |\psi_-\rangle = \left( \begin{array}{lcl} -\sin\frac{\theta}{2} e^{-i\frac{\varphi}{2}} \\ \cos\frac{\theta}{2} e^{i\frac{\varphi}{2}} \end{array} \right). \end{equation} The eigenenergies are respectively $-|{\bf d}|$ and $+|{\bf d}|$ for the eigenstates $|\psi_+\rangle$ and $|\psi_-\rangle$. Here, we represent the wave functions in a specific gauge ($\varphi$-representation). The topological responses will be formulated in a gauge-invariant way from the geometry. We study the topological response related to the lowest-energy eigenstate $|\psi_+\rangle$. A similar calculation can be reproduced for the other eigenstate such that the two energy eigenstates will be characterized through opposite topological numbers. For the $|\psi_+\rangle$ eigenstate, \begin{equation} \label{cosine} A_{\varphi} = -\frac{\cos\theta}{2}, \end{equation} and $A_{\theta}=0$. The sphere as a fiber bundle then allows one to describe the topological properties only through one component of the vector ${\bm A}$ leading to simple analytical calculations. This will also justify the choice of a boundary (interface) defined parallel to the equatorial line in the application of Stokes' theorem with two regions or hemispheres. It is perhaps important to emphasize here that such a boundary can be viewed as a mathematical device to define the smooth fields appropriately. On the other hand, since we can gently move this interface close to the two poles, such a boundary can shrink to a point defining the pole or equivalently to a small circle encircling one of these poles. Related to the definitions in the classical situation of Sec. \ref{potential}, we introduce $A'_{\varphi}(\theta<\theta_c)$ and $A'_{\varphi}(\theta>\theta_c)$ \cite{HH,C2} such that \begin{eqnarray} \label{smoothfields} A'_{\varphi}(\theta<\theta_c) &=& A_{\varphi}(\theta) - A_{\varphi}(0) = \sin^2\frac{\theta}{2} \\ \nonumber A'_{\varphi}(\theta>\theta_c) &=& A_{\varphi}(\theta) - A_{\varphi}(\pi) = -\cos^2\frac{\theta}{2}. \end{eqnarray} Here, $\theta_c$ refers to the angle of the boundary (interface). It is perhaps important to emphasize here that within this definition the field $A_{\varphi}(\theta)$ is defined smoothly on the whole sphere and that this equation also implies that the singularity at the place of the topological charge is transported in a thin Dirac cylinder from the center of the sphere to each pole (see Sec. \ref{smooth}). The field $A'_{\varphi}$ is discontinuous at the interface and will also reveal the topological properties from Stokes' theorem. These equations are indeed very similar to Eqs. (\ref{vecpot}) if we set $B=\frac{1}{2}$ for a sphere with $r=1$ and if we identify $\tilde{A}_{\varphi}(\theta)$ for the classical model precisely to $A_{\varphi}(\theta)$ for quantum physics. The Berry connection in Eq. (\ref{cosine}) precisely corresponds to the introduced classical field $\tilde{A}_{\varphi}$. The effective radius $r=1$ can also be understood from the fact that the spin eigenvalues are quantized such that $\sigma_z=\pm 1$. In that case, Eq. (\ref{Fclassical}) then leads to \begin{equation} C = \frac{\Phi}{2\pi} = \frac{1}{2\pi} \left(\int_0^{2\pi} d\varphi \right)\left(\int_0^{\pi} F_{\theta\varphi} d\theta \right) = 1.
\label{formulaC} \end{equation} We introduce $F_{\theta\varphi}=\partial_{\theta}A'_{\varphi}=\frac{\sin\theta}{2}$ with ${\bf F}=\bm{\nabla}\times{\bf A}=\bm{\nabla}\times{\bf A}'$, which is a smooth (gauge-invariant) and continuous function on the whole surface of the sphere. Similarly to the effect of a charge $e$ producing a radial electric field or a Dirac monopole producing a radial magnetic field, we can interpret $C=1$ as an effective topological charge encircled by the surface; see Fig. \ref{Edges.pdf}. The topological charge then turns the sphere into a donut (torus) or cup from the mathematics of surfaces counting the number of holes $g$. With the presence of a singularity or topological charge inside the sphere, the Euler characteristic $\chi=2-2g$ changes from $2$ to $0$ with $g=C$ in agreement with the flux integration of the Berry curvature on the Riemann surface and the Poincar\' e-Hopf theorem. The definition of $C$ is also in accordance with the first Chern topological number \cite{Chern}. \begin{center} \begin{figure}[t] \includegraphics[width=0.47\textwidth]{SphereCylinder} \caption{(Left) Definition of the smooth fields ${A}'_{\varphi}$ where the north and south hemispheres or regions meet at the angle $\theta_c$. The function $A_{\varphi}$ is smoothly defined on the whole surface. The topology of the sphere with a radial magnetic field is then similar to that of a torus or cup with a handle in the presence of a topological charge. From the point of view of the smooth fields, we can associate two thin cylinders inside the sphere joining each pole to the equatorial plane around the topological charge and transporting $A_{\varphi}(0)$ and $A_{\varphi}(\pi)$ respectively in the definitions of $A'_{\varphi}(\theta<\theta_c)$ and $A'_{\varphi}(\theta>\theta_c)$. At a pole, a purple circle can be adiabatically deformed from a small radius $r_c\rightarrow 0$ onto a unit radius from Stokes' theorem since we can define $\bm{\nabla}\times {\mathbf{A}}=0$ outside the surface $S^2$. Therefore, we have ${A}_{\varphi}(0) =\lim_{\theta \rightarrow 0}{A}_{\varphi}(r_c,\theta,\varphi)=\lim_{\theta\rightarrow 0}{A}_{\varphi}(\theta)$ and similarly ${A}_{\varphi}(\pi) =\lim_{\theta \rightarrow \pi}{A}_{\varphi}(r_c,\theta,\varphi)=\lim_{\theta\rightarrow \pi}{A}_{\varphi}(\theta)$. The topological charge in green is encircled by two circles in the equatorial plane of the sphere and the topological information is transported on a thin cylinder from the equator to a pole. (Right) Equivalent topological representation on the cylinder with a topological charge therein and a uniform Berry curvature ${\bf F}$. The cylinder geometry reveals, analytically, two protected chiral edge modes with a quantized current.} \label{Edges.pdf} \end{figure} \end{center} \vskip-0.5cm We can also re-write this equality locally at the poles \begin{equation} \label{C} C = \int_0^{\pi} \frac{\sin\theta}{2} d\theta = -\frac{1}{2}[\cos \theta ]_0^{\pi} = (A_{\varphi}(\pi) - A_{\varphi}(0)). \end{equation} From the Ehrenfest theorem, we then identify $\langle \psi_+ |\sigma_z |\psi_+\rangle = \langle \sigma_z\rangle = \cos\theta$ such that \begin{equation} \label{polesC} C = (A_{\varphi}(\pi) - A_{\varphi}(0)) = \frac{1}{2}\left(\langle\sigma_z(0)\rangle - \langle\sigma_z(\pi)\rangle\right). \end{equation} This sphere model is realized in circuit quantum electrodynamics \cite{Roushan,Boulder} and can equally be realized in atomic physics.
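Before turning to the dynamical detection, Eq. (\ref{formulaC}) can be cross-checked numerically with the gauge-invariant plaquette (link) construction often referred to as the Fukui-Hatsugai-Suzuki method. The sketch below (in Python; the grid sizes and the amplitude $d$ are illustrative assumptions) diagonalizes $H=-{\bf d}\cdot\mathbfit{\sigma}$ on a $(\theta,\varphi)$ grid and sums the Berry phases of all plaquettes:
\begin{verbatim}
# Chern number of the lowest band of H = -d.sigma on the Bloch sphere.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi, d=1.0):
    dv = d*np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi), np.cos(theta)])
    H = -(dv[0]*sx + dv[1]*sy + dv[2]*sz)
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                  # lowest band |psi_+>

Nt, Np = 60, 60
thetas = np.linspace(0.0, np.pi, Nt)
phis = np.linspace(0.0, 2*np.pi, Np, endpoint=False)
u = np.array([[ground_state(th, ph) for ph in phis] for th in thetas])

def link(a, b):                        # U = <a|b>/|<a|b>|, gauge invariant
    z = np.vdot(a, b)
    return z/abs(z)

C = 0.0
for i in range(Nt - 1):
    for j in range(Np):
        jp = (j + 1) % Np              # periodicity in phi
        plaq = (link(u[i, j], u[i+1, j])*link(u[i+1, j], u[i+1, jp])
                *link(u[i+1, jp], u[i, jp])*link(u[i, jp], u[i, j]))
        C += np.angle(plaq)
print(C/(2*np.pi))                     # approx 1.0, matching C = 1
\end{verbatim}
The plaquette product cancels the arbitrary phases returned by the diagonalization, so no gauge fixing is needed on the grid.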
The topological number can be detected by rolling the spin adiabatically from north to south pole when changing the polar angle linearly in time, $\theta=vt$. The topological protection is observed when slightly deforming a sphere into an ellipse \cite{Roushan}. Then, we have \begin{equation} C = -\frac{1}{2}\int_0^{\pi} \frac{\partial\langle\sigma_z\rangle}{\partial\theta} d\theta = -\frac{1}{2}\int_0^{T_{\pi}} \frac{\partial\langle\sigma_z\rangle}{\partial t} dt, \label{drive} \end{equation} where $T_{\pi}=\frac{\pi}{v}$. Similarly to Eq. (\ref{Phi'}), Eq. (\ref{formulaC}) is equivalent to \begin{equation} \label{Cgeometry} C = \left({A}'_{\varphi}(\theta_c^-) - {A}'_{\varphi}(0) + {A}'_{\varphi}(\pi) - {A}'_{\varphi}(\theta_c^+)\right). \end{equation} Since $A'_{\varphi}(0)={A}'_{\varphi}(\pi)=0$ then this implies \begin{eqnarray} \label{CA'} C &=& A'_{\varphi}(\theta_c^-) - A'_{\varphi}(\theta_c^+) = 1 \nonumber \\ &=& A'_{\varphi}(\theta<\theta_c) - A'_{\varphi}(\theta>\theta_c). \end{eqnarray} The topological number can be viewed as two circles at the equator equally encircling the topological charge, similarly to the presence of vortices in a superfluid, such that $\chi=2-2C=0+0$. This form of the topological number, which can be understood from Stokes' theorem, then also justifies the re-interpretation of the curved space into an effective flat metric through the discussion around Eq. (\ref{geodesic}). It is also interesting to observe that Eq. (\ref{C}) gives the same result as a one-dimensional Zak phase \cite{Zak} defined here as a gradient of the polar angle \begin{equation} \label{Zak} C = \frac{1}{\pi}\int_0^{\pi} |\bm{\nabla}\theta| d\theta = 1. \end{equation} \subsection{Smooth Fields and Quantum TopoMetry} \label{smooth} Here, we show that the definition of the smooth fields can be understood from quantum topometry on the Bloch sphere related to Fig. \ref{Edges.pdf}, elaborating on a geometrical proof with two boundaries per hemisphere \cite{HH}. This proof is useful to show that the global topological number can be measured from the poles of the sphere only, simply from geometry, and it gives another view of Eq. (\ref{Cgeometry}). This proof will then be applicable to one-sphere and also to multi-sphere systems. We define the smooth fields ${\bf A}'$ equal to zero at the poles, such that the topological charge can be defined as \begin{equation} C = \frac{1}{2\pi}\iint_{S^{2'}} \bm{\nabla}\times{\bf A}'\cdot d^2{\bf s}, \label{topologicalnumber} \end{equation} with here $S^{2'}$ corresponding to the slightly modified Riemann, Poincar\' e, Bloch sphere where we subtract the two poles (two points). Introducing the smooth field ${\bf A}'$ leads to the definition of the area differential $d^2{\bf s}=d\varphi d\theta {\bf e}_r$ similarly to Secs. \ref{potential} and \ref{spin1/2}. We also have ${\bf F}=\bm{\nabla}\times{\bf A}'=\bm{\nabla}\times{\bf A}$ on $S^{2'}$ to validate $C$ as a topological number. This relation ensures the form of the smooth fields in the two hemispheres. This step then requires that ${\bf A}$ is smooth at the two poles, then leading to the azimuthal topological responses $A_{\varphi}(0)$ and $A_{\varphi}(\pi)$. If we define two regions, north and south hemispheres, meeting at the boundary angle $\theta_c$, then from Stokes' theorem applied on each region we have \begin{equation} \iint_{north'} \bm{\nabla}\times{\bf A} \cdot d^2{\bf s} = \int_0^{2\pi} \left(A_{N\varphi}(\theta,\varphi)-A_{\varphi}(0)\right) d\varphi.
\label{north'} \end{equation} To write this line, we take into account the information in the caption of Fig. \ref{Edges.pdf}. Here, $A_{N\varphi}=A_{\varphi}(\theta<\theta_c)$ refers to the azimuthal component of the Berry connection in the north region and $A_{\varphi}(0)=\lim_{\theta\rightarrow 0} A_{N\varphi}(\theta,\varphi)$. Similarly, in the south region, we have \begin{equation} \hskip -0.2cm \iint_{south'} \bm{\nabla}\times{\bf A} \cdot d^2{\bf s} = -\int_0^{2\pi} \left(A_{S\varphi}(\theta,\varphi)-A_{\varphi}(\pi)\right) d\varphi. \label{south'} \end{equation} Here, $A_{S\varphi}=A_{\varphi}(\theta>\theta_c)$ refers to the azimuthal component of the Berry connection in the south region and $A_{\varphi}(\pi)=\lim_{\theta\rightarrow \pi} A_{S\varphi}(\theta,\varphi)$. The relative $-$ sign between the two regions on the right-hand side comes from the different orientations of the two surfaces. From these two equalities, we can then define smooth fields on each region $A'_{\varphi}(\theta<\theta_c)=A_{N\varphi}(\theta,\varphi)-A_{\varphi}(0)$ and $A'_{\varphi}(\theta>\theta_c)=A_{S\varphi}(\theta,\varphi)-A_{\varphi}(\pi)$. Now, suppose we slightly move the boundary close to a pole, for instance setting $\theta_c\rightarrow 0^+$. Within our definitions, since ${\bf A}$ is smooth close to the pole this implies that $A_{S\varphi}(\theta\rightarrow 0^+)=A_{\varphi}(0)=A_{N\varphi}(\theta\rightarrow 0^+)$ and therefore that $C=A_{\varphi}(\pi)-A_{\varphi}(0)$ from Eqs. (\ref{north'}) and (\ref{south'}). This shows that the global topological Chern number can be defined locally from the poles. Slowly moving the boundary back towards the equator through small portions, from the smoothness of the fields we then conclude that we can still define $A_{S\varphi}(\theta,\varphi)=A_{N\varphi}(\theta,\varphi)=A_{\varphi}(\theta)$ for each interval such that the smooth fields can be defined as $A'_{\varphi}(\theta<\theta_c)=A_{\varphi}(\theta)-A_{\varphi}(0)$ and $A'_{\varphi}(\theta>\theta_c)=A_{\varphi}(\theta)-A_{\varphi}(\pi)$ in accordance with the definitions above (summing Eqs. (\ref{north'}) and (\ref{south'})) and with the classical definitions in Eqs. (\ref{vecpot}). Indeed, this is equivalent to saying that the relation $C=A_{\varphi}(\pi)-A_{\varphi}(0)$ can be obtained by adding Eqs. (\ref{north'}) and (\ref{south'}) for any $\theta_c$, as $C$ is uniquely defined. The smoothness of the function $A_{\varphi}(\theta)$ is also important to validate that the total flux created by the two disks at the boundary is zero and therefore that all the flux is distributed on the curved surface of the sphere. Through the definition of these smooth fields, the Berry gauge field $A_{\varphi}$ is continuous and smooth on the whole sphere and the topological number then precisely occurs through the discontinuity of the function $A'_{\varphi}$ at the interface. Yet, in each region the field $A'_{\varphi}$ is also smoothly defined. It is then interesting to observe that we can equivalently transport the discontinuity of the function $A'_{\varphi}(\theta<\theta_c)-A'_{\varphi}(\theta>\theta_c)$ back to the poles which then leads to a local interpretation of the topological number in terms of $A_{\varphi}(\pi)-A_{\varphi}(0)$. Related to Fig.
\ref{Edges.pdf}, we can precisely define two cylinders (handles) with a thin radius $r_c$ piercing or approaching the center of the sphere on each side of the boundary \begin{equation} \oint {\bf A}'(\theta_c^{\pm})\cdot{\bf dl} = \oint {\bf A}(\theta_c^{\pm})\cdot{\bf dl} -\oint_{r=r_c} {\bf A}(pole^{S,N})\cdot{\bf dl}, \end{equation} such that we effectively transport in a Dirac string of radius $r_c$ information from a pole $S,N$ down to or up to an angle $\theta_c^{\pm}$. We emphasize here that since the global topological number can be defined locally from the poles, this implies that we may then slightly deform a sphere into an ellipse or a cylinder. In Sec. \ref{cylinderformalism}, we present an analytical understanding of the smooth fields and of the occurrence of edge modes on a cylinder geometry and super-ellipse structures with uniform Berry curvatures. In Sec. \ref{Geometry}, we adiabatically deform this geometry onto generalized cubic geometries or three-dimensional geometries with corners starting from superellipses in the plane. Hereafter, we show that the smooth fields provide important information related to physical observables associated with quantum transport and the light-matter coupling. In Table I, we review the quantum symbols related to the smooth fields and observables. The last line is related to Sec. \ref{lightKM} and the Pfaffian for two-dimensional topological insulators through the response to circularly polarized light. \begin{table}[t] \caption{Quantum Formalism and Symbols} \centering \begin{tabular}{c c} \hline\hline Smooth Fields and Observables & \hskip 0.5cm Definitions \\ \hline $A_{\varphi}$ & $-i\langle \psi| \partial_{\varphi} |\psi\rangle$ \\ $A'_{\varphi}(\theta<\theta_c)$ & $A_{\varphi}-A_{\varphi}(0)$ \\ $A'_{\varphi}(\theta>\theta_c)$ & $A_{\varphi}-A_{\varphi}(\pi)$ \\ $F_{\theta\varphi}=\partial_{\theta}A'_{\varphi}$ & $C=\int_0^{\pi} F_{\theta\varphi}d\theta$ \\ $C$ & $A_{\varphi}(\pi)-A_{\varphi}(0)$\\ $C$ & $A'_{\varphi}(\theta<\theta_c) - A'_{\varphi}(\theta>\theta_c)$ \\ $C$ & $\frac{1}{2}(\langle\sigma_z(0)\rangle - \langle \sigma_z(\pi)\rangle)$ \\ $\chi$ & $2-2g=2-2C$ \\ $J_{\perp}(\theta)$ & $\frac{e}{T}A'_{\varphi}(\theta<\theta_c)$ \\ $G$, $\sigma_{xy}$ & $\frac{q^2}{h}C$ \\ $\alpha(\theta)$ & $C^2 +2A'_{\varphi}(\theta<\theta_c)A'_{\varphi}(\theta>\theta_c)$ \\ $\alpha(0) = \alpha(\pi) = 2\alpha(\frac{\pi}{2})$ & $C^2$ \\ $g_{\mu\mu}$ & $\frac{1}{2}\frac{{\cal I}(0)}{m^2} = \frac{1}{2}\frac{{\cal I}(\pi)}{m^2}$\\ $\alpha_{\uparrow}(\theta)+\alpha_{\downarrow}(\theta)$ & $|C_s|-(P({\bf k}))^2$ \\ \\ \hline % \end{tabular} \label{tableI} \end{table} \subsection{Transport from Newtonian Mechanics and Force, Quantum Mechanics and Parseval-Plancherel Theorem} \label{ParsevalPlancherel} To show the usefulness of the formalism, we study the transport on the sphere due to an electric field ${\bf E}=E{\bf e}_{x_{\parallel}}=-\bm{\nabla} V$ where the unit vector ${\bf e}_{x_{\parallel}}$ refers to the direction of the electric field. To apply the Parseval-Plancherel theorem, we will define the wave-vector through $(k_{\parallel},k_{\perp})$ on the same sphere and for simplicity we choose the direction of ${\bf e}_{x_{\parallel}}$ to be related to $k_{\parallel}$. Applying periodic boundary conditions for the wave-vector components will allow us to link with topological properties of lattice models from the reciprocal space.
Similarly to the charged particle in a magnetic field, we will navigate from classical to quantum mechanics in a coherent way. We start from the Hamiltonian for a charge $q$, mass $m$ and spin-$\frac{1}{2}$ such that the motion longitudinal to the direction of the electric field is associated with the polar angle direction: \begin{equation} H_{\parallel} = \frac{(\hbar k_{\parallel})^2}{2m} + qV - {\bf d}\cdot\mathbfit{\sigma}. \end{equation} We introduce the Planck constant $h$ such that $\hbar=\frac{h}{2\pi}$. To derive simple arguments, here the key point is not to develop the correspondence with the Laplacian in spherical coordinates but rather to start from the reciprocal space to define the unit sphere, as $(k_{\parallel},k_{\perp})=(\theta,\varphi)$. The analogy with a flat metric for the motion of the phase $\theta(t)$ can also be understood from the form of the line element on the sphere, see Eq. (\ref{geodesic}). For the motional part of the Hamiltonian, we should then satisfy $\dot{x}_{\parallel} = \frac{p_{\parallel}}{m}=\partial_{p_{\parallel}}H$ and $\dot{p}_{\parallel} = - \partial_{x_{\parallel}}H$, consistent with the Newton equation $ma_{\parallel}=\dot{p}_{\parallel}=\hbar \dot{k}_{\parallel}=qE$ and the Coulomb force. Then, we obtain the equation \begin{equation} \label{distancetime} \theta(t) = k_{\parallel}(t) = \frac{q}{\hbar} E t. \end{equation} This equation can also be understood from the Lagrangian form in spherical coordinates for a charged particle of mass $m$ \begin{equation} \label{lagrangian} L = \frac{1}{2}m\dot{\tilde{\theta}}^2 + \frac{1}{2}m\sin^2\tilde{\theta}\dot{\tilde{\varphi}}^2 - qV(\tilde{\theta}) \end{equation} where $\tilde{\theta}=x_{\parallel}$ and $\tilde{\varphi}=x_{\perp}$ now describe the polar and azimuthal angles of the unit sphere associated with the real space variables $(x_{\parallel},x_{\perp})$. From the Euler-Lagrange equations, we have \begin{equation} \label{Lagrangian} m\ddot{\tilde{\theta}} = qE + m\sin\tilde{\theta}\cos\tilde{\theta} \dot{\tilde{\varphi}}^2= \hbar \dot{\theta}, \end{equation} with the relation between potential and electric field $E=-\frac{dV(\tilde{\theta})}{d\tilde{\theta}}$. We start at the north pole such that $\tilde{\theta}=0$ at time $t=0$. From the first term in Eq. (\ref{Lagrangian}), this gives rise to $m\dot{\tilde{\theta}} = qEt = \hbar \theta$ and to $\tilde{\theta} = \frac{1}{2m} qE t^2$. In a short time $dt$, when writing the differentials, the term in $\sin\tilde{\theta}$ would then only give negligible $dt^2$ corrections. For the transverse direction, the Euler-Lagrange equation in a short time $dt$ leads to $\ddot{\tilde{\varphi}}=\dot{\varphi}=0$ or equivalently to $\dot{k}_{\perp}=0$. At initial time, we can assume that $k_{\perp}=0$ which leads to $\dot{\tilde{\varphi}}=0$ in Eq. (\ref{Lagrangian}). In this way, this stabilizes an angle $\theta_E(t)=\frac{qE}{\hbar}t$ in agreement with the form of the line element in Eq. (\ref{geodesic}) along the polar angle direction. In the spherical geometry, including the effect of a gravitational potential $-mgz=-mg\cos\tilde{\theta}$ would then result in a term $mg\sin\tilde{\theta}$ accompanying $qE$ in Eq. (\ref{Lagrangian}). Developing such a term from the equatorial plane with $\tilde{\theta}\sim \frac{\pi}{2}$, we then observe that the electric field plays a role similar to the gravitational potential in this steep region.
On the one hand, the response of the charged particle can be viewed as an analogue of a response to a gravitational potential from the equatorial plane. On the other hand, the effect of the gravitational potential may be precisely calibrated from time measurements of a particle reaching the south pole at different electric fields. The gravitational potential may also have useful applications as an entanglement source \cite{UCL}. Here, we study the perpendicular response to the electric field in a quantum mechanical form to show a quantized response induced by the topological properties coming from $- {\bf d}\cdot\mathbfit{\sigma}$ in the Hamiltonian. We derive the topological current in the direction transverse to the polar direction from the Parseval-Plancherel theorem. We define the averaged transverse current density as \cite{HH}
\begin{eqnarray}
J_{\perp} &=& \frac{q}{T}\int_0^T \frac{d\langle x_{\perp}\rangle}{dt} dt = \frac{q}{T}\left(\langle x_{\perp}\rangle(T)-\langle x_{\perp}\rangle(0)\right) \\ \nonumber
&=& \oint \left(J_{\varphi}(\varphi,T)-J_{\varphi}(\varphi,0)\right)d\varphi,
\end{eqnarray}
such that the time $T$ in the protocol is related to the angle $\theta$ through the longitudinal dynamics $\theta=q E T/\hbar = v^* T$ and $\varphi$ represents the wave-vector associated with the $x_{\perp}$ direction in real space. For a fixed angle $\varphi$, we identify
\begin{equation}
\label{Jtransverse}
J_{\varphi}(\varphi,\theta) = \frac{i q}{4\pi T}\left(\psi^{*}\frac{\partial}{\partial \varphi}\psi - \frac{\partial\psi^*}{\partial\varphi}\psi\right) = \frac{i q}{2\pi T}\psi^*\frac{\partial}{\partial \varphi}\psi
\end{equation}
and the factor $\frac{1}{4\pi}$ absorbs the normalization of the motional part of the wave-function such that within our definition $\langle \psi| \psi\rangle=1$ with $|\psi\rangle=|\psi_{motional}\rangle\otimes|\psi_+\rangle$. It should be noted that in the absence of the magnetic field, $J_{\perp}=0$, which is also in accordance with the classical equation of motion $\dot{k}_{\perp}=0$. Furthermore, solutions of the quantum equation
\begin{equation}
i\hbar\frac{\partial}{\partial t}\psi(\varphi,t) = \frac{\hbar^2\varphi^2}{2m} \psi(\varphi,t),
\end{equation}
are of the form $\psi(\varphi,t) = \frac{1}{\sqrt{4\pi}}e^{-\frac{i}{\hbar}\frac{\hbar^2\varphi^2}{2m} t}$, which include solutions with $\pm \varphi$. Averaging the two contributions for positive and negative $\varphi$ is equivalent to saying that $k_{\perp}=0$ in the semi-classical approach. Therefore, in our situation, for $q=e>0$ (such that $\theta>0$), the current $J_{\perp}$ probes the topological linear response to the electric field induced by the presence of the radial magnetic field on the sphere acting on the spin degrees of freedom. In this way, Eq. (\ref{Jtransverse}) turns into
\begin{eqnarray}
\label{Jperp}
|J_{\perp}(\theta)| = \frac{e}{2\pi T} \oint A'_{N\varphi}(\varphi,\theta)d\varphi &=& \frac{e}{T}A'_{\varphi}(\theta<\theta_c), \nonumber \\
\end{eqnarray}
where $A'_{N\varphi}$ is identical to the smooth field $A'_{\varphi}(\theta<\theta_c)$ defined in Eq. (\ref{smoothfields}). The particle starts at $t=0$ from the north pole and we assume here that at time $T$ the particle remains inside the same $N$ hemisphere (to ensure an adiabatic protocol) such that we can adjust $\theta_c$ accordingly, closer to the south pole. In fact, to obtain a gauge-invariant form, it is appropriate to fix here $\theta_c=\pi$ corresponding to the south pole.
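The adiabatic protocol invoked here can also be simulated directly. The following sketch is an illustrative check of ours (units $\hbar=d=1$, drive along the $\varphi=0$ meridian); it integrates the Schr\"odinger equation for $\theta=\omega t$ with $\omega$ small compared to the gap and verifies the magnetization form of the topological number quoted in Table \ref{tableI}, $C=\frac{1}{2}(\langle\sigma_z(0)\rangle-\langle\sigma_z(\pi)\rangle)=1$.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega, dt = 0.01, 0.01         # omega << gap 2d : adiabatic drive theta = omega t
psi = np.array([1.0, 0.0], dtype=complex)   # ground state at the north pole
t = 0.0
while omega*t < np.pi:
    th = omega*t
    H = -(np.sin(th)*sx + np.cos(th)*sz)
    w, v = np.linalg.eigh(H)   # exact step propagator exp(-i H dt)
    psi = v @ (np.exp(-1j*w*dt) * (v.conj().T @ psi))
    t += dt
sz_pi = (psi.conj() @ sz @ psi).real
print(0.5*(1.0 - sz_pi))       # -> close to C = 1
\end{verbatim}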
The typical time scale to reach the south pole is defined through $\theta(T)=\pi=\frac{q}{\hbar}ET$. When fixing $\theta_c=\pi$, this implies $A'_{\varphi}(\theta_c^+)=0$ in Eq. (\ref{CA'}), implying the simple relation $A'_{\varphi}(\theta_c^-)=C$. It is interesting to observe that the relation between the transverse pumped current and the topological number is shown without invoking any specific form of the wavefunction of the system, such that this relation remains valid for multiple spheres. In that case, from Eq. (\ref{CA'}), we have $A'_{N\varphi}(\varphi,\theta\rightarrow\theta_c)=C$ and therefore the produced current density is \cite{HH}
\begin{equation}
\label{Jperp2}
|J_{\perp}(T)| = \frac{e}{T} C.
\end{equation}
The transverse pumped charge is $Q=J_{\perp}T=eC$ and it is therefore quantized in units of $e$ through the integer $C$. This is equivalent to the particle acquiring a macroscopic current
\begin{equation}
\label{polarization}
|J_{\perp}|T=\frac{e\hbar (k_{\perp}T)}{m} = eC.
\end{equation}
The integrated current on the whole surface of the sphere then reveals a quantization related to the presence of the topological charge. Analogously to the Bohr quantization of the angular momentum in an atom, here the transverse pumped response is quantized through $C$. In Sec. \ref{Observables}, we show a correspondence between this smooth-fields' formalism and general many-body theory, relating then to the Karplus-Luttinger velocity \cite{KarplusLuttinger} and transport \cite{Thouless1983}. It is perhaps also relevant to mention that the formula $|J_{\perp}|T=eC$ can be equivalently obtained by fixing the interface at any angle $\theta_c$ through Eq. (\ref{CA'}) via the transportation of a charge $e$ from the north pole to this interface and simultaneously the transportation of a charge $-e$ from the south pole to the same interface. At this stage, one may question the feedback of the produced current $J_{\perp}$ onto the semiclassical analysis in Eq. (\ref{lagrangian}) which has resulted in the identification $\theta(T)=\pi=\frac{q}{\hbar}ET$. From $A'_{\varphi}(\theta<\theta_c)=\sin^2\frac{\theta}{2}$, we observe that this gives rise to a small correction in $\dot{\tilde{\varphi}}$ (being also related to $J_{\perp}$) proportional to $E^2 t$ at a time $t$, which then means a (very small) correction of the order of $E^4 t^2$ when evaluating the influence of $m\sin\tilde{\theta}\cos\tilde{\theta} \dot{\tilde{\varphi}}^2$ on $\dot{\theta}$. Therefore, we can safely neglect this small effect in the evaluation of the angle $\theta(t)$, such that the identification $\theta(T)=\pi=\frac{q}{\hbar}ET$ remains valid.

\subsection{Dynamics, Energetics and Fourier Series}

Here, we develop further the transport from the point of view of a charge which navigates from north to south pole adiabatically, adjusting appropriately $\theta_c=\pi$ such that $A'_{\varphi}$ and $A_{\varphi}$ remain smooth on the whole trajectory. To establish a link with Fourier series it is useful to evaluate the (mean) current
\begin{equation}
|\bar{J}_{\perp}| = \frac{eC}{2T}.
\label{current}
\end{equation}
At a time $t$, we define the averaged pumped charge in the transverse direction when summing all the accumulated currents until the angle $\theta$ associated with the time $t=\frac{\hbar \theta}{e E}$
\begin{equation}
\bar{Q} = \frac{e}{\theta}\int_0^{\theta} \sin^2\frac{\theta'}{2} d\theta'.
\end{equation}
From the smooth fields, we then have the identity $\bar{Q}(T)= \frac{eC}{2}$ and this charge is accumulated in the time $T=\frac{\hbar\pi}{eE}$ such that $|\bar{J}_{\perp}|= \frac{eC}{2T}$. The presence of the topological invariant $C$ in this identity can be precisely understood as follows. First, one may write down $\sin^2\frac{\theta}{2}$ as $C-\cos^2\frac{\theta}{2}$ and, close to the north pole, we also have the limit $\cos^2\frac{\theta}{2}\rightarrow C$ such that we may then identify $\cos^2\frac{\theta}{2}\rightarrow \frac{C}{2}(1+\cos\theta)$ on the trajectory. This is also physical in the sense that the averaged pumped charge is again related to the encircled topological charge. In the sense of quantum mechanics, we can relate the current to the mean velocity through $J = e \langle v_{\perp}\rangle$ with $\langle v_{\perp}\rangle = \frac{\hbar k_{\perp}}{m} = \frac{\hbar \varphi}{m}$. In this way, we have a simple correspondence between the pumped current and the transverse momentum. The quantity $J_{\perp}^2(\theta)$ is directly related to a produced energy, namely the kinetic energy in the transverse direction produced by the topological phase and the electric field applied along the polar direction. Bringing a particle (electron) from the north pole up to $\theta_c=\pi$ produces the transverse kinetic energy
\begin{equation}
\label{Ekin}
E_{kin} = \frac{1}{2m}(\hbar k_{\perp})^2 = \frac{m}{2T^2}C^2.
\end{equation}
We can also define an averaged kinetic energy
\begin{equation}
\bar{E}_{kin} = \frac{1}{\pi}\int_0^{\pi} \frac{m}{2}\frac{(eE)^2}{\hbar^2} \frac{\sin^4\frac{\theta}{2}}{\theta^2}d\theta \approx \frac{\pi m}{T^2}\frac{C^2}{8},
\end{equation}
which is slightly reduced but comparable to $E_{kin}$. Now, we discuss a relation with Fourier series by absorbing the effect of the radial magnetic field simply through the equivalence with the transverse pumped current $|J_{\perp}(\pi)|=\frac{e}{T}C$ and also through the averaged pumped current $|\bar{J}_{\perp}|$, as if it were produced by another electric field parallel to the azimuthal angle. To relate to the smooth fields, or to define a function $J_{\perp}(\theta)$, we suppose that this transverse electric field is proportional to the electric field applied along the polar direction. Then, we identify $\hbar\dot{\varphi} = \alpha' eE$ leading to $\dot{\varphi} = \alpha' \dot{\theta}$ and therefore to a linear relation between the azimuthal angle and the polar angle, $\varphi = \alpha' \theta$. Therefore, we identify
\begin{equation}
{\cal J}_{\perp}(\theta) = |J_{\perp}(\theta)| = \frac{e\hbar}{m}\varphi = \frac{\alpha' e \hbar}{m} \theta = \alpha \theta,
\end{equation}
where we implicitly assume that $\alpha'>0$ and $\alpha>0$. This relation is applicable in the physical space $\theta\in[0;\pi]$ and we can in fact generalize this relation to the full cycle $\theta\in[0;2\pi]$ from the continuity of the current along any line parallel to the equator, such that ${\cal J}_{\perp}(\theta)$ defined on $[0;\pi]$ extends to a function on $[0;2\pi]$ symmetric about $\theta=\pi$. Then, we have ${\cal J}_{\perp}(\theta) = \alpha \theta$ for $\theta\in [0;\pi]$ and ${\cal J}_{\perp}(\theta) = -\alpha\theta + 2\pi\alpha$ for $\theta\in [\pi;2\pi]$. We obtain the same decomposition of ${\cal J}_{\perp}(\theta)$ from the point of view of a charge $-e$ navigating from south to north pole if we redefine the angle $\theta$ such that $\theta=\pi$ at $t=0$ and $\theta=0$ at $v^* t=\pi$.
If we visualize this function as a periodic function of period $2\pi$, this corresponds to a triangular periodic function which can be decomposed as a Fourier series. Through the correspondence of the averaged current within the topological phase
\begin{equation}
\frac{1}{2\pi}\int_0^{2\pi} {\cal J}_{\perp}(\theta) d\theta = \frac{1}{\pi}\int_0^{\pi} {\cal J}_{\perp}(\theta) d\theta = \frac{e C}{2{T}} = \bar{{\cal J}}_{\perp}
\end{equation}
we obtain
\begin{equation}
\alpha = \frac{eC}{\pi {T}}.
\end{equation}
To calculate the Fourier coefficients simply, it is useful to redefine $\theta' = \theta-\pi$ such that ${\cal J}_{\perp}(\theta)={\cal J}_{\perp}(\theta'+\pi)=f(\theta')$ with $f(\theta') = \alpha(\theta'+\pi)$ for $\theta'\in [-\pi;0]$ and $f(\theta') = \alpha (-\theta' +\pi)$ for $\theta'\in [0;\pi]$ such that $f(-\theta')=f(\theta')$. In this way, the function can be visualized through the Fourier decomposition
$$
f(\theta') = f_0 +\sum_{n=1}^{+\infty} a_n \cos(n \theta'),
$$
which corresponds to
\begin{equation}
{\cal J}_{\perp}(\theta) = f_0 +\sum_{n=1}^{+\infty} a_n (-1)^n \cos(n \theta) = \frac{e}{T} A'_{\varphi}(\theta<\theta_c).
\end{equation}
Here, $f_0 = \bar{J}_{\perp}=\frac{eC}{2T}$. To determine the Fourier coefficients $a_n$, we can write down the boundary conditions ${\cal J}_{\perp}(0)={\cal J}_{\perp}(2\pi)=0$ and also ${\cal J}_{\perp}(\pi)=\alpha\pi=\frac{e}{T}C$, which encodes the fact that the sphere is equivalently topological. The equation $J_{\perp}(0)=J_{\perp}(2\pi)=0$ comes from the definition of the function $J_{\perp}(\theta)$, which also implies that the function $A'_{\varphi}$ is defined to be zero at the poles. The solution takes the form $a_{2n}=0$ and
\begin{equation}
a_{2n+1} = \frac{8}{\pi^2} \frac{1}{(2n+1)^2} f_0,
\end{equation}
consistent with the boundary conditions through $\sum_{n=0}^{+\infty}(2n+1)^{-2}=\frac{\pi^2}{8}$, a value related to the Riemann and Hurwitz zeta functions. It is then interesting to observe that developing the series to order $a_1$, we have
\begin{equation}
{\cal J}_{\perp}(\theta) = \frac{eC}{2T}\left(1-\frac{8}{\pi^2}\cos\theta\right)
\end{equation}
which is in fact very close to the result produced by the radial magnetic field in Eq. (\ref{Jperp})
\begin{equation}
{\cal J}_{\perp}(\theta) = \frac{e}{2T}(1-\cos\theta).
\end{equation}
This function indeed evolves linearly in a (large) region around the equatorial plane. We can also evaluate the energetics related to the Fourier series. The mean energy when averaging on $\theta\in [0;2\pi]$ is then related to the Parseval-Plancherel theorem and is equivalently related to the interval $\theta\in [0;\pi]$
\begin{eqnarray}
\hskip -0.5cm \frac{1}{\pi}\int_0^{\pi} \frac{m}{2e^2}{\cal J}_{\perp}^2(\theta) d\theta &=& \frac{m}{2e^2}\left(f_0^2 +\frac{1}{2}\sum_{n=1}^{+\infty} a_n^2\right) \nonumber \\
&=& \frac{mC^2}{6{T}^2}.
\end{eqnarray}
In Sec. \ref{curvature}, we will justify this correspondence further from the fact that the topological state on the sphere can be described through the Karplus-Luttinger velocity \cite{KarplusLuttinger}, producing indeed a transverse velocity which is proportional to the longitudinal electric field. These averaged identities are verified numerically in the sketch below.
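The following Python sketch is an illustrative check only (not part of the formalism); it assumes the convenient units $e=T=m=1$ and $C=1$, for which $\frac{eE}{\hbar}=\frac{\pi}{T}=\pi$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C = 1.0
# Mean pumped charge: (1/pi) Int_0^pi sin^2(theta/2) dtheta = C/2
Qbar = quad(lambda th: np.sin(th/2)**2, 0, np.pi)[0]/np.pi
print(Qbar)                                 # -> 0.5

# Averaged kinetic energy, approximately pi C^2 / 8 in these units
Ek = quad(lambda th: 0.5*np.pi**2*np.sin(th/2)**4/th**2, 0, np.pi)[0]/np.pi
print(Ek, np.pi/8)                          # -> ~0.397 vs 0.3927

# Triangular current J(theta) = alpha*theta with alpha = C/pi, f0 = C/2
f0 = C/2
a = {n: 8*f0/(np.pi*n)**2 for n in range(1, 60, 2)}   # a_{2n} = 0
th0 = np.pi/3                               # generic test angle
series = f0 + sum(an*(-1)**n*np.cos(n*th0) for n, an in a.items())
print(series, (C/np.pi)*th0)                # partial sum vs exact, ~1e-4 apart

# Parseval-Plancherel: (1/2)(f0^2 + (1/2) sum a_n^2) = C^2/6
print(0.5*(f0**2 + 0.5*sum(an**2 for an in a.values())), C**2/6)
\end{verbatim}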
\subsection{Cylinder Geometry with Uniform Berry Curvatures}
\label{cylinderformalism}

The cylinder geometry is particularly judicious to characterize topological properties of the quantum Hall effect \cite{LaughlinQHE,Fabre}. Here, we show the application of smooth fields on the cylinder geometry of Fig. \ref{Edges.pdf}, starting the analysis from the Berry curvature on the topological sphere. The cylinder will allow us to define edge modes localized around the top and bottom disks, related to the topological number $C$ of a topological dispersive Bloch band \cite{C2}. The topological charge is present in the core of the cylinder. We define the surface of the (large) vertical cylinder, $2\pi H$ with $H$ the height, to be precisely the same as the surface of the sphere, $4\pi$ with the radius $r=1$. This fixes the height of the cylinder at $H=2$. Within this definition, we can then suppose that on the vertical region of the cylinder, in polar coordinates, ${\bf F}=F{\bf e}_r$ with $F=F_{\theta\varphi}(\theta=\frac{\pi}{2})=\frac{1}{2}$ from Sec. \ref{spin1/2}, such that $C=\frac{\Phi}{2\pi}=1$ on the vertical region of the cylinder. This solution admits a similar form of $A_{\varphi}$ to that on the unit sphere. Indeed, in cylindrical coordinates
\begin{equation}
F(\varphi,z)=\partial_{\varphi}A_z-\partial_z A_{\varphi}=\frac{1}{2}
\end{equation}
allows us to write $A_{\varphi}=-\frac{z}{2}$ with the identification $z=\cos\theta$ on the sphere for the same $r=1$ (in cylindrical and spherical coordinates). Therefore, this solution on the cylinder reveals the smooth fields of Sec. \ref{smooth}. From topological properties, the sphere and the (vertical region of the) cylinder are then identical from the forms of $C$ and from the smooth fields on the surface. These arguments then agree with the fact that $A_{\varphi}$ is constant at the boundary disks, with $z=\frac{H}{2}=+1$ at the top disk and $z=-\frac{H}{2}=-1$ at the bottom disk, resulting in $A_{\varphi}(0^+)=-\frac{1}{2}$ and $A_{\varphi}(\pi^-)=+\frac{1}{2}$ from the eigenstates in Eq. (\ref{eigenstates}). On the top and bottom surfaces ${\bf A}$ is constant, such that $\oint {\bf A}\cdot {\bf e}_{\varphi} d\varphi$ is identical along a small purple circle associated with the sphere and along a golden circle related to the cylinder geometry in Fig. \ref{Edges.pdf}. Here, we apply the protocol of Sec. \ref{ParsevalPlancherel} with an electric field on the sphere acting on a charge $q$ through the Coulomb force along the direction ${\bf e}_{\theta}$ related to the polar angle. Since the smooth fields are identical in spherical and cylindrical coordinates, Eq. (\ref{Jperp}) is also valid in cylindrical coordinates. In the correspondence, we assume then that for the same electric field and the same time $T=\frac{h}{2q E}$, the particle reaches the south pole on the sphere and the bottom of the cylinder. For the physical correspondence below, we simply use the fact that Eq. (\ref{Jperp}) and Eq. (\ref{Jperp2}) are equivalent in both geometries, corresponding to the current produced in the perpendicular direction along the ${\bf e}_{\varphi}$ unit vector in the same time $T$. On the other hand, it remains true that we must satisfy the linear relation between time and distance from Eq. (\ref{distancetime}). In the cylindrical geometry this corresponds to a non-linear evolution between distance and time, such that $d(\arccos(z))=-\frac{dz}{\sqrt{1-z^2}}=\frac{qE}{\hbar}dt$. In this way, one can formulate a corresponding (Lorentz) force $qE\sqrt{1-z^2}$ which vanishes at $z=+1$ and $z=-1$ due to the curvature effects on the sphere close to the poles. We can now measure the conductance located at the two edges of the cylinder, introducing a difference of potential such that $HE=(V_b-V_t)$, where we suppose that the potential is fixed on the top and bottom disks.
Using the form of the final time $T$ for a charge $q$, $T=\frac{h}{2q E}$, we have
\begin{equation}
| J_{\perp} | = \frac{q^2}{h}2EC = \frac{q^2}{h} C(V_b-V_t) = (I_b - I_t).
\end{equation}
We can now visualize $I_b$ and $I_t$ as two edge currents associated with a quantized conductance
\begin{equation}
\label{conductance}
G = \frac{dI}{dV} = \frac{q^2}{h}C.
\end{equation}
Therefore, $C=1$ measures one edge mode (located at the boundaries of the top and bottom disks) in agreement with the Landauer-B\"uttiker formula \cite{Landauer,Buttiker,Imry}. Similarly to the quantum Hall effect on the Laughlin cylinder \cite{LaughlinQHE}, the smooth fields here allow us to reveal analytically the formation of edge modes at the boundary with the top and bottom disks. In Sec. \ref{Geometry}, we show a correspondence with cubic or rectangular geometries and also generalized boxes with a superellipse form in the plane.

\subsection{Topological Response from Circularly Polarized Light}
\label{lightdipole}

Here, we show that coupling an electric dipole to circularly polarized light gives rise to a quantized response at the poles of the sphere related to the square of the topological invariant. Through this $C^2$, there is then a relation between the topological transverse current produced when driving from north to south poles in Eq. (\ref{Ekin}) and the responses to circularly polarized light. We describe the effect of a time-dependent electric field ${\bf E}_{\pm}=E_0 e^{-i\omega t} e^{ikz}({\bf e}_x \mp i {\bf e}_y)$ in the Jones formalism, producing an interaction with an atom or electric dipole located in the $z=0$ plane \cite{Klein}
\begin{equation}
\label{energyshift}
\delta H_{\pm} = E_0 e^{\pm i\omega t} |a\rangle \langle b| +h.c. = E_0 e^{\pm i\omega t} \sigma^+ +h.c.
\end{equation}
Here, $|a\rangle$ and $|b\rangle$ refer to two discrete energy levels which can also be identified with the $|+\rangle_z$ and $|-\rangle_z$ states of a spin-$\frac{1}{2}$ particle, and the light field is assumed to be classical here. This Hamiltonian is also produced when applying a rotating magnetic field to a spin-$\frac{1}{2}$ particle, as in nuclear magnetic resonance. Suppose the dipole or spin-$\frac{1}{2}$ is in the topological phase described through a radial magnetic field as in Sec. \ref{spin1/2}. Below, we describe the possibility of a topologically quantized photo-electric effect from the poles of the sphere in the situation of resonance. The resonance situation is obtained from the transformation on states $|b\rangle=e^{\mp i \frac{\omega t}{2}}|b'\rangle$ and $|a\rangle=e^{\pm i \frac{\omega t}{2}}|a'\rangle$ such that $E_b-E_a=\pm \hbar\omega$ for the $(\pm)$ polarization. Since we will evaluate real transition rates for the dipole induced by the right-handed circular polarization of light for instance, we can assume here that $E_0$ is real. When using this formalism to describe lattice models, the vector potential itself will couple to the pseudo-spin. Then, we can equivalently re-write $\delta H_{\pm}$ in the basis of the eigenstates $|\psi_+\rangle$ and $|\psi_-\rangle$ introduced in Sec. \ref{spin1/2} such that
\begin{eqnarray}
\label{deltaH}
\delta H_{\pm} &=& E_0 \sin\theta \cos(\varphi \pm \omega t)(|\psi_+\rangle \langle \psi_+| - |\psi_-\rangle \langle \psi_-| ) \\ \nonumber
&-& E_0 e^{\pm i\omega t} e^{i\varphi} A'_{\varphi}(\theta>\theta_c)|\psi_+\rangle \langle \psi_-| +h.c.
\\ \nonumber
&-& E_0 e^{\mp i\omega t} e^{-i\varphi} A'_{\varphi}(\theta<\theta_c) |\psi_+\rangle \langle \psi_-| +h.c.
\end{eqnarray}
The $+$ $(-)$ sign refers to the right-handed (left-handed) circularly polarized light. The smooth fields are identical to the ones in Eq. (\ref{smoothfields}). The ground state is described through $|\psi_+\rangle$ with a topological number $C=1$. Here, we show a simple relation between the topological properties of the dipole or spin-$\frac{1}{2}$ and the response to this time-dependent electric or magnetic field. For this purpose, we can apply Fermi's golden rule and calculate the transition rate:
\begin{equation}
\Gamma_{\pm}(\omega)=\frac{2\pi}{\hbar}|\langle \psi_+ | \delta H_{\pm} |\psi_-\rangle |^2 \delta(E_b - E_a \mp\hbar\omega).
\label{rates}
\end{equation}
From the geometrical definitions above, we obtain
\begin{equation}
\Gamma_{\pm}(\omega) = \frac{2\pi}{\hbar}E_0^2{\alpha}(\theta) \delta(E_b - E_a \mp\hbar\omega),
\end{equation}
with the function \cite{C2}
\begin{equation}
\label{alpha}
\alpha(\theta) = \left(\cos^4\frac{\theta}{2} +\sin^4\frac{\theta}{2}\right).
\end{equation}
The introduction of smooth fields $A'_{\varphi}$ then allows a simple interpretation of this function in terms of a topological response. Indeed, from the square of Eq. (\ref{CA'}) we have
\begin{equation}
\label{polarizationstheta}
\alpha(\theta) = C^2 +2A'_{\varphi}(\theta<\theta_c)A'_{\varphi}(\theta>\theta_c).
\end{equation}
Here, $C^2$ precisely refers to the square of the topological invariant defined through Stokes' theorem. Measuring the response at specific angles $\theta$ then allows one to reproduce the global topological number $C=1$. For instance, one can select the angles close to the north and south poles such that $\theta\rightarrow 0$ and $\theta\rightarrow \pi$ respectively:
\begin{equation}
\label{alphapoles}
\alpha(0) = \alpha(\pi) = C^2.
\end{equation}
It is important to emphasize here that due to the radial structure of the magnetic field on the surface of the sphere, the north pole will predominantly interact with the right-handed wave to fulfill the prerequisite $E_b-E_a=E_- - E_+ = +\hbar\omega>0$ and the south pole will predominantly couple with the left-handed wave such that $E_b - E_a = E_+ - E_- = -\hbar\omega$. It should be mentioned here that in the situation of a time-dependent linearly polarized perturbation, Eq. (\ref{polarizationstheta}) would acquire a dependence on the azimuthal angle for a fixed value of $\theta$, yet at the two poles the results in Eq. (\ref{alphapoles}) would be identical. Also, it is interesting to discuss the response in the equatorial plane, similarly to the classical analysis above. In this situation, the structure of the eigenstates allows each circular polarization of light to interact with the system. The response in the equatorial plane becomes
\begin{equation}
\label{onehalf}
\alpha\left(\frac{\pi}{2}\right) = \frac{C^2}{2}
\end{equation}
for each light polarization equally. Formula (\ref{alpha}) can also be reformulated as
\begin{equation}
\alpha(\theta) = \frac{1}{2}\left(|\langle \psi_+ | \sigma_x | \psi_-\rangle|^2 + |\langle \psi_+ | \sigma_y | \psi_-\rangle|^2\right).
\end{equation}
The key point here is that $C^2$ can then be measured from the evolution in time of the lowest energy-band population \cite{C2}, as developed in Sec. \ref{light} related to topological lattice models.
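The reformulation of $\alpha(\theta)$ in terms of inter-band matrix elements is straightforward to verify numerically; the sketch below (an illustrative check of ours, with $d=1$ and an arbitrary azimuthal angle) compares $\frac{1}{2}(|\langle\psi_+|\sigma_x|\psi_-\rangle|^2+|\langle\psi_+|\sigma_y|\psi_-\rangle|^2)$ with $\cos^4\frac{\theta}{2}+\sin^4\frac{\theta}{2}$.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(th, ph=0.3):
    # |psi_+> (lowest band) and |psi_-> of H = -d.sigma with a radial d, d = 1
    H = -(np.sin(th)*np.cos(ph)*sx + np.sin(th)*np.sin(ph)*sy + np.cos(th)*sz)
    _, v = np.linalg.eigh(H)
    return v[:, 0], v[:, 1]

for th in (1e-6, np.pi/2, 1.2, np.pi - 1e-6):
    p, m = bands(th)
    alpha = 0.5*(abs(p.conj() @ sx @ m)**2 + abs(p.conj() @ sy @ m)**2)
    print(round(alpha, 6), round(np.cos(th/2)**4 + np.sin(th/2)**4, 6))
\end{verbatim}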
The response will take a similar form to nuclear magnetic resonance, with the topological response $\alpha(\theta)$ entering as a prefactor. A quantized photo-electric effect can then be observed close to the poles of the sphere, and a one-half response is revealed at the equator for one circular polarization of light. In a sense, this describes a topological protection of the photo-electric effect due to the presence of $C^2$, which may find practical applications related to the production of electronic quasiparticles and current through a light field. Another interesting remark here is that this optical analogy to nuclear magnetic resonance may find practical applications in (medical) imaging. Related to this analysis, we also mention that a four-dimensional scaling for the atomic (dipole) polarisability \cite{4Dscaling} has recently been emphasized, which suggests that further interesting formulae remain to be identified in quantum physics.

\subsection{Topological Phase from Circularly Polarized Light}
\label{polarizationlight}

In fact, Eq. (\ref{energyshift}) also allows us to show that the time-dependent electric field is able to induce a topological phase for the electric dipole or spin-$\frac{1}{2}$, similarly to the classical analysis of Sec. \ref{electricfield}. If we go to the rotating frame applying the unitary transformation $U_{\pm}(t) = e^{\mp i\frac{\omega t}{2}\sigma_z}$, then the Hamiltonian takes the form \cite{C2}
\begin{equation}
\delta \tilde{H}_{\pm} = U_{\pm} \delta H_{\pm} U_{\pm}^{-1} \pm \frac{\hbar\omega}{2}\sigma_z,
\end{equation}
with
\begin{equation}
U_{\pm} \delta H_{\pm} U_{\pm}^{-1} = E_0\sigma_x.
\end{equation}
The symbol $\pm$ refers to each circular polarization, right-handed $+$ and left-handed $-$, respectively related to ${\bf E}_{\pm}=\mp E_0{\bf e}_{\varphi}$ as defined in Sec. \ref{potential}. For the dipole (spin) Hamiltonian, we can also reach a time-independent form if we adjust the azimuthal angle $\varphi = \mp \omega t$ and $U_{\pm} (- d_x\sigma_x - d_y\sigma_y) U_{\pm}^{-1} = -d\sin\theta \sigma_x$. If we include the presence of a topological term $-d_z\sigma_z$ with $d_z=d\cos\theta$, then we verify that a resonance can be obtained such that $\pm \frac{\hbar\omega}{2} = \pm d$, corresponding to the right-handed wave interacting with the north pole and the left-handed wave interacting with the south pole. We can indeed adjust the angle $\varphi$ while preserving the topological properties of the system as they depend only on the variable $\theta$. Assuming $E_0\ll d$, the $E_0\sigma_x$ term will not modify the spin polarizations at the poles such that the topological number remains unity. This also justifies the perturbative analysis of Sec. \ref{lightdipole}. The term $E_0\sigma_x$ is precisely equivalent to the terms proportional to $|\psi_+\rangle\langle \psi_-|$ in Eq. (\ref{deltaH}), which mediate inter-band transitions. From the rotating frame, we can also verify that $C^2$ can be measured from the evolution in time of the lowest-energy band population \cite{C2}. Suppose now that $d_z=0$ and that we modify the ${\bf d}$ vector acting on the Bloch sphere such that close to the poles
\begin{equation}
{\bf d} = d(\sin\theta \cos\varphi, \sin\theta \sin (\zeta \varphi),0),
\end{equation}
with $\zeta=1$ close to the north pole and $\zeta=-1$ close to the south pole. This situation will precisely correspond to the case of graphene discussed in Sec. \ref{spherelattice}.
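The rotating-frame algebra above can be verified symbolically. The short sketch below is an illustrative check of ours (with $\hbar=1$ and the $+$ polarization); it confirms that $U_+\,\delta H_+\,U_+^{-1} + i\dot{U}_+U_+^{-1} = E_0\sigma_x + \frac{\omega}{2}\sigma_z$.
\begin{verbatim}
import sympy as sp

t, w, E0 = sp.symbols('t w E0', real=True, positive=True)
U = sp.diag(sp.exp(-sp.I*w*t/2), sp.exp(sp.I*w*t/2))   # U_+ = exp(-i w t sz/2)
sig_p = sp.Matrix([[0, 1], [0, 0]])                    # sigma^+
dH = E0*(sp.exp(sp.I*w*t)*sig_p + sp.exp(-sp.I*w*t)*sig_p.T)
rot = sp.simplify(U*dH*U.H + sp.I*sp.diff(U, t)*U.H)
print(rot)   # -> Matrix([[w/2, E0], [E0, -w/2]]) = E0*sigma_x + (w/2)*sigma_z
\end{verbatim}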
We discuss here an implementation of a topological phase on the Bloch sphere through the analogy with a radial magnetic field. It is useful to introduce the rotating frame through the transformation $U_{\pm}(t)=e^{\mp i \frac{\omega t}{2}\sigma_z}$ such that $U_{\pm} (- d_x\sigma_x - d_y\sigma_y) U_{\pm}^{-1} = -d\sin\theta e^{-i\zeta\varphi} e^{\mp i \omega t}\sigma^+ +h.c.$. The azimuthal angle then effectively rotates in different directions at the two poles. For a specific light polarization, we can then reach the rotating frame fixing $\zeta\varphi = \mp\omega t$ in the preceding equation. The light-matter interaction turns into $E_0\sigma^+ e^{\pm i \omega t} e^{\mp i\omega t}+h.c.$. From the rotating frame, a resonance condition can be reached with the two poles of the sphere if the right-handed light polarization interacts with the north pole and the left-handed light polarization with the south pole. Physically, this implementation may be realized with a dipole or effective spin-$\frac{1}{2}$ interacting with an electric field in the plane at $z=0$ with two circular components, such as ${\bf E}={\bf E}_+ + {\bf E}_- = E_0 e^{-i k z} e^{-i\omega t} ({\bf e}_x + i {\bf e}_y) + E_0 e^{ikz} e^{-i\omega t}({\bf e}_x - i {\bf e}_y)$. Here, for instance, we assume two symmetric beams propagating in opposite vertical directions and meeting in the plane $z=0$, such that the form of the electric field at $z=0$ precisely corresponds to the definitions in Sec. \ref{lightdipole}. In the vicinity of $\theta=0$ and $\pi$, the time-independent model in the rotated frame then reads
\begin{eqnarray}
\label{showmatrix}
H_{eff} =
\begin{pmatrix}
\frac{\hbar\omega}{2}\cos\theta & E_0 - d\sin\theta\\
E_0 - d\sin\theta & -\frac{\hbar\omega}{2}\cos\theta \\
\end{pmatrix}.\quad
\end{eqnarray}
In the present case, we observe that the induced dynamical term along one diagonal of the matrix is equivalent to a $d_z$ term. The effective ${\bf d}_{eff}$ vector acting on the spin in the rotated basis in the vicinity of the poles can be written as
\begin{equation}
\label{E0field}
{\bf d}_{eff} = \left((\tilde{d}+d)\sin\theta,0, -\frac{\hbar \omega}{2}\cos\theta\right),
\end{equation}
where $\tilde{d}\sin\theta=-E_0$, producing an ellipse in the space of parameters. The induced gap between the bands $|\psi_+\rangle$ and $|\psi_-\rangle$ tends to $\hbar\omega$ at the poles if $E_0\ll \hbar\omega$. A similar conclusion can be reached in the $|\psi_+\rangle$ and $|\psi_-\rangle$ eigenstates representation such that
\begin{equation}
\pm (|a\rangle\langle a|-|b\rangle\langle b|) = \pm\cos\theta(|\psi_+\rangle \langle \psi_+| - |\psi_-\rangle \langle \psi_-|).
\end{equation}
Therefore, for the $+$ polarization, at the north pole the lowest-energy eigenstate is $|\psi_-\rangle=|b\rangle$ and at the south pole, for the $-$ polarization, it becomes $|\psi_-\rangle=|a\rangle$, similarly to the presence of a radial magnetic field. In Ref. \cite{MoessnerCayssol}, a similar situation was discussed through two light polarizations with a vector potential ${\bf A}=-A_0(\sin(\omega t){\bf e}_x +\sin(\omega t -\phi){\bf e}_y)$ and $E_0=A_0\omega$, using a high-frequency Magnus development within the Floquet formalism \cite{GoldmanDalibard}. The case $\phi=\frac{\pi}{2}$ can be precisely visualized as the superposition of two circular right- and left-handed polarizations.
For $e A_0 v_F \ll \hbar\omega$, this gives rise to a topological $d_z$ term of the form $-\frac{(e v_F A_0)^2}{\hbar\omega}(\sigma_z\zeta)$ close to the poles of the sphere.

\subsection{Dissipation and Radiation}

The topological number is generally a robust quantity against perturbations. Here, we show the stability of the topological number against dissipation effects coming from a bath or from an ensemble of bosons $a_i$ (harmonic modes) \cite{Leggett,Weiss}. The Hamiltonian becomes
\begin{equation}
H = -{\bf d}\cdot \mathbfit{\sigma} +\sum_i \hbar\omega_i a^{\dagger}_i a_i +\sigma_z \sum_i \lambda_i(a_i+a^{\dagger}_i).
\end{equation}
We implicitly suppose that the ${\bf d}$ vector plays the role of a radial magnetic field. Fixing the azimuthal angle $\varphi=0$, this model is equivalent to a spin-boson model characterized through the spectral function $J(\omega) = \pi\sum_i \lambda_i^2 \delta(\omega-\omega_i)$. A situation of particular interest is ohmic dissipation, where $J(\omega)=2\pi\alpha\omega$. As soon as $\alpha\neq 0$, there is an energy coupling between the system (spin) and the reservoir, supposed here in the quantum limit. The spin and reservoir become entangled, as characterized through the entropy $E= - p_+ \ln p_+ - p_- \ln p_-$ with $p_{\pm} = (1\pm \sqrt{\langle \sigma_x\rangle^2 +\langle \sigma_z\rangle^2})/2$. To show the stability of the topological number with respect to $\alpha$, we can for instance evaluate the ground state energy through the Bethe Ansatz \cite{Ponomarenko} or perturbatively in the transverse field \cite{CedraschiButtiker,LeHur}. Using the identifications close to the north pole $h=-d_z=-d\cos\theta>0$ (with for instance $d<0$ and $\theta\rightarrow 0$, $h\ll \omega_c$) and $\Delta=d\sin\theta$, the spin magnetization can be evaluated from the ground state energy of the total system, $\langle \sigma_z\rangle = -\frac{\partial {\cal E}_g}{\partial h}$. This parameter $h$ is precisely introduced in Ref. \cite{LeHur} and should then be distinguished from the Planck constant. This results in
\begin{equation}
\langle \sigma_z\rangle = 1 - \frac{1}{2}\left(\frac{\Delta}{\omega_c}\right)^2 (1-2\alpha)\Gamma(1-2\alpha)\left(\frac{h}{\omega_c}\right)^{2\alpha-2}.
\end{equation}
To obtain the result at the south pole, it is equivalent to modify $h\rightarrow -h$ or $\sigma_z\rightarrow -\sigma_z$. From this result, we observe that as long as $\alpha<1$, the magnetizations at the poles indeed remain identical and stable. From Eq. (\ref{polesC}), this then shows that the topological number remains identical. The point $\alpha=1$ signals a Berezinskii-Kosterlitz-Thouless quantum phase transition \cite{Berezinskii,Berezinskii2,KT} where the topological number jumps to zero. This transition is also analogous to the quantum phase transition in the Kondo model \cite{Hewson}. Another way to visualize this instability is through the calculation of the Berry curvature $F_{\theta\varphi}$, which also shows a similar instability at $\alpha=1$ \cite{Henriet}. It is interesting to mention that the entanglement entropy in the equatorial plane, corresponding to $h\rightarrow 0$, reaches a maximum at $\alpha=\frac{1}{2}$, such that, interestingly, the topological number remains quantized even in the strong-coupling regime $\frac{1}{2}<\alpha<1$ \cite{LeHur}. Related to the entanglement entropy $E$, we can also associate an effective temperature induced by the coupling between spin and bath \cite{Williams}.
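The stability window $\alpha<1$ can be illustrated by evaluating the magnetization formula above for fixed $\Delta$ and $h$. The sketch below is purely illustrative (parameters are our own assumption, in units of $\omega_c=1$); it uses the identity $(1-2\alpha)\Gamma(1-2\alpha)=\Gamma(2-2\alpha)$, which also handles $\alpha=\frac{1}{2}$ smoothly.
\begin{verbatim}
from math import gamma

Delta, h = 0.05, 0.2    # illustrative values with Delta, h << omega_c = 1
for alpha in (0.0, 0.25, 0.5, 0.75, 0.95):
    # (1 - 2a) Gamma(1 - 2a) = Gamma(2 - 2a)
    corr = 0.5*Delta**2*gamma(2 - 2*alpha)*h**(2*alpha - 2)
    print(alpha, round(1 - corr, 5))  # <sigma_z> stays close to 1 for alpha < 1
\end{verbatim}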
Here, it is judicious to mention progress in measuring the entanglement entropy in many-body systems \cite{Harvard}. We emphasize here that rolling the spin on the sphere in the presence of a cavity or a bath may also be useful for practical (energy) applications as a quantum dynamo \cite{Henriet,EphraimCyril}. The produced coherent field in the bath also shows some relation with the square of the topological number, dynamically, through the stored energy. A bath may be thought of as a resistance or long transmission line in a mesoscopic circuit \cite{CedraschiButtiker,LeHurBath} or a Bose-Einstein condensate \cite{Recati,Orth}.

\subsection{Fractional Topological Entangled Geometry}
\label{fractionaltopology}

In the recent work \cite{HH}, we introduced the possibility of a fractional topological entangled geometry from the curved space and the introduction of smooth fields. Here, we provide an alternative derivation of the relations between entanglement in quantum mechanics and the topological properties of the spheres. Before elaborating on the model and the justifications, we emphasize the main topological properties related to this fractional topological entangled geometry. The occurrence of one-half topological numbers on the sphere signals the occurrence of quantum entanglement at one pole. Here, each sphere presents a coherent superposition of two (equal) surfaces, one encircling the topological charge and participating in the flux production of the Berry curvature on the Riemann surface, and one participating in the non-local spooky quantum correlations. From Stokes' theorem, we interpret this fractional number as a circle (encircling the topological charge) on top of a disk acting as a mirror from the point of view of the field lines produced by the topological charge, such that a whole sphere gives rise to $\chi=0+1$ and effectively to $g_{eff}=\frac{1}{2}$ from the Euler characteristic of the sphere $\chi=2-2g_{eff}$, similar to the physics of a meron or half Skyrmion \cite{Meron}. For a unique sphere in Sec. \ref{spin1/2}, Stokes' theorem gives rise to two circles at the equator encircling the topological charge such that $\chi=0+0$. The Euler characteristic $\chi=1$ can be viewed as a consequence of the occurrence of a mirror disk forming a boundary at the equator in the vicinity of the topological charge. We mention here that two-dimensional black holes in a Schwarzschild space-time metric may also be described by an Euler characteristic $\chi=1$ as a result of a boundary for the system \cite{Gibbons}. It is interesting to observe developments relating the Euler characteristic of black holes and a topological interpretation of the Hawking-Bekenstein radiation \cite{Robson}. We also emphasize here recent developments, motivated by gauge fields, the Aharonov-Bohm effect and the Berry phase, on the geometrical description of half monopoles \cite{DeguchiFujikawa}. A model to reach fractional topological numbers can be formulated through two spins-$\frac{1}{2}$ as \cite{HH}
\begin{equation}
{H} =-({\bf d}_1\cdot{\mathbfit{\sigma}}_1 + {\bf{d}}_2\cdot{\mathbfit{\sigma}}_2) + r f(\theta){\mathbfit{\sigma}}_{1z}{\mathbfit{\sigma}}_{2z}.
\label{eq:H}
\end{equation}
The magnetic field ${\bf d}_i$ acts on the same sphere parameterized by $(\theta,\varphi)$ and may be distorted along the $\hat{z}$ direction with the addition of the uniform field $M_i$ according to:
\begin{equation}
\label{fieldsphere}
{\bf{d}}_i=(d\sin\theta\cos\varphi,d\sin\theta\sin\varphi,d\cos\theta+M_i),
\end{equation}
for $i=1,2$. Below, we analyze the properties of this model such that we have a tensor product or pure state at one pole (for instance, the north pole) and an entangled state at the other pole (for instance, the south pole). At the end of this Section, we recall how to adjust the parameters to realize these prerequisites. From the smooth fields and the general definitions in Sec. \ref{smooth}, which do not assume a specific class of wavefunctions but just the existence of smooth fields, we define $C_j=A_{j\varphi}(\pi) - A_{j\varphi}(0)$ for each spin $j=1,2$ such that the Nabla or del operator is now defined on each sphere locally through $\partial_{j\varphi}$, or equivalently $\partial_{1\varphi}\otimes \mathbb{I}$ and $\mathbb{I}\otimes \partial_{2\varphi}$ for the two spheres. The eigenstates of such a Hamiltonian can be written as
\begin{eqnarray}
\label{wavefunction}
|\psi\rangle=\sum_{kl}c_{kl}(\theta)|\Phi_k(\varphi)\rangle_1|\Phi_l(\varphi)\rangle_2,
\end{eqnarray}
with $k,l=\pm$, where $|\Phi_+(\varphi)\rangle_j$ and $|\Phi_-(\varphi)\rangle_j$, forming the Hilbert space, can be chosen related to $|+\rangle_z$ and $|-\rangle_z$ introduced earlier in Sec. \ref{spin1/2} modulo a $\varphi$ phase corresponding to a particular gauge choice in the wavefunction. Here, we define $|\Phi_k(\varphi)\rangle_1|\Phi_l(\varphi)\rangle_2 = |\Phi_k(\varphi)\rangle_1\otimes|\Phi_l(\varphi)\rangle_2$. An important deduction from Sec. \ref{smooth} is that we can choose the same $\varphi$-gauge for the eigenstates in the north and south regions when defining the smooth fields $A'_{j\varphi}(\theta<\theta_c)$ and $A'_{j\varphi}(\theta>\theta_c)$, which results in the equality $C_j=A_{j\varphi}(\pi) - A_{j\varphi}(0)$. This implies that we can introduce the same form of Hilbert space with the same phase definitions in Eq. (\ref{wavefunction}) for the whole spheres. Since we can adiabatically tune the parameter $r$, this implies that we can also fix the same ``gauge'' for the definitions of $|\Phi_k(\varphi)\rangle_1$ and $|\Phi_l(\varphi)\rangle_2$ for the whole phase diagram in the parameter space. We can also assume that the topological responses at the poles should be independent of $\varphi$ as all azimuthal angles are equivalent, $c_{kl}=c_{kl}(\theta)$. Suppose we adjust the interaction $r$ such that the ground state evolves from a product state or pure state $|\psi(0)\rangle = |\Phi_+\rangle_1|\Phi_+\rangle_2$ at $\theta=0$ to an entangled state at $\theta=\pi$ represented by an Einstein-Podolsky-Rosen (EPR) pair or Bell pair $|\psi(\pi)\rangle=\frac{1}{\sqrt{2}}(|\Phi_+\rangle_1|\Phi_-\rangle_2+|\Phi_-\rangle_1|\Phi_+\rangle_2)$ \cite{Hagley,Aspect}. Within our deductions, we can also incorporate the fact that when $r\rightarrow 0$ at the south pole the corresponding wavefunction is $|\Phi_-\rangle_1 |\Phi_-\rangle_2$ (modulo a global phase independent of $\varphi$).
In this way, we obtain the following identity at the south pole
\begin{equation}
\label{eq2}
A_{j\varphi}(\pi) = -i\langle \psi(\pi) | \partial_{j\varphi} |\psi(\pi)\rangle = \frac{A_{j\varphi}(0)}{2} + \frac{A^{r=0}_{j\varphi}(\pi)}{2},
\end{equation}
with $A^{r=0}_{j\varphi}(\pi)$ corresponding to the Berry connection evaluated on $|\Phi_-\rangle_1 |\Phi_-\rangle_2$, equivalent to the state at the south pole for $r=0$ assuming $M<d$. Now, we can use the fact that for $r=0$, each sphere is in a topological phase such that
\begin{equation}
\label{eq1}
A^{r=0}_{j\varphi}(\pi) - A_{j\varphi}(0) = q =1
\end{equation}
with $q$ corresponding to the encircled topological charge on each sphere normalized to the charge $e$. Here, we implicitly assume that $A_{j\varphi}(0)=A^{r=0}_{j\varphi}(0)=A^{r\neq 0}_{j\varphi}(0)$ as the wavefunction at the north pole remains identical whether $r=0$ or $r\neq 0$ within the fractional topological phase. Plugging Eq. (\ref{eq1}) into (\ref{eq2}) then results in the identity
\begin{equation}
\label{Aj}
A_{j\varphi}(\pi) - A_{j\varphi}(0) = \frac{q}{2} = C_j,
\end{equation}
for the situation with an entangled wave-function at the south pole. In this formula, the factor $\frac{1}{2}$ is precisely related to the probability to be in a given quantum state $|\Phi_+\rangle_j$ or $|\Phi_-\rangle_j$ for the sub-system $j$ at the south pole and therefore hides the information on the entangled wavefunction (as we elaborate on in Eq. (\ref{correlation})). From Sec. \ref{smooth}, we can slightly reformulate this equation as
\begin{equation}
\label{q2}
\frac{1}{2\pi}\iint_{S^2}\bm{\nabla}_j\times{\bf A}_j \cdot d^2{\bf s} = \frac{q}{2},
\end{equation}
with $d^2{\bf s}=d\theta d\varphi {\bf e}_r$, $\bm{\nabla}_1=\bm{\nabla}_1\otimes\mathbb{I}$ and $\bm{\nabla}_2=\mathbb{I}\otimes\bm{\nabla}_2$. This is also equivalent to setting $A_{j\varphi}=0$ at one pole and in the thin handle joining this pole to the equatorial plane in Fig. \ref{Edges.pdf}. From Eq. (\ref{cosine}), for one sphere we have for instance $A_{j\varphi}(0)=-\frac{1}{2}$ such that Eq. (\ref{eq1}) and Eq. (\ref{Aj}) are both satisfied if $A_{j\varphi}(\pi)=0$. In this way, from Stokes' theorem, one circle at the equator surrounds the topological charge and the entangled region gives rise to a mirror disk, as if the field lines induced by the topological charge only radiate on half of the surface. A relation with the spin magnetization at the poles can yet be written similarly to Eq. (\ref{polesC}). Indeed, we can re-write Eqs. (\ref{eq2}) and (\ref{eq1}) as $A_{j\varphi}(\pi) - A_{j\varphi}(0) = \frac{1}{4}(\langle \sigma^{r=0}_{jz}(0)\rangle - \langle \sigma^{r=0}_{jz}(\pi)\rangle) = \frac{1}{2}\langle \sigma_{jz}^{r=0}(0)\rangle = \frac{1}{2}\langle \sigma_{jz}(0)\rangle$. Since for the situation with the entangled wavefunction at the south pole we have $\langle \sigma_{jz}(\pi)\rangle = 0$, we then verify
\begin{equation}
\label{Cjspin}
C_j = \frac{q}{2} = \frac{1}{2}(\langle \sigma_{jz}(0) \rangle - \langle \sigma_{jz}(\pi) \rangle).
\end{equation}
This relation is equivalent to
\begin{equation}
C_j^2 = \frac{|q|}{4} = \frac{1}{4}.
\end{equation}
These relations (\ref{Aj}) and (\ref{Cjspin}) can also be verified by assuming any specific gauge representation for the two-spins wavefunction \cite{HH}. Eq. (\ref{Cjspin}) is important as it also represents a physical observable of the situation with $\frac{1}{2}$-topology and can be verified when rolling the spin from north to south pole along a meridian line adiabatically.
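Equations (\ref{eq2}), (\ref{eq1}) and (\ref{Aj}) become very concrete with an explicit gauge choice. The sketch below is illustrative only and assumes a gauge, mimicking the north-smooth gauge of a single sphere, in which $|\Phi_+\rangle$ carries no $\varphi$ phase and $|\Phi_-\rangle$ carries the phase $e^{i\varphi}$; it evaluates $A_{1\varphi}=-i\langle\psi|\partial_{1\varphi}|\psi\rangle$ by finite differences.
\begin{verbatim}
import numpy as np

eps = 1e-6
Pp = lambda ph: np.array([1.0, 0.0], dtype=complex)            # |Phi_+>
Pm = lambda ph: np.array([0.0, np.exp(1j*ph)], dtype=complex)  # |Phi_->

def A1(state, ph=0.4):
    # A_{1 phi} = -i <psi| d/d(phi_1) |psi>, differentiating sphere 1 only
    d1 = (state(ph + eps) - state(ph - eps))/(2*eps)
    return (-1j*state(ph).conj() @ d1).real

psi_north = lambda ph: np.kron(Pp(ph), Pp(0.0))                # theta = 0
psi_r0    = lambda ph: np.kron(Pm(ph), Pm(0.0))                # theta = pi, r = 0
psi_ent   = lambda ph: (np.kron(Pp(ph), Pm(0.0))
                        + np.kron(Pm(ph), Pp(0.0)))/np.sqrt(2) # EPR pair

print(A1(psi_north))   # A_{1phi}(0)         = 0
print(A1(psi_r0))      # A^{r=0}_{1phi}(pi)  = 1  -> q = 1
print(A1(psi_ent))     # A_{1phi}(pi)        = 1/2 -> C_j = 1/2
\end{verbatim}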
The above equations can also be re-written as $C_1=C_2=\frac{q}{2}$ and $C_1+C_2=q=\frac{1}{2}(\langle S_z(0)\rangle -\langle S_z(\pi)\rangle)$ where $S_z=\sigma_{1z}+\sigma_{2z}$. The total Chern number can then be measured in the triplet sub-space of the two-spins model, where the wavefunctions at the north and south poles find representations in the $S_z=+1$ and $S_z=0$ sectors. Here, we formulate local relations between correlation functions and topological properties. At the north pole, on the one hand, for the situation of the fractional entangled geometry we have the identities $\langle \sigma_{1z}(0)\sigma_{2z}(0)\rangle = |c_{++}(0)|^2=\langle \sigma_{jz}(0)\rangle^2 = (2C_j)^2=q^2$ with $q=1$. At the south pole, we can also write down $\langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle = |c_{++}(\pi)|^2 + |c_{--}(\pi)|^2 - |c_{+-}(\pi)|^2 - |c_{-+}(\pi)|^2=1-2(|c_{+-}(\pi)|^2 + |c_{-+}(\pi)|^2)$, equivalent to the Bell correlation function \cite{Aspect} if we measure the two sub-systems in the same direction. In the second equality, we have invoked the normalization of the wavefunction at the south pole. Now, using the fact that for the entangled wavefunction at the south pole we have $c_{++}(\pi)=c_{--}(\pi)=0$, the normalization of the wavefunction at the north and south poles also implies $|c_{++}(0)|^2 = |c_{+-}(\pi)|^2 + |c_{-+}(\pi)|^2$, which then results in the local identity at the south pole
\begin{equation}
\label{correlation}
\langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle = -(2C_j)^2=1 - 2(2C_j)^2 =-1.
\end{equation}
It is important to emphasize here that to show this equality we have used the formula $C_j = A_{j\varphi}(\pi) - A_{j\varphi}(0) = \frac{1}{2}\langle \sigma_{jz}(0)\rangle$, which is only applicable in the situation where we have an entangled state around the south pole. We can also introduce a relation with an observable defining entanglement, the bipartite fluctuations \cite{Song}, associated simply in the present situation to the variance of the spin measure for one subsystem $1$ or $2$,
\begin{eqnarray}
F_1 &=& \langle \psi| \sigma_{1z}^2\otimes \mathbb{I} |\psi\rangle - (\langle \psi | \sigma_{1z}\otimes \mathbb{I} |\psi\rangle)^2, \nonumber \\
F_2 &=& \langle \psi| \mathbb{I}\otimes \sigma_{2z}^2 |\psi\rangle - (\langle \psi | \mathbb{I}\otimes \sigma_{2z} |\psi\rangle)^2.
\end{eqnarray}
These bipartite fluctuations are defined positively through a measure along the $z$ direction for each sub-system and are symmetric, $F_1=F_2$. For the case of the Bell or EPR pair at the south pole, $F_1=F_2$ takes the maximum value $1$ such that we have the simple relation
\begin{equation}
F_1=F_2=|\langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle| = (2C_j)^2=1.
\end{equation}
Here, $F_1=F_2=1$ reflects the quantum uncertainty resulting from the formation of the entangled wavefunction through $\langle \psi | \sigma_{jz}(\pi) |\psi\rangle=0$. For a pure state occurring at the south pole for $r=0$, we would have $F_1=F_2=0$. Measuring $C_j=\frac{1}{2}$ for one sphere may then be defined as a measure of entanglement in the situation of spheres with radial magnetic fields. We can also interpret $C_j=\frac{1}{2}$ from the geometrical and transport properties of Secs. \ref{smooth} and \ref{ParsevalPlancherel}. When the boundary resides at $\theta_c=0^+$, from Sec. \ref{smooth} we obtain the equation
\begin{equation}
\label{transportCj}
- \oint A'_{j\varphi}(0^+) d\varphi = 2\pi C_j.
\end{equation}
Here, we use the correspondence for $\theta_c=0$: $A'_{j\varphi}(0^-)=A'_{j\varphi}(\theta<\theta_c)=A_{j\varphi}(0)-A_{j\varphi}(0)=0$ and $A'_{j\varphi}(0^+) = A'_{j\varphi}(\theta>\theta_c)=A_{j\varphi}(0)-A_{j\varphi}(\pi)=-C_j$. In this way, each sphere now presents a total $\pi$ Berry phase transported around the north pole of the sphere. The total system presents a Berry phase of $2\pi$, equivalent to a Chern number $C_1+C_2=1$. Now, we can re-interpret the left-hand side of Eq. (\ref{transportCj}) in the sense of transport on a cylinder geometry. Eq. (\ref{transportCj}) is equivalent to a current moving along the edge, defined here for a unit time
\begin{equation}
\label{J}
J_{\perp}^j = \frac{e}{2\pi}\oint \psi^*(0^+) i \partial_{j\varphi} \psi(0^+)d\varphi = eC_j.
\end{equation}
Here, $\psi(0^+)$ refers to the two-particle wavefunction at the top of the cylinder. This gives rise to a halved current compared to one sphere. The $\frac{1}{2}$ factor multiplying the charge $e$ may also be understood as follows, as a result of entanglement. The charge at $\theta_c=0^+$ should be in the appropriate quantum state $|\Phi_+\rangle$ for each sphere or each cylinder. In Sec. \ref{ParsevalPlancherel}, we have introduced the Newton equation and the de Broglie principle, which imply following the motion of a particle with charge $e$ and with a definite spin quantum number along the path when defining the pumped current in the transverse direction as the angle evolves from $\theta=\pi$ to $\theta_c=0^+$. Therefore, transmitting a charge $e$ in this protocol from the south pole along the path effectively corresponds to projecting $|\psi(\pi)\rangle$ onto either $\frac{1}{\sqrt{2}}|\Phi_+\rangle_1|\Phi_-\rangle_2$ or $\frac{1}{\sqrt{2}}|\Phi_-\rangle_1|\Phi_+\rangle_2$ with a probability $\frac{1}{2}$. The key point of the gauge-invariant argument in Eq. (\ref{J}) is that whatever projected state we choose, the produced edge current at the north pole will be halved compared to that of one sphere. Similarly to the interpretation of the topological number as $\frac{q}{2}$, Eq. (\ref{J}) can be thought of as $e\times\frac{1}{2}$, where the factor $\frac{1}{2}$ refers to the entangled wavefunction and the projection protocols required by the measure. A similar interpretation justifies the occurrence of $C_j=\frac{1}{2}$ in Eq. (\ref{conductance}) at the edges of the cylinder geometry. Here, we emphasize the parameters related to Eq. (\ref{eq:H}) required to realize this fractional topological entangled geometry. This situation can be precisely realized through a fine-tuning of the interaction $r$ for the situation of $\mathbb{Z}_2$ symmetry characterizing the $1\leftrightarrow 2$ symmetry in the presence of a global field $M_1=M_2=M$ \cite{HH}. The ground state at the north pole, with $\theta=0$, is $|\Phi_{+}\rangle_1|\Phi_{+}\rangle_2$ provided that $r f(0)<d+M$. At the south pole, with $\theta=\pi$, the ground state is $|\Phi_-\rangle_1|\Phi_-\rangle_2$ for $rf(\pi)<d-M$, but it is degenerate between the anti-aligned configurations for $rf(\pi)>d-M$. In that case, the presence of the transverse fields in the Hamiltonian along the path from north to south poles will then produce the analogue of resonating valence bonds. Indeed, from the south pole, second-order perturbation theory induces an effective Ising coupling similar to $H_{eff} = - \frac{d^2 \sin^2\theta}{r}\sigma_{1x}\sigma_{2x}$, which will then effectively favor the $S_z=0$ sector of the triplet state.
The curved space here is important, on the one hand, to ensure that the two classical antiferromagnetic states are degenerate at the south pole and, on the other hand, to induce locally the entanglement around the south pole. As a result, we obtain half-integer Chern numbers. Furthermore, for the simple constant interaction $f(\theta)=1$, this occurs within the range
\begin{equation}
\label{HM}
d-M<r<d+M.
\end{equation}
Around the south pole, the states $\frac{1}{\sqrt{2}}(|\Phi_+\rangle_1|\Phi_-\rangle_2 + |\Phi_-\rangle_1|\Phi_+\rangle_2)$ and $\frac{1}{\sqrt{2}}(|\Phi_+\rangle_1|\Phi_-\rangle_2 - |\Phi_-\rangle_1|\Phi_+\rangle_2)$ have respectively the energies $-r -\frac{d^2\sin^2\theta}{r}$ and $-r + \frac{d^2\sin^2\theta}{r}$. We mention here that driving from north to south pole to measure the topological number of each sphere according to Eqs. (\ref{drive}) and (\ref{Cjspin}) can yet be realized within the adiabatic limit, adjusting the parameters such that the energetics is respected within the protocol. Adjusting the angle $\theta=vt$ in time is equivalent to applying an electric field along the polar direction, as discussed in Sec. \ref{ParsevalPlancherel}. Including a ferromagnetic $xy$ interaction $r_{xy}<0$ between spins, $2 r_{xy}(\sigma_1^{+}\sigma_2^- + \sigma_1^{-}\sigma_2^+)$, would enhance the energy gap between these two states and therefore reinforce the occurrence of such a phenomenon. At the north pole, the $r_{xy}$ interaction does not change the ground state to first order in perturbation theory. This ensures a certain stability of the phase with $C_j=\frac{1}{2}$ on the two spheres, which is also important for practical applications. It is interesting to observe that even in the situation where $r_z=0$, the fractional topological state $C_j=\frac{1}{2}$ can yet occur when $-d-M<r_{xy}<-|d-M|$ \cite{HH}. The phase diagram in the plane $(r_{xy},r_z)$ then shows a stable and prominent dashed region with $C_j=\frac{1}{2}$, see Fig. \ref{Phasesrxyrz}. In particular, as long as we respect the $\mathbb{Z}_2$ $1 \leftrightarrow 2$ symmetry and Eq. (\ref{HM}) is satisfied, the phase with $C_j=\frac{1}{2}$ occurs. It is perhaps judicious to mention here that the proofs of Eqs. (\ref{Aj}) and (\ref{q2}) can in fact be generalized to three or multiple spheres, simply from the fact that we assume a resonating valence state at one pole. We give further detail in Sec. \ref{GRVBT} to show the usefulness and simplicity of the formalism related to a special class of wavefunctions allowing resonance between all possible states with one bound state, forming a domain wall in a classical antiferromagnet at the south pole as a result of frustration in a ring geometry. For instance, for three spheres with ground state wave functions $|\psi(0)\rangle = \Pi_{i=1}^3 |\Phi_+\rangle_i$ and $|\psi(\pi)\rangle=\frac{1}{\sqrt{3}}(|\Phi_+\rangle_1|\Phi_-\rangle_2|\Phi_-\rangle_3 + |\Phi_-\rangle_1|\Phi_+\rangle_2|\Phi_-\rangle_3+|\Phi_-\rangle_1|\Phi_-\rangle_2|\Phi_+\rangle_3)$, we obtain $C_j = \frac{2}{3}q$. The interpretation of these generalized fractional numbers remains applicable in terms of the Euler characteristics, formulated as the flux of the Berry curvature on the Riemann surface associated with each sphere. Each sphere is yet in a quantum superposition of a topological surface participating in the flux production of the Berry curvature and a quantum entangled region. These generalized fractions can be verified numerically when driving from north to south poles through Eq. (\ref{Cjspin}) \cite{HH}.
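A minimal static version of such a numerical check can be written by exact diagonalization of Eq. (\ref{eq:H}) with $f(\theta)=1$, following the ground state along a meridian and evaluating Eqs. (\ref{Cjspin}) and (\ref{correlation}). The sketch below uses illustrative parameters of ours satisfying Eq. (\ref{HM}); the angles are taken slightly away from the poles to lift the exact degeneracy of the anti-aligned states at $\theta=\pi$.
\begin{verbatim}
import numpy as np

d, M, r = 1.0, 0.4, 0.9          # illustrative values with d - M < r < d + M
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def ground(th, ph=0.0):
    dv = [d*np.sin(th)*np.cos(ph), d*np.sin(th)*np.sin(ph), d*np.cos(th) + M]
    h1 = -(dv[0]*sx + dv[1]*sy + dv[2]*sz)   # same field on both spheres
    H = np.kron(h1, I2) + np.kron(I2, h1) + r*np.kron(sz, sz)
    _, v = np.linalg.eigh(H)
    return v[:, 0]

def s1z(th):
    g = ground(th)
    return (g.conj() @ np.kron(sz, I2) @ g).real

thN, thS = 0.02, np.pi - 0.02
print(0.5*(s1z(thN) - s1z(thS)))                       # -> C_j ~ 1/2
g = ground(thS)
print((g.conj() @ np.kron(sz, sz) @ g).real)           # -> <s1z s2z> ~ -1
print(1 - s1z(thS)**2)                                 # -> F_1 ~ 1
\end{verbatim}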
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{Phasesrxyrz}
\caption{Phase diagram of two spheres with an Ising interaction $r_z=r$ and an interaction in the $xy$ plane $r_{xy}$ when we respect the $\mathbb{Z}_2$ symmetry $1\leftrightarrow 2$ implying that $M_1=M_2=M$. Three phases occur with $C_j=1$, $C_j=0$ and $C_j=\frac{1}{2}$.}
\label{Phasesrxyrz}
\end{figure}
In Table \ref{tableII}, we review the quantum symbols related to the $C_j=\frac{1}{2}$ phase, including a correspondence with Majorana fermions through $\langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle=\langle 2i\alpha_2\eta_1\rangle$ as developed in Sec. \ref{pwavewire}.
\begin{table}[t]
\caption{Symbols related to the $C_j=\frac{1}{2}$ fractional phase}
\centering
\begin{tabular}{c c}
\hline\hline
Smooth Fields and Observables & \hskip 0.5cm Definitions \\
\hline
$C_j$ & $\frac{1}{2\pi}\iint_{S^{2'}} \bm{\nabla}\times{\bf A}'_j\cdot d^2{\bf s}=\frac{q}{2}=\frac{1}{2}$\\
$C_j$ & $A_{j\varphi}(\pi)-A_{j\varphi}(0)=\frac{q}{2}=\frac{1}{2}$ \\
$C_j$ & $\frac{1}{2}\left(\langle \sigma_{jz}(0)\rangle-\langle \sigma_{jz}(\pi)\rangle\right)=\frac{q}{2}$\\
$\langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle=\langle 2i\alpha_2\eta_1\rangle$ & $1-2(2C_j)^2=-1$ \\
$\chi_j$ & $2-2C_j=0+1$ \\
$G_j$, $\sigma_{xy}^j$ & $\frac{1}{2}\frac{e^2}{h}$ \\
$-\oint A'_{j\varphi}(0^+) d\varphi$ & $2\pi C_j$ \\
\hline
\end{tabular}
\label{tableII}
\end{table}

\subsection{Mesoscopic Engineering of $C_j=\frac{1}{2}$}

There are certainly several ways to imagine and realize these two-spin-$\frac{1}{2}$ models in atomic and mesoscopic systems. Here, we recall that the topological numbers of one and two spheres have been measured in Ref. \cite{Roushan} through mesoscopic quantum circuits. In this system, the authors have built a two-spin model coupled through an $xy$ interaction from gmon qubits, and the $M$ parameter was activated on one spin only, such that the fractional topological state $C_j=\frac{1}{2}$ remains to be revealed. This existing platform shows that it is possible to observe the fractional topological state with current technology, assuming that a term $-M\sigma_{jz}$ acts symmetrically on the two spheres \cite{HH}. Here, we propose an alternative double-dot device to engineer the $C_j=\frac{1}{2}$ fractional topological state from charge qubits. In particular, an antiferromagnetic $r_z$ interaction and a ferromagnetic $r_{xy}$ interaction may both be engineered. Each dot is envisioned as a superconducting charge qubit such that $\sigma_{iz}$ measures locally the charge on each individual island. We assume that the coherence time of the system is sufficiently long such that we can measure the charge response, activating the angular parameter $\theta=\omega t\in [0;\pi]$ on the sphere in real time. For an introduction to charge qubits, see \cite{SPEC,Nakamura,Yale}. The system comprises two superconducting islands or charge qubits \cite{Heij}. Varying a (global) gate voltage capacitively coupled to each island in the plane corresponds to engineering the parameter $M$ symmetrically on the two islands in Eq. (\ref{fieldsphere}). The precise mapping between spin-$\frac{1}{2}$ and the charge operator is such that $\sigma_{iz} = 2q_i -1$ with $q_i=0$ and $1$ corresponding to the presence of zero or one additional Cooper pair. The mapping implies that on each island we have one or zero additional Cooper pair compared to the charge neutrality situation. This will then require adjusting the DC gate voltage through $M$.
The presence of a Coulomb interaction $E_c(\hat{q}_i - n)^2$, with $\hat{q}_i$ counting the number of additional Cooper pairs and $2 E_c n=V_g$ on each island, is essential to invoke the charge-spin correspondence. The system will function close to a charge degeneracy point with $n=\frac{1}{2}+\delta n$ on each dot, with $E_c \delta n=M$ such that $\delta n\ll 1$ to have $0$ or $1$ additional Cooper pairs. The charging energy $E_c$ is then the dominant energy scale for the mesoscopic system, implying also that $k_B T\ll E_c$ with $T$ the temperature and $k_B$ the Boltzmann constant. Below, we will fix $M$ to satisfy the prerequisite shown in Eq. (\ref{HM}) and realize the one-half topological number on each sphere. The term $-d\cos\theta\sigma_{iz}$ can be implemented as an AC gate voltage $-V_0\cos(\omega t)$ added to the DC gate voltage or $M$ parameter, such that we can identify $d=\frac{V_0}{2}$, and the polar angle on the Bloch sphere then reads $\theta=\omega t$. For $M=d=0$, the charge states $0$ and $1$ are perfectly degenerate. To implement the transverse field acting on each island, we can proceed similarly to superconducting charge qubits coupled to a superconducting reservoir (from the top) via Josephson junctions (usually sketched as ``crosses''). The transfer of one Cooper pair from the reservoir to each island is implemented through a term $\Delta \sigma_{i}^{+}+h.c.$ \cite{Yale}. Formally, here we should interpret $\Delta=\langle b\rangle$, where $b$ transfers one Cooper pair from the reservoir onto an island. The subtle step here is to implement the time-dependent prefactor $e^{i\omega t}$ such that the transverse field also depends on the angle $\theta$ playing the role of the polar angle on the sphere. The Cooper pairs in the superfluid reservoir are described by the Hamiltonian
\begin{equation}
H_{SF}= (V(t)+E_0)b^{\dagger} b + w \sum_{i} (b\sigma_{i}^{+}+h.c.)
\end{equation}
with $E_0$ the energy of the superfluid fraction, $V(t)$ a potential term from a battery attached to the superconducting reservoir and $w$ the hopping amplitude of Cooper pairs from the superconducting substrate to each island. In the literature \cite{Yale}, it is common to replace $b$ by $\langle b\rangle$ in the coupling term $w$. From the equations of motion, similarly as in quantum mechanics, the potential $V(t)$ can be absorbed as a time-dependent phase shift for the bosons describing the Cooper pairs, $\langle b\rangle(t) = \langle b\rangle e^{\frac{i}{\hbar}\int_0^{t} V(t')dt'}$. The key point then will be to fine-tune $V(t)=V=\hbar\omega$ such that the coupling between the superfluid reservoir and each island takes precisely the required form $\tilde{w} e^{i\omega t}\sigma_{i}^{+}+h.c.$ with $\tilde{w}=w\langle b\rangle$. Adjusting $\tilde{w}=d$ would realize Eq. (\ref{fieldsphere}) with a particular choice of azimuthal angle $\varphi=\frac{\pi}{4}$. Again, we recall that to measure the topological number when driving from north to south pole, any meridian line is equivalent. Deviating from $\tilde{w}=d$ would smoothly deform the sphere into an ellipsoid, preserving the same topological properties from the poles. Here, we address the implementation of the Ising interaction $r\sigma_{1z}\sigma_{2z}$ with $r>0$ and of a ferromagnetic $XY$ interaction $r_{xy}\sigma_{1}^+\sigma_2^- +h.c.$ with $r_{xy}<0$, which will also require adjusting the parameter $M$.
We assume that the capacitive coupling is prominent compared to the tunnel coupling between quantum dots. The mutual capacitance between islands give rise to a term proportional to $r(\sigma_{1z}+1)(\sigma_{2z}+1)$. In addition to the required Ising interaction, we observe the occurrence of an additional $r\sigma_{iz}$ term which may be absorbed in the gate voltage such that $M\rightarrow M'=(M-r)$. At the north pole, the additional gate potential term $-\sum_{i=1,2} M'\sigma_{iz}$ helps to stabilize the ground state $(1,1)$ if $M'>0$; a state $(q_1,q_2)$ counts the number of charges on the dot $1$ and on the dot $2$ with respect to the electrostatic charge neutrality situation. To realize the fractional topological state then we must also adjust $d-M'<r<d+M'$ with $d-M'>0$ to ensure a Dirac monopole in each sphere. The first inequality implies here to fix $d=\frac{V_0}{2}<M=E_c\delta n$ and the second inequality implies $r<\frac{d+M}{2}$. To stabilize a dominant ferromagnetic interaction $r_{xy}$ between islands we envision to add a cavity, for instance, inductively or capacitively coupled to the two dots (in the plane) through a Jaynes-Cummings Hamiltonian \cite{JaynesCummings} $H_{cavity} = \hbar\omega_a a^{\dagger} a + g\sum_{i=1,2} a^{\dagger}\sigma_{i}^{-} +h.c.$. The fact that the two islands couple to the same cavity mode produces a ferromagnetic $XY$ interaction which can be visualized by completing the square such that $a\rightarrow \tilde{a}=a+ \frac{g}{\hbar\omega_a}\sum_{i=1,2}\sigma_{i}^-$ in the limit where $g\ll \hbar\omega$ then maintaining the commutation relations for the harmonic oscillator. Then, \begin{equation} H_{cavity} = \hbar\omega_a \tilde{a}^{\dagger}\tilde{a} - \frac{g^2}{\hbar \omega_a} \sigma_1^{+}\sigma_2^{-}+h.c.. \end{equation} Therefore, the coupling $- \frac{g^2}{\hbar \omega_a}$ gives rise to a ferromagnetic $XY$ interaction that would help stabilizing the fractional topological state with $C_j=\frac{1}{2}$, the wavefunction $\frac{1}{\sqrt{2}}(|10\rangle + |01\rangle)$ being stabilized around the south pole. Here, the state $|q_1 q_2\rangle = |q_1\rangle |q_2\rangle = |q_1\rangle\otimes |q_2\rangle$. The sign of $r_{xy}$ is important to decrease the energy of the entangled state at south pole. This induced coupling from a cavity and the occurrence of the eigenstate $\frac{1}{\sqrt{2}}(|10\rangle + |01\rangle)$ is observed in mesoscopic circuits \cite{Majer}. The measure of the topological number for an island would then necessitate to measure the evolution of the (average) charge in real time according to Eq. (\ref{Cjspin}). It is interesting to observe that the charge response to an AC gate voltage has engendered recent technological progress \cite{Filippone}. We also mention here recent efforts in double-dot graphene systems to tune the system from the $(1,1)$ state to the charge degeneracy region comprising the $(1,0)$ and $(0,1)$ states by increasing the power of a microwave cavity \cite{Deng}. These facts together with the recent progress realized in Ref. \cite{Roushan} pleasantly suggest that it may be possible to observe the fractional topological numbers, address the relation to entanglement properties and also engineer entangled states through smooth gates in real time (here activated by the linear evolution in time of the polar angle $\theta=\omega t$). 
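As a small consistency check of this cavity-induced coupling, one can diagonalize the two-qubit Jaynes-Cummings Hamiltonian with a truncated Fock space and compare the splitting between the symmetric and antisymmetric one-Cooper-pair states with $2g^2/(\hbar\omega_a)$. The Python sketch below isolates $H_{cavity}$ alone (setting $\hbar=1$); the values of $\omega_a$ and $g$ are illustrative and chosen in the dispersive regime $g\ll\hbar\omega_a$.
\begin{verbatim}
import numpy as np

nmax = 6                      # photon Fock-space truncation
wa, g = 5.0, 0.2              # cavity frequency and coupling (hbar = 1)
adag = np.diag(np.sqrt(np.arange(1, nmax)), -1)  # photon creation operator
a = adag.conj().T
Iq = np.eye(2)
sm = np.array([[0., 0.], [1., 0.]])              # sigma^- for one charge qubit
sp = sm.T

def kron3(A, B, C):
    return np.kron(A, np.kron(B, C))

# H_cavity = wa a^dag a + g sum_i (a^dag sigma_i^- + h.c.)
H = wa * kron3(adag @ a, Iq, Iq) \
    + g * (kron3(adag, sm, Iq) + kron3(a, sp, Iq)
           + kron3(adag, Iq, sm) + kron3(a, Iq, sp))

w = np.linalg.eigvalsh(H)
# the symmetric state (|10>+|01>)/sqrt(2) is pushed ~2 g^2/wa below
# the antisymmetric combination, which stays degenerate with the vacuum
print(w[1] - w[0], 2 * g**2 / wa)
\end{verbatim}
The computed splitting matches $2g^2/(\hbar\omega_a)$, i.e., twice $|r_{xy}|=g^2/(\hbar\omega_a)$, confirming that the cavity stabilizes the symmetric combination $\frac{1}{\sqrt{2}}(|10\rangle+|01\rangle)$ as discussed above.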
\subsection{Robustness of Geometry through Superellipses}
\label{Geometry}
Here, we emphasize the stability of the geometry from the poles through the formula $C=A_{\varphi}(\pi)-A_{\varphi}(0)$. The geometry can be adjusted from a Gabriel Lam\' e curve or superellipse in the plane. Let us proceed similarly to the cylinder geometry of Sec. \ref{cylinderformalism} and use the dressed coordinates $(r,\varphi,z)$ such that $z=\cos\theta$, with $\theta$ referring to the polar angle in spherical coordinates and $\varphi$ again representing the angle in the plane. We can adjust the applied magnetic field as
\begin{equation}
{\bf d} = d(R^{-\frac{1}{n}}|x|^{\frac{1}{n}}, R^{-\frac{1}{n}}|y|^{\frac{1}{n}}, \cos\theta).
\end{equation}
In the equatorial plane, the parameters' space is now defined as
\begin{equation}
d_x^2+d_y^2 = d^2 R^{-\frac{2}{n}} (|x|^{\frac{2}{n}} + |y|^{\frac{2}{n}}).
\end{equation}
Suppose we study trajectories for the spin-$\frac{1}{2}$ at constant energy, implying that we navigate on curves in the equatorial plane with $d_x^2+d_y^2=d^2$, leading to the equation of a superellipse of the form
\begin{equation}
|x|^\frac{2}{n} + |y|^\frac{2}{n} = R^{\frac{2}{n}}.
\label{equation}
\end{equation}
This is the equation of a Lam\' e curve or superellipse, which can now be solved through
\begin{eqnarray}
x &=& R\left(\cos\varphi\right)^n \hbox{sgn}(\cos\varphi) \\ \nonumber
y &=& R\left(\sin\varphi\right)^n \hbox{sgn}(\sin\varphi).
\end{eqnarray}
The case $n=1$ with $R=1$ corresponds to the unit sphere, with the equation of a circle in the equatorial plane. Starting from the equatorial plane, we build a surface delimited by the curve (\ref{equation}) and the height $H=2$. Here, $z=\frac{H}{2}=+1$ corresponds to $\theta=0$ and $z=-\frac{H}{2}=-1$ corresponds to $\theta=\pi$. The cylinder geometry of Sec. \ref{cylinderformalism} corresponds to the special case $n=1$ where the structure in the plane is a circle of unit radius $R=1$. Similarly as in Sec. \ref{cylinderformalism}, we can now identify the Berry curvature $F(\varphi,z)=-\partial_z A_{\varphi} = \frac{1}{2}$, allowing for the identification $A_{\varphi}=-\frac{z}{2}$ with $z=\cos\theta$ from the sphere. The Berry curvature is oriented in the plane such that only the vertical region of the surface participates in the topological properties. At the top surface we have $A_{\varphi}(0^+)=-\frac{1}{2}$ and at the bottom surface we have $A_{\varphi}(\pi^-)=\frac{1}{2}$. It is interesting to mention that within this approach the Berry curvature is constant on the surface. We can now adjust the parameter $R$ for a given value of $n$ to reproduce $\frac{C}{2\pi}=1$ on each surface as in Sec. \ref{cylinderformalism}. This is precisely the identity to obtain a topological charge unity in the core of the geometry. For $n=2$, the system acquires the same topology as a cube with a topological charge inside. In that case, we have the identification $4\sqrt{2}R=2\pi$, adjusting the perimeter $\oint \sqrt{dx^2+dy^2}$ of the rhombus in the plane for $n=2$ to the perimeter of the circle forming the structure of the cylinder geometry for $n=1$. For $n=3$, we have an astroid in the plane and we can similarly adjust $6R=2\pi$ to obtain a topological charge inside the three-dimensional geometry.
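These perimeter adjustments can be checked numerically. The minimal Python sketch below evaluates $\oint\sqrt{dx^2+dy^2}$ for the parametrization above and rescales $R$ so that the perimeter matches $2\pi$:
\begin{verbatim}
import numpy as np

def perimeter(n, R=1.0, num=200001):
    # x = R |cos phi|^n sgn(cos phi), y = R |sin phi|^n sgn(sin phi)
    phi = np.linspace(0.0, 2.0*np.pi, num)
    dx = -n*R*np.abs(np.cos(phi))**(n - 1)*np.sin(phi)
    dy = n*R*np.abs(np.sin(phi))**(n - 1)*np.cos(phi)
    return np.trapz(np.hypot(dx, dy), phi)

for n in (1, 2, 3):
    R = 2.0*np.pi/perimeter(n, 1.0)   # rescale the Lame curve to perimeter 2*pi
    print(n, R, perimeter(n, R))
\end{verbatim}
The output reproduces $R=1$ for $n=1$ (the circle), $4\sqrt{2}R=2\pi$ for $n=2$ (the rhombus) and $6R=2\pi$ for $n=3$ (the astroid).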
An important deduction from Sec. \ref{cylinderformalism} is that we can still apply a potential difference $V_b-V_t$ such that, within the same travelling time $T=\frac{h}{2qE}$ for a charge to navigate from north to south pole, we observe the same quantized conductance $G=\frac{q^2}{h}C$ associated with the two edge currents according to Eq. (\ref{conductance}). Also, since the vertical direction is $z=\cos\theta$ in all these geometries, as for the unit sphere, we can include interactions between two of these geometries and, from the top and bottom regions, similarly realize a fractional topological state as in Sec. \ref{fractionaltopology}.
\section{Quantum Anomalous Hall Effect}
\label{anomalous}
Here, we show the applicability of this formalism to topological lattice models, starting from the honeycomb lattice. We begin the discussion with a situation presenting dispersive topological Bloch energy bands, the Haldane model \cite{Haldane}, related to the quantum anomalous Hall effect. A quantum Hall effect driven by a uniform magnetic field, initially observed in typical MOSFET structures \cite{Hall}, can equally arise on the honeycomb lattice \cite{Zhang,Novoselov} and will be addressed in the next Section related to the light-matter coupling and quantum transport. These models belong to class $A$ in classification tables following E. Cartan notations; see for instance Refs. \cite{BernevigNeupert,RyuTable,KitaevBottTable,AltlandZirnbauer,Zirnbauer1,Zirnbauer2}.
\subsection{Honeycomb lattice, Graphene and Realization of a Topological Phase}
\label{spherelattice}
The orbital $2p_z$ of the carbon atom leads effectively to a one-band model on the honeycomb lattice as a result of the $sp^2$ hybridization. The system is half-filled if the chemical potential is zero. Its band structure was calculated by P. R. Wallace in 1947 \cite{Wallace} in relation to three-dimensional graphite. The physics of one graphene plane (layer) has attracted tremendous attention in the community \cite{graphene}. The honeycomb lattice is interesting on its own due to the relation with the Dirac equation. Graphene is a semi-metal presenting two inequivalent $K$ and $K'$ Dirac points in its Brillouin zone. Introducing $K=(\frac{2\pi}{3a},\frac{2\pi}{3\sqrt{3}a})$ and $K'=(\frac{2\pi}{3a},-\frac{2\pi}{3\sqrt{3}a})$, with $a$ the lattice spacing in real space, the Hamiltonian of the tight-binding model can be written similarly to that of a spin-$\frac{1}{2}$ particle
\begin{eqnarray}
\label{Kspectrum}
H({\bf p}) &=& \hbar v_F(p_x\sigma_x + \zeta p_y\sigma_y),
\end{eqnarray}
with $\sigma_x$ and $\sigma_y$ referring to Pauli matrices acting on the Hilbert space formed by a dipole $|A,B\rangle$ between two nearest neighbors. The Pauli matrix $\sigma_z$ plays the role of a pseudo-spin magnetization measuring the relative occupancy of each sublattice. Here, we introduce ${\bf p}=(p_x,p_y)$ as a wavevector measured from each Dirac point, such that the Fermi velocity reads $v_F=\frac{3}{2\hbar}ta$ with $t$ the hopping amplitude between nearest neighbors, defined through the vectors $\mathbfit{\delta}_j$ which can equivalently be written in terms of the Bravais lattice vectors of the triangular lattice, $\mathbfit{\delta}_1-\mathbfit{\delta}_3=-{\bf b}_2$ and $\mathbfit{\delta}_2-\mathbfit{\delta}_1={\bf b}_1$. The ${\bf b}_j$ vectors in Fig. \ref{graphenefig} are then defined as ${\bf b}_1 = \frac{a}{2}(3,-\sqrt{3})$, ${\bf b}_2 = -\frac{a}{2}(3,\sqrt{3})$ and ${\bf b}_3 = (0,\sqrt{3}a)$.
The sum on ${\bf b}_j$ includes the 6 second nearest neighbors, which can be grouped in pairs similarly to the square lattice. Here and hereafter, we apply the definition that $\zeta=\pm$ at the $K$ and $K'$ points respectively. The energy spectrum is linear, $E({\bf p})=\pm \hbar v_F|{\bf p}|$, close to a Dirac point and, through the identification $p_x=|{\bf p}|\cos\tilde{\varphi}$ and $p_y=|{\bf p}|\sin\tilde{\varphi}$, this gives rise to a cone structure centered around $K$ and $K'$. The presence of a linear $-i\hbar\bm{\nabla}$ operator implies the presence of positive- and negative-energy bands, referring to the occupied particle band and the empty hole band for the ground state at zero temperature at half-filling, linking matter with anti-matter. This is an application of the Dirac equation (in two dimensions)
\begin{equation}
i\hbar\gamma^{\mu}\partial_{\mu}\psi-mc\psi=0,
\end{equation}
where the Clifford matrices, usually written $\gamma^{\mu}$, are Pauli matrices. Here, we have $\gamma_1=\sigma_x$ and $\gamma_2=\sigma_y$. The particles on the lattice, for instance the electrons in graphene, play the role of massless Weyl particles (with mass $m=0$) and $v_F$ can be seen as the speed of light $c$ (being 300 times smaller for graphene). When $m=0$, the two Dirac points show opposite $\pm \pi$ Berry phases such that the total Berry phase in the whole Brillouin zone is zero \cite{graphene,bilayerQSH}. Here, we introduce a class of topological models where the system becomes an insulator, with the opening of an energy gap, through the occurrence of a term $-m\zeta \sigma_z$. The energy spectrum at the two Dirac points turns into $E=\pm\sqrt{(\hbar v_F)^2|{\bf p}|^2+m^2}$, such that the energy gap $m$ can also be identified with a mass in the relativistic sense, defining $m=m^*c^2$ with $c=v_F$ here.
\begin{figure}[ht]\centering
\includegraphics[width=0.25\textwidth]{Lattice.png}
\includegraphics[width=0.21\textwidth]{Brillouin.png}
\caption{(Left) Honeycomb lattice in real space with the two sublattices $A$ and $B$ and two Bravais lattice vectors ${\bf b}_1$ and $-{\bf b}_2$ defining a triangular lattice. The vectors $\mathbfit{\delta}_i$ characterize the hopping of particles between nearest-neighbor sites, giving rise to the Dirac cones in the energy spectrum. The Haldane model also includes the $it_2$ hopping term between second nearest neighbors. (Right) Brillouin zone. Here, we have represented one $M$ (high-symmetry) point. A high-symmetry point in the Brillouin zone satisfies the translation symmetry with respect to a reciprocal lattice vector, $-{\bf \Gamma}_i = {\bf \Gamma}_i +{\bf G}$ with ${\bf G}=n_1{\bf g}_1 +n_2{\bf g}_2$ and $(n_1,n_2)\in\mathbb{Z}$.}
\label{graphenefig}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.4\textwidth]{Haldanespectrum.png}
\includegraphics[width=0.39\textwidth]{haldane_dos_0.159_kmesh30.pdf}
\caption{(Top) Energy spectrum for the Haldane model with $t_2=0.15$ and $t=1$ (with $\hbar=1$). We also present the protocol showing the effect of each circular polarization of light at the two Dirac points related to Sec. \ref{light}. We analyze the topological and physical responses through the Bloch sphere.
(Bottom) Density of states showing the importance of states close to $E=\pm m$ at the Dirac points for these parameters.}
\label{Haldanespectrum.pdf}
\end{figure}
In this way, we observe a simple identification with the Bloch sphere formalism and the radial magnetic field
\begin{eqnarray}
\label{correspondence}
&& -d(\cos\varphi\sin\theta, \sin\varphi \sin\theta, \cos\theta) = \\ \nonumber
&& (\hbar v_F|{\bf p}|\cos\tilde{\varphi},\hbar v_F|{\bf p}|\sin(\zeta\tilde{\varphi}),-\zeta m).
\end{eqnarray}
Choosing $d,m>0$ and $d=m$, we can then identify the $K$ Dirac point with the north pole of the sphere at $\theta=0$ and the $K'$ Dirac point with the south pole at $\theta=\pi$. Close to the north pole or $K$ point, we can now relate the angle $\tilde{\varphi}$ associated with the cone geometry in the energy band structure to the azimuthal angle $\varphi$ of the sphere, such that $\tilde{\varphi}=\varphi\pm \pi$. The Dirac cone at the $K$ point is now centered at the north pole with
\begin{equation}
\label{tan}
\tan\theta = \frac{\hbar v_F|{\bf p}|}{m}.
\end{equation}
Close to the south pole corresponding to the $K'$ point, this requires modifying $\varphi\rightarrow -\varphi$ and we also have $m\rightarrow -m$. In this way, this tight-binding model is equivalent to the topological Bloch sphere with an effective radial magnetic field induced here by the hopping term on the lattice and the `mass' term $-m\zeta\sigma_z$. The key point in the formalism is that the smooth fields defined in Sec. \ref{smooth} are invariant under the modification $\varphi\rightarrow -\varphi$. This allows us to conclude that this insulator is described from its ground state through $|\psi_+\rangle$ on the sphere by a topological invariant $C=+1$ locally from the poles. Here, it is important to emphasize that the situation of graphene corresponding to $m=t_2=0$ is special, since $\tan\theta\rightarrow \pm\infty$ for any infinitesimal $v_F|{\bf p}|$ around the two Dirac points. Then, formally, the two Dirac cones sit in the equatorial plane (with $C=0$).
\\
\subsection{Haldane Model and Quantum TopoMetry}
\label{topometrylattice}
Here, we relate the physics induced by the term $-\zeta m \sigma_z$ to the Haldane model. We introduce the second-nearest-neighbor term $t_2 e^{i\phi}$ from the definitions of Fig. \ref{graphenefig} (with $t_2$ real; to illustrate the effect simply, we fix $\phi=\frac{\pi}{2}$) such that on a green triangle in Fig. \ref{graphenefig} the accumulated phase is non-zero. The important point here is to have a complex phase attached to the $t_2$ hopping term. In this discussion, particles or electrons are assumed to be spin-polarized. If we invert the direction on a path, then we should modify $\phi\rightarrow -\phi$. If a wave performs a closed path on a honeycomb cell or ring, A-B-A-B-A-B-A, since the Peierls phase associated with the nearest-neighbor term is zero assuming $t\in\mathbb{R}$, we conclude that the total phase acquired is zero. From Stokes' theorem, we infer that on a honeycomb cell $\iint {\bf B}\cdot {\bf n} d^2 s =0$, with ${\bf B}$ the magnetic field induced from the vector potential. We are in a case where, on each triangular lattice formed by each sublattice, the magnetic flux $\frac{3\pi}{2}$ (defined for simplicity in units of the phase $\frac{\pi}{2}$) is staggered. Therefore, we can check that on a honeycomb ring of the lattice the total magnetic flux is $\frac{3\pi}{2}-\frac{3\pi}{2}=0$.
Locally, the magnetic fluxes on a triangle can nevertheless induce a topological phase. For $\phi\neq 0$, and precisely for $\phi=\frac{\pi}{2}$ here, this gives rise to an additional term in the Hamiltonian
\begin{equation}
\label{t2term}
H_{t_2} = H_{t_2}^A + H_{t_2}^B = -\sum_{\bf k}\sum_{{\bf b}_j} 2 t_2 \sin({\bf k}\cdot{\bf b}_j) \sigma_z.
\end{equation}
The phase $\phi\neq 0$ is important in this proof to produce an odd function under the parity transformation ${\bf k}\rightarrow -{\bf k}$, and going from $B$ to $A$ is equivalent to modifying ${\bf b}_j\rightarrow -{\bf b}_j$. For the general situation of arbitrary values of $\phi$, this results in an additional $\sin \phi$ prefactor in Eq. (\ref{t2term}). We can now write the Haldane model as an effective spin-$\frac{1}{2}$ model with a ${\bf d}$-vector
\onecolumngrid
\begin{eqnarray}
{\bf d}({\bf k}) = \left(t\sum_{\mathbfit{\delta}_i} \cos({\bf k}\cdot \mathbfit{\delta}_i), t\sum_{\mathbfit{\delta}_i}\sin({\bf k}\cdot \mathbfit{\delta}_i), 2 t_2 \sum_{{\bf b}_j} \sin({\bf k}\cdot{\bf b}_j)\right).
\label{dvector}
\end{eqnarray}
\twocolumngrid
Each value of ${\bf k}$ corresponds to a point on the sphere $(\theta,\varphi)$. Using the form of the ${\bf b}_j$ vectors, we then identify
\begin{equation}
\label{hz}
d_z({\bf K}) = 2t_2 \sum_{{\bf b}_j} \sin({\bf K}\cdot {\bf b_j}) = 3\sqrt{3}t_2 =m.
\end{equation}
Each term gives the same contribution since $\sin({\bf K}\cdot{\bf b}_1)=\sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$, $\sin(K_y b_3)=\sin(\frac{2\pi}{3})$ and $\sin({\bf K}\cdot{\bf b}_2)=\sin(-\frac{4\pi}{3})=\frac{\sqrt{3}}{2}$. All the equivalent Dirac points in the Brillouin zone have the same properties. From the form of $d_z$, if we apply the parity transformation on the $K$ point, changing ${\bf k}\rightarrow -{\bf k}$, then we find $d_z({\bf K}')=-m$. We verify this specific form from the identity $\sin(K_j' b_j)=-\sin(K_j b_j)$. The $d_z$ term is then equivalent to $\zeta m$ close to the two Dirac points, with $\zeta=\pm 1$ at the $K$ and $K'$ points respectively. From the preceding Sec. \ref{spherelattice}, this allows us to conclude that the $d_z$ term opens a gap at the Fermi energy and that the lowest-energy band is characterized through a topological number $C=+1$. Also, through Eq. (\ref{polesC}), the topological number can be visualized as the addition of the Berry phases around the Dirac points, as verified numerically \cite{bilayerQSH}. On the lattice, $(2\pi)(A_{\varphi}(\pi)-A_{\varphi}(0))$ precisely corresponds to the addition of the Berry phases at the two Dirac points. Assuming small circles around each Dirac point, the Berry connection $A_{\varphi}$ is independent of $\varphi$ as all $\varphi$ angles are equivalent; it is uniformly distributed around the small circle encircling a Dirac point. The relative minus sign between $A_{\varphi}(\pi)$ and $A_{\varphi}(0)$ takes into account that close to the south pole we have redefined $\varphi\rightarrow -\varphi$ according to the discussion around Eq. (\ref{correspondence}).
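As a quick numerical check of Eq. (\ref{hz}), the short Python sketch below evaluates $d_z$ at the two Dirac points from the ${\bf b}_j$ vectors defined above (with $a=1$ and the illustrative value $t_2=0.15$):
\begin{verbatim}
import numpy as np

a, t2 = 1.0, 0.15
bvec = np.array([[ 1.5, -np.sqrt(3)/2],      # b_1
                 [-1.5, -np.sqrt(3)/2],      # b_2
                 [ 0.0,  np.sqrt(3)]]) * a   # b_3
K  = np.array([2*np.pi/(3*a),  2*np.pi/(3*np.sqrt(3)*a)])
Kp = np.array([2*np.pi/(3*a), -2*np.pi/(3*np.sqrt(3)*a)])

def dz(k):
    return 2 * t2 * np.sum(np.sin(bvec @ k))

print(dz(K), 3*np.sqrt(3)*t2)   # +m at the K point
print(dz(Kp))                   # -m at the K' point
\end{verbatim}
The output confirms $d_z({\bf K})=3\sqrt{3}t_2=m$ and $d_z({\bf K}')=-m$.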
We emphasize here that the poles of the sphere play a special role in the proof of Sec. \ref{smooth}. Indeed, we have defined the topological number on $S^{2'}$ subtracting the two poles on the surface of the sphere where we define ${\bf A}'=0$. Also, ${\bf A}$ is uniquely defined at these points and smooth (if we fix a $\varphi$-representation of the Hilbert space) through the identification that
\begin{equation}
C = \frac{1}{2\pi}\iint_{S^{2'}} \bm{\nabla}\times{\bf A}'\cdot d^2{\bf s} = \frac{1}{2\pi}\iint_{S^{2'}} \bm{\nabla}\times{\bf A}\cdot d^2{\bf s}.
\end{equation}
This has precisely resulted in the identification $C=A_{\varphi}(\pi)-A_{\varphi}(0)$, which is then gauge invariant from the formulation of Stokes' theorem, when gently moving the boundary onto one of the two poles. For comparison, in graphene the sum of the Berry phases around the two Dirac points is zero \cite{graphene,bilayerQSH}. We emphasize here that the peculiarity of the present approach, applying Stokes' theorem with two domains, is that, compared to previous approaches \cite{Kohmoto}, the Berry field ${\bf A}$ is continuous at the interface and the discontinuity is absorbed in the definition of ${\bf A}'$. In Fig. \ref{Haldanespectrum.pdf}, we show the energy band structure for the specific situation with $t=1$ and $t_2=0.15$. We have $m=0.779423...$. The density of states is large close to the Dirac points, showing that the description close to the poles of the sphere is particularly meaningful for this range of $t_2$ values. As shown in Fig. \ref{Haldanespectrum2.pdf}, the system shows one chiral edge mode at the edges of the sample, similarly to the quantum Hall effect. In Sec. \ref{Paritysymmetry}, we will show that the topological number can also be measured from the $M$ point between $K$ and $K'$ in the Brillouin zone via the light-matter interaction through the properties of the tight-binding model. This allows us to verify that we remain within the same topological phase for larger $t_2$, where the spectrum flattens around the Dirac points. Here and hereafter, the letter $M$ represents either a high-symmetry point in the Brillouin zone or a term $M\sigma_z$ in the Hamiltonian as introduced in Eq. (\ref{fieldsphere}). If we include a Semenoff mass $M\sigma_z$ corresponding to a staggered potential on the lattice \cite{Semenoff}, then one can effectively close the gap in the band structure at one Dirac point only (see Fig. \ref{Haldanespectrum2.pdf}) and induce, for $M>m$, a transition towards a charge density wave with only one sublattice occupied in real space. This topological transition can also be interpreted from the fact that the topological charge leaks out from the sphere in the sense of the Poincar\' e-Hopf theorem. The topological number is described through a Heaviside step function, leading to $C=\pm\frac{1}{2}$ at the transition from Eq. (\ref{polesC}) and from the fact that one pole of the sphere shows $\langle \sigma_z\rangle=0$ and the other pole $\langle \sigma_z\rangle = \pm 1$. This also leads to $C=\frac{1}{2}=A_{\varphi}(\pi)-A_{\varphi}(0)$, such that the sum of the two Berry phases encircling the two Dirac points is equal to $\pi$. This conclusion will be developed in Sec. \ref{Semenoff} from the smooth fields. The topological charge is then in a superposition of `leaking in' and `leaking out' of the sphere. Experiments in circuit quantum electrodynamics measure $C\approx 0.4$ at the topological transition, related to the step-function profile \cite{Roushan}. For the Haldane model, such a topological semi-metal arises only at the phase transition when tuning $M$. In Secs.
\ref{bilayer} and \ref{semimetalclass}, related to the two-spheres' model, we will show the possibility of topological nodal ring semi-metals where one-half topological numbers, defined locally from the Dirac points, can be stabilized in a finite region of the phase space. We emphasize here that the Haldane model \cite{Haldane} has become a standard model, being realized in quantum materials \cite{Liu}, cold atoms \cite{Hamburg,Jotzu,Monika}, light systems \cite{HaldaneRaghu,Joannopoulos,Ozawa,KLHlightnetworks} and when shining circularly polarized light on graphene \cite{McIver}, such that the phase associated with the $t_2$ parameter can be adjusted in several platforms. Applying the results of Sec. \ref{polarizationlight}, the present formalism also reveals that circular polarizations of light can turn graphene into a Haldane topological insulator. The quantum anomalous Hall effect can also be realized on the Kagome lattice as a result of artificial gauge fields, produced for instance by Josephson-junction circulators \cite{Koch,KagomeAlex}. Progress in realizing these gauge fields locally in circuit quantum electrodynamics has recently been achieved \cite{SantaBarbaraChiral}. From the quantum field theory perspective, coupling the electromagnetic fields to Dirac fermions can also reveal a correspondence with Chern-Simons theory and a Green's function approach \cite{Yakovenko}. This is also related to general questions on the parity anomaly \cite{Redlich,NiemiSemenoff}. It is also relevant to mention early efforts on $\hbox{Pb-Te}$ semi-conductors related to applications of the parity anomaly in condensed-matter systems \cite{Fradkin}.
\begin{figure}[ht]
\includegraphics[width=0.35\textwidth]{EdgesM=0}
\includegraphics[width=0.35\textwidth]{HaldaneEdgesm=M}
\caption{(Top) The topological properties of the model are revealed when solving the energy band structure on a cylinder or geometry presenting edges. The number of discretized points in the two directions is 100 and 30 and the parameters are identical to those in Fig. \ref{Haldanespectrum.pdf}. The system has a chiral zero-energy state localized at the edges in the topological phase associated with $C=1$. (Bottom) If $|M|=|m|$ the gap closes at one Dirac point only, leading to $C=\frac{1}{2}$. On the figure, the Semenoff mass is included through $+M\sigma_z$ in the Hamiltonian (if $|M|>m$, then $C=0$).}
\label{Haldanespectrum2.pdf}
\end{figure}
\subsection{Interaction Effects and Mott Transition}
\label{Mott}
Interactions can also mediate charge-density-wave or Mott transitions when the interaction strength becomes significant or comparable to the energy band gap. Here, we show that the smooth-fields formalism allows us to include interaction effects from the reciprocal space of the lattice model and, through an analogy with the phase transition induced by a Semenoff mass, we provide a simple estimate of the transition line. This momentum-space formulation then provides a simple way to describe the topological properties from the poles of the sphere, or from the Dirac points, in the presence of interactions. For spin-polarized electrons, the dominant interaction takes the form
\begin{equation}
\label{interaction}
H_V = V\sum_{i,p} \hat{n}_i \hat{n}_{i+p}
\end{equation}
with $i\in\hbox{sublattice}(A,B)$, $\hat{n}_i=c^{\dagger}_i c_i$ and $p$ running over the 3 nearest neighbors belonging to the other sublattice.
To incorporate interaction effects, we introduce the second-quantized representation with creation and annihilation operators on each sublattice. We begin with a simple mean-field approach such that
\begin{eqnarray}
\hskip -0.5cm H_V = V \sum_{i,p} [-(\phi_0 +\phi_z)c^{\dagger}_{i+p}c_{i+p} - (\phi_0-\phi_z)c^{\dagger}_i c_i && \\ \nonumber
+ c^{\dagger}_i c_{i+p}(\phi_x-i\phi_y) + c^{\dagger}_{i+p} c_{i}(\phi_x+i\phi_y) && \\ \nonumber
-\left(\phi_0^2 - \phi_z^2 -\phi_x^2 - \phi_y^2\right)]. &&
\end{eqnarray}
Minimizing the ground-state energy is equivalent to introducing the definitions
\begin{equation}
\label{phir}
\phi_r = - \frac{1}{2}\langle \Psi_i^{\dagger} \mathbfit{\sigma} \Psi_i \rangle
\end{equation}
with $\Psi_i = (c_i , c_{i+p})$, where the $\sigma_r$ are the Pauli matrices, with $\sigma_0=\mathbb{I}$ the $2\times 2$ identity matrix. We can then introduce the effect of the interactions from the reciprocal space, which results in the $2\times 2$ matrix \cite{Klein}
\begin{equation}
H(\bm{k}) =
\begin{pmatrix}
\gamma({\bm k}) & -g({\bm k}) \\
-g^*({\bm k}) & - \gamma({\bm k})
\end{pmatrix} \quad
\end{equation}
with
\begin{eqnarray}
\label{parameters}
\gamma({\bf k}) &=& 3V\phi_z -2t_2\sum_{\bf p} \sin({\bf k}\cdot{\bf b}_p) \\ \nonumber
g({\bf k}) &=& (t_1-V(\phi_x+i\phi_y))\cdot\left(\sum_{\bf p} \cos({\bf k}\cdot\mathbfit{\delta}_p) - i\sin({\bf k}\cdot\mathbfit{\delta}_p)\right).
\end{eqnarray}
Using the mapping with the Bloch sphere close to the Dirac points, from Secs. \ref{spherelattice} and \ref{topometrylattice}, we identify
\begin{eqnarray}
\cos\theta({\bf p}) &=& \frac{1}{\epsilon({\bf p})}(\zeta d_z({\bf p}) -3V\phi_z) \\ \nonumber
\sin\theta({\bf p}) &=& \frac{1}{\epsilon({\bf p})}(\hbar v_F -\frac{3}{2}Va(\phi_x + i\phi_y))|{\bf p}|,
\end{eqnarray}
with $\epsilon({\bf p})=\sqrt{|g({\bf p})|^2+\gamma^2({\bf p})}$, and we recall that $\zeta=\pm$ at the $K$ and $K'$ Dirac points, with ${\bf p}$ corresponding to a small wave-vector displacement from a Dirac point. This gives rise to
\begin{equation}
\langle c^{\dagger}_{A{\bf p}}c_{A{\bf p}}\rangle = \frac{1}{2} +\frac{1}{2\epsilon({\bf p})}\left(\zeta d_z({\bf p}) -3V\phi_z\right),
\end{equation}
and
\begin{equation}
\langle c^{\dagger}_{B{\bf p}}c_{B{\bf p}}\rangle = \frac{1}{2}-\frac{1}{2\epsilon({\bf p})}\left(\zeta d_z({\bf p}) -3V\phi_z\right).
\end{equation}
Therefore, at the Dirac points (${\bf p}\rightarrow {\bf 0}$), this leads to the simple identifications
\begin{eqnarray}
\langle c^{\dagger}_{A{\bf p}}c_{A{\bf p}}\rangle &=& \frac{1}{2} +\frac{1}{2}\hbox{sgn}(\zeta d_z({\bf p}) -3V\phi_z)\\ \nonumber
\langle c^{\dagger}_{B{\bf p}}c_{B{\bf p}}\rangle &=& \frac{1}{2} -\frac{1}{2}\hbox{sgn}(\zeta d_z({\bf p}) -3V\phi_z).
\end{eqnarray}
From the analogy with the effect of a Semenoff mass discussed in Fig. \ref{Haldanespectrum2.pdf}, a quantum phase transition implies that the energy gap is reduced to zero at one Dirac point, such that $\langle c^{\dagger}_{A{\bf K}}c_{A{\bf K}}\rangle = \langle c^{\dagger}_{B{\bf K}}c_{B{\bf K}}\rangle = \frac{1}{2}$ corresponding to $d_z=3\sqrt{3}t_2 = -3V\phi_z$, whereas at the other Dirac point the dipole remains polarized in the ground state so that $\langle c^{\dagger}_{B {\bf K}'} c_{B{\bf K}'}\rangle =1$. If we approximate the Fourier transform of the charge densities by the average responses at the two Dirac points, then from Eq. (\ref{phir}) we obtain a jump of $\phi_z$ from zero to $\phi_z\sim -\frac{1}{4}$ for a given bond in real space.
This simple analytical argument supports the occurrence of a first-order charge-density-wave or Mott transition induced by the nearest-neighbor interaction, first reported via exact diagonalization (ED) in the literature \cite{Varney}. This roughly leads to the estimate $V_c\sim 4\sqrt{3}t_2$. The linear increase of $V_c$ with $t_2$ is also in agreement with a numerical self-consistent solution of the coupled equations and with Density Matrix Renormalization Group (DMRG) results \cite{Klein}. From the definition of the topological number written in terms of $\langle \sigma_z\rangle=\cos\theta$ at the two poles of the sphere, we deduce that $C$ also jumps from $1$ to $0$ at the transition, in accordance with numerical results \cite{Klein}. Another understanding can be obtained through an energetics analysis of the ground state, using the Hellmann-Feynman theorem such that $-2\phi_z=\langle c_i^{\dagger} c_i - c_{i+p}^{\dagger} c_{i+p}\rangle$. In this calculation, we suppose a specific sublattice $i$, which then contributes $1/2$ of the bond energy in the definition of the interaction energy (\ref{interaction}). Then, this results in the correspondence $-2\phi_z=\frac{1}{6N}\sum_{\bf k} \frac{\partial E_{gs}}{\partial (V\phi_z)}$ with the ground-state energy $E_{gs}=-\sum_{\bf k}\epsilon({\bf k})$ and $N$ corresponding to the number of unit cells $\{A;B\}$. This gives rise to the equation
\begin{equation}
\frac{4}{3V}=\frac{1}{N}\sum_{\bf k}\frac{1}{\epsilon({\bf k})}.
\end{equation}
In the limit $t_2\rightarrow 0$, a numerical evaluation of this equation leads to $V_c\sim\frac{4}{3}t_1$. This value tends to agree with results from exact diagonalization leading to $V_c\sim 1.38t_1$ \cite{Capponi}. Assuming that the energy spectrum is smoothly varying with ${\bf k}$ and approximating $\epsilon({\bf k})\sim m=3\sqrt{3}t_2$ as in Fig. \ref{Haldanespectrum.pdf}, we also verify from this equation $V_c\sim 4\sqrt{3}t_2$. A short numerical evaluation of this gap equation is sketched below.
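The gap equation is straightforward to evaluate numerically. The Python sketch below is minimal: it samples one reciprocal unit cell on a shifted grid (regularizing the integrable Dirac-point singularity of $1/\epsilon({\bf k})$), sets the mean-field parameters to zero at the transition as above, and returns $V_c=4/(3\langle 1/\epsilon\rangle)$. The nearest-neighbor vectors $\mathbfit{\delta}_i$ below are one standard choice, assumed here for illustration; the output can be compared with the value $V_c\sim\frac{4}{3}t_1$ quoted above for $t_2\rightarrow 0$.
\begin{verbatim}
import numpy as np

t1, a = 1.0, 1.0
delta = np.array([[0.5,  np.sqrt(3)/2],
                  [0.5, -np.sqrt(3)/2],
                  [-1.0, 0.0]]) * a                # nearest-neighbor vectors
bvec = np.array([[1.5, -np.sqrt(3)/2],
                 [-1.5, -np.sqrt(3)/2],
                 [0.0,  np.sqrt(3)]]) * a          # second-neighbor vectors
a1, a2 = delta[0] - delta[2], delta[1] - delta[2]  # Bravais vectors
g1, g2 = 2*np.pi*np.linalg.inv(np.array([a1, a2])).T

def Vc(t2, N=600):
    u = (np.arange(N) + 0.5)/N                     # shifted grid avoids |f|=0
    U, V = np.meshgrid(u, u)
    kx = U*g1[0] + V*g2[0]
    ky = U*g1[1] + V*g2[1]
    f = sum(np.exp(1j*(kx*d[0] + ky*d[1])) for d in delta)
    gam = -2*t2*sum(np.sin(kx*b[0] + ky*b[1]) for b in bvec)
    eps = np.sqrt((t1*np.abs(f))**2 + gam**2)
    return 4.0/(3.0*np.mean(1.0/eps))

print(Vc(0.0))   # compare with V_c ~ (4/3) t_1 in the t_2 -> 0 limit
\end{verbatim}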
A careful numerical analysis of the Ginzburg-Landau functional shows a relation with a $\phi^6$-theory \cite{Klein}. We mention here that the possibility of stabilizing a quantum anomalous Hall phase from interactions in two dimensions has been questioned in the literature in relation with topological Mott insulators \cite{PesinBalents}, through the introduction of a second-nearest-neighbor or a long-range interaction at a mean-field level \cite{Honerkamp,RKKY}. Present numerical calculations such as exact diagonalization do not confirm this possibility \cite{Capponi}. This remains an open question. Very recently, a topological Mott insulator, a quantum anomalous Hall state in a correlated limit, has been shown to occur in a twisted bilayer system through DMRG \cite{Vafek}. On the other hand, a Hund's coupling between local magnetic impurities and conduction electrons can stabilize a quantized Hall conductivity in the presence of magnetism, as for instance recently observed in Kagome materials \cite{Guguchia,Felser,LegendreLeHur}. The presence of localized magnetic impurities has also been shown to produce topological Kondo physics \cite{Dzero}. The discussion above is specific to fermions. A similar discussion can be addressed for bosons, including the possibility of superfluidity. Interestingly, the specific form of the honeycomb lattice can now allow an additional chiral superfluid phase with condensation of the bosons close to the Dirac points, as in a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconducting phase \cite{FuldeFerrell,Larkin}. The Mott phase can also reveal topological particle-hole pair excitations in this case, which have been identified through Dynamical Mean-Field Theory, ED and also Cluster Perturbation Theory within a Random Phase Approximation. The phase diagram of the interacting bosonic Haldane model can be found in Ref. \cite{Vasic}.
\subsection{Stochastic Topological Response and Disorder}
The mean-field approach can also be reformulated as a path-integral approach \cite{Schulz} together with the variational principle, through the identification \cite{Klein}
\begin{equation}
e^{\frac{V}{8}(c_i^{\dagger}\sigma_r c_{i+p})^2} = \int D\phi e^{-2V\left(\phi_{r}^{i+\frac{p}{2}}\right)^2+V\left(\phi_{r}^{i+\frac{p}{2}}\right)(c^{\dagger}_i \sigma_r c_i)}.
\end{equation}
Here, $\phi_r=\phi_{r}^{i+\frac{p}{2}}$ is centered in the middle of a bond formed by the two sites $i$ and $i+p$. Within this approach, the stochastic variables $\phi_r$ can be thought of as classical static variables, with the sampling ranging from $-\infty$ to $+\infty$. Even though one can redefine the prefactor of each term, their relative weight is fixed according to the variational principle $\frac{\partial S}{\partial \phi_r}=0$, which here reproduces Eq. (\ref{phir}). This is equivalent in the reciprocal space to an action in imaginary time
\begin{eqnarray}
S &=& \int_0^{\beta} d\tau \sum_{\bf k} \Psi^{\dagger}_{\bf k}(\partial_{\tau}-{\bf d}\cdot\mathbfit{\sigma})\Psi_{\bf k} \\ \nonumber
&+&\sum_{\bf k,q,p} \Psi_{\bf q}^{\dagger} h_V({\bf k},{\bf q},{\bf p}) \Psi_{\bf k} + \sum_{{\bf k},r} 6V|\phi_r^{\bf k}|^2.
\end{eqnarray}
For simplicity, we use the same symbols for the electron fields or creation/annihilation operators in second quantization and the Grassmann variables defining the path integral. To characterize the ground state, we can use the Fourier transform and perform an expansion around $\omega\rightarrow 0$ and in the long-wavelength limit ${\bf k}\rightarrow {\bf q}$, corresponding to a momentum transfer ${\bf p}\rightarrow {\bf 0}$ for the stochastic variables. The interacting part of the Hamiltonian $h_V$ then becomes quite simple, such that we obtain a mean-field Hamiltonian of the form $H_{mf}({\bf k})$. The stochastic variational approach then also allows one to develop a Ginzburg-Landau analysis \cite{Klein}. Compared to quantum Monte-Carlo, which shows a sign problem for the interacting Haldane model, the present approach allows us to have a good understanding of interaction effects from the reciprocal space when evaluating ground-state observables. The formalism is also very useful to incorporate effects of disorder \cite{Klein}. More precisely, we can for instance assume a disordered interaction $V$ with Gaussian fluctuations, defining $\tilde{v}=(\tilde{V}-V)/V$. From statistical physics, we can then define the disorder-averaged magnetization
\begin{equation}
\langle \sigma_z\rangle = \int_{-\infty}^{+\infty} d\tilde{v} P(\tilde{v})\langle \sigma_z(\tilde{v})\rangle
\end{equation}
and
\begin{equation}
P(\tilde{v}) = \frac{1}{\sqrt{2\pi\xi(V)}} e^{-\frac{1}{2}\tilde{v}^2\xi^{-1}(V)},
\end{equation}
with the variance $\xi(V)=1/(12V)$. This implies that we can also define a disorder-averaged topological number
\begin{eqnarray}
\langle C\rangle &=& \frac{1}{2}\int_{-\infty}^{+\infty} d\tilde{v} P(\tilde{v})(\langle \sigma_z(0,\tilde{v})\rangle-\langle \sigma_z(\pi,\tilde{v})\rangle) \\ \nonumber
&=& \int_{-\infty}^{+\infty} d\tilde{v} P(\tilde{v})C(\tilde{v}).
\end{eqnarray}
The key point here is that the matrix $H_{mf}({\bf k})$ is symmetric under the variables $V$ and $\phi_r$. For the calculation of $\langle \sigma_z\rangle$, only the variable $\phi_z=\phi$ enters the calculation of $\langle C\rangle$, such that we can equivalently write it as a stochastic topological number
\begin{equation}
\langle C \rangle = \int_{-\infty}^{+\infty} d\phi P(\phi) C(\phi)
\end{equation}
with the identification $\tilde{v}=(\phi-\phi_{mf}^z)$, where $\phi_{mf}^z$ corresponds to the ground-state value of the stochastic variable such that $\phi_{mf}^z=0$ if we start with $V<V_c$. For a given value of $V$, we identify a critical value of $\phi$ due to the Gaussian fluctuations such that $3V|\phi_c|=3\sqrt{3}t_2$. Therefore, averaging over samples with different disorder configurations, we estimate
\begin{equation}
\langle C\rangle \approx 1- e^{-\frac{(2m)^2}{(k_B T_{eff})^2}} \hskip 0.2cm \hbox{and} \hskip 0.2cm k_B T_{eff}\propto \sqrt{V}.
\end{equation}
Corrections to the topological number from fluctuations are driven by values of $\tilde{v}=\phi\approx |\phi_c|$. This shows that fluctuations induced by the disorder can play a role similar to temperature (heating) \cite{Rivas} or to driving effects mediating inter-band transitions \cite{MunichWilczekZee}. In particular, the correction to the topological number takes a form similar to the probability of crossing into the upper energy band in a dynamical Landau-Zener-Majorana protocol \cite{HH}. In the present case, the correction is slightly different from an Arrhenius law at finite temperature due to the Gaussian form of the disorder distribution. Averaging the topological response over stochastic variables can then be an efficient way to measure fluctuation effects from the ground state. It is also interesting to mention here the progress in cold atoms to measure the effect of inter-band transitions in the topological response \cite{MunichWilczekZee}. Recently, a numerical study of the interacting Haldane model with disorder was performed using ED and also DMRG \cite{Yi}. This shows that the transition from the topological to the Mott phase becomes continuous when increasing the disorder strength. This analysis also finds that, in the presence of disorder, the averaged topological number deviates from its quantized value within the (interacting) topological phase. In addition, the possibility of an Anderson topological insulating phase induced by disorder \cite{ATI,Beenakker} is also shown to be robust towards interaction effects.
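As an illustration, the stochastic average $\langle C\rangle=\int d\phi\, P(\phi) C(\phi)$ can be evaluated numerically with a sharp-step model in which $C(\phi)=1$ for $|\phi|<\phi_c$ and $0$ beyond, with $3V|\phi_c|=3\sqrt{3}t_2$ as above. The Python sketch below is minimal and assumes this step profile; the parameter values are illustrative.
\begin{verbatim}
import numpy as np

t2, V = 0.15, 0.5
phi_c = np.sqrt(3)*t2/V                  # from 3 V |phi_c| = 3 sqrt(3) t2
xi = 1.0/(12.0*V)                        # variance of the Gaussian P(phi)

phi = np.linspace(-12*np.sqrt(xi), 12*np.sqrt(xi), 400001)
P = np.exp(-0.5*phi**2/xi)/np.sqrt(2.0*np.pi*xi)
C = (np.abs(phi) < phi_c).astype(float)  # sharp-step model for C(phi)
print(np.trapz(P*C, phi))                # <C> below 1, decreasing with V
\end{verbatim}
Increasing $V$ broadens $P(\phi)$ and reduces $\langle C\rangle$ below unity, in line with $k_B T_{eff}\propto \sqrt{V}$ above.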
\section{Observables from the Geometry and Smooth Fields}
\label{Observables}
Here, we describe the usefulness of the smooth-fields formalism to access observables analytically, specifically transport properties and responses to circularly polarized light, within a topological phase of the lattice model. In particular, we emphasize that the topological responses may be measured locally from the Dirac points, corresponding to the poles of the sphere, and also from the $M$ point in the Brillouin zone. Then, we introduce local Chern markers in the reciprocal space. It is also important to mention here recent proposals for introducing local Chern markers in real space \cite{BiancoResta,Cambridge}.
\subsection{Berry Curvature and Conductivity}
\label{curvature}
We begin with Eq. (\ref{polarization}), where the induced charge polarization reads
\begin{equation}
\label{DeltaP}
\Delta P = eC = \int_0^T dt j(t).
\end{equation}
Here, $j(t)=J_{\perp}$ corresponds to the measured transverse current in the protocol when driving a charge-$e$ particle from north to south pole as a result of a longitudinal electric field ${\bf E}=E{\bf e}_{\theta}$. Now, we can write general relations and build a link with the Karplus-Luttinger velocity \cite{KarplusLuttinger} (defined hereafter in Eq. (\ref{velocity})). To derive a simple proof, we define the vector associated to the Chern number
\begin{equation}
\label{pumpingC}
{\bf C} = \frac{1}{2\pi}\iint d{\bf k}\times {\bf F},
\end{equation}
where ${\bf F}=\bm{\nabla}\times {\bf A}$ is parallel to the normal vector to the surface (here the sphere) and ${\bf C}$ has the direction of the induced perpendicular current. Here, we use the identification $d{\bf k}=d\varphi d\theta {\bf e}_{\theta}$ and in fact we will not use the specific periodicity of the boundary conditions (such that the proof can also be adapted for a graphene plane defined with the appropriate Brillouin zone). In this dynamical protocol, the Berry curvature depends on the component of the wave-vector parallel to the electric field, as in Sec. \ref{ParsevalPlancherel}, and therefore here ${\bf F}={\bf F}(t)$. From Newton's equation on a charge $q=e$, we also have
\begin{equation}
\label{chargeE}
\hbar\dot{\bf k}=e{\bf E}.
\end{equation}
We can then integrate on $\varphi \in[0;2\pi]$ such that
\begin{equation}
{\bf C} = \int dt \frac{e}{\hbar} {\bf E}\times {\bf F}.
\end{equation}
These relations lead to the anomalous velocity or anomalous current density
\begin{equation}
{\bf j}({\bf k})=\frac{e^2}{\hbar} {\bf E}\times {\bf F}.
\end{equation}
The measured current density is perpendicular to the electric field, referring here to the topological response. The density of particles in the reciprocal space is $n=1$; therefore, the anomalous velocity on the lattice participating in the topological properties is
\begin{equation}
\label{velocity}
{\bf v} = \frac{e}{\hbar} {\bf E}\times {\bf F}.
\end{equation}
The possibility of anomalous Hall currents in crystals was for instance reviewed in Refs. \cite{Nagaosa,Liu}. It is also relevant to mention the progress in ultracold atoms allowing one to measure $C$ with high accuracy, for instance from a semiclassical analysis of wavepackets and the Karplus-Luttinger velocity \cite{MunichKarplusLuttinger}. The total current density on the lattice is obtained as
\begin{equation}
{\bf j} = \iint \frac{dk_x d k_y}{(2\pi)^2} {\bf j}({\bf k}).
\end{equation}
The two directions of the Brillouin zone are usually defined in an identical manner. Integrating the current on ${\bf k}$ and using cross-product identities leads to
\begin{equation}
|{\bf j}| = \frac{e^2}{h}\iint |(d {\bf k}\times {\bf F})\cdot {\bf E}| = \frac{e^2}{h} C | {\bf E}|,
\label{xyconductivity}
\end{equation}
and therefore to
\begin{equation}
\label{transportxy}
\sigma_{xy} = \frac{e^2}{h}C.
\end{equation}
Since we also have the local formulation of the global topological number $C = (A_{\varphi}(\pi) - A_{\varphi}(0))$ from Sec. \ref{smooth}, this implies that the quantum Hall conductivity (usually defined with an integration on the Brillouin zone) can be measured from the Dirac points only. We mention here that Eq. (\ref{transportxy}) is also in agreement with the general analysis of edges on the cylinder geometry of Sec. \ref{cylinderformalism}. From the Karplus-Luttinger velocity in Eq. (\ref{velocity}), we can also relate results of Sec.
\ref{ParsevalPlancherel} with general relations of quantum many-body physics from the current density in a one-dimensional pump geometry \cite{Thouless1983}
\begin{equation}
{\bf j} = e\int \frac{d k}{2\pi} {\bf v}(k)
\end{equation}
with $k=k_{\parallel}$ referring to the wave-vector parallel to the motion of the particle. For the sphere, we have the identification between time and the angle $\theta$ from Newtonian mechanics, the de Broglie principle and the Coulomb force, $\theta=eEt/\hbar$. The Karplus-Luttinger velocity reads $|{\bf v}| = \frac{e}{\hbar}E |F_{\varphi\theta}|$ with precisely $F_{\varphi\theta} = (\partial_{\varphi} A_{\theta} - \partial_{\theta} A_{\varphi}) = - \partial_{\theta} A_{\varphi}$. Therefore, this gives rise to
\begin{equation}
\Delta P = e\oint \frac{d\varphi}{2\pi} A'_{\varphi}(\theta<\theta_c),
\end{equation}
when integrating on the polar angle from $0$ to an angle $\theta$, in agreement with Eq. (\ref{Jperp}). We emphasize here that this equation is applicable for one sphere and also for interacting spheres, then leading to a one-half transverse pumped charge in the situation of Sec. \ref{fractionaltopology}. In Secs. \ref{quantumspinHall} and \ref{topomatter}, we address applications of this formalism for planes where the Berry vector field ${\bf A}$ can be smoothly defined on the whole Brillouin zone and the presence of Dirac magnetic monopoles can also be described through the discontinuity of the vector field ${\bf A}'$ at the angle $\theta_c$. The semiclassical approach is generally useful to verify certain laws of quantum transport, which can also be obtained using many-body physics and a Green's function approach. To link with the next Section, it is useful to relate the quantum Hall conductivity \cite{Thouless} with the Thouless-Kohmoto-Nightingale-den Nijs formula $\sigma_{xy}=\lim_{\omega\rightarrow 0}\sigma_{xy}(\omega)$ from the Kubo formula. Then, $\sigma_{xy}(\omega)$ can be re-written as $\frac{e^2}{\hbar}\Pi_{xy}$ where
\begin{eqnarray}
\label{xy}
\Pi_{xy} &=& \frac{1}{N}\sum_{{\bf k},n,m} \hbox{Im}(\langle {\bf k}, n | \partial_{k_x} H | {\bf k}, m\rangle \\ \nonumber
&\times& \langle {\bf k}, m | \partial_{k_y} H | {\bf k}, n\rangle)\frac{f_{{\bf k},n} - f_{{\bf k},m}}{(E_n({\bf k}) - E_{m}({\bf k}))^2}.
\end{eqnarray}
Here, for the honeycomb lattice, $N$ denotes the number of unit cells $\{A;B\}$ (with a normalized lattice spacing equal to one) and the current density is written through the general definition of the velocity $\frac{1}{\hbar}\partial_{k_i} H$ with $i=x,y$. We have introduced a general notation of energy-band eigenstates $|{\bf k},n\rangle$ defined for a given wavevector ${\bf k}$. At zero temperature, the Fermi functions related to the lower and upper bands satisfy respectively $f_{{\bf k},n}=1$ and $f_{{\bf k},m}=0$, or vice-versa. A direct numerical evaluation of this formula for the Haldane model is sketched below.
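The Python sketch below is a minimal illustration: the Berry curvature of the lowest band, $-2\,\hbox{Im}(\langle {\bf k},n|\partial_{k_x}H|{\bf k},m\rangle\langle {\bf k},m|\partial_{k_y}H|{\bf k},n\rangle)/(E_n-E_m)^2$, is summed over a discretized reciprocal unit cell so that its integral divided by $2\pi$ returns the topological number entering $\sigma_{xy}=\frac{e^2}{h}C$. The derivatives are taken by finite differences; mesh size, nearest-neighbor vectors and parameters are illustrative assumptions.
\begin{verbatim}
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.diag([1.0, -1.0]).astype(complex)]
a, t, t2 = 1.0, 1.0, 0.15
delta = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])*a
bvec = np.array([[1.5, -np.sqrt(3)/2], [-1.5, -np.sqrt(3)/2], [0.0, np.sqrt(3)]])*a

def H(k):
    f = np.exp(1j*(delta @ k)).sum()
    d = np.array([t*f.real, t*f.imag, 2*t2*np.sin(bvec @ k).sum()])
    return -sum(di*si for di, si in zip(d, sig))

def curvature(k, h=1e-5):
    w, v = np.linalg.eigh(H(k))
    dHx = (H(k + [h, 0]) - H(k - [h, 0]))/(2*h)
    dHy = (H(k + [0, h]) - H(k - [0, h]))/(2*h)
    n, m = v[:, 0], v[:, 1]
    num = (n.conj() @ dHx @ m)*(m.conj() @ dHy @ n)
    return -2*np.imag(num)/(w[1] - w[0])**2

a1, a2 = delta[0] - delta[2], delta[1] - delta[2]
g1, g2 = 2*np.pi*np.linalg.inv(np.array([a1, a2])).T
N = 120
C = sum(curvature(u*g1 + v*g2) for u in (np.arange(N) + 0.5)/N
        for v in (np.arange(N) + 0.5)/N)
C *= abs(np.cross(g1, g2))/N**2/(2*np.pi)
print(C)   # magnitude ~1 in the topological phase (sign fixed by conventions)
\end{verbatim}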
\subsection{Local and Global Quantized Topological Responses}
Here, we derive an equation \cite{C2} showing that the quantum Hall conductivity is related to the Berry curvatures at the Dirac points (only). The idea is simply to swap the indices and go from $F_{\theta\varphi}$ on the sphere to $F_{p_x p_y}$ on the lattice with the definitions of Sec. \ref{spherelattice}. Again, we will work close to the two Dirac points of the energy band structure, or close to the poles of the sphere, such that we can use the Dirac equation. From Eq. (\ref{Kspectrum}), we then have the important equalities
\begin{equation}
\label{swap}
\partial_{p_x}{H}= \frac{\partial{H}}{\partial p_x} = \hbar v_F\sigma_x\ \hbox{and}\ \partial_{\zeta p_y}{H}=\frac{\partial{H}}{\partial (\zeta p_y)} = \hbar v_F\sigma_y,
\end{equation}
with $\zeta=\pm 1$ at the $K$ and $K'$ Dirac points. Now, we can use the Pauli matrix representations of the pseudo-spin operators and apply them on the two eigenstates $|\psi_+\rangle$ and $|\psi_-\rangle$ on the sphere. From general properties of the Berry curvatures (see Appendix \ref{Berrycurvature}), we can equivalently write
\begin{equation}
\label{F0}
F_{p_x p_y}(\theta) = i\frac{(\langle \psi_-|\partial_{p_x}{H}|\psi_+\rangle\langle \psi_+|\partial_{p_y}{H}|\psi_-\rangle-(p_x\leftrightarrow p_y))}{(E_--E_+)^2},
\end{equation}
with $|\psi_+\rangle$ on the sphere being the lowest-energy state and $E_- - E_+ = 2m$ at the Dirac points for the topological lattice model. This formula allows a direct link with the general formulation of the quantum Hall conductivity (\ref{xy}). For a relation between polarization, transport and Wannier centers on the lattice, see Ref. \cite{Vanderbilt}. On the sphere, the calculation close to the poles is simple to do using Eqs. (\ref{eigenstates}). Close to the north pole of the sphere or the $K$ Dirac point with $\theta\rightarrow 0$, we derive \cite{C2,Meron}
\begin{equation}
\label{FBerry}
F_{p_y p_x}(\theta) = -F_{p_x p_y}(\theta)=\frac{(\hbar v_F)^2}{2 d^2} \cos\theta.
\end{equation}
This identity supposes that $t_2\neq 0$ on the lattice, or that the energy spectrum has a gap between the ground state and the excited state. This form is gauge invariant and is also invariant under the change $\varphi\rightarrow -\varphi$ from $\zeta=-1$ in the energy band structure close to the $K'$ Dirac point. Close to the $K'$ point with $\theta+\pi\rightarrow \pi$, we also have
\begin{equation}
F_{-p_y p_x}(\theta+\pi)= \frac{(\hbar v_F)^2}{2 d^2} \cos(\theta+\pi)= - F_{p_y p_x}(\theta+\pi).
\end{equation}
From Eq. (\ref{polesC}), we can then formulate a relation to the topological properties \cite{C2}:
\begin{eqnarray}
\label{F}
\left(F_{p_y p_x}(0) \pm F_{\pm p_y p_x}(\pi)\right) = C \frac{(\hbar v_F)^2}{m^2}.
\end{eqnarray}
Eq. (\ref{F}) is valid assuming that the Dirac approximation of the energy spectrum is correct. From Sec. \ref{Mott}, we deduce that it remains correct in the presence of interactions as long as we stay within the same topological phase, and that it jumps to zero at the Mott transition. This formula may be verified within current technology in ultra-cold atoms in optical lattices. It is interesting to observe that we may measure the topological number $C$ from the Dirac points only, either through Eq. (\ref{polesC}) or through Eq. (\ref{F}). This local description from the Dirac points will also be useful to study the magnetoelectric effect in topological insulators \cite{Morimoto,SekineNomura}. We can verify that Eq. (\ref{FBerry}) agrees with general lattice relations for the conductivity related to Eq. (\ref{xy}). Indeed, from the general form of the Hamiltonian $H=-{\bf d}\cdot \mathbfit{\sigma}$ we have the identification $\partial_{k_x} H = \partial_{k_x} \sum_{i=x,y,z} d_i \sigma_i$. Now, using the form of ${\bf d}$ derived from the Dirac points in Eq.
(\ref{correspondence}), this leads to
\begin{eqnarray}
\sigma_{xy} &=& \frac{e^2}{\hbar}\sum_{{\bf k},m,n} \frac{\partial_{k_x} d_x \partial_{k_y} d_y}{4d({\bf k})^2} \\ \nonumber
&\times&\hbox{Im}(\langle {\bf k},n| \sigma_x | {\bf k},m\rangle\langle {\bf k},m| \sigma_y| {\bf k},n\rangle),
\end{eqnarray}
where from Eqs. (\ref{F0}) and (\ref{FBerry}) we identify
\begin{equation}
\hbox{Im}(\langle {\bf k},n| \sigma_x | {\bf k},m\rangle\langle {\bf k},m| \sigma_y| {\bf k},n\rangle) = 2\cos\theta = 2\frac{d_z({\bf k})}{d({\bf k})}
\end{equation}
such that the conductivity can be equivalently written in terms of the ${\bf d}$ vector \cite{Volovik}
\begin{eqnarray}
\sigma_{xy} = \frac{e^2}{2N\hbar} \sum_{\bf k} \frac{(\partial_{k_x} d_x)(\partial_{k_y} d_y) d_z}{d({\bf k})^3}
\end{eqnarray}
or equivalently
\begin{equation}
\label{dvectorsigma}
\sigma_{xy} = \frac{e^2}{h}\iint \frac{d^2{\bf k}}{4\pi} \frac{(\partial_{k_x} d_x)(\partial_{k_y} d_y) d_z}{d({\bf k})^3}.
\end{equation}
For a relation with the Green's function approach, see \cite{Volovik,Ishikawa}. This can be equally written in terms of the normalized vector ${\bf n}=\frac{\bf d}{|{\bf d}|}$ such that
\begin{equation}
\label{nvectorsigma}
\sigma_{xy} = \frac{e^2}{h}\iint \frac{d^2{\bf k}}{4\pi} (\partial_{k_x} n_x).(\partial_{k_y} n_y)n_z.
\end{equation}
Then,
\begin{equation}
W= \iint \frac{d^2{\bf k}}{4\pi} (\partial_{k_x} {\bf n})\times (\partial_{k_y} {\bf n}) \cdot {\bf n}
\end{equation}
can be visualized as a winding number on the unit sphere \cite{QiZhang,Volovik} and has a similar interpretation in quantum Hall ferromagnets \cite{Girvin} related to Skyrmion physics \cite{Skyrme,NagaosaTokura}, where the momentum is turned into real-space variables. A simple correspondence between the spin-$\frac{1}{2}$ Hamiltonian and the winding number $W$ is formulated in the book of Nakahara, see Eq. (10.163) \cite{Nakahara}. This is also a direct measure of the topological charge related to this effective Dirac monopole. It is relevant to mention here a recent direct experimental determination of the topological winding number through polarized X-ray scattering in Skyrmion materials \cite{ZhangSkyrmions}. In the case of a meron \cite{Meron}, this is similar to a half-Skyrmion covering half of the sphere, as for the two entangled spheres of Sec. \ref{fractionaltopology}. From a historical perspective, the meron was first predicted in relation to the Yang-Mills equation and the possibility of $\frac{1}{2}$-instantons \cite{Alfaro}. In the two-spheres' model, the $\frac{q}{2}$ topological numbers are intrinsically protected from the form and nature of the entangled wavefunction at one pole, as shown through Eq. (\ref{Cjspin}). In the presence of interactions, the characterization of the topological properties from the momentum space \cite{Volovik,Gurarie} presents certain advantages: in the presence of quasiparticle and quasihole excitations, the pole of the single-particle Green's function leads to the robustness of the quantized topological response through the Ishikawa-Matsuyama formula \cite{Ishikawa}. The momentum representation of Sec. \ref{Mott} then allows for a simple analytical stochastic variational approach to describe the Mott transition on the sphere. Also, the sphere elegantly reveals topological properties of quantum Hall states \cite{Haldanesphere,Papic,Fluctuations}. A numerical evaluation of the winding number $W$ for the Haldane model is sketched below.
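As an illustration of Eq. (\ref{nvectorsigma}), the winding number $W$ can be evaluated directly from the ${\bf n}$ vector of the Haldane model, discretizing one reciprocal unit cell and taking derivatives by finite differences. The Python sketch below is minimal (parameters and nearest-neighbor vectors are illustrative assumptions); shifting $d_z\rightarrow d_z-M$ corresponds to adding the Semenoff mass $+M\sigma_z$ and reproduces the step from $C=1$ to $C=0$ discussed in Sec. \ref{Semenoff}.
\begin{verbatim}
import numpy as np

a, t, t2, M = 1.0, 1.0, 0.15, 0.0
delta = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])*a
bvec = np.array([[1.5, -np.sqrt(3)/2], [-1.5, -np.sqrt(3)/2], [0.0, np.sqrt(3)]])*a

def nvec(k):
    f = np.exp(1j*(delta @ k)).sum()
    d = np.array([t*f.real, t*f.imag, 2*t2*np.sin(bvec @ k).sum() - M])
    return d/np.linalg.norm(d)

a1, a2 = delta[0] - delta[2], delta[1] - delta[2]
g1, g2 = 2*np.pi*np.linalg.inv(np.array([a1, a2])).T
N, h = 120, 1e-5
W = 0.0
for u in (np.arange(N) + 0.5)/N:
    for v in (np.arange(N) + 0.5)/N:
        k = u*g1 + v*g2
        dnx = (nvec(k + [h, 0]) - nvec(k - [h, 0]))/(2*h)
        dny = (nvec(k + [0, h]) - nvec(k - [0, h]))/(2*h)
        W += np.dot(nvec(k), np.cross(dnx, dny))
W *= abs(np.cross(g1, g2))/N**2/(4*np.pi)
print(W)   # |W| = 1 for M < m = 3*sqrt(3)*t2 and W = 0 for M > m
\end{verbatim}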
Dynamical mean-field theory methods \cite{DMFT} are also very efficient for evaluating topological properties \cite{WuQSH,KimKrempa,Cocks,Vasic,Plekhanov,Julian}. Another interesting approach in the strong-coupling regime is the cluster perturbation theory combined with a Random Phase Approximation which also allows for the evaluation of the quasiparticle spectral function. For the interacting bosonic Haldane model, this has made it possible to reveal topological excitations at finite frequency on top of the insulating ground state \cite{Vasic}. It is also important to recall here that Niu {\it et al.} \cite{NiuThoulessWu} have introduced the technique of twisted boundary conditions such that the topological first Chern number can be rephrased in terms of twisted phases in the parameter space for two-dimensional insulators. This approach also pleasantly links the bulk topological number with the properties of the edge states \cite{QiWuZhang}. \subsection{Effect of a Semenoff Mass} \label{Semenoff} The sphere formalism also allows for a simple description of the geometry in the presence of a Semenoff mass \cite{Semenoff}. We describe the situation as in Fig. \ref{Haldanespectrum2.pdf} where the gap closes at the $K$ point only, implying that here we add the Semenoff mass as $+M\sigma_z$ in the Hamiltonian. In the vicinity of the poles, we can still write down eigenstates in the same form as in Eq. (\ref{eigenstates}). Close to the north pole of the sphere corresponding to the $K$ point on the lattice, we identify \begin{equation} \cos\theta = \frac{\tilde{d}_z(0)}{\sqrt{\tilde{d}_z(0)^2 +(\hbar v_F)^2 |{\bf p}|^2}} \end{equation} with \begin{equation} \tilde{d}_z(0)=d_z-M=m-M. \end{equation} In this way, close to the north pole, we have \begin{equation} {A}_{\varphi}(\theta,\tilde{d}_z(0))=-\frac{\cos\theta}{2}. \end{equation} Similarly, close to the south pole of the sphere corresponding to the $K'$ point on the lattice, we identify \begin{equation} \cos\theta = \frac{\tilde{d}_z(\pi)}{\sqrt{\tilde{d}_z(\pi)^2 +(\hbar v_F)^2 |{\bf p}|^2}} \end{equation} with \begin{equation} \tilde{d}_z(\pi)=-d_z-M=-m-M. \end{equation} In this way, close to the south pole, we have \begin{equation} {A}_{\varphi}(\theta,\tilde{d}_z(\pi))=-\frac{\cos\theta}{2}. \end{equation} Precisely at the poles we have $\sin\theta\rightarrow 0$ corresponding to $v_F|{\bf p}|\rightarrow 0$. Then, as long as $M<m$, meaning that we are within the same topological phase, we verify that we still have ${A}_{\varphi}(\pi)-{A}_{\varphi}(0)=C=+1$, with ${A}_{\varphi}(0)=-\frac{1}{2}$ and ${A}_{\varphi}(\pi)=+\frac{1}{2}$ through the eigenstates of Eq. (\ref{eigenstates}). This shows the perfect quantization of the topological number as long as we stay within the same topological phase (assuming a very clean sample). The equation relating to the conductivity in Eq. (\ref{F}) is also modified as \begin{equation} \frac{\tilde{d}_z^2(0)}{(\hbar v_F)^2}\tilde{F}_{p_y p_x}(0) \mp \frac{\tilde{d}_z^2(\pi)}{(\hbar v_F)^2}\tilde{F}_{p_y\mp p_x}(\pi) =C, \end{equation} with $\tilde{F}$ taking a similar form to $F$ when adjusting the $d_z$ component with $\tilde{d}_z$ such that \begin{equation} \frac{\tilde{d}_z^2(0)}{(\hbar v_F)^2}\tilde{F}_{p_y p_x}(0) = \frac{C}{2}\hbox{sgn}(m-M). \end{equation} This term remains identical to $F_{p_y p_x}(0)$ as long as we remain within the topological phase with $M<m$.
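A quick numerical scan, continuing the winding-number sketch of the previous subsection (same hypothetical conventions), illustrates this quantization and the jump of the invariant at the transition $M=m=3\sqrt{3}t_2$:
\begin{verbatim}
# Continuation of the winding() sketch above: scan of the Semenoff mass
for M in (0.0, 0.4, 0.7, 0.9, 1.2):
    print(M, round(winding(t2=0.15, M=M), 3))
# expected |W|: 1, 1, 1, 0, 0 (jump at M = 3*sqrt(3)*t2 ~ 0.779)
\end{verbatim}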
Similarly, at the $K'$ Dirac point, we identify \begin{equation} \frac{\tilde{d}_z^2(\pi)}{(\hbar v_F)^2}\tilde{F}_{p_y p_x}(\pi) = \frac{C}{2}\hbox{sgn}(m+M). \end{equation} This term remains identical to $F_{p_y p_x}(\pi)$ within the topological phase with $m>-M$. As long as the gap between the topological valence and conduction bands does not close, the winding number $W$ will remain quantized, which can also be deduced from $\langle \sigma_z\rangle$ at the poles in Eq. (\ref{polesC}) \cite{Orsaytheory,CayssolReview}, and similarly for the quantum Hall conductivity. It is interesting to observe that the quantum phase transition with one Dirac point has recently generated interest in the formulation of topological invariants \cite{Verresen} related to Chern-Simons theory with half-integer prefactors. Within the present description, the topological transition at $m=M$ corresponds to ${A}_{\varphi}(\theta=\pi)-{A}_{\varphi}(\theta=0)=\frac{1}{2}$ through the limit $v_F|{\bf p}|=0^+$. This confirms that $C=\frac{1}{2}$ when $m=M$ and that there is also a jump of the topological number. Above the transition, we also have from the equations above that $A_{\varphi}(\theta=0)=A_{\varphi}(\theta=\pi)=\frac{1}{2}$ and $C=0$. In Sec. \ref{light}, we show that these relations are useful to make a link between the quantum Hall conductivity on the lattice and the response to circular polarizations of light from the Dirac points only. The formalism is similar to that of a spin-$\frac{1}{2}$ atom and therefore allows simple derivations regarding the light-matter interaction, building an analogy with nuclear magnetic resonance. Recent theory work related to circular dichroism of light and experiments have integrated the photo-induced current over the whole Brillouin zone, and a relation with the quantum Hall conductivity is elegantly built from the Kubo formula \cite{Goldman}. Within the present formalism, we will address locally the light response from the Dirac points and show a quantized photo-electric effect related to $C^2$ in Eq. (\ref{alpha}) following our recent progress \cite{C2,Klein}. The word local here refers to specific points in the Brillouin zone. We also show that $C^2$ is measurable from the evolution in time associated with the lowest-band population \cite{C2}. In our description, the dipole is built from the pseudo-spin $\frac{1}{2}$ associated with the two sublattices $A$ and $B$ of the honeycomb lattice whereas in Ref. \cite{Goldman}, the dipole is measured with the position operator. As shown in Appendix \ref{Berrycurvature}, $C^2$ is also related to the quantum distance. In this Appendix, we also address the metric, the ${\cal I}(\theta)$ function and light. It is also interesting to emphasize the recent relation between the quantum metric from the reciprocal space and a gravitational approach through the Einstein Field Equation \cite{BlochMetric}. \subsection{Quantized Photo-Electric Response and ${\cal I}(\theta)$ function} \label{light} Here, we study the light response in the Haldane model. Light propagates in the $z$ direction and we assume circular polarizations of the vector potential. Within the Dirac equation, the light-matter interaction can be written through a term ${\bf A}\cdot \mathbfit{\sigma}$ \cite{Klein}. First, we can use the results of Sec. \ref{lightdipole} for the present lattice situation. From Eqs.
(\ref{alpha}), we deduce that the response to circularly polarized light is quantized at the two Dirac points and reveals $C^2$ if we satisfy the proper energy conservation for the absorption of a light quantum. We mention here that $C^2$ in Eq. (\ref{alpha}) acquires another possible physical interpretation on the lattice close to the Dirac points through \cite{C2} \begin{equation} \label{alphalight} \alpha(\theta) = \frac{{\cal I}(\theta)}{2(\hbar v_F)^2}, \end{equation} with the function \begin{eqnarray} \label{Itheta} {\cal I}(\theta) &=& \left\langle \psi_+ \left|\frac{\partial {H}}{\partial p_x} \right|\psi_-\right\rangle \left\langle \psi_- \left|\frac{\partial {H}}{\partial p_x} \right|\psi_+\right\rangle \nonumber \\ &+& \left\langle \psi_+ \left|\frac{\partial {H}}{\partial p_y} \right|\psi_-\right\rangle \left\langle \psi_- \left|\frac{\partial {H}}{\partial p_y} \right|\psi_+\right\rangle \nonumber \\ &=& 2(\hbar v_F)^2\left(\cos^4\frac{\theta}{2} +\sin^4\frac{\theta}{2}\right). \end{eqnarray} It is perhaps relevant to emphasize here the usefulness of the introduced smooth fields in Eq. (\ref{smoothfields}) to relate locally ${\cal I}(\theta)$ with the topological properties. In particular, in Ref. \cite{Klein}, the relation between light response and the topological invariant was revealed when acting through small portions (slices) on the Bloch sphere from the equatorial plane onto the poles, in agreement with the form of the photo-induced currents. From the smooth fields in Eq. (\ref{CA'}), we observe that ${\cal I}(\theta)$ directly measures the square of the topological invariant $C^2$ at the poles of the sphere \cite{C2}. To the best of our knowledge, the relation between the ${\cal I}(\theta)$ function and the topological properties from a local interpretation of the cosine and sine functions of the spin-$\frac{1}{2}$ eigenstates was (surprisingly) not mentioned before in the literature. From energy conservation, the $K$ Dirac point will absorb a light quantum from the right-handed polarization described here through the vector potential ${\bf A}_+=A_0 e^{-i\omega t} e^{+ikz}({\bf e}_x - i {\bf e}_y)$ and the $K'$ Dirac point will absorb a light quantum from the left-handed polarization described through ${\bf A}_-=A_0 e^{-i\omega t} e^{ikz}({\bf e}_x + i {\bf e}_y)$. Through the correspondence between momentum and vector potential, the light-matter coupling can be described on the Bloch sphere through the vector potential itself. The system is in the plane $z=0$. Fixing $A_0<0$, the electric field takes the form ${\bf E}_{\pm}=\mp \omega |A_0| {\bf e}_{\varphi}$ referring to the right-handed and left-handed polarizations. Here, we show the relation with the evolution in time of the lowest-band population due to the light-matter coupling at the Dirac points. In terms of smooth fields, to address the physics at the $K$ Dirac point we place the interface $\theta_c$ close to the north pole with $-{A}'_{\varphi}(\theta>\theta_c)=-{A}'_{\varphi}(\theta=0^+)=C$. From Sec. \ref{lightdipole}, at the north pole, we can write the light-matter coupling from the right-handed $(+)$ polarization as \begin{equation} \label{evolve} \delta{H}_+ = A_0 e^{i\omega t} e^{-i\varphi} (-A_{\varphi}'(\theta>\theta_c)) |\psi_-\rangle \langle \psi_+| +h.c. \end{equation} and $-{A}'_{\varphi}(\theta>\theta_c)=C$. Here, we must be careful with the correspondence with the honeycomb lattice as formulated in Sec. \ref{spherelattice}.
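As an aside, the matrix elements entering Eq. (\ref{Itheta}) can be checked symbolically. A minimal sympy sketch, assuming the standard spin-$\frac{1}{2}$ form of the eigenstates on the sphere (our Eq. (\ref{eigenstates}) may differ by phase conventions) together with $\partial_{p_x}H=\hbar v_F\sigma_x$ and $\partial_{p_y}H=\hbar v_F\sigma_y$ from Eq. (\ref{swap}):
\begin{verbatim}
import sympy as sp

th, ph = sp.symbols('theta varphi', real=True)
c, s = sp.cos(th/2), sp.sin(th/2)
psi_p = sp.Matrix([c, s*sp.exp(sp.I*ph)])     # assumed lowest-energy state
psi_m = sp.Matrix([-s*sp.exp(-sp.I*ph), c])   # assumed excited state
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])

def msq(op):  # |<psi_+| op |psi_->|^2
    amp = (psi_p.H @ op @ psi_m)[0]
    return sp.simplify(sp.expand(amp*sp.conjugate(amp)))

I = sp.simplify(msq(sx) + msq(sy))            # I(theta)/(hbar v_F)^2
print(sp.simplify(I - 2*(c**4 + s**4)))       # 0: reproduces Eq. (Itheta)
\end{verbatim}
The $\varphi$ dependence cancels between the two matrix elements, consistent with the gauge-invariant character of ${\cal I}(\theta)$ discussed above.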
Returning to this lattice correspondence, the polar angle in the reciprocal space of the tight-binding model takes the form $\tilde{\varphi}=\varphi\pm \pi$ and we must also re-adjust $\varphi\rightarrow -\varphi$ between the north and south poles. At the north pole, the additional phase $\pm \pi$ is equivalent for instance to changing ${\bf e}_x\rightarrow -{\bf e}_x$ and therefore to turning a right-moving wave into a left-moving wave. To compensate for this lattice effect compared to the Bloch sphere description of Sec. \ref{lightdipole}, we can then change $\omega\rightarrow -\omega$ in Eq. (\ref{deltaH}) which results in Eq. (\ref{evolve}). At the south pole, since we also modify $\varphi\rightarrow -\varphi$ to validate the sphere-lattice correspondence, we do not need to modify $\omega\rightarrow -\omega$ in Eq. (\ref{deltaH}). Indeed, $\varphi\rightarrow -\varphi$ is equivalent to swapping back the direction of the wave. Then, developing the evolution operator in time to first order in $\delta{H}_+$, we have \begin{eqnarray} |\psi_{+}(t)\rangle &=& e^{\frac{i}{\hbar}m t}|\psi_{+}(0)\rangle \\ \nonumber & - & \frac{e^{\frac{i}{\hbar}m t}}{\hbar\tilde{\omega}} A_0 e^{i\varphi} {A}'_{\varphi}(\theta>\theta_c)\left(e^{i\tilde{\omega}t} -1\right)|\psi_-(0)\rangle, \end{eqnarray} with $\tilde{\omega}=\omega-2m/\hbar$ and $|\psi_-(0)\rangle=|B\rangle$. Here, we have selected the right-handed polarization term $\delta {H}_+$ because in the limit $t\rightarrow +\infty$ we verify that it satisfies the energy conservation in agreement with the Fermi golden rule approach. In this way, we obtain the transition probability ${\cal P}(\tilde{\omega},t)= |\langle B | \psi_{+}(t)\rangle|^2$ to reach the upper energy band \cite{C2} \begin{equation} \label{short} {\cal P}(\tilde{\omega},t) = \frac{4 A_0^2}{(\hbar\tilde{\omega})^2} ({A}'_{\varphi}(\theta>\theta_c))^2 \sin^2\left(\frac{1}{2}\tilde{\omega}t\right). \end{equation} Here, $|B\rangle$ represents the upper energy state at the north pole or the $K$ Dirac point in the topological band structure. This formula is reminiscent of the nuclear magnetic resonance inter-band transition formula where we identify an additional geometrical factor encoding the topological properties from the radial magnetic field. It is interesting to mention that inter-band transition probabilities in time can possibly be measured with current technology, for instance in ultra-cold atoms \cite{MunichWilczekZee}. From the Fourier transform, the signal will reveal two $\delta$ peaks which in nuclear magnetic resonance find applications for imaging. At short times, we obtain \begin{equation} {\cal P}(dt^2) = \frac{A_0^2}{\hbar^2} C^2 dt^2, \end{equation} such that \begin{equation} \label{density} \frac{dN_+}{dt^2} = - \frac{A_0^2}{\hbar^2} C^2. \end{equation} Here, $N_+(t) = |\langle \psi_+(0) |\psi_+(t)\rangle|^2= N_+(0) - {\cal P}(t) = 1-{\cal P}(t)$ describes the normalized number of particles in the lowest band at time $t$. This equation shows locally the relation with the smooth fields and the global topological invariant $C^2$. In fact, if we select the resonance frequency $\tilde{\omega}\rightarrow 0$, then the relation \begin{equation} {\cal P}(t) \sim \frac{A_0^2}{\hbar^2} C^2 t^2, \end{equation} can be measured for long(er) times.
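The transition probability of Eq. (\ref{short}) can be checked against a direct two-level time integration. A minimal sketch in the basis $(|\psi_+\rangle,|\psi_-\rangle)$ with energies $(-m,+m)$ and the coupling of Eq. (\ref{evolve}); the values of $A_0$ and of the detuning $\tilde{\omega}$ are illustrative and $\hbar=1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

hbar, m, A0, C = 1.0, 1.0, 0.02, 1.0
w = 2*m/hbar + 0.5                    # drive frequency, detuning 0.5
dt, nsteps = 2e-3, 4000
psi = np.array([1.0, 0.0], complex)   # start in the lowest band |psi_+>
P = np.empty(nsteps)
for n in range(nsteps):
    # coupling of Eq. (evolve); the constant phase e^{-i varphi} is dropped
    V = C*A0*np.exp(1j*w*n*dt)
    H = np.array([[-m, np.conj(V)], [V, m]])
    psi = expm(-1j*H*dt/hbar) @ psi
    P[n] = abs(psi[1])**2             # population of the upper band

t = dt*np.arange(1, nsteps + 1)
wt = w - 2*m/hbar
pred = 4*A0**2*C**2/(hbar*wt)**2*np.sin(wt*t/2)**2   # Eq. (short)
print(np.max(np.abs(P - pred)))       # small: higher-order terms in A0
\end{verbatim}
At resonance ($\tilde{\omega}\rightarrow 0$), the same integration reproduces the quadratic short-time growth ${\cal P}(t)\simeq (A_0^2/\hbar^2)C^2t^2$.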
To describe the physics at the $K'$ point, within the geometrical approach we can move the interface $\theta_c\rightarrow \pi$, such that we have the identification $C={A}'_{\varphi}(\theta<\theta_c)={A}'_{\varphi}(\pi^-)$ for the left-handed polarization. Using the identity between eigenstates $|\psi_+(\theta=0)\rangle = -|\psi_-(\theta=\pi)\rangle$ and $|\psi_-(0)\rangle = |\psi_+(\theta=\pi)\rangle$, we obtain \begin{eqnarray} \delta{H}_- &=& A_0 e^{i\omega t} e^{-i\varphi}(-C)|\psi_+(\pi)\rangle \langle \psi_-(\pi)| +h.c. \\ \nonumber &=& A_0 e^{i\omega t} e^{-i\varphi}C|\psi_-(0)\rangle \langle \psi_+(0)| +h.c. \end{eqnarray} We obtain a formula similar to Eq. (\ref{evolve}) close to the $K$ point. From the time evolution of the population in the lower band (or equivalently the upper band), the two light polarizations play a symmetric role, each at a specific Dirac point $K$ or $K'$. In Appendix \ref{lightconductivity}, we verify that the photo-induced currents measure $|C|=C^2$ from the poles of the sphere. This reveals circular dichroism of light \cite{Goldman}, referring to an induced current from the left-handed polarization differing from the induced current from the right-handed polarization. In the present situation, the difference between the two responses is topologically quantized. This Appendix \ref{lightconductivity} also shows why, in the calculation of the photo-induced currents, it is equivalent to integrate the light response over all the wavevectors in the Brillouin zone, as in Ref. \cite{Goldman}, or to consider the Dirac points only for the analysis of the light response to second order in $A_0$ \cite{Klein,C2}. We mention here that topological properties of the lattice may also be revealed from coupling a honeycomb array in a quantum electrodynamics circuit to a local microscope and selecting the frequency of the incoming AC signal to resolve the physics at the Dirac points \cite{JulianLight}. In this case, it is also possible to resolve the topological information from the Dirac points, without the presence of (circular) polarizations for the light field in the probe, from the energy conservation. \subsection{Parity Symmetry and Light at the $M$ point} \label{Paritysymmetry} Here, we show some useful relations from the lattice at the high-symmetry $M$ point in Fig. \ref{graphenefig} related to the light-induced topological response. The superposition of the two circularly polarized lights at the $K$ and $K'$ Dirac points is equivalent to a linearly polarized wave. If we use the Bravais lattice vectors ${\bf u}_1=-{\bf b}_2=\frac{a}{2}(3,\sqrt{3})$ and ${\bf u}_2 = {\bf b}_1 = \frac{a}{2}(3,-\sqrt{3})$, we can write the graphene Hamiltonian at the $M$ point in the form \begin{equation} {H}(M) = w\sigma^+ +h.c., \end{equation} with \begin{equation} \label{w} w = t\left(1+\sum_{i=1}^2 e^{-i {\bf k}\cdot{\bf u}_i}\right), \end{equation} and $k^M_x=\frac{2\pi}{3a}$, $k^M_y=0$. We can justify the choice of local gauge in Eq. (\ref{w}) as follows. Within our definition of the Brillouin zone, at this $M$ point since $k_y=0$, the Hamiltonian should be invariant under the symmetry $k_y\rightarrow -k_y$ which implies that the term $d_2\sigma_y$ in the formulation of Fu and Kane \cite{FuKane} should be defined to be zero. The Hamiltonian at this $M$ point should be equivalent to $d_1\sigma_x=d_1\hat{P}$, with $d_1=\frac{1}{2}\left(w+w^*\right)$ and with $\hat{P}$ the parity operator defined at the middle of a bond in a unit cell in real space, corresponding to interchanging the $A\leftrightarrow B$ sublattices through the transformation $x\rightarrow -x$ or $k_x\rightarrow -k_x$.
This $M$ point in the middle of $K$ and $K'$ is in fact special since $\hbox{sgn}(d_1)=-1$ within our definitions of $w$ whereas at the other high-symmetry points, we find $\hbox{sgn}(d_1)=+1$. These definitions are also in agreement with the fact that the light-matter response is invariant under $\varphi\rightarrow -\varphi$. This results in \begin{eqnarray} \frac{\partial{w}}{\partial k_x} = -i t (2u_x)\hbox{sgn}(d_1) = 3ita. \end{eqnarray} Here, $\hbox{sgn}(d_1)=-1$ reflects the fact that the eigenvalue of the $\sigma_x$ or parity operator on the lattice takes a negative value at this specific point, as in the definition of Fu and Kane. In a similar way, \begin{eqnarray} \frac{\partial{w}}{\partial k_y} = 0. \end{eqnarray} Then, we verify the same form as the one obtained above within the Dirac approximation \begin{eqnarray} \hskip -0.7cm \frac{1}{(\hbar v_F)^2}\left\langle \psi_+ \left|\frac{\partial H}{\partial k_x} \right|\psi_-\right\rangle \left\langle \psi_- \left|\frac{\partial H}{\partial k_x} \right|\psi_+\right\rangle = 4\cos^4\frac{\theta}{2}. \end{eqnarray} Here, we take into account the $\delta(E_b-E_a\mp \hbar\omega)$ function such that either $w\sigma^+$ or $w\sigma^-$ contributes for a specified light polarization. Since the sine and cosine functions are equal at $\theta=\frac{\pi}{2}$, this allows us to verify that this quantity at the $M$ point is also equal to ${\cal I}(\theta)$. At the $M$ point for $\theta=\frac{\pi}{2}$, we then have \begin{eqnarray} \left\langle \psi_+ \left|\frac{\partial H}{\partial k_x} \right|\psi_-\right\rangle \left\langle \psi_- \left|\frac{\partial H}{\partial k_x} \right|\psi_+\right\rangle = {\cal I}(M), \end{eqnarray} for all values of $t_2$. This results in \cite{C2} \begin{equation} \label{resonancemiddle} {\cal I}(M) = \frac{{\cal I}(0)}{2} = \frac{{\cal I}(\pi)}{2} \end{equation} for the light response, in agreement with Eq. (\ref{onehalf}). These equations are also in agreement with the fact that the addition of the electric fields around the two Dirac points produces an electric field along the $x$ direction at the $M$ point, as for a linearly polarized wave: $$ {\bf E} = {\bf E}_+ + {\bf E}_- = 2 e^{i\frac{\pi}{2}}A_0 \omega e^{-i\omega t} {\bf e}_x. $$ For a specified light polarization, from the geometry, the response of the system will be halved compared to the Dirac points because $\frac{\partial w}{\partial k_y}=0$. \subsection{Quantum Hall Effect and Light} Here, we compare the topological properties of the sphere model with $C=1$, the Haldane model and the situation giving rise to the quantum Hall effect in graphene with a uniform magnetic field in the $z$ direction. The effect of the uniform magnetic field is now directly included within the Dirac formalism on the honeycomb lattice. We mention here that the validity of the Dirac approximation can be verified from the lattice and from the Azbel-Harper-Hofstadter Hamiltonian \cite{Azbel,Harper,Hofstadter} including Peierls phases associated with the magnetic field \cite{graphene}. The magnetic field ${\bf B}=B{\bf e}_z$ in the $z$ direction is described, similarly to the light-matter coupling, through a vector potential, here in the one-dimensional (Landau) gauge ${\bf A}=(-By,0,0)$. We can now absorb the vector potential in the $2\times 2$ matrix associated with Eq. (\ref{Kspectrum}).
The wavefunction of the system can be written as $\Phi({\bf r}) = e^{ikx} \Phi(y)$, where $\Phi(y)$ is associated with the spinor $|\Phi_A(y),\Phi_B(y)\rangle$ such that we have a plane wave solution along the $x$ direction. It is judicious to introduce the dimensionless position operator $\hat{r} = \frac{y}{l_B}+k l_B$ with the momentum operator $-i \hbar\partial_r = l_B(-i\hbar\partial_y)$ such that $[\hat{r},-i\hbar\partial_r] = i\hbar$. The cyclotron length takes the usual form $l_B = \sqrt{\frac{\hbar}{q B}}$ with $q>0$. Introducing the normalized ladder operator \begin{eqnarray} {\cal O} &=& \frac{1}{\sqrt{2}}\left(\hat{r} +\partial_r\right) ={\cal O}_K = {\cal O}^{\dagger}_{K'} \\ {\cal O}^{\dagger} &=& \frac{1}{\sqrt{2}}\left(\hat{r} - \partial_r\right) = {\cal O}^{\dagger}_K = {\cal O}_{K'} \end{eqnarray} such that $[{\cal O},{\cal O}^{\dagger}]=1$, the Hamiltonian takes the form \begin{equation} H = \hbar \omega_c^* \left( \begin{matrix} 0 & {\cal O}^{\dagger} \\ {\cal O} & 0 \end{matrix} \right), \end{equation} with the cyclotron frequency $\omega_c^*=\sqrt{2}\frac{v_F}{l_B}$. Introducing the operator $\hat{N}={\cal O}^{\dagger}{\cal O}$ such that $[H,\hat{N}]=0$, the energy eigenvalues read \begin{equation} E = \pm \hbar\omega_c^* \sqrt{N}. \end{equation} The energy eigenvalues at the two Dirac points become quantized in units of $\hbar\omega_c^*$ as mentioned by J. W. McClure in 1956 \cite{McClure}. The main difference compared to the solution of the Schr\" odinger equation is that at $N=0$, $E=0$. The spectrum is also doubly degenerate since the $K$ and $K'$ points give identical solutions. For $N=0$, we then have a zero-energy mode shared between the $K$ and $K'$ Dirac points. The ground state at the $K$ point is projected on $\Phi_A$ and satisfies ${\cal O}\Phi_A=0$ and equivalently $\Phi_B=0$. Due to the inversion of ${\cal O}(K)$ and ${\cal O}^{\dagger}(K')$, at the $K'$ point, the ground state is projected on $\Phi_B$, as for the Haldane model. Since we have positive and negative energy plateaus, we may have electron and hole conductivity plateaus. The formation of Landau energy levels in graphene as a result of the applied magnetic field has been observed in various experiments \cite{Zhang,Novoselov}, including the presence of the additional plateau at $N=0$, absent in metals with a quadratic energy dispersion, and the presence of both positive and negative energy plateaus. The quantum Hall conductivity has been measured. Here, $\sigma_{xy}$ takes the form $\pm 2(2N+1)\frac{e^2}{h}$, associated with the filled Landau energy levels. For $N=0$, the factor $2$ comes from the presence of two Dirac energy ladders at the $K$ and $K'$ points. Interestingly, the formation of the quantum Hall effect in graphene occurs at relatively large temperatures, comparable to room temperature. The light response in quantum Hall systems can be studied using an analogy with spin-$\frac{1}{2}$ particles \cite{NathanNigel}. For the specific situation of the quantum Hall effect in graphene, we show below that the coupling with circular polarizations of light can be described using the same formalism as in Sec. \ref{light}, allowing a correspondence with the Haldane model if we analyse specifically the induced transitions between the energy levels $N=0$ and $N=1^+$.
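Before adding the light field, the relativistic Landau spectrum $E=\pm\hbar\omega_c^*\sqrt{N}$ can be checked by diagonalizing the $2\times 2$ block Hamiltonian above in a truncated Fock basis; a minimal sketch in units $\hbar\omega_c^*=1$:
\begin{verbatim}
import numpy as np

Nmax = 60
O = np.diag(np.sqrt(np.arange(1, Nmax)), 1)  # ladder operator O|n>=sqrt(n)|n-1>
Z = np.zeros((Nmax, Nmax))
H = np.block([[Z, O.T], [O, Z]])             # units hbar*omega_c^* = 1
E = np.sort(np.linalg.eigvalsh(H))
print(E[Nmax-1:Nmax+6])
# 0, 0, 1, sqrt(2), sqrt(3), 2, sqrt(5): E_N = +/- sqrt(N) with a zero mode;
# one of the two zeros is a truncation artifact at the top of the Fock basis
\end{verbatim}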
Including the vector potential due to the light field results in the two equations \begin{eqnarray} \left(\hbar\omega_c^*{\cal O}+\hbar\omega_c^*(A_0 l_B) e^{\mp i\omega t}\right)\Phi_A &=& i\hbar\frac{d}{dt}\Phi_B \\ \nonumber \left(\hbar\omega_c^*{\cal O}^{\dagger} +\hbar\omega_c^*(A_0 l_B) e^{\pm i\omega t}\right)\Phi_B &=& i\hbar\frac{d}{dt}\Phi_A. \end{eqnarray} The signs in these two equations refer to the two light polarizations, right- and left-handed, respectively. The corrections in energy occur to second order in $A_0$. This can be seen simply as follows: the light-matter coupling dresses the cyclotron physics with $$ {\cal O}\rightarrow {\cal O}+(A_0 l_B)e^{\mp i\omega t}, $$ such that corrections in energy occur through ${\cal O}^{\dagger}{\cal O}=N+(A_0 l_B)^2$, and therefore to second order in $A_0 l_B$. This implies that, similarly to the Haldane model, to calculate the correction to first order in $A_0 l_B$ to the eigenstates, we can replace on the right-hand side of these equations $\Phi_{\pm}(t)=\Phi e^{\mp i\sqrt{N}\omega_c^* t}$, with the unmodified (`bare') energies for the quantum Hall system such that \begin{eqnarray} \left(\hbar\omega_c^*{\cal O}+\hbar\omega_c^*(A_0 l_B) e^{\mp i\omega t}\right)\Phi_A &=& \pm\sqrt{N}\hbar\omega_c^* \Phi_B \\ \nonumber \left(\hbar\omega_c^*{\cal O}^{\dagger} +\hbar\omega_c^*(A_0 l_B) e^{\pm i\omega t}\right)\Phi_B &=& \pm\sqrt{N}\hbar\omega_c^*\Phi_A. \end{eqnarray} We assume here that we start with the Landau level $N=0$ filled and describe transitions towards the $+1$ state through absorption of a light quantum. For the plateau at $N=0$, then \begin{eqnarray} \left(\hbar\omega_c^*{\cal O}+\hbar\omega_c^*(A_0 l_B) e^{\mp i\omega t}\right)\Phi_A &=& 0 \\ \nonumber \left(\hbar\omega_c^*{\cal O}^{\dagger} +\hbar\omega_c^*(A_0 l_B) e^{\pm i\omega t}\right)\Phi_B &=& 0. \end{eqnarray} A natural ansatz here is $\Phi_B=0$ and \begin{equation} \tilde{\Phi}_A(0)=\Phi_A(0) - f_A\Phi_A(1)e^{-i\omega_c^*t}. \end{equation} Since we have fixed the energies to zero, this implies that $f_A$ here should be time-independent such that $\dot{f}_A$ does not provide a contribution in $A_0$. Selecting the right-handed polarization when we study the transitions $0\rightarrow 1+$, for $\omega=\omega_c^*$ this leads to $f_A=(A_0 l_B)$. This solution requires the synchronization of cyclotron orbits with the circular light polarizations. The probability to reach the upper band at short times $t\ll 1/\omega_c^*$ is $(A_0 l_B)^2 (\omega_c^* t)^2$. Compared to the Haldane model, the prefactor $C^2=1$ coming from smooth fields becomes similarly $1$ in this case if we define the dimensionless $\tilde{A_0}=A_0 l_B$ and the dimensionless time unit $\omega_c^* t$. The $1$ counts the number of Dirac points or zero-energy modes involved for a given light polarization, similarly to the quantum Hall plateau present between the $N=0$ and $1+$ Landau levels. Taking into account the degeneracy of a Landau level, the response will be multiplied by $\nu_{max}=\frac{\Phi}{\Phi_0}$ with the magnetic flux $\Phi=BS$, $S$ the area of the plane and $\Phi_0$ the flux quantum. We obtain a similar effect at the $K'$ Dirac point. This is equivalent to modifying ${\cal O}\rightarrow {\cal O}^{\dagger}$ in the equations, exchanging the roles of $\Phi_A$ and $\Phi_B$ as in the Haldane model, and to modifying the light polarization $+\rightarrow -$.
If we add the effects of the two light polarizations then, similarly to the quantum Hall conductivity, we measure the fact that a zero-energy state can be equally distributed at the $K$ or $K'$ Dirac point. The fractional quantum Hall effect (FQHE) \cite{Laughlin} is also observed in graphene as an effect of interactions; for instance, a plateau at $\nu=\frac{1}{3}$ is clearly identified \cite{Andrei,Bolotin}. The observation of the fractional quantum Hall state has also been reported in a graphene electron-hole bilayer \cite{MITQHE}. We mention here some recent theoretical efforts to detect the FQHE with circular dichroism via exact diagonalization on a Laughlin state of bosons at $\nu=\frac{1}{2}$ \cite{CecileNathan}. There is then an interesting link between the many-body topological number and the circular dichroic signal. It is relevant to mention here that quantum Hall states have been generalized at the level of a ladder or an assembly of wires \cite{TeoKane,Kanewires}, in three-dimensional models \cite{Halperin3D,Montambaux,Berneviggraphite} and also in four-dimensional systems \cite{Price}, such that the light response in these systems could be studied further through the smooth fields. The quantum Hall effect has been recently observed in a ladder in a strongly-correlated regime \cite{Zhou}, in coupled planes and three dimensions \cite{Li3D} and also in four dimensions \cite{Munich4D}. The bosonic Laughlin state at $\nu=\frac{1}{2}$ can be realized in a ladder \cite{PetrescuLeHur,PetrescuPiraud,Taddia,Mazza} through Thouless pump measures, edge properties and quantum information tools, towards coupled-ladder geometries \cite{FanKaryn} showing a relation with high-$T_c$ cuprates, Andreev and Mott physics \cite{KarynMaurice}. In Sec. \ref{3DQHE}, we relate the quantum Hall effect on the top and bottom surfaces of topological insulators and the spheres' model with $\frac{1}{2}$ topological number. \subsection{Photovoltaic Hall Effect in graphene} \label{graphenelight} Recent experiments on the quantum anomalous Hall effect induced in graphene through circular polarizations of light show a growing signal when increasing the laser drive pulse fluence (intensity) \cite{McIver}. Motivated by these results, we describe the effect of circularly polarized light on graphene when $t_2=m=0$. We show that a quantum Hall response can now be measured when applying an additional fixed DC electric field ${\bf E}$. This effect was introduced as a photovoltaic Hall effect in graphene \cite{OkaAoki1,OkaAoki2} and reflects the fact that a topological phase can be effectively induced either through the Floquet theory within a Magnus high-frequency expansion \cite{MoessnerCayssol} or within the rotating frame. The response may be calculated through a Floquet formalism combined with the Keldysh approach \cite{OkaAoki1,OkaAoki2,Balseiro}. A similar Floquet protocol is applied to produce the quantum anomalous Hall effect in ultra-cold atoms \cite{Jotzu,Kitagawa,Weitenberg,Hauke} and in circuit quantum electrodynamics architectures \cite{FloquetQAH}. Theoretical calculations then verify that the quantum Hall conductivity scales as the light intensity \cite{OkaAoki1,OkaAoki2}. Here, we analyze the formation of the light-induced topological phase from the geometry and results of Sec. \ref{polarizationlight} fixing the resonance condition with both Dirac points in the rotating frame. The objective is to evaluate transport properties from the geometry and results of Sec.
\ref{ParsevalPlancherel}. Within this protocol, this is achieved through the superposed effect of the left-handed and right-handed light polarizations. The light-induced topological phase from Sec. \ref{polarizationlight} corresponds to a Haldane model with a sign change of the $d_z$ term at the two Dirac points, corresponding then to an effective radial magnetic field acting on the sub-lattice Hilbert space of graphene. This allows for a simple $2\times 2$ matrix approach that can then be combined with the geometry to evaluate transport properties. Including the light-matter coupling from the `resonant' rotating frame is equivalent to having the effective Hamiltonian close to the poles of the sphere (Sec. \ref{polarizationlight}) \begin{eqnarray} H_{eff} = \begin{pmatrix} \frac{\hbar\omega}{2}\cos\theta & A_0 + \hbar v_F|{\bf p}| \\ A_0 + \hbar v_F|{\bf p}| & -\frac{\hbar\omega}{2}\cos\theta \\ \end{pmatrix}.\quad \end{eqnarray} Here, $|{\bf p}|$ measures a wave-vector deviation from one of the two Dirac points. Within the Dirac approximation, the vector potential $A_0=\frac{E_0}{\omega}$ couples directly to the pseudo-spin $\bm{\sigma}$. The off-diagonal term $A_0$ will then slightly modify the form of the effective angle $\theta$ at the poles of the sphere. The eigenstate associated with the lowest eigen-energy $-d=-\sqrt{\left(\frac{\hbar\omega}{2}\right)^2 + (A_0+\hbar v_F|{\bf p}|)^2}$ takes the form \begin{equation} |\psi_-\rangle = - \sin\left(\frac{\tilde{A}_0}{\hbar\omega}\right)|a'\rangle + \cos \left(\frac{\tilde{A}_0}{\hbar\omega}\right)|b'\rangle \end{equation} with $|a'\rangle$ and $|b'\rangle$ introduced in Sec. \ref{polarizationlight} and $\tilde{A}_0=A_0+\hbar v_F|{\bf p}|$. From the point of view of the rotated frame, the effective azimuthal angle is zero. Going back to the original frame, this is identical to having \begin{equation} \label{psi-} |\psi_-\rangle = - \sin\left(\frac{\tilde{A}_0}{\hbar\omega}\right)e^{-i\frac{\omega t}{2}} |a\rangle + \cos\left(\frac{\tilde{A}_0}{\hbar\omega}\right)e^{i\frac{\omega t}{2}} |b\rangle. \end{equation} For simplicity, we have omitted the global phase factor of the eigenstate coming from the evolution in time. On the Bloch sphere, this is indeed similar to having an effective polar angle `boost' $\theta=\frac{2\tilde{A}_0}{\hbar\omega}$ and an effective azimuthal angle $\varphi=\omega t$. From the results of Sec. \ref{ParsevalPlancherel}, we deduce that the light-matter coupling itself will then produce a transverse pumped charge $e\sin^2\left(\frac{\theta}{2}\right)\sim e\left(\frac{\tilde{A}_0}{\hbar\omega}\right)^2$ on the Bloch sphere from the north pole, proportional to the light intensity. \subsection{Weyl Semimetals and Light} Weyl and Dirac semimetals, analogues of graphene in three dimensions, have attracted attention recently and several reviews have been written on the subject \cite{SekineNomura,Armitage,Rao}. A Weyl semimetal presents points in its Brillouin zone with a linear energy dispersion, similarly to graphene, and develops interesting topological properties giving rise to surface states as a result of the chiral anomaly. The anomalous Hall conductivity is finite and may become quantized when we have an energy difference between two Weyl points or the two valleys \cite{BurkovBalents,Haldanesemimetal,Wan,Juan}. Here, the objective is to discuss the response to circularly polarized light, extending the analysis of Sec. \ref{graphenelight} on graphene.
Weyl semimetals are described through the Hamiltonian $H_{0}({\bf k}) = \zeta\hbar v_F {\bf k}\cdot \bm{\sigma}$ where $\zeta=\pm$ for each valley and $\bm{\sigma}$ is the vector of Pauli matrices acting on a pseudo-spin space. The energy spectrum presents branches described through $\pm \hbar v_F\sqrt{k_x^2+k_y^2+k_z^2}$. We describe the induction of an anomalous Hall effect from the coupling to circularly polarized light which gives rise in a valley to a term $-{\bf b}\cdot \bm{\sigma}$ with ${\bf b}$ a radial magnetic field defined according to Eq. (\ref{E0field}). We suppose an energy difference $\mu_5 \tau_z=\mu_5\zeta$ between the left and right valleys such that we can select the frequency of the wave and study the response at one Weyl point. The action of such systems then takes the form \begin{eqnarray} S &=& \int dt d^3 k \psi^{\dagger}i(\partial_t - ie A_0)\psi \\ \nonumber &-& \psi^{\dagger}(H_0({\bf k}) - {\bf b}\cdot \bm{\sigma} - \mu_5\tau_z)\psi. \end{eqnarray} Through an infinitesimal gauge transformation related to the Fujikawa method \cite{Fujikawa}, a Weyl semimetal gives rise to a so-called ${\bf E}\cdot{\bf B}$ term in the quantum electrodynamics action, of the specific form $\frac{\alpha}{4\pi^2}\int dt d^3 r\, \theta({\bf r},t)\, {\bf E}\cdot {\bf B}$, with the fine-structure constant $\alpha=\frac{e^2}{\hbar c}\approx\frac{1}{137}$. In the presence of the $\bf{b}$ term, the $\theta$ parameter takes the form $2({\bf b}\cdot {\bf r}-\mu_5 t)$ and should be distinguished from the polar angle. For this aspect, we can refer for instance to the detailed recent review of Sekine and Nomura \cite{SekineNomura}. This axion-type quantum electrodynamics \cite{Wilczek} has attracted attention related to the understanding of cosmology, quantum chromodynamics, strings and dark matter \cite{SvrcekWitten,PreskillWiseWilczek,PecceiQuinn,darkmatterreview}. For a recent review with applications in condensed-matter systems, see Ref. \cite{Nenno}. This also produces interesting transport properties and in particular gives rise to the circular photogalvanic effect in the presence of the light-matter coupling as shown by Juan {\it et al.} \cite{Juan}. The presence of the ${\bf b}$ term, acting as an effective magnetic field in the sub-space of $\bm{\sigma}$, produces a current density as \cite{SekineNomura} \begin{equation} \label{nabla} {\bf j}({\bf r},t) = \frac{e^2}{4\pi^2\hbar} \bm{\nabla}\theta\times {\bf E}. \end{equation} Here, light also produces a ${\bf b}$ term from the rotating frame, similarly to the chiral magnetic effect. As in Sec. \ref{graphenelight}, we study the photo-induced current as a function of the light field amplitude in one valley, which is allowed as a result of the $\mu_5$ term. From the rotating frame in Sec. \ref{polarizationlight} in the presence of one polarization $\pm$ for circular light interacting with one valley in the reciprocal space, we have $\bm{\nabla}\theta = 2eA_0{\bf e}_r=2e\frac{E_0}{\omega}{\bf e}_r$ with ${\bf e}_r\sim {\bf e}_x$ and ${\bf E}_{\pm}=\mp E_0 {\bf e}_{\varphi}\sim\mp E_0 {\bf e}_y$ in the $xy$ plane. The factor $e$ in $2eA_0{\bf e}_r$ comes from the fact that the dipole operator is defined through a unit charge in Eq. (\ref{energyshift}). Eq. (\ref{nabla}) then produces a current perpendicular to the plane, with a universally quantized response written in terms of the light intensity and with a relative $\mp$ sign for the two light polarizations.
From the original frame, the calculation of \cite{Juan} also reveals that the universal prefactor hides the chiral anomaly associated with the Weyl point. Various experiments have been performed to observe this effect and we mention here applications in $\hbox{RhSi}$ Weyl semimetals \cite{Orenstein}. Recent experiments have also found a spatially dispersive circular photogalvanic effect in \hbox{MoTe$_2$} and \hbox{Mo$_{0.9}$W$_{0.1}$Te$_2$} Weyl semimetals \cite{Ji}. \section{Quantum Spin Hall Effect and Two Spheres} \label{quantumspinHall} Here, we introduce the formalism related to two-dimensional topological insulators and the quantum spin Hall effect \cite{Book,BernevigZhang,KaneMele1}. We show \cite{C2} that the $\mathbb{Z}_2$ spin Chern number \cite{Sheng} can be measured locally in the reciprocal space from light, in relation to the zeros of the Pfaffian \cite{KaneMele1}. We also elaborate on interaction effects and the Mott transition. \subsection{Two Spheres and Two Planes} Here, we generalize the formalism and introduce the Berry connection for multiple spheres \begin{equation} A_{j\nu}({\bf R}) = -i\langle \psi |\partial_{j\nu} |\psi\rangle, \end{equation} where $j$ refers to a sphere and $\partial_{j\nu}$ stands for $\partial_{j\theta}$ and $\partial_{j\varphi}$. We assume a general form for the wave-function $|\psi\rangle$ of the system. We can then introduce the smooth fields ${\bf A}'_j$ such that \begin{equation} \bm{\nabla}_j\times {\bf A}_j=\bm{\nabla}_j\times {\bf A}'_j={\bf F}_j \end{equation} where $\bm{\nabla}_1$ (written in terms of $\partial_{1\nu}$) is equivalently written as $\bm{\nabla}_1\otimes\mathbb{I}$ and $\bm{\nabla}_2$ is equivalently written as $\mathbb{I}\otimes\bm{\nabla}_2$. As in Eq. (\ref{smoothfields}), we introduce \begin{eqnarray} A'_{i\varphi}(\theta<\theta_c) &=& A_{i\varphi}(\theta) - A_{i\varphi}(0) \nonumber \\ A'_{i\varphi}(\theta>\theta_c) &=& A_{i\varphi}(\theta) - A_{i\varphi}(\pi). \end{eqnarray} If the spheres are subject to the same radial magnetic field ${\bf d}$, giving rise to the Hamiltonian \begin{equation} H=\sum_i H_i = -\sum_i {\bf d}_i\cdot\mathbfit{\sigma}_i \end{equation} then we can measure the same topological number \begin{equation} \label{topological} C_i = \frac{1}{2\pi}\iint_{S^2} {\bf F}_i\cdot d^2 {\bf s} = 1 \end{equation} defined on each sphere with $d^2{\bf s}=d\theta d\varphi$ and with the Berry curvature \begin{equation} {\bf F}_i=F_i {\bf e}_r = F_i = \partial_{i\theta} A'_{i\varphi}- \partial_{i\varphi} A'_{i\theta}. \end{equation} On each sphere, from the results in Sec. \ref{smooth} we also have the correspondence \begin{equation} C_i = A_{i\varphi}(\pi) - A_{i\varphi}(0) = A'_{i\varphi}(\theta<\theta_c) - A'_{i\varphi}(\theta>\theta_c). \label{Ci} \end{equation} In particular, the topological information can still be resolved from the poles. Two spheres can for instance represent two graphene planes, each described by a Haldane model, resulting in a total topological number $C_{tot}=\sum_{i=1}^2 C_i = 2$. \subsection{Quantum Spin Hall Effect} Here, we introduce the Kane-Mele model on the same honeycomb lattice, where we also include the spin dynamics \cite{KaneMele1}. In the Haldane model, electrons were assumed to be spin-polarized through the application of an in-plane magnetic field.
In graphene, the spin-orbit coupling is usually small, yet the model finds various applications, for instance related to HgTe/CdTe (mercury telluride) materials and the Bernevig-Hughes-Zhang model \cite{Konig,BernevigZhang,Book}, Bismuth thin films \cite{Wurzburgfilms}, and to three-dimensional Bismuth materials, introducing a class called topological insulators \cite{Konig,RMPColloquium,Murakami}. It is also important to mention the recent synthesis of monolayer 1T'-WTe$_2$ with a large bandgap in a robust two-dimensional materials family of transition metal dichalcogenides \cite{Te2}. A quantum spin Hall insulator has also been revealed in 1T'-WSe$_2$ \cite{Crommie}. The model is also realized with other platforms such as light \cite{Ozawa,Rechtsman} and progress is also ongoing in ultra-cold atoms \cite{Monika,Ketterle}. Topological insulators belong to the AII class in the classification tables \cite{BernevigNeupert}. The spin-orbit coupling can be seen as an atomic spin-orbit interaction $L_z s_z$ with $s_z$ measuring the spin-polarization of a spin-$\frac{1}{2}$ electron $\uparrow$ or $\downarrow$ and $L_z$ referring to the $z$ component of the angular momentum. The angular momentum is proportional to the momentum, which produces a $t_2$ term with an imaginary $i$ prefactor in real space. Related to the Haldane model, the spin-orbit coupling then produces a term (for each spin polarization) of the form \cite{KaneMele1}: \begin{equation} H_{t_2}^{KM} = - h_z\sigma_z\otimes s_z, \end{equation} with $h_z({\bf K}')=-h_z({\bf K})$. The parameter $h_z$ is related to $t_2$ similarly to the term $d_z$ in Eq. (\ref{dvector}). Similarly to an Ising interaction, this term shows a $\mathbb{Z}_2$ symmetry corresponding here to simultaneously changing $\sigma_z\rightarrow -\sigma_z$ and $s_z\rightarrow -s_z$. At the $K$ point, the states $|A\rangle\otimes|\uparrow\rangle$ and $|B\rangle\otimes|\downarrow\rangle$ are degenerate in energy, associated with the energy $-m$, and the states $|B\rangle\otimes|\uparrow\rangle$ and $|A\rangle\otimes|\downarrow\rangle$ are degenerate in energy, associated with the energy $+m$. If we go to the other Dirac point, the energies will evolve according to $m\rightarrow -m$. One can equally see the Hamiltonian as a $4\times 4$ matrix in the second-quantization basis $\Psi({\bf k})=(c_{A{\bf k}\uparrow}, c_{B{\bf k}\uparrow}, c_{A{\bf k}\downarrow}, c_{B{\bf k}\downarrow})$. In Fig. \ref{KaneMeleSpectrum}, we show the energy spectrum for the Kane-Mele model, including also a Semenoff mass $M$ corresponding to an energy difference between the $A$ and $B$ sublattices. Here, at the transition, the gap closes at the two Dirac points simultaneously. To describe the $\mathbb{Z}_2$ order associated with the Kane-Mele model it is useful to introduce the Dirac algebra representation \cite{KaneMele2} \begin{equation} \label{classification} H({\bf k}) = d_1({\bf k})\Gamma_1+d_{12}({\bf k})\Gamma_{12} +d_{15}({\bf k})\Gamma_{15}. \end{equation} Here, $\Gamma_1=\sigma_x\otimes\mathbb{I}$, $\Gamma_{12}=-\sigma_y\otimes\mathbb{I}$ and $\Gamma_{15}=\sigma_z\otimes s_z$. The $2\times 2$ matrix $\mathbfit{\sigma}$ acts on the sublattice subspace and the $2\times 2$ matrix $\bf{s}$ acts on the spin polarization subspace. The first two terms are then related to the graphene Hamiltonian and the $d_{15}$ term describes the mass structure in this model.
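The transition of Fig. \ref{KaneMeleSpectrum} can be reproduced from Eq. (\ref{classification}) augmented by the Semenoff term $+M\sigma_z\otimes\mathbb{I}$, evaluated at the two Dirac points. A minimal sketch with one standard (hypothetical) choice for $h_z({\bf k})$; sign conventions may interchange the spin labels at $K$ and $K'$:
\begin{verbatim}
import numpy as np

u1 = np.array([1.5, np.sqrt(3)/2]); u2 = np.array([1.5, -np.sqrt(3)/2])
bnnn = np.array([u1, u2 - u1, -u2])
t, t2 = 1.0, 0.15
K  = np.array([2*np.pi/3,  2*np.pi/(3*np.sqrt(3))])
Kp = np.array([2*np.pi/3, -2*np.pi/(3*np.sqrt(3))])
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def gap(kvec, M):
    w = t*(1 + np.exp(-1j*kvec @ u1) + np.exp(-1j*kvec @ u2))
    hz = -2*t2*np.sin(bnnn @ kvec).sum()  # h_z(K) = -h_z(K') = 3*sqrt(3)*t2
    gaps = []
    for s in (+1, -1):                    # spin-up / spin-down 2x2 blocks
        e = np.linalg.eigvalsh(w.real*sx - w.imag*sy + (M + s*hz)*sz)
        gaps.append(e[1] - e[0])
    return min(gaps)

for M in (0.0, 0.5, 3*np.sqrt(3)*t2, 1.0):
    print(round(M, 4), round(gap(K, M), 4), round(gap(Kp, M), 4))
# the gap closes at both Dirac points (one spin each) at M = 3*sqrt(3)*t2
\end{verbatim}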
Within the linear spectrum approximation close to a Dirac point, we have $d_1(\zeta K)=v_F|{\bf p}|\cos\tilde{\varphi}$, $d_{12}(\zeta K)=v_F|{\bf p}|\sin(\zeta\tilde{\varphi})$ with ${\bf p}$ measuring deviations from a Dirac point and $d_{15}=-h_z=-\zeta m$. This model is invariant under time-reversal symmetry; changing time $t\rightarrow -t$ can be seen as changing the wave-vector ${\bf k}\rightarrow -{\bf k}$ and also the spin polarization $\uparrow \rightarrow -\downarrow$ of a particle (see Appendix \ref{timereversal} for a simple description of this symmetry). This symmetry has deep consequences for physics and in particular implies the existence of two counter-propagating edge modes at the edges of the sample. As already mentioned in Sec. \ref{Paritysymmetry}, the characterization of the parity symmetry also plays an important role in the physical responses. Within the choice of our Brillouin zone, the parity transformation takes the form $\hat{P}=\sigma_x\otimes \mathbb{I}$ such that $[H({\bf k}),\hat{P}]=0$ at the $M$ point between $K$ and $K'$. Therefore, following Fu and Kane \cite{FuKane}, since $H({\bf k})$ commutes with both the time-reversal symmetry and this parity symmetry, we may obtain useful information from the Pfaffian and from symmetries allowing us to define a $\mathbb{Z}_2$ topological invariant from the four high-symmetry points in the Brillouin zone, the three $M$ points in the middle of the three pairs of $K$ and $K'$ points and the $\Gamma$ point in the center in Fig. \ref{graphenefig}. This invariant reads \cite{FuKane} \begin{equation} (-1)^{\nu} = \prod_{i=1}^4 \delta_i \end{equation} with the product running over these four points and the function $\delta=-\hbox{sgn}(d_1)$ defined in Sec. \ref{Paritysymmetry}. In the topological phase, $\nu=1$ and for a non-topological band insulator, $\nu=0$. Generalizations to three-dimensional topological insulators exist \cite{MooreBalents,FuKaneMele,Teo,Roy}. A quantum field theory description is developed in Ref. \cite{Qi}. The quantum spin Hall phase is also generalized on the square \cite{Cocks} and kagome \cite{Franz} lattices where the physics of on-site potentials and interactions can also be addressed through various analytical and numerical methods \cite{IrakliJulian}. \begin{center} \begin{figure}[ht] \includegraphics[width=0.5\textwidth]{KaneMeleSpectrum} \caption{(Left) Band structure of the Kane-Mele model with $t_2=0.15$ and $t=1$ showing a double (spin) degeneracy associated with a $\mathbb{Z}_2$ symmetry. The double degeneracy corresponds to simultaneously changing $\sigma_z\rightarrow -\sigma_z$ and $s_z\rightarrow -s_z$ such that the states $A_{\uparrow}$ $(A_{\downarrow})$ and $B_{\downarrow}$ $(B_{\uparrow})$ have the same energy. (Right) Transition when including a Semenoff mass $+M\sigma_z$ for $M=3\sqrt{3}t_2=0.779423...$. At the $K$ point, the states $A_{\uparrow}$ and $B_{\uparrow}$ meet at $E=0$ and similarly at the $K'$ point the states $A_{\downarrow}$ and $B_{\downarrow}$ meet at $E=0$.} \label{KaneMeleSpectrum} \end{figure} \end{center} \subsection{$\mathbb{Z}_2$ number, Pfaffian, Light and Spin Pump} \label{lightKM} To describe physical observables, here we apply the correspondence with the spheres' model.
This model can be seen as two spheres described by radial magnetic fields such that ${H}_{\uparrow}({\bf k})=-{\bf d}_{\uparrow}({\bf k})\cdot\mathbfit{\sigma}_{\uparrow}$ and ${H}_{\downarrow}({\bf k})={\bf d}_{\downarrow}({\bf k})\cdot\mathbfit{\sigma}_{\downarrow}$ with $d_{\uparrow x}=d_{\downarrow x}=d_1$, $d_{\uparrow y}=d_{\downarrow y}=d_{12}$, $d_{\uparrow z}=\zeta m$ and $d_{\downarrow z}=d_{\uparrow z}$. We keep the same definition as before such that $\zeta=\pm$ at the $K$ and $K'$ Dirac points respectively. Related to Eq. (\ref{correspondence}), within our definitions, the polar angle around a Dirac point in the reciprocal space is related to the Bloch sphere as $\tilde{\varphi}_\uparrow=\varphi\pm \pi$ and $\tilde{\varphi}_\downarrow=\varphi$. Going from sphere $1=\uparrow$ to sphere $2=\downarrow$ is equivalent to changing the roles of the lower and upper energy eigenstates in Eq. (\ref{eigenstates}) and to adjusting the topological number accordingly as $C_{\uparrow}=+1$ and $C_{\downarrow}=-1$. This modifies ${A}'_{\varphi}(\theta>\theta_c)\rightarrow - {A}'_{\varphi}(\theta>\theta_c)=\cos^2\frac{\theta}{2}$ and ${A}'_{\varphi}(\theta<\theta_c)\rightarrow -{A}'_{\varphi}(\theta<\theta_c)=-\sin^2\frac{\theta}{2}$ for sphere $2$. The quantum Hall conductivity is zero in this case since $\sum_{i=1}^2 C_i=0$. In Fig. \ref{KaneMele}, at the edges of the cylinder, a $\uparrow$ particle moves in one direction whereas a $\downarrow$ particle moves in the opposite direction with $I_{b\uparrow}=-I_{b\downarrow}$ and similarly $I_{t\uparrow}=-I_{t\downarrow}$. Therefore, one can introduce the spin Chern number \cite{Sheng} \begin{equation} C_s = C_{\uparrow} - C_{\downarrow} = \pm 2, \end{equation} as a $\mathbb{Z}_2$ formulation of the topological invariant. The $\mathbb{Z}_2$ number is defined modulo a sign, related to the $1\leftrightarrow 2$ symmetry of the system or the structure of smooth fields. Above, we have implicitly assumed a structure with symmetric topological masses $m_{\uparrow}=-m_{\downarrow}$. The situation with asymmetric masses can also be realized, such as in bilayer systems from a topological proximity effect in graphene \cite{bilayerQSH}. In this case, it is still possible to observe a topological spin Chern number equal to $2$. The smooth fields in the $4\times 4$ matrix formulation allow us to describe the occurrence of a $\mathbb{Z}_2$ topological spin Chern number for the two lowest bands \cite{bilayerQSH}. We will discuss specifically the situation of asymmetric masses in Sec. \ref{hop} related to an induced topological proximity effect in graphene. The quantum spin Hall phase is also stable towards other forms of anisotropic spin-orbit couplings at weak interactions \cite{Shitade,ShitadeLiu,Thomale}. This topological number $C_s$ can be measured directly when driving from north to south pole on each sphere simultaneously or through circular polarizations of light resolved at the Dirac points. If we write generally the light-matter coupling as in Eq. (\ref{evolve}) for both spin polarizations, then from the energy conservation, the $+$ light polarization will promote interband transitions at the $K$ point for the $\uparrow$ sphere and at the $K'$ point for the $\downarrow$ sphere. Therefore, detecting the effect of the $+$ (right-handed) and $-$ (left-handed) light polarizations in the inter-band transition probabilities, we can measure $C_s=C_{\uparrow}^2+C_{\downarrow}^2=|C_{\uparrow}|+|C_{\downarrow}|$ from the Dirac points.
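Numerically, since the two spin blocks are Haldane models with opposite $d_z$, the winding-number routine sketched earlier yields $C_{\uparrow}=-C_{\downarrow}$ upon flipping the sign of $t_2$ (a continuation of that sketch; overall signs depend on conventions):
\begin{verbatim}
# Continuation of the winding() sketch: opposite t2 for the two spin blocks
C_up = winding(t2=+0.15, M=0.0)
C_dn = winding(t2=-0.15, M=0.0)
print(round(C_up, 2), round(C_dn, 2), round(abs(C_up - C_dn), 2))
# +/-1, -/+1 and |C_s| = 2
\end{verbatim}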
The additivity of the light responses for the two spin polarizations or the occurrence of the spin topological number $C_s$ can also be understood from the photo-currents \cite{C2}. We emphasize here that on the lattice model, the topological information can also be resolved from the $M$ point in the Brillouin zone (see Sec. \ref{Paritysymmetry}). The analysis of the light responses leads to an interesting relation with the nuclear magnetic resonance \cite{C2}. For the Kane-Mele model, adding the responses for the two spin polarizations around the two Dirac points, this also establishes a relation to the Pfaffian $P({\bf k})$ \cite{C2} \begin{equation} \label{Pf} \alpha_{\uparrow}(\theta) + \alpha_{\downarrow}(\theta) = |C_s| - \left(\hbox{P}({\bf k})\right)^2 \end{equation} with \begin{equation} \hbox{Pf}_{ij}=\epsilon_{ij}P({\bf k})=\langle u_i({\bf k})| U |u_j({\bf k})\rangle=\hbox{Pf}_{ji} \end{equation} and $(i,j)=(\uparrow,\downarrow)$ where the time-reversal operator is introduced as $U$ in Appendix \ref{timereversal}. The function $\alpha_i(\theta)$ corresponds to the light response for each spin polarization, generalizing Eq. (\ref{alphalight}). Eq. (\ref{Pf}) is shown in Table \ref{tableI}. \begin{center} \begin{figure}[t] \hskip -0.2cm \includegraphics[width=0.5\textwidth]{KaneMele.pdf} \caption{(Left) Two Spheres' model describing the Kane-Mele model with a topological $\mathbb{Z}_2$ spin Chern number $C_s=C_{\uparrow}-C_{\downarrow}=\pm 2$. (Center) Lattice representation including a spin-orbit interaction on second nearest-neighboring sites represented through the imaginary hopping terms $\pm it_2$ in real space for an electron with spin polarization $\uparrow$ and $\downarrow$ respectively. (Right) Cylinder representation showing that the edge structure is characterized by a $\uparrow$ particle moving in one direction and a $\downarrow$ particle moving in the other direction.} \label{KaneMele} \end{figure} \end{center} Here, we define eigenstates related to the two lowest filled energy bands on the lattice and show a relation between $P({\bf k})$ and the sphere angles. Close to the $K$ point, an eigenstate related to Eq. (\ref{classification}) and spin polarization $\uparrow$ with energy $E$ can be re-written from the lattice as \begin{eqnarray} \hskip -0.2cm |u_{\uparrow}({\bf K})\rangle=\frac{1}{\sqrt{(E+m)^2 +v_F^2 |{\bf p}|^2}} \left( \begin{matrix} v_F|{\bf p}| \\ (E+m) e^{i\tilde{\varphi}} \end{matrix} \right) && \end{eqnarray} with ${\bf k} = {\bf K}+{\bf p}$. For an eigenstate with energy $E$ and spin polarization $\downarrow$, assuming symmetric masses, we have \begin{eqnarray} \hskip -0.4cm |u_{\downarrow}({\bf K})\rangle =\frac{1}{\sqrt{(E-m)^2 +v_F^2 |{\bf p}|^2}} \left( \begin{matrix} v_F|{\bf p}| \\ (E-m) e^{i\tilde{\varphi}} \end{matrix} \right). && \end{eqnarray} In this way, $Pf_{\uparrow\downarrow}= \langle u_{\uparrow}({\bf p}) | u_{\uparrow}(-{\bf p})\rangle^*$. Going from $K$ to $K'$ corresponds to modifying $m\rightarrow -m$ and $\tilde{\varphi}\rightarrow -\tilde{\varphi}$. Inserting the eigen-energies $E\rightarrow \pm \sqrt{m^2+v_F^2|{\bf p}|^2}$, \begin{equation} Pf_{\uparrow\downarrow} = \frac{v_F |{\bf p}|}{m} = Pf_{\downarrow\uparrow}. \end{equation} For the case of symmetric masses, the Pfaffian satisfies \begin{equation} P({\bf k}) =\frac{v_F|{\bf{p}}|}{m}\approx \sin\theta. \end{equation} This result can also be verified on the sphere representation.
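For symmetric masses, Eq. (\ref{Pf}) with $P({\bf k})\approx\sin\theta$ reduces to the identity $2\left(\cos^4\frac{\theta}{2}+\sin^4\frac{\theta}{2}\right)=2-\sin^2\theta$, which can be checked symbolically:
\begin{verbatim}
import sympy as sp

th = sp.symbols('theta', real=True)
alpha = sp.cos(th/2)**4 + sp.sin(th/2)**4   # alpha(theta) per spin polarization
print(sp.simplify(2*alpha - (2 - sp.sin(th)**2)))   # 0: Eq. (Pf) at |C_s| = 2
\end{verbatim}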
Within our definition of the Brillouin zone, the transformation ${\bf k}\rightarrow -{\bf k}$ is also equivalent to $k_y\rightarrow -k_y$ such that $P({\bf k})=\langle \psi_+(0)|\psi_+(\pi)\rangle^*$. The perfectly quantized light response at the Dirac points or poles of the sphere corresponds to the zeros of the Pfaffian. The light response at the Dirac points measures $|C_s|$. This argument is in fact still valid in the presence of additional perturbations such as a mass asymmetry or a Rashba spin-orbit interaction $\alpha (\mathbfit{s}\times {\bf p})\cdot \bf{e}_z$ \cite{KaneMele2,Sheng} (with ${\bf p}$ the momentum and $\bf{e}_z$ a unit vector in the $z$-direction) as long as the properties of the smooth fields at the poles of the sphere are unchanged and the single-particle gap is not closing. For asymmetric masses, we can deform one sphere smoothly into an ellipsoid, showing the robustness of $C_s$. We can also formulate a relation between the light response and a spin pump analysis in the cylinder geometry. We have the correspondence from the sphere \begin{equation} \sigma_{\uparrow z} - \sigma_{\downarrow z} = (\hat{n}_{a\uparrow} - \hat{n}_{a\downarrow}) - (\hat{n}_{b\uparrow} - \hat{n}_{b\downarrow}) = s_{az} - s_{bz}, \end{equation} where we introduce the density operators related to the two spheres. On the left-hand side, we generalize the pseudo-spin magnetization $\sigma_z$ of Appendix \ref{lightconductivity} for each sphere or each spin polarization. On the right-hand side, we have the spin magnetization resolved on a sublattice. In this way, the topological spin Chern number $C_s=C_1-C_2$ is equal to \begin{equation} C_s = \frac{\langle s_{az}(0)\rangle -\langle s_{bz}(0)\rangle -\langle s_{az}(\pi)\rangle +\langle s_{bz}(\pi)\rangle}{2}. \end{equation} Using the structure of the eigenstates at the two Dirac points in Fig. \ref{KaneMeleSpectrum}, we can re-write $C_s$ in terms of the spin magnetization on one sublattice (for instance, $A$) as \begin{equation} \label{Cs} C_s = \langle s_z(0)\rangle - \langle s_z(\pi)\rangle = -\int_0^{\frac{\pi}{v}} \frac{\partial \langle s_z\rangle}{\partial t} dt. \end{equation} In this sense, the topological spin Chern number is related to the transport of the spin magnetization from north to south pole on the sphere in a pump geometry. Now, we can equivalently link this analysis to the cylinder geometry of Sec. \ref{cylinderformalism} to reveal the spin structure for the edge states. For the Kane-Mele model, due to the structure of the smooth fields, we have now two cylinders such that ${\bf F}_1={\bf F}$ and ${\bf F}_2=-{\bf F}$. To activate the spin pump we apply an electric field ${\bf E}$ parallel to the polar angle, from north to south pole on the sphere, acting on a charge $q$ such that from the Newton equation $\theta(t)=vt$ with $v=\frac{q E}{\hbar}$ in Eq. (\ref{Cs}). From the Parseval-Plancherel theorem in Sec. \ref{ParsevalPlancherel}, this produces transverse currents on the two spheres related to the smooth fields $J_{\perp}^1=J_{\perp}(\theta)=\frac{q}{t}A'_{\varphi}(\theta<\theta_c)$ and $J_{\perp}^2=-J_{\perp}(\theta)$. To relate with the light response, we navigate such that $\theta\in[0;\pi]$ in a time $T=\frac{h}{2qE}$ producing a spin current \begin{equation} \label{Cscylinder} J_{\perp}^1-J_{\perp}^2=\frac{2q^2}{h}C_s E. \end{equation} The factor $2$ specifies that a charge $-q$ also navigates in the opposite direction. On the cylinder, we have the same spin current from the smooth fields identification.
If we introduce a voltage drop on the cylinders $EH=(V_t-V_b)$ we verify the formation of edge modes at the boundaries with the disks,
\begin{equation}
J_{\perp}^1-J_{\perp}^2=G_s(V_t-V_b)\ \hbox{and}\ G_s=\frac{q^2}{h}C_s.
\end{equation}
The analysis at the edges of the cylinder geometry then reveals that the light response resolved at the two Dirac points is related to a spin conductance measurement through $C_s$. We mention here that in the presence of an important Zeeman effect polarizing electrons, a quantum anomalous Hall effect can then be obtained, as experimentally observed with magnetic dopants in $\hbox{HgTe}$ \cite{Budewitz} and Bismuth thin films \cite{Chang}.
\subsection{Interaction Effects}
\label{MottKM}
The stability of the quantum spin Hall phase towards interactions can be shown in various ways from renormalization group arguments \cite{KaneMele1}, gauge theories and simple mean-field theories \cite{Mott}. Here, we show that the mean-field theory can be developed in a controllable variational stochastic way to reproduce the Mott transition line analytically, in one equation, in agreement with Cluster Dynamical Mean-Field Theory (CDMFT) \cite{WuQSH} and Quantum Monte-Carlo \cite{Hohenadler}. The key point is a proper ordering of the logical theoretical steps. Similarly as in Sec. \ref{Mott}, it is useful to first write down a mean-field Hamiltonian of the Hubbard interaction $H_U = U\sum_i\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}$ as
\begin{eqnarray}
H_U &=& -U\sum_i (\phi_0+\phi_z)c^{\dagger}_{i\downarrow}c_{i\downarrow} - U \sum_i (\phi_0 - \phi_z)c^{\dagger}_{i\uparrow}c_{i\uparrow} \nonumber \\
&-& U\sum_i (\phi_0^2-\phi_z^2) \nonumber \\
&+& U\sum_i (\phi_x-i\phi_y) c^{\dagger}_{i\uparrow}c_{i\downarrow} + U\sum_i (\phi_x+i\phi_y)c^{\dagger}_{i\downarrow}c_{i\uparrow} \nonumber \\
&+& U\sum_i (\phi_x^2+\phi_y^2).
\label{interactionsPhi}
\end{eqnarray}
The Mott transition in the Kane-Mele-Hubbard model corresponds in fact to a magnetic or N\'eel transition in the $XY$ plane \cite{Mott}, as the gap in the electron spectral function calculated from CDMFT does not reduce to zero \cite{WuQSH}. The system evolves adiabatically from a band insulator into a Mott insulator. In the quantum spin Hall phase, from the wave-function at $U=0$, we estimate that the spin-spin correlation functions decay very rapidly with the distance, similarly as in a gapped quantum spin liquid phase in the bulk \cite{Mott}. Here, we introduce the magnetic channels $S_r= c^{\dagger}\sigma_r c$ with $r=x,y,z$ \cite{QSHstoch} and, to minimize the interaction energy, we verify the solution $\phi_r = -\frac{1}{2}\langle S_r\rangle$ (as checked explicitly below). At half-filling, we also identify $\phi_0 = - \frac{1}{2}\langle c^{\dagger}_{i\uparrow}c_{i\uparrow} + c^{\dagger}_{i\downarrow}c_{i\downarrow}\rangle=-\frac{1}{2}$ and this equality holds equally in the quantum spin Hall phase and in the Mott phase. Similarly to the interacting Haldane model in Sec. \ref{Mott}, the equality $\phi_r = -\frac{1}{2}\langle S_r\rangle$ can be implemented from a path-integral approach re-writing the interaction as a spin interaction $U\sum_i \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} = U\sum_{i,r} \eta_r S_{ir} S_{ir}$. Since for $t_2\rightarrow 0$ the magnetic ground state would occur equally likely in any direction, this implies selecting $\eta_x=\eta_y=\eta_z$, which then leads to the specific choice $\eta_x=\eta_y=\eta_z=-\frac{1}{8}$ and $\eta_0=\frac{1}{8}$ \cite{QSHstoch}.
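The identification $\phi_r=-\frac{1}{2}\langle S_r\rangle$ announced above can be checked directly from the saddle point of Eq. (\ref{interactionsPhi}); for instance, for the $z$ channel and per lattice site,
\begin{equation}
\frac{\partial \langle H_U\rangle}{\partial \phi_z} = U\left(\langle \hat{n}_{i\uparrow}\rangle-\langle \hat{n}_{i\downarrow}\rangle\right) + 2U\phi_z = 0 \;\Rightarrow\; \phi_z = -\frac{1}{2}\langle S_{iz}\rangle,
\end{equation}
and similarly for the $x$ and $y$ channels, consistently with the choice of the $\eta_r$ parameters.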
In this way, we have the precise correspondence
\begin{equation}
H_U = \frac{U}{8}\sum_i \hat{n}_i^2 - \frac{U}{8}\sum_{i,r=x,y,z} {S}_{ir}\cdot {S}_{ir} +\frac{U}{4}\sum_i (\hat{n}_{i\uparrow} + \hat{n}_{i\downarrow}),
\end{equation}
with $\hat{n}_i=\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}$ the charge channel associated to $\eta_0$. One can check this operator identity directly on a given site: it reproduces $0$, $0$ and $U$ when acting on the empty, singly and doubly occupied states, respectively, using $\sum_r S_{ir}S_{ir}=3\hat{n}_i-6\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}$. We can then introduce the stochastic variables $\phi_i$ in a path-integral manner such that they satisfy the minimum action principle. To evaluate ground-state properties, we can then Fourier transform the interaction terms assuming the long wavelength and zero-frequency limit corresponding to the limit of `static' and uniform Gaussian stochastic variables
\begin{eqnarray}
Z \sim \int \Pi_{{\bf k},r=x,y,z} D\phi_{r} \int D\psi^{\dagger}_{\bf k} D\psi_{\bf k} e^{-S_{KM}[\psi^{\dagger}_{\bf k},\psi_{\bf k}]} && \nonumber \\
\times e^{\frac{U}{2}\int_0^{\beta} d\tau \left(\phi_{r}\phi_{r} +\phi_{r}S_{{\bf k}r}\right)} &.&
\end{eqnarray}
Here, an implicit summation over the index $r$ is understood and $S_{KM}$ corresponds to the action of the Kane-Mele model. The last term can then be absorbed in the Kane-Mele Hamiltonian such that we obtain the (degenerate) eigenenergies which depend on the stochastic variables $\phi_x$ and $\phi_y$,
\begin{equation}
\epsilon^{\pm}({\bf k}) = \frac{U}{2}\pm \sqrt{\epsilon({\bf k})^2 + \left(\frac{U}{2}\right)^2(\phi_x\phi_x+ \phi_y\phi_y)},
\end{equation}
with $\epsilon({\bf k})$ the eigenenergies of the Haldane model with $V=0$ in Sec. \ref{Mott}. We assume here for simplicity that $\phi_z=0$ for the Mott insulating phase, which can be justified from a strong-coupling effective theory when $t_2\neq 0$ \cite{Mott}. The Mott transition can be identified in various ways \cite{QSHstoch} such as the Green's function approach and the free energy. Here, we show that the Hellmann-Feynman theorem gives a simple answer for the ground-state energetics, similarly as in Sec. \ref{Mott} for the Haldane model. The ground state energy takes the simple form $E_{gs}=2\sum_{\bf k} \epsilon^-({\bf k})$ where the factor $2$ accounts for the spin-degeneracy. Then, minimizing $E_{gs}$ with respect to $\phi_x$ gives
\begin{equation}
\frac{\partial E_{gs}}{\partial \phi_x}= -\frac{U^2}{2}\sum_{\bf k} \frac{\phi_x}{\epsilon({\bf k})},
\end{equation}
assuming here $\phi_x\rightarrow 0^+$ close to the quantum phase transition. From the definition of the mean-field theory we also have $\frac{\partial E_{gs}}{\partial \phi_x} = U\sum_i \langle c^{\dagger}_{i\uparrow}c_{i\downarrow}+c^{\dagger}_{i\downarrow}c_{i\uparrow}\rangle = -2UN\phi_x$ where $N$ represents the number of unit cells (or half the number of total sites) such that we obtain the equation for the transition line
\begin{equation}
\frac{1}{U_c} = \frac{1}{4N}\sum_{\bf k} \frac{1}{\epsilon({\bf k})}.
\end{equation}
Interestingly, this transition line is in good agreement \cite{QSHstoch} with numerical methods, also in the limit $t_2\rightarrow 0$ where it is usually difficult to perform analytical calculations beyond the so-called large-$N$ method \cite{Herbut}, with $N$ here referring to the number of flavors or species, or similarly the $3-\epsilon$ expansion \cite{Mottepsilon}. We identify $U_c(t_2\rightarrow 0)\sim 4t$. Within the present approach, a magnetic transition for the channel $\phi_z$ would occur at larger $U$ values when $t_2\neq 0$. For $U<U_c$, $\phi_x=0$, which shows that the smooth-field description of the quantum spin Hall phase remains quasi-identical to the situation at $U=0$. The variational approach also allows a control on the fluctuations from the calculation of the polarization bubbles \cite{QSHstoch}.
When adding a small mass term in the theory at the Dirac points, the fluctuations in the infra-red become reduced compared to graphene \cite{polarizationgraphene} since we should satisfy $\hbar\omega>\sqrt{(\hbar v_F q)^2+(2m)^2}$. The one-loop contribution to the polarizability $\Pi({\bf q},\omega)=i\frac{e^2}{8}\frac{{\bf q}^2}{\sqrt{v_F^2 |{\bf q}|^2 -\omega^2}}$ becomes regularized by $2m$ when $q=|{\bf q}|\rightarrow 0$. From a numerical perspective, adding a small term opening a gap is then useful to control the infra-red divergences of the fluctuations. For the typical situation of graphene with $t_2=0$, the Mott transition is only quantitatively tractable through the large-$N$ method, with $N$ referring to the number of flavors associated to fermions, leading to $U_c\sim \frac{v_F}{2N}$ in this case \cite{Herbut}. This situation can for instance be achieved from a Hofstadter model on the square lattice where tuning magnetic fluxes allows one to tune the number $q$ of Dirac points, leading to $U_c\sim q^{-2}$ \cite{Cocks}. The mean-field variational stochastic approach was recently generalized to the Kagome lattice \cite{Julian}. The edge theory of the quantum spin Hall phase corresponds to the helical Luttinger liquid theory revealing the two counter-propagating modes within quantum field theory \cite{WuC}. At the Mott transition, the presence of a magnetic order in the $XY$ plane then produces a massive theory for the helical liquid description at the edges, reflecting the breaking of the time-reversal symmetry at the transition \cite{WuQSH}. From the edge theory on the cylinder in Eq. (\ref{Cscylinder}), we then infer that $C_s=0$ at the transition. This approach is justified from the reciprocal space when the Mott transition develops a magnetic order and can then be combined with the smooth field description. Gauge theories also allow the identification of topological Mott phases described by a quantum spin Hall effect for the spin degrees of freedom \cite{Kallin,Mott,PesinBalents}. On the other hand, the analysis of gauge fluctuations following Polyakov \cite{Polyakov} must be taken with care in two dimensions of space, such that this requires additional flavors or spin species to stabilize this phase of matter. Such a topological Mott phase has also been identified in three dimensions related to iridate materials \cite{PesinBalents}. For two-dimensional iridates, the Mott phase is described by spin textures \cite{ShitadeLiu, Thomale}. Furthermore, it is relevant to mention the emergence of a chiral spin state for the Kane-Mele-Hubbard model for bosons identified through DMFT, ED and quantum field theory \cite{Plekhanov}.
\section{Topological Proximity Effects}
\label{proximityeffect}
\subsection{Induction of Topological State in Graphene from Interlayer Hopping}
\label{hop}
Topological insulating states induced from a proximity effect have attracted attention in recent years theoretically \cite{Hsieh,Hofstetter} and experimentally \cite{AndoProximity}. Here, we introduce a $\mathbb{Z}_2$ topological proximity effect when coupling a graphene plane to a thin material described by a topological Haldane model, which then forms another plane \cite{bilayerQSH}. A small hopping term between planes induces a topological phase in graphene to second order in perturbation theory. It is important to mention here recent specific progress in engineering materials with graphene to achieve such a topological proximity effect \cite{QSHgraphene,Tiwari}.
The tunneling Hamiltonian takes the form
\begin{equation}
{H}_t = \int \frac{d^2{\bf k}}{(2\pi)^2} \sum_{\alpha\beta} \left(r c^{\dagger}_{g\alpha} \mathbb{I} c_{h\alpha} + \gamma({\bf k}) c^{\dagger}_{g\alpha} \sigma^{+}_{\alpha\beta} c_{h\beta} + h.c.\right).
\end{equation}
Here, $c^{\dagger}_{g}$ and $c^{\dagger}_h$ refer to electron (creation) operators in the graphene and Haldane systems and the Pauli matrices $\bm\sigma$ act on the sub-lattice subspace (A,B) common to the two systems. Here, $r$ involves a coupling between the same sublattice $A$ or $B$ in the two planes. The topological proximity effect induced by a weak $r$ value can be understood as follows \cite{bilayerQSH}. A particle starts from graphene in sub-lattice $A$, then hops onto the same sub-lattice in the Haldane layer, and after the action of the second nearest-neighbor tunneling term $t_2$ giving a phase $+\phi$, the particle goes back into the graphene lattice, producing an effective $t_2^{eff}$ term in the graphene layer proportional to $-\frac{|r|^2}{|d_z^h|^2} d_z^h\sigma_z$ in Eq. (\ref{hz}). As described below, the $-$ sign results from second-order perturbation theory; for the $B$ sub-lattice, the perturbation theory gives an opposite sign because of the nature of the $t_2$ term in the Haldane layer. Assuming an $AA$ and $BB$ stacking between planes, the effect of the $\gamma({\bf k})$ term in Eq. (\ref{parameters}), coupling here different sublattices, is negligible in the proximity effect. This results from the fact that the sum of the three vectors ${\bf b}_i$ gives zero in Fig. \ref{graphenefig} when summing the effect of the different sites involved in the $\gamma({\bf k})$ term in the Haldane system. The proximity effect can then be simply described through a constant $r$ coupling between the two thin materials. Here, we describe the effect in terms of two simple mathematical approaches \cite{bilayerQSH}. The effective Hamiltonian in the graphene layer, assuming that $r\ll t_2$, takes the form
\begin{equation}
{H}_{eff}^g = P H_g P + P H_t (1-P) \frac{1}{(E-H)} (1-P) H_t P,
\end{equation}
where $H_g$ refers to the graphene lattice Hamiltonian. Here, the projector $P$ acts on the sub-space where the lowest band of the Haldane system is completely filled, linked to the ground state $|GS\rangle$, and the projector $(1-P)$ produces virtually a quasiparticle in the upper Haldane Bloch band. In this sense, the second term in $H_{eff}^g$ can be understood as follows, after introducing the notation ${c^{\dagger}}^u_{h\alpha}({\bf k}) | GS\rangle$ which creates a quasiparticle in the upper Haldane band and refers to the $(1-P)$ projector:
\begin{eqnarray}
\hskip -0.5cm
- \frac{|r|^2}{|d_z^h({\bf k})|} \langle GS | c^{\dagger}_{g\alpha}({\bf k})c_{h\alpha}^u({\bf k}){c^{\dagger}}^{u}_{h\alpha}({\bf k}) |GS\rangle \frac{d^h_z({\bf k})\sigma_z}{| d^h_z({\bf k}) |} \\ \nonumber
\times\langle GS | c_{h\beta}^u({\bf k}) {c^{\dagger}}^u_{h\beta}({\bf k}) c_{g\beta}({\bf k}) |GS \rangle.
\end{eqnarray}
The factor $\frac{d_z^h({\bf k})}{|d_z^h({\bf k})|}$ takes into account the term $t_2$ in the Haldane system and the relative phase difference between the $A$ and $B$ sub-lattices. This gives rise to an effective Hamiltonian in the graphene system
\begin{equation}
{H}_{eff}^g = P H_g P + P \frac{-|r|^2}{|d_z^h({\bf k})|^2} c^{\dagger}_{g\alpha}({\bf k}) d_z^h({\bf k}) \sigma_z c_{g\beta}({\bf k}) P.
\end{equation}
The term $d_z^h({\bf k})\sigma_z$ takes into account the phase accumulated for a particle in sub-lattice $A$ or/and sub-lattice $B$ when travelling in the Haldane layer. At weak $r$-coupling, we find that the induced gap in the graphene layer corresponds to an induced $d_z$ term of the form $-|r|^2/(27t_2^2 \sin^2\phi)d_z^h\left({\bf k}\right)\sigma_z$ in Eq. (\ref{hz}), where we used $|d_z^h({\bf K})|^2=27 t_2^2\sin^2\phi$ at the Dirac points; this term changes sign at the two Dirac points, in the two valleys. In the analysis above, the system is spin-polarized as a result of Zeeman effects. As an application of quantum field theory, this result can also be verified through a path integral approach and simple transformations on matrices. The induced term in graphene can be obtained defining $\zeta_h({\bf k}) = (c_{hA}({\bf k}), c_{hB}({\bf k}))$ and $\bar{\zeta}_h({\bf k}) = (c^{\dagger}_{hA}({\bf k}), c^{\dagger}_{hB}({\bf k}))$ for the partition function describing the Haldane system. The induced term in graphene can be obtained similarly as with classical numbers, when completing the `square' to obtain a Gaussian integral. It is appropriate to introduce the matrix $M({\bf k})={\bf d}^h({\bf k})\cdot\bm\sigma$ and its inverse such that $M^{-1}({\bf k}) M({\bf k})=M({\bf k})M^{-1}({\bf k})=\mathbb{I}$. In the weak-coupling limit $r\ll |{\bf d}^h({\bf k})|$, we can then redefine the dressed (fermionic) operators such that
\begin{equation}
({\zeta}_h^*({\bf k}))^T \approx (\zeta_h({\bf k}))^T + r M^{-1}({\bf k})(\zeta_g({\bf k}))^T
\end{equation}
and
\begin{equation}
\bar{\zeta}_h^*({\bf k}) \approx \bar{\zeta}_h({\bf k}) + r^* \bar{\zeta}_g({\bf k})M^{-1}({\bf k}).
\end{equation}
After completing the square, this gives rise to the induced term in graphene
\begin{eqnarray}
\hskip -0.5cm
\int {\cal D}\zeta^*_h({\bf k}) {\cal D}\bar{\zeta}_h^*({\bf k}) e^{-\int_0^{\beta} d\tau \int \frac{d^2 k}{(2\pi)^2} \bar{\zeta}_h^*({\bf k})[\partial_{\tau} + M({\bf k})](\zeta_h^*({\bf k}))^T} && \\ \nonumber
\times e^{+\int_0^{\beta} d\tau \int \frac{d^2 k}{(2\pi)^2} |r|^2 \bar{\zeta}_g({\bf k}) M^{-1} (\zeta_g({\bf k}))^T} &.&
\end{eqnarray}
The induced term in graphene is only correct for time scales sufficiently long that the gap in the Haldane layer has formed. Here, we neglect the dynamical effect in $\omega_n |r|^2/|{\bf d}^h({\bf k})|^2$, where $\omega_n$ are Matsubara frequencies related to real frequencies through the analytical continuation $i\omega_n\rightarrow \omega+i0^+$, since we study ground-state properties. The effective Hamiltonian and the induced gap in graphene are in agreement with the perturbation theory. We can also re-interpret this result as an effective Ising coupling induced between two spin-$\frac{1}{2}$ degrees of freedom in ${\bf k}$-space, as follows. If we introduce two distinct Pauli matrices $\bm\sigma^g$ and $\bm\sigma^h$ to describe the Hamiltonians of each system, the induced term in the graphene layer can be identified as
\begin{equation}
- \frac{|r|^2}{|d_z^h({\bf k})|^2}d_z^h({\bf k}) \sigma_{gz} \rightarrow \frac{|r|^2}{|d_z^h({\bf k})|} \sigma_{gz}({\bf k})\sigma_{hz}({\bf k})
\end{equation}
in the limit where $d_z^h$ is large. The induced pseudo-magnetic field in the $z$ direction in the graphene layer for a given ${\bf k}$ wave-vector depends on the direction of the spin (position of the particle in the $A$ or $B$ sublattice) in the Haldane layer. Here, we discuss the applicability of the topological $\mathbb{Z}_2$ number in the situation of asymmetric masses related to the occurrence of this topological proximity effect.
A mass asymmetry can in fact be re-written as a term $\lambda_v$ in the classification of Kane and Mele in Eq. (\ref{classification}) \cite{KaneMele2}, corresponding to a global staggering potential. A mass asymmetry gives rise to a perturbation of the form
\begin{equation}
\delta {H}=d_2\sigma_z\otimes\mathbb{I}+\tilde{d}_{15} \sigma_z\otimes s_z=d_2\Gamma_2+\tilde{d}_{15}\Gamma_{15},
\end{equation}
in Eq. (\ref{classification}) where $d_2=\frac{\delta m}{2}$ and $\tilde{d}_{15}=-\frac{\delta m}{2}$. In this way, the mass asymmetry is then equivalent to the effect of a global staggering potential $d_2$ on the lattice with a topological mass $d_{15}+\tilde{d}_{15}=-\frac{1}{2}(m_1+m_2)$ in the $\Gamma_{15}$ term. The quantum spin Hall effect is known to be stable as long as $|d_2|<|d_{15}+\tilde{d}_{15}|$ which means here $|m_1-m_2|<(m_1+m_2)$. This emphasizes the robustness of the $\mathbb{Z}_2$ topological phase even if one topological mass is small(er), as in the bilayer model described above. This is also in agreement with Fig. \ref{KaneMele} and with the fact that the topological number $C_s=C_1-C_2$ would keep the same form when driving from north to south pole (see Eq. (\ref{polesC})). From the point of view of spheres, as long as the directions of the effective magnetic fields at the poles do not change sign, the $C_s$ Chern number remains identical. On the other hand, the bulk-edge correspondence must be discussed with care in the topological proximity effect as an $r$ tunnel coupling at the edges would open a gap to first order in perturbation theory. The bulk proximity effect is revealed to second order in $r$. Therefore, stabilizing two counter-propagating modes at the edges in Fig. \ref{KaneMele} requires two cylinders of different lengths, as verified numerically \cite{bilayerQSH}. We emphasize here that the form of the eigenstates in the $4\times 4$ matrix description allows us to show that the two lowest bands are indeed described by topological numbers $\pm 1$ for a vast range of parameters \cite{bilayerQSH}.
\subsection{Coulomb Interaction Between two Planes}
\label{Coulomb}
Here, we discuss in more detail interaction effects from an analogy with the Ising spin interaction, which will be useful to identify the possibility of a mapping towards the two spheres' model with $C_j=\frac{1}{2}$ described in Sec. \ref{fractionaltopology}. Each plane is described by a topological Haldane model with identical topological mass terms. Here, we describe the effect of the Coulomb interaction between the two planes. For a model of two spheres or two planes $1$ and $2$, we can project the interaction on the lowest filled band(s) to write an effective interaction at the Dirac points. For this purpose, we can simply use the form of the projectors
\begin{equation}
\hat{n}_1^i = \frac{1}{2}\left(1\pm\sigma_{1z}\right)
\end{equation}
and
\begin{equation}
\hat{n}_2^i =\frac{1}{2}\left(1\pm\sigma_{2z}\right),
\end{equation}
with $i=a,b$ referring to the two sub-lattices. Now, we use the structure of the eigenstates at the Dirac points such that for the two planes described by a Haldane model, the dominant interaction at the $K$ point is of the form $H_{Int}({\bf K})=\lambda \hat{n}_1^a \hat{n}_2^a$. Similarly, close to the $K'$ point within the lowest band, the main interaction channel is of the form $H_{Int}({\bf K}')=\lambda \hat{n}_1^b \hat{n}_2^b$.
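These interactions can be expanded directly with the projectors above; for instance at the $K$ point,
\begin{equation}
\lambda\, \hat{n}_1^a \hat{n}_2^a = \frac{\lambda}{4}\left(1+\sigma_{1z}\right)\left(1+\sigma_{2z}\right) = \frac{\lambda}{4}\sigma_{1z}\sigma_{2z} + \frac{\lambda}{4}\left(\sigma_{1z}+\sigma_{2z}\right) + \frac{\lambda}{4},
\end{equation}
and similarly at the $K'$ point with $\hat{n}_j^b=\frac{1}{2}(1-\sigma_{jz})$, which reverses the sign of the linear terms.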
These two local interactions at the Dirac points can be re-written as
\begin{eqnarray}
\label{Ising}
H_{Int}^1 = \frac{\lambda}{4}\sigma_{1z}\sigma_{2z} +\frac{\lambda}{4} +\frac{\lambda}{4}\zeta(\sigma_{1z}+\sigma_{2z}).
\end{eqnarray}
In this sense, the Coulomb interaction between planes in this case gives rise to an antiferromagnetic Ising interaction between the pseudo-spins measuring the relative occupancy on each sublattice $A$ or $B$ and a renormalization of the mass structure at the Dirac points. One can add the radial magnetic field acting on the two spheres and show that as long as $\lambda<2m$ the pseudo-spin polarizations at the poles $\langle \sigma_{iz}\rangle$ remain identical, such that from Eq. (\ref{polesC}), the total topological number $C_1+C_2$ remains equal to $2$. We can also include long-range interactions in the reciprocal space between Dirac points. This can result in an additional term
\begin{equation}
H_{int}^2 =\lambda'\left(\hat{n}_1^a({\bf K})\hat{n}_2^b({\bf K}') +\hat{n}_1^b({\bf K}')\hat{n}_2^a({\bf K})\right),
\end{equation}
that we take as constant here for simplicity. This increases the energy of the lowest energy states at the poles such that
\begin{equation}
E_{aa}({\bf K})+E_{bb}({\bf K}')=(-4d+2\lambda+2\lambda').
\end{equation}
From the ground state, this is in fact identical to defining a dressed interaction $\lambda_{eff}=\lambda+\lambda'$ in Eq. (\ref{Ising}). A similar argument can be made for the Kane-Mele model. This derivation shows that usually, as long as the interaction $\lambda_{eff}$ is smaller than the band gap, the topological phase is stable, as also discussed in Secs. \ref{Mott} and \ref{MottKM}. When including Semenoff masses $M_1$ and $M_2$ in the two spheres, we may obtain a fractional geometry for relatively weak interactions as described in Sec. \ref{fractionaltopology}. The term $M_1=M_2$ on the lattice corresponds to a global staggered potential or modulated potential applied to the two planes. This Section then opens perspectives on realizing fractional topological numbers from the Coulomb interaction and an analogy between charge and spin. In Sec. \ref{further}, we show a correspondence between the two spheres' model and two wires coupled through a Coulomb interaction.
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{FigBandStructure}
\caption{Band Structure of the four-band model in the topological semimetal phase for $M$ of the order of $d_z$ with $d_z-M<r<d_z+M$ showing the Berry phases at the $K$ and $K'$ points.}
\label{4bandstructure}
\end{figure}
\end{center}
\subsection{Topological Semimetal in a Bilayer System}
\label{bilayer}
Bilayer systems in graphene with Bernal stacking \cite{graphene,McCann} and also Moir\'e patterns \cite{AndreiMacDonald} have attracted a lot of attention in recent years, related to the quest for novel phases of matter \cite{Yazdanimagic,Louk,Herrero}. Topological semimetals in three dimensions have also attracted growing attention in recent years experimentally \cite{Bian,YanFelser,GuoDresden} and theoretically \cite{Burkovsemimetal,Ezawa,FangFu,YangNagaosa}. In 2015, Young and Kane suggested the possibility of a two-dimensional Dirac semimetal on the square lattice \cite{YoungKane} with time-reversal symmetry. Here, we show the possibility of realizing a topological nodal ring semimetal in a bilayer system of two planes coupled through a hopping term $r$ as described in Sec. \ref{hop} \cite{HH}.
This situation can be realized in optical lattices \cite{bilayerQSH}. Regarding quantum materials, one may design appropriate platforms to observe this topological semimetal. In Sec. \ref{semimetalclass}, we show a precise application with one-layer graphene. We also present a relation between this topological model and the two spheres' model with $C=\frac{1}{2}$ per sphere or per plane in Sec. \ref{topomatter}. The matrix model in the Hilbert space basis $(c^{\dagger}_{A1}, c^{\dagger}_{B1}, c^{\dagger}_{A2}, c^{\dagger}_{B2})$ takes the form
\begin{equation}
H(\bm{k}) =
\begin{pmatrix}
\zeta d_z +M & d_x-id_y & r & 0 \\
d_x+id_y & - \zeta d_z - M & 0 & r \\
r & 0 & \zeta d_z + M & d_x - i d_y \\
0 & r & d_x+i d_y & -\zeta d_z - M
\end{pmatrix}.
\end{equation}
Here, the components $d_x$ and $d_y$ describe the graphene physics in each plane according to Sec. \ref{spherelattice} and $\zeta=\pm$ at the $K$ and $K'$ Dirac points, respectively. We assume that $d_z=m>0$ and $M>0$ with $M<d_z$. Compared to the two spheres' model introduced in Sec. \ref{fractionaltopology}, we have inverted the direction of the two ${\bf d}_i$ vectors. The hopping term between planes is identical to that in Sec. \ref{hop}. We can diagonalize the matrix and verify the four energy states close to the $K$ point (see also the compact derivation below) \cite{HH}:
\begin{eqnarray}
E_1({\bf K}) &=& -r -\sqrt{d_x^2 + d_y^2 + (d_z+M)^2} \\ \nonumber
E_2({\bf K}) &=& r - \sqrt{d_x^2+d_y^2 + (d_z+M)^2} \\ \nonumber
E_3({\bf K}) &=& -r + \sqrt{d_x^2 + d_y^2 + (d_z+M)^2} \\ \nonumber
E_4({\bf K}) &=& r + \sqrt{d_x^2+d_y^2 + (d_z+M)^2}.
\label{Eeigenvalues}
\end{eqnarray}
The energies are structured such that $E_1$ corresponds to the lowest eigenenergy and $E_4$ to the largest eigenenergy, see Fig. \ref{4bandstructure}. The corresponding eigenstates at the $K$ Dirac point are of the form $\psi_1=\frac{1}{\sqrt{2}}(0,-1,0,1)$, $\psi_2=\frac{1}{\sqrt{2}}(0,1,0,1)$, $\psi_3=\frac{1}{\sqrt{2}}(-1,0,1,0)$ and $\psi_4=\frac{1}{\sqrt{2}}(1,0,1,0)$. Precisely at the $K$ point, we have $d_x=d_y=0$ such that $E_1$ and $E_2$ correspond to the two lowest filled energy eigenstates at half-filling as long as $r<(d_z+M)$. In this case, the ground state then corresponds to the two-particle wavefunction $\psi_1 \psi_2=e^{i\pi} c^{\dagger}_{B1} c^{\dagger}_{B2}|0\rangle$ with $|0\rangle$ referring to the vacuum state (added to the filled states). There is also the formation of an energy gap in the bulk at the Fermi energy at the $K$ Dirac point. Similarly at the $K'$ point, changing $d_z\rightarrow -d_z$, the eigenenergies read:
\begin{eqnarray}
E_1' &=& -r -\sqrt{d_x^2 + d_y^2 + (d_z-M)^2} = E_1({\bf K}') \\ \nonumber
E_2' &=& r - \sqrt{d_x^2+d_y^2 + (d_z-M)^2} = E_3({\bf K}') \\ \nonumber
E_3' &=& -r + \sqrt{d_x^2 + d_y^2 + (d_z-M)^2} = E_2({\bf K}')\\ \nonumber
E_4' &=& r + \sqrt{d_x^2+d_y^2 + (d_z-M)^2} = E_4({\bf K}').
\end{eqnarray}
The energy bands $E'_i$ refer to the same energy bands as defined at the $K$ point if we simply change $d_z\rightarrow -d_z$. The related eigenstates are of the form $\psi'_1 = \frac{1}{\sqrt{2}}(-1,0,1,0)$, $\psi'_2=\frac{1}{\sqrt{2}}(1,0,1,0)$, $\psi'_3=\frac{1}{\sqrt{2}}(0,-1,0,1)$ and $\psi'_4=\frac{1}{\sqrt{2}}(0,1,0,1)$. To realize the topological semimetal, the important prerequisite is then to select $d_z-M<r<d_z+M$, similarly as in Eq. (\ref{HM}) for the two spheres.
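The compact derivation of these eigenenergies proceeds by noticing that the $r$ coupling is diagonal in the planes' symmetric/antisymmetric basis: writing $H({\bm k}) = h({\bf k})\otimes\mathbb{I} + r\,\mathbb{I}\otimes s_x$, with $h({\bf k}) = d_x\sigma_x + d_y\sigma_y + (\zeta d_z+M)\sigma_z$ acting on the sublattice space and $s_x$ on the planes' space, the two sectors $s_x=\pm 1$ give
\begin{equation}
E = \pm r \pm \sqrt{d_x^2+d_y^2+(\zeta d_z+M)^2},
\end{equation}
with the four sign combinations reproducing $E_1,\dots,E_4$ and making the role of the window $d_z-M<r<d_z+M$ transparent.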
Indeed, in that case $E'_1=E_1({\bf K}')$ still corresponds to the lowest-energy band, but we also have an inversion between the bands $2'$ and $3'$ in this situation such that we can redefine $E'_3=E_2({\bf K}')$ and $E'_2=E_3({\bf K}')$. Within our definitions, energies are classified such that $E_1<E_2<E_3<E_4$. At the $K'$ Dirac point there is a gap from the band $E'_3=E_2({\bf K}')$ to the Fermi energy. The occurrence of a nodal ring in the region of the $K'$ Dirac point can be seen from the fact that $E_2=E_3=0$ gives
\begin{equation}
v_F^2|{\bf p}|^2 = r^2 - (d_z-M)^2,
\end{equation}
implying then a crossing effect between these two bands at two points located around the $K'$ point. In this equation, the angle $\varphi$ can vary from $0$ to $2\pi$, which then defines a circle in the plane, of radius $|{\bf p}|_c=\sqrt{r^2-(d_z-M)^2}/v_F$. At the $K'$ Dirac point, the ground state corresponds to the two-particle wavefunction with one particle in the band of energy $E_1({\bf K}')$ and one particle in the band of energy $E'_3({\bf K}')=E_2({\bf K}')$ such that the two-particle wavefunction reads $|\psi_g\rangle = \psi_1' \psi_3' |0\rangle = \frac{1}{2}(-c_{A1}^{\dagger} + c_{A2}^{\dagger})(-c_{B1}^{\dagger} + c_{B2}^{\dagger})|0\rangle$ which can be re-written as \cite{HH}
\begin{equation}
|\psi_g\rangle = \frac{1}{2}(c^{\dagger}_{A1} c^{\dagger}_{B1} - c^{\dagger}_{A1} c_{B2}^{\dagger} - c^{\dagger}_{A2} c^{\dagger}_{B1} + c^{\dagger}_{A2} c^{\dagger}_{B2})|0\rangle.
\end{equation}
The topological nature of the system is revealed from the presence of one edge mode at zero energy in the band structure \cite{HH}. The total topological number $1$ is in agreement with the occurrence of one edge mode in the system which has a $50\%$ chance to occupy each plane. The stable co-existence of the nodal semi-metallic ring and of the edge mode for $d_z-M<r<d_z+M$ can be understood from the fact that scattering events involving the $K$ and $K'$ regions would correspond to a wavelength that is not commensurate with the lattice spacing. Including a hopping term $\gamma({\bf k})$ between the planes, as in Sec. \ref{hop}, does not modify the wavefunction at the $K$ and $K'$ Dirac points, which then reveals the stability of the nodal ring semimetal towards this perturbation. At $r=d_z-M$, we observe that the nodal ring shrinks as $E_2({\bf K}')=E_3({\bf K}')=0$ in that case. This reflects the presence of an insulating phase with total topological number $2$ when $r<d_z-M$. For $r<d_z-M$, the physics is similar to that at $r=0$ showing two planes with the same topological number $1$ for the two lowest bands. For $r>d_z+M$, the two lowest bands show $+1$ and $-1$ topological numbers as in the Kane-Mele model with asymmetric topological masses \cite{bilayerQSH}. Now, we describe the properties of the semimetal related to the two spheres' model with $C_j=\frac{1}{2}$. Here, we assume two planes $1$ and $2$ related to the two spheres.
\subsection{$C_j=\frac{1}{2}$: Topological Properties and Light}
\label{topomatter}
To show the relevance of the fractional topological numbers for this situation, we can resort to Eq. (\ref{Cjspin}) with the correspondence that $\sigma_{jz}$ measures the relative density $\hat{n}_{Bj}-\hat{n}_{Aj}$ in a plane $j$ resolved in momentum space. At the $K$ point, since the ground state is $c^{\dagger}_{B1} c^{\dagger}_{B2}|0\rangle$, this gives rise to $\langle \sigma_{jz}(0)\rangle=1$ similarly as for the two-spheres' model. At the $K'$ point, the duality with the spheres' model works as follows.
The components $c^{\dagger}_{A1} c^{\dagger}_{B1}$ and $c^{\dagger}_{A2} c^{\dagger}_{B2}$ of $|\psi_g\rangle$ give rise to $\langle \sigma_{jz}\rangle=0$ and therefore, for the calculation of $C_j$, this is equivalent to projecting $|\psi_g\rangle$ onto the reduced wavefunction $\frac{1}{\sqrt{2}}(c^{\dagger}_{A1} c^{\dagger}_{B2} + c^{\dagger}_{A2} c^{\dagger}_{B1})$ which is identical to the entangled wavefunction on the sphere at the south pole. This gives rise to $\langle \sigma_{jz}(\pi)\rangle = 0$ and therefore, from Eq. (\ref{Cjspin}), to $C_j = \frac{1}{2}\left(\langle\sigma_{jz}(0)\rangle - \langle\sigma_{jz}(\pi)\rangle\right)=\frac{1}{2}$. This result is verified by generalizing Eq. (\ref{eq2}) with the wavefunction $|\psi_g\rangle$. An important deduction from the topometry analysis in Sec. \ref{smooth} is that here we can define the function $A_{\varphi}$ to be smooth on the whole surface and also at the two Dirac points. This function is also well defined at the two crossing points since a particle would occupy different sublattices for the eigenstates $\psi_2'$ and $\psi_3'$. Therefore, we obtain the relation between Berry curvatures
\begin{equation}
A_{j\varphi}({\bf K}') = \frac{1}{2}A_{j\varphi}({\bf K}) +\frac{1}{2}A^{r=0}_{j\varphi}({\bf K}'),
\end{equation}
which is equivalent to Eq. (\ref{eq2}) on the sphere and therefore also leads to $C_j=\frac{1}{2}$ from Eq. (\ref{Aj}). Related to probes of this fractional topological number, the quantum Hall conductivity from Sec. \ref{curvature} and Eq. (\ref{J}) will reveal $\sigma_{xy}^j=\frac{1}{2}\frac{e^2}{h}$ for a plane $j$ or a spin polarization $j$. This is also in agreement with the fact that if we prepare the system at the $K'$ point in the state $|\psi_g\rangle$ and we trace on one particle, then the other particle has $\frac{1}{2}$ probability to be transported in one plane towards the $K$ Dirac point. The Hall conductivity of the total system is quantized as $\frac{e^2}{h}$ at low temperatures in the presence of bulk metallic states crossing the Fermi energy, justifying the term `topological semimetal'. We can also use the word `fractional' in the sense that each plane is characterized through a topological number $\frac{1}{2}$ and also through the dichotomy of Bloch bands into two regions, one topological and one entangled. This can be viewed as an application of quantum entanglement in band theory. From the mapping onto the two spheres' model, the ground state shows $\sum_j \left(A_{j\varphi}({\bf K}')-A_{j\varphi}({\bf K})\right)=1$ where the sum acts on the planes' basis. This is equivalent to saying that for the ground state the sum of Berry phases is $\pm 2\pi$ at the $K$ point and $0$ at the $K'$ Dirac point. The $\pm$ sign refers to the fact that we can define the orientation of the $\varphi$ angle or $\tilde{\varphi}$ angle around a Dirac point in one way or the other. We can verify this fact from the energy bands where band $1$ reveals precisely the $\pm 2\pi$ Berry phase whereas band $2$ reveals a total Berry phase of zero when summing the contributions at the $K$ and $K'$ Dirac points. For band $1$, the eigenstates at $K$ and $K'$ are adiabatically linked with those at $r=0$, implying that for band $1$ the sum of the Berry phases in the two valleys should be equal to $\pm 2\pi$. In Fig. \ref{4bandstructure}, we choose the definition of Sec. \ref{quantumphysics} with quantized $\pi$ Berry phases at each Dirac point, identically as in Ref. \cite{bilayerQSH}. For band $2$, we can observe that the eigenstates at $K$ and $K'$ involve the same sub-lattice polarization, similarly as in the non-topological phase induced by a Semenoff mass for one plane.
Therefore, for band $2$, the Berry phases at the two Dirac points are $\pm \pi$ and $\mp \pi$, respectively. In the present situation, the occurrence of these gapless states at low energy does not hinder the quantization of the quantum Hall conductivity as a result of symmetry and of the particular class of entangled wave-functions. This is an interesting fact as usually the presence of a Fermi surface can alter the quantization of the topological properties \cite{Haldane2004,KagomeAlex}. Here, we discuss the light response related to Sec. \ref{light} and show that it allows us to reveal the superposition of two geometries or two regions in the reciprocal space, one encircling a topological charge $q=1$ and the other, the entangled region, an effective topological charge $q=0$. These two regions are also defined in terms of $\sum_j A_{j\varphi}(K)=-1$ and $\sum_j A_{j\varphi}(K')=0$ where now $j$ acts on the bands' basis. For simplicity, we suppose an incoming wave along the $z$ direction perpendicular to the plane of the system. We assume an identical light-matter coupling for the two planes or two spin polarizations which can be re-written in terms of the eigenstates for the bilayer (two-planes) system similarly as in Sec. \ref{lightdipole}. At the $K$ Dirac point, since the two filled bands forming the ground state are related to the eigenenergies $E_1$ and $E_2$, the light-matter coupling for a right-handed circularly polarized light reads
\begin{equation}
\delta {\cal H}_{+} = A_0 e^{i\omega t}(|\psi_3\rangle \langle \psi_1| + |\psi_4\rangle\langle \psi_2|) +h.c.
\end{equation}
A similar representation is obtained in second quantization identifying $|\psi_ i \rangle = \psi^{\dagger}_i$ and $\langle \psi_i |=\psi_i$ with $i=1,2,3,4$ referring to the band index. Then, adapting the calculation of Eq. (\ref{short}) in time, we find a similar result for the transition probabilities with $\tilde{\omega}=\omega-E_3+E_1$ or $\tilde{\omega}=\omega-E_4+E_2$. The energies at the $K$ Dirac point are shown in Eq. (\ref{Eeigenvalues}). For inter-band transitions between bands $1$ and $3$, the prefactor one can be interpreted as $C^2=|C|=1$ with $C$ referring to the topological number of band $1$. Transitions between bands $2$ and $4$ reveal a similar prefactor one at the $K$ point, which is in agreement with the fact that bands $1$ and $2$ are characterized through the same Berry phase. Therefore, for the situation of the $K$ Dirac point, the light responses will reveal a similar structure as a topological ground state with $C=1$. Selecting the light frequency of one polarization, we can then mediate transitions from $1\rightarrow 3$ or $2\rightarrow 4$. We can proceed in the same way at the $K'$ Dirac point. Now, writing the operator $|a_1\rangle\langle b_1| + |a_2\rangle\langle b_2|$ in terms of the eigenstates $\psi_i'$, we find that, since the energy bands $E_1'$ and $E_3'$ form the ground state and the two bands at energies $E_2'$ and $E_4'$ are empty, the light-matter coupling cannot generate inter-band transitions at the $K'$ Dirac point as a result of the inversion of bands $2$ and $3$, independently of the choice of the light polarization. This is similar as if $C^2=|C|=0$ in the response. Averaging the light response on the two regions then reproduces the superposition of two topologically distinct regions in the reciprocal space in accordance with an averaged $|C|=\frac{1}{2}(1+0)$.
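In other words, denoting by $|C_{K}|=1$ and $|C_{K'}|=0$ the effective weights revealed by the inter-band transitions in the two regions, the averaged light response reads
\begin{equation}
\overline{|C|} = \frac{1}{2}\left(|C_{K}| + |C_{K'}|\right) = \frac{1}{2},
\end{equation}
in agreement with the halved conductivity $\sigma_{xy}^j=\frac{1}{2}\frac{e^2}{h}$ for each plane.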
The light response and the quantum Hall conductivity or transport then reveal complementary information on the topological entangled system's nature.
\subsection{Topological Fermi liquid, Classification and Realization in one Layer Graphene}
\label{semimetalclass}
Here, we elaborate on the properties of this nodal ring semimetal with $\mathbb{Z}_2$ symmetry re-introducing the two spin-$\frac{1}{2}$ matrices of the Kane-Mele and two-planes' models such that $\mathbfit{\sigma}$ acts on the sublattice sub-space and ${\bf s}$ on the flavor, spin or plane sub-space. This is useful to have a simple justification for the formation of this $\mathbb{Z}_2$ nodal ring semimetal. This representation also gives further insight into the possible realization of this state of matter in a mono-layer graphene system respecting the $\mathbb{Z}_2$ symmetry. The system is stable towards interaction and disorder effects and can be viewed as a symmetry-protected topological state \cite{Semimetal}. The model reads
\begin{eqnarray}
\label{Hmodel}
H &=& (\zeta d_z + M)\sigma_z\otimes \mathbb{I} + d_1 \sigma_x\otimes \mathbb{I} + d_{12}\sigma_y\otimes \mathbb{I} \\ \nonumber
&+& r\mathbb{I}\otimes s_x.
\end{eqnarray}
Here, $d_1$ and $d_{12}$ correspond to the graphene Hamiltonian in each plane as defined in Appendix \ref{timereversal}, $\zeta d_z$ to the topological induced term in each plane with $\zeta=\pm 1$ at the two Dirac points and with $d_z=3\sqrt{3} t_2$ as defined in Sec. \ref{anomalous}. We observe that $[H,\mathbb{I}\otimes s_x]=0$ such that we can classify the eigenstates in terms of $|\psi_{\pm}\rangle$ in Eq. (\ref{eigenstates}), associated to the radial magnetic field for one sphere or one plane, and of the eigenstates of $s_x=\pm 1$ for the planes' or layers' $\mathbb{Z}_2$ symmetry. An important aspect to realize this model is the form of the $M\sigma_z\otimes \mathbb{I}$ potential term. Physically, suppose general potential terms $V_{1}^a, V_{1}^b, V_2^a, V_2^b$ acting on the two planes and resolved on a given sublattice; then we have the general relation
\begin{eqnarray}
&&V_1^a \hat{n}_1^a + V_1^b \hat{n}_1^b + V_2^a \hat{n}_2^a +V_2^b \hat{n}_2^b \\ \nonumber
&=& \frac{1}{4} \left( (V_1^a +V_1^b) - (V_2^a +V_2^b)\right)\mathbb{I}\otimes s_z \\ \nonumber
&+& \frac{1}{4} \left( (V_1^a-V_1^b) + (V_2^a-V_2^b)\right)\sigma_z\otimes \mathbb{I} \\ \nonumber
&+& \frac{1}{4} \left( (V_1^a-V_1^b) - (V_2^a-V_2^b)\right)\sigma_z\otimes s_z.
\end{eqnarray}
This equality can be understood from $\hat{n}_j^i = \hat{P}_i\otimes \hat{P}_j$ with $\hat{P}_i$ the projector onto the sublattice $i$ and $\hat{P}_j$ the projector onto the plane $j$. There is also an additional term proportional to $\mathbb{I}\otimes\mathbb{I}$ which shifts the energy scale. Therefore, to realize the $\mathbb{Z}_2$ symmetry in the Hamiltonian, this requires an electric field acting along the planes such that $V_{1}^a = V_2^a$ and $V_{1}^b = V_2^b$. In this sense, there is no electric field in the transverse direction to the planes. We observe that this also requires $V^a \neq V^b$, which can in principle be realized through a modulated electric field in the direction of the planes. The $\mathbb{Z}_2$ symmetry of the nodal ring semimetal implies the form $M\sigma_z\otimes \mathbb{I}$ of the potential term, which is essential to realize a topological semimetal (if the symmetry is not respected, then a disorder term of the form $\sigma_z\otimes s_z$ would be a relevant perturbation).
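As a consistency check of this decomposition, keeping only $V_1^a$ and using $\hat{n}_1^a = \frac{1}{2}(\mathbb{I}+\sigma_z)\otimes\frac{1}{2}(\mathbb{I}+s_z)$, we recover
\begin{equation}
V_1^a\,\hat{n}_1^a = \frac{V_1^a}{4}\left(\mathbb{I}\otimes\mathbb{I} + \sigma_z\otimes\mathbb{I} + \mathbb{I}\otimes s_z + \sigma_z\otimes s_z\right),
\end{equation}
in agreement with the coefficients above, the $\mathbb{I}\otimes\mathbb{I}$ piece being the global energy shift.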
To satisfy the perfect equalities $V_{1}^a = V_2^a$ and $V_{1}^b = V_2^b$ and the $\mathbb{Z}_2$ symmetry, the model can then also be implemented with one layer (thin plane) of graphene. Suppose a graphene plane with spin-$\frac{1}{2}$ electrons such that ${\bf s}$ acts on the physical spin space. The two planes then mean the two spin polarizations of an electron along the $z$ direction, with $\uparrow$ and $\downarrow$ referring to $s_z=\pm 1$, respectively. Applying a modulated electric field or potential term in the plane would then satisfy $V_{1}^a = V_2^a$ and $V_{1}^b = V_2^b$ in this situation. In this way, the $r$ term can also be implemented as a Zeeman effect from a tunable magnetic field along the $x$ direction. Tuning the magnetic field in the plane then allows us to reach $d_z-M<r<d_z+M$ and therefore to realize the topological semimetal. The term $\sigma_z\otimes\mathbb{I}$ can be precisely engineered from a modulated electric field applied in the graphene plane or also through interaction effects with a honeycomb layer in a Mott phase or CDW phase. The particles of the CDW system sit either on $A$ or $B$ sites and act as a potential on the graphene lattice. A Coulomb interaction reads $V\left(\hat{n}_{CDW}^a\hat{n}_g^a + \hat{n}_{CDW}^b\hat{n}_g^b\right)$ with $\langle \hat{n}_{CDW}^a \rangle =1$ and $\langle \hat{n}_{CDW}^b \rangle = 0$, and therefore we have $V\hat{n}_g^a = \frac{V}{2}(1+\sigma_z\otimes \mathbb{I})$. In this case, we would assume that $r$ is sufficiently strong such that the spins of the particles are polarized along the $x$ direction and such that the two spin polarizations $\uparrow$ and $\downarrow$ see the same potential on each site. We have the identification $M=\frac{V}{2}$ with the prerequisite $d_z-M<r<d_z+M$ to realize the topological nodal ring semimetal. Then, the $\zeta d_z \sigma_z \otimes \mathbb{I}$ term can be implemented through circularly polarized light for the two spin states $\uparrow$ and $\downarrow$ equally through the protocol(s) of Sec. \ref{polarizationlight} and Sec. \ref{graphenelight}, which is realizable with current technology in graphene \cite{McIver}. A similar protocol can be implemented with atoms in optical lattices \cite{Jotzu}. Within this representation, we have
\begin{equation}
H = {\bf d}\cdot\mathbfit{\sigma}\otimes\mathbb{I}+ r\mathbb{I}\otimes s_x
\end{equation}
and, since the two terms of $H$ commute,
\begin{eqnarray}
H^2 = \left(|{\bf d}|^2 +r^2\right)\mathbb{I}\otimes\mathbb{I} +2r {\bf d}\cdot\mathbfit{\sigma}\otimes s_x
\end{eqnarray}
with here the form of the ${\bf d}$ vector
\begin{equation}
{\bf d} = (d_1,d_{12},(\zeta d_z+M))
\end{equation}
such that $|{\bf d}|^2={\bf d}\cdot {\bf d} = d_1^2 + d_{12}^2 + (\zeta d_z+M)^2$. Close to the two Dirac points, $d_1 = v_F|{\bf p}| \cos\tilde{\varphi}$ and $d_{12}=v_F |{\bf p}|\sin\tilde{\varphi}$ such that $d_1^2 + d_{12}^2 = v_F^2 |{\bf p}|^2$. To classify eigenstates, we can then introduce $|\psi_+\rangle$ and $|\psi_-\rangle$ as defined in Eq. (\ref{eigenstates}), corresponding here to energies $\pm |{\bf d}|$, and $|+\rangle_x$, $|-\rangle_x$ corresponding to eigenvalues $s_x=\pm 1$. The lowest and top energy levels correspond to $|\psi_-\rangle \otimes |-\rangle_x$ and $|\psi_+\rangle \otimes |+\rangle_x$ such that $2r {\bf d}\cdot\mathbfit{\sigma}\otimes s_x$ acts as $+2r|{\bf d}|$ on these two states.
Therefore, the energies of these two states satisfy
\begin{equation}
E^2 = (r+ |{\bf d}|)^2,
\end{equation}
with respectively
\begin{equation}
E_1 = -(r+|{\bf d}|)
\end{equation}
and
\begin{equation}
E_4 = (r+|{\bf d}|).
\end{equation}
The energies at the two Dirac points are different due to $\zeta=\pm 1$. Now, we study the occurrence of the semimetal through the two middle or intermediate bands which correspond to the two eigenstates $|\psi_-\rangle \otimes |+\rangle_x$ and $|\psi_+\rangle \otimes |-\rangle_x$ such that $2r {\bf d}\cdot\mathbfit{\sigma}\otimes s_x$ acts as $-2r|{\bf d}|$. These two energy states are then described through
\begin{equation}
E^2 = (-r+ |{\bf d}|)^2.
\end{equation}
To have a semimetal requires these two bands to meet, $E^2=0$, implying the general relation $r=|{\bf d}|=\sqrt{d_1^2+d_{12}^2+(\zeta d_z+M)^2}$. The formation of the semimetal implies that we should tune $r$ such that $d_z-M < r <d_z+M$ close to the Dirac points, which then implies that this can be satisfied only if $\zeta=-1$; in that case, the crossing condition reads $v_F^2|{\bf p}|^2 = r^2-(d_z-M)^2$, in agreement with the nodal ring equation of Sec. \ref{bilayer}. In this sense, there is no inversion symmetry ${\bf k}\rightarrow -{\bf k}$. The origin for the occurrence of the nodal ring semimetal is the plane or flavor $\mathbb{Z}_2$ $1\leftrightarrow 2$ symmetry which leads to a pair of degenerate bands in $E^2$ occurring at two points close to $K'$. We emphasize that this topological semimetal is stable in the presence of a disordered potential as long as we respect the $\mathbb{Z}_2$ symmetry $V_1^a=V_2^a$ and $V_1^b=V_2^b$, which is then satisfied if the index $i=1,2$ means spin polarization. At the $K'$ point, the eigenstate of the energy band $E_2'=E_3({\bf K}')=r-|{\bf d}|$ reads $|\psi_-\rangle\otimes|+\rangle_x$ and the eigenstate of the energy band $E_3'=E_2({\bf K}')=|{\bf d}|-r$ reads $|\psi_+\rangle\otimes|-\rangle_x$. The ground state at the $K'$ Dirac point corresponds effectively to the two-particle wavefunction such that the two lowest-energy bands are occupied. Therefore, this corresponds to the (two-particle) ground-state wavefunction
\begin{equation}
|\psi_g\rangle = \left(|\psi_-\rangle\otimes|-\rangle_x\right)\left(|\psi_+\rangle\otimes|-\rangle_x\right).
\end{equation}
Setting $\theta=\pi$ in Eq. (\ref{eigenstates}), $|\psi_+\rangle$ corresponds to a particle in sublattice $B$ and $|\psi_-\rangle$ to a particle in sublattice $A$. In the planes' subspace, we may also identify $|-\rangle_x = \frac{1}{\sqrt{2}}(|+\rangle_z - |-\rangle_z) = \frac{1}{\sqrt{2}}(|1\rangle - |2\rangle)$ corresponding to a quantum superposition of plane 1 and plane 2. For simplicity, we have fixed the relative phases entering as a gauge choice to zero. The wavefunction $|\psi_g\rangle$ is certainly highly entangled in the planes' basis and, if we take the trace on a particle, for instance on particle $2$, then particle $1$ has probability $\frac{1}{2}$ to be in plane $1$ or $2$ corresponding to sublattice $A1$ or $A2$. Similarly, if we take the trace on particle $1$, then particle $2$ also has probability $\frac{1}{2}$ to be in plane $1$ or $2$ corresponding to sublattice $B1$ or $B2$. The stability of the semimetal towards interaction effects can be verified at a mean-field level and also perturbatively. From Eq. (\ref{interactionsPhi}), we observe that the $U\phi_x$ term can be absorbed in the $r$ term corresponding to a transverse magnetic field in the implementation with graphene and the two spin polarizations (see the explicit rewriting below). From Eq. (\ref{Hmodel}), assuming that the $r$ term is real, then $\phi_y=0$.
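To make the absorption of the $\phi_x$ channel explicit, a minimal rewriting, assuming the same spin quantization axes as above, is
\begin{equation}
r\,\mathbb{I}\otimes s_x + U\phi_x\,\mathbb{I}\otimes s_x = \left(r+U\phi_x\right)\mathbb{I}\otimes s_x,
\end{equation}
such that the interaction simply renormalizes $r\rightarrow r+U\phi_x$ and the band structure keeps the same form.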
The system is also stable towards a term $U\phi_z\mathbb{I}\otimes s_z$. To lowest order in the perturbation theory, this term is irrelevant as an effective hopping term between bands $2$ and $3$ at the crossing points would require an operator of the form $\sigma^+\otimes(s_y-is_z)$ or equivalently $\sigma^-\otimes(s_y-is_z)$. Here, the operator $s_y$ does not occur to lowest order in the Hamiltonian and is not generated to higher orders in perturbation theory. Increasing the $U$ term is then equivalent to smoothly rotating the spin eigenstates of the operator $r \mathbb{I}\otimes s_x+U\phi_z \mathbb{I}\otimes s_z$, similarly as for the eigenstates (\ref{eigenstates}), using now the basis $|+\rangle_x$ and $|-\rangle_x$. The stability of the semimetal towards interaction effects can also be understood from the fact that the energy spectrum evolves quadratically close to the two crossing points (when we expand the $d_z$ term from Eq. (\ref{dvector})), reflecting a topological Fermi liquid and a finite density of states at zero energy. This behavior is then distinct from interaction effects in graphene \cite{polarizationgraphene}.
\section{Topological Planes or Spheres in a Cube, Geometry and Infinite Series}
\label{Planks}
Stabilizing three-dimensional quantum Hall phases and topological insulating phases from coupled planes \cite{Orth,Halperin3D,Berneviggraphite} has attracted attention in the community. Very recently, the possibility of a fractional second-order topological insulator from an assemblage of wires in a cube was also identified \cite{JelenaWiresFTI}. Here, we study a network of $j=0,1,\dots,N$ planes ($N+1$ planes) described through a Haldane model \cite{Haldane} with alternating $(-1)^j t_2 e^{i\phi}$ second-nearest-neighbor hopping terms in a cube. Each plane is also equivalent to a topological sphere. We suppose the limit of weakly coupled planes with a hopping term or interaction (much) smaller than the topological energy gap in the band structure, such that the two-dimensional band structure in each plane (sphere) is stable.
\subsection{Even-Odd Effect}
The topological number in each plane can be defined through an integration on the Brillouin zone according to previous definitions
\begin{equation}
C_j = \frac{1}{2\pi}\iint d^2 k{\bf F}_j\cdot{\bf e}_z
\end{equation}
with $d^2 k =dk_x d k_y$ and ${\bf e}_z$ the unit vector in the vertical direction. To study geometrical aspects in a three-dimensional space, we re-organize the planes in a cube structure in the $(k_x,k_y,z)$ space with $k_x$ and $k_y$ defined in the reciprocal space such that the integral of the Berry curvature for each square plane agrees with $C_j$ defined above on a torus in Eq. (\ref{pumpingC}). Here, $z$ represents the vertical coordinate in real space measuring the number of planes. Suppose we have a fixed number $N+1$ of planes in the cube; then we can define an even-odd effect for such a topological system, in agreement with the Green and divergence theorems. The Berry curvature in a plane takes the form
\begin{equation}
\label{Fz}
{\bf F}_z = (-1)^j F_{k_x k_y}\delta_{z j} {\bf e}_z = F_z{\bf e}_z
\end{equation}
with the same Berry curvature $F_{k_x k_y}$ for each plane. Equivalently, we can measure $C_j$ and associated observables in each plane, following Sec. \ref{Observables}, from $F_{p_x p_y}(0)\pm F_{\pm p_x p_y}(\pi)$ on the sphere with $p_x$ and $p_y$ measuring deviations of the momentum from the Dirac points.
From the divergence theorem, we can define a non-local topological number from the top and bottom surfaces
\begin{eqnarray}
\label{number}
&&C_{top} - C_{bottom} = \frac{1}{2\pi}\int d k_x dk_y \int_{z_{bottom}}^{z_{top}} \frac{\partial F_z}{\partial z}dz \\ \nonumber
&=& \frac{1}{2\pi}\int d k_x \int \left( F_z(k_x, k_y , z_{top}) - F_z(k_x,k_y,z_{bottom})\right) dk_y.
\end{eqnarray}
Here, top and bottom refer to the upper and lower horizontal faces. Also, we can verify Eq. (\ref{number}) in the limit of a dilute number of planes (planks) by calculating $\frac{\partial}{\partial z} [(-1)^j \delta_{zj}] = (-1)^j \delta'_{zj}$. The vertical surfaces of the cube give zero as we assume a system where ${\bf F}=F_z {\bf e}_z$ is perpendicular to the normal vector for these faces. In this sense, although the divergence theorem refers to an integration on a closed surface encircling the volume, here $C_{top}-C_{bottom}$ can be defined simply from the horizontal boundary faces of the cube. This is equivalent to
\begin{equation}
C_{top} - C_{bottom} = C((-1)^N - 1)
\end{equation}
with
\begin{equation}
C = \frac{1}{2\pi}\iint dk_x dk_y F_{k_x k_y}=1.
\end{equation}
Here, the bottom surface corresponds to $j=0$ such that $C_{top}-C_{bottom}$ takes alternatively the values $-2$ and $0$ when $N\in \mathbb{N}$. In addition, we can define the total conductivity in response to an electric field for each plane as in Sec. \ref{curvature} such that
\begin{equation}
\label{planessigmaxy}
\sigma_{xy} = \frac{e^2}{h}\sum_{j=0}^N (-1)^j.
\end{equation}
For $N$ odd or an even number $(N+1)$ of planes, the top and bottom surfaces can develop a $\mathbb{Z}_2$ spin Chern number such that, with the present definition of the $t_2$ term, $C_{top}-C_{bottom}=-2$ similarly as for the Kane-Mele model, and $\sigma_{xy}=0$ \cite{KaneMele1}. The $\mathbb{Z}_2$ symmetry $C_{top}-C_{bottom}=\pm 2$ here characterizes the parity symmetry with respect to the center (middle) of the cube corresponding to modifying $z_{bottom}\leftrightarrow z_{top}$. Since $\sigma_{xy}=0$, the system preserves time-reversal symmetry. The two horizontal facets of the cube develop edge modes moving counter-clockwise and in this way we realize a $\mathbb{Z}_2$ topological insulator from the top and bottom surfaces. For $N$ even or an odd number $(N+1)$ of planes, $C_{top}-C_{bottom} = 0$. Since we start the counting of the planes at $j=0$, the total conductivity agrees with that of the quantum Hall effect $\sigma_{xy}=\frac{e^2}{h}$. Therefore, in the limit of a dilute number of planes we observe an even/odd effect (referring to the number $N$) where the system behaves alternatively as an effective two-dimensional quantum Hall and a quantum spin Hall system.
\subsection{Thermodynamical Limit and Ramanujan Series}
\label{onehalfcharge}
Here, we address the thermodynamic limit of Eq. (\ref{planessigmaxy}) with $N\rightarrow +\infty$ in relation to the Ramanujan alternating series $({\cal R})$, re-written as
\begin{equation}
\label{infiniteseries}
\lim_{\epsilon\rightarrow 0} \sum_{j=0}^{+\infty} (-1)^ j (1-\epsilon)^j = \lim_{\epsilon\rightarrow 0}\frac{1}{1+(1-\epsilon)} = \frac{1}{2}.
\end{equation}
The mathematical regularization is here defined in the sense of Abel simply through a term $(1-\epsilon)^j<1$ with $\epsilon$ being any infinitesimal number. The $\frac{1}{2}$ can be understood mathematically as follows. The Ramanujan series corresponds to a string of numbers $S=1-1+1-1+\dots$ such that $S=1-S$ in the thermodynamic limit, implying that $S=\frac{1}{2}$.
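As a complementary way to see the $\frac{1}{2}$, the partial sums of the alternating series read
\begin{equation}
S_N = \sum_{j=0}^{N}(-1)^j = \frac{1+(-1)^N}{2},
\end{equation}
oscillating between $1$ ($N$ even) and $0$ ($N$ odd), such that the Ces\`aro mean of the $S_N$ converges to $\frac{1}{2}$, in agreement with the Abel regularization of Eq. (\ref{infiniteseries}) and with the even/odd effect above.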
Some questions then arise: can we measure a halved conductivity through Eq. (\ref{planessigmaxy}) in the limit of an infinite number of planes? Is there a relation with the sphere model? There are certainly various ways to address the thermodynamical limit in experiments through a staircase of assembled planes. The infinite limit may be reached here by simply stating that there is no difference between even and odd. From the quantum Hall conductivity, the $\frac{1}{2}$ may then be seen as a superposition of two regions, similarly as in a two-spheres' model with respectively a topological charge $1$ or a topological charge $0$ on the surface. In the bulk, it is as if there is a screening effect between two alternating planes such that a pair $+ -$ (with $\pm$ referring to the parity of the number $j$ in the factor $(-1)^j$ in Eq. (\ref{infiniteseries})) of two alternating planes does not modify the quantum Hall conductivity. A halved quantum number can then occur from a boundary such that $|C_{boundary}|=\frac{1}{2}(1+0)$. It is also topologically equivalent to one Dirac point. In Sec. \ref{planks}, we study in detail one specific realization of this $\frac{1}{2}$ thermodynamical limit related to the sphere model. In Appendix \ref{GeometryCube}, we also verify that to address the thermodynamical limit from the geometry we can assume an infinite system $z\in [0;+\infty[$ with a Berry curvature $F_z$ being now a continuous function along the $z$-axis,
\begin{equation}
\label{Fzcontinuous}
F_z=e^{-i\pi z} F_{k_x k_y}\theta(z),
\end{equation}
where, as above, $F_{k_x k_y}$ does not depend on the $z$ direction. From Eq. (\ref{number}), we can then verify that in this situation the Heaviside step function, with the convention $\theta(0)=\frac{1}{2}$, gives a $\frac{1}{2}$ precisely at the boundary $z=0$ such that $F_z(k_x,k_y,0)=\frac{1}{2}F_{k_x k_y}$; integrating $z\in[0;+\infty[$ is equivalent to saying that there is no outer vertical surface in the divergence theorem, such that effectively $F_z(top)=0$. In this case, the geometry would also predict the occurrence of half quantum numbers from the particular mathematical form of the Berry curvature when approaching $z=0$. In this sense, a continuous version of Eq. (\ref{number}) with $z\in [0;+\infty[$ can also reveal a $\frac{1}{2}$ topological number on a surface. In the next Sec. \ref{LightSeries}, we aim to relate the $\theta(z)$ function behavior in Eq. (\ref{Fzcontinuous}), from the point of view of a plane at $z=0$, to the light response of the effectively infinite system through series and the Riemann zeta function $\zeta(s)$. When studying the light response of the $z=0$ plane with topological number $C=1$, the proximity effect with the infinite number of planes will act precisely as if we would renormalize $C=1$ into $1+\zeta(0)=\frac{1}{2}$ at the boundary. This analysis shows that it is indeed possible to predict one-half quantum numbers with specific realizations or physical interpretations of the thermodynamical limit. In Sec. \ref{3DQHE}, we relate this physics to surfaces of three-dimensional topological insulators which are also characterized through a similar $\frac{1}{2}$ quantum number.
\subsection{Transport and Light from Infinite Series}
\label{LightSeries}
Before addressing a specific physical application, we show that the $\frac{1}{2}$ topological number naturally occurs in observables associated to the resummation of these infinite series.
Suppose we apply an electric field along the planes with a small gradient in the vertical direction, ${\bf E}={\bf E}_0 (1-\epsilon z)$; then, since the $\epsilon$ term is independent of $k_x$ and $k_y$, we can reproduce the calculation of Sec. \ref{curvature} in a given plane. Using Eqs. (\ref{DeltaP}) and (\ref{chargeE}), we obtain, to first order in $\epsilon$, \begin{equation} \sigma_{xy}^j = \frac{e^2}{h} (-1)^j (1-\epsilon j) \simeq \frac{e^2}{h} (-1)^j (1-\epsilon)^j. \end{equation} Summing all the currents, this protocol measures $\sigma_{xy} = \sum_{j=0}^{+\infty} \sigma_{xy}^j = \frac{1}{2}\frac{e^2}{h}$, revealing an effective topological number $C_{eff}=\frac{1}{2}$. Here, we describe the light response in the situation of circular polarizations and generalize the protocols of Sec. \ref{light}. For $j=2m+1$ (odd), if we assume an identical vector potential in each plane, the right-handed (left-handed) circular light will measure $(*)=\sum_{j=2m+1=1}^{+\infty} C_j^2 = 1+1+1+1 +1+... $ from the $K$ $(K')$ Dirac point related to the inter-band transition probabilities of Eq. (\ref{density}) in time. Similarly, in this situation, the right-handed (left-handed) circularly polarized light will measure the other planes with $j=2m$ even from the $K'$ $(K)$ Dirac point, $(**)=\sum_{j=2m=0}^{+\infty} C_j^2 = 1+1+1+1+1+... $ For the dilute limit of planes, each light polarization can then reveal the staggered structure of the topological mass in the transverse direction, or the number of planes with a positive topological mass $+m$ at the $K$ or $K'$ Dirac point. In the thermodynamic limit, if we add the two responses $(*)+(**)$ (supposing that the light intensity can remain perfectly symmetric in all the planes), the infinite sum would then diverge in the usual sense. On the other hand, similarly to the protocol for the conductivity, we can regularize the sum as follows. Suppose that we shine light from the boundary plane at $j=z=0$ with an amplitude $A_0$ for the vector potential and that the amplitude of the vector potential now smoothly evolves with a power law in the bulk of the system such that $A_0\rightarrow A_0\frac{1}{{z}^{s/2}}$ if $z\geq 1$ and $s\geq 0$. In this case, the total light response can be written as \begin{equation} \label{S} C_0^2 + \sum_{j=1}^{+\infty} C_j^2 \frac{1}{j^s} = 1 + \zeta(s), \end{equation} with $\zeta(s)$ being the Riemann zeta function. In the `ideal' limit where the planes would interact quasi-symmetrically with light, this is as if we fix $s\rightarrow 0$ and re-interpret the series for even values of $s$ (such that $s/2$ is an integer in the power-law decay of $A_0(z)$) \begin{equation} \zeta(s=2n) = \frac{(-1)^{n+1} B_{2n}(2\pi)^{2n}}{2\,(2n)!}. \end{equation} In that sense, $\zeta(0)$ is defined through the Bernoulli number $B_0=+1$ and $\zeta(0)=-\frac{1}{2}$. The situation is then as if light would reveal \begin{equation} \sum_{j=0}^{+\infty} C_j^2 = \sum_{j=0}^{+\infty} |C_j| = \frac{1}{2}. \end{equation} The re-interpretation of $C_j^2=|C_j|$ comes from the definition of the photo-induced currents in each plane in Eq. (\ref{photocurrents}). Eq. (\ref{S}) can also be viewed as a realization of $S=1-S=\frac{1}{2}$ for the Ramanujan series. The resummation of the physical responses associated to the planes for $z\geq 1$ gives $-\frac{1}{2}$ (in the thermodynamical sense) in the ideal situation where light would equally couple to each plane.
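The analytic continuation underlying Eq. (\ref{S}) can be checked in a few lines of Python (a minimal sketch relying on the standard \texttt{mpmath} library):
\begin{verbatim}
from mpmath import zeta, bernoulli, pi, factorial

# zeta(0) = -1/2, so the regularized response 1 + zeta(0) = 1/2.
print(zeta(0), 1 + zeta(0))

# Consistency with the even-s Bernoulli formula
# zeta(2n) = (-1)^(n+1) B_2n (2 pi)^(2n) / (2 (2n)!).
for n in [1, 2, 3]:
    rhs = (-1) ** (n + 1) * bernoulli(2 * n) * (2 * pi) ** (2 * n) \
          / (2 * factorial(2 * n))
    print(n, zeta(2 * n), rhs)
\end{verbatim}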
That is to say, through this proximity effect the plane at $z=0$ then reveals an effective $1+\zeta(0)=\frac{1}{2}$ response to light, similarly to a $\theta(z)$ function at the boundary $z=0$. Eq. (\ref{S}) may suggest possible applications for similar assemblages of integer quantum Hall planes.

\begin{center} \begin{figure}[ht] \includegraphics[width=0.5\textwidth]{Plankcartoon} \caption{Two cubes with an additional plank: seen from the top surface, the system $S_1$ or $S_2$ is in a superposition of an even and an odd number of planes. For the top yellow layer, a measure of the pumped current $I_1$ or $I_2$ is assumed for half of the surface, involving precisely one additional Dirac point for $S_1$ or $S_2$.} \label{Plankhalf} \end{figure} \end{center}

\subsection{Planks, Geometry and Spheres} \label{planks} Here, we discuss a specific implementation of the thermodynamical limit related to the sphere model. We prepare a cube (system $S_1$) with $j=0,...,N$, corresponding then to $(N+1)$ planes or planks, as described above, and then prepare another identical cube (system $S_2$) with the same number of planes or planks. We assume that there is a thin insulator (in purple) between the two cubes. Now, we place another `yellow' plane in a slightly different way (see Fig. \ref{Plankhalf}) such that from the point of view of $S_1$ and $S_2$ the last plane is equally in a superposition of $j=N$ and $j=N+1$. The yellow plane with $j=N+1$ satisfying Eq. (\ref{Fz}) is then equally shared between the two systems $S_1$ and $S_2$. It is as if $S_1$ has one additional (massive) Dirac point and $S_2$ also has one additional Dirac point in the reciprocal space. From the point of view of the $j=(N+1)$ sphere related to the yellow $j=(N+1)$ plane, the topological charge in Fig. \ref{Plankhalf} is equally encircled by the north hemisphere associated to $S_1$ and the south hemisphere associated to $S_2$. We can then understand the $\frac{1}{2}$ of Eq. (\ref{infiniteseries}) through the sphere in Fig. \ref{Plankhalf} associated to the yellow plane and quantum transport (see Sec. \ref{ParsevalPlancherel}). The north pole corresponds to the Dirac point in $S_1$ and the south pole corresponds to the Dirac point in $S_2$. Here, $S_1$ and $S_2$ correspond to the two hemispheres with the interface precisely at $\theta_c=\frac{\pi}{2}$. We can then describe the transport similarly as in Sec. \ref{ParsevalPlancherel} from the point of view of a charge $e$ moving from the north pole to the equatorial plane along a path in $S_1$. This produces the perpendicular pumped current shown in Fig. \ref{Plankhalf}, which is equal to \begin{equation} I_1 = \frac{e}{t} A'_{\varphi}\left(\theta=\frac{\pi}{2}^{-}\right), \end{equation} with $A'_{\varphi}(\theta=\frac{\pi}{2}^-)=\frac{|C_{N+1}|}{2}$ from the general results on one sphere in Sec. \ref{spin1/2}. The time $t$ corresponds to the time to travel from each pole to the equatorial plane for a charge. Similarly, we can evaluate the perpendicular pumped current in $S_2$ associated to a charge $-e$ moving in the opposite direction to the electric field towards the equatorial plane, resulting in the pumped perpendicular current of Fig. \ref{Plankhalf} \begin{equation} I_2=-\frac{e}{t}A'_{\varphi}\left(\theta=\frac{\pi}{2}^+\right)=-\frac{e}{t}\frac{|C_{N+1}|}{2}. \end{equation} The presence of one additional Dirac point then leads to an additional $\frac{1}{2}$ transverse pumped current in $S_1$ and $S_2$, respectively.
This is as if we measured the response to circularly polarized light at the $M$ point for the last yellow plane only; see Secs. \ref{lightdipole} and \ref{graphenelight}.

\subsection{Topological Insulators in three dimensions, Quantum Hall effect on a surface and $\theta$ in Electrodynamics} \label{3DQHE} Here, we show a relation between the formation of $C=\frac{1}{2}$ in the two-spheres' model and the top/bottom surfaces of three-dimensional topological insulators. The bulk of the system is described through a three-dimensional band structure which develops a gap as a result of spin-orbit coupling, for instance. The physics of these systems is also related to the axion physics or the $\theta$ term in electrodynamics \cite{SekineNomura}. A three-dimensional topological insulator respects time-reversal symmetry \cite{RMPColloquium}. From electrodynamics, the magnetic field is odd under $t\rightarrow -t$ and the electric field is even, which allows a specific term in the Lagrangian of the system of the form $\frac{\theta e^2}{4\pi^2 \hbar c}{\bf E}\cdot{\bf B}$ with $\theta=\pi$; this term preserves time-reversal symmetry since $\theta=\pi$ and $\theta=-\pi$ are equivalent modulo $2\pi$. This term has in fact been derived on microscopic grounds for specific materials such as Bi$_2$Se$_3$ around the $\Gamma$ point from Fujikawa's method \cite{SekineNomura}. This is also related to topological quantum field theories in a general sense \cite{QiZhang,Qi}. In this case, on the top and bottom surfaces of one cube the system will develop a metallic state described through one Dirac cone. Assuming we can turn the semi-metallic state into a topologically non-trivial insulator, for instance with magnetic dopants or through the quantum Hall effect developing on the top and bottom surfaces, as realized experimentally \cite{Xu,Yoshimi}, then the occurrence of the magneto-electric effect can also be understood within the same geometrical foundations as in Sec. \ref{planks}. On the top surface, from the reciprocal space, the system is described similarly to a spin-$\frac{1}{2}$ with a magnetic field ${\bf d}({\bf k})=(v_F k_y, -v_F k_x, m)$, with the mass $m$ induced from a magnetic proximity effect at the $\Gamma$ point. Since we have only one Dirac point, the topological state of the top surface is similar to the one of the north hemisphere associated to the $S_1$ sub-system in the description above. The topological properties of this surface are similar to those for the two-spheres model with $C_j=\frac{1}{2}$ \cite{HH} in Sec. \ref{fractionaltopology}, in the sense that the topological properties are described by a hemisphere, or one pole only, together with the mirror disk in the equatorial plane. The north pole now corresponds to a Dirac point centered at $\Gamma$. We have the identification $(v_F k_y, - v_F k_x, m)=(-d\sin\theta\sin\tilde{\varphi},d\sin\theta\cos\tilde{\varphi},m)$ with $\varphi = \tilde{\varphi}+\frac{\pi}{2}$ compared to Eq. (\ref{correspondence}). The system is topologically equivalent to one hemisphere or half a unit sphere, similarly to a `meron' \cite{QiZhang}, because for wave-vectors sufficiently distant from the $\Gamma$ point it is as if the polar angle is $\frac{\pi}{2}$. Applying an electric field similarly as in Fig. \ref{Plankhalf}, the pumped transverse current from the north hemisphere then corresponds to $I_1$, associated with a halved quantum Hall conductivity. The halved quantum Hall conductivity can also be understood from the local Eq.
(\ref{F}) in the case of one Dirac point, reproducing the halved quantized response $\sigma_{xy}=\frac{e^2}{h}\frac{1}{2} \hbox{sgn}(m)$. It is interesting to mention the possibility of creating merons on the surface states of three-dimensional topological insulators from superlattice effects and spin-orbit coupling \cite{Cano}. As found in Ref. \cite{Qi}, the $\frac{1}{2}$ charge can be precisely thought of as a domain wall between a non-trivial three-dimensional topological insulator and the vacuum, described through a $\theta=0$ coefficient in the ${\bf E}\cdot{\bf B}$ term.

\section{Applications to Superconductivity and Majorana Fermions} \label{further} Here, we show the applicability of the formalism for other systems such as topological superconductors in the Nambu basis related to Majorana fermions, which are their own antiparticles \cite{WilczekMajorana,Wilczekclass}. The quest for Majorana fermions has engendered a cross-fertilization between areas such as nuclear physics, dark matter, neutrinos and condensed-matter physics \cite{ElliottFranz,Alicea,Beenakker,SatoAndo} with potential applications in quantum information and computing \cite{HalperinMajorana}. In condensed-matter physics, it is perhaps important to recall early applications of Majorana fermions \cite{EmeryKivelson,Clarke,SenguptaGeorges,Coleman,KarynMajoranadwave} related to quantum impurity models and solutions of the two-channel Kondo model \cite{NozieresBlandin,AffleckLudwig,AndreiBetheAnsatz}. This gives rise to a $\frac{1}{2}\ln 2$ thermodynamical boundary entropy related to the physics of heavy-fermion quantum materials \cite{CoxJarrell} and signatures related to quantum information probes \cite{Alkurtass}. The goal here is to show the application of the quantum topometry approach related to physical protocols. For two-dimensional topological lattice models, the correspondence with the sphere is such that $(k_x,k_y)\rightarrow (\theta,\varphi)$ and for the one-dimensional topological superconductor, we may define $(k,\tilde{\varphi})\rightarrow (\theta,\varphi)$ with the phase of the superfluid reservoir now corresponding to the azimuthal angle. Since all the meridian lines are equivalent on the Bloch sphere, this preserves the gauge invariance of the topological response with respect to fixing the macroscopic phase of the superfluid system.

\subsection{Topological Superconducting Wire and Bloch Sphere} \label{pwavewire} We begin with a one-dimensional $p$-wave superconductor described through the Kitaev Hamiltonian \cite{Kitaev}: \begin{equation} \label{SCwire} H = \sum_i \left(-t c^{\dagger}_i c_{i+1}+ \Delta c^{\dagger}_i c^{\dagger}_{i+1}+h.c.\right) - \mu \sum_i c^{\dagger}_i c_i. \end{equation} This model has been suggested for the physics of nanowires with Rashba spin-orbit coupling, related to important experimental progress to reveal Majorana particles \cite{Oreg,Lutchyn,Aasen,Delft,Pikulin,MicrosoftQuantum}, in particular through $dI/dV$ measurements. This model is also realized in superconducting circuits \cite{GoogleMajorana}. Other recent applications of Majorana fermions in one dimension include carbon nanotubes \cite{Desjardins} and ferromagnetic atomic chains \cite{Yazdani}. On the lattice, the fermionic operators written in second quantization anticommute such that $\{c_i, c^{\dagger}_i\}=1$.
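As a minimal numerical illustration of Eq. (\ref{SCwire}) (a sketch with our own helper \texttt{bdg\_spectrum}, not a library routine), one can diagonalize the open chain in the Bogoliubov-de Gennes form and observe the two (near-)zero-energy end modes discussed below within the topological phase $|\mu|<2t$:
\begin{verbatim}
import numpy as np

def bdg_spectrum(N=60, t=1.0, Delta=1.0, mu=0.0):
    # Bogoliubov-de Gennes matrix of the open Kitaev chain in the
    # basis (c_1 .. c_N, c_1^dag .. c_N^dag).
    h = -mu * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    d = Delta * (np.eye(N, k=1) - np.eye(N, k=-1))  # antisymmetric pairing
    H = np.block([[h, d], [d.T, -h.T]]) / 2
    return np.sort(np.abs(np.linalg.eigvalsh(H)))

print(bdg_spectrum(mu=0.0)[:3])  # two zero modes: topological phase
print(bdg_spectrum(mu=3.0)[:3])  # gapped: strong-paired phase
\end{verbatim}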
The $p_x$-wave symmetry is reflected through the fact that, under the parity symmetry with respect to a bond $[i;i+1]$, the BCS pairing superconducting term is modified as $\Delta\rightarrow -\Delta$. The presence of a $\mathbb{Z}_2$ symmetry in the system can be understood from the invariance under the local change $c_i\rightarrow -c_i$ and similarly $c_i^{\dagger}\rightarrow -c_i^{\dagger}$. To show the topological aspects in this system, it is judicious to introduce Majorana fermions at each site, $\eta_j = \frac{1}{\sqrt{2}}(c_j+c^{\dagger}_j)$ and $\alpha_j =\frac{1}{\sqrt{2}i}(c_j^{\dagger}-c_j)$, such that $\{\eta_j,\eta_j\}=1=\{\alpha_j , \alpha_j\}$ and $2i\eta_j\alpha_j = 1-2c^{\dagger}_j c_j$. In this way, this preserves the anticommutation relations for the fermions, $\{c_i,c_i^{\dagger}\}=1$ and $\{c_i,c_i\}=0$. If we suppose the special value $t=\Delta$ and the half-filled situation with $\mu=0$, the Hamiltonian can be written simply as \begin{equation} H = \sum_i 2i t \alpha_{i+1} \eta_i . \end{equation} The superconducting phase corresponds to the pairing of Majorana fermions on nearest-neighboring sites such that at the two boundaries the system reveals two zero-energy modes characterized through the operators $\alpha_0$ and $\eta_N$, where $N$ represents the last site of the one-dimensional wire and $0$ the first site \cite{Kitaev}. These Majorana modes are usually robust to a small local potential distortion in the sense that the Majorana fermions $\eta_0$ and $\alpha_N$ are gapped, such that the density operator $c^{\dagger}_i c_i$ cannot be a relevant operator in the long-wavelength limit. This model for $t=\Delta$ is equivalent to the transverse field Ising model \cite{Pfeuty,IsingdeGennes} from the Jordan-Wigner transformation, such that mapping fermions onto spins is also equivalent to \begin{equation} H = \sum_i \left(J_{\perp} S_{ix} S_{i+1 x} + J_z S_{iz}\right), \end{equation} with $J_z=\frac{\mu}{2}$ and $J_{\perp}=t$, implying two quantum phase transitions at $\mu=\mp 2t$. The quantum phase transitions refer to the occurrence of a charge density wave on the lattice or of a strong-paired phase \cite{Alicea}. Here, the chemical potential drives the physics, which then favors the pairing of two Majorana fermions on a site such that there are no free Majorana fermions at the boundary. The topological phase is referred to as a weak-paired phase in the literature \cite{Alicea}. To have a further understanding of the topological properties of the system and of the role of the chemical potential $\mu$, we will navigate on the Bloch sphere from the reciprocal space. Using the Fourier representation \begin{equation} c_i = \frac{1}{\sqrt{L}}\sum_k e^{-i k x_i} c_k, \end{equation} with $L=Na$ being the length of the wire and $a$ the lattice spacing, the Hamiltonian $H=\sum_{k\in [-\frac{\pi}{a};+\frac{\pi}{a}]} H(k)$ reads \begin{equation} H = \sum_k \epsilon(k) c^{\dagger}_k c_k + \Delta e^{i\tilde{\varphi}} e^{+i ka} c^{\dagger}_{-k} c^{\dagger}_{+k} +h.c. \end{equation} Here, $\tilde{\varphi}$ represents the macroscopic superfluid phase attached to $\Delta$ or to the superconducting reservoir, such that we have shifted $\Delta\rightarrow \Delta e^{i\tilde{\varphi}}$ with $\Delta$ taken to be real hereafter. Then, $\epsilon(k)=-2t\cos(k a) -\mu=\epsilon(-k)$.
We can also act with the parity transformation $k\rightarrow -k$ such that \begin{eqnarray} H &=& \sum_{-k} \epsilon(k) c^{\dagger}_{-k} c_{-k} - \Delta e^{i\tilde{\varphi}}e^{-ika} c^{\dagger}_{-k} c^{\dagger}_{k} +h.c. \end{eqnarray} For one specific value of $k$: \begin{eqnarray} H(k) &=& \frac{1}{2}(\epsilon(k)c^{\dagger}_k c_k-\epsilon(k)c_{-k}c^{\dagger}_{-k}) \\ \nonumber &+& i\Delta e^{i\tilde{\varphi}} \sin(ka)c^{\dagger}_{-k}c_k^{\dagger}+h.c. \end{eqnarray} In the Nambu basis $(c_k , c^{\dagger}_{-k})^T$, this is equivalent to \begin{eqnarray} \label{matrix} H(k) = \frac{1}{2}\left( \begin{array}{cc} \epsilon(k) & 2i\Delta e^{-i\tilde{\varphi}}\sin(ka) \\ -2i\Delta e^{i\tilde{\varphi}} \sin(ka) & -\epsilon(k) \\ \end{array} \right). \end{eqnarray} From the identification with the spin-$\frac{1}{2}$ \begin{eqnarray} H(k) = \left( \begin{array}{cc} - d\cos\theta -\frac{m}{2} & -d\sin\theta e^{-i\varphi} \\ -d\sin\theta e^{i\varphi} & +d\cos\theta + \frac{m}{2} \\ \end{array} \right) \label{spinidentification} \end{eqnarray} for $t=\Delta$, this gives rise to \begin{eqnarray} \theta &=& \theta_k = ka \\ \nonumber \varphi &=& \varphi_k = \tilde{\varphi} +\frac{\pi}{2} \end{eqnarray} with $m$ playing the role of the chemical potential and $t=\Delta=d$. We effectively have a two-dimensional map, viewing the superfluid phase of the reservoir as an independent variable which can be tuned through interferometry and a SQUID geometry. For $m=0$, the spin-$\frac{1}{2}$ eigenstates read \begin{eqnarray} |\psi_+\rangle &=& \cos\frac{\theta}{2} \left( \begin{array}{c} 1 \\ 0 \\ \end{array}\right) + i \sin\frac{\theta}{2}e^{i\tilde{\varphi}} \left( \begin{array}{c} 0 \\ 1 \\ \end{array}\right) \\ \nonumber |\psi_-\rangle &=& \sin\frac{\theta}{2} \left( \begin{array}{c} 1 \\ 0 \\ \end{array}\right) -i\cos\frac{\theta}{2}e^{i\tilde{\varphi}} \left( \begin{array}{c} 0 \\ 1 \\ \end{array}\right). \end{eqnarray} For the superconducting wire, eigenstates can be defined through the quasiparticle operators \begin{eqnarray} \eta_k &=& \cos\frac{\theta}{2} c_k + i e^{i\tilde{\varphi}}\sin\frac{\theta}{2} c_{-k}^{\dagger}. \label{eta} \end{eqnarray} We can verify $\{ \eta_k , \eta_k^{\dagger} \}=1$. The $|BCS\rangle$ wavefunction, where $|BCS\rangle=\prod_k |BCS\rangle_k$, takes the form \begin{equation} |BCS\rangle = R\prod_k \left(\cos\frac{\theta}{2} + i \sin\frac{\theta}{2} e^{i\tilde{\varphi}}c^{\dagger}_k c^{\dagger}_{-k}\right)|0\rangle, \end{equation} with $R=(\delta_{\mu<-2t} +(1-\delta_{\mu<-2t})c_0^{\dagger})(\delta_{\mu<2t} +(1-\delta_{\mu<2t})c^{\dagger}_{\pi})$ such that for $\theta=k=0$ we have $|BCS\rangle = c^{\dagger}_0|0\rangle$ and for $\theta=ka=\pi$ we have $|BCS\rangle = |0\rangle$ within the topological phase $-2t<\mu<2t$. The Hamiltonian can be diagonalised as $H(k) = E(k) \eta_k^{\dagger} \eta_k$ with $E(k)=\sqrt{\epsilon(k)^2 +4\Delta^2 \sin^2(ka)}$, and the BCS ground state corresponds to a vacuum of quasiparticles, $\eta_k |BCS\rangle=0$. Since the pairing function proportional to $\sin(ka)$ goes to zero at $k=0$ and $ka=\pi$, we can smoothly deform $\Delta\neq t$ accordingly such that a quantum phase transition should only involve the ratio $\frac{\mu}{2t}$. The occurrence of Majorana fermions can be understood from general relations related to the particle-hole symmetry $\omega_x H \omega_x = - H^{\dagger}$ with the charge conjugation matrix \begin{eqnarray} \omega_x = \sigma_x = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right).
\end{eqnarray} Usually, for a BCS Hamiltonian, this implies specific relations between the kinetic $T=T^{\dagger}$ and pairing $\Delta = - \Delta^{T}$ operators. This requires that the pairing term is odd when changing $k\rightarrow -k$. For one-dimensional spin-polarized superconductors, this prerequisite is satisfied through the fact that the most dominant pairing term occurs between nearest neighbors. In addition to particle-hole symmetry or charge conjugation, the system also possesses time-reversal symmetry in a simple way through the fact that the $c$ fermion operators are invariant under time-reversal symmetry, defined simply through the transformation $i\rightarrow -i$ (see Appendix \ref{timereversal}). The two Majorana fermions $\alpha$ and $\eta$ are precisely transformed accordingly as $\eta\rightarrow \eta$ and $\alpha\rightarrow -\alpha$. Here, one may wonder about the role of the superfluid phase, which could produce a complex pairing term in Eq. (\ref{SCwire}). This phase refers to a gauge freedom in the model. On the other hand, we may verify using the Luttinger formalism that this phase can simply be absorbed into a redefinition of the superfluid phase $\theta$ associated to the bosonic particles or Cooper pairs forming the superfluid reservoir (see Appendix \ref{interactions} and Eq. (\ref{pairingfunction})). Therefore, we can define $\tilde{\varphi}=0$ for simplicity in (\ref{matrix}). The topological superconducting wire is usually referred to as a BDI phase in the topological classification tables, defined through the square of the time-reversal operator, charge conjugation or particle-hole symmetry, and also through chiral symmetry \cite{BernevigNeupert,FidkowskiKitaev}. Defining the pseudo-spin operator, as introduced by P. W. Anderson \cite{Anderson}, \begin{equation} {\bf S} = \frac{1}{2}\psi^{\dagger}_k {\mathbfit{\tau}}\psi_k, \end{equation} such that $S_z = (c^{\dagger}_k c_k - c_{-k}c^{\dagger}_{-k})$, we introduce an analogy with the sphere where the two variables are then $\theta=k$ and $\varphi=\tilde{\varphi}+\frac{\pi}{2}$. This mapping is elegant as it allows for a relation with geometry. In particular, for the sphere model we can then use important relations such as Eq. (\ref{polesC}). This implies that to measure the $\mathbb{Z}$ topological invariant one can drive from north to south pole along a particular line represented through fixed $\varphi=\tilde{\varphi}+\frac{\pi}{2}\in[0;2\pi]$. In this representation of $S^2$, $\theta\in[0;\pi]$. In the wire, since the pseudo-spin mapping relates a particle in $[0;\pi]$ to a hole in $[-\pi;0]$, we describe half of the Brillouin zone and can then also define a $\mathbb{Z}$ topological invariant in a similar way as on the Bloch sphere, $C=\frac{1}{2}(\langle S_z(0)\rangle -\langle S_z(\pi)\rangle)$, where here we assume the (BCS) ground state. The north and south poles correspond to $k=0$ and $ka=\pi$, respectively, in the Brillouin zone. In our analysis, the spin magnetizations at the poles depend only on the sign of $\epsilon(k)=-2t\cos(k a) -\mu$ at $k=0$ and $k a=\pi$. Therefore, we verify that the topological transition takes place when $\mu=\pm 2t$, with $C=1$ in the topological phase and $C=0$ in the polarized phases. From the smooth fields in Eq.
(\ref{CA'}), for the half-filled situation $\mu=0$ and $\Delta=t$, we also identify a correspondence between the averaged charge and the topological number: \begin{eqnarray} \langle \hat{Q}\rangle &=& \frac{L}{2\pi}\int_{-\pi}^{+\pi} \langle BCS| c^{\dagger}_k c_k |BCS\rangle dk = \frac{C}{2}L. \end{eqnarray} This $\mathbb{Z}$ topological number is similarly described as a Zak phase \cite{Zak} or as the winding of the phase $\theta_k$ over the whole Brillouin zone \cite{TewariSau,TrifTserkovnyak}: \begin{equation} C = \frac{1}{2\pi}\oint d\theta_k. \end{equation} Through $\oint$ we suppose periodic boundary conditions for the Brillouin zone. On the other hand, we can equivalently write $C = \frac{1}{\pi}\int_0^{\pi} d\theta_k$, similarly as in Eq. (\ref{Zak}). The Zak phase corresponds to a linear evolution of the phase associated to the polar angle on the sphere. This formulation of the topological invariant is similar for the Su-Schrieffer-Heeger model of polyacetylene \cite{Su}, which has been recently engineered in different quantum platforms \cite{Gadway,Tal,Molenkampcircuit,Rosenthal,Optique}, and for the Rice-Mele model \cite{MunichRiceMele}. We mention here that other measures can be defined related to quantum information. Bipartite fluctuations of the charge on a sub-region $A$ of a superconducting wire of length $L$ allow one to reveal $F(A)=i_Q L +b\log(L) +{\cal O}(1)$ with \cite{Herviou2017} \begin{equation} i_Q = \lim_{L\rightarrow +\infty} \frac{1}{L}\langle \hat{Q}^2\rangle = q \int_{BZ} \frac{dk}{4\pi} \sin^2\theta_k. \end{equation} At the transition $\mu=-2t$, the matrix (\ref{matrix}) reveals a linear gapless mode around $k=0$ associated to one Majorana fermion, with here $b<0$. The quantum phase transition is described through one free (gapless) Majorana fermion in the bulk \cite{Herviou}. A similar gapless Majorana chain with central charge $c=\frac{1}{2}$ may be realized in the presence of magnetic impurities \cite{KarynMajorana}. A one-dimensional quantum liquid with $U(1)$ charge conservation would in contrast produce $b>0$ \cite{Song}. The function $i_Q$ corresponds to the quantum Fisher information density. This also reveals information about the quantum phase transition through charge fluctuations. Coupling a cavity or $LC$ circuit to a $p$-wave superconductor also allows one to measure the dynamical susceptibility of the $p$-wave superconducting wire \cite{Olesia,Matthieu}. It is also relevant to mention theoretical efforts related to the entanglement spectrum \cite{LiHaldane} applied to topological $p$-wave superconducting wires \cite{Maria1}. A generalization of the Zak phase and finite-temperature effects have also been studied in Ref. \cite{Maria2}. Various theoretical works have studied the stability of the topological phase for one wire in the presence of weak interactions \cite{PascalDaniel,Stoudenmire,Schuricht,Jelena,Herviou} and also in the presence of a (moderate) inter-wire hopping term \cite{FanKaryn}. The stability of the topological number towards interactions can be understood from the Luttinger formalism in Appendix \ref{interactions} and can be viewed as an application of a symmetry-protected topological phenomenon. The fact that the structure of Majorana fermions remains identical at the edges can first be understood from the absence of gap closing in the system. Effectively, from renormalization group arguments the system then behaves as if $t\sim \Delta$ at the low-energy fixed point. To strengthen this conclusion, we can also apply the stochastic approach developed in Secs.
\ref{Mott} and \ref{MottKM}. Including the Cooper channel, the BCS Hamiltonian is then modified as \begin{eqnarray} H &=& \sum_i (-t +V(\phi_x-i\phi_y))c^{\dagger}_i c_{i+1} + (\Delta + V\phi_{\Delta})c^{\dagger}_i c^{\dagger}_{i+1} \nonumber \\ &+& h.c. - (\mu+\phi_0) c^{\dagger}_i c_i, \end{eqnarray} with $\phi_x+i\phi_y=-\frac{1}{2}\langle c^{\dagger}_i c_{i+1}\rangle$ and the additional pairing channel $\phi_{\Delta} = -\frac{1}{2}\langle c_{i+1} c_i \rangle$. The $\phi_0$ term can be set to zero if we redefine the interaction in a symmetric way from half-filling, $V(n_i - \frac{1}{2})(n_{i+1}-\frac{1}{2})$. In the $2\times 2$ matrix, this then renormalizes $2t\rightarrow 2t-2V(\phi_x-i\phi_y)$ and $-2\Delta\rightarrow -2\Delta - 2V\phi_{\Delta}$. From the BCS ground state, we identify $\sum_i \langle BCS| c^{\dagger}_i c_{i+1} |BCS\rangle=\frac{N}{2}$ and $\sum_i \langle BCS| c_{i+1} c_i |BCS\rangle =-\frac{N}{2}$, with $N$ being the number of sites, such that effectively the system stays on a line as if $t=\Delta$. Upon increasing interactions, a Mott transition should occur when $V\sim 2t$ with our definitions \cite{Schuricht}.

\subsection{$\mathbb{Z}_2$ topological invariant and ${\cal I}(\theta)$ function} \label{SCtopo} Related to the local topological invariant $C^2$ \cite{C2} introduced in Sec. \ref{light}, we can equally define a $\mathbb{Z}_2$ formulation of the topological invariant for the $p$-wave superconducting wire, related to the Pfaffian definition of Fu-Kane-Mele for topological insulators \cite{FuKaneMajorana}. From Eq. (\ref{polesC}), we equivalently have \begin{equation} \langle S_z(0)\rangle \langle S_z(\pi)\rangle = 1-2C^2 = \mp 1. \label{wiretopo} \end{equation} As a measure of the $\mathbb{Z}_2$ topology, we can then define the quantity from the sphere \begin{equation} \label{invariant} \langle S_z(0)\rangle \langle S_z(\pi) \rangle = \Pi_{i=0,\pi} \xi_{i}, \end{equation} which agrees with the $\mathbb{Z}_2$ index formulated for the wire \cite{Kitaev,SatoAndo}. The variable $\xi_{i}=\langle S_z(i)\rangle$ is defined to have values $\pm 1$ for $i=0,\pi$. Similarly as in the Kane-Mele model, $\xi$ involves the kinetic term at special (symmetry) points in the Brillouin zone. If we are in the weak-paired topological phase this quantity is $-1$ and in the strong-paired phase this is $+1$. We verify that the topological transition takes place when $\mu=\pm 2t$, implying that the function $\langle S_z \rangle$ vanishes at one pole, producing a step function in Eq. (\ref{invariant}). Related to this definition of the $\mathbb{Z}_2$ topological number, we propose a measure allowing an analogy with the light-matter detection in two dimensions and the function ${\cal I}(\theta)$ in Eq. (\ref{Itheta}). Including a potential $V(t)$ acting on the Cooper pairs in the BCS reservoir gives rise to a time-dependent phase shift for the bosons describing these Cooper pairs, \begin{equation} \langle b\rangle(t) = \langle b(t)\rangle_{V_{ac}=0} \times e^{-\frac{i}{\hbar}\int_0^{t} V(t')dt'}. \end{equation} If we develop the phase shift to first order in $V_{ac}$, with $V(t')=V_{ac}\cos(\omega t')$, then \begin{equation} \langle b\rangle(t) \approx \langle b(t)\rangle_{V_{ac}=0}\left(1-\frac{i}{\hbar\omega}V_{ac}\sin(\omega t)\right).
\end{equation} In the wire model, we then have an additional off-diagonal term in the Nambu basis of the form \begin{equation} \label{A0} \delta {H}(k) = A_0 \sin(ka)\sin(\omega t)c^{\dagger}_k c^{\dagger}_{-k}+h.c., \end{equation} with $A_0 = \frac{V_{ac}\langle b\rangle}{\hbar\omega}$ and the identification $\sigma^+=c^{\dagger}_k c^{\dagger}_{-k}$. A similar term can be obtained through an $AC$ potential acting on the wire directly. This time-dependent perturbation can also be induced by coupling the wire to linearly polarized light along the $x$ direction with a vector potential of the form $A_0 \sin(\omega t){\bf e}_x$. In the sense of the $2\times 2$ matrices in Eq. (\ref{spinidentification}), the term is similar to the one in Sec. \ref{lightdipole} on light-induced dipole transitions, with the modification that here we have a $\sin(ka)$ function entering in $\delta{H}$. From the sphere-wire identification $k=\theta$, we can formulate an interesting relation between Eq. (\ref{onehalf}), or the response at $ka=\frac{\pi}{2}$ for a sphere, and the $\mathbb{Z}_2$ topological invariant in Eq. (\ref{wiretopo}). From the $|BCS\rangle$ wavefunction, we identify the following relation, $c^{\dagger}_k c^{\dagger}_{-k} |BCS\rangle \rightarrow \cos\frac{\theta}{2} c^{\dagger}_k c^{\dagger}_{-k}|0\rangle$, when projecting $\delta H(k)$ on the vacuum (of quasiparticles). Adjusting $\hbar \omega \sim 2E(k)$, the time-dependent perturbation can then produce two quasiparticles (quasiholes) in the upper (lower) energy band through the identification for the wavevector $k$ \begin{eqnarray} \eta^{\dagger}_k |BCS\rangle &=& \cos^2\frac{\theta}{2} c^{\dagger}_k |0\rangle \\ \nonumber \eta^{\dagger}_{-k} |BCS\rangle &=& \cos^2\frac{\theta}{2} c^{\dagger}_{-k} |0\rangle. \end{eqnarray} Close to $ka=\theta=\frac{\pi}{2}$, from the relations between the smooth fields in Eq. (\ref{smoothfields}), within the topological phase we identify \begin{equation} \delta H |BCS\rangle = (4A_0\sin(\omega t)\sqrt{2C^2-1}\eta^{\dagger}_k \eta^{\dagger}_{-k}+h.c.) |BCS\rangle. \end{equation} We apply the formula $2C^2-1\sim \tan^2\frac{\theta}{2}$ when $\theta\sim \frac{\pi}{2}$ within the topological phase through Eq. (\ref{smoothfields}). Evaluating transition rates between $|BCS\rangle$ and the state with two quasiparticles $\eta^{\dagger}_k \eta^{\dagger}_{-k}|BCS\rangle$ or quasiholes $\langle BCS| \eta_{-k}\eta_k$ then leads to a calculation identical to the light-matter response in Eq. (\ref{rates}).

\subsection{$C=\frac{1}{2}$ and Majorana Fermions} \label{SpheresMajorana} Here, we discuss another understanding of the occurrence of the fractional topological numbers of Sec. \ref{fractionaltopology} on the sphere through the Majorana fermions of Sec. \ref{pwavewire}, which also suggests a relation with two wires. For two spheres, the role of the transverse fields acting on each sphere is to produce an effective interaction $\sigma_{1x}\sigma_{2x}$ such that from the south pole the model is equivalent to \begin{equation} H_{eff} = r\sigma_{1z}\sigma_{2z} - \frac{d^2 \sin^2\theta}{r} \sigma_{1x}\sigma_{2x}. \end{equation} At the north pole, from the Jordan-Wigner transformation with two spins, we can define $\sigma_{iz}=2c^{\dagger}_i c_i - 1$ with $i=1,2$. Since the superconducting gap goes to zero in the vicinity of the north pole, the ground state satisfies $\sigma_{iz}|GS(0)\rangle = 2i\alpha_i\eta_i|GS(0)\rangle = +|GS(0)\rangle$, so that $c^{\dagger}_i c_i|GS(0)\rangle=|GS(0)\rangle$.
In the case of fractional entangled topology, this requires $d-M<r<d+M$, implying for the superconducting sphere $M=\frac{m}{2}$ and $t=d=\Delta$ such that $H_{eff}$ applies. In addition, we should define the Jordan-Wigner transformation such that $\langle GS(\pi)| \sigma_{iz} |GS(\pi)\rangle =0$ at south pole as long as $t-\frac{\mu}{2}<r<t+\frac{\mu}{2}$, i.e. equally for the two situations $\mu\rightarrow 2t^-$ (such that $\langle c^{\dagger}_{\pi}c_{\pi}\rangle=0$) and $\mu\rightarrow 2t^+$ (such that $\langle c^{\dagger}_{\pi}c_{\pi}\rangle =1$) for $r>0$. This then requires $\sigma_{1z} = \frac{1}{i}(c^{\dagger}_1- c_1)$, $\sigma_{1x}= c^{\dagger}_1+c_1$, $\sigma_{2z} = \frac{1}{i}(c^{\dagger}_2- c_2)e^{i\pi c^{\dagger}_1 c_1}$, $\sigma_{2x}= (c^{\dagger}_2+c_2)e^{i\pi c^{\dagger}_1 c_1}$, such that \begin{equation} H_{eff} = -r(c_1+c_1^{\dagger})(c^{\dagger}_2-c_2) -\frac{d^2 \sin^2\theta}{r}(-c_1+c_1^{\dagger})(c_2+c_2^{\dagger}). \end{equation} This representation satisfies $\langle GS(\pi)|\sigma_{iz}| GS(\pi)\rangle=0$. This can be re-written in terms of the Majorana fermions as \begin{equation} H_{eff} = -2r i \eta_1\alpha_2 - \frac{2id^2}{r}\sin^2\theta \alpha_1\eta_2. \end{equation} The ground state reveals $2i\eta_1 \alpha_2|GS(\pi)\rangle = +|GS(\pi)\rangle$, which then leads to another possible way to write Eq. (\ref{correlation}) for $C_j=\frac{1}{2}$ as \begin{equation} \label{corr} \langle \sigma_{1z}(\pi)\sigma_{2z}(\pi)\rangle = \langle 2i\alpha_2\eta_1\rangle = -1 = -(2C_j)^2. \end{equation} This equation is in Table \ref{tableII} and relates the non-local entangled structure of the paired Majorana fermions $\alpha_2$ and $\eta_1$ with the half topological number of a sphere. Within this formulation, the Majorana fermions $\alpha_1$ and $\eta_2$ measure the $\ln 2$ entropy associated to the degeneracy between the two states $|\Phi_+\rangle_1 |\Phi_-\rangle_2$ and $|\Phi_-\rangle_1 |\Phi_+\rangle_2$ precisely at the south pole. Yet, if we perform an average on an ensemble of measurements, then $\langle GS(\pi)| \sigma_{iz} |GS(\pi)\rangle=0$, such that the result equally reveals the presence of the two possible states $|\Phi_+\rangle_1 |\Phi_-\rangle_2$ and $|\Phi_-\rangle_1 |\Phi_+\rangle_2$. Deviating slightly from $\theta=\pi$ will then produce the entangled wavefunction preserving $\langle GS(\theta)| \sigma_{iz} |GS(\theta)\rangle=0$. From the identification between charge and spin as formulated in Sec. \ref{Coulomb}, we deduce that the two-spheres' model with an Ising interaction is analogous to a two-wires' model with a Coulomb interaction. It is then relevant to mention here that this two-wires' superconducting model gives rise to a Double Critical Ising (DCI) phase \cite{Herviou}, which in fact presents similar properties to the fractional topological phase (on the sphere at $k=\pi$), with two gapless Majorana fermions delocalized in the bulk of the system \cite{delPozo}. The Semenoff mass $M$ on the two spheres corresponds simply to the chemical potential for the superconducting wires. The Coulomb interaction between wires precisely favors a charge ordering around $k=\pi$ along a wire, and the Mott phase then corresponds to an antiferromagnetic ordering of the charges on the two wires. Tuning $M$ within the DCI phase, the system yet presents two critical gapless Majorana modes (one per wire), similarly to the topological quantum phase transition of one wire.
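As a minimal exact-diagonalization sketch of $H_{eff}$ (the parameter values below are ours, for illustration), one can verify Eq. (\ref{corr}) together with $\langle GS(\pi)|\sigma_{iz}|GS(\pi)\rangle=0$ slightly away from the south pole:
\begin{verbatim}
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

r, d, theta = 0.5, 1.0, np.pi - 0.05   # slightly away from theta = pi
H = r * np.kron(Z, Z) - (d**2 * np.sin(theta)**2 / r) * np.kron(X, X)
E, V = np.linalg.eigh(H)
gs = V[:, 0]                            # entangled ground state

print(gs @ np.kron(Z, Z) @ gs)  # -1 = -(2 C_j)^2, Eq. (corr)
print(gs @ np.kron(Z, I) @ gs)  # 0: <sigma_iz> vanishes
\end{verbatim}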
In the DCI phase, the two gapless Majorana fermions correspond to two Ising field theories in two dimensions. The interacting spheres' model then gives further insight into correlated (topological) superconductors from the interplay with Mott physics. The DMRG approach reveals $\frac{1}{2}$ topological number(s) for the DCI phase in an accurate manner \cite{delPozo}.

\subsection{$p+ip$ Superconductor} It is important to mention recent progress in realizing and engineering topological superconductors in higher dimensions. This includes the possibility of realizing a $p+ip$ superconductor \cite{ReadGreen} on surface states of Bi$_2$Se$_3$ \cite{FuKane}. Topological superconductivity was for instance reported on Cu$_x$Bi$_2$Se$_3$ \cite{AndoSC}. The $p+ip$ superconductor can also be realized in multi-wires' architectures \cite{Kanewires}, in particular, similarly as for the Haldane model, through local magnetic fluxes and zero net flux in a unit cell \cite{FanKaryn}. The physics of the $p+ip$ superconductor is also related to the $\nu=\frac{5}{2}$ fractional quantum Hall effect \cite{MooreRead} and to the Kitaev spin model on the honeycomb lattice in the $B$ phase in the presence of a magnetic field \cite{KitaevHoneycomb,Burnell}. The physics of $p$-wave superconductors also occurs in $^3$He \cite{Volovik}, in quantum materials such as Sr$_{2}$RuO$_4$ \cite{KallinBerlinsky} and also in graphene coupled to a high-$T_c$ superconductor \cite{Angelo}. On the honeycomb lattice, the presence of zero-energy bound states and topological states naturally occurs within the BCS theory \cite{DoronKaryn,WilczekGhaemi,Scherer}. A chiral topological superconductor can be engineered from the proximity effect in a material described by a quantum anomalous Hall state \cite{QiHughesRaghuZhang}. The honeycomb lattice can also give rise to other interesting phases from interaction effects, such as an FFLO $p$-wave superconducting phase \cite{TianhanFFLO} and $d+id$ topological superconducting phases in the presence of interactions \cite{Annica,AnnicaWeiKaryn,AnnicaKaryn,Schererd,Wolf}. Here, we show that the local description approach in the reciprocal space can also be applied to a $p+ip$ superconductor. Suppose a $p_x+ip_y$ superconductor on the square lattice in the Nambu basis $(c_{\bf k}, c_{-{\bf k}}^{\dagger})$. Around the $\Gamma=(0,0)$ point in the reciprocal space and the corner of the Brillouin zone at ${\bf k} a=(\pi,\pi)$, we can develop the $2\times 2$ matrix assuming a small deviation ${\bf p}$ from these two points such that ${\bf k}={\bf p}+(0,0)$ and ${\bf k}a =(\pi,\pi)-{\bf p}a$. In this way, the matrix takes the form \begin{eqnarray} \left( \begin{array}{cc} -2t\zeta\cos(pa) -\frac{\mu}{2} & \Delta p a e^{-i\tilde{\varphi}} \\ \Delta p a e^{i\tilde{\varphi}} & 2t\zeta\cos(pa) +\frac{\mu}{2} \\ \end{array} \right), \end{eqnarray} where $p_x+ip_y = p e^{i\tilde{\varphi}}$ with $\zeta=+1$ at the $\Gamma$ point and $\zeta=-1$ at $(\pi,\pi)$. The particular situation $\tilde{\varphi}=\frac{\pi}{4}$ refers to the $p+ip$ superconductor and we also identify the limits of the one-dimensional topological wire corresponding to $\tilde{\varphi}=0$ and $\tilde{\varphi}=\frac{\pi}{2}$; see Fig. \ref{2dtrajectories}. This suggests that the $\Gamma$ point can be placed at the north pole on $S^2$ while the $(\pi,\pi)$ point now corresponds to the south pole.
Assuming that we follow a diagonal path in the reciprocal space joining for instance the $\Gamma$ point to $(\pi,\pi)$, this now corresponds to a line joining the north to the south pole on the Bloch sphere, where the polar angle becomes $\theta=p a$. The model is then equivalent to the matrix \begin{eqnarray} \hskip -0.5cm \left( \begin{array}{cc} -2t\zeta\cos(pa) -\frac{\mu}{2} & \Delta (\sin(pa)-i\sin(pa)) \\ \Delta (\sin (pa)+ i\sin(pa)) & 2t\zeta\cos(pa) +\frac{\mu}{2} \\ \end{array} \right), \end{eqnarray} with the dressed coordinates $k_{\parallel}=\frac{1}{2}(k_x+k_y)$, $k_{\perp}=\frac{1}{2}(k_x-k_y)$ and, along the diagonal $k_x=k_y$, such that $k_{\perp}=0$ and $k_{\parallel}=k_x=k_y=p$. From the identification with Eq. (\ref{correspondence}), we have effectively $d=2t=-\sqrt{2}\Delta$ with a Semenoff mass $M=\frac{m}{2}=\frac{\mu}{2}$. This is equivalent to an azimuthal angle $\tilde{\varphi}=\varphi=\frac{\pi}{4}$. Similarly as for the $p$-wave superconducting wire, we can define the topological invariant on half of the one-dimensional Brillouin zone due to the particle-hole symmetry. We can define the topological invariant from the poles of the sphere only. Through the pseudospin-$\frac{1}{2}$ analogy, we then have \begin{equation} C = \frac{1}{2}(\langle S_z(0,0)\rangle - \langle S_z(\pi,\pi)\rangle), \end{equation} which is equivalent to \begin{equation} \label{pipSC} C = \frac{1}{2}\left(\hbox{sgn}\left(2t+\frac{\mu}{2}\right) - \hbox{sgn}\left(-2t+\frac{\mu}{2}\right)\right). \end{equation} This implies that the system resides in the topological phase when $-4t<\mu<4t$ and reaches the strong-paired phases when $\mu<-4t$ and $\mu>4t$. This also implies that the $p+ip$ superconductor may be defined through the $\mathbb{Z}_2$ topological number \begin{equation} C^2-\frac{1}{2} = -\frac{1}{2}\langle S_z(0,0)\rangle \langle S_z(\pi,\pi)\rangle. \end{equation} This topological invariant, or $C^2-\frac{1}{2}$, may then be detected from the protocol introduced in Sec. \ref{SCtopo}, locally from the diagonal point ${\bf k} a=(\frac{\pi}{2},\frac{\pi}{2})$ on the path in the Brillouin zone. The occurrence of zero-energy modes at the edges can be verified similarly as in the article of Read and Green \cite{ReadGreen}. Close to the $\Gamma$ point $(0,0)$, looking for solutions of the form $u c_{\bf p} + v c^{\dagger}_{-{\bf p}}\sim u c_{(0,0)}+ v c^{\dagger}_{(0,0)}$ when fixing the chemical potential $\mu\sim -4t$, this gives rise to the two coupled equations \begin{eqnarray} i \frac{\partial u}{\partial t} &=& \left(-2t -\frac{\mu}{2}\right) u + \Delta a (p_x-ip_y) v \\ \nonumber i \frac{\partial v}{\partial t} &=& \Delta a (p_x+ip_y) u + \left(2t +\frac{\mu}{2}\right)v, \end{eqnarray} which admit solutions such that $u=v^*$, then corresponding to a (gapless) chiral Majorana fermion. Fixing ${\bf p}\rightarrow 0$ is equivalent to $\theta\rightarrow 0$ on the sphere and to $z=-\frac{H}{2}$ on the cylinder, such that the chiral edge mode also occurs at the edge at the bottom of the cylinder in Fig. \ref{Edges.pdf}. Here, we introduce the Green's function of an electron following Wang and Zhang \cite{Wang} and define the vector ${\bf h}=(\sin k_x,\sin k_y, m'+2-\cos k_x-\cos k_y)$ with $a=\Delta=2t=1$ and $m'+2=-M$, acting on the pseudospin-$\frac{1}{2}$ in the reciprocal space. At zero frequency, the ${\bf h}$ vector also defines the inverse of the electron Green's function ${\cal G}^{-1}(0,{\bf k})$. The Green's function at zero frequency diverges at specific points in the Brillouin zone.
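The change of topology associated with this ${\bf h}$ vector can be checked numerically (a minimal sketch; the discretized-curvature helper \texttt{chern} is ours, and the crude finite differences only approach the quantized value for fine enough grids):
\begin{verbatim}
import numpy as np

def chern(mp, L=200):
    # h-vector of Wang-Zhang with a = Delta = 2t = 1.
    k = np.linspace(-np.pi, np.pi, L, endpoint=False)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    h = np.stack([np.sin(kx), np.sin(ky),
                  mp + 2 - np.cos(kx) - np.cos(ky)])
    n = h / np.linalg.norm(h, axis=0)
    dx = np.roll(n, -1, axis=1) - n   # periodic differences along kx
    dy = np.roll(n, -1, axis=2) - n   # and along ky
    F = np.einsum("iab,iab->ab", n, np.cross(dx, dy, axis=0))
    return F.sum() / (4 * np.pi)

print(chern(-1.0))  # close to +/- 1: topological side
print(chern(+1.0))  # close to 0: trivial side
\end{verbatim}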
At the quantum phase transition driven by $m'=0$, corresponding to $\mu=-4t$, ${\cal G}^{-1}$ shows a zero, which can then engender another definition of the location of the transition. The relation with the topology can be precisely verified by calculating the topological number in the Brillouin zone through Eq. (\ref{dvectorsigma}) such that \begin{equation} C = \frac{1}{2\pi}\iint F_{xy} d^2 k, \end{equation} where $F_{xy} = \frac{1}{2}\epsilon^{abc} n^a \partial_{k_x} n^b \partial_{k_y} n^c$ and ${\bf n}=\frac{{\bf h}}{|{\bf h}|}$. The zeros of the vector ${\bf h}$ also play a key role in the topological description. Performing a development around ${\bf k}=(0,0)$ such that ${\bf h}\sim (k_x,k_y,m')$, we find $F_{xy}=\frac{1}{2}$ if $\mu<-4t$ and $F_{xy}=-\frac{1}{2}$ if $\mu>-4t$. This implies that the change of $C$ at the topological transition can be defined locally as \begin{equation} \Delta C = F_{xy}(0,0,m'\rightarrow 0^-) - F_{xy}(0,0,m'\rightarrow 0^+) = 1, \end{equation} which is then equivalent to Eq. (\ref{pipSC}). From the correspondence with the Bloch sphere, we also infer that $C=A_{\varphi}(\pi,\pi) - A_{\varphi}(0,0)$ with $A_{\varphi}(\pi,\pi)=\frac{1}{2}$ and $A_{\varphi}(0,0)=-\frac{1}{2}$ from the eigenstates in Eq. (\ref{eigenstates}). This formalism on the Green's functions also applies in the presence of interactions and uses the fact that the eigenvalues of the inverse of the Green's function are real at zero frequency. This formalism may then be developed further for two-dimensional and three-dimensional topological superconductors in the presence of interactions.

\section{Generalized Resonating Valence Bond Theory} \label{GRVBT} Here, we elaborate on the smooth fields' formalism related to the fractional entangled geometry for generalized resonating valence bond states. Such resonating valence bond states have been shown to play a key role in the understanding of high-$T_c$ cuprates, with the presence of hot spots with preformed pairs forming at the corners of the two-dimensional half-filled Fermi surface of Fig. \ref{2dtrajectories} and a Fermi liquid area surrounding the diagonals \cite{plainvanilla,KarynMaurice}. A possible relation with quantum Hall systems can also be revealed through the Kalmeyer-Laughlin approach \cite{KalmeyerLaughlin}.

\begin{center} \begin{figure}[ht] \includegraphics[width=0.4\textwidth]{Figpwave} \caption{Different trajectories in the Brillouin zone represented through the polar angle $\tilde{\varphi}$ or azimuthal angle $\varphi$. The axes are defined as $k_xa$ for the horizontal direction and $k_ya$ for the vertical direction.} \label{2dtrajectories} \end{figure} \end{center}

Assembling coupled spheres, two by two starting from the $C_j=\frac{1}{2}$ state, may allow one to build quantum networks or circuits with a superposition of polarized spins at one pole and an Anderson resonating valence bond state at the other pole \cite{AndersonRVB}, similar to the Kitaev spin chain \cite{Majorana1}. To give a perspective on the fractional topological numbers on the sphere, we find it useful to show here that the formalism of Sec. \ref{fractionaltopology} allows us to justify the fractions observed in spin arrays in a ring geometry for an odd number of sites \cite{HH}. Along the lines of the Affleck-Kennedy-Lieb-Tasaki (AKLT) approach \cite{AKLT}, which plays a key role in the Matrix Product States foundations \cite{MPS}, it is possible to find analytically some solutions with generalized resonating valence bond states.
To lighten the notations, the up state equally refers to $|\Phi_+\rangle=\uparrow$ and the down state refers to $|\Phi_-\rangle=\downarrow$. For $N$ finite, we fix the parameters at south pole such that the system shows precisely $N$ degenerate ground states. For $N=5$: $\downarrow \uparrow \downarrow \uparrow \downarrow$, $\uparrow \downarrow \uparrow \downarrow \downarrow$, $\downarrow \uparrow \downarrow \downarrow \uparrow$, $\uparrow \downarrow \downarrow \uparrow \downarrow$ and $\downarrow \downarrow \uparrow \downarrow \uparrow$. At south pole, the ground state energy is precisely $-(N-2)r-(d-M)$ and is lowered by the presence of one pair $\downarrow\downarrow$. The key point here is the presence of the transverse field, which allows these $N=5$ states to resonate into one another, \begin{equation} |GS\rangle = \frac{1}{\sqrt{5}}\left(\downarrow \uparrow \downarrow \uparrow \downarrow + \uparrow \downarrow \uparrow \downarrow \downarrow + \downarrow \uparrow \downarrow \downarrow \uparrow + \uparrow \downarrow \downarrow \uparrow \downarrow + \downarrow \downarrow \uparrow \downarrow \uparrow\right). \end{equation} The proof is generalizable to any odd $N$, for which the classical antiferromagnetic ground state is frustrated. Here, the resonating valence bond state is formed with the hopping of one bound state $\downarrow \downarrow$ (described through a spin-$\frac{1}{2}$ or `spinon' in the antiferromagnetic N\'eel state) from the transverse field, giving rise to terms $- \frac{d^2\sin^2\theta}{r}\sigma_{ix}\sigma_{jx}$ with dominant contributions from $(i,j)$ nearest neighbors. At north pole, the ferromagnetic ground state is maximally polarized. At south pole, we then have $\langle GS| \sigma_{iz} |GS\rangle = - \frac{1}{N}$ for $N$ odd. The presence of a domain wall and of the transverse field produces a specific class of resonating valence bond states. We can then apply the methodology with the Berry fields $A_{j\varphi}$ defined smoothly on the whole surface of the sphere, generalizing Eqs. (\ref{eq2}) and (\ref{eq1}). This gives rise to \begin{equation} A_{j\varphi}(\pi) = \frac{N-1}{2N} A_{j\varphi}(0) + \frac{N+1}{2N}A_{j\varphi}^{r=0}(\pi). \end{equation} This equation simply tells us that, for a sphere at south pole, the probability to be in the down state is precisely $\frac{N+1}{2N}$ and the probability to be in the up state is $\frac{N-1}{2N}$. Now, we can also use the fact that, adiabatically setting $r=0$, all the spheres enclose one monopole, \begin{equation} A_{j\varphi}^{r=0}(\pi) - A_{j\varphi}(0) = q =1. \end{equation} This equation shows that the presence of a Dirac monopole in the core of each site or each sphere may reveal other forms of states for low-dimensional quantum spin chains \cite{AffleckHaldane}. Combining these two equations simply leads to \begin{equation} C_j = A_{j\varphi}(\pi) - A_{j\varphi}(0) = \frac{N+1}{2N}q, \end{equation} which nicely reproduces the numerical results for $N=3$ $(C_j=\frac{2}{3})$ and $N=5$ $(C_j=\frac{3}{5})$ \cite{HH}. This formula is also in agreement with \begin{equation} C_j = \frac{1}{2}(\langle \sigma_{jz}(0)\rangle - \langle \sigma_{jz}(\pi)\rangle) = \frac{1}{2}\left(1+\frac{1}{N}\right). \end{equation} In the thermodynamical limit, for each sphere $C_j\rightarrow \frac{1}{2}$.
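These fractions can be checked combinatorially with a short Python sketch (our own construction of the $N$ translated configurations, using exact rational arithmetic through \texttt{fractions}):
\begin{verbatim}
from fractions import Fraction

def rvb_check(N):
    # Neel-like ring with one frustrated (down-down) bond; the N
    # degenerate product states are its translations.
    neel = [(-1) ** (j + 1) for j in range(N)]
    states = [neel[-s:] + neel[:-s] for s in range(N)]
    # sigma_z is diagonal, so in the equal-weight superposition the
    # cross terms between orthogonal product states vanish.
    sz = Fraction(sum(s[0] for s in states), N)
    Cj = Fraction(1, 2) * (1 - sz)   # with <sigma_z(0)> = +1 at north pole
    return sz, Cj

for N in [3, 5, 7]:
    print(N, rvb_check(N))  # sz = -1/N and C_j = (N+1)/(2N)
\end{verbatim}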
The physics at south pole is equivalent to a delocalized bound state with an energy spectrum $-2J_{\perp}\cos(ka)$, where $J_{\perp}=\frac{d^2\sin^2\theta}{r}$, from a wavefunction $\psi(x)=\frac{1}{\sqrt{N}}e^{i kx}$. The increase of local energy $2r$ from the formation of the bound state and two aligned spins is absorbed in the definition of the ground state energy $-(N-2)r-(d-M)$. The bound state is then delocalized along the chain from the minimum of energy at $k=0$. From the Majorana fermions representation of Sec. \ref{SpheresMajorana}, we obtain $i\langle GS| \alpha_1 \eta_2 |GS\rangle_{k\rightarrow 0}\rightarrow 1$, such that the bound state has a probability ${\cal O}(\frac{1}{N})$ to be on each site. In the thermodynamical limit, this is similar to having one gapless Majorana fermion per site and, from the spin magnetization, $\langle GS|\sigma_{iz} |GS\rangle\rightarrow 0$ similarly as for a quantum spin liquid. The situation with $N$ even may reveal various situations of entangled states at south pole with a relation towards $C_j=\frac{1}{2}$ \cite{HH}. For four spheres forming a quantum box model, we identify a correspondence between models of spheres with $\frac{1}{2}$-fractional topological numbers and the Kitaev spin model \cite{KitaevHoneycomb} in ladder geometries \cite{HH,Majorana1,Majorana2}.

\section{Summary} \label{Summary} To summarize, we have elaborated on the formalism related to a (quantum) geometrical approach for topological matter through Berry fields ${\bf A}$ smoothly defined on a whole surface, representing for instance the surface of a Bloch sphere in quantum mechanics. Equivalently, the fields ${\bf A}'$ show a discontinuity at the interface (boundary) within the applicability of Stokes' theorem, revealing the presence of a topological charge in the core. The definition of ${\bf A}'$ also reveals the presence of a Dirac string transporting the induced information from the core to the poles. The global topological number is then defined locally from the poles. From the formalism, we verify the robustness of the induced response towards gentle deformations of the surface, for instance the sphere becoming a cylinder with uniform Berry curvatures. We have shown the applicability of this quantum topometry for transport properties in time from Newtonian physics and for the light-matter coupling, introducing a function ${\cal I}(\theta)$ which is naturally related to the quantum distance and metric. The theoretical approach was then developed in relation to topological crystals and energy bands, with specific applications of the light-matter coupling and circularly polarized light to topological matter from the reciprocal or momentum space. The formalism allows us to include interaction effects from the momentum space within a variational stochastic approach. Through interaction effects between two Bloch spheres, as a result of a $\mathbb{Z}_2$ symmetry, we have elucidated the possibility of fractional entangled topology, yet in the presence of a monopole inside each sphere, through a pure state at one pole and an entangled state at the other pole. One-half of the surface then radiates the Berry curvature, and the topological response of the two spheres becomes similar to that of a pair of merons or half-Skyrmions, which may then be engineered in mesoscopic or atomic systems. The meron physics then links with possible solutions of the Yang-Mills equation. We have identified a relation between quantum entanglement and $\frac{1}{2}$ topological numbers.
Fractional numbers yet arise in an assemblage of geometries, for instance in a ring geometry with an odd number of spheres, giving rise to generalized entangled states at south pole. We have described applications of this fractional geometry in relation with topological semimetals in bilayer and monolayer honeycomb-plane models, showing the emergence of a protected topological Fermi liquid in two dimensions. In this way, this formulates an application of quantum entanglement in band theory. The topological semimetals are described through a quantized quantum Hall conductivity. In three dimensions, we have formulated several understandings of $\frac{1}{2}$ topological numbers in a cube through geometry, Ramanujan alternating infinite series, and also transport and responses to circularly polarized light. For three-dimensional topological insulators, it is similarly known that surface states on a cube can be equally described through one Dirac point or a meron. We have then extended the formalism to topological p-wave superconducting systems and Kitaev wires and built a correspondence between Majorana fermions for the two-spheres' model and fractional topology. The two-spheres' model may find further applications related to quantum circuits and quantum information, the production of entangled states locally on the Bloch sphere, and may also be applied in networks related to Matrix Product States developments. Spheres in a quantum bath may also be applied for energy applications through the quantum dynamo effect. Interestingly, the correspondence with classical physics for the smooth vector fields may suggest further applicability of the formalism for the physics of planets related to the quest for Dirac monopoles, black holes and gravitational aspects. \\ K.L.H. is grateful for discussions and presentations via zoom during the difficult Covid isolation period when this work was initiated, in particular at Cambridge, Lisbon, Oxford, Montreal and in person at ENS Paris, Aspen, Dresden and Les Diablerets. K.L.H. also acknowledges students, postdoctoral associates and colleagues for discussions, collaborations and support related to these ideas. K.L.H. acknowledges support from Ecole Polytechnique, CNRS, the Deutsche Forschungsgemeinschaft (DFG) under project number 277974659 and from ANR BOCA. Numerical evaluations on some figures have benefitted from the Pythtb platform. This review is dedicated to my family.

\begin{appendix} \section{Berry curvature, Metric and ${\cal I}(\theta)$ function} \label{Berrycurvature} Here, we develop the formalism on the Berry curvature from the sphere to the lattice, then introduce the quantum metric and quantum distance, and link with the ${\cal I}(\theta)$ function. First, for completeness, we re-derive Eq. (\ref{F0}) for a plane (surface) parametrized by ${\bf R}=(R_x,R_y)$, with here $R_i=p_i=\hbar k_i$, $i=x,y$, in the reciprocal space. We define the Berry connection \begin{equation} A_{\nu}({\bf R}) = -i\langle \psi |\partial_{\nu} |\psi\rangle. \end{equation} Here, $|\psi\rangle$ refers to the ground state or lower-energy state for a spin-$\frac{1}{2}$ model or for a two-band model. From $\bm{\nabla}\times{\bf A}$, we then evaluate the Berry curvature \cite{Berry} \begin{equation} F_{\mu\nu} = \frac{\partial}{\partial R^{\mu}}A_{\nu} - \frac{\partial}{\partial R^{\nu}}A_{\mu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} = -F_{\nu\mu}. \end{equation} We can go step by step.
First, \begin{equation} \partial_{\mu}A_{\nu} = -i\langle \partial_{\mu}\psi| \partial_{\nu}\psi\rangle -i \langle \psi| \partial_{\mu}\partial_{\nu}\psi\rangle. \end{equation} Therefore, \begin{equation} F_{\mu\nu}({\bf R}) = -i\left(\langle \partial_{\mu}\psi| \partial_{\nu}\psi\rangle-\langle \partial_{\nu}\psi| \partial_{\mu}\psi\rangle\right). \end{equation} Now, we can insert the completeness relation $\sum_n |n\rangle\langle n|=1$, including $|n\rangle=|\psi\rangle$ and all the other states in the energy spectrum, such that \begin{equation} \label{Fmunu} F_{\mu\nu}({\bf R}) = -i\sum_n \left(\langle \partial_{\mu}\psi |n\rangle\langle n| \partial_{\nu}\psi\rangle-\langle \partial_{\nu}\psi| n\rangle\langle n| \partial_{\mu}\psi\rangle\right). \end{equation} If $|n\rangle=|\psi\rangle$, the corresponding contribution vanishes. Therefore, the sum only involves $|n\rangle\neq |\psi\rangle$. Starting from the eigenvalue equation \begin{equation} H({\bf R}) |n\rangle = E_{n} |n\rangle \end{equation} and applying the differential operator $\frac{\partial}{\partial R_{\alpha}}$ on both sides, with $\alpha=\mu$ or $\nu$, we obtain the identity \begin{equation} \langle n| \partial_{\alpha}\psi\rangle = \frac{\left\langle n \left| \frac{\partial H}{\partial R_{\alpha}}\right| \psi \right\rangle }{(E_{\psi}-E_n)}. \end{equation} If we invert the roles of $n$ and $\psi$, then \begin{equation} \langle \partial_{\alpha}\psi |n\rangle = \frac{\left\langle \psi \left| \frac{\partial H}{\partial R_{\alpha}}\right| n \right\rangle }{(E_{\psi}-E_n)}. \end{equation} This is equivalent to \begin{equation} \label{munu} F_{\mu\nu} = i\sum_{n\neq \psi} \frac{\left(\left\langle n \left| \frac{\partial H}{\partial R_{\mu}}\right| \psi \right\rangle \left\langle \psi \left| \frac{\partial H}{\partial R_{\nu}}\right| n \right\rangle - \mu\leftrightarrow \nu\right)}{(E_n-E_{\psi})^2}. \end{equation} In the case of a $2\times 2$ matrix Hamiltonian, there is only one excited state $|n\rangle$. Here, $F_{\mu\nu}$ can be defined equally on the sphere, with $\mu$, $\nu$ representing the angles $\varphi$ and $\theta$ through $F_{\theta\varphi}=\partial_{\theta}A'_{\varphi}-\partial_{\varphi} A'_{\theta}=\frac{\sin\theta}{2}$, and on the lattice. It is also useful to derive relations from the definitions on the honeycomb lattice, introducing $\mu,\nu$ as $p_x,p_y$. Close to the Dirac point $K$, to linear order ${\cal O}(\theta)$, from Eqs. (\ref{eigenstates}) and (\ref{tan}) the lowest-band wavefunction in the Haldane model $|\psi\rangle = |\psi_+\rangle$ reads: \begin{equation} \label{psi+} |\psi_+\rangle = \left( \begin{matrix} 1 \\ - \frac{1}{2}\frac{\hbar v_F|{\bf p}|}{m} e^{i\tilde{\varphi}} \end{matrix} \right), \end{equation} with $\tan\theta = \frac{\hbar v_F|{\bf p}|}{m}\approx \sin \theta$. Here, we have inserted the form of the lowest-band eigenstate from Eq. (\ref{eigenstates}) and implemented the relation between the polar angle around a Dirac point and the azimuthal angle on the sphere, $\tilde{\varphi}=\varphi\mp \pi$, from Eq. (\ref{correspondence}). We can then derive the useful identities \begin{equation} \partial_{p_x}|\psi_+\rangle = \left( \begin{matrix} 0 \\ -\frac{1}{2}\frac{\hbar v_F}{m} \end{matrix} \right) \end{equation} and \begin{equation} \partial_{p_y}|\psi_+\rangle = \left( \begin{matrix} 0 \\ -\frac{i}{2}\frac{\hbar v_F}{m} \end{matrix} \right), \end{equation} such that \begin{equation} i\partial_{p_y}\langle \psi_+| \partial_{p_x} |\psi_+\rangle = \frac{\hbar^2 v_F^2}{4m^2}.
\end{equation} Also, we have \begin{equation} i\partial_{p_x}\langle \psi_+| \partial_{p_y} |\psi_+\rangle = -\frac{\hbar^2 v_F^2}{4m^2}. \end{equation} Defining $F_{p_y p_x} = - F_{p_x p_y} = +i\partial_{p_y}\langle \psi_+| \partial_{p_x} |\psi_+\rangle - i\partial_{p_x}\langle \psi_+| \partial_{p_y} |\psi_+\rangle$, then to linear order in $\theta$, we have \begin{equation} \label{Fpypx} F_{p_y p_x} = \frac{\hbar^2 v_F^2}{2m^2}. \end{equation} This equation is gauge-independent. This relation, formulated as in Eq. (\ref{Fpypx}), was also established in Ref. \cite{Ryu}. On the other hand, the geometrical method presented in Eq. (\ref{swap}) allows us to verify in a simple way the presence of $\cos\theta$ as a global prefactor when going to higher orders in $\theta$ in this formula, starting from the general sphere eigenstates, making a link with $C=1$ at the two Dirac points. This formula tends to agree with the analysis of Ref. \cite{Meron}. We can also verify from (\ref{psi+}) that the Berry connections $A_{p_x}$ and $A_{p_y}$ are zero at the Dirac points. From the definitions, for the diagonal terms, we have $F_{p_x p_x}=0=F_{p_y p_y}$. We then find it judicious to introduce \begin{equation} f_{\mu\mu} = \langle \partial_{\mu} \psi| \partial_{\mu} \psi\rangle \end{equation} with here $\mu=p_x$ or $p_y$ for the diagonal response. We show below that it is related both to the Fubini-Study metric and to the quantum distance, and also to the response to circularly polarized light through the ${\cal I}(\theta)$ function. It is first useful to generalize Eq. (\ref{munu}) for the $f$ function, defining \begin{equation} f_{\mu\mu} + f_{\nu\nu} = \langle \partial_{\mu} \psi | \partial_{\mu} \psi\rangle + \langle \partial_{\nu} \psi | \partial_{\nu} \psi\rangle. \end{equation} Inserting the eigenstates $|n\rangle$ similarly as for $F_{\mu\nu}$, this gives rise to the identity \begin{equation} f_{\mu\mu} + f_{\nu\nu} = \sum_{n\neq \psi} \frac{{\cal I}_{\mu\mu}+{\cal I}_{\nu\nu}}{(E_n-E_{\psi})^2}, \end{equation} where \begin{equation} {\cal I}_{\mu\mu} = \left\langle \psi \left| \frac{\partial H}{\partial R_{\mu}} \right| n\right\rangle \left\langle n \left| \frac{\partial H}{\partial R_{\mu}} \right|\psi \right\rangle, \end{equation} and similarly for ${\cal I}_{\nu\nu}$. Introducing $\mu=p_x$ and $\nu=p_y$, we observe that the function ${\cal I}_{p_x p_x}+{\cal I}_{p_y p_y}$ precisely corresponds to the ${\cal I}(\theta)$ that we introduced in Eq. (\ref{Itheta}). This implies that the diagonal function ${\cal I}_{p_x p_x}+{\cal I}_{p_y p_y}$ is also a good measure of topological properties through $C^2$, locally from the Dirac points or from the poles of the sphere, which precisely enters the time-dependent response to circularly polarized light in Sec. \ref{light}. The function $|\langle \psi_-| \sigma_x | \psi_+\rangle|=|\langle n|\sigma_x|\psi\rangle|$ can also be measured in principle through the corrections in energy due to the light-matter coupling with a dipole interaction, as in Eq. (\ref{energyshift}). At the Dirac points, $(E_n-E_{\psi})^2$ becomes equal to $(2m)^2$. Therefore, the function $f_{p_x p_x}+f_{p_y p_y}$ is also a good marker of topological properties locally on the sphere and in the reciprocal space of the topological lattice model. We can also evaluate the quantum distance \cite{Ryu,BlochMetric}, defined in a symmetric way through \begin{equation} \langle \psi_+({\bf k}-d{\bf k})| \psi_+({\bf k}+d{\bf k})\rangle.
\end{equation} At the Dirac points, due to the fact that $A_{p_x}$ and $A_{p_y}$ are zero, we have \begin{equation} \langle \psi_+({\bf K}-d{\bf k})| \psi_+({\bf K}+d{\bf k})\rangle = 1-\langle \partial_{k_{\mu}} \psi_+ | \partial_{k_{\mu}} \psi_+ \rangle d k_{\mu}^2. \end{equation} Therefore, we can access the metric $g_{ij}$ through \begin{equation} g_{ij} dk_i dk_j = 1-|\langle \psi_+({\bf k}-d{\bf k}) | \psi_+({\bf k}+d{\bf k})\rangle|^2. \end{equation} At the Dirac points, we then have the precise identity \begin{eqnarray} g_{\mu\mu} &=& 2\hbox{Re}(\langle \partial_{k_{\mu}}\psi_+ | \partial_{k_{\mu}}\psi_+\rangle) \\ \nonumber &=& 2\hbox{Re}(f_{\mu\mu}) = \frac{1}{2}\frac{\hbar^2 v_F^2}{m^2}C^2. \end{eqnarray} This formula is in agreement with the formula of Matsuura and Ryu, but these authors did not identify the presence of $C^2$. This equation also expresses the fact that the metric is effectively flat in the vicinity of the poles of the sphere. Recently, some efforts have been made to relate this quantum metric from the reciprocal or momentum space to a gravitational approach \cite{BlochMetric}. In the sense of the Einstein field equation, this metric close to the poles of the sphere or the Dirac point corresponds to a vacuum for the gravitational field, assuming a pure quantum state $|\psi\rangle=|\psi_+\rangle$. It is also interesting to observe that the response to circularly polarized light precisely measures the quantum distance through ${\cal I}(\theta)$ \cite{C2}. The recent work \cite{BlochMetric} has also identified a relation between stress-energy, entropy, Bloch bands and the gravitational potential from the momentum space. \section{Photo-Induced Currents and Conductivity} \label{lightconductivity} The photocurrents are responses to the electric field associated with light. The electric field takes the form ${\bf E}=e^{i\frac{\pi}{2}}A_0\omega e^{-i\omega t}({\bf e}_x\mp i{\bf e}_y)$ such that $\hbox{Re}\,{\bf E}=-(|A_0|\omega)(\sin\omega t,\mp \cos\omega t,0)$. If we suppose $A_0<0$, then at short times the physics is analogous to the effect of an electric field ${\bf E}=\mp \omega |A_0| {\bf e}_{\varphi}$, with ${\bf e}_{\varphi}\sim{\bf e}_y$ the unit vector tangent to the azimuthal angle in the equatorial plane, where $\pm$ refers to the right-handed $(+)$ and left-handed $(-)$ polarizations, respectively, as defined in Sec. \ref{electricfield}. The two light polarizations produce photocurrents turning in different directions. Now, we calculate the photocurrents. We begin with the continuity equation \begin{equation} \bm{\nabla}\cdot{\bf J} +\frac{\partial \hat{n}}{\partial t}=0. \end{equation} Due to the structure of the $2\times 2$ matrix Hamiltonian, we have $\hat{n}_a(t) = \frac{1}{2}\left(\mathbb{I} + \sigma_z\right)$ and $\hat{n}_b(t) = \frac{1}{2}\left(\mathbb{I} - \sigma_z\right)$. Here, $\mathbb{I}$ refers to the identity matrix. Then, starting from the reciprocal space, we can write \begin{equation} \frac{d\hat{n}_a}{dt} = -\frac{d\hat{n}_b}{dt}, \end{equation} such that for transport properties, the current density in this model can be defined (up to a global factor) from \begin{equation} \hat{J}(t) = \frac{d}{dt}(\hat{n}_a({\bf k},t)-\hat{n}_b({\bf k},t)). \end{equation} On the lattice, we can approximate $\frac{\partial \hat{J}_i}{\partial x_i}\sim \frac{\hat{J}_i}{a}$ with the lattice spacing set to unity ($a=1$), and from the Fourier transform we can equivalently evaluate the current density in the reciprocal space.
In this sense, the current refers to the current from an electrical dipole measuring the charge polarization between an $a$ and a $b$ state for a given ${\bf k}$. Transferring one $a$ particle (from the lower energy band) to a $b$ particle (in the upper energy band) at the $K$ point due to a light quantum will induce a current. Now, we use the definitions related to the Ehrenfest theorem. We have \begin{equation} \langle \sigma_z \rangle (t) = \langle \psi(t) | \sigma_z | \psi(t)\rangle = \langle \psi(0) | e^{\frac{i H t}{\hbar}} \sigma_z e^{\frac{-i H t}{\hbar}} |\psi(0)\rangle. \end{equation} Therefore, we can equivalently assume that the operator $\sigma_z(t)$ now evolves in time such that \begin{equation} \sigma_z(t) = e^{\frac{i H t}{\hbar}} \sigma_z e^{\frac{-i H t}{\hbar}}, \end{equation} and that the wavefunctions are taken at fixed time $t=0$. From the Ehrenfest theorem, \begin{equation} \frac{d}{dt}\langle \sigma_z\rangle(t) = \frac{i}{\hbar}\langle \psi(t) | [H,\sigma_z] |\psi(t)\rangle, \end{equation} which is equivalent to \begin{equation} \frac{d}{dt}\sigma_z(t) = \frac{i}{\hbar} [H,\sigma_z(t)]. \end{equation} For a charge $e=1$, we have \begin{equation} \hat{J}(t) = \frac{1}{2}\frac{d}{dt}\sigma_z(t). \end{equation} Now, from the form of the Hamiltonian including the light-matter coupling, we obtain \begin{equation} \hat{J}(t)=v_F\left((p_x+A_x(t))\sigma_y - (\zeta p_y +A_y(t))\sigma_x\right). \end{equation} When we evaluate the current response to second order in $A_0$ from Fermi's golden rule, it is the same as keeping just $A_x(t)$ and $A_y(t)$ in this equation \cite{Klein}. Therefore, we see that it is equivalent to select a Dirac point from a light polarization, setting $p_x=p_y=0$, or to perform an average over all the wave-vectors \cite{Goldman}. Therefore, we can equally calculate the response at a Dirac point, leading to \begin{equation} \label{Kstructure} \hat{J}_{\pm,\zeta}(t) = \frac{1}{2\hbar}A_0e^{-i\omega t}\left(\frac{\partial H}{\partial(\zeta p_y)} \pm i \frac{\partial H}{\partial p_x}\right)+h.c. \end{equation} Now, we can apply the Fermi golden rule to the current density. This results in \begin{eqnarray} \tilde{\Gamma}_{\pm} &=& \frac{2\pi}{\hbar} \frac{A_0^2}{2\hbar^2} \left| \left\langle \psi_- \left | \left( \pm i\frac{\partial H}{\partial p_x} + \frac{\partial H}{\partial p_y}\right) \right |\psi_+\right\rangle \right|^2 \nonumber \\ &\times& \delta(E_-(0)-E_+(0)-\hbar\omega). \end{eqnarray} Here, we have taken into account the structure of the eigenstates $|\psi_+(0)\rangle = - |\psi_-(\pi)\rangle$ and $|\psi_-(0)\rangle=|\psi_+(\pi)\rangle$. For the currents we must rather evaluate $\tilde{\Gamma}_{+}(K)-\tilde{\Gamma}_{-}(K')$. This leads to \begin{eqnarray} &&\frac{\tilde{\Gamma}_{+}(K,\omega)-\tilde{\Gamma}_{-}(K',\omega)}{2} = -\frac{2\pi}{\hbar}\frac{A_0^2}{(\hbar v_F)^2} \\ \nonumber &\times& m^2\left(F_{p_y p_x}(0)-F_{-p_y p_x}(\pi)\right)\delta(E_-(0)-E_+(0)-\hbar\omega). \end{eqnarray} We recall that $E_+(0)$ refers to the lowest eigenenergy. This equation shows that the photo-induced currents at the Dirac points are related to $C$ and therefore to the quantum Hall conductivity through Eq. (\ref{F}). If we integrate over frequencies, this leads to \begin{equation} \label{photocurrents} \left| \frac{\tilde{\Gamma}_+(K) - \tilde{\Gamma}_-(K')}{2} \right| = \frac{2\pi}{\hbar} A_0^2 |C|. \end{equation} When measuring the variation of the population in time, since $\frac{d^2N}{dt^2}<0$, $|C|$ should occur in the response.
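The selection rule behind Eq. (\ref{photocurrents}) can be checked numerically. The short Python sketch below is our own illustration and is not part of the original derivations: it takes the continuum Dirac Hamiltonian $H({\bf p})=p_x\sigma_x+p_y\sigma_y+m\sigma_z$ with assumed units $\hbar=v_F=1$ and a hypothetical mass $m=0.3$, verifies that the magnitude of the Berry curvature at the Dirac point matches $\frac{1}{2m^2}$, cf. Eq. (\ref{Fpypx}), and shows that only one circular polarization couples the two bands at ${\bf p}=0$. \begin{verbatim}
import numpy as np

# Illustrative check (assumed units hbar = v_F = 1, hypothetical mass m):
# H(p) = p_x sigma_x + p_y sigma_y + m sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
m = 0.3

def H(px, py):
    return px * sx + py * sy + m * sz

def lower_band(px, py):
    return np.linalg.eigh(H(px, py))[1][:, 0]  # lowest band |psi_+>

# Berry curvature near p = 0 from the Berry phase around a small plaquette;
# its magnitude should match 1/(2 m^2) (sign depends on conventions).
h = 1e-3
loop = [(-h, -h), (h, -h), (h, h), (-h, h)]
states = [lower_band(*p) for p in loop]
prod = 1.0 + 0j
for i in range(4):
    prod *= np.vdot(states[i], states[(i + 1) % 4])
print("|F| numeric:", abs(np.angle(prod)) / (2 * h) ** 2,
      " analytic 1/(2 m^2):", 1 / (2 * m ** 2))

# Interband matrix elements at the Dirac point for the two polarizations,
# |<psi_-| (+-i dH/dp_x + dH/dp_y) |psi_+>|^2: only one polarization couples.
vals, vecs = np.linalg.eigh(H(0.0, 0.0))
psi_p, psi_m = vecs[:, 0], vecs[:, 1]
for s, label in [(+1, "right (+)"), (-1, "left (-)")]:
    M_el = np.vdot(psi_m, (1j * s * sx + sy) @ psi_p)
    print(label, "|M|^2 =", abs(M_el) ** 2)
\end{verbatim} The vanishing of one of the two matrix elements is the circular dichroism that makes the frequency-integrated difference of rates proportional to $|C|$.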
\section{Time-Reversal Symmetry} \label{timereversal} From the time-dependent quantum equation \begin{equation} i\hbar\frac{\partial}{\partial t}\psi = H\psi \end{equation} if we modify $t\rightarrow -t$, this also requires modifying $i\rightarrow -i$ to preserve the validity of this equation. This has deep consequences; for instance, the momentum is also reversed, ${\bf p}=-i\hbar\bm{\nabla}\rightarrow -{\bf p}$. Therefore, we can represent the effect of time reversal through an operator, for instance, in the reciprocal space \begin{equation} \Theta |\psi({\bf k})\rangle = |\psi(-{\bf k})\rangle^*. \end{equation} The symbol $^*$ means that we change $i\rightarrow -i$ in the phase factors (and all the factors) associated with the wavefunction. We can equivalently absorb the effect of $\Theta$ in a redefinition of the Hamiltonian, $\Theta^{-1} H \Theta$, such that when calculating average values of observables on $\psi$ the effect is identical. For the Haldane model, changing $i\rightarrow -i$ in the phase factor of the $t_2$ term shows that the Hamiltonian is not time-reversal invariant. In the Kane-Mele situation, the wavefunction is a spinor, and the effect of time reversal must be slightly modified. As shown in Fig. \ref{KaneMeleSpectrum}, if we change time $t\rightarrow -t$, then we also have $K\rightarrow K'$ such that, for a fixed energy, the $\uparrow$ particles flip to $\downarrow$ and vice versa. This can be accounted for through a rotation acting in the Hilbert space of the spin degrees of freedom. The mass term at the Dirac points takes the form \begin{equation} H_{t_2}^{KM}=-h_z({\bf k}) \sigma_z\otimes\left( |\uparrow\rangle \langle \uparrow | - |\downarrow\rangle \langle \downarrow | \right). \end{equation} To define the time-reversal operator, this requires that \begin{equation} U^{-1} H_{t_2}^{KM} U = + h_z(-{\bf k}) \sigma_z\otimes\left( |\uparrow\rangle \langle \uparrow | - |\downarrow\rangle \langle \downarrow | \right), \end{equation} with $h_z(-{\bf k})=-h_z({\bf k})$, such that the (total) Hamiltonian is invariant under time-reversal symmetry. We can then define $U$ as a rotation perpendicular to the $z$ axis, here with \begin{equation} U=i(\mathbb{I}\otimes s_y)\Theta \end{equation} with the identity matrix acting in the sublattice space. In this way, we have $-is_y|\uparrow\rangle=|\downarrow\rangle$, $-is_y|\downarrow\rangle=-|\uparrow\rangle$ and similarly $\langle \uparrow | (i s_y)= \langle \downarrow |$ and $\langle \downarrow | (i s_y)= \langle \uparrow |(-1)$. Under time reversal, $s_z\rightarrow -s_z$. Also, we have the interesting property that $U^2=-1$ for a spin-$\frac{1}{2}$ particle and also for topological insulators of the $\hbox{AII}$ class. In comparison, for topological spinless models such as one-dimensional topological superconducting wires, time-reversal symmetry may be defined only through $\Theta$, such that $\Theta^2=\mathbb{I}$. Another way to see the time-reversal symmetry is to use the form from Eq. (\ref{classification}) \begin{equation} H({\bf k})=d_1({\bf k})\Gamma_1 + d_{12}({\bf k})\Gamma_{12} + d_{15}({\bf k})\Gamma_{15}, \end{equation} such that \begin{equation} H(-{\bf k}) = d_1({\bf k})\Gamma_1 + d_{12}(-{\bf k})\Gamma_{12}-d_{15}(-{\bf k})\Gamma_{15}. \end{equation} This is equivalent to modifying $K\rightarrow K'$. The time-reversal symmetry of the Hamiltonian can be formulated as follows.
Here, formally we should write down $d_{12}(K')=-d_{12}(K)$, but as mentioned previously, this simply means that we should modify $\tilde{\varphi}\rightarrow -\tilde{\varphi}$ between the two Dirac points. We recall that close to the two Dirac points, $d_1({\bf k})=v_F|{\bf k}|\cos\tilde{\varphi}$ and $d_{12}({\bf k})=v_F|{\bf k}|\sin\tilde{\varphi}$, where ${\bf k}$ measures a small deviation from the Dirac points. Now, we also have $d_{15}({\bf k})=-m\zeta$ such that $-d_{15}(-{\bf k})=d_{15}({\bf k})$. The $-$ sign in $-d_{15}(-{\bf k})\Gamma_{15}$, at the origin of the time-reversal symmetry of the Hamiltonian, is equivalent to modifying $s_z\rightarrow -s_z$ and therefore indeed corresponds to modifying $|\uparrow\rangle\rightarrow|\downarrow\rangle$ when time $t\rightarrow -t$. The time-reversal symmetry has other important consequences, such as the two-fold degeneracy of the energy spectrum related to the Kramers theorem. \section{Geometry in the Cube} \label{GeometryCube} Here, we elaborate further on the geometry in a cube related to a Berry curvature of the form \begin{equation} F_z = (-1)^z F_{k_x k_y} \theta(z), \end{equation} and to the divergence theorem in the continuum limit, assuming a dense limit of planes or an infinite number of planes. The goal is to show that the divergence theorem in this case may reveal a halved topological quantum number when $z\in [0,+\infty[$. The presence of the $\theta(z)$ function means that we have a face boundary at $z=0$ where $F_z$ jumps to zero in a step form in the vicinity of the vacuum. Setting the limit $z\rightarrow +\infty$ is similar to having $F_{k_x k_y}=0$ at the top surface in Eq. (\ref{number}), in the sense that there is no `flux' coming out through this region. We verify below that the system then behaves as a topological system with a fractional topological number $\frac{1}{2}$ arising from the step $\theta(z)$ function at $z=0$. Here, we can simply write \begin{equation} \label{curvaturez} \frac{\partial F_z}{\partial z} = -i \pi e^{-i \pi z} F_{k_x k_y}\theta(z) + e^{-i \pi z} F_{k_x k_y} \delta(z). \end{equation} Integrating $\frac{\partial F_z}{\partial z}$ with respect to $z$ as in Eq. (\ref{number}) with $z\in[0,+\infty[$, the second term at $z=0$ gives $\frac{1}{2}F_{k_x k_y}$. This precisely corresponds to the surface term at $z=0$. We can then verify that the real part of the first term gives $0$. Here, we can use the identities $\int_0^L \sin(\pi z)dz = \frac{1}{\pi}(1-\cos(\pi L)) = \frac{2}{\pi} \sin^2\frac{\pi L}{2}$ and $\sin\frac{\pi L}{2}= \frac{\pi}{2}\int_0^{L} \cos\left(\frac{\pi x}{2}\right)dx$, which converges to $0$ when $L\rightarrow +\infty$ from the definition of the Dirac $\delta$ function, in the sense of Fourier transforms and distributions. The second term in Eq. (\ref{curvaturez}) then leads to \begin{equation} \label{surface} \frac{1}{2\pi}\iint dk_x dk_y F_{k_x k_y} \int_0^{+\infty} \cos(\pi z) \delta(z) dz = \frac{1}{2}. \end{equation} The continuum limit in the $z$ direction then produces the same $\frac{1}{2}$ number as Eq. (\ref{infiniteseries}). This can also be verified from the evaluation of $F_z(k_x, k_y , z_{top}) - F_z(k_x,k_y,z_{bottom})$. Here, $F_z(k_x,k_y,z_{bottom})-F_z(k_x, k_y , z_{top})=\left(\cos(\pi L)-\frac{1}{2}\right)F_{k_x k_y}$ with $\hbox{lim}_{L\rightarrow +\infty}\cos(\pi L) = \hbox{lim}_{L\rightarrow +\infty}(\cos(\pi L) -1 + 1) = \hbox{lim}_{L\rightarrow +\infty} -2\sin^2\frac{\pi L}{2} +1 =+1$, again in the sense of Fourier transforms and distributions.
This leads to the same conclusion as above, that $F_z(k_x,k_y,z_{bottom}) - F_z(k_x, k_y , z_{top})=\frac{1}{2}$. It is important to mention that from Eq. (\ref{number}), the primitive of $\frac{\partial F_z}{\partial z}$ with respect to $z$ can be re-defined as $F_z(k_x, k_y , z) + a$, with $a$ being a number. On the other hand, from Eq. (\ref{surface}) we identify that the $\frac{1}{2}$ comes precisely from the boundary square at $z=0$. The surface becomes topologically equivalent to one Dirac point through the $\theta(z)$ function. \section{Interactions in a wire} \label{interactions} Here, within a Luttinger formalism, we verify the stability of the topological phase in the Kitaev model. For this purpose, we begin simply with the Jordan-Wigner transformation between fermions and spins, $c_i = S^-_i e^{i\pi\sum_{p<i} n_p}$, $c_i^{\dagger} = S^+_i e^{-i\pi\sum_{p<i} n_p}$ and $2 c^{\dagger}_i c_i -1=S_{iz}$, where $n_p=c^{\dagger}_p c_p$ represents the number of particles at site $p$. In the continuum limit $c_i\rightarrow c(x)$ and we have the dimensional correspondence $\sum_i c^{\dagger}_i c_i = \int \frac{dx}{a} c^{\dagger}(x) c(x)$. In the long-wavelength limit, spin operators located at sites $i$ and $j\neq i$ commute, such that $[S_i^+,S_j^-]=0$. This algebra is similar to that of bosons, leading to \begin{equation} c(x) = b(x) e^{\pm i\pi\int^x n(x')dx'}, \end{equation} with the bosonic superfluid operator $b(x)=\sqrt{\rho(x)}e^{i\theta(x)}$; the $\pm$ signs come from the fact that $e^{i\pi}=e^{-i\pi}$. Subtleties occur when taking the continuum limit. In particular, since the system is infinite and we assume spinless fermions, such that $(c^{\dagger}(x))^2=0$, the number of particles in the wire is also infinite. Since the density of bosons satisfies $b^{\dagger}(x) b(x) = c^{\dagger}(x)c(x) \propto \frac{1}{a}$ from dimensional analysis, this implies that \begin{equation} \hbox{lim}_{L\rightarrow +\infty}\int_0^{L} \langle c^{\dagger}(x) c(x)\rangle dx \approx \frac{L}{a}\rightarrow +\infty. \end{equation} This requires a proper regularization for the low-energy theory. Close to the two Fermi points located at $+k_F$ and $-k_F$ in the one-dimensional band structure, a common approach is to subtract the infinite number of particles such that the ground state corresponds to the `vacuum': all infinite observables are regularized through $\hat{{\cal O}} \rightarrow \hat{\cal O} - \langle GS| \hat{\cal O} |GS\rangle$, thereby subtracting the infinities. Since the physical observables will correspond to smooth deformations or fluctuations of the density of the particles around the two Fermi points, the number of particles located far away from the Fermi points will not substantially modify the low-energy theory. This has the important consequence that we can fix the mean density of particles, modulo a global phase, such that $b^{\dagger}(x) b(x) \sim \frac{1}{a}$. Following the usual normalization in the literature for the density of particles, we reach \begin{equation} c(x) = \frac{1}{\sqrt{2\pi a}} e^{i\theta(x)} e^{\pm i \pi\int^x n(x')dx'}.
\end{equation} Now, we can introduce right- and left-moving particles around these two Fermi points (with respectively a positive and a negative momentum) \begin{equation} c(x) = c_R(x) e^{+i k_F x} + c_L(x) e^{-i k_F x}, \end{equation} allowing us to fix appropriately the sign $\pm$ for each mover and to decompose the density as \begin{equation} n(x) = c^{\dagger}(x) c(x) = b^{\dagger}(x) b(x) = \rho_0 +\frac{\partial_x \phi}{\pi}. \end{equation} In this way, we verify the standard form for the Haldane Luttinger theory \cite{HaldaneLuttinger} \begin{equation} c_p = \frac{1}{\sqrt{2\pi a}}e^{i(\theta(x) + p\phi(x))} \end{equation} with $p=\pm 1$ for right- and left-movers. The motion of the particles is similar to that of a vibrating quantum string and, similarly to the harmonic oscillator, we have the commutation relations $[\frac{\phi(x)}{\pi},\theta(y) ] = iH(x-y)$, with $H(x-y)$ here being the Heaviside step function (not to be confused with the Hamiltonian). This is equivalent to $[\partial_x\phi(x),\theta(y)] = i\pi\delta(x-y)$. To ensure that $\{ c_L, c_R\} = 0$, Klein factors $U_p$ must be introduced, with $U_L U_R=-i$. We recall that evaluating $c_L c_R$ formally means evaluating $c_L(x-a) c_R(x)$ and requires the Baker-Campbell-Hausdorff formula for the operators. In this way, a Dirac Hamiltonian in one dimension (coming from the fact that we linearize the energy spectrum around the two Fermi points) takes the form \begin{eqnarray} H_0 &=& -iv_F \int dx (c^{\dagger}_R(x)\partial_x c_R(x) - c^{\dagger}_L(x)\partial_x c_L(x)) \\ \nonumber &=& \frac{v_F}{2\pi}\int dx \left[( \partial_x\phi(x))^2 +(\partial_x\theta(x))^2\right] \end{eqnarray} with the Fermi velocity $v_F= 2 ta\sin(k_F a)\sim 2ta$. Interactions can be easily introduced within this formalism, such that \begin{equation} H_{Int} = \int dx V n(x) n(x+a) \sim \int dx \frac{V}{(2\pi)^2}(\partial_x\phi(x))^2, \end{equation} leading to the Luttinger theory \begin{equation} H=H_0 +H_{Int} = \frac{v}{2\pi}\int dx \left[\frac{1}{K}(\partial_x\phi(x))^2 + K(\partial_x\theta(x))^2\right]. \end{equation} We have the identifications \begin{eqnarray} vK &=& v_F \\ \nonumber \frac{v}{K} &=& v_F +\frac{V}{2\pi}. \end{eqnarray} The first equality usually reflects Galilean invariance. For repulsive interactions, $V>0$, implying $K<1$, and for attractive interactions, $V<0$, implying $K>1$. For free electrons, we have $K=1$. Including a superconducting pairing term and assuming $\sin(k_F a)\sim 1$ close to half-filling in the Fourier transform leads to \begin{equation} \label{pairingfunction} \Delta c^{\dagger}_L(x) c^{\dagger}_R(x) +h.c. = \frac{\Delta}{2\pi a}\cos(2\theta(x)), \end{equation} where we have applied precisely $U_L U_R =-i$. To visualize the effect of this term on ground-state properties, we can write down \begin{eqnarray} \langle \cos(2\theta(x))\rangle &=& \frac{1}{Z}\hbox{Tr}\left(e^{-\beta H}\cos(2\theta(x))\right) \\ \nonumber &\approx& \Delta \int_{\frac{a}{v}}^{\beta} d\tau \langle \cos(2\theta(x,\tau))\cos(2\theta(x,0))\rangle_{H_0} \end{eqnarray} in the imaginary-time formalism, with $\beta=1/(k_B T)$ and $Z$ the partition function. The key property here is that the Green's function of the superfluid phase $\theta$ can be calculated from the Gaussian model $H_0$ and, for free electrons ($K=1$), this results in \begin{equation} \langle \cos(2\theta(x,\tau))\cos(2\theta(x,0))\rangle_{H_0} = e^{-2\langle[\theta(\tau)-\theta(0)]^2\rangle_{H_0}} = \frac{a^2}{v^2}\frac{1}{\tau^2}.
\end{equation} In this way, we verify the correspondence \begin{equation} \langle \cos(2\theta(x))\rangle \approx \Delta \frac{a}{v}, \end{equation} reflecting the formation of the superconducting gap in the band structure, such that $\Delta \frac{a}{v}$ represents the fraction of the particles participating in the superfluid or Bardeen-Cooper-Schrieffer (BCS) ground state. The stability of the topological BCS phase with respect to interactions can be understood in relation to the phenomenon of charge fractionalization in one dimension \cite{Safi,Pham,Steinberg,fractionalcharges}. In the presence of interactions in one dimension, an electron gives rise to fractional charges such that $N_R + N_L =N$ and $v(N_R - N_L)=v_F J$, where $N$ corresponds to the number of injected electrons and $J$ measures the difference $(N_R-N_L)$ between the number of electrons going to the right and to the left. In this way, when we inject an electron at $+k_F$ this corresponds to $N=+1$ and $J=+1$, resulting in $N_R=\frac{1+K}{2}$ and $N_L = \frac{1-K}{2}$. If we inject an electron at $-k_F$, this corresponds to $N=+1$ and $J=-1$ such that $N_R=\frac{1-K}{2}$ and $N_L=\frac{1+K}{2}$. Therefore, when we inject a Cooper pair from the superconducting reservoir, we inject both an electron at $+k_F$ and one at $-k_F$, such that on average we have a total charge $N_R^{tot}=+1$ moving to the right and a total charge $N_L^{tot}=+1$ moving to the left, the two charges remaining entangled through the BCS mechanism. In this way, moderate (repulsive) interactions in the wire will not alter the formation of Cooper pairs and therefore of the superfluid state. From renormalization group arguments, we justify below that the pairing term flows to values of the order of the kinetic term $t$, justifying why the Majorana fermion structure remains similar. The effect of the interactions can be understood from the change of variables $\theta=\frac{1}{\sqrt{K}}\tilde{\theta}$ and $\phi={\sqrt{K}}\tilde{\phi}$, such that the Hamiltonian for the variables $\tilde{\phi}$ and $\tilde{\theta}$ is identical to $H_0$ and the pairing term becomes $\cos(2\theta)=\cos(\frac{2\tilde{\theta}}{\sqrt{K}})$. The dressing of the superfluid phase with interactions reflects the fact that the superflow acquires a different velocity. Using standard renormalization group techniques and developing the partition function to second order in $\Delta$, we obtain the equation \begin{equation} \frac{d\Delta}{dl} = \left(2-\frac{1}{K}\right)\Delta, \end{equation} with $l = \ln\left(\frac{L}{a}\right)$. For $K=1$, solving this equation leads to a typical length scale defined through \begin{equation} \frac{a\Delta}{v} = \frac{a}{L}, \end{equation} with $\Delta(a)=\Delta$, at which the pairing term flows to strong coupling and becomes renormalized to a value close to the hopping term, $\Delta(L)\sim \frac{v}{a}$. In the presence of interactions, $\Delta(l)$ still flows to strong coupling as long as $K>\frac{1}{2}$, reaching the same typical value $\frac{v}{a}$ at a typical length scale $L\sim a \left(\frac{v}{a\Delta}\right)^{\frac{1}{2-\frac{1}{K}}}$. For long wavelengths corresponding to lengths larger than $L$, the `effective' theory is similar to that of the topological superconducting wire with $t=\Delta$, which is another way to interpret the stability of the superconducting phase towards moderate interactions. This can be viewed as an application of the notion of symmetry-protected topological phase, as the interaction preserves the $\mathbb{Z}_2$ symmetry $c\rightarrow -c$. \end{appendix}
\section{Introduction} \label{sec:introduction} Since the pioneering and fundamental works of Shannon \cite{seminal-Shannon}, the dominant paradigm for designing a communication system has been that communications must satisfy quality requirements. Typically, the bit error rate, the packet error rate, the outage probability, or the distortion level must be minimized. It turns out that this conventional paradigm, which consists in pursuing communication reliability or possibly security, may not be suited to scenarios where communications occur in order for a given task to be executed. For instance, transmitting an image of 1 Mbyte to a receiver that only needs to decide about the absence/presence of a given object in the image might be very inefficient. In this example, the receiver only needs one bit of information, and this bit could have been sent directly by the transmitter, making the use of the communication and computation resources much more efficient. This simple example shows the potential of making a communication task- or goal-oriented (GO). In this paper, the focus is on the problem of signal compression when the compressed signal is used for a given task which is known. More precisely, we focus on the signal quantization problem, which is often a key element of a signal transmitter. Introducing and developing a goal-oriented quantization (GOQ) approach is very relevant for many applications. We will mention three of them. First, it appears in controlled networks that are built on a communication network. A simple example is given by modern power systems such as the smart grid. A data measurement system such as a smart meter may have to quantize or cluster the measured series for complexity or privacy reasons \cite{poor-privacy}. It is essential that the quantization or clustering operation does not impact too much the quality of the decision (e.g., a power consumption scheduling strategy) taken, e.g., by an aggregator. Second, GOQ is fully relevant for wireless resource allocation (RA) problems. This is the case, for instance, when a wireless transmitter receives some quantized information from the receivers/sensors through a limited-rate feedback channel \cite{zheng-TSP-2007,Lee-TWC-2015,Yeung-TC-2009,Love-JSAC-2008,Kountouris-ICASSP-2007}. Third, for future wireless communication systems such as 6G systems \cite{Saad-NW-2020,Giordani-CM-2020,Bertin-IEEE-2022,Letaief-CM-2019}, GOQ and more generally GO data compression constitutes a very powerful degree of freedom for increasing the final spectral efficiency, since only the minimum number of bits needed to execute the task is transmitted through the radio channel. The conventional quantization approach \cite{Gray_TIT_1998} is to minimize some distortion measure between the original signal and its representation, regardless of the system task. In the literature, there exist works on the problem of adapting the quantizer to the objective. For instance, in the wireless literature, the problem of quantizing channel state information (CSI) for the feedback channel has been well studied (see e.g., \cite{Rao-TIT-2006} for a typical example). The practical relevance of low-rate scalar quantizers to transmit high-dimensional signals has been defended for MIMO systems in \cite{Rini-2017}\cite{Choi-2017}\cite{Li-2017}.
By combining the system task with the quantization process, \cite{Eldar-TSP1-2019}\cite{Eldar-ISIT-2019} investigated the influence of scalar quantization on specific tasks and characterized the limiting performance in the cases of recovering a lower-dimensional linear transformation of the analog signal and of reconstructing a quadratic function of the received signals. Deep-learning-based quantizers have also been considered in \cite{Choi-arxiv-2019,Hanna-JSAIT-2020,Hanna-JSAIT-2021,Sohrabi-TWC-2012} to adapt to the task by training neural networks. The main point to be noticed is that, in all existing works, either the impact of quantization on a given performance metric is studied, or a very specific performance metric is considered (the Shannon transmission rate being by far the most popular one) and the proposed quantizer design is often an ad hoc scheme. In contrast with this line of research works, we introduce a general framework for GOQ, illustrated in Fig. \ref{fig:GOQ-OP}. The task or goal of the receiver is chosen to be modeled by a generic optimization problem (OP) which contains both decision variables and parameters. One fundamental point of the conducted analysis is that, both for the performance analysis and the design, the goal function is a generic function $f(x;g)$, $x$ being the decision (of dimension $d$) to be made based on a quantized version of the function parameters $g$ (of dimension $p$). This setting allows us to derive analytical results and acquire completely new insights into how to adapt a quantizer to the goal, these insights relying in part on the high resolution (HR) regime analysis \cite{Misra_TIT_2011,Fleicher_TIT_1964,Farias-TSP-2014}. To be sufficiently complete concerning the technical background associated with the present contributions, we also would like to clearly position our work w.r.t. recent works on semantic communications \cite{Kountouris-CM-2021,Shi-CM-2021,Barbarossa-CN-2021,Zhang-ISIT-2021,Qin-TSP-2021,Sana-2021,Qin-JSAC-2021,Qin-JSAC2-2021,Yun-ISWCS-2021,Saad-arXiv-2022,Niyato-arXiv-2022,Lan-2021,Debbah-Tcom-2021}. Semantics is employed here with its etymological meaning, that of significance. It can be seen as a measure of the usefulness/importance of messages with respect to the system task \cite{Kountouris-CM-2021}. There have been several tutorials and surveys discussing possible structures and architectures of this novel communication paradigm. By studying the semantic encoder and semantic noise, \cite{Shi-CM-2021} proposed two models based on a shared knowledge graph and on semantic entropy, respectively. Reference \cite{Barbarossa-CN-2021} indicated that by properly recognizing and extracting the information relevant to the system task, the communication efficiency and reliability can be enhanced without using more bandwidth. In \cite{Kountouris-CM-2021}, it is explained how semantic information attributes of transmitted messages could be exploited, which entails a task-oriented unification of information generation, transmission, and reconstruction. By introducing intrinsic states and extrinsic observations, \cite{Zhang-ISIT-2021} uses indirect rate-distortion theory to characterize the reconstruction error of semantic information induced by lossy source coding schemes. The information bottleneck is another approach to find the optimal tradeoff between compression and reliability.
Inspired by this approach, \cite{Qin-TSP-2021} proposed a loss function whose relevance was supported in \cite{Sana-2021}, and designed an end-to-end DeepSC network architecture, using a Transformer as the semantic encoder and joint source-channel coding schemes to ensure the semantic information transmission. Similar models \cite{Qin-JSAC-2021}\cite{Qin-JSAC2-2021} have been extended to audio transmission and Internet-of-Things (IoT) applications. Other learning tools have also been implemented to extract important attributes in semantic communications, such as reinforcement learning \cite{Yun-ISWCS-2021}, curriculum learning \cite{Saad-arXiv-2022}, and distributed learning \cite{Niyato-arXiv-2022}\cite{Lan-2021}. Some additional information can also be used for the semantic encoder, such as contextual reasoning \cite{Debbah-Tcom-2021}. Compared to the quoted works, three main points have to be noticed. First, most works focus on the novel communication architecture or use learning tools to extract important features, but they are not supported by theoretical derivations. Second, we not only consider the transmission problem of the semantic information but also the influence of distorted information on the subsequent decision-making (DM) entity and the system task, namely, how the semantic information exchange will affect the system performance (effectiveness level). Third, we address a precise technical problem, namely the quantization problem, and assume a fully generic goal. The closest contributions to the present work have been produced by the authors through \cite{Zhang_Wiopt_2017}\cite{Zou_WinCom_2018}\cite{hang-pimrc-2019}\cite{Zhang-AE-2021}. To the best of the authors' knowledge, the concept of GOQ has been introduced for the first time in \cite{Zhang_Wiopt_2017} and applied in other contexts in \cite{Zou_WinCom_2018}\cite{hang-pimrc-2019}\cite{Zhang-AE-2021}. In these references, mainly numerical results are provided and the focus is on a Lloyd-Max (LM)-type algorithm \cite{Lloyd}\cite{Max}. In particular, the formal HR analysis is not conducted and the fundamental role of the goal function is not investigated. This paper is structured as follows. In Sec. \ref{sec:problem_formulation}, we define the performance metric of a GO quantizer. In Sec. \ref{sec:scalar_approximation}, the performance analysis of scalar GOQ is conducted in the HR regime and the impact of the goal function on the optimality loss (OL) is assessed through analytical arguments. In Sec. \ref{sec:vector_approximation}, we address the more challenging case of vector GOQ by providing an HR equivalent of the OL and a practical GOQ algorithm. In Sec. \ref{sec:Numerical_Results}, we show the potential benefit from using GOQ for important RA problems that are relevant for quantizing information in wireless, controlled, and power systems. Sec. \ref{sec:Conclusions} concludes the paper. \section{Problem Formulation} \label{sec:problem_formulation} \begin{definition} Let $p\geq 1$ be an integer and $\mathcal{G}$ be a subset of $\mathbb{R}^p$. Let $M \geq 1$ be an integer.
An $M-$quantizer $\mathcal{Q}_M$ is fully determined by a piecewise constant function $Q_M: \mathcal{G} \rightarrow \mathcal{G} $ that is defined by $Q_M(g) = z_m$ for all $g \in \mathcal{G}_m$, where: $m \in \{1,...,M\}$, the sets $\mathcal{G}_1,...,\mathcal{G}_M$ are called the quantization regions and define a partition of $\mathcal{G}$, and the points $z_1,...,z_M$ are called the region representatives. \end{definition} \begin{figure}[tbp] \centering{}\includegraphics[scale=0.16]{GOC_v4.jpg} \caption{Proposed definition for the goal-oriented quantization approach \label{fig:GOQ-OP}} \end{figure} Since $M$ is a fixed number, from now on and for the sake of clarity, we will omit the subscript $M$ from the quantization function and merely refer to it as $Q$. We will only make $M$ appear for comparison purposes, mainly in the simulations. Also, when needed, we will use the quantity $R = \log_2 M$, which represents the number of quantization bits per sample. Equipped with these notations, we can now define mathematically the GO approach we propose for quantization. \begin{definition} Let $\chi(g)$ be the decision function providing the minimum points for the goal function $f(x;g)$, whose decision variable is $x \in \mathbb{R}^d$ ($d\geq 1$ is an integer), $g$ being fixed: \begin{equation} \chi(g)\in\arg\underset{{x}\in\mathcal{X}}{\min} \quad f({x};{g}) \label{eq:ODF}. \end{equation} The optimality loss induced by quantization is defined by: \begin{equation} L \left( Q;f\right) = \alpha_f \int_{{g}\in\mathcal{G}} \left[f \left(\chi \left({Q} \left( {g}\right)\right);{g} \right) - f \left(\chi \left({g} \right);{g} \right) \right] \phi \left({g} \right)\mathrm{d}{g} \label{eq:def-OL} \end{equation} where $\phi$ is the probability density function (p.d.f.) of $g$ and $\alpha_f >0$ is a scaling/normalizing factor which does not depend on $Q$. \end{definition} Several comments concerning the OL definition are in order. Note that the conventional quantization approach can be obtained from the GOQ approach by observing that the second term of the OL functional $L(Q;f)$ (that is, a function of a function) is independent of $Q$ and by specializing $f$ as $f(x;g) = \| x - g \|^2$, $\|. \|$ standing for the Euclidean norm. With the conventional approach, quantization aims at providing a version of $g$ that resembles $g$. However, under the GOQ approach, what matters is the quality of the end decision taken. The design of such a quantizer therefore depends on the mathematical properties of $f$ and the underlying decision function $\chi$, which constitutes a key difference w.r.t. the conventional approach. In this respect, studying analytically the relationship between the nature of $f$ and the quantization performance is a nontrivial problem. For instance, for a fixed OL level, what do the functions requiring a small (resp. large) $M$ (that is, a small -resp. large- amount of quantization resources) look like? The normalizing factor $\alpha_f$ is precisely introduced to conduct fair comparisons between different goal functions. From the OL definition, it can also be noticed that the knowledge of the p.d.f. of $g$ is implicitly assumed.
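To make the ingredients of the OL definition concrete, the following minimal Python sketch (our own illustration; the quadratic goal, the interval $[0,1]$, $M=8$, and $\alpha_f=1$ are arbitrary choices) evaluates $L(Q;f)$ by Monte Carlo for the toy goal $f(x;g)=(x-g)^2$, for which the ODF is $\chi(g)=g$ and the OL reduces to the usual distortion: \begin{verbatim}
import numpy as np

# Illustrative Monte-Carlo evaluation of the optimality loss (OL) for the toy
# goal f(x; g) = (x - g)^2, with ODF chi(g) = g and alpha_f = 1 (assumptions).
rng = np.random.default_rng(0)
M = 8                                   # number of quantization regions
edges = np.linspace(0.0, 1.0, M + 1)
reps = 0.5 * (edges[:-1] + edges[1:])   # region representatives z_1..z_M

def quantize(g):                        # Q(g): map g to its representative
    idx = np.clip(np.searchsorted(edges, g) - 1, 0, M - 1)
    return reps[idx]

def f(x, g):
    return (x - g) ** 2

def chi(g):                             # optimal decision function here
    return g

g = rng.uniform(0.0, 1.0, 100_000)      # samples from phi (uniform here)
OL = np.mean(f(chi(quantize(g)), g) - f(chi(g), g))
print(OL, "vs HR prediction 1/(12 M^2) =", 1 / (12 * M ** 2))
\end{verbatim} For this toy case, the empirical OL matches the classical uniform-quantization distortion $\frac{1}{12M^2}$, which provides a simple sanity check of the definition; the data-based form of the OL discussed next is obtained in exactly the same way, with the samples coming from a training set instead of a known p.d.f.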
One may replace the statistical mean with an empirical mean version and rewrite the OL under a data-based form where the integral is replaced with a sum over the data samples obtained from a training set. The knowledge of the input distribution $\phi$ is indeed convenient, especially for the analysis; however, it is not required for the design. This is why the proposed GO quantization algorithm is applied to the problem of data clustering, in which only a database is available. The case of a time-varying input distribution is not addressed here and would require designing an adaptive quantizer, which is left as a relevant extension of the present work. Also note that the set $\mathcal{X}$ and the function $\chi(g)$ are assumed to integrate the possible constraints on the decision $x$. Lastly, note that when the optimal decision function (ODF) $\chi(g)$ is not available, other decision functions that are suboptimal but easier to implement may be considered; this situation will be studied in the numerical analysis. In what follows, the main focus is on the regime of large $M$, which is called the high resolution regime. This regime is not only very useful to conduct the analysis and make interpretations but also to provide neat approximants or expressions. These expressions are both exploited to obtain useful insights for the design of general quantizers and used in the proposed quantization algorithm. As will be seen in the numerical performance analysis, the proposed algorithm performs remarkably well in the low resolution regime. Note that the direct minimization of the general form of the OL is an NP-hard problem since it is a mathematical generalization of the conventional quantization problem (see e.g., \cite{Garey-TIT-1982,Hanna-JSAIT-2020}). Therefore, using approximants and suboptimal procedures is a classical approach in the area of quantization, especially for vector quantization. \section{Scalar GOQ in the high resolution regime} \label{sec:scalar_approximation} In this section we assume that both the decision to be taken and the parameter to be quantized are scalar, that is, $d = p = 1$. For a wireless communication, this would occur, for instance, when a receiver has to report a scalar channel quality indicator (such as the SINR, the carrier/interference ratio, or the received signal power) to a transmitter, and the transmitter in turn tunes its transmit power. Similarly, a real-time pricing system \cite{mohsenian-2010} in which an electrical power consumer reports its time-varying satisfaction parameter to an aggregator who chooses the price dynamically corresponds to the scalar case. Additionally, many systems, for complexity reasons, implement a set of independent scalar quantizers instead of a vector one. This is the case, for example, for some image compression standards such as JPEG, or for MIMO communications with quantized CSI feedback \cite{Xu-TSP-2010,Makki-TCOMM-2013,Makki-TCOMM-2015}. In the general case, finding a quantizer amounts to finding both the regions $\mathcal{G}_1,...,\mathcal{G}_M$ (which are just intervals in the scalar case) and the representatives $z_1,...,z_M$. However, the calculation of regions and representatives can be simplified in the HR regime.
One can use a probability density function to represent the density of the quantization points, which allows one to approximate summations by integrals. To be precise, we assume the HR regime in the following sense \cite{Gray_TIT_1998}. For any point $g$, let us introduce the quantization step $\Delta(g)$, defined as the length of the quantization interval containing $g$. Then, let us introduce the (interval/representative) density function $ \rho(g) $ which is defined as follows: \begin{equation} \rho(g) = \lim_{M \rightarrow +\infty} \frac{1}{M \Delta(g)}. \label{eq:definition_representative_density} \end{equation} \subsection{Optimal quantization interval density function} By construction, the number of quantization intervals or representatives in any interval $[a,b]$ can be approximated by $M\displaystyle\int_a^b\rho(g)\mathrm{d}g$. Therefore, the problem of finding a GOQ in the HR regime amounts to finding the density function that minimizes the OL, which we will denote, with a small abuse of notation but for simplicity, by $L(\rho; f)$. Remarkably, the expression of the optimal density in the HR regime can be obtained, at least by assuming the goal and decision functions to be sufficiently regular or smooth. This is the purpose of the next proposition. \begin{proposition}\label{prop:optimal-density} Let $f$ be a fixed goal function. Assume that $f$ is $\kappa$ times differentiable and $\chi$ is differentiable, with \begin{equation} \kappa = \min \left \{i \in \mathbb{N} : \left.\forall g,\,\,\frac{\partial^{i}f(x;g)}{\partial x^{i}} \right|_{x=\chi(g)}\neq 0 \,\,\mathrm{a.s.} \right \}. \label{eq:k_definition} \end{equation} In the HR regime the OL $L(\rho; f)$ is minimized by using the following quantization interval/representative density function: \begin{equation} \rho^{\star}(g)= C \left[\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g} \right)^\kappa\frac{\partial^{\kappa}f(\chi\left(g\right);g)}{\partial x^{\kappa}}\phi(g) \right]^{\frac{1}{\kappa+1}} \label{eq:lambda_op_general} \end{equation} where $\frac{1}{C} = \displaystyle\int_\mathcal{G} \left [\left(\frac{\mathrm{d}\chi(t)}{\mathrm{d} t}\right)^\kappa\frac{\partial^{\kappa}f(\chi\left(t\right);t)}{\partial x^{\kappa}}\phi(t)\right ]^{\frac{1}{\kappa+1}}\mathrm{d}t $. \end{proposition} \begin{proof} See Appendix A. \end{proof} Although the optimal density is derived in the special case of scalar quantities and the HR regime, the corresponding result is insightful both for the analysis and the design. The conventional result when distortion minimization is pursued is that the optimal density $\rho^{\star}$ is proportional to $\phi^{\frac{1}{3}} \left(g \right)$. In practice this means allocating more quantization bits to more likely realizations of $g$. Under the GOQ approach, this conclusion is called into question. Indeed, the best density is seen to result from a combined effect of the parameter density $\phi$, the variation speed of $f$ w.r.t. the decision $x$ (that is, the sensitivity of the goal regarding the decision), and the smoothness of the decision function $\chi$ w.r.t. the parameter to be quantized. As a consequence, all three factors need to be accounted for in practice to design a good GOQ and, in particular, to allocate quantization bits. Let us illustrate this with a simple example that is relevant to the problem of energy-efficient wireless transmit power control.
\textit{Example.} Consider the following energy-efficiency (EE) performance metric $f(x;g) = -\frac{\exp \left(-\frac{c}{xg} \right)}{x^{\eta}}$ with $c > 0$ and $\eta \geq 2$. Here $x$ represents the transmit power and $g$ the channel gain \cite{vero-TSP-2011}. Assume the channel gain $g$ is exponentially distributed, that is, $\phi \left( g\right) = \frac{1}{\overline{g}} \exp \left(-\frac{g}{\overline{g}} \right)$ with $\mathbb{E}(g) = \overline{g} > 0$. One obtains that $\kappa=2$, $\chi(g) = \frac{c}{\eta g}$ and \begin{equation} \rho^{\star}(g)= C \left[ \frac{\eta^{\eta+1}}{c^\eta e^\eta}g^{\eta-2} \phi \left( g \right) \right]^{\frac{1}{3}}. \end{equation} For instance, for $\eta=3$, it is easy to check that the quantization interval density $\rho^{\star}$ is increasing for $0 \leq g \leq \overline{g}$ and then decreasing for $ g \geq \overline{g}$. This result thus markedly differs from the conventional distortion-based approach. Indeed, under the latter approach, one would allocate more quantization bits to small values of the channel gain (since $\phi$ is strictly decreasing). Under the GOQ approach, most of the quantization bits should be allocated to values around the mean value of $g$. In this section, we have been searching for the best scalar GOQ for a given goal function $f$. Now, we would like to provide some elements about the relationship between the nature of $f$ and the quantization performance. For example, it is known that compressing a signal whose energy is concentrated at small frequencies is generally an easy task. Similarly, here, we would like to know more about the connection between the regularity properties of the goal function and the level of difficulty to quantize its parameters. Since this relevant issue constitutes a challenging mathematical problem, we only provide some preliminary results to explore this promising direction. For this purpose, we assume the chosen quantizer to be the optimal HR quantizer characterized by $\rho^{\star}$, and study the impact of $f$ on $L(\rho^{\star}; f)$. To be rigorous and clearly indicate the dependency of $\rho^{\star}$ on $f$, we will use the notation $ \rho^{\star}_f $. \subsection{About choosing the scaling factor $\alpha_f$} So far, since $f$ was fixed, the scaling factor $\alpha_f$ in the definition of the OL $L$ was not relevant. But when it comes to comparing $L(\rho_f^{\star}; f)$ for different goal functions $f$, this factor plays an important role. Indeed, if one wants to compare the compression hardness of two functions, the retained performance criterion has to possess some invariance properties. In particular, it should be invariant to affine transformations. The OL does not have this property with respect to $f$, since a function of the form $F = A f +B$ (with $A>0$) would produce a large OL when $A$ is large even if the OL obtained for the original $f$ is small. Hence the need for normalizing the OL properly and thus the presence of $\alpha_f$. Here, we consider two choices for $\alpha_f$, which amounts to considering two different reference cases for the performance comparison. The first reference case is uniform quantization. For this case, the normalizing factor is denoted by $\alpha_f^{\mathrm{UQ}}$ and chosen to be the reciprocal of the OL obtained when using an HR uniform quantizer (UQ).
It expresses as: \begin{equation} \frac{1}{\alpha_f^{\mathrm{UQ}}} = \displaystyle\int_{\mathcal{G}} C_g^{-\kappa}\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^\kappa\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g \end{equation} where $C_g$ is the constant (uniform) density satisfying $\displaystyle \int_{\mathcal{G}}C_g\mathrm{d}g=1$. This case allows one to quantify the potential gain from using a GOQ instead of a standard quantizer which is independent of the goal function. The second reference case we consider corresponds to the situation where the DM entity takes a constant decision (CD) independently of the value of $g$. This would correspond to the situation where no instantaneous information about $g$ is available and only statistics can be exploited. Although this reference case is not necessarily the right benchmark for a given application, it is still of interest for extracting useful insights because, this time, it is not about comparing two quantizers but more about measuring the intrinsic difficulty to compress a given function. By defining the chosen constant decision $\overline{x}$ as $\overline{x} \in\underset{x \in \mathcal{X}}{\arg\min}\ \mathbb{E}_g \left[ f \left(x;g \right) - f \left(\chi(g); g\right) \right]$, the corresponding normalizing factor is denoted by $\alpha_f^{\mathrm{CD}}$ and expresses as: \begin{equation} \frac{1}{\alpha_f^{\mathrm{CD}}}= \frac{1}{(2M)^{\kappa} \kappa! (\kappa+1)} \displaystyle \int_{g \in \mathcal{G}} \left[ f \left(\overline{x};g \right) - f \left(\chi \left(g\right);g \right) \right] \phi \left( g\right)\mathrm{d}g. \end{equation} The above quantity represents the OL obtained when using the best CD, multiplied by a term in $\kappa$ which comes from the HR approximation (see App. A for more details). Equipped with these two versions of the (normalized) OL, comparing different goal functions becomes a well-posed problem. For this purpose, we have selected several functions \cite{vero-TSP-2011,Meshkati-JSAC-2007,Berri-EUSIPCO-2016} that frequently appear in wireless resource allocation problems. For the selected functions, all quantities at hand can be expressed analytically and the integral associated with the OL can be computed. The obtained results appear in Table \ref{tab:scalr_cost_function_comparison}. With the parameter space taken to be the interval $[0.1,10]$, the table assumes two different choices for the p.d.f. $\phi$, the uniform distribution and a truncated exponential distribution, namely $\phi(g)=\frac{\exp(-g)}{\displaystyle \int_{0.1}^{10}\exp(-x)\mathrm{d}x}$. The two columns providing the value of the OL allow one to establish some hierarchy between the selected functions. The obtained results suggest that logarithm-type goal functions provide a relatively small OL. These types of function would be qualified as easy to compress, which means for example that a rough description of the parameter is sufficient to take a good decision. Quantizing the parameter finely would lead to a waste of resources. This interpretation, which is based on the HR analysis, will be confirmed by simulations performed in arbitrary regimes. In a wireless system, this would mean, e.g., that transmission-rate-type performance metrics are not very sensitive to quantization noise and therefore a coarse feedback on CSI is suited to the goal.
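Comparisons of this kind can also be reproduced numerically at finite $M$. The following Python sketch (our own illustration; $M$, the grids, and the finite-difference step are arbitrary choices) considers the EE-type goal $f(x;g)=-\exp(-\frac{1}{gx})/x$ of the table written as a minimization, consistently with Eq. (\ref{eq:ODF}), for which $\chi(g)=1/g$ and $\kappa=2$. It builds the HR-optimal density of Proposition \ref{prop:optimal-density}, places the interval edges by the usual companding rule, and estimates by Monte Carlo the OL ratio w.r.t. a uniform quantizer: \begin{verbatim}
import numpy as np

# Illustrative finite-M comparison between the HR-optimal GOQ (Prop. 1) and a
# uniform quantizer (UQ); g is uniform on [a, b] = [0.1, 10] (assumptions).
rng = np.random.default_rng(1)
a, b, M = 0.1, 10.0, 8

def f(x, g):
    return -np.exp(-1.0 / (g * x)) / x

def chi(g):                        # optimal decision function for this goal
    return 1.0 / g

def f_xx(g, h=1e-4):               # d^2 f / dx^2 at x = chi(g), numerically
    x = chi(g)
    return (f(x + h, g) - 2 * f(x, g) + f(x - h, g)) / h ** 2

# HR-optimal point density: rho* proportional to (chi'^2 f_xx phi)^(1/3)
grid = np.linspace(a, b, 20001)
dchi = -1.0 / grid ** 2
phi = np.full_like(grid, 1.0 / (b - a))
rho = (dchi ** 2 * f_xx(grid) * phi) ** (1.0 / 3.0)
cdf = np.cumsum(rho); cdf /= cdf[-1]

def goq_edges():                   # companding: edges at equal rho*-mass
    targets = np.arange(1, M) / M
    return np.concatenate(([a], np.interp(targets, cdf, grid), [b]))

def ol(edges, g):                  # empirical OL with midpoint representatives
    reps = 0.5 * (edges[:-1] + edges[1:])
    z = reps[np.clip(np.searchsorted(edges, g) - 1, 0, M - 1)]
    return np.mean(f(chi(z), g) - f(chi(g), g))

g = rng.uniform(a, b, 200_000)
print("OL(GOQ)/OL(UQ) =", ol(goq_edges(), g) / ol(np.linspace(a, b, M + 1), g))
\end{verbatim} As $M$ grows, this ratio should approach the HR value $0.648$ reported for this goal function in the table.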
Table \ref{tab:scalr_cost_function_comparison} shows a different behavior for exponential-type functions, which are typically used to model energy-efficiency in wireless systems. These types of functions require a more precise description of the function parameters (e.g., the CSI). Implementing the GOQ approach for such functions is nonetheless seen to provide a quite significant gain in terms of OL when compared to uniform quantization. We see that the HR analysis of the scalar quantization case provides useful insights that could both be used for an ad hoc design of a goal-oriented quantizer and be deepened by considering more complex performance metrics.

\begin{table}[tbp] \centering \begin{tabular}{|p{1.8cm}|p{1.05cm}|p{1.28cm}|p{1.25cm}|p{1.25cm}|} \hline \textbf{{Goal function $f(x;g)$}} & {$\textbf{p.d.f.}$~$\phi\left(g \right)$} &{$\textbf{ODF}$~$\chi\left(g \right)$} & \textbf{OL} ($\alpha_f= \alpha_f^{\mathrm{UQ}}$) & \textbf{OL} ($\alpha_f= \alpha_f^{\mathrm{CD}}$) \\ \hline {$\log \left(1+10gx \right)-x$} & {{uniform}}& {$[1-\frac{1}{10g}]^+$} & {$0.00399$}& $0.0488$\\ \hline {$\frac{\exp(-\frac{1}{gx})}{x}$} & {{uniform}} & {$\frac{1}{g}$} & {$0.648$} & $6.5943$\\ \hline {$\frac{(1-\exp(-{gx}))^{10}}{x}$} &{{uniform}} & {$\frac{3.6150}{g}$} & {$0.648$} & $19.4565$\\ \hline {$(x-g)^2$} &uniform & {$g$} & {$1$}& $24$ \\ \hline {$\log \left(1+10gx \right)-x$} & {{exp}}& {$[1-\frac{1}{10g}]^+$} & {$0.0019$} & 0.4859\\ \hline {$\frac{\exp(-\frac{1}{gx})}{x}$} & {{exp}} & {$\frac{1}{g}$} & {$0.083$}& 18.75\\ \hline {$\frac{(1-\exp(-{gx}))^{10}}{x}$} &{{exp}} & {$\frac{3.6150}{g}$} & {$0.083$}& 61.12\\ \hline {$(x-g)^2$} &exp & {$g$} & {$0.24$} & 48.50\\ \hline \end{tabular} \caption{Comparison of different goal functions in terms of normalized OL \label{tab:scalr_cost_function_comparison}} \end{table}

\section{Vector GOQ: High resolution analysis and proposed quantization algorithm} \label{sec:vector_approximation}

\subsection{High resolution analysis}

As motivated in Sec.~\ref{sec:scalar_approximation}, in some applications vector quantization is not used, for reasons such as computational complexity. This is the case, for instance, in MIMO systems, where the entries of the channel transfer matrix are quantized by a set of scalar quantizers. But, for optimality reasons or because of the very definition of the quantization problem, vector quantization may be necessary. For instance, it is of high practical interest to be able to cluster time series of the non-flexible electrical power consumption over one day \cite{beaude-tsg,clustering,Zhang-AE-2021}, which leads to a sample dimension of $p=48$ when the power signal is sampled every $30$ minutes. By construction, this clustering problem is similar to a vector quantization problem in which one wants to create a certain number ($M$ with our notation) of data subsets. For this specific problem one may want to fix $M$ to a small number, say $M=4$, and distinguish between $4$ consumption behaviors. For the scalar case, it has been seen that the HR regime allows one to determine the best goal-oriented quantizer, which is fully characterized by the density function $\rho^{\star}$ (see (\ref{eq:lambda_op_general})). In the vector case, however, even under the HR assumption, the problem remains challenging in general. This is one of the reasons why we resort to approximations. The full analytical characterization of the corresponding approximations is left as a relevant extension of the present work.
The goal in this paper is threefold: to show how these approximations can be used for the quantizer design; to support the choices made by simulations performed with a low and moderate number of quantization bits; and to focus on the potential gains that can be brought by the GOQ approach. One of the main results of this section consists in providing an exploitable approximation of the OL in the vector case. This approximation will be directly exploited later in this section for the quantizer design. The result is stated through the following proposition.

\begin{proposition} \label{prop:upper_lower_bound} Assume $d\geq 1$, $p \geq 1$, and $\kappa=2$. Assume $f$ and $\chi$ are twice differentiable. Denote by $\mathbf{H}_f(x;g)$ the Hessian matrix of $f$ w.r.t. $x$ and by $\mathbf{J}_{\chi}(g)$ the Jacobian matrix of the decision function $\chi$ evaluated at $g$. In the regime of large $M$, the optimality loss function $L(Q;f)$ defined as in (\ref{eq:def-OL}) can be approximated as follows: {\footnotesize\begin{equation} L(Q;f) = \underbrace{\alpha_f \sum_{m=1}^M \int_{\mathcal{G}_m} (g - z_m)^{\mathrm{T}} \mathbf{A}_{f,\chi}(g) (g - z_m) \phi(g) \mathrm{d}{g}}_{\widehat{L}_M(Q;f)} + o(M^{-\frac{2}{p}}) \end{equation}} where $\mathbf{A}_{f,\chi}(g) =\mathbf{J}_{\chi}^{\mathrm{T}}(g) \mathbf{H}_f(\chi(g);g) \mathbf{J}_{\chi}(g)$. Additionally, by assuming the Gersho hypothesis \cite{Gersho_TIT_1979} (see App. B), the above first-order HR equivalent of $L$ can be bounded as $L_M^{\min}(Q;f) \leq \widehat{L}_M(Q;f) \leq L_M^{\max}(Q;f) $ with \begin{equation} L_M^{\min}(Q;f) = \frac{p \mu_p}{2} M^{-\frac{2}{p}}\left(\displaystyle\int_{\mathcal{G}}\left(\lambda_{\min}(g;f) \phi({g})\right)^{\frac{p}{p+2}}\mathrm{d}g\right)^{\frac{p+2}{p}} \label{eq:lower_bound_vec} \end{equation} \begin{equation} L_M^{\max}(Q;f) = \frac{p \mu_p}{2} M^{-\frac{2}{p}}\left(\displaystyle\int_{\mathcal{G}}\left(\lambda_{\max}(g;f) \phi({g})\right)^{\frac{p}{p+2}}\mathrm{d}g\right)^{\frac{p+2}{p}} \label{eq:upper_bound_vec} \end{equation} where $\lambda_{\min}(g;f)$ (resp. $\lambda_{\max}(g;f)$) is the smallest (resp. largest) eigenvalue of $\mathbf{A}_{f,\chi}(g)$ and $\mu_{p}$ is the least normalized moment of inertia of the $p$-dimensional tessellating polytope $\mathbb{T}_p$, defined by \begin{equation} \mu_p=\min_{\mathbb{T}_p,z}\frac{1}{p}\frac{1}{\mathrm{vol}(\mathbb{T}_p)^{1+2/p}}\displaystyle\int_{\mathbb{T}_p}\|{g}-{z}\|^2\mathrm{d}{g}. \end{equation} \end{proposition} \begin{proof} See Appendix B. \end{proof}

The first-order equivalent in Prop. \ref{prop:upper_lower_bound} is seen to depend on the matrix $\mathbf{A}_{f,\chi}(g)$. This matrix is the vector generalization of the product $\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g} \right)^2\frac{\partial^{2}f(\chi\left(g\right);g)}{\partial x^{2}}$ that appears in the scalar case and shows how the OL is related to the regularity properties of the goal function $f$. For the conventional quantization approach ($f(x;g) = \|x-g \|^2$), one merely has $\mathbf{A}_{f,\chi}(g) = \mathbf{I}$. Therefore, in the HR regime, the structure of the equivalent shows that considering a general goal function $f$ amounts to introducing an appropriate weighting matrix in the original distortion function.
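The quantities appearing in Prop.~\ref{prop:upper_lower_bound} are straightforward to evaluate numerically. The sketch below is a hedged illustration (the toy goal function, its decision function, and the finite-difference steps are assumptions made for the example only): it forms $\mathbf{A}_{f,\chi}(g)=\mathbf{J}_{\chi}^{\mathrm{T}}(g)\mathbf{H}_f(\chi(g);g)\mathbf{J}_{\chi}(g)$ and its extreme eigenvalues, which are exactly the weights entering the bounds (\ref{eq:lower_bound_vec}) and (\ref{eq:upper_bound_vec}).
\begin{verbatim}
import numpy as np

# Hedged sketch of the quantities in Prop. IV.1 for a toy goal function:
# f(x; g) = (x1 - g1*g2)^2 + 2*(x2 - g2^2)^2, with chi(g) = (g1*g2, g2^2).

def chi(g):
    return np.array([g[0] * g[1], g[1] ** 2])

def f(x, g):
    return (x[0] - g[0] * g[1]) ** 2 + 2.0 * (x[1] - g[1] ** 2) ** 2

def hessian_f(x, g, h=1e-4):           # finite-difference Hessian w.r.t. x
    d = len(x); H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(x + ei + ej, g) - f(x + ei - ej, g)
                       - f(x - ei + ej, g) + f(x - ei - ej, g)) / (4 * h * h)
    return H

def jacobian_chi(g, h=1e-6):           # finite-difference Jacobian of chi
    p = len(g)
    cols = [(chi(g + np.eye(p)[k] * h) - chi(g - np.eye(p)[k] * h)) / (2 * h)
            for k in range(p)]
    return np.stack(cols, axis=1)

g = np.array([1.0, 0.5])
J, H = jacobian_chi(g), hessian_f(chi(g), g)
A = J.T @ H @ J                        # weighting matrix A_{f,chi}(g)
lam = np.linalg.eigvalsh(A)            # A is symmetric since H is
print("lambda_min, lambda_max of A_{f,chi}(g):", lam[0], lam[-1])
\end{verbatim}
Integrating $(\lambda_{\min}\,\phi)^{p/(p+2)}$ and $(\lambda_{\max}\,\phi)^{p/(p+2)}$ over a grid of $\mathcal{G}$ then yields numerical values for $L_M^{\min}$ and $L_M^{\max}$; the weighting matrix $\mathbf{A}_{f,\chi}$ itself is what drives both bounds.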
The matrix $\mathbf{A}_{f,\chi}(g)$ will be precisely used to derive an algorithm that computes a good vector GO quantizer tailored to the goal function. The derived lower and upper bounds can be used both for characterizing the performance of a GOQ and for the quantizer design, which is explained at the end of this section. The bounds are tight in special cases, such as when $p=1$ (in which case $\mu_p = \frac{1}{12}$) and when $f(x;g) = \|x-g \|^2$ (with no restrictions on the dimensions $d$ and $p$). Generally speaking, the gap between the two bounds is observed to be small when $p$ is smaller than or much smaller than $d$. Now if $p\geq d$, it can be seen that $\lambda_{\min}(g;f) = 0$ since the matrix $\mathbf{A}_{f,\chi} \left( g\right)$ is not full rank. As a consequence, the lower bound derived in (\ref{eq:lower_bound_vec}) is no longer tight, and it becomes necessary to derive a tighter lower bound in this scenario. To this end, one can observe that ${e}_m^{\mathrm{T}}\mathbf{A}_{f,{\chi}} \left({g}\right) {e}_m$, with $e_m=\frac{{g}-{z}_m }{\|{g}-{z}_m\|}$, is minimized if and only if $\mathbf{J}_{{\chi}}({g}){e}_m$ is aligned with the eigenvector associated with the smallest eigenvalue of $\mathbf{H}_f({\chi} \left( {g}\right);{g})$. By denoting $\nu_{\min}({g};f)$ the smallest eigenvalue of $\mathbf{H}_f({\chi} \left( {g}\right); {g})$, the term ${e}_m^{\mathrm{T}}\mathbf{A}_{f,{\chi}} \left( {g}\right) {e}_m$ can be lower bounded by $\nu_{\min}({g};f)\mathsf{a}(\mathbf{J}_{{\chi}}({g}))$, where $\mathsf{a}(\mathbf{J}_{{\chi}}({g}))$ is the scalar factor relating $\mathbf{J}_{{\chi}}({g}){e}_m$ to the eigenvector associated with the smallest eigenvalue of $\mathbf{H}_f({\chi} \left({g}\right);{g})$. By replacing $\lambda_{\min}(g;f)$ with $\nu_{\min}(g;f)\mathsf{a}(\mathbf{J}_{{\chi}}({g}))$, a new lower bound can be derived for the case where $p\geq d$. The proposed refinement procedure can also be used for the upper bound on the OL; note, however, that the upper bound mainly depends on $p$ and much less on the dimensionality $d$, which makes the corresponding refinement generally less useful.

\subsection{Proposed quantization algorithm}

As mentioned in the previous subsection, the bounds provided by Prop.~\ref{prop:upper_lower_bound} can be used to characterize the performance of a quantizer and to study, at least numerically, the impact of the nature of $f$ on the OL. In the present subsection, the main objective is to exploit the HR equivalent of Prop.~\ref{prop:upper_lower_bound} to design a practical quantization algorithm. Considering that the optimal decision function may produce solutions at the boundary of the decision set, and that only sub-optimal decision functions may be available in real systems, we relax here the first-order optimality condition $\frac{\partial f(x;g)}{\partial x }|_{x=\chi(g)}=0$. Therefore, for algorithmic purposes, the optimality loss can be written in a more general form: \begin{equation} \begin{split} &L(Q;f) \\ = & \sum_{m=1}^M \int_{\mathcal{G}_m} \left[\left(\frac{\partial f(x;g)}{\partial x}\left|_{x=\chi(g)}\right.\right)^{\mathrm{T}} \right. \left(\chi(z_m)-\chi(g)\right)\\ &+ \left.\frac{1}{2}\left(\chi(z_m)-\chi(g)\right)^{\mathrm{T}} \mathbf{H}_{f,\chi}(g) \left(\chi(z_m)-\chi(g)\right) \right]\phi(g) \mathrm{d}g\\ &+o\left(\|\chi(z_m)-\chi(g)\|^2\right) \end{split} \end{equation} where $\left(\mathbf{H}_{f,\chi}(g) \right)_{i,j}=\frac{\partial^2 f(x;g)}{\partial x_i \partial x_j}|_{x=\chi(g)}$ for $1 \leq i,j \leq d$.
By using a Taylor expansion, we have that: \begin{equation} \begin{split} &\chi(z_m)-\chi(g)\\= &\mathbf{J}_{\chi}(g)(z_m-g)+\frac{1}{2}\begin{bmatrix} (z_m-g)^{\mathrm{T}}\mathbf{H}_{\chi_1}(g)(z_m-g)\\ (z_m-g)^{\mathrm{T}}\mathbf{H}_{\chi_2}(g)(z_m-g)\\ \dots\\ (z_m-g)^{\mathrm{T}}\mathbf{H}_{\chi_{d}}(g)(z_m-g)\\ \end{bmatrix}\\ +&o\left(\|z_m-g\|^2\right) \end{split} \end{equation} where $ \left( \mathbf{H}_{\chi_i} \left(g \right) \right)_{\ell,k}= \frac{\partial^2\chi_i(g)}{\partial g_\ell \partial g_k} $ for $1\leq \ell,k \leq p$ and $1\leq i \leq d$, and $\chi(g)=[\chi_1(g),\dots,\chi_d(g)]^{\mathrm{T}}$. Plugging this expression into the expression of $L(Q;f)$, the optimality loss can be re-expressed as \begin{equation} \begin{split} &L(Q;f)\\=&\frac{1}{2} \sum_{m=1}^M \int_{\mathcal{G}_m} (g - z_m)^{\mathrm{T}} \mathbf{B}_{f,\chi}(g) (g - z_m) \phi(g) \mathrm{d}{g}\\+&\frac{1}{2} \sum_{m=1}^M \int_{\mathcal{G}_m} (g - z_m)^{\mathrm{T}} \mathbf{A}_{f,\chi}(g) (g - z_m) \phi(g) \mathrm{d}{g} + o\left(M^{\frac{-2}{p}}\right) \end{split} \end{equation} where $\mathbf{B}_{f,\chi}(g)=\sum_{i=1}^{d}\nabla f_i(g)\mathbf{H}_{\chi_i}(g)$ with \[\frac{\partial f(x;g)}{\partial x}|_{x=\chi(g)}=\left(\nabla f_1(g),\nabla f_2(g),\dots,\nabla f_d(g)\right)\] and $\mathbf{A}_{f,\chi}(g) =\mathbf{J}_{\chi}^{\mathrm{T}}(g) \mathbf{H}_f(\chi(g);g) \mathbf{J}_{\chi}(g)$. This new expression of the OL exhibits a natural structure for applying an alternating optimization algorithm, i.e., for minimizing $\widetilde{L}=\sum_{m=1}^M \displaystyle\int_{\mathcal{G}_m} (g - z_m)^{\mathrm{T}} \left(\mathbf{B}_{f,\chi}(g)+\mathbf{A}_{f,\chi}(g)\right) (g - z_m) \phi(g) \mathrm{d}{g}$ as follows: \begin{itemize} \item \textit{Representative updating step}: To minimize $\widetilde{L}$ with \textbf{fixed regions}, the problem boils down to finding, for each $m$, the representative $z_m$ minimizing $\displaystyle \int_{\mathcal{G}_m} (g - z_m)^{\mathrm{T}} \left(\mathbf{B}_{f,\chi}(g)+\mathbf{A}_{f,\chi}(g)\right) (g - z_m) \phi(g) \mathrm{d}{g}$. One can apply a gradient descent technique, the gradient being readily computed as: \begin{equation} \frac{\partial \widetilde{L}}{\partial z_m} = -2 \displaystyle \int_{\mathcal{G}_m} \mathbf{E}_{f,\chi}\left(g \right) \left(g-z_m \right) \phi \left( g\right) \mathrm{d}g \end{equation} where $\mathbf{E}_{f,\chi}\left(g \right) = \mathbf{B}_{f,\chi}\left(g \right) + \mathbf{A}_{f,\chi}\left(g \right)$. \item \textit{Region updating step}: For given representatives, the regions can be computed as: \[\begin{split}\mathcal{G}_m=&\ \left\{ g \,\middle|\, (g - z_m)^{\mathrm{T}} \mathbf{E}_{f,\chi}(g) (g - z_m) \right.\\& \left.\leq (g - z_{m'})^{\mathrm{T}} \mathbf{E}_{f,\chi}(g) (g - z_{m'}),\ \forall m'\neq m \right\}.\end{split}\] \end{itemize} The approximate individual optimality loss $\widetilde{\ell}_f \left(g,z \right)$ of the parameter $g$ w.r.t. a representative $z$ is thus defined as: \begin{equation} \widetilde{\ell}_f \left(g,z \right) \triangleq \left(g - z \right )^{\mathrm{T}} \mathbf{E}_{f,\chi}(g) \left(g - z\right). \end{equation} Our goal-oriented quantization algorithm is summarized in pseudo-code form in Algorithm \ref{alg:goq_gradient}. The proposed algorithm can also be applied to the scalar case.
In the latter case, the matrix $\mathbf{A}_{f,\chi}\left(g \right)$ becomes $\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g} \right)^2 \frac{\partial^{2}f(\chi\left(g\right);g)}{\partial x^{2}}$, which corresponds to the term appearing in (\ref{eq:lambda_op_general}) with $\kappa = 2$, and we have that $\mathbf{B}_{f,\chi}\left(g \right) = 0 $. The reason for this is that either the first-order optimality condition holds or the lower and upper bounds of the quantization interval are fixed points.

\begin{algorithm}[tbp] {\bf{Inputs:}} goal function $f\left(x;g \right)$, decision function $\chi \left(g \right)$, error tolerance $\varepsilon$, number of cells $M$, and number of iterations $T$;\\ {\bf{Initialization:}} $\mathcal{Z}^{\left(0\right)}=\left\{ {z}_{1}^{\left(0\right)},\dots,{z}_{M}^{\left(0\right)}\right\}$;\\ {\bf{Initialization:}} $\mathcal{G}^{\left(0\right)}=\left \{ \mathcal{G}_{1}^{\left(0\right)},\dots,\mathcal{G}_{M}^{\left(0\right)}\right\};$\\ \For{$t = 1$ \bf{to} $T$} { \For{$m = 1$ \bf{to} $M$}{ Update $\mathcal{G}^{\left(t\right)}_m $ by $\left \{ g \left |\widetilde{\ell}_f \left(g,z^{(t-1)}_m \right) \leq \widetilde{\ell}_f \left(g,z^{(t-1)}_{m'} \right), \forall m' \neq m \right. \right\}$;\\ Update ${z}^{\left(t\right)}_m$ by ${z}^{\left(t\right)}_m = {z}^{\left(t-1\right)}_m - r_t \frac{\partial \widetilde{L} \left(\mathcal{Z}^{\left(t-1\right)} \right)} {\partial z^{\left(t-1\right)}_m}$ with step size $r_t > 0$ s.t. ${z}^{\left(t\right)}_m \in \mathcal{G}$;\\ } \If{$\sum_{m=1}^{M}\left\Vert {z}_{m}^{\left(t\right)}-{z}_{m}^{\left(t-1\right)}\right\Vert ^{2}<\varepsilon$}{ \bf{Break}; } } {\bf{Outputs:}} $\mathcal{Z}^{\star} = \mathcal{Z}^{\left(t\right)}$ and $\mathcal{G}^{\star} = \mathcal{G}^{\left(t\right)}$; \\ \caption{Goal-oriented quantization algorithm} \label{alg:goq_gradient} \end{algorithm}
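For concreteness, a minimal Monte-Carlo prototype of Algorithm~\ref{alg:goq_gradient} is sketched below. It is a hedged illustration rather than the implementation used for the simulations: the integrals over the cells are replaced by empirical means over samples of $g$, the weighting matrices $\mathbf{E}_{f,\chi}(g)$ are toy stand-ins chosen for the example, the step size is a toy value, and the projection step ${z}_m^{(t)}\in\mathcal{G}$ is omitted.
\begin{verbatim}
import numpy as np

# Hedged Monte-Carlo prototype of Algorithm 1: cell integrals are replaced
# by empirical means over samples of g; E_{f,chi}(g) is a toy stand-in.
rng = np.random.default_rng(1)
p, M, T, r, eps = 2, 8, 300, 0.01, 1e-10
G = rng.exponential(1.0, size=(2000, p))            # samples g ~ phi
E = np.stack([np.diag(1.0 + g**2) for g in G])      # toy E_{f,chi}(g), (N,p,p)

def ell(G, E, z):          # individual loss (g - z)^T E(g) (g - z)
    d = G - z
    return np.einsum('ni,nij,nj->n', d, E, d)

Z = G[rng.choice(len(G), M, replace=False)].copy()  # initial representatives
for t in range(T):
    # region update: nearest representative in the goal-weighted sense
    labels = np.argmin(np.stack([ell(G, E, z) for z in Z], axis=1), axis=1)
    Z_old = Z.copy()
    for m in range(M):     # representative update: one gradient step
        idx = labels == m
        if idx.any():
            grad = -2 * np.einsum('nij,nj->i', E[idx], G[idx] - Z[m]) / idx.sum()
            Z[m] -= r * grad
    if ((Z - Z_old) ** 2).sum() < eps:
        break

print("goal-oriented representatives:\n", Z.round(2))
\end{verbatim}
Note that, when $\mathbf{E}_{f,\chi}$ is positive definite, the exact minimizer of the representative step, $z_m = \big(\int_{\mathcal{G}_m}\mathbf{E}_{f,\chi}\phi\big)^{-1}\int_{\mathcal{G}_m}\mathbf{E}_{f,\chi}\, g\,\phi$, can replace the gradient step.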
\section{Numerical performance analysis} \label{sec:Numerical_Results}

In this section, we want both to illustrate some of the analytical results derived in the preceding sections and to see, from purely numerical results, to what extent the insights obtained from the HR analysis hold in scenarios where main assumptions such as smoothness are relaxed. For this purpose, we consider four goal functions: an exponential-type and a log-type goal function, which are relevant for GO information quantization in wireless resource allocation problems; a quadratic-type goal function, which is typically relevant for GOQ in controlled systems; and an $\mathrm{L}_\mathrm{P}$-norm-type goal function, which is relevant for GO data clustering/quantization in power systems.

\subsection{Impact of the goal function on the OL for wireless metrics}

Table \ref{tab:scalr_cost_function_comparison} provides analytical results for the scalar case in the HR regime. It suggests that, for a given quantization scheme, log-type goal functions lead to smaller OL values than exp-type goal functions. Let us consider the performance metric introduced by \cite{Meshkati-JSAC-2007} to measure the EE of a multiband communication: $f^{\text{EE}}\left(x;g \right) =- \frac{\sum_{i=1}^{S} \exp\left(-\frac{c}{x_i g_i} \right)}{\sum_{i=1}^{S} x_i}$ where $S$ is the number of bands, $c>0$, $x_i$ is the transmit power for band $i$, and $g_i$ the channel gain for band $i$. The log-type function is taken to be the classical spectral efficiency (SE) function $f^{\text{SE}} \left(x;g \right) = - \sum_{i=1}^{S} \log \left(1+ x_i\frac{g_i}{\sigma^2} \right)$. We impose that $x_i \geq 0 $ and $\sum_{i=1}^{S} x_i \leq P_{\max}$. For $\frac{P_{\max}}{\sigma^2} = 5$, $c=1$, $S = 2$, and a uniform quantizer, Fig.~\ref{fig:ee_vs_sum_rate} depicts the relative OL in percent (relative to the ideal case): \begin{equation} \mathrm{Relative \ OL}(\%) = 100 \times \left(\frac{f(\chi(Q(g));g) - f(\chi(g);g) }{f(\chi(g);g)} \right) \end{equation} averaged over $10000$ independent Rayleigh fading realizations (with $\mathbb{E}(g)=1$), against the number of quantization bits per realization of $g$. We see that, for a given number of bits per sample, the OL for the SE function is much smaller than that for the EE function. We retrieve the hierarchy suggested by Table \ref{tab:scalr_cost_function_comparison}. This shows that the SE function can accommodate a rough quantization of the parameters (that is, the channel gains) without significantly degrading the DM process, which here consists in choosing a good power allocation vector. Using a fine quantizer for the SE function would lead to a waste of resources (here a 1-bit quantizer already yields an OL of about $2\%$), which illustrates well the importance of adapting the quantizer to the goal function.

\begin{figure}[tbp] \begin{centering} \includegraphics[scale=0.4]{ROL_bits-eps-converted-to.pdf} \par\end{centering} \caption{The figure shows the impact of the number of quantization bits on the decision-making quality (measured in terms of optimality loss) for two widely used goal functions. Log-type SE functions accommodate very rough quantization of their parameters (CSI) very well, which is not the case for exp-type EE functions. This simulation is in accordance with the analytical results of Table \ref{tab:scalr_cost_function_comparison}. \label{fig:ee_vs_sum_rate} } \end{figure}

\subsection{Performance gains obtained from tailoring the quantizer to the (control) goal}

Now we assume $d = p = 2$ and consider the following quadratic function: \begin{equation} f^{\text{QUA}}(x;g) = \left(x_1 - h_1 (g) \right)^2 + \left({x}_2 - h_2 ({g}) \right)^2 + \left({x}_1 - {x}_2 \right)^2 \label{eq:cost_function_poly_2} \end{equation} with $h_1({g})= 2{g}_1 {g}_2-\frac{1}{2} {g}_1^2 {g}_2^2 $ and $h_2({g}) = {g}_1^2 {g}_2^2 - {g}_1 {g}_2 $. Parameters are assumed to be i.i.d. and exponentially distributed, i.e., $\phi\left(g \right) = \exp\left(-g_1-g_2\right)$. One can check that $\chi({g}) = [{g}_1 {g}_2, \frac{1}{2} {g}_1^2 {g}_2^2]^{\mathrm{T}}$. In Fig. \ref{fig:ROL_quadratic}, the relative OL in percent (relative to the ideal case) is represented against the number of regions $M$ for a conventional vector quantizer (namely, a distortion-based quantizer implementing the Lloyd-Max algorithm \cite{Lloyd,Max}), for the hardware-limited task-based (HLTB) quantizer of \cite{Eldar-TSP1-2019}, and for the proposed vector GOQ computed with Algorithm \ref{alg:goq_gradient}. Although Algorithm \ref{alg:goq_gradient} is based on a HR approximation, it is seen to provide a very significant gain in terms of OL even for a small number of regions. For $M=5$, a conventional quantizer leads to a relative OL of $70\%$, which is a significant performance degradation w.r.t. the ideal case where $g$ is perfectly known, whereas the proposed GOQ allows the OL to be as low as $10\%$.
Besides, compared to the HLTB quantizer, which is also goal-oriented, the optimality loss reduction brought by the proposed algorithm remains considerable in the low-resolution regime. The explanation behind this performance gain already appeared in Example 1, in which we have seen the importance of adapting the ``density'', or more generally the concentration of the regions (and thus the allocation of the quantization bits), not to the parameter distribution (conventional approach) but to an appropriately weighted distribution. This difference is illustrated in Fig. \ref{fig:pdf_comparison}. The top subfigure shows the weighted density $\lambda_{\max} \left(g; f^\text{QUA} \right) \phi \left( g\right)$; the bottom subfigure shows the p.d.f. $\phi(g)$ of the parameter $g$. The analysis conducted in Sec. \ref{sec:scalar_approximation} suggests concentrating the quantization regions according to this weighted density, which is markedly different from $\phi$. By doing so, Algorithm \ref{alg:goq_gradient} provides a very significant improvement, the main powerful insight being not to allocate quantization resources to the most likely realizations of the information source but to those that impact the goal the most, as measured through the weighted density $\lambda_{\max} \left(g; f^\text{QUA} \right) \phi \left( g\right)$.

Notice that the above numerical results are obtained when the p.d.f. of $g$ is known. In practice, it may happen that this p.d.f. is not available or is time-varying. One can then easily adapt Algorithm \ref{alg:goq_gradient} by replacing statistical means with empirical/sample means and, possibly, refreshing the database on the fly if the statistics need to be tracked. Fig. \ref{fig:quadratic_data_oriented} precisely shows the loss induced by using a relatively small database instead of knowing the input distribution perfectly. One can observe that the data-based GO quantizer can still achieve a relative optimality loss of $9\%$ with a database of only $1000$ data points, which illustrates the relevance of the proposed method when the input distribution is not available.

\begin{figure}[tbp] \begin{centering} \includegraphics[scale=0.44]{ROL_quadratic_cost_bis-eps-converted-to.pdf} \par\end{centering} \caption{The goal function being quadratic, the figure shows the importance, in terms of (decision) optimality loss, of adapting the quantizer to the goal instead of using the conventional distortion-based quantization approach.
\label{fig:ROL_quadratic}} \end{figure}

\begin{figure}[tbp] \begin{centering} \includegraphics[scale=0.44]{data_oriented-eps-converted-to.pdf} \par\end{centering} \caption{The figure assesses the performance loss due to not knowing the input distribution $\phi$ perfectly but only through a low number of samples taken from a database. \label{fig:quadratic_data_oriented}} \end{figure}

\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{poly_2_pdf_goq-eps-converted-to.pdf} \caption{Weighted density $\lambda_{\max} \left(g; f^\text{QUA} \right) \phi \left( g\right)$ used by the GOQ algorithm} \label{fig:pdf_goq} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{poly_2_pdf-eps-converted-to.pdf} \caption{Original probability distribution $\phi \left(g \right)$} \label{fig:pdf_org} \end{subfigure} \caption{The figure shows the marked difference between the parameter probability distribution (bottom curve) and the probability distribution that is actually relevant to the decision-making task (top curve). It implies in particular that quantization regions (and thus quantization bits) should be allocated very differently from the conventional way.} \label{fig:pdf_comparison} \end{figure}

\subsection{Goal-oriented quantization and power consumption scheduling}

Now we assume $d=p=24$. We consider a performance metric which is relevant for a communication problem in the smart grid, namely the goal function $f^{\mathrm{PCS}}(x;g) = \| x+g\|_{\mathrm{P}}$, where $\mathrm{P}$ is the exponent of the $\mathrm{L}_\mathrm{P}$ norm and $\mathrm{PCS}$ stands for power consumption scheduling. This time the vector $x=(x_1,...,x_d)$ ($d=p$ here) represents the chosen flexible power consumption scheduling strategy; we impose that $ x_i \geq 0 $ and $\sum_{i=1}^{d} x_i \geq E$, $E>0$ being the desired energy level, chosen as $30$ kWh in our simulation setting. The parameter vector $g$ represents the non-controllable part of the power. When $\mathrm{P}$ becomes large, the problem amounts to limiting the peak power. The clustering problem is a data-based counterpart of the quantization problem in which a finite set of realizations of $g$ is available (instead of the knowledge of $\phi$): we want to partition a finite dataset into clusters or groups of data (instead of continuous regions), the goal being to minimize $f^{\mathrm{PCS}}$ while only having access to a clustered version of the data. For the purpose of applying the GOQ approach to clustering, we make the following two implementation choices. First, the statistical expectation is replaced with its empirical version in the algorithm; the empirical mean is taken over the $300$ time series of the Pecanstreet dataset. Second, since the number of samples is small, representatives are computed by directly minimizing $L(Q;f)$ (as in \cite{Zhang-AE-2021}) instead of the approximated version $\widetilde{L}$. For a given relative OL of $5\%$, one then looks at the number of required clusters (that is, $M$) versus the exponent power parameter of the $\mathrm{L}_\mathrm{P}$ norm (that is, $\mathrm{P}$).
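As a toy illustration of this data-based viewpoint, the sketch below clusters synthetic daily consumption profiles (the data, dimensions, and number of behaviors are assumptions made for the example, with $p=48$ half-hourly samples as in the motivating discussion of Sec.~\ref{sec:vector_approximation}; this is not the Pecanstreet dataset) with plain distortion-based $k$-means iterations. The GO clustering used in our experiments differs only in the criterion: the Euclidean distance below is replaced by the goal-related loss, in the spirit of $\widetilde{\ell}_f$.
\begin{verbatim}
import numpy as np

# Toy illustration with synthetic daily profiles (p = 48 half-hourly samples;
# NOT the Pecanstreet data): distortion-based k-means/Lloyd iterations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 24.0, 48)
# four synthetic consumption behaviors: peaks at 8h, 13h, 19h and at night
base = np.stack([np.exp(-(t - mu) ** 2 / 8.0) for mu in (8.0, 13.0, 19.0, 2.0)])
X = np.repeat(base, 75, axis=0) + 0.1 * rng.standard_normal((300, 48))

M = 4
centers = X[rng.choice(len(X), M, replace=False)]
for _ in range(50):
    # assignment step; GO clustering would use a goal-weighted loss here
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([X[labels == m].mean(0) if np.any(labels == m)
                        else centers[m] for m in range(M)])

print("cluster sizes:", np.bincount(labels, minlength=M))
\end{verbatim}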
In Fig.~\ref{fig:Linfty}, we compare the performance of the GO clustering technique with the $k$-means algorithm (which is exactly the data-based counterpart of the LM algorithm) and with a hierarchical clustering (HC) algorithm on the Pecanstreet database \cite{pecanstreet}. For HC, the squared Euclidean distance and the weighted pair group method with arithmetic mean are used. First, one can observe that partitioning clustering slightly outperforms hierarchical clustering; this might be explained by the fact that several clusters in HC consist of a single outlier data point (in terms of Euclidean distance), whereas outlier data points may yield decisions similar to those of normal data points for $\mathrm{L}_\mathrm{P}$-norm problems, especially for large $\mathrm{P}$. For $\mathrm{P}$ ranging from $4$ to $20$, the figure shows that the number of required clusters can be decreased from about $M=80$ to $M=8$ by adapting the clustering technique to the final decision instead of creating clusters based on an exogenous similarity index, which is the Euclidean norm in the case of the $k$-means algorithm.

\begin{figure}[tbp] \centering{} \includegraphics[scale=0.45]{data-driven-eps-converted-to.pdf}\caption{Required number of clusters ($M$) against the exponent power parameter of the $\mathrm{L}_\mathrm{P}$-norm ($\mathrm{P}$) for $k$-means and goal-oriented clustering. The goal-oriented clustering approach yields a drastic reduction of the number of clusters when $\mathrm{P}$ increases.\label{fig:Linfty}} \end{figure}

\section{Conclusion} \label{sec:Conclusions}

In this paper, the focus is on one key element of a goal-oriented communication chain, namely the quantization stage. The GOQ problem is very relevant for lossy data compression, e.g., to achieve high spectral efficiency in wireless systems (by transmitting only the minimum amount of information relevant to correct task execution). It is also relevant for many resource allocation problems, hence the choices of goal functions made in this paper. One of the contributions of this paper is to exploit the HR assumption both for the analysis and for the design of a GOQ. Valuable insights of practical interest have been obtained; let us mention two of them. The most conventional way of designing a source coder is to allocate resources (say, bits) according to the frequency of realization of the source symbols (this is what Huffman and arithmetic coding schemes and their many variants do). Our analysis shows that this approach may lead to a significant performance degradation, and rather shows in a precise way (see, e.g., Prop. III.1, Example 1, and Fig. 4) how the variation speed of the goal and decision functions should be taken into account to allocate such resources in a much more efficient way. Our analysis also allows one to make progress in the direction of understanding how the goal function impacts the quantizer. Both analytical and simulation results are provided to exhibit the existence of classes of functions which are more or less easy to compress. This knowledge allows the quantizer to be matched to the goal. For example, rough quantization seems to have a small impact on the task execution as far as log-type goal functions are concerned; the behavior is different for exp-type functions. This suggests, for example, that CSI feedback should be much finer for energy-efficiency performance metrics than for spectral-efficiency metrics. It is seen that the proposed framework is rich in terms of practical insights.
Nonetheless, many relevant issues are left open and would need to be explored. For instance, the theoretical analysis relies on smoothness assumptions for the goal and decision functions: what would the results become for non-smooth functions? The functions are also assumed to be known: how should the approach be adapted when only realizations of these functions are available? A dedicated complexity analysis should also be conducted. More generally, the problem of designing vector GO quantizers when the dimension increases is open. An interesting extension of this work would also be to address the case of a non-stationary source, leading to the problem of an adaptive quantizer. Finally, how could learning techniques be used to solve all these issues?

\appendices \section{Proof of Proposition III.1}

By using a Taylor expansion, the optimality loss in the high-resolution regime can be approximated by \allowdisplaybreaks {\footnotesize\begin{align} & {L} \left( Q;f \right) \nonumber\\ = &\alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} \left[ f \left(\chi \left(z_m \right);g \right) - f \left(\chi \left(g\right);g \right) \right] \phi \left( g\right)\mathrm{d}g\nonumber\\ \overset{(a)}{=}& \alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} (\chi \left(z_m \right) - \chi \left(g \right))^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(x;g)}{\partial x^{\kappa}}|_{x = \chi \left(g \right)}\phi(g)\mathrm{d}g + o\left(M^{-\kappa}\right)\nonumber\\ \overset{(b)}{=}& \alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} (z_m-g)^{\kappa}\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g+o\left(M^{-\kappa}\right)\nonumber\\ \overset{(c)}{=}& \alpha_f \int_{\mathcal{G}} \frac{\Delta^{^{\kappa}}(g)}{(\kappa+1)2^{\kappa}}\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g+o\left(M^{-\kappa}\right)\nonumber\\ \overset{(d)}{=}& \frac{\alpha_f}{(2M)^{\kappa}(\kappa+1)!}\int_{\mathcal{G}} \rho^{^{-\kappa}}(g)\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g + o\left(M^{-\kappa}\right)\nonumber\\ \label{eq:infi_derivation} \end{align}} (a) corresponds to the Taylor expansion of $(f(\chi(z_m);g)-f(\chi(g);g))$ in the regime of large $M$ (infinitesimals of $M^{-\kappa}$ are not written further); (b) follows from the fact that the higher-order terms in the Taylor expansion of $(\chi(z_m)-\chi(g))$ are negligible w.r.t. the first term.
(c) extends the idea of approximating the mean-square error distortion in the high-resolution regime (see \cite{Bennett_Beld_1948,Panter_IRE_1951}) to cases with even-order $\kappa$, i.e., {\small\begin{equation} \begin{split} &\int_{\mathcal{G}_m} (z_m-g)^{\kappa}\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g\\ \approx&\left(\frac{\mathrm{d}\chi(z_m)}{\mathrm{d} z_m}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(z_m);z_m)}{\partial x^{\kappa}}\phi(z_m)\int_{z_m-\frac{\Delta(z_m)}{2}}^{z_m+\frac{\Delta(z_m)}{2}} (z_m-g)^{\kappa}\mathrm{d}g\\ \approx&\left(\frac{\mathrm{d}\chi(z_m)}{\mathrm{d} z_m}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(z_m);z_m)}{\partial x^{\kappa}}\phi(z_m)\frac{\Delta(z_m)^{^{\kappa}}}{(\kappa+1)2^{\kappa}}\Delta(z_m)\\ \approx& \int_{\mathcal{G}_m} \frac{\Delta^{^{\kappa}}(g)}{(\kappa+1)2^{\kappa}}\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{1}{\kappa!}\frac{\partial^{\kappa}f(\chi(g);g)}{\partial x^{\kappa}}\phi(g)\mathrm{d}g; \end{split} \end{equation}} (d) follows from standard high-resolution quantization results, referring to equation (\ref{eq:definition_representative_density}). Having derived the optimality loss under high-resolution quantization theory, we now aim to find the optimal quantization point density minimizing the OL. We first introduce a new function called the value density: \begin{equation} p(g)= \left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{\partial^{\kappa}f(x;g)}{\partial x^{\kappa}}|_{x=\chi(g)}\phi(g) \geq 0. \label{eq:def_value_density} \end{equation} Then we resort to H{\"o}lder's inequality: \begin{equation} \displaystyle\int p^{\frac{1}{\kappa+1}}\leq \left(\int p\rho^{-\kappa}\right)^{\frac{1}{\kappa+1}}\left(\int \rho\right)^{\frac{\kappa}{\kappa+1}}. \end{equation} Since $\displaystyle \int \rho=1$, it can be inferred that $\displaystyle\int p\rho^{-\kappa}\geq \left(\int p^{\frac{1}{\kappa+1}}\right)^{\kappa+1}$, with equality if and only if $p\rho^{-\kappa}= C_1\rho$ with $C_1 > 0$. The optimal density function of quantization points can thus be written as: \begin{equation} \rho^{\star}(g)=\frac{\left[\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g} \right)^{\kappa}\frac{\partial^{\kappa}f(\chi\left(g\right);g)}{\partial x^{\kappa}}\phi(g) \right]^{\frac{1}{\kappa+1}}}{\displaystyle\int_\mathcal{G} \left [\left(\frac{\mathrm{d}\chi(t)}{\mathrm{d} t}\right)^{\kappa}\frac{\partial^{\kappa}f(\chi\left(t\right);t)}{\partial x^{\kappa}}\phi(t)\right ]^{\frac{1}{\kappa+1}}\mathrm{d}t}. \label{eq:lambda_op_general_app} \end{equation} By plugging the optimal density into the expression of the optimality loss, the OL behaves, for large $M$, as: {\small\begin{align} & {{L}} \left( Q;f\right) \underset{M\rightarrow \infty}{\sim} \frac{\alpha_f}{(2M)^{\kappa}(\kappa+1)!}\left(\displaystyle\int_{\mathcal{G}} \left[\left(\frac{\mathrm{d}\chi(g)}{\mathrm{d} g}\right)^{\kappa}\frac{\partial^{\kappa}f(\chi\left(g\right);g)}{\partial x^{\kappa}}\phi(g)\right]^{\frac{1}{\kappa+1}} \mathrm{d}g\right)^{\kappa+1} \label{eq:distortion} \end{align}}

\section{Proof of Proposition IV.1}

To facilitate the derivation, we introduce multi-index notation in order to represent the partial derivatives of the goal function.
The $d$-dimensional multi-index can be written as ${n} = \left(n_1,\dots,n_{d} \right)$. Its sum and factorial are $\left|{n} \right| = \sum_{t=1}^{d} n_t$ and ${n}!=\prod_{t=1}^{d} n_t!$, respectively. Considering the decision variable ${x}= \left({x}_1,\dots,{x}_{d} \right)$, the partial derivative of degree ${n}$ w.r.t. ${x}$ can be expressed as $ \mathfrak{D}^{{n}}_{{x}} f = \frac{\partial^{\left|{n} \right|}f}{\partial {x}^{n_1}_1\dots \partial {x}^{n_{d}}_{d}}$, and the multi-index power of ${x}$ can be written as ${x}^n = \overset{d}{\underset{i=1}{\prod}} {x}_{i}^{n_i}$. By using the Taylor expansion for multivariate functions, the optimality loss can be rewritten as: {\small\begin{equation} \begin{split} & {{L}} \left( Q; f \right)\\ = &\alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} \left[f({\chi}({z}_m);{g})-f({\chi} \left({g} \right);{g})\right]\phi({g})\mathrm{d}{g}\\ {=}& \alpha_f\sum_{m=1}^M \left[\sum_{{n}:\left|{n} \right|\leq \kappa}\int_{\mathcal{G}_m} \frac{ \mathfrak{D}^{{n}}_{{x}} f \left({\chi} \left( {g} \right);{g} \right)}{{n}!} \left({\chi} \left( {z}_m \right) - {\chi} \left( {g} \right) \right)^{{n}}\phi({g})\mathrm{d}{g}\right.\\ & \quad\quad\quad\left. +\sum_{{\widehat{n}}:\left|{\widehat{n}} \right|= \kappa+1} \int_{\mathcal{G}_m} O\left(\left({\chi} \left( {z}_m \right) - {\chi} \left( {g} \right) \right)^{{\widehat{n}}}\right)\phi({g})\mathrm{d}{g} \right] \end{split} \label{eq:infi_derivation_vector_general} \end{equation}} One can note that the $\frac{ \mathfrak{D}^{{n}}_{{x}} f \left({\chi} \left( {g} \right);{g} \right)}{{n}!}$ are the components of the gradient vector of $f$ w.r.t. ${x}$ when $|{n}|=1$, and the components of the Hessian matrix of $f$ w.r.t. ${x}$ (up to the factorial weights) when $|{n}|=2$. The terms with $|{n}|\geq3$ can be seen as infinitesimals w.r.t. the second-order terms. Therefore, we can take $\kappa=2$ and ignore the higher-order terms in the high-resolution regime. In addition, we consider here the scenario where the optimal decision ${\chi}(\cdot)$ always lies in the interior of the feasible set $\mathcal{X}$, so that each component of the gradient vector is zero, namely, $\frac{\partial f({x};{g})}{\partial x_t}|_{{x}={\chi}({g})}=0$.
The optimality loss can thus be approximated by: {\scriptsize\begin{equation} \begin{split} &{L}\left( Q; f \right)\\ {=}& \underbrace{\alpha_f\sum_{m=1}^M \sum_{{n}:\left|{n} \right|=2}\int_{\mathcal{G}_m} \frac{ \mathfrak{D}^{{n}}_{{x}} f \left({\chi} \left( {g} \right);{g} \right)}{{n}!} \left({\chi} \left( {z}_m \right) - {\chi} \left( {g} \right) \right)^{{n}}\phi({g})\mathrm{d}{g}}_{\widehat{L}_M(Q;f)}+o\left(M^{-\frac{2}{p}}\right) \\ \end{split} \end{equation}} and $\widehat{L}_M(Q;f)$ can be further simplified as {\scriptsize\begin{equation} \begin{split} &\widehat{L}_M(Q;f)\\ \overset{(a)}{=}& \alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} \frac{1}{2}({\chi} \left( {z}_m \right) - {\chi} \left( {g} \right))^{\mathrm{T}}\mathbf{H}_f( {\chi} \left( {g} \right);{g})({\chi} \left( {z}_m \right) - {\chi} \left( {g} \right))\phi ({g})\mathrm{d}{g}\\ \overset{(b)}{=}& \alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} \frac{1}{2}(\mathbf{J}_{{\chi}}\left( {g}\right)({z}_m-{g}))^{\mathrm{T}}\mathbf{H}_f({\chi} \left( {g} \right);{g})(\mathbf{J}_{{\chi}}\left( {g}\right)({z}_m-{g}))\phi({g})\mathrm{d}{g}\\ \overset{(c)}{=}& \alpha_f{\sum_{m=1}^M \int_{\mathcal{G}_m} \frac{1}{2}\|{g}-{z}_m\|_2^2 {e}_m^{\mathrm{T}} \mathbf{J}_{{\chi}}^{\mathrm{T}}({g})\mathbf{H}_f({\chi} \left( {g} \right);{g}) \mathbf{J}_{{\chi}} \left( {g}\right) {e}_m \phi({g})\mathrm{d}{g}} \end{split} \label{eq:infi_derivation_vector} \end{equation}} where ${e}_m$ is the normalized difference vector, i.e., ${e}_m=\frac{{g}-{z}_m }{\|{g}-{z}_m\|_2}$. (a) follows from rewriting the second-order term of the Taylor expansion as a quadratic form involving the Hessian matrix; (b) follows from the fact that the higher-order terms in the Taylor expansion of $\left({\chi}({z}_m)-{\chi}({g})\right)$ are negligible w.r.t. the first-order term; (c) follows directly from the definition of ${e}_m$. It is worth noting that this expression is similar to that of classical vector quantization, with the p.d.f. of ${g}$ weighted by a new coefficient related to the Hessian of the goal function, the Jacobian of the decision function, and the normalized vector ${e}_m$. To simplify the formula, we denote $\mathbf{A}_{f,{\chi}} \left( {g}\right) = \mathbf{J}_{{\chi}}^{\mathrm{T}}({g})\mathbf{H}_f({\chi} \left( {g} \right);{g}) \mathbf{J}_{{\chi}} \left({g}\right)$; then one has: \begin{equation} \widehat{L}_{M}\left( Q; f \right) = \alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} \frac{1}{2}\|{g}-{z}_m\|_2^2 {e}_m^{\mathrm{T}} \mathbf{A}_{f,{\chi}} \left( {g}\right) {e}_m \phi({g})\mathrm{d}{g}. \label{eq:approximate_optimality_loss_vector} \end{equation} As the normalized vector ${e}_m$ depends both on ${g}$ and on the representative ${z}_m$, the vector case cannot be tackled in the same way as the scalar case. Nevertheless, we will show that similar properties can be found in the vector case. Directly approximating the OL defined in (\ref{eq:infi_derivation_vector}) is complicated; we thus resort to matrix properties to bound the OL. The accuracy of our approximation depends on how we approximate the term ${e}_m^{\mathrm{T}} \mathbf{A}_{f,{\chi}} \left( {g}\right) {e}_m$. For a given parameter ${g}$, the maximum and minimum eigenvalues of the matrix $\mathbf{A}_{f,{\chi}} \left( {g}\right)$ are denoted by $ \lambda_{\max} ({g};f)$ and $ \lambda_{\min} ({g};f) \geq 0 $, respectively; the eigenvalues are nonnegative since the Hessian matrix $\mathbf{H}_f({\chi} \left( {g}\right);{g})$ is positive semidefinite at the optimum.
Therefore, the term ${e}_m^{\mathrm{T}} \mathbf{A}_{f,{\chi}} \left( {g}\right) {e}_m$ can be upper bounded by $\lambda_{\max}({g};f)$ and lower bounded by $\lambda_{\min}({g};f)$. We first study the lower bound of $\widehat{L}_{M}\left( Q; f \right)$. As in the scalar case, we extend the notion of point density $\rho({g} )$ to the vector case, where it determines the approximate fraction of representatives contained in a region. Define the normalized moment of inertia of the cell $\mathcal{G}_m$ with representative ${z}_m$ by \begin{equation} \mathscr{M}(\mathcal{G}_m,{z}_m)=\frac{1}{p}\frac{1}{\mathrm{vol}(\mathcal{G}_m)^{1+2/p}}\displaystyle\int_{\mathcal{G}_m}\|{g}-{z}_m\|_2^2\mathrm{d}{g}, \end{equation} and the inertial profile $\mathfrak{m}({g})=\mathscr{M}(\mathcal{G}_m,{z}_m)$ when ${g}\in\mathcal{G}_m$; the OL can then be further approximated as \cite{Gersho_TIT_1979}\cite{Gray_TIT_1998}: { \begin{equation} \begin{split} & {L}\left( Q; f \right) \\ = &\alpha_f\sum_{m=1}^M \int_{\mathcal{G}_m} (f({\chi}({z}_m);{g})-f({\chi} \left( {g} \right);{g}))\phi(g)\mathrm{d}{g}\\ \overset{(a)}{\geq} &\alpha_f\sum_{m=1}^M \int_{{g}\in\mathcal{G}_m} \frac{1}{2}\|{g}-{z}_m\|_2^2\lambda_{\min}({g};f) \phi{(g)}\mathrm{d}{g} \\ \overset{(b)}{=}& \sum_{m=1}^M\frac{\alpha_f p}{2{M}^{2/p}} \frac{\mathscr{M}(\mathcal{G}_m,{z}_m)}{\rho^{2/p}({z}_m)}\lambda_{\min}({z}_m;f) \phi({z}_m)\mathrm{vol}(\mathcal{G}_m) \\ \overset{(c)}{=}& \frac{\alpha_fp}{2{M}^{2/p}}\displaystyle\int_{\mathcal{G}} \frac{\mathfrak{m}({g} )}{\rho^{2/p}(g)}\lambda_{\min}({g} ;f) \phi({g}) \mathrm{d}{g} \end{split} \label{eq:infi_derivation_vector_lower_bound2} \end{equation}} (a) comes from the fact that ${e}_m$ is a normalized vector; (b) uses the definition of $\mathscr{M}(\mathcal{G}_m,{z}_m)$ and the HR relation $\mathrm{vol}(\mathcal{G}_m)\,\rho({z}_m)\approx \frac{1}{M}$; (c) is again the definition of the Riemann integral. This result can be seen as a special case of Bennett's integral (see \cite{Bennett_Beld_1948}\cite{Gray_TIT_1998}) obtained by replacing $\phi \left( {g}\right)$ with the product $\lambda_{\min}({g};f) \phi({g})$. However, it is not known how to find the optimal inertial profile $\mathfrak{m}({g})$, and it is not even known which functions are allowable as inertial profiles. To this end, Gersho \cite{Gersho_TIT_1979} made the widely accepted hypothesis, or conjecture, that when the number of cells is large, most regions of a $p$-dimensional quantizer that aims at minimizing (or nearly minimizing) the mean square error are approximately congruent to some basic tessellating $p$-dimensional cell shape $\mathbb{T}_{p}$. With this conjecture, the optimal inertial profile $\mathfrak{m}({g})$ can be treated as a constant $\mu_{p}$ in the high-resolution case. By using H{\"o}lder's inequality, the optimal density $\rho({g})$ minimizing the distortion can be written as \begin{equation} \rho^{\star}({g})=\frac{\left(\lambda_{\min}({g};f) \phi({g})\right)^{\frac{p}{p+2}}}{\displaystyle\int_{\mathcal{G}}\left(\lambda_{\min}({t};f) \phi({t})\right)^{ \frac{p}{p+2} }\mathrm{d}{t}} \end{equation} resulting in the lower bound (\ref{eq:lower_bound_vec}). The same reasoning can be applied to derive the proposed upper bound.

\textbf{Remark.} When the number of cells is large, one has $\mathfrak{m}({z_m}) \approx \mathfrak{m}({g})$. One is then able to define the inertial profile $\mathfrak{m}({g})$ for the parameter $g$.
Moreover, when $M$ is large, it is observed that the optimal cells (in the sense of the distortion) are roughly congruent to some basic tessellating cell shape (Gersho's conjecture). Even if it is difficult to find the optimal $\mathfrak{m}({g})$, it can thus be treated as a constant by admitting Gersho's conjecture, since it is normalized.

\bibliographystyle{IEEEbib}
\section*{Abstract} Within the context of topological data analysis, the problems of identifying topological significance and matching signals across datasets are important and useful inferential tasks in many applications. The limitation of existing solutions to these problems, however, is computational speed. In this paper, we harness the state-of-the-art for persistent homology computation by studying the problems of determining topological prevalence and cycle matching through a cohomological approach, which increases the feasibility and applicability of these tasks in a wider variety of applications and contexts. We demonstrate this on a wide range of real-life, large-scale, and complex datasets. We extend existing notions of topological prevalence and cycle matching to include general non-Morse filtrations. This provides the most general and flexible state-of-the-art adaptation of topological signal identification and persistent cycle matching, which performs comparisons on the order of tens of runs over thousands of sampled points in a matter of minutes on standard institutional HPC CPU facilities.

\paragraph{Keywords:} Absolute (co)homology; cycle matching; cycle prevalence; image-persistent homology; relative (co)homology.

\section*{Introduction}

\textit{Persistent homology}, one of the cornerstones of topological data analysis, studies the lifespan of the topological features in a nested sequence of topological spaces by tracking the changes in the homology groups along the sequence. It provides a robust statistical summary of data, capturing its ``shape'' and ``size'', and has been applied to many scientific disciplines in recent years with great success. The diagram consisting of the homology groups of the filtration, connected by the maps induced by the inclusions, is called the \textit{persistence module}. From this, the \textit{persistence barcode} (or simply, \emph{barcode}) is derived---a canonical summary of the aforementioned lifespans as a set of half-open intervals.

A natural question is whether it is possible to compare the barcodes obtained from different filtrations, which would, for instance, provide a correspondence between some of their intervals. Several solutions have been proposed. \cite{gonzalez-diaz_basis-independent_2020} derive a basis-independent partial matching for ladder modules and zigzag persistence. A different method of persistent extension to find analogous bars, especially interesting if there is no known mapping between the persistence modules, was very recently introduced by \cite{yoon_persistent_2022}. \cite{bauer_induced_2015} match intervals of barcodes using a known mapping between the persistence modules. This notion was recently reinterpreted in statistical terms by \cite{reani_cycle_2021}, who propose a similar interval matching using \textit{image-persistence}, first introduced by \cite{cohen-steiner_persistent_2009}. This matching is applied to define a \textit{prevalence score}---a measure of the significance of a given interval in a barcode. Typically, \emph{persistence} (i.e., the length of the interval) is interpreted as topological significance or signal: longer intervals are considered to correspond to ``true'' features, while short ones are attributed to topological noise. However, this practice can be misleading, since persistence is highly affected by the spacing of the sampled points and usually takes higher values for cycles created at a larger scale.
The prevalence score proposed by \cite{reani_cycle_2021} bypasses this shortcoming by taking into account the statistical heuristics of the problem: it is obtained by matching persistence intervals across the diagrams of several resamplings of the data.

A limitation common to all previously proposed barcode comparison techniques is that they are computationally very expensive, which significantly limits their practicality in many applications and on many real datasets. In this paper, we address this specific issue by leveraging the current state-of-the-art in persistent homology computation, \emph{Ripser} \citep{bauer_ripser_2021}, which takes the dual perspective and computes persistent \emph{cohomology}, taking advantage of its equivalence to persistent homology \citep{silva_dualities_2011}. Furthermore, Ripser was recently adapted to the setting of image-persistence via Ripser-image \citep{bauer_efficient_2022}. We adapt this technology to the interval matching approach proposed by \cite{reani_cycle_2021}, chosen for its data-centric and statistical perspective, and we generalize and extend their definitions to allow for greater flexibility and applicability. The final result of our contributions is state-of-the-art computational interval matching executable in a matter of minutes using only standard institutional high performance computing facilities, which we showcase on a wide variety of complex and large-scale datasets, such as static and time-lapse imaging and video data from biomedical and astrophysical applications.

\paragraph{Contributions.} \begin{itemize} \item We generalize the definition of interval matching proposed by \cite{reani_cycle_2021} to a broader spectrum of filtrations. In particular, this makes it compatible with the output of Ripser-image \citep{bauer_efficient_2022}. \item We present a comprehensive case study of different definitions for a matching affinity score that extends the original score proposed by \cite{reani_cycle_2021}. \item We provide state-of-the-art code for interval matching, freely available at \url{https://github.com/inesgare/interval-matching}. \item We comprehensively showcase and demonstrate representative applications of our generalized definitions and code to complex and large-scale datasets. \end{itemize}

\paragraph{Outline.} We begin by introducing the fundamentals of persistence and setting relevant notation in section \ref{sec:preliminaries}. We also review image-persistence and use it to present interval matching as proposed by \cite{reani_cycle_2021}. In section \ref{sec:cycle_matching_cohomology}, we adapt the definition of image-persistence to the various homology settings and study how these frameworks are related. Here, we propose our generalized definition of interval matching and revisit the notion of matching affinity of \cite{reani_cycle_2021} in a case study of alternative formulations. In section \ref{sec:applications}, we present applications of the notion of cycle matching to a variety of datasets of diverse nature and aimed at different objectives. We close with a discussion of our contributions and proposals for future work in section \ref{sec:end}.

\section{Preliminaries} \label{sec:preliminaries} In this section, we introduce the fundamental concepts underlying our work and establish the notation that we will use throughout the rest of the paper.
\subsection{The Four Standard Persistence Modules} \label{subsec:standard_persistence_modules}

A \textit{filtration} is a family of nested subspaces \(\{X_t: t \in T\}\) of some space $X$, $$X_t \subset X_s \subset X, \quad \mathrm{for}\ t \leq s, $$ where \(T \subset \mathbb{R}\) is a totally ordered indexing set. In this paper, we work with \textit{filtered complexes}; specifically, we further assume that $X$ is a finite simplicial complex and that the spaces $X_t$ are simplicial subcomplexes of $X$. Filtered complexes can also be interpreted as diagrams \(X_\bullet: T \to \mathbf{Simp}\) of simplicial complexes indexed over the totally ordered set \(T\), such that all maps in the diagram are inclusions. A \textit{re-indexing} of a filtration changes the indexing set \(T\) to another totally ordered set \(I\) via a monotonic map \(r: I \to T\), setting \(X_i := X_{r(i)}\) for \(i \in I\). For instance, if \(\{X_t: t \in T\}\) is a filtered complex with \(T = \{t_1,\ldots,t_n\}\) finite, the re-indexing \(r(i) = t_i\) allows for a reparametrization of the filtration over the natural numbers, \(\{X_i: 1\leq i \leq n\}\).

Applying the corresponding homology functor to the simplicial complexes in a filtered complex and to the inclusions \(X_i \subset X_{i+1}\) between consecutive spaces gives the following diagrams: \begin{alignat}{10} \H_*(X_\bullet):\ & & & \ \H_*(X_1) & \ \rightarrow \ & \ldots & \ \rightarrow \ & \ \H_*(X_{n-1}) & \ \rightarrow \ & \ \H_*(X_{n}), \label{eq:ah} \\[2pt] \H^*(X_\bullet):\ & & & \ \H^*(X_1) & \ \leftarrow \ & \ldots & \ \leftarrow \ & \ \H^*(X_{n-1}) & \ \leftarrow \ & \ \H^*(X_{n}) , \label{eq:ac}\\[2pt] \H_*(X, X_\bullet):\ & \ \H_*(X_n) & \ \rightarrow \ & \ \H_*(X, X_1) & \ \rightarrow \ & \ldots & \ \rightarrow \ & \ \H_*(X, X_{n-1}), & & \label{eq:rh}\\[2pt] \H^*(X, X_\bullet):\ & \ \H^*(X_n) & \ \leftarrow \ & \ \H^*(X, X_1)& \ \leftarrow \ & \ldots & \leftarrow & \ \H^*(X, X_{n-1}). \label{eq:rc} \end{alignat} Following \cite{silva_dualities_2011}, we call these the \emph{four standard persistence modules}. The first persistence module \eqref{eq:ah} corresponds to \textit{absolute homology} and is the one most often used. The subsequent expressions are the persistence modules for \textit{absolute cohomology} \eqref{eq:ac}, \textit{relative homology} \eqref{eq:rh}, and \textit{relative cohomology} \eqref{eq:rc}. Unless otherwise stated, homology and cohomology have field coefficients, so that these persistence modules are made up of vector spaces and linear maps.

The assumption of field coefficients allows us to invoke the \emph{structure theorem} \citep{zomorodian_computing_2005}. This is one of the foundational results in persistent homology; it ensures that, up to isomorphism, any persistence module, such as the ones above, can be decomposed into a direct sum of \textit{interval modules}. An interval module consists of copies of the field of coefficients over an interval range of indices, with these copies connected by identity maps, and of the trivial vector space outside that interval. This allows for the interpretation that some (co)cycle is \emph{born} at the beginning of the interval and \emph{dies} at the end of it. For instance, for the absolute homology module, \[\H_*(X_\bullet) \cong \bigoplus_{m = 1}^M I_{[b_m,\, d_m]}\] where the sub-index denotes the range of indices on which the interval module is nontrivial.
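As a practical aside, such barcodes are computed directly from point-cloud data by software such as Ripser. The minimal sketch below is a hedged illustration using the \texttt{ripser} Python bindings (one of several possible library choices, not the code accompanying this paper); it computes the persistence pairs of the Vietoris--Rips filtration of a noisy circle.
\begin{verbatim}
import numpy as np
from ripser import ripser   # Python bindings of Ripser (pip install ripser)

# Hedged sketch: barcode of the Vietoris-Rips filtration of a noisy circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

dgms = ripser(X, maxdim=1)['dgms']   # persistence diagrams in degrees 0, 1
# each row of dgms[k] is a persistence pair (birth, death) in degree k
longest = max(dgms[1], key=lambda bar: bar[1] - bar[0])
print("most persistent 1-cycle (birth, death):", longest)
\end{verbatim}
The single long interval in degree $1$ corresponds to the circle, while the many short intervals are topological noise; this is precisely the distinction that the prevalence score, discussed below, aims to make robust.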
The collection of intervals appearing in the decomposition given by the structure theorem is an invariant of the isomorphism type of the persistence module. This collection is the \emph{persistence barcode} of the filtration \[ \mathrm{Pers}(\H_*(X_\bullet)) = \big\{ [b_m,\, d_m] \big\}_{m = 1}^M. \] The intervals in the barcode are called \textit{persistence intervals}, and the pairs of starting and ending points of the intervals are the \textit{persistence pairs}. The persistence barcode provides a summary of the lifespans of the topological features of the filtration. The persistence pairs are often interpreted with real indices instead of natural ones. In this case, the interval excludes the real-valued death time on the right side and we obtain half-open intervals, $$ \mathrm{Pers}(\H_*(X_\bullet)) = \big\{ [t_{b_m},\, t_{d_{m} + 1}) \big\}_{m = 1}^M. $$ The four standard persistence modules carry the same information: the barcode of the absolute (resp. relative) homology setting is the same as the barcode of the absolute (resp. relative) cohomology setting. One can also find a bijection between the bars in the barcodes of the relative setting and the bars in the corresponding absolute setting. For further details, see \cite{silva_dualities_2011}.

\subsection{Image-Persistence and Interval Matching} \label{subsec:image_persistence}

The idea underlying image-persistence is to study the persistent homology of a filtered complex inside another, larger filtered complex. Let \(X\) and \(Z\) be finite simplicial complexes and \( f : X \to Z\) an injective map between them. Let \(\{X_i : 1 \leq i \leq n\}\) and \(\{Z_i: 1 \leq i \leq n\}\) be filtrations associated to these complexes, and denote the restrictions to the steps of the filtrations by \[f_i := f\vert_{X_i} : X_i \to Z_i. \] Note that these are also injective maps, which gives rise to the following commutative diagram for all \(1\leq i\leq n-1\): \[\begin{CD} X_i @>\iota_i^X>> X_{i+1} \\ @Vf_iVV @VVf_{i+1}V \\ Z_i @>\iota_i^Z>> Z_{i+1} \\ \end{CD}\] where \(\iota_i^X\) and \(\iota_i^Z\) are the inclusion maps between consecutive steps of the corresponding filtrations. Applying the homology functor to the previous diagram, another commutative diagram is obtained: \begin{align*} \begin{CD} \H_*(X_i) @>\H_*(\iota_i^X)>> \H_*(X_{i+1}) \\ @V\H_*(f_i)VV @VV\H_*(f_{i+1})V \qquad \\ \H_*(Z_i) @>\H_*(\iota_i^Z)>> \H_*(Z_{i+1}) \\ \end{CD} \end{align*} which now involves the homology groups and the induced linear maps. The commutativity of this diagram allows the following definition.

\begin{definition}[Image-persistent homology] The persistence module \[ \Im \H_* (f_\bullet) : \quad \mathrm{Im}(\H_*(f_i)) \to \mathrm{Im}(\H_*(f_{i+1})) \] given by the subspaces \(\mathrm{Im}(\H_*(f_i))\subset \H_*(Z_i)\) and the restrictions of the maps \(\H_*(\iota_i^Z)\) is called \emph{image-persistent homology}. \end{definition}

The elements of \(\mathrm{Im}(\H_*(f_i))\) can be seen as \textit{cycles in \(X_i\) up to boundaries in \(Z_i\)}; this is what we gain by studying one filtration inside another.

\begin{remark} Since \(\mathrm{Im}( \H_*(f_i))\) is a subspace of \(\H_*(Z_i)\), a death in the image-persistence module implies a death in the persistent homology of the space \(Z\). Also, a birth in the image-persistence module implies a birth in the persistent homology of the space \(X\) (see \cite{cohen-steiner_persistent_2009} for details).
\end{remark} The definition of interval matching introduced by \cite{reani_cycle_2021} uses image-persistence to compare the persistence bars of two filtrations and is restricted to \emph{Morse filtrations}. \begin{definition}[Morse filtration] A filtration $\{X_t:t\in \mathbb{R}\}$ is a \emph{Morse filtration} if there exists a finite set $T= \{t_1,...,t_n\}\subset \mathbb{R}$ such that the following are satisfied: \begin{enumerate} \item For all $t\notin T$, there exists $\epsilon>0$ small enough such that for every $0<\epsilon'<\epsilon$ the map $$ \H_*(i) : \H_*(X_{t-\epsilon'}) \to \H_*(X_{t+\epsilon'}) $$ induced by inclusion is an isomorphism for every homology group. Equivalently, the homology does not change at $t$. \item For all $t\in T$, there exists $\epsilon>0$ small enough so that for any $0<\epsilon'<\epsilon$ either \begin{enumerate} \item $ \H_*(i) : \H_*(X_{t-\epsilon'}) \to \H_*(X_{t+\epsilon'}) $ is injective and the dimension of the vector space increases by one, or \item $ \H_*(i) : \H_*(X_{t-\epsilon'}) \to \H_*(X_{t+\epsilon'}) $ is surjective and the dimension decreases by one. \end{enumerate} Equivalently, the only homology changes allowed are the creation of a single new cycle or the death of a single existing cycle. \end{enumerate} \label{def:morse_filtration} \end{definition} We now review how to match the persistence intervals of two filtered complexes inside a third comparison space. Let $X, Y, Z$ be finite simplicial complexes with Morse filtrations \(\{X_i: 1 \leq i \leq n\}\), \(\{Y_i: 1 \leq i \leq n\}\), and \(\{Z_i: 1 \leq i \leq n\}\). Assume we have injective maps \[f_i : X_i \to Z_i, \qquad g_i: Y_i \to Z_i\] for every \(1\leq i\leq n\) such that \(f_j\vert_{X_i} = f_i\) and \(g_j\vert_{Y_i} = g_i\) for every \(i\leq j\). With these assumptions, \cite{reani_cycle_2021} match persistence intervals as follows. \begin{definition}[Matching intervals, \cite{reani_cycle_2021}] \label{def:cycle_matching_morse} Let $\alpha \in \mathrm{Pers}(\H_*(X_\bullet))$ and $\beta \in \mathrm{Pers}(\H_*(Y_\bullet))$. $\alpha$ and $\beta$ are \emph{matching intervals via $Z_\bullet$} if there exist $\tilde{\alpha} \in \mathrm{Pers}(\Im \H_*(f_\bullet))$ and $\tilde{\beta} \in \mathrm{Pers}(\Im \H_*(g_\bullet))$ such that \begin{align*} \mathrm{birth} \, \alpha &= \mathrm{birth} \, \tilde{\alpha} \\ \mathrm{birth} \, \beta &= \mathrm{birth} \, \tilde{\beta} \\ \mathrm{death} \, \tilde \alpha &= \mathrm{death} \, \tilde{\beta}. \end{align*} \end{definition} \begin{remark} \label{rmk:morse_condition_matching} The Morse assumption is crucial for the notion in Definition \ref{def:cycle_matching_morse} to be well-defined. Having Morse filtrations for \(X\) and \(Y\) ensures that there is at most one birth at each time in \(\H_*(X_\bullet)\) and \(\H_*(Y_\bullet)\). By the definition of image-persistence, the same holds in the respective image-persistence modules. Recall that a birth in an image-persistence module corresponds to a birth in the persistent homology of \(X\) or \(Y\). This allows each bar of an image-persistence module to share its birth time with exactly one bar of the associated persistent homology. From the death perspective, recall that a death in either image-persistence module implies a death in \(\H_*(Z_\bullet)\). Thus, there is also at most one bar in each image-persistence diagram dying at any given time.
Consequently, each bar of an image-persistence module can share its death time with at most one bar of the other image-persistence module. These uniqueness properties, induced by the Morse assumption, guarantee that there are no ambiguous matchings. \end{remark} \subsection{Matching Affinity and Prevalence Score} \label{sec:matching_and_prevalence} The \emph{prevalence score} was proposed by \cite{reani_cycle_2021} as an alternative to persistence, in the sense of interval length, as an indicator of topological significance in noisy data. It takes inspiration from bootstrapping, a well-known and powerful resampling-with-replacement method originally proposed in the statistical literature by \cite{efron1982jackknife}. The formulation of the prevalence score accounts for the tendency of noisy generators to reappear frequently as the sample size grows. Due to this tendency, the \emph{affinity} of a match must be defined before prevalence can be considered. Affinity is a score assigned to every match that takes into account the lifetimes of the persistent cycles and image-persistent cycles involved in the definition of interval matching. Recall that the \emph{Jaccard index} of two intervals \(I\) and \(J\) is given by \[\mathrm{Jac}(I,J) := \dfrac{\vert I \cap J\vert}{\vert I \cup J\vert}.\] \begin{definition}[Matching affinity, \cite{reani_cycle_2021}] \label{def:affinity} The \emph{matching affinity} of two bars \(\alpha, \beta\) matched through their image-bars \(\tilde{\alpha}, \tilde{\beta}\) is defined as the product \[\rho(\alpha,\beta) := \mathrm{Jac}(\alpha,\beta) \cdot \mathrm{Jac}(\alpha, \tilde{\alpha})\cdot \mathrm{Jac}(\beta,\tilde{\beta}).\] \end{definition} With this definition, the prevalence score may now be formally introduced. \begin{definition}[Prevalence score, \cite{reani_cycle_2021}] \label{def:prevalence} Given a reference space \(X = X_{\mathrm{ref}}\) and resampling spaces \(X^{(1)},\ldots,X^{(K)}\), any \(\alpha \in \mathrm{Pers}(\H_*(X_\bullet))\) has a \emph{prevalence score} defined as \[\mathrm{prev}(\alpha):= \dfrac{1}{K} \sum_{k=1}^{K} \rho (\alpha, \beta_k(\alpha))\] where \(\beta_k(\alpha)\) is the unique bar of \(X^{(k)}\), for \(1\leq k\leq K\), matched to \(\alpha\) (if no such bar exists, the corresponding summand is set to \(0\)). \end{definition} \subsection{Clearing Algorithm and Cohomology} \label{subsec:ripser} The cohomology framework presented in Section \ref{subsec:standard_persistence_modules} gained special relevance due to its application in the Ripser code \citep{bauer_ripser_2021}---the state-of-the-art code for persistent homology computations. The key novelty in the Ripser algorithm is the use of the \emph{clearing algorithm} of \cite{chen_persistent_2011}, which provides a vast increase in speed when applied in the cohomology setting. The basic algorithm to compute the persistent homology of a filtered complex is the following: reduce each column of the matrix of the boundary operator on the complex by adding columns on its left, proceeding from left to right, to obtain a reduced matrix. From this reduced matrix, the barcode can be read off directly. The key observation made by \cite{chen_persistent_2011} is that some columns in the reduced matrix must be null after the reduction and play no role in the reduction process.
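For concreteness, the following is a minimal sketch of this standard left-to-right reduction over \(\mathbb{Z}/2\); representing columns as Python sets of row indices is our own illustrative choice, not the optimized representation used in Ripser.
\begin{verbatim}
def low(column):
    """Row index of the lowest nonzero entry of a column (None if empty)."""
    return max(column) if column else None

def reduce_boundary(columns):
    """Standard column reduction over Z/2.

    columns[j] is the set of row indices of the nonzero entries of the
    j-th column of the boundary matrix, columns ordered by filtration.
    Returns the reduced columns and the persistence pairs (birth, death).
    """
    pivot_owner = {}  # maps a pivot row to the column owning it
    pairs = []
    for j in range(len(columns)):
        # add earlier columns until the pivot of column j is unclaimed
        while columns[j] and low(columns[j]) in pivot_owner:
            columns[j] = columns[j] ^ columns[pivot_owner[low(columns[j])]]
        if columns[j]:
            pivot_owner[low(columns[j])] = j
            pairs.append((low(columns[j]), j))
    return columns, pairs
\end{verbatim}
The null columns observed by \cite{chen_persistent_2011} are precisely those whose index appears as a birth in some pair; the clearing algorithm described next sets them to zero without reducing them.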
The clearing algorithm reduces the boundary matrix in blocks from right to left, so that these null columns can be detected beforehand and set directly to \(0\). Consequently, it is possible to avoid reducing these columns and to accelerate the computation. This is already an improvement over the standard reduction algorithm. However, this improvement is burdened by the large number of columns in the first block of the boundary matrix, which must still be reduced. One of the main contributions of Ripser \citep{bauer_ripser_2021} is noting that the real increase in speed appears when considering the relative coboundary matrix, in which this first block is significantly smaller. From the reduced coboundary matrix, the barcode for the relative cohomology setting \eqref{eq:rc} is read off, which is equivalent to the barcode for the homology setting \eqref{eq:ah} as established by \cite{silva_dualities_2011}. This is the methodology implemented in Ripser to provide state-of-the-art computation of persistent homology. We note in particular that Ripser only computes Vietoris--Rips persistent homology, based on the Vietoris--Rips filtration---a standard filtration often considered in computational applications---and does not handle other filtrations. Recall that the Vietoris--Rips filtration of a finite metric space $(\mathcal{P},d)$ is $\mathrm{VR}_\bullet(\mathcal{P}, d)$, where the simplicial complex at filtration value $\epsilon$ is \[\mathrm{VR}_\epsilon(\mathcal{P}, d) = \{ \emptyset \neq S \subset \mathcal{P} ~|~ \forall\ p,q \in S, \, d(p,q) \leq \epsilon \}.\] By applying the homology functor we obtain the Vietoris--Rips persistent homology $\H_*(\mathrm{VR}_\bullet(\mathcal{P}, d))$. \section{Cycle Matching in the Setting of Cohomology} \label{sec:cycle_matching_cohomology} In this section, we present our theoretical contributions. Specifically, we extend the definition of image-persistence to the remaining settings of the four standard persistence modules and study the relations between the persistence modules obtained. We then generalize the definition of interval matching to non-Morse filtrations and outline how to implement our generalization using Ripser-image. We finish the section with a case study of alternative definitions of the matching affinity. \subsection{The Four Image-Persistence Modules} To define image-persistence, we only need functoriality applied to the commutative diagram \[\begin{CD} X_i @>\iota_i^X>> X_{i+1} \\ @Vf_iVV @VVf_{i+1}V \\ Z_i @>\iota_i^Z>> Z_{i+1} \\ \end{CD}\] for \(1\leq i\leq n-1\); this definition can then be easily extended to obtain four image-persistence modules paralleling the four standard persistence modules presented in Section \ref{subsec:standard_persistence_modules}. In the setting of \textit{absolute cohomology}, applying the cohomology functor gives us the following commutative diagram \[\begin{CD} \H^*(X_i) @<\H^*(\iota_i^X)<< \H^*(X_{i+1}) \\ @A\H^*(f_i)AA @AA\H^*(f_{i+1})A \\ \H^*(Z_i) @<\H^*(\iota_i^Z)<< \H^*(Z_{i+1}) \\ \end{CD} \] for every \(1\leq i \leq n-1\). Since we are working with field coefficients, the objects with superscripts are dual to the objects with subscripts in the diagram for homology. The commutativity of the diagram allows for the following definition.
\begin{definition}[Image-persistent cohomology] The \emph{image-persistent cohomology} is defined as the persistence module \[ \Im \H^* (f_\bullet) : \quad \mathrm{Im}(\H^*(f_i)) \to \mathrm{Im}(\H^*(f_{i-1})) \] given by the subspaces \(\mathrm{Im}(\H^*(f_i)) \subset \H^*(X_i)\) and the restrictions of the maps \(\H^*(\iota_i^X)\). \end{definition} We now consider the relative settings. Recall that we have a map \(f: X \to Z\) such that \(f(X_i) \subset Z_i\); we denote this by \[ f : (X, X_i) \to (Z, Z_i)\] for every \(1 \leq i \leq n\). Since we also have \(X_i \subset X_{j} \subset X\) for \(i\leq j\), we can write \[ \iota^X : (X,X_i) \to (X, X_{j}), \quad \ i\leq j,\] for the map of pairs induced by the identity on \(X\). The same can be written for the map \(\iota^Z\) induced by the identity on \(Z\): \[ \iota^Z : (Z,Z_i) \to (Z, Z_{j}), \quad \ i\leq j.\] This gives us the following commutative diagram \[\begin{CD} (X,X_i) @>\iota^X>> (X,X_{i+1}) \\ @VfVV @VVfV \\ (Z,Z_i) @>\iota^Z>> (Z,Z_{i+1}) \\ \end{CD}\] for \(1 \leq i \leq n-1\). Using functoriality in the relative setting we obtain the following commutative diagrams \[\begin{CD} \H_*(X,X_{i}) @>\H_*(\iota^X)>> \H_*(X,X_{i+1}) \\ @V\H_*(f, f_i) VV @VV\H_*(f, f_{i+1})V \\ \H_* (Z,Z_{i}) @>\H_*(\iota^Z)>> \H_*(Z,Z_{i+1}) \\ \end{CD}\qquad, \hspace{50pt} \begin{CD} \H^*(X,X_{i}) @<\H^*(\iota^X)<< \H^*(X,X_{i+1}) \\ @A\H^*(f, f_i) AA @AA\H^*(f, f_{i+1})A \\ \H^* (Z,Z_{i}) @<\H^*(\iota^Z)<< \H^*(Z,Z_{i+1}) \\ \end{CD}\] where, again, the superscripts denote the duals of the corresponding objects with subscripts. Here we write \(\H_*(f, f_i)\) for the homology functor applied to \(f : (X, X_i) \to (Z, Z_i)\). Commutativity again allows for the following definitions. \begin{definition}[Image-persistent relative homology] The \emph{image-persistent relative homology} is the persistence module \[\Im \H_* (f, f_\bullet) : \quad \mathrm{Im}(\H_*(f, f_i)) \to \mathrm{Im}(\H_*(f, f_{i+1})) \] given by the vector spaces \(\mathrm{Im}(\H_*(f, f_i)) \subset \H_*(Z,Z_i)\) and the restrictions of the linear maps \(\H_*(\iota^Z)\). \end{definition} \begin{definition}[Image-persistent relative cohomology] The \emph{image-persistent relative cohomology} is the persistence module \[\Im \H^*(f, f_\bullet) : \quad \mathrm{Im}(\H^*(f, f_i)) \to \mathrm{Im}(\H^*(f,f_{i-1})) \] given by the vector spaces \(\mathrm{Im}(\H^*(f,f_i)) \subset \H^*(X,X_i)\) and the restrictions of the linear maps \(\H^*(\iota^X)\). \end{definition} \subsection{Equivalence Among the Four Image-Persistence Settings} A natural question to ask after introducing the four image-persistence modules is whether we can expect equivalences among them akin to the ones proved by \cite{silva_dualities_2011} for the standard persistence modules. As a first answer to this question, we check directly whether the persistence modules of homology and cohomology provide the same information. \begin{proposition} The following equalities hold \begin{align*} \mathrm{Pers}(\Im \H_*(f_\bullet)) &= \mathrm{Pers}(\Im \H^*(f_\bullet)), \\ \mathrm{Pers}(\Im \H_*(f, f_\bullet)) &= \mathrm{Pers}(\Im \H^*(f,f_\bullet)). \end{align*} \end{proposition} \begin{proof} It is sufficient to prove that the maps naturally induced between the images, $$ {\H_*(\iota_i^Z)}\vert_{ \mathrm{Im} (\H_*(f_i))} : \mathrm{Im} (\H_*(f_i))\rightarrow \mathrm{Im} (\H_*(f_{i+1})) $$ and $$ {\H^*(\iota_i^X)}\vert_{ \mathrm{Im} (\H^*(f_{i+1})) } : \mathrm{Im} (\H^*(f_{i+1})) \rightarrow \mathrm{Im} (\H^* (f_i)),$$ have the same rank.
This is true since \begin{align} \mathrm{rank} \, {\H_*(\iota_i^Z)}\vert_{ \mathrm{Im} (\H_*(f_i))} &= \mathrm{rank} \, \left( \H_* (\iota_i^Z) \circ \H_*(f_i) \right) \nonumber\\ &= \mathrm{rank} \, \left(\H^* (f_i) \circ \H^*(\iota_i^Z)\right) \label{eq:rk2} \\ &= \mathrm{rank} \, \left(\H^* (\iota_i^X) \circ \H^*(f_{i+1})\right) \label{eq:rk3} \\ &= \mathrm{rank} \, {\H^*(\iota_i^X)}\vert_{ \mathrm{Im} (\H^*(f_{i+1})) } \nonumber \end{align} where the second equality \eqref{eq:rk2} holds by duality and the third equality \eqref{eq:rk3} holds by the commutativity of the diagram in absolute cohomology. Since persistence barcodes are uniquely determined by dimensions and ranks, we have shown that the barcodes of image-persistent absolute homology and image-persistent absolute cohomology coincide. The same argument proves the equality of the barcodes in the relative setting. \end{proof} As for the equivalence between the absolute and relative settings, the arguments used by \cite{silva_dualities_2011} are not directly applicable to image-persistence. Instead, if \(\mathrm{Pers}_0\) denotes the finite intervals and \(\mathrm{Pers}_\infty\) the infinite intervals of a given persistence barcode, we have the following correspondence \cite[see][Proposition 3.12]{bauer_efficient_2022}. \begin{proposition}[\cite{bauer_efficient_2022}] We have \[\mathrm{Pers}_0 (\Im \H_*(f_\bullet)) = \mathrm{Pers}_0 (\Im \H^{\ast +1}(f,f_\bullet)).\] Additionally, the map \(I \to T \setminus I\) defines bijections \[\begin{array}{rcl} \mathrm{Pers}_\infty (\Im \H_*(f_\bullet)) & \cong & \mathrm{Pers}_\infty (\H^*(X,X_\bullet)),\\[2pt] \mathrm{Pers}_\infty (\Im \H^*(f,f_\bullet)) & \cong & \mathrm{Pers}_\infty( \H_*(Z_\bullet)). \end{array}\] \end{proposition} This result means that in order to determine the barcode of \(\Im \H_*(f_\bullet)\), it suffices to compute $\mathrm{Pers}_\infty(\H^*(X,X_\bullet))$ and \(\mathrm{Pers}_0(\Im \H^*(f, f_\bullet))\). Both of these persistence diagrams may be computed by applying a matrix reduction algorithm to appropriate boundary matrices. \cite{bauer_efficient_2022} also show that the clearing algorithm implemented in Ripser (see Section \ref{subsec:ripser}) can be applied to compute image-persistence. In this way, the code of Ripser can be fully adapted to this setting to achieve state-of-the-art computations of image-persistence. \subsection{Matching Intervals in Non-Morse Filtrations} Ripser-image provides the barcode of the image-persistent homology of Vietoris--Rips filtrations. As noted previously in Remark \ref{rmk:morse_condition_matching}, this presents a significant obstacle to implementing efficient interval matching using the Ripser-image technology: Vietoris--Rips filtrations are not Morse filtrations. To overcome this limitation, we introduce a generalization of Definition \ref{def:cycle_matching_morse} that resolves the matches between bars with shared birth or death times. To properly formulate this generalization, we first recall the definition of a simplex-wise filtration. \begin{definition} A filtered complex \(\{X_i : i \in I\}\) is \emph{essential} if $i\neq j$ implies $X_i \neq X_j$. Additionally, it is a \emph{simplex-wise filtration} if for every $i\in I$ such that $X_i \neq \emptyset$ there is some simplex $\sigma_{i}$ and some index $j<i$ such that $X_i \smallsetminus X_j = \{\sigma_i\}$. \end{definition} Observe that in simplex-wise filtrations there is a bijection between the indices of the filtration and the simplices of the complex \(X\).
Consequently, in this context we consider the persistence pairs as pairs of simplices. We call those corresponding to birth times \emph{positive simplices} and those corresponding to death times \emph{negative simplices}. Notice that simplex-wise filtrations are Morse filtrations, which allows us to apply directly the definition of interval matching proposed by \cite{reani_cycle_2021} (Definition \ref{def:cycle_matching_morse}). Since birth and death times are in correspondence with positive and negative simplices, we can in fact rephrase Definition \ref{def:cycle_matching_morse} by simply identifying bars when they are created or destroyed by the same simplex. More importantly, for any filtered complex we can always find a re-indexing that refines the filtration and turns it into an essential simplex-wise filtration. This can be done by finding a partial ordering of the simplices in each step of the original filtration that extends to a total ordering of the whole complex. For instance, Ripser uses a lexicographic refinement of the Vietoris--Rips filtration, which orders the simplices by dimension, diameter, and a combinatorial numbering system (see \cite{bauer_ripser_2021} for further details). Once we fix a way of obtaining such a simplex-wise refinement of the given filtration, there is no ambiguity as to which simplex creates or destroys each cycle. We are now in a position to introduce the definition of interval matching for general filtrations. Let $X, Y, Z$ be finite simplicial complexes with filtrations \(\{X_i: i \in I\}\), \(\{Y_i: i \in I\}\), and \(\{Z_i:i \in I\}\). Assume we have injective maps \(f:X \to Z\) and \(g: Y \to Z\) with the usual notation for the restrictions \[f_i : X_i \to Z_i, \qquad g_i: Y_i \to Z_i,\] for every \(i \in I\). \begin{definition}[Generalized interval matching] \label{def:general_cycle_matching} Let $\alpha \in \mathrm{Pers}(\H_*(X_\bullet))$ and $\beta \in \mathrm{Pers}(\H_*(Y_\bullet))$. $\alpha$ and $\beta$ are \emph{matching intervals via $Z_\bullet$} if there exist $\tilde{\alpha} \in \mathrm{Pers}(\Im \H_*(f_\bullet))$ and $\tilde{\beta} \in \mathrm{Pers}(\Im \H_*( g_\bullet))$ such that the following conditions are satisfied: \begin{itemize} \item \(\alpha\) and \(\tilde{\alpha}\) are created by the same simplex (seen in \(X\) and in \(f(X)\), respectively); \item \(\beta\) and \(\tilde{\beta}\) are created by the same simplex (seen in \(Y\) and in \(g(Y)\), respectively); \item \(\tilde{\alpha}\) and \(\tilde{\beta}\) are destroyed by the same simplex in \(Z\). \end{itemize} \end{definition} \begin{remark} Notice that in Definition \ref{def:general_cycle_matching} we assume underlying simplex-wise refinements that are compatible across the three filtrations. This means that the simplices are added in the same order to the persistence modules \(\H_*(X_\bullet)\) and \(\H_*(Y_\bullet)\) and to their image-persistence modules \(\Im \H_*(f_\bullet)\) and \(\Im \H_*(g_\bullet)\). This can always be achieved by first setting the orders in \(X\) and \(Y\), and then in \(Z\) accordingly. \end{remark} \subsection{Implementing Cycle Matching with Ripser-Image} \label{subsec:implementation} The input to Ripser-image consists of two Vietoris--Rips filtrations \[X_\bullet = \mathrm{VR}_\bullet(\mathcal{P}, d), \qquad Z_\bullet = \mathrm{VR}_\bullet(\mathcal{P}, d')\] where the two metrics \(d, d'\) on a finite set \(\mathcal{P}\) satisfy \(d(p,q) \geq d'(p,q)\) for all \(p, q \in \mathcal{P}\).
However, for interval matching the setting is slightly different. Following Section \ref{subsec:image_persistence}, to implement interval matching on finite point clouds \(\mathcal{X}\) and \(\mathcal{Y}\) sampled from the same space, we can consider their union \(\mathcal{P} = \mathcal{X} \cup \mathcal{Y}\) and any metric \(d'\) on \(\mathcal{P}\) induced from the sample space. This will induce metrics \(d_X := d'\vert_{\mathcal{X}} \) and \(d_Y :=d'\vert_{\mathcal{Y}}\) on the smaller point clouds as well. Consider the extension \[(\mathcal{X}, d_X) \subset (\mathcal{P}, d_X')\] where the metric \(d_X'\) is obtained by assigning a very large distance to every pair consisting of a point of \(\mathcal{X}\) and a point of \(\mathcal{Y}\), and to every pair of points of \(\mathcal{Y}\), all seen in the union. Then, up to a threshold corresponding to that large distance and the points in \(\mathcal{P}\smallsetminus \mathcal{X} = \mathcal{Y}\), we have \[X_\bullet = \mathrm{VR}_\bullet (\mathcal{X}, d_X) \simeq \mathrm{VR}_\bullet (\mathcal{P}, d_X')\] which puts us in the setting of Ripser-image. The same construction can be applied to \((\mathcal{Y}, d_Y)\). In this manner, we obtain the three Vietoris--Rips filtrations \[X_\bullet \subset Z_\bullet \supset Y_\bullet,\] and we consider the inclusions as the connecting maps for the matching. The code for Ripser and Ripser-image assigns a unique index to every simplex of the input filtered complex using a lexicographic refinement. However, it does not output the indices associated to the positive and negative simplices of the persistence intervals of the barcode. These values can be readily retrieved with a slight change to the original code. By arranging the matrices representing the finite metric spaces accordingly, these indices allow the implementation of Definition \ref{def:general_cycle_matching} without affecting the computational runtime of either program.
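To make the construction of \(d_X'\) concrete, the following is a minimal sketch of how the pair of distance matrices passed to Ripser-image can be assembled; it assumes Euclidean point clouds stored as NumPy arrays, and the function name and the constant \texttt{big} are our own choices.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def ripser_image_inputs(X, Y, big=1.0e6):
    """Distance matrices (d, d') on P = X u Y for Ripser-image.

    d_union is the ambient metric d' on the union; dX_prime agrees with
    d_X on pairs of points of X and assigns the very large distance
    `big` to every pair involving a point of Y, so that, up to the
    threshold `big`, VR(P, dX_prime) recovers VR(X, d_X).
    """
    P = np.vstack([X, Y])
    d_union = cdist(P, P)                 # the metric d' on the union
    nX = len(X)
    dX_prime = np.full_like(d_union, big)
    dX_prime[:nX, :nX] = d_union[:nX, :nX]
    np.fill_diagonal(dX_prime, 0.0)       # keep a valid (pseudo)metric
    return dX_prime, d_union
\end{verbatim}
The entrywise requirement \(d \geq d'\) of Ripser-image is then satisfied, provided \texttt{big} exceeds every distance occurring in the union; the matrix for \(\mathcal{Y}\) is obtained symmetrically.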
\paragraph{A Note on Terminology: Cycle Matching.} \cite{reani_cycle_2021} refer to this framework of matching intervals in persistent homology as \textit{cycle registration}. In this paper, whenever we use the term ``cycle matching,'' we assume that we have a way of finding cycles in the final simplicial complex of the filtration that correspond to the intervals in its barcode. Note that, in general, these representative cycles are not unique---in fact, the persistence intervals are associated to homology classes, i.e., equivalence classes of cycles. However, there are methods to determine these cycles uniquely. One such method considers the columns of the reduced boundary matrix corresponding to the negative simplices: each such column records the simplices that make up a cycle killed by the corresponding simplex. Further on, in Section \ref{sec:applications}, we implement cycle matching by using a version of Ripser called \textit{ripser-tight-representative-cycles}. This feature provides the representative cycles corresponding to the intervals in the barcode computed by Ripser. Note that Ripser does not reduce the boundary matrix but the relative coboundary matrix (see the discussion in Section \ref{subsec:ripser}), and thus we cannot implement the aforementioned method directly. However, \cite{cufar_fast_2021} develop an adaptation of this idea to obtain state-of-the-art computations of barcodes and representatives. Their method uses persistent cohomology to obtain the persistence pairs and then reduces the boundary matrix using only the columns corresponding to death indices. This is the technique that \textit{ripser-tight-representative-cycles} implements. \paragraph{Processing Multiple Jobs in Parallel.} A computational advantage of the bootstrapping approach proposed by \cite{reani_cycle_2021} is that it is parallelizable, despite the inherently sequential nature of persistent homology computations. Recall that, once the barcode of the reference sample is computed, the prevalence scores of its intervals are obtained by matching these intervals with the intervals in the barcodes of \(K\) resamplings. These matchings can be processed as parallel jobs on a high performance computing (HPC) cluster with a workload manager and job scheduling system, such as SLURM or OpenPBS. In each of these jobs, first the barcode of the corresponding resampling and the barcodes of the image-persistence modules involved are computed, and then the generalized interval matching is carried out. This allows for a dramatic increase in efficiency with respect to a sequential execution of the code: the total computational time corresponds to that of the slowest job, instead of the sum of the computational times of the individual jobs. \subsection{Revisiting Matching Affinity} \label{sec:revisit_affinity} The matching affinity introduced in Definition \ref{def:affinity} relies on a particular choice of pairs of intervals to compare through their Jaccard indices. In principle, other selections are also valid and yield different definitions of the matching affinity. We now study the behavior of four such affinities in the example of two circles with the same radius but diverging centers. This will allow us to conclude that only one of these definitions exhibits a significant difference with respect to the others. From now on, we refer to the matching affinity of Definition \ref{def:affinity} as \emph{matching affinity $A$} \[\rho_A(\alpha,\beta) := \mathrm{Jac}(\alpha,\beta) \cdot \mathrm{Jac}(\alpha, \tilde{\alpha})\cdot \mathrm{Jac}(\beta,\tilde{\beta}),\] where \(\alpha, \beta\) denote two bars matched through their image-bars \(\tilde{\alpha}, \tilde{\beta}\). This score involves the comparison of \(\alpha\) and \(\beta\), but also of each bar with its corresponding image-bar. Considering the multiple ways to compare persistence bars and image-bars, we also define (a computational sketch of all four variants is given below): \begin{itemize} \item the \emph{matching affinity $B$} as \(\rho_B(\alpha,\beta) := \mathrm{Jac}(\Tilde{\alpha},\Tilde{\beta}) \cdot \mathrm{Jac}(\alpha, \tilde{\alpha})\cdot \mathrm{Jac}(\beta,\tilde{\beta})\), \item the \emph{matching affinity $C$} as \(\rho_C(\alpha,\beta) := \mathrm{Jac}(\alpha,\beta) \cdot\mathrm{Jac}(\Tilde{\alpha},\Tilde{\beta}) \cdot \mathrm{Jac}(\alpha, \tilde{\alpha})\cdot \mathrm{Jac}(\beta,\tilde{\beta})\), \item the \emph{matching affinity $D$} as \(\rho_D(\alpha,\beta) := \mathrm{Jac}(\alpha,\beta) \cdot \mathrm{Jac}(\tilde{\alpha}, \tilde{\beta})\). \end{itemize} The following concrete example provides an intuition for how the different affinities behave. Consider two circles of radius \(1\) with centers shifted by a distance \(s\). We expect the matching affinity to decrease as the center-to-center distance \(s\) increases, until reaching $0$ (no match) beyond a certain value.
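The following minimal sketch shows how the four variants can be computed; representing bars as \texttt{(birth, death)} pairs of real numbers is an assumption of ours.
\begin{verbatim}
def jaccard(I, J):
    """Jaccard index of two real intervals given as (birth, death) pairs."""
    (b1, d1), (b2, d2) = I, J
    overlap = min(d1, d2) - max(b1, b2)
    if overlap <= 0:
        return 0.0
    return overlap / (max(d1, d2) - min(b1, b2))

def affinities(alpha, beta, alpha_im, beta_im):
    """Affinities A--D of bars alpha, beta matched via their image-bars."""
    j_ab = jaccard(alpha, beta)          # compare the two original bars
    j_im = jaccard(alpha_im, beta_im)    # compare the two image-bars
    j_a = jaccard(alpha, alpha_im)       # each bar against its image-bar
    j_b = jaccard(beta, beta_im)
    return {"A": j_ab * j_a * j_b,
            "B": j_im * j_a * j_b,
            "C": j_ab * j_im * j_a * j_b,
            "D": j_ab * j_im}
\end{verbatim}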
The result of this experiment is displayed in Figure \ref{fig:comparison_affinities}, where we see that all matching affinities decrease with respect to $s$ and that the cutoff value is $1$. Affinities $A, B,$ and $C$ decrease in very similar, roughly linear fashions, whereas affinity $D$ exhibits a distinct plateau-like behavior. We now investigate this phenomenon further. Assume that we have two persistence bars \(\alpha\) and \(\beta\) matched via their image-bars \(\tilde{\alpha}\) and \(\tilde{\beta}\). We know that the birth times of \(\alpha\) and \(\tilde{\alpha}\), and of \(\beta\) and \(\tilde{\beta}\), coincide, respectively, and that the bars \(\tilde{\alpha}\) and \(\tilde{\beta}\) have the same death time. Thus, having a high affinity $A$, for instance, depends on the following two phenomena. Firstly, the bars \(\alpha\) and \(\beta\) must have similar birth and death times. This means that the cycles in \(X\) and \(Y\) that generated them should be similar in size. Secondly, the death times of the image-bars should also be similar to the death times of the original bars. Geometrically, this means that the cycles that generated the bars should overlap substantially when considered in the union of the point clouds. These ideas are illustrated in Figure \ref{fig:3_matches}, where the affinity $A$ of a match of two circles decreases significantly when the circles have different centers or radii. Returning to Figure \ref{fig:comparison_affinities} with these ideas in mind, we see that there are a few noticeable but not fundamental differences between the affinities $A, B,$ and $C$. Indeed, the Jaccard indices \(\mathrm{Jac}(\alpha, \beta)\) and \(\mathrm{Jac}(\tilde{\alpha}, \tilde{\beta})\) have similar magnitudes---if a pair of cycles are similar in size in the spaces \(X\) and \(Y\), the cycles corresponding to their image-bars in the union will also have similar sizes. This implies that the affinities $A$ and $B$ are very similar. Affinity \(C\) is slightly lower than affinities \(A\) and \(B\) only because it has an extra multiplicative factor of magnitude less than one. It also makes sense that the matching affinities $A, B,$ and $C$ drop when the spatial overlap between cycles decreases. This happens because these affinities include the Jaccard indices between bars and image-bars, which are influenced by this overlap. The matching affinity \(D\) does not include such a comparison and thus remains at a higher value until there is no match anymore, at which point it drops abruptly. Such a behavior could be useful in certain situations. In real-life applications, we might be interested in matching topological features that shrink or enlarge significantly, or that appear misplaced in the samples. Matching affinity $D$ would then be more sensitive to these matches, assigning them a higher prevalence score. However, this feature could be undesirable in other contexts, as we discuss in the next observation. We now need to check whether the four affinities are consistent with the original motivation, namely the condition proposed by \cite{reani_cycle_2021}: random cycles that appear in resamplings and get matched several times should be assigned a low prevalence score.
Similarly to \cite{reani_cycle_2021}, we consider a uniform sampling of the unit square with \(N_{\mathrm{ref}} = 1000\) points and compute the prevalence scores of its bars by finding matches with \(K = 20\) different resamplings of \(N =1000\) points from that same distribution. The results of this experiment are given in Figure \ref{fig:random_cycles}. For affinity $D$, some random cycles are assigned quite high prevalence scores, in the range $0.6$--$0.7$. This must be taken into account when interpreting the prevalence scores in applications using affinity $D$. \section{Applications} \label{sec:applications} In this section, we demonstrate with numerous examples that cycle matching can be applied to real-life, large-scale, complex datasets from biology and astrophysics. Several applications motivate the usage of cycle matching and prevalence. First, we can identify common topological features shared by two spaces as a direct application of cycle matching. We can use this to track features both spatially and over time, on consecutive slices of an object or on consecutive time frames. We demonstrate both of these applications in this section. Second, the most prevalent features in data can be identified by applying cycle matching repeatedly after resampling from the same distribution. In this way, we can detect prevalent cycles in large-scale and complex data. We demonstrate this on cosmic web data and cell actin network data. Prevalence gives rise to an enriched visualization via the \emph{prevalence-augmented barcode}, where length corresponds to persistence while thickness and color correspond to prevalence. \subsection{Tunneling: Tracking Intervals Over Slices} \label{sec:track_slices} As a first application of cycle matching, we tracked intervals over two-dimensional slices of three-dimensional objects. In biomedical imaging, for instance, it is common that data are made up of spherical or tubular elements, as in vessels and other biological organs with channeling functions. Using cycle matching, we can match the closed contours delimiting these spherical or tubular elements across slices. \paragraph{Data: Lateral Line in Zebrafish.} To demonstrate this application, we used a biological imaging dataset, namely the dataset with image ID 9836972 provided by \cite{10.7554/eLife.55913}. This is a stack of two-dimensional confocal images of the zebrafish posterior lateral line primordium (pLLP). The pLLP is a primitive expression of the lateral line---an organ in fish that allows them to detect the pattern of water flow over their body surface. It appears at the embryonic stage in the form of a rosette-shaped cluster of cells. The circular contours visible in the images of the dataset (see Figure \ref{fig:2D_slices}) are precisely these cells; we track them along the height of the stack. We considered a stack of \(15\) images of \(300 \times 300\) pixels each, with a \(0.66\,\mu\mathrm{m}\) gap between consecutive images and a resolution of \(0.1\,\mu\mathrm{m}\) per pixel. We thresholded the images with the Otsu method and sampled them with \(N = 1000\) points. We matched persistence intervals between pairs of Vietoris--Rips filtrations on consecutive slices. Some features were matched on consecutive frames and formed a tunnel: we were able to detect $32$ such tunnels, each stained with a different color.
We computed the geometric generators drawn here to represent the matched persistence intervals using the ripser-tight-representative-cycles module of Ripser \citep{bauer_ripser_2021}. The results are shown in Figure \ref{fig:2D_slices}. As a byproduct of this approach, we can identify the slices on which a cell appears and then disappears. \subsection{Video Data: Tracking Features Over Time} \label{sec:track_video} Cycle matching can be used to track topological features over time, by matching the barcodes of consecutive frames in a video or, in a biological context, at different stages of disease development. This method detects common topological patterns surviving across consecutive time points and quantifies the quality of the match through the affinity scores. \paragraph{Data: Heart Valves in Zebrafish.} To illustrate this application, we analyzed a video of the atrioventricular valve (AVV) of a wild-type AB zebrafish from \cite{scherz_developing_hearts_2008}. The video is taken $76$ hours post fertilization, at a rate of one frame every $50$ milliseconds. The specimen studied comes from a transgenic line that allows the monitoring of the two chambers that make up the primitive heart of the zebrafish embryo, and of how their contraction over time generates embryonic heartbeats. The contraction is especially pronounced for the right chamber, as can be seen in Figure \ref{fig:heartbeat}. We selected $10$ frames capturing one contraction and matched cycles on consecutive frames (Figure \ref{fig:heartbeat}). We sampled \( N = 500\) points on each of the images after applying a thresholding technique based on the mean of the gray-scale values. We successfully detected the persistence intervals delimiting the two chambers and tracked them across all consecutive frames. We also tracked their size variation. Note that the matching affinities are much more variable for the cycle on the right (in red), which is expected since the right chamber changes shape abruptly. As before, we show on each frame of Figure \ref{fig:heartbeat} the generator associated to each persistence interval, obtained with the ripser-tight-representative-cycles module of Ripser, using the same color to stain matched cycles. \paragraph{Data: Time-Lapse Images of Human Embryos.} As another example of feature tracking over time, we tracked intervals on $10$ consecutive frames of time-lapse embryo data from \cite{GOMEZ2022108258} (Figure \ref{fig:embryogenesis}). A time-lapse imaging (TLI) system with a special camera captures images of a human embryo every 50 to 100 minutes, featuring different stages of cellular division. We matched samples of \(N =500\) points on the images after applying a Sato operator and a threshold using the Otsu method. In particular, we were able to detect cell division as the appearance of a new topological feature (in red). \subsection{Prevalent Cycles} \label{sec:appli_prevalent} Our next application is to find prevalent cycles in order to reveal significantly organized topological patterns in data. We demonstrate this on cosmic web data (Figure \ref{fig:cosmic}) and cell actin data. Recall that this consists of comparing multiple resamplings $X^{(1)},\ldots,X^{(K)}$ to a reference space $X_{\rm ref}$ and finding all possible matching pairs of persistence intervals between $X_{\rm ref}$ and any $X^{(k)}$ for \(1\leq k\leq K\); a minimal sketch of this pipeline is given below.
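The sketch assumes that the \(K\) matchings have already been computed and are stored as dictionaries mapping the index of a reference bar to the affinity of its (unique) match in the corresponding resampling; this interface is our own choice of illustration.
\begin{verbatim}
def prevalence_scores(ref_bars, matchings):
    """Prevalence of each bar of the reference barcode.

    ref_bars : list of (birth, death) bars of X_ref.
    matchings: list of K dicts, one per resampling X^(k), mapping the
               index of a reference bar to the affinity of its match
               there; absent keys mean no match (affinity 0).
    """
    K = len(matchings)
    return [sum(m.get(i, 0.0) for m in matchings) / K
            for i in range(len(ref_bars))]
\end{verbatim}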
This approach becomes especially useful in situations where the initial (unknown) distribution has high topological complexity and we only have access to point cloud samples. In other situations, the distribution may be partially or even entirely known---for instance, if we are given an image from which to sample points. An image may often suffer from noise or contrast variations, but we can still recover the true cycles of the object. Even in the absence of noise, brighter and weaker signals may capture interesting information (for instance, depth in 2D images); prevalence takes this into account. We can also study the profile of prevalence scores corresponding to an image to characterize its topological structure, in comparison to another image. We can visualize this using prevalence-augmented persistence barcodes, where the length of a bar still represents persistence, while its thickness and color represent prevalence. \paragraph{Data: The Cosmic Web.} First, we identified prevalent cycles in the cosmic web, based on the point distribution of galaxies from the BOSS CMASS database \citep{dawson_baryon_2013}. Matter in the universe is arranged along an intricate pattern involving filamentary structures \citep{de_lapparent_slice_1986, york_sloan_2000}. However, the reconstruction of these filaments is still challenging; multiple methods to address this reconstruction problem have been proposed \citep{malavasi_characterising_2020}. Instead of detecting filaments with an uncertainty score \citep[e.g.,][]{duque_novel_2022}, we propose to detect cycles with a prevalence score. The version of the BOSS CMASS dataset used here was first released in SDSS DR12 \citep{alam_eleventh_2015} and is included in the current data release SDSS DR17 \citep{abdurrouf_seventeenth_2022}. We selected galaxies with right ascension $170 < \mathrm{RA} < 190$, declination $30 < \mathrm{dec} < 50$, and redshift $0.564 < z < 0.57$, and projected the points onto the $(\mathrm{RA},\, \mathrm{dec})$ 2D space. We sampled the reference space $X$ using $N_\mathrm{ref} = 1000$ points and resampled the dataset $K = 20$ times with $300$ points in each resampling $X^{(k)}$, adding Gaussian noise of magnitude $0.1$ to each point. We thus performed $20$ comparisons of barcodes to find matching cycles. The results are shown in Figure \ref{fig:cosmic}, where cycles with different prevalence scores can be visualized. \paragraph{Data: Cell Actin.} Next, we computed the prevalence of cycles in biological imaging data of cell actin. Actin networks are essential in scaffolding the inner structure of cells, enabling in particular cell motility and reshaping. Figure \ref{fig:actin_data}, featuring data from \cite{svitkina_arp23_1999}, shows a significant loss of actin filaments in the rear of the lamellipodium network due to the absence of some stabilizing chemicals during extraction. We selected three crops, I, II, and III, from this image, in which the actin filaments are sparse, half-sparse and half-dense, and dense, respectively. We then thresholded each cropped image using the Otsu method to segment the filaments, and restricted the original pixel intensities to these filaments to obtain a discrete probability distribution. Points were sampled from this distribution and their spatial coordinates were perturbed with Gaussian noise of standard deviation equal to $10\%$ of a pixel side; a sketch of this sampling procedure is given below.
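A minimal sketch of this image-to-point-cloud sampling step follows; it assumes \texttt{scikit-image} for the Otsu threshold, and the function name and defaults are our own.
\begin{verbatim}
import numpy as np
from skimage.filters import threshold_otsu

def sample_points(img, n, jitter=0.1, rng=None):
    """Sample n points from an image: Otsu-segment the foreground,
    weight pixels by intensity, draw with replacement, then jitter
    coordinates by a Gaussian of std `jitter` (in pixel units)."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = img > threshold_otsu(img)
    probs = np.where(mask, img, 0).astype(float)
    probs /= probs.sum()
    flat = rng.choice(img.size, size=n, p=probs.ravel())
    rows, cols = np.unravel_index(flat, img.shape)
    pts = np.stack([cols, rows], axis=1).astype(float)  # (x, y) pairs
    return pts + rng.normal(scale=jitter, size=pts.shape)
\end{verbatim}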
We show the results of our computations in Figures \ref{fig:actin_stained_cycles}, \ref{fig:actin_overlayed_cycles}, \ref{fig:actin_barcodes}, \ref{fig:actin_prevalence_barcodes}, \ref{fig:actin_scores}, and \ref{fig:actin_persist_vs_preval}. We found that larger voids in the structure of the actin mesh led to larger cycles of higher prevalence, as in crop I, whereas small voids in denser parts led to smaller cycles of lower prevalence, as in crop III. As seen in Figures \ref{fig:actin_stained_cycles} and \ref{fig:actin_overlayed_cycles}, we correctly found large cycle representatives in crop I, small ones in crop III, and a mix of small and large ones in crop II, at the transitional region between the rear and the front of the lamellipodium. The usual persistence barcodes (Figure \ref{fig:actin_barcodes}) show numerous short-lived bars in crop III, but some longer-lived features for crops I and II. This barcode information can be enriched by visualizing the prevalence as the thickness and color of a bar (Figure \ref{fig:actin_prevalence_barcodes}). It is interesting to note that the highest prevalence scores throughout the selected data were found in crop II, corresponding to two highly prevalent cycles (see Figure \ref{fig:actin_scores}). Their scores are higher than those from crop I, due to contrast variations of larger amplitude along the filaments of crop II. Indeed, prevalence can be interpreted as a certainty measure of topological features. Finally, a scatter plot of prevalence versus persistence scores (Figure \ref{fig:actin_persist_vs_preval}) confirms that prevalence is not monotone with respect to persistence: longer intervals do not necessarily correspond to more prevalent features. \subsection{A Note on Computational Runtime} As mentioned previously in Section \ref{subsec:implementation}, an advantage of the framework proposed by \cite{reani_cycle_2021} is that the matching procedure is easily parallelizable. With access to standard institutional high performance computing (HPC) resources---CPU processing only, a single node, one CPU per task, and at most 30 GB of memory per CPU---the problem of identifying prevalent cycles or matching intervals in spaces with about $100$--$1000$ points generally reduces to a matter of minutes, ranging from seconds to a few hours for the applications showcased here. We present in Table \ref{tab:runtime_tracking} the runtimes corresponding to the real datasets from Section \ref{sec:track_slices} and Section \ref{sec:track_video}, where we track topological features over a set of frames. We include the number of points in the samples \(N\) and the number of matchings performed \(K\). In Table \ref{tab:runtime_real}, we report the runtimes associated with the real datasets of Section \ref{sec:appli_prevalent}, where we compute prevalent features. We include the number of points in the reference space \(N_\mathrm{ref}\), the number of points in the resamples \(N\), and the number of resamples \(K\). Table \ref{tab:runtime_synth} collects runtimes for computing prevalent features on synthetic datasets consisting of point clouds uniformly sampled in the unit square of the plane. Note that we took $N_\mathrm{ref} = N$ in the synthetic examples.
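As a local stand-in for the SLURM or OpenPBS job arrays described in Section \ref{subsec:implementation}, the per-resampling jobs can also be dispatched with Python's standard library; \texttt{match\_job} below is a placeholder for the actual Ripser/Ripser-image pipeline, not part of our released code.
\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor

def match_job(k):
    """One job: compute the barcode of resampling k, the two
    image-barcodes in the union, and the generalized interval
    matching against the reference barcode (placeholder)."""
    return k  # stand-in for the matching results of resampling k

def run_all(K, max_workers=None):
    """Run the K matching jobs in parallel; the wall-clock time is
    governed by the slowest job, not the sum over all jobs."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(match_job, range(K)))
\end{verbatim}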
In Table \ref{tab:runtime_tracking}, one runtime corresponds to the computation of \begin{enumerate}[(i)] \item the barcodes of the two samplings on consecutive frames; \item the two image-barcodes of those samplings in their union; and \item the matching itself, which compares the barcodes. \end{enumerate} In Table \ref{tab:runtime_real} and Table \ref{tab:runtime_synth}, step (i) only covers the computation of the barcode of the resampling \(X^{(k)}\) corresponding to that job, and in step (ii) we compute the image-persistence of the reference space \(X_\mathrm{ref}\) and the resampling \(X^{(k)}\) in their union. The runtime needed to compute the barcode of the reference space $X_\mathrm{ref}$---which takes just a few seconds and is computed once before the parallel jobs---is not included in Tables \ref{tab:runtime_real} and \ref{tab:runtime_synth}. The computational bottleneck here is not the number of matchings $K$ but the numbers of points $N$ and $N_\mathrm{ref}$ sampled in the respective spaces for each matching. The number of matchings could be increased arbitrarily (up to the capacity of the HPC) without affecting the total computational runtime, which remains on the order of minutes or a few hours. Increasing it allows for more precise estimations of, for instance, the average runtime needed to find all possible matchings between two spaces. However, the median runtime increases following a power law $T \sim \mathrm{cst} \cdot N^p$, where $\mathrm{cst}$ is a constant and $p \simeq 3.266$ (see Figure \ref{fig:runtime_regression}), so that increasing the number of sampled points $N$ by a factor of $10$ would increase the runtime by a factor of $10^p \simeq 1845$. \begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{|c|}{Runtime (minutes)} & \multicolumn{2}{|c|}{Parameters} \\ \cline{2-7} & min & max & average & median & $N$ & $K$ \\ \hline Lateral Line & 31 & 281 & 124 & 95 & 1000 & 14 \\ \hline Heart Valves & 3 & 21 & 11 & 8 & 500 & 9\\ \hline Time-lapse Embryo & 5 & 31 & 15 & 15 & 500 & 9\\ \hline \end{tabular} \caption{Computational Runtimes of Tracking Experiments on Real Data.} \label{tab:runtime_tracking} \end{table} \begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{|c|}{Runtime (minutes)} & \multicolumn{3}{|c|}{Parameters} \\ \cline{2-8} & min & max & average & median & $N_\mathrm{ref}$ & $N$ & $K$ \\ \hline Actin Crop I & 83 & 360 & 184 & 178 & 1200 & 500 & 30 \\ Actin Crop II & 58 & 377 & 146 & 118 & 1200 & 500 & 30 \\ Actin Crop III & 65 & 310 & 164 & 172 & 1200 & 500 & 30 \\ \hline Cosmic Web & 8 & 29 & 15 & 14 & 1000 & 300 & 20 \\ \hline \end{tabular} \caption{Computational Runtimes of Prevalence Experiments on Real Data.} \label{tab:runtime_real} \end{table} \begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{|c|}{Runtime (seconds or minutes)} & \multicolumn{3}{|c|}{Parameters} \\ \cline{2-8} & min & max & average & median & $N_\mathrm{ref}$ & $N$ & $K$ \\ \hline Synth. 1 & 1.81 s & 5.61 s & 3.43 s & 3.34 s & 100 & 100 & 30 \\ Synth. 2 & 8.99 s & 48.32 s & 23.47 s & 20.17 s & 200 & 200 & 30 \\ Synth. 3 & 25.68 s & 2 min 44 s & 1 min 25 s & 1 min 18 s & 300 & 300 & 30 \\ Synth. 4 & 3 min 13 s & 15 min 31 s & 8 min 12 s & 8 min 5 s & 500 & 500 & 30 \\ Synth. 5 & 13 min 19 s & 88 min 54 s & 46 min 18 s & 39 min 48 s & 800 & 800 & 30 \\ Synth. 6 & 38 min 41 s & 189 min 33 s & 95 min 16 s & 92 min 10 s & 1000 & 1000 & 30 \\
\hline \end{tabular} \caption{Computational Runtimes of Prevalence Experiments on Synthetic Data. Median runtime increases as a power law $T \sim \mathrm{cst} \cdot N^{3.266}$ with respect to $N$ (see Figure \ref{fig:runtime_regression}).} \label{tab:runtime_synth} \end{table} \subsection*{Software and Data Availability} The code used to perform all experiments in this paper is freely and publicly available at our project GitHub repository \url{https://github.com/inesgare/interval-matching}. It is fully adaptable for individual user customization. Where possible, the data we used in this section are also provided in the same GitHub repository, so that all experiments and examples in our paper are fully reproducible. Note that some of the data used here required an institutional materials transfer agreement, so these data were not made available in our repository. \section{Discussion} \label{sec:end} In this paper, we studied the problem of identifying topologically significant features in noisy data, where the usual measure of persistence---the length of an interval in a persistence barcode---is unsatisfactory. We also studied the problem of comparing barcodes over different filtrations and identifying correspondences between persistence intervals. To date, the various existing proposals for these problems have faced significant computational limitations. The main contribution of our work is an extension of existing notions of topological significance and cycle matching to provide the most general and flexible definitions, which we then implement using the dual perspective of cohomology to achieve the fastest available identification of prevalent cycles as well as cycle matching. Our implementation now makes these approaches practical and applicable to real-life, large-scale, complex datasets, with execution times ranging from a matter of minutes to a few hours for $100$--$1000$ sampling points, using only standard institutional HPC facilities. Our work inspires several directions for future research, which we now discuss. First, one natural question is to understand the behavior of the prevalence-augmented barcodes as the number of resampling spaces $K$ and the number of sampling points $N$ for each space increase to infinity. It will be important to understand how, in the limit, augmented barcodes are related to the original distribution, to quantify the rate and type of convergence, and to study how the choice of filtration (Rips, \v{C}ech, coupled-Alpha \citep{reani_coupled_2021}) and of affinity ($A$, $B$, $C$, $D$) affects the convergence, if at all. This would then allow us to design probabilistically-founded statistical tests, for example, to determine whether a collection of point clouds has been sampled from a specific distribution with known barcode, based on suitable confidence intervals. Developing such a framework would provide a practical approach to choosing a threshold for identifying ``true'' cycles in the original data as the bars whose prevalence is higher than $1 - \epsilon$. This could also have practical implications for certain applications, for example, by directly extracting ``true'' topological signal in complex settings such as imaging data, perhaps even bypassing the need for computationally expensive procedures such as image segmentation.
Another theoretical question of interest is the study of the new metric $d_{\mathrm{IM}_p}$ on persistence modules introduced in Section 6.2 of \cite{reani_cycle_2021}. This metric compares matched intervals between two modules directly, rather than comparing birth--death times between persistence diagrams, which discards important information about spatial correspondence, as explained previously. It is reasonable to expect that, as $N$ increases, the distance between two persistence modules converges to zero if the point clouds are drawn from the same distribution. Likewise, this could be an alternative approach towards the design of a (non-parametric) statistical test to determine whether two point clouds were sampled from the same distribution. \section*{Acknowledgments} We wish to thank Omer Bobrowski, Dominique Bonnet, Vin de Silva, Marc Glisse, Yohai Reani, Wojciech Reise and Iris Yoon for helpful conversations. This work was supported by the Engineering and Physical Sciences Research Council [EP/S021590/1] via the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London. I.G.R.~is funded by the UK EPSRC London School of Geometry and Number Theory and Imperial College London. A.S.~is funded by a joint Imperial College London--Francis Crick Institute PhD studentship. We wish to acknowledge The Francis Crick Institute, London, and the Information and Communication Technologies resources at Imperial College London for the computing resources used to implement the experiments and data applications in this paper. We would like to thank the projects involved in \cite{10.7554/eLife.55913}, \cite{scherz_developing_hearts_2008} and \cite{GOMEZ2022108258} for these datasets and their public accessibility. We are grateful to the contributors of the Cell Image Library (CIL) \citep{ellisman_cell_2021}, located at \url{http://www.cellimagelibrary.org}, for making their data repository and resources available for public access. Funding for SDSS-III has been provided by the Alfred P.~Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. \clearpage \section{Figures} \begin{figure}[!ht] \centering \includegraphics[clip, width=0.5\linewidth]{comparisong_affinities.png} \caption{Mean affinity of the matches between two circles of radius 1 with centers shifted according to the horizontal axis.
The circles were sampled with \(N = 100\) points and no added noise. We considered 15 equidistant distances between 0 and 1 and took 15 samples at each step.} \label{fig:comparison_affinities} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}{0.45\linewidth} \includegraphics[width=\linewidth]{match_circles_1.png} \caption{A match with affinity \(\rho_A = 0.9435\).} \label{subfig:first_match} \end{subfigure} \hfill \begin{subfigure}{0.5\linewidth} \includegraphics[width=\linewidth]{match_circles_2.png} \caption{A match with affinity \(\rho_A = 0.2724\).} \label{subfig:second_match} \end{subfigure} \hfill \begin{subfigure}{0.45\linewidth} \includegraphics[width=\linewidth]{match_circles_3.png} \caption{A match with affinity \(\rho_A = 0.2649\).} \label{subfig:third_match} \end{subfigure} \caption{Three matches between samples of 100 points of two circles with Gaussian noise of magnitude 0.1 added. In Figure \ref{subfig:first_match}, the circles have the same radii and centers; in Figure \ref{subfig:second_match}, the circles have the same radii and centers 0.7 units of length apart; and in Figure \ref{subfig:third_match}, the circles have coinciding centers and radii 1 and 1.5.} \label{fig:3_matches} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width = \linewidth]{random_cycles_1000.png} \caption{Most matched intervals in resamplings of \(N=1000\) points of the uniform distribution in the unit square. Top left: frequency of reappearance of the 15 most frequently matched intervals. Remainder of the figure: cycles representing the persistence intervals of \(X_{\mathrm{ref}}\), stained by their prevalence score using the affinity specified above each image.} \label{fig:random_cycles} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width = \linewidth]{pLLP/image_pLLP_1000_otsu.png} \caption{Cycle matching to track cell contours on images of slices of the posterior lateral line primordium (pLLP) of the zebrafish. We applied an Otsu threshold and took samples of \(N = 1000\) points on the images. Cycles matched across consecutive slices can be grouped into tunnels, each stained with a different color. Data courtesy of \cite{10.7554/eLife.55913}.} \label{fig:2D_slices} \end{figure} \begin{figure}[!ht] \centering \includegraphics[trim = {0 0 0 0}, clip, width=\linewidth]{heartbeat zebrafish/heartbeat_affinities_2.png} \caption{Cycle matching on 10 frames from a video courtesy of \cite{scherz_developing_hearts_2008}. We took samples of \(N = 500\) points and applied a threshold based on the mean of the gray-scale values before sampling. Matched intervals are stained in the same color. Below each image we display the affinity of the match between the interval in that image and the corresponding interval in the subsequent image.} \label{fig:heartbeat} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{embryogenesis/embryogenesis_500.png} \caption{Cycle matching on the time-lapse embryo dataset \citep{GOMEZ2022108258} between samples of \(N = 500\) points on the images after applying a Sato operator and a threshold using the Otsu method. Matched cycles are stained in the same color.
Below each image one can find the affinity of the match between the interval in the image and the corresponding interval in the previous image.} \label{fig:embryogenesis} \end{figure} \begin{figure}[!ht] \centering \includegraphics[trim = {0 2cm 0 2cm}, clip, width=.6\linewidth]{cosmos/cosmic_Nref1000_samp20_N300.png} \caption{Prevalent cycles in the cosmic web. Cycle representatives are stained by prevalence score and galaxies (from the original BOSS CMASS data) are shown as blue dots. We formed the reference space by sampling the galaxies with $N_\mathrm{ref} = 1000$ perturbed points and performed $K = 20$ comparisons to spaces formed by sampling with $N = 300$ perturbed points each.} \label{fig:cosmic} \end{figure} \begin{figure} \centering \includegraphics[trim = {0 1cm 0 2cm}, clip, width=.8\linewidth]{actin/crops.png} \caption{Electron micrograph of the actin network in a Xenopus keratocyte lamellipodium, whose rear part disassembled in the course of unprotected extraction and whose front part remained dense, as in control cells. Selected crops I, II, III (from left to right) are shown as red rectangles. Original image CIL:24800 is from the Cell Image Library database \citep{ellisman_cell_2021}, available under the CC BY-NC-SA 3.0 License, corresponding to Figure 6b of \cite{svitkina_arp23_1999}.} \label{fig:actin_data} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.92\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip, width=1\textwidth]{actin/all_stained_cycles_Nref1200_samp30_N500.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.06\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip,width=\textwidth]{actin/all_colorbar.png} ~~~ \end{subfigure} \caption{Prevalent cycles of reference space $X$ with $N_\mathrm{ref} = 1200$ points (shown as blue dots), based on $K = 30$ resampling spaces of $N = 500$ points. Cycle representatives are stained by prevalence score. From left to right: crops I, II, III. Colorbar describes prevalence scores.} \label{fig:actin_stained_cycles} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.92\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip, width=1\textwidth]{actin/all_overlayed_Nref1200_samp30_N500.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.06\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip,width=\textwidth]{actin/all_colorbar.png} ~~~ \end{subfigure} \caption{Prevalent cycles overlaid on the original image (see Figures \ref{fig:actin_data} and \ref{fig:actin_stained_cycles}). From left to right: crops I, II, III. Colorbar describes prevalence scores.} \label{fig:actin_overlayed_cycles} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.92\textwidth} \centering \includegraphics[trim = {0 0 1.8cm 0}, clip, width=1\textwidth]{actin/all_barcode_Nref1200_samp30_N500.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.06\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip,width=\textwidth]{actin/all_colorbar.png} ~~~ \end{subfigure} \caption{Persistence barcodes of reference space $X$. Values on the horizontal axis correspond to birth time and the length of a bar to persistence. From left to right: crops I, II, III.
Colorbar describes prevalence scores.} \label{fig:actin_barcodes} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.92\textwidth} \centering \includegraphics[trim = {0 0 1.8cm 0}, clip, width=1\textwidth]{actin/all_augmented_barcode_Nref1200_samp30_N500.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.06\textwidth} \centering \includegraphics[trim = {0 0 0 0}, clip,width=\textwidth]{actin/all_colorbar.png} ~~~ \end{subfigure} \caption{Prevalence-augmented persistence barcodes of reference space $X$. Thickness and color of a bar correspond to prevalence score. Values on the horizontal axis correspond to birth time and the length of the bars to persistence. From left to right: crops I, II, III. Colorbar describes prevalence scores.} \label{fig:actin_prevalence_barcodes} \end{figure} \begin{figure}[!ht] \centering \includegraphics[trim = {0 0 0 0}, clip, width=\linewidth]{actin/all_scores_Nref1200_samp30_N500.png} \caption{Prevalence scores, sorted by birth time of the persistence interval. One dot stands for one interval. From left to right: crops I, II, III.} \label{fig:actin_scores} \end{figure} \begin{figure}[!ht] \centering \includegraphics[trim = {0 0 0 0}, clip, width=\linewidth]{actin/all_persist_vs_preval_Nref1200_samp30_N500.png} \caption{Persistence vs.\ prevalence scores in the augmented barcode of the reference space $X$. One dot stands for one persistence interval. From left to right: crops I, II, III.} \label{fig:actin_persist_vs_preval} \end{figure} \begin{figure}[!ht] \centering \includegraphics[clip, width=\linewidth]{runtimes_regression.png} \caption{Median runtime vs.\ number of sampling points $N$ for the synthetic examples of Table \ref{tab:runtime_synth}. Median runtime increases as a power law $T \sim \mathrm{cst} \, N^{3.266}$ with respect to $N$ (a linear regression on the logarithmic scale gave a score of $0.996$, coefficient $3.266$ and intercept $-14.087$ with respect to the data points of Table \ref{tab:runtime_synth}). Left: linear-linear scale. Right: log-log scale. The dashed line corresponds to the result of the linear regression on the log-log scale.} \label{fig:runtime_regression} \end{figure} \clearpage \newpage \bibliographystyle{authordate3}
\section{Introduction} \label{sec:intro} The OECD-FAO Agricultural Outlook 2022--2031 \cite{oecd2022oecd} projects a requirement of 24\% agricultural productivity growth to meet the twin challenges of zero hunger and reduction in greenhouse gas emissions. However, with a projected decrease of 20\% in real food prices by 2031, farming has become an increasingly unattractive economic proposition, especially with the increasing industrialization and urbanization of low-income countries. This necessitates increasing automation of agricultural activities, with a shift from large-scale mechanization and indiscriminate chemical interventions to targeted precision agriculture using robotics and artificial intelligence. \begin{figure}[t] \centering \includegraphics[width=0.97\columnwidth]{images/cover_picture_11.pdf} \captionsetup{width=0.99\columnwidth, justification=justified} \caption{Shape completion based viewpoint planning. Here, the red clusters denote the point clouds for the yellow and green sweet peppers perceived by the Realsense L515 sensor on a UR5e arm. Completed shapes for the partially detected fruits are estimated in real time, which are then used to predict new viewpoints to cover the fruits.} \label{fig:cover} \end{figure} Robotics is being used for varied activities in agriculture such as crop monitoring, land preparation, plant treatment, harvesting, plant phenotyping, and yield estimation \cite{oliveira2021advances}. In the context of precision agriculture, fruit detection and mapping is key to yield estimation and harvesting. Unlike objects found in industrial and household scenarios, fruits and plants change their color, size, and shape over time, requiring spatio-temporal mapping of their positions and sizes for phenotyping and crop management decisions. However, frequent occlusion of fruits and the variation in their position on plant structures make the reliable perception of fruits a challenging task. State-of-the-art active perception for horticulture robots focuses on image-based detection and segmentation as well as visual servoing techniques, presenting a research gap in shape-based active perception~\cite{magalhaes2022active}. Whereas we earlier investigated the benefits of shape completion for improving the accuracy of fruit size estimation using data fused from different poses generated by our RoI viewpoint planner \cite{marangoz2022fruit, zaenker2020viewpoint}, we now close the loop by feeding the shape completion results to the viewpoint planner to inform where to plan viewpoints for maximizing the information gain. In more detail, we present a framework for shape completion based viewpoint planning that uses the predicted shape structure to focus the sensor's attention on the intersection between unknown regions and missing surfaces. As can be seen in \figref{fig:cover}, shape completion enables us not only to predict the size and position of partially detected fruits, but also to guide the sensor to the next best view using the predicted shapes. We utilize two different approaches to shape completion~\cite{marangoz2022fruit, rodriguez2018transferring} and perform a comparative analysis of their efficacy in guiding the viewpoint planning. To summarize, our contributions are the following: \begin{itemize} \item Adaptation and integration of two shape completion methods to predict fruit shapes. \item A novel viewpoint planning approach that uses the predicted surfaces of the fruit shapes for finding the next best view.
\item Formulation of viewpoint dissimilarity to find new viewpoints in far-away regions. \item Quantitative simulation experiments that demonstrate superior performance of our planner compared to viewpoint planning without shape completion in terms of estimated volume and reconstruction accuracy. \item Qualitative experiments with a real robotic platform measuring sweet pepper plants in a commercial glasshouse. \end{itemize} \section{Related Work} \label{sec:related} With robots being increasingly deployed outside structured industrial scenarios, active perception is a key factor in improving their efficacy \cite{zeng2020view}. Viewpoint planning is a subset of active perception where the sensor pose or a sequence of poses is planned to maximize the information gain, i.e., minimize the entropy about the state of the environment or target objects, subject to constraints such as obstacle avoidance and movement cost. For detail-oriented tasks, especially at the object level, such as active recognition, pose estimation \cite{atanasov2014nonmyopic}, and mapping or reconstruction \cite{li2005information, kriegel2015efficient}, manipulators or mobile manipulators are typically used with attention-driven next best view (NBV) planning. In reconstruction tasks, it is typically assumed that the object is not occluded by the environment and, given enough views, can be completely perceived by the viewpoint planning system. However, fruit mapping is a task where the objects of interest are highly occluded due to fruits growing under the leaves and hence might never be fully reconstructed. Soria\etal \cite{ramon2017multi} developed a multi-view reconstruction method for apples in commercial orchards using probabilistic segmentation, whereas Sarabu\etal \cite{sarabu2019leveraging} proposed a dual-arm system for cooperative apple picking with both arms equipped with RGB-D sensors for volumetric surveying and grasping, using graph-based planning. Zaenker\etal \cite{zaenker2020viewpoint} developed RoI targeted viewpoint planning where contours of the detected RoIs are selected as targets for the next best view. While this approach shows an improvement in volume estimation compared to general viewpoint planners, it does not utilize shape information to find the next view. Burusa\etal \cite{burusa2022attention} demonstrated that placing 3D bounding boxes on different parts of the plant, such as the stem, leaf nodes, and the whole plant, as an attention mechanism to guide the volumetric NBV planner led to significant improvements in the accuracy and speed of reconstruction. However, the 3D bounding boxes were defined by the user and not autonomously generated. Unlike in industrial scenarios, where object congruence in terms of shape is a reasonable assumption, in agricultural and household scenarios, even objects belonging to the same class are only similar in shape, leading to the need for deformable shape completion \cite{mees2019self}. In the field of manipulation planning, shape completion is being increasingly used to improve the robustness of grasp planning \cite{lundell2019robust, varley2017shape, gualtieri2021robotic}, whereas in the agricultural context, shape completion has been used for fruit mapping and localization \cite{ge2020symmetry, gong2022robotic, magistri2022contrastive}, without using its output for viewpoint planning.
Volumetric occupancy mapping with probabilistic depth completion \cite{popovic2021volumetric} and prediction of unobserved space using depth map augmentation \cite{fehr2019predicting} have been used for navigation planning of mobile robots and micro-aerial vehicles, respectively, without any exploration planning. In the context of exploration planning, shape priors have been used for guiding the exploration of objects using active tactile sensing \cite{meier2011probabilistic, smith2021active}. Recently, Schmid\etal~\cite{schmid2022scexplorer} have demonstrated the application of incremental semantic scene completion for informative path planning of micro-aerial vehicles for efficient environment coverage. While this work is similar to ours, \mbox{Schmid\etal} focus on efficient coverage of complete scenes and not on the precise mapping of particular objects. Furthermore, in contrast to \mbox{Schmid\etal}, who update the actual scene mapping based on the scene completion, we mark shape-completed regions as unknown in terms of occupancy but with high region of interest probability scores, thus leading to a more conservative application of shape predictions while still using them for guiding the NBV planner to regions of interest. To the best of our knowledge, next best view planning for object mapping and reconstruction based on iterative deformable shape completion has not been carried out to date. \section{Our Approach} \label{sec:approach} \begin{figure*}[t] \includegraphics[width=\linewidth]{images/system_overview_shape_completion_rvp.pdf} \captionsetup{justification=justified} \caption{Overview of our system. The blue block represents RoI detection (\secref{subsec:roi_detection}), the yellow block represents shape completion (\secref{subsec:shape_completion}), the green block represents occupancy mapping with predicted regions of interest (\secref{subsec:predicted_roi}), and the purple block represents viewpoint planning (\secref{subsec:viewpoint}).} \label{fig:system_overview} \end{figure*} Our work extends the RoI based viewpoint planner~\cite{zaenker2020viewpoint} with the output of shape completion, not only to estimate the location and size of the fruits, but also to feed back to the viewpoint planner useful viewpoints on predicted regions of interest. \figref{fig:system_overview} gives an overview of our shape completion based RoI viewpoint planning. The RoI detection module detects the fruit point clouds, which are then fed to the shape completion module as well as the RoI occupancy mapping module. The shape completion module estimates the completed shapes for the fruit clouds, which are then fed to the occupancy mapping module to estimate the predicted regions of interest, with their predicted RoI values. Finally, viewpoints are sampled around the predicted RoIs to obtain better views of the fruits, which in turn are used to generate an improved shape prediction. The individual steps are described in more detail in the following subsections. \subsection{Region of Interest Detection} \label{subsec:roi_detection} The color image from the RGB-D camera at every sensor pose is forwarded to an HSV-based segmentation method for red sweet peppers in the simulated scenario, and to a Mask R-CNN \cite{he2017mask} based sweet pepper detector for red, green, and yellow peppers in the real-world experiments, to form the RoI, i.e., fruit masks.
These masks are fused with the depth data from the RGB-D camera to form the RoI point cloud, which is then forwarded to the occupancy mapping module for NBV planning, and to the surface mapping module for shape completion. \subsection{Shape Completion} \label{subsec:shape_completion} The shape completion process runs in parallel to viewpoint planning, with the latest shape completion result being used for planning new viewpoints. It is also used for estimating the size and location of the fruits. It takes as input the fruit point clouds and outputs the completed shapes. \subsubsection{Surface Mapping and Clustering} \label{subsubsec:mapping_clustering} The RoI point clouds of the fruits at every observation pose are fed to the voxblox mapping system \cite{oleynikova2017voxblox}, which accumulates the point clouds iteratively to form a truncated signed distance field (TSDF) map, from which the surface point clouds of the fruits are extracted. The surface point cloud is then clustered using the method described in \cite{marangoz2022fruit}. We also estimate the centroid of each cluster by computing the surface normals and calculating the least-squares solution to the intersection of the lines defined by the normals (a short sketch is given at the end of \secref{subsec:shape_completion}). We improved the clustering by performing a sanity check on the cluster size, i.e., clusters whose bounding box dimensions are larger than the typical sweet pepper dimensions are split further. \subsubsection{Shape Fitting} We integrated the following shape completion methods to provide a common shape completor interface, which provides feedback to the viewpoint planner. \textbf{Superellipsoid Fitting (SE):} As in our previous work \cite{marangoz2022fruit}, the clustered surface point clouds are fed to the superellipsoid matcher, which fits a superellipsoid to the respective cluster by optimizing a cost function that minimizes the deviation in cluster center while simultaneously imposing constraints on its dimensions. \\ \textbf{Non-Rigid Shape Registration (SR):} \begin{figure}[b] \centering \includegraphics[width=0.99\columnwidth]{images/shape_registration.pdf} \captionsetup{width=0.99\columnwidth, justification=justified} \caption{Non-rigid shape registration~\cite{rodriguez2018transferring}: The red point cloud shows the canonical model of a sweet pepper, the green point cloud corresponds to a partially observed sweet pepper, and the blue point cloud illustrates the result of the shape registration, where the canonical model is deformed to fit the observed cloud with a small local rigid transformation.} \label{fig:shape_registration} \end{figure} Rodriguez\etal~\cite{rodriguez2018transferring} developed a category-level shape completion approach using a learned latent space projection and local non-rigid registration based on coherent point drift \cite{myronenko2010point} for predicting shapes of partially observed objects for grasp planning. We adapted the approach to predict shapes of fruits by learning a canonical model using meshes of the sweet peppers used in simulation, as shown in \figref{fig:shape_registration}. As the shape registration approach can only deal with local rigid transforms, we first shift the input cluster to the centroid calculated in \secref{subsubsec:mapping_clustering}. Then, the shape registration calculates the deformed completed shape for each cluster along with a small local rigid transform. The deformed shape is then shifted back to its original centroid location and corrected using the generated transform.
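The least-squares intersection used for the cluster centroids above has a simple closed form: each surface point $p_i$ with unit normal $n_i$ defines a line $p_i + t\,n_i$, and the point $c$ minimizing $\sum_i \lVert (I - n_i n_i^T)(c - p_i) \rVert^2$ solves a small $3\times3$ linear system. The following is a minimal NumPy sketch of this standard formulation, not our actual implementation; the function name is hypothetical.

\begin{verbatim}
import numpy as np

def intersect_normals(points, normals):
    # Least-squares intersection of the lines p_i + t * n_i:
    # accumulate the projectors orthogonal to each normal and solve.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, n in zip(points, normals):
        n = n / np.linalg.norm(n)
        P = np.eye(3) - np.outer(n, n)  # projects onto plane orthogonal to n
        A += P
        b += P @ p
    return np.linalg.solve(A, b)        # estimated cluster centroid
\end{verbatim}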
\subsection{Occupancy Mapping with Predicted Regions of Interest} \label{subsec:predicted_roi} The RoI point clouds generated in \secref{subsec:roi_detection}, as well as the complete point cloud at every observation pose, are fed to the RoI Octomap developed in our previous work~\cite{zaenker2020viewpoint}, which augments the Octomap~\cite{hornung13auro} nodes with region of interest values in addition to the existing occupancy information. We then calculate the missing surfaces from the completed shapes and their predicted RoI values, to augment the RoI Octomap with information about predicted RoIs. \subsubsection{Missing Surface Estimator} \label{subsubsec:missing_surface_estimator} The shape completion approaches detailed above fit a complete shape based on the clustered point cloud. However, for viewpoint planning we are only interested in the missing surfaces. Thus, we perform a nearest neighbour search for the predicted cloud on the input cloud and store the Euclidean distance. If this distance is below a threshold (chosen as~1.5\,cm), we remove the point from the predicted cloud. The Euclidean distance $d$ to the nearest point in the input cloud is also used to assign the RoI probability score $p_{roi}$: \begin{equation} \label{eq:roi_prob} p_{roi} = \min\left(\max\left(\exp\left(-\frac{(d - \mu)^{2}}{2\sigma^{2}}\right), 0.5\right), 0.75\right), \end{equation} where $\mu$ and $\sigma$ are parameters set heuristically to influence the probability distribution. The log-odds of the RoI probability score, i.e., $\mathrm{logodds}(p_{roi})$, is used to calculate the RoI value of the point, which is in turn used in \secref{subsubsec:roi_octomap_with_proi} to determine whether or not a node of the occupancy map is a region of interest.
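To make this estimator concrete, the filter and the RoI score of \eqref{eq:roi_prob} amount to only a few lines. The sketch below assumes point coordinates in meters and uses hypothetical values for $\mu$ and $\sigma$ (we set them heuristically); it is an illustration, not the actual implementation.

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def missing_surface(pred_pts, obs_pts, thresh=0.015, mu=0.05, sigma=0.05):
    # Distance from every predicted point to its nearest observed point.
    d, _ = cKDTree(obs_pts).query(pred_pts)
    keep = d > thresh  # drop predicted points already covered by observations
    # RoI probability score, clamped to [0.5, 0.75] as in the equation above.
    p_roi = np.clip(np.exp(-(d[keep] - mu) ** 2 / (2 * sigma ** 2)), 0.5, 0.75)
    return pred_pts[keep], p_roi
\end{verbatim}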
\subsubsection{Predicted Region of Interest Octomap} \label{subsubsec:roi_octomap_with_proi} Zaenker\etal \cite{zaenker2020viewpoint} augmented the Octomap \cite{hornung13auro} nodes with RoI values, depending on whether or not the cloud point belonged to the RoI. Regions of interest are defined as occupied regions with a non-zero log-odds of the RoI probability. We modified the thresholds for node occupancy according to \eqref{eq:occupancy}, similar to Duberg\etal~\cite{duberg2020ufomap}: \begin{equation} \label{eq:occupancy} state_{occ} = \begin{cases} Free~(F) & p_{occ} \leq 0.45 \\ Occupied~(O) & p_{occ} \geq 0.55 \\ Unknown~(U) & \text{otherwise} \end{cases} \end{equation} Similarly, we also modified the RoI states from a binary RoI-NonRoI state to account for predicted regions of interest $PredRoI$ in \eqref{eq:roi}, which are nodes whose occupancy state is unknown and whose RoI value is lower than that for regions of interest: \begin{equation} \label{eq:roi} state_{roi} = \begin{cases} RoI & state_{occ} = O \wedge (p_{roi} \geq 0.75)\\ PredRoI & state_{occ} = U \wedge (0.5 < p_{roi} < 0.75) \\ NonRoI & \text{otherwise} \end{cases} \end{equation} As Octomap does not explicitly store unknown regions as nodes, while inserting predicted RoI nodes we check the Octomap for the predicted cloud point and, only if it does not exist, insert a new node with $p_{occ} = 0.5$ and the respective $p_{roi}$. With every observation, we check the state of the predicted region nodes. The predicted RoI nodes that are observed in the current view and change their occupancy state to \textit{Free} or \textit{Occupied} are removed and added to the list of deleted predicted nodes. In the next round of insertion of predicted RoI nodes, it is verified that they do not belong to the deleted list before being inserted. This ensures that the predicted RoIs decrease over time in the absence of newly detected fruit clusters. The occupancy map with the predicted RoIs and their predicted RoI values is used for viewpoint generation as described below. \subsection{Predicted RoI Based Viewpoint Planning} \label{subsec:viewpoint} \begin{algorithm}[t] \SetAlgoLined { $ pastVps \gets \emptyset$ \\ $ i_{emptyVps} \gets 0 $ \ // empty viewpoint set counter \\ $iter_{max}$ \ // max consecutive iterations of $i_{emptyVps}$ \\ $Vps$ \ // viewpoint sampler output \\ $dissimVps$ \ // dissimilar viewpoints \\ \While{$i_{emptyVps} < iter_{max}$} { $Vps$ = sampleViewpts($pose$, $nVps$, $utilType$) \\ $dissimVps$ = calcVpD($Vps$, $pastVps$) \\ \eIf{$dissimVps \neq \emptyset$} { $ i_{emptyVps} \gets 0 $ \\ \While{$dissimVps \neq \emptyset $ } { $vp$ = extractMax($dissimVps$)\; \If{moveToPose($vp$)} { $ pastVps = pastVps \cup vp$ \\ break\; } } } { // no useful dissimilar viewpoint $i_{emptyVps}\gets i_{emptyVps} +1 $ ; } } } \caption{Viewpoint Similarity Check} \label{algo:viewpoint_similarity_check} \end{algorithm} The viewpoint planner has no model of the environment, and hence we use a random sampling based next best view planner. Viewpoint planning is used here to maximize the information gain about the objects of interest, i.e., fruits, to enable better estimation of their location and volume for yield estimation and for robotic harvesting. \subsubsection{Viewpoint Sampling} As in our previous work~\cite{zaenker2020viewpoint}, we use a combination of two sampling methods: predicted RoI targeted sampling, which uses the predicted regions of interest to find viewpoints that can observe hitherto unseen predicted parts of the fruits, and exploration sampling, which explores unknown regions to find new regions of interest, i.e., fruits, if the utility values of all predicted RoI viewpoints fall below a threshold. We sample candidate viewpoint poses from the target nodes by sampling random sensor distances and viewing directions. We then cast rays from the viewpoint pose according to the sensor's field of view parameters, until they hit an occupied node or the target node. Additionally, we formulated two new methods for sampling around predicted regions of interest. In the first method, we randomly sample nodes from the list of predicted RoI nodes (pRoI). In the second method, we calculate the RoI-weighted mean of the missing surfaces (MSC). These missing surface centers are used as target nodes for viewpoint pose generation. For predicted RoI sampling, nodes with $state_{\mathit{occ}} = Unknown$ in the ray's path are counted to calculate the $entropyGain$, whereas the RoI values of nodes with $state_{\mathit{roi}} = PredRoI$ in the 6-neighbourhood ${\mathit{NB}}_{6}$ of the unknown node are summed to calculate the $roiGain$. The expected information gain $IG$ of a ray is calculated as the weighted sum of $entropyGain$ and $roiGain$ in \eqref{eqn:IG}: \begin{equation} \label{eqn:IG} IG = \alpha \cdot roiGain + (1-\alpha) \cdot entropyGain \end{equation} \subsubsection{Viewpoint Dissimilarity} We observed that the occlusion of fruits by leaves and the limited reachability of manipulators led to the problem of repeated sampling of similar viewpoints, which was exacerbated by inserting predicted RoIs.
To mitigate this problem, we formulated the concept of viewpoint dissimilarity and filter sampled viewpoints using a dissimilarity threshold, as shown in \algref{algo:viewpoint_similarity_check}. The dissimilarity metrics between two viewpoints $vp_1$ and $vp_2$, with origins $t_{vp_1}$ and $t_{vp_2}$, and viewing directions $dir_{vp_1}$ and $dir_{vp_2}$, respectively, are formulated as follows: \begin{equation} \label{eq:similarity_metrics} \begin{split} & VpD_{\angle}(vp_1, vp_2) = 1 - dir_{vp_1} \cdot dir_{vp_2} \\ & VpD_{\mathit{origin}}(vp_1, vp_2) = \min\left(\frac{\lVert t_{vp_1} - t_{vp_2}\rVert}{dist_\mathit{cutoff}}, 2.0\right) \\ & VpD(vp_1, vp_2) = \min\left(VpD_{\angle} \cdot VpD_{\mathit{origin}}, 1.0\right) \end{split} \end{equation} $VpD_{\angle}(vp_1, vp_2)$ indicates the dissimilarity in viewing direction, whereas $VpD_{\mathit{origin}}(vp_1, vp_2)$ indicates the dissimilarity in the origin of the viewpoints, with $dist_\mathit{cutoff}$ being a scaling factor. The dissimilarity index $VpD$ is in the range $[0,1]$, with 0 indicating a high similarity and 1 indicating a high dissimilarity. Every successfully attained viewpoint, whether from predicted RoI sampling or exploration sampling, is added to a list of past viewpoints, as depicted in line 14 of \algref{algo:viewpoint_similarity_check}. During the sampling of new viewpoints, each viewpoint is compared to the past viewpoints. If the dissimilarity index falls below a threshold (chosen as 0.1), the viewpoint is discarded; otherwise, the information gain $IG$ is weighted with the dissimilarity index. The dissimilarity threshold can be varied to achieve a balance between focusing on currently discovered regions of interest and discovering new ones. The dissimilarity-based viewpoint rejection also leads to a narrower sampling space over time, thus allowing the viewpoint sampling to be terminated if no new useful dissimilar viewpoints are available for a certain number of consecutive iterations (line 6 of \algref{algo:viewpoint_similarity_check}).
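For reference, the dissimilarity computation of \eqref{eq:similarity_metrics} can be written in a few lines. The following is a minimal sketch; the function name and cutoff value are hypothetical, and the viewing directions are assumed to be unit vectors.

\begin{verbatim}
import numpy as np

def vp_dissimilarity(t1, d1, t2, d2, dist_cutoff=0.5):
    # Angular dissimilarity of the (unit) viewing directions, in [0, 2].
    vpd_angle = 1.0 - float(np.dot(d1, d2))
    # Positional dissimilarity of the viewpoint origins, capped at 2.
    vpd_origin = min(np.linalg.norm(t1 - t2) / dist_cutoff, 2.0)
    # Combined dissimilarity index in [0, 1].
    return min(vpd_angle * vpd_origin, 1.0)
\end{verbatim}

In the planner, a candidate viewpoint is discarded when its dissimilarity to a past viewpoint falls below the chosen threshold of 0.1.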
\section{Experiments} \label{sec:exp} \begin{figure}[t] \captionsetup{justification=justified} \begin{subfigure}[t]{0.32\columnwidth} \centering \includegraphics[width=\columnwidth]{images/simulated_env_1_cropped} \caption{Scenario 1} \label{fig:simulatedenv1} \end{subfigure} \begin{subfigure}[t]{0.32\columnwidth} \centering \includegraphics[width=\columnwidth]{images/simulated_env_2_cropped} \caption{Scenario 2} \label{fig:simulatedenv2} \end{subfigure} \begin{subfigure}[t]{0.32\columnwidth} \centering \includegraphics[width=\columnwidth]{images/simulated_env_3_cropped} \caption{Scenario 3} \label{fig:simulatedenv3} \end{subfigure} \caption{Simulation scenarios used for the experimental evaluation. \textit{Scenario 1:} Simulated environment with a static arm, as used in \cite{zaenker2020viewpoint}, with 4 plants and 14 fruits. \textit{Scenario 2}: Simulated environment with a retractable arm, as in \cite{zaenker2020viewpoint}, with 4 plants and 28 fruits. \textit{Scenario 3}: Simulated environment with a retractable arm, as in \cite{marangoz2022fruit}, with 12 plants and 54 fruits.} \label{fig:simulation_env} \end{figure} We performed experiments in simulation using the Gazebo framework~\cite{koenig2004design} to provide a quantitative analysis of the performance of our shape completion based viewpoint planning in comparison to \cite{zaenker2020viewpoint}. \begin{figure*}[t] \begin{subfigure}[t]{0.32\linewidth} \raggedleft \includegraphics[width=\linewidth,right]{images/results/s1/plots/detected_roi_cluster.png} \includegraphics[width=\linewidth,right]{images/results/s1/plots/volume_accuracy.png} \caption{Scenario 1} \label{fig:s1_clusters} \end{subfigure} \begin{subfigure}[t]{0.32\linewidth} \raggedleft \includegraphics[width=\linewidth,right]{images/results/s2/plots/detected_roi_cluster.png} \includegraphics[width=\linewidth,right]{images/results/s2/plots/volume_accuracy.png} \caption{Scenario 2} \label{fig:s2_clusters} \end{subfigure} \begin{subfigure}[t]{0.32\linewidth} \raggedleft \includegraphics[width=\linewidth,right]{images/results/s3/plots/detected_roi_cluster.png} \includegraphics[width=\linewidth,right]{images/results/s3/plots/volume_accuracy.png} \caption{Scenario 3} \label{fig:s3_clusters} \end{subfigure} \caption{Volume accuracy of completed shapes for fruit clusters. SE denotes superellipsoid fitting, SR denotes shape registration, MSC denotes the missing surface center sampler, pRoI denotes predicted RoI node sampling, and RVP denotes the RoI viewpoint planner~\cite{zaenker2020viewpoint}. NBV-SC planners and RVP are similar in the number of detected clusters; however, NBV-SC planners outperform RVP in the accuracy of volume estimation of completed shapes, especially in Scenarios 2 and 3. NBV-SC planners plan more views on individual fruits, leading to better individual coverage. Volume accuracy is the accuracy of the completed shape's volume compared to the ground truth shape's volume, calculated as in \cite{marangoz2022fruit}.} \label{fig:results_num_vol} \end{figure*} For the evaluation, we used the three scenarios shown in \figref{fig:simulation_env}. For shape completion based viewpoint planning (\textbf{NBV-SC}), we apply two shape completion methods: superellipsoid fitting (\textbf{SE}) and shape registration (\textbf{SR}), and two predicted RoI viewpoint samplers: predicted RoI node sampling (\textbf{pRoI}) and the missing surface center sampler (\textbf{MSC}). We compared the performance of our shape completion based viewpoint planners to the RoI viewpoint planner (\textbf{RVP})~\cite{zaenker2020viewpoint}, with superellipsoid-based shape completion used only for size and position estimation, as in \cite{marangoz2022fruit}. We carried out 10 trials in each scenario and used a plan length of 120\,s for each trial, where the plan length corresponds to the trajectory duration. In Scenario 1 (\figref{fig:s1_clusters}), the number of detected fruits is similar for viewpoint planning with and without shape completion. In Scenario 2 (\figref{fig:s2_clusters}) and Scenario 3 (\figref{fig:s3_clusters}), RVP detects clusters faster compared to the NBV-SC planners. However, as it does not have shape completion to find new targets for RoI targeted sampling, it keeps performing exploration sampling over time, which leads to fruits being re-detected from larger distances, causing errors in perception. In contrast to RVP, the NBV-SC planners detect new clusters over time, with the number of detected clusters slightly lower than for RVP. This is because they keep finding new predicted RoI viewpoints, which leads to less exploration. Within the NBV-SC planners, it can be seen that shape registration has less variation in both the number of fruits detected and the volume accuracy. SE is more deformable, as it tries to fit a general superellipsoid shape to fruit clusters, whereas SR, being model-based, has less variation in the completed shapes.
SR tends to underestimate the completed shape volume, whereas SE generally overestimates it. With regard to the sampling approach, while MSC is computationally faster, as it uses the cluster centers instead of individual predicted RoI nodes, its performance in terms of volume accuracy, especially in Scenario 3, is worse than that of pRoI sampling for superellipsoid fitting. We also performed an ablation study by disabling the viewpoint dissimilarity check for the SE-MSC method and adding the viewpoint dissimilarity check to RVP, and conducted 10 trials in Scenario 3. It can be seen from \tabref{tab:vpd} that the addition of the viewpoint dissimilarity check either leads to termination of the sampling process or encourages the planner to find useful dissimilar viewpoints, resulting in an overall lower planning time. \begin{table}[b] \centering \begin{tabular}{ l | r } Planner & Total Time (seconds) \\ \hline SE-MSC & 377.2 $\pm$ 80.7 \\ SE-MSC-NOVPD & 715.0 $\pm$ 101.0 \\ RVP-VPD & 411.0 $\pm$ 81.7 \\ RVP & 993.8 $\pm$ 236.7 \\ \end{tabular} \caption{Effect of viewpoint dissimilarity in Scenario 3 of \figref{fig:simulation_env}. SE-MSC-NOVPD is SE-MSC with the viewpoint dissimilarity check disabled, whereas RVP-VPD is RVP augmented with the viewpoint dissimilarity check. Total time is the time taken for the planner to achieve a plan length of 120 seconds, or the time at termination of sampling.} \label{tab:vpd} \end{table} \begin{table}[t] \centering \begin{tabular}{l|r|r|r} Planner& Scenario 1 & Scenario 2 & Scenario 3 \\ \hline SE-pRoI & $8.2 \pm 0.7$ & $7.6 \pm 1.0$ & $7.2 \pm 0.24$ \\ SE-MSC & $8.2 \pm 0.8$ & $7.5 \pm 0.3$ & $8.2 \pm 1.0$ \\ SR-pRoI & $8.3 \pm 0.1$ & $8.0 \pm 1.0$ & $7.4 \pm 0.2$ \\ SR-MSC & $8.0 \pm 0.8$ & $7.2 \pm 0.3$ & $7.6 \pm 0.4$ \\ RVP & $10.1 \pm 2.0$ & $10.5 \pm 2.0$ & $10.9 \pm 2.0$ \end{tabular} \caption{Chamfer distances (in mm) of the detected point clouds generated from the surface map~\cite{oleynikova2017voxblox}, obtained after a plan length of 120\,s or the time at termination of sampling, compared to the ground truth. All NBV-SC methods achieved significantly better results compared to RVP~\cite{zaenker2020viewpoint}.} \label{tab:chamferdist} \end{table} We additionally calculated the Chamfer distance of the generated fruit point cloud from the surface map to the ground truth to evaluate the final quality of our reconstructed fruit meshes. \tabref{tab:chamferdist} shows that with the new approaches, the average distances are 2--3\,mm lower than with our old planner. While this is only a small improvement, it applies consistently across all scenarios. We performed a one-sided Mann-Whitney U test comparing RVP with all other methods to check the statistical significance of the Chamfer distance improvement. In all scenarios, the NBV-SC methods were significantly better with $p < 0.05$. The new methods generate more viewpoints per fruit, which leads to this improvement. In real-world scenarios, with inaccurate sensors and potentially more cluttered environments with more occlusions, this behavior could prove to be even more beneficial.
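For reproducibility, the significance test above can be run with SciPy. The per-trial Chamfer distances below are made-up placeholders, not the actual trial data; the test asks whether the NBV-SC distances are stochastically smaller than RVP's.

\begin{verbatim}
from scipy.stats import mannwhitneyu

# Hypothetical per-trial Chamfer distances in mm (10 trials each).
rvp = [10.9, 11.4, 9.8, 12.1, 10.2, 11.0, 10.7, 9.5, 11.8, 10.6]
nbv_sc = [7.2, 7.5, 6.9, 7.8, 7.1, 7.4, 7.0, 7.6, 7.3, 7.2]

# One-sided test: NBV-SC < RVP.
stat, p = mannwhitneyu(nbv_sc, rvp, alternative="less")
print(p < 0.05)  # True indicates a significant improvement
\end{verbatim}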
For our real-world experiments, we applied the approach to mapping sweet peppers in a commercial glasshouse, with a UR5e arm equipped with a Realsense L515 sensor, mounted on the trolley presented in \cite{mccool21icra}. An example of the performed shape completion in this environment is shown in \figref{fig:cover}. We provide a demonstration of the viewpoint selection in the accompanying video. \section{Summary} \label{sec:concl} In this paper, we presented a novel approach to active perception in agricultural robotics using iterative deformable shape completion based viewpoint planning for fruit mapping. We adapted two shape completion approaches, namely superellipsoid fitting and non-rigid shape registration, for predicting regions of interest that can be used as potential targets for next best view planning. Our simulation experiments with a UR5e arm equipped with a Realsense L515 show that shape completion based viewpoint planning leads to more complete coverage and thereby better reconstruction of individual fruits, as well as more accurate size estimation. We also formulated the new concept of viewpoint dissimilarity to strike a balance between individual and total coverage of fruits during manipulator-based exploration, and demonstrated its effect in reducing planning time. \bibliographystyle{IEEEtran} \clearpage \balance
\section{Introduction} Deep learning has made great inroads in robotic perception tasks such as classification \cite{wang2019kervolutional}, segmentation \cite{minaee2021image}, and detection \cite{li2021airdet}. However, it is still often unsatisfactory in robotic applications due to the lack of training data \cite{wang2021unsupervised}, ever-changing environments \cite{zhao2021super}, and limited computational resources \cite{wang2017non}. On the other hand, physics-based optimization has shown great generalization ability and high accuracy in many robotic tasks such as control \cite{fang2020cooperative}, planning \cite{yang2021far}, and simultaneous localization and mapping (SLAM) \cite{zhao2021super}. Nevertheless, it relies on problem-specific parameter tuning and suffers from the lack of semantic information. Since both methods have shown their own merits, more and more efforts have been made to combine the advantages of the two worlds \cite{zhao2020tp}. Currently, learning-based methods and physics-based optimization are typically used separately in different modules of a robotic system \cite{ebadi2022present}. For example, in semantic SLAM, learning-based methods have shown promising results in scenarios where high-level semantic information is needed or as a replacement for hand-crafted features and descriptors, e.g., feature matching in the front-end \cite{sarlin2020superglue}, while physics-based optimization plays a vital role in cases where a well-defined physical model can be established, e.g., pose graph optimization in the back-end \cite{campos2021orb}. Researchers usually first execute the front-end and then pass the results to the back-end for optimization. Despite the tremendous progress in SLAM in the past decades, such a two-stage, decoupled paradigm may only achieve sub-optimal solutions, which in turn limits system performance. Hence, developing integrated methods with end-to-end differentiation through optimization is an emerging trend \cite{teed2021droid, teed2021tangent, teed2022deep}. \begin{figure} \centering \vspace{5pt} \subfloat{\includegraphics[width=0.47\linewidth]{figures/CPU_Performance_Comp.pdf}} \hfill \subfloat{\includegraphics[width=0.47\linewidth]{figures/GPU_Performance_Comp.pdf}} \caption{Efficiency comparison of Lie group operations on CPU and GPU (we take the performance of Theseus \cite{pineda2022theseus} as $1\times$).} \label{fig:jacobian} \vspace{-5pt} \end{figure} A variety of applications in perception, motion planning, and automatic control have been explored for end-to-end learning \cite{teed2021droid,hafner2019learning,lenz2015deepmpc}. However, most of these applications rely on problem-specific implementations that are often coded from scratch, which makes it difficult for researchers to build upon prior work and explore new ideas. This hinders the development cycle due to the lack of a unified and systematic development framework. For example, researchers usually leverage PyTorch-based \cite{paszke2019pytorch} models for developing learning-based perceptual models, but then have to use C(++)-based optimization libraries, such as GTSAM \cite{dellaert2012factor}, OMPL \cite{sucan2012open}, and CT \cite{giftthaler2018control}, for physics-based optimization. The mixed usage of Python and C++ libraries increases the system complexity and slows down the development cycle, as it is time-consuming to do cross-language debugging and inefficient to transfer data among different processes, e.g., ROS nodes \cite{quigley2009ros}.
Therefore, there is an urgent need for a systematic development tool in a single language to accelerate end-to-end learning with physics-based optimization. Some researchers have made efforts towards this objective. For example, LieTorch exploits the group structure of 3D transformations and performs back-propagation in the tangent spaces of manifolds~\cite{teed2021tangent}. However, only \nth{1}-order differentiable operations are currently implemented, which limits its practical use, since higher-order derivatives provide additional local information about the data distribution and enable new applications \cite{meng2021estimating}. CvxpyLayer \cite{cvxpylayers2019} and Theseus \cite{pineda2022theseus} treat convex and non-linear optimization as differentiable layers, respectively. However, their development is still at an early stage, and the overall accuracy and stability are not yet mature for practical robotic applications. For example, CvxpyLayer does not support operations on Lie groups, and Theseus uses rotation matrices for transformation representation, which is memory-inefficient for robotic applications. To address the above limitations, we present PyPose, a robotics-oriented open source library based on PyTorch to connect learning-based perceptual models with classical algorithms that can be formulated as physics-based optimization, e.g., geometric problems, factor-graph optimization, and optimal control. In summary, our main contributions are: \begin{itemize} \item We present a new Python-based open source library, PyPose, to further enable end-to-end learning with physics-based optimization and accelerate the next generation of developments in robotics. PyPose is designed to be easily interpretable, user-friendly, and efficient, with a tidy and well-organized architecture. It provides an imperative programming style for the convenience of real-world robotic applications. PyPose supports any-order gradient computation of Lie groups and Lie algebras, and \nth{2}-order optimizers such as Levenberg-Marquardt with trust region steps. As demonstrated in \fref{fig:jacobian}, our experiments show that PyPose achieves $3$--$20\times$ faster computation compared to state-of-the-art libraries. \item We provide usage examples of PyPose. Users can easily build upon existing functionalities for various robotic applications. To the best of our knowledge, PyPose is one of the first Python libraries to comprehensively cover several sub-fields of robotics, such as perception, planning, and control, where optimization is involved. \end{itemize} \section{Related Work} Existing open source libraries related to PyPose can mainly be divided into two groups: (1) linear algebra and machine learning frameworks and (2) optimization libraries. \subsection{Linear Algebra and Machine Learning Frameworks} Linear algebra libraries are essential to machine learning and robotics research. To name a few, NumPy \cite{oliphant2006guide}, a linear algebra library for Python, offers comprehensive operations on vectors and matrices while enjoying higher running speed due to its underlying well-optimized C code. Eigen \cite{guennebaud2010eigen}, a high performance C++ linear algebra library, has been used in many projects such as TensorFlow \cite{abadi2016tensorflow}, Ceres \cite{AgarwalCeresSolver2022}, GTSAM \cite{dellaert2012factor}, and g$^2$o \cite{grisetti2011g2o}.
ArrayFire \cite{malcolm2012arrayfire}, a GPU acceleration library for C, C++, Fortran, and Python, contains simple APIs and provides thousands of GPU-tuned functions for linear algebra. While relying heavily on high-performance numerical linear algebra techniques, machine learning libraries focus more on operations on tensors (i.e., high-dimensional matrices) and automatic differentiation. Early machine learning frameworks, such as Torch \cite{collobert2002torch}, OpenNN \cite{open2016open}, and MATLAB \cite{MATLAB2010}, provided primitive tools for researchers to develop neural networks. However, they only supported CPU computation and lacked concise APIs, which hampered engineers using them in applications. A few years later, deep learning frameworks such as Chainer \cite{tokui2015chainer}, Theano \cite{al2016theano}, and Caffe \cite{jia2014caffe} arose to handle the increasing size and complexity of neural networks while supporting multi-GPU training with convenient APIs for users to build and train their neural networks. Furthermore, the recent frameworks, such as TensorFlow \cite{abadi2016tensorflow}, PyTorch \cite{paszke2019pytorch}, and MXNet \cite{chen2015mxnet}, provide a comprehensive and flexible ecosystem (e.g., APIs for multiple programming languages, distributed data-parallel training, and facilitating tools for benchmarking and deployment). JAX \cite{jax2018github} can automatically differentiate native Python and NumPy functions and is an extensible system for composable function transformations. In many ways, the existence of these frameworks facilitated and promoted the growth of deep learning. Recently, more efforts have been taken to combine standard optimization tools with deep learning. Recent work like Theseus \cite{pineda2022theseus} and CvxpyLayer \cite{cvxpylayers2019} showed how to embed differentiable optimization within deep neural networks. By leveraging PyTorch, our proposed library, PyPose, enjoys the same benefits as the existing state-of-the-art deep learning frameworks. Additionally, PyPose provides new features, such as computing any-order gradients of Lie groups and Lie algebras, which are essential to robotics. \subsection{Open Source Optimization Libraries} Numerous optimization solvers and frameworks have been developed and leveraged in robotics. To mention a few, Ceres \cite{AgarwalCeresSolver2022} is an open-source C++ library for large-scale nonlinear least squares optimization problems and has been widely used in SLAM. Pyomo \cite{hart2017pyomo} and JuMP \cite{dunning2017jump} are optimization frameworks that have been widely used due to their flexibility in supporting a diverse set of tools for constructing, solving, and analyzing optimization models. CasADi \cite{andersson2019casadi} has been used to solve many real-world control problems in robotics due to its fast and effective implementations of different numerical methods for optimal control. Pose- and factor-graph optimization also play an important role in robotics. For example, g$^2$o \cite{grisetti2011g2o} and GTSAM \cite{dellaert2012factor} are open-source C++ frameworks for graph-based nonlinear optimization, which provide concise APIs for constructing new problems and have been leveraged to solve several optimization problems in SLAM and bundle adjustment. Optimization libraries have also been widely used in robotic control problems.
To name a few, IPOPT \cite{wachter2006implementation} is an open-source C++ solver for nonlinear programming problems based on interior-point methods and is widely used in robotics and control. Similarly, OpenOCL \cite{koenemann2017openocl} supports a large class of optimization problems, such as continuous-time, discrete-time, constrained, unconstrained, multi-phase, and trajectory optimization problems, for real-time model-predictive control. Another library for large-scale optimal control and estimation problems is CT \cite{giftthaler2018control}, which provides standard interfaces for different optimal control solvers and can be extended to a broad class of dynamical systems in robotic applications. In addition, Drake \cite{drake} has solvers for common control problems that can be directly integrated with its simulation toolboxes. Its system completeness has made it favorable to many researchers. The proposed library PyPose combines deep learning frameworks with general robotics-oriented optimization and provides a direct way to back-propagate gradients, which is important for robotic learning applications. \section{Methodology} To reduce the learning curve for new users, our methodology leverages existing structures in PyTorch to the greatest possible extent. Our design principles are to keep the implementation logical, modular, and transparent, so that users can focus on their own applications and invest effort in the mathematical aspects of their problems rather than the implementation details. Similar to PyTorch, we believe that trading a bit of efficiency for interpretability and ease of development is acceptable. However, we are still careful in maintaining a good balance between interpretability and efficiency by leveraging advanced PyTorch functionalities. Following these principles, we mainly provide four concepts to enable end-to-end learning with physics-based optimization, i.e., \texttt{LieTensor}, \texttt{Module}, \texttt{Function}, and \texttt{Optimizer}. We briefly present their motivation, mathematical underpinning, and the interfaces in PyPose. \begin{table}[t] \caption{A list of supported aliases of \texttt{LieTensor}.} \label{tab:lietensor} \centering \begin{tabular}{C{0.44\linewidth}C{0.18\linewidth}C{0.18\linewidth}} \toprule Transformation & Lie Group & Lie Algebra \\\midrule Rotation & \texttt{SO3()} & \texttt{so3()} \\ Rotation \& Translation & \texttt{SE3()} & \texttt{se3()} \\ Rotation \& Translation \& Scale & \texttt{Sim3()} & \texttt{sim3()} \\ Rotation \& Scale & \texttt{RxSO3()} & \texttt{rxso3()} \\ \bottomrule \end{tabular} \end{table} \subsection{\texttt{LieTensor}} In robotics, 3D transformations are crucial for many applications, such as SLAM, control, and motion planning. However, most machine learning libraries, including PyTorch, assume that the computation graph operates in Euclidean space, while a 3D transformation lies on a smooth manifold. Neglecting the manifold structure leads to inconsistent gradient computation and numerical issues. Therefore, we need a specific data structure to represent 3D transformations in learning models. To address the above challenge while keeping PyTorch's excellent features, we resort to Lie theory and define \texttt{LieTensor} as a subclass of PyTorch's \texttt{Tensor} to represent 3D transformations.
One of the challenges of implementing a differentiable \texttt{LieTensor} is that we often need to calculate numerically problematic terms such as $\frac{\sin x}{x}$ for the conversion between a Lie group and a Lie algebra \cite{teed2021tangent}. To solve this problem, we use a Taylor expansion to avoid division by zero. To illustrate this, consider the exponential map from $\mathbb{R}^3$ to $\mathbb{S}^3$, where $\mathbb{S}^3$ is the set of unit quaternions. For numerical stability, it is formulated as follows: \begin{equation*} \texttt{Exp}(\bm{x}) = \left\{ \begin{aligned} &\left[\bm{x}^T\gamma_e,~ \cos(\frac{\|\bm{x}\|}{2})\right]^T & \|\bm{x}\| > \text{eps}\\ &\left[\bm{x}^T\gamma_o,~ 1 - \frac{\|\bm{x}\|^2}{8} + \frac{\|\bm{x}\|^4}{384} \right]^T & \text{otherwise},\\ \end{aligned} \right. \end{equation*} where $\gamma_e = \frac{1}{\|\bm{x}\|}\sin(\frac{\|\bm{x}\|}{2})$, $\gamma_o = \frac{1}{2} - \frac{1}{48} \|\bm{x}\|^2 + \frac{1}{3840} \|\bm{x}\|^4$, and $\text{eps}$ is the smallest machine number such that $1 + \text{eps} \ne 1$. \texttt{LieTensor} is different from the existing libraries in several aspects: (\textbf{1})~PyPose supports auto-diff for any-order gradients and is compatible with most popular devices, such as CPU, GPU, TPU, and Apple silicon GPU, while other libraries like LieTorch \cite{teed2021tangent} implement customized CUDA kernels and only support \nth{1}-order gradients. (\textbf{2})~\texttt{LieTensor} supports parallel gradient computation with the \texttt{vmap} operator, which allows it to compute Jacobian matrices much faster. (\textbf{3})~Theseus \cite{pineda2022theseus} adopts rotation matrices, which require more memory, while PyPose uses quaternions, which only require storing four scalars. (\textbf{4})~Libraries including LieTorch, JaxLie \cite{jaxlie}, and Theseus only support Lie groups, while PyPose supports both Lie groups and Lie algebras. As a result, one can directly call the \texttt{Exp} and \texttt{Log} maps from a \texttt{LieTensor} instance, which is more flexible and user-friendly. For other types of supported 3D transformations, the reader may refer to \tref{tab:lietensor} for more details. The reader may also find a full list of supported \texttt{LieTensor} operations in the library documentation.\footnote{\href{https://pypose.org/docs/main/basics}{https://pypose.org/docs/main/basics}} For convenience, a short sample code for how to use a \texttt{LieTensor} is given in \aref{app:lietensor}.
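As a quick taste here, complementing the listing in \aref{app:lietensor}, the following minimal sketch creates a batch of rotations and differentiates through manifold operations. Call names follow the online documentation, and the exact signatures should be treated as assumptions of this sketch.

\begin{verbatim}
import pypose as pp

# Two random rotations, stored internally as unit quaternions.
r = pp.randn_SO3(2, requires_grad=True)
x = r.Log()              # so3 tangent vectors of shape [2, 3]
y = x.Exp() @ r.Inv()    # group composition; approximately identity
loss = y.Log().tensor().norm()
loss.backward()          # gradients w.r.t. r flow through manifold ops
\end{verbatim}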
\subsection{\texttt{Module} \& \texttt{Function}} PyPose leverages the concepts of \texttt{Function} and \texttt{Module} to implement differentiable robotics-related functionalities. Concretely, a \texttt{Function} performs a specific computation given inputs and returns outputs; a \texttt{Module} has the same functionality as a \texttt{Function}, but it also stores data, such as \texttt{Parameter}, which is of type \texttt{Tensor} or \texttt{LieTensor}; the latter can be updated by an optimizer as discussed below. PyPose provides many useful modules, such as the system transition function, model predictive control (MPC), Kalman filter, and IMU preintegration. A list of supported \texttt{Module}s from the most recent stable release can be found in the documentation.\footnote{\href{https://pypose.org/docs/main/modules}{https://pypose.org/docs/main/modules}} Users can easily integrate them into their own systems and perform a specific task, e.g., a \texttt{System} module can be used by both \texttt{EKF} and \texttt{MPC}. A few practical examples in the fields of SLAM, planning, control, and IMU preintegration are presented in \sref{sec:examples}.
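As an illustration of the \texttt{Module} concept, a toy module that stores a learnable pose \texttt{Parameter} might look as follows. This sketch mirrors the transformation-inverse example referenced in \aref{app:optimization}; the exact call names are assumptions based on the online documentation.

\begin{verbatim}
import torch.nn as nn
import pypose as pp

class PoseInv(nn.Module):
    # Stores se3 parameters; the residual vanishes when the exponentiated
    # parameters equal the inverses of the input transformations.
    def __init__(self, *dim):
        super().__init__()
        self.pose = pp.Parameter(pp.randn_se3(*dim))

    def forward(self, inputs):
        return (self.pose.Exp() @ inputs).Log().tensor()
\end{verbatim}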
To find the step $\bm{\delta}_t$, one needs a linear \texttt{solver}:
\begin{equation} \mathbf{A} \cdot \bm{\delta}_t = \bm{b}, \end{equation}
where $\mathbf{A} = \sum_{i}\left(\mathbf{H}_i + \lambda\cdot\mathrm{diag}(\mathbf{H}_i)\right)$, $\bm{b} = - \sum_{i}\mathbf{J}_i^T \mathbf{W}_i \mathbf{R}_i$. Since $\mathbf{A}$ is often positive-definite, we can leverage standard linear solvers such as Cholesky factorization. If the Jacobian $\mathbf{J}_i$ is large and sparse, we may also use sparse solvers such as sparse Cholesky or the preconditioned conjugate gradient (PCG) solver. In practice, one often introduces robust \texttt{kernel} functions $\rho: \mathbb{R}\mapsto\mathbb{R}$ into \eqref{eq:least-square} to reduce the effect of outliers:
\begin{equation}\label{eq:kernel-least-square} \min_{\bm{\theta}} \sum_i \rho\left(\mathbf{R}_i^T \mathbf{W}_i \mathbf{R}_i\right), \end{equation}
where $\rho$ is designed to down-weigh measurements with large residuals $\mathbf{R}_i$. In this case, we need to adjust \eqref{eq:step} to account for the presence of the robust kernel. A popular way is to use the Triggs' correction \cite{triggs1999bundle}, which is also adopted by the Ceres \cite{AgarwalCeresSolver2022} library. However, it needs the \nth{2}-order derivative of the kernel function $\rho$, which is often negative for robust kernels; this can cause \nth{2}-order optimizers, including \texttt{LM}, to be unstable \cite{triggs1999bundle}. Alternatively, we can leverage the \texttt{FastTriggs} corrector, which is faster yet more stable than the Triggs' correction by involving only the \nth{1}-order derivative:
\begin{equation} \mathbf{R}_i^\rho = \sqrt{\rho'(c_i)} \mathbf{R}_i,\quad \mathbf{J}_i^\rho = \sqrt{\rho'(c_i)} \mathbf{J}_i, \end{equation}
where $c_i = \mathbf{R}_i^T\mathbf{W}_i\mathbf{R}_i$, and $\mathbf{R}_i^\rho$ and $\mathbf{J}_i^\rho$ are the corrected residual and Jacobian due to the introduction of kernel functions, respectively. More information about \texttt{FastTriggs} can be found in the documentation.\footnote{\href{https://pypose.org/docs/main/generated/pypose.optim.corrector.FastTriggs}{https://pypose.org/docs/main/generated/pypose.optim.corrector.FastTriggs}}
A simple \texttt{LM} optimizer may not converge to the global optimum if the initial guess is too far from the optimum. For this reason, we often need a \texttt{strategy} such as adaptive damping, Dogleg, or more advanced trust-region methods \cite{lourakis2005levenberg} to restrict each step, preventing it from stepping ``too far''. PyPose supports easy extension of the above algorithms by simply passing arguments including \texttt{solver}, \texttt{strategy}, \texttt{kernel}, and \texttt{corrector} to the optimizer's constructor. A list of available algorithms, examples, and the automatic \texttt{corrector} can be found in the documentation.\footnote{\href{https://pypose.org/docs/main/optim}{https://pypose.org/docs/main/optim}} A short sample code is listed in \aref{app:optimization}.
\begin{figure}[t] \centering \subfloat[Before optimization.\label{fig:pgo-before}]{\includegraphics[width=0.5\linewidth]{figures/pgo-before.pdf}} \subfloat[After optimization.\label{fig:pgo-after}]{\includegraphics[width=0.5\linewidth]{figures/pgo-after.pdf}} \\ \caption{A PGO example ``Garage'' with \texttt{LM} optimizer.
PyPose has the same final error as Ceres \cite{AgarwalCeresSolver2022} and GTSAM \cite{dellaert2012factor}.} \label{fig:pgo} \end{figure}
\section{Experiments}\label{sec:experiment} In this section, we showcase the performance of PyPose and present several practical examples of its use for robotics.
\subsection{Benchmark} \subsubsection{Runtime Efficiency} To test the efficiency of PyPose, we report the average number of operations per second for \texttt{LieTensor} on both CPU and GPU. Specifically, we test the performance of calculating the Jacobian for a basic operation, i.e., transforming points with a \texttt{LieTensor}:
\begin{equation}\label{eq:3D-transformation} \mathbf{f}_i(\mathbf{x}) = \mathbf{R}_i \mathbf{x}+\mathbf{T}_i, \end{equation}
where $\mathbf{f}_i(\mathbf{x})\in\mathbb{R}^{3}$ represents the point after transformation, $\mathbf{x}\in\mathbb{R}^{3}$ represents the original point, $\mathbf{R}_i \in \mathbb{SO}(3)$ represents a rotation, and $\mathbf{T}_i \in \mathbb{R}^{3}$ is a translation. We compare with Theseus \cite{pineda2022theseus} and LieTorch \cite{teed2021tangent}, where $\mathbf{R}_i$ is represented by each package's $\mathbb{SO}(3)$ instance. For thoroughness, we report the performance for batch sizes of $10$, $10^2$, $10^3$, and $10^4$ in \fref{fig:jacobian}, where the performance of Theseus is taken as the $1\times$ baseline since the absolute runtimes for small batches are short. PyPose achieves the best overall performance in all tests. On a CPU, PyPose is up to $9.4\times$ faster than Theseus and up to $3\times$ faster than LieTorch. On a GPU, PyPose shows even better performance, up to $19.6\times$ faster than Theseus and up to $5.3\times$ faster than LieTorch. It is worth mentioning that the runtime usually fluctuates within $10\%$, so we report the average performance over multiple tests. We report a maximum batch size of $10^4$ because it already takes about 14~GB of GPU memory during calculation, and a batch size of $10^5$ is too large for our platform to handle. One of the reasons that PyPose achieves much faster computation is that our \texttt{LieTensor} supports parallel gradient computation via the \texttt{vmap} operation provided by \texttt{functorch} \cite{functorch2021}.
\begin{figure} \centering \sbox{\measurebox}{%
\begin{minipage}[b]{.5\linewidth} \subfloat[Matching reprojection error.\label{fig:slam-matching}]{\includegraphics[width=\linewidth]{figures/slam_matching_comp.pdf}} \end{minipage}} \usebox{\measurebox} \begin{minipage}[b][\ht\measurebox][s]{.47\linewidth} \centering \subfloat[Trajectory.]{\label{fig:slam-traj}\includegraphics[width=\linewidth]{figures/slam_traj_input_comb.png}} \end{minipage} \caption{An example of visual SLAM using the PyPose library. (a) Matching reprojection error (pixels) (top) is improved after self-supervised tuning using PyPose (bottom). (b) Input sequence (bottom right), resulting trajectory and point cloud. The estimated and ground truth trajectories are shown in blue and orange, respectively.} \label{fig:slam} \vspace{-5pt} \end{figure}
\subsubsection{Optimization} We next report the performance of PyPose's \nth{2}-order Levenberg-Marquardt (\texttt{LM}) optimizer and compare it with PyTorch's \nth{1}-order optimizers such as \texttt{SGD} and \texttt{Adam}. Specifically, we construct a \texttt{Module} to learn the inverse of a transformation, as listed in \aref{app:optimization}, and report their performance in \tref{tab:optimizer}.
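For reference, the transformation-inverse \texttt{Module} and the \texttt{LM} optimizer can be sketched as follows; this is a minimal sketch adapted from the library documentation, and the constructor arguments shown are assumptions based on the documented interface:
\begin{verbatim}
import torch
import pypose as pp

class PoseInv(torch.nn.Module):    # learn the inverse of a transformation
    def __init__(self, *dim):
        super().__init__()
        self.pose = pp.Parameter(pp.randn_se3(*dim))
    def forward(self, input):
        # residual: composing the estimate with the input should be identity
        return (self.pose.Exp() @ input).Log().tensor()

input = pp.randn_SE3(2)
model = PoseInv(2)
kernel = pp.optim.kernel.Huber()
optimizer = pp.optim.LM(model,
                        solver=pp.optim.solver.Cholesky(),
                        strategy=pp.optim.strategy.TrustRegion(),
                        kernel=kernel,
                        corrector=pp.optim.corrector.FastTriggs(kernel))

for _ in range(10):
    loss = optimizer.step(input)   # one LM iteration on the residuals
\end{verbatim}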
Note that PyPose provides analytical solutions for inversion, and this module is only used to benchmark the optimizers. It can be seen that this problem is quite challenging for \nth{1}-order optimizers, while \nth{2}-order optimizers can find the best solution quickly, which verifies the effectiveness of PyPose's implementation. However, one of the drawbacks of \nth{2}-order optimizers is that they need to compute Jacobians, which requires large memory and may increase the runtime. Leveraging the sparsity of Jacobian matrices could significantly alleviate this challenge \cite{zach2014robust}. A real-world example is the problem of pose graph optimization (PGO), which is challenging for \nth{1}-order optimizers and also requires pose inversion for every edge in the graph \cite{bai2021sparse}. We report the performance of PGO using the \texttt{LM} optimizer with a \texttt{TrustRegion} \cite{sun2006optimization} strategy in \fref{fig:pgo}, where the sample is from the g$^2$o dataset \cite{carlone2015lagrangian}. PyPose achieves the same final error as Ceres \cite{AgarwalCeresSolver2022} and GTSAM \cite{dellaert2012factor}.
\subsection{Practical Examples} \label{sec:examples} \subsubsection{SLAM} To showcase PyPose's ability to bridge learning and optimization tasks, we develop a method for learning the SLAM front-end in a self-supervised manner. Due to the difficulty of collecting accurate camera poses and/or dense depth, few dataset options are available to researchers for training correspondence models offline. To this end, we leverage the estimated poses and landmark positions as pseudo-labels for supervision. At the front-end, we employ two recent works, SuperPoint \cite{detone2018superpoint} and CAPS \cite{wang2020learning}, for feature extraction and matching, respectively. Implemented with the \texttt{LM} optimizer of PyPose, the back-end performs bundle adjustment and back-propagates the final projection error to update the CAPS feature matcher. In the experiment, we start with a CAPS network pretrained on MegaDepth \cite{li2018megadepth} and fine-tune it on the TartanAir \cite{wang2020tartanair} dataset with only image input. As shown in \fref{fig:slam-matching}, the matching accuracy (reprojection error $\leq 1$ pixel) increases by up to 77.5\% on unseen sequences after self-supervised fine-tuning. We also show the resulting trajectory and point cloud on a test sequence in \fref{fig:slam-traj}. While the pretrained model quickly loses track, the fine-tuned model runs to completion with an absolute trajectory error (ATE) of 0.63~m. This verifies the feasibility of PyPose for optimization in the SLAM back-end.
\begin{figure}[t] \centering \subfloat[Planning runtime.\label{fig:computing-time}]{\includegraphics[height=0.33\linewidth]{figures/planner_compare.png}} \subfloat[Executed trajectory.\label{fig:trajectory}]{\includegraphics[height=0.32\linewidth]{figures/planner_trajectory.png}} \\ \subfloat[Input depth image.\label{fig:field-test-image}]{\includegraphics[height=0.34\linewidth]{figures/planner_field_test_1.png}} \subfloat[Planning Instance.\label{fig:field-test-traj}]{\includegraphics[height=0.34\linewidth]{figures/planner_field_test_2.png}} \\ \caption{An example of end-to-end planning using the PyPose library. (a) and (b) show a local planning experiment conducted in the Matterport3D simulated environment. (a) shows the comparison plot of the algorithm planning time. (b) shows the trajectory executed by the proposed method and the outline of the environment.
(c) and (d) show an instance of the real-world experiment conducted with the ANYmal legged robot inside the ETH LEE building. A kinodynamically feasible trajectory, shown in (d), is directly generated from the input depth image (c). The red dots in (c) show the trajectory projected back into the image frame for visualization.} \label{fig:planner} \vspace{-5pt} \end{figure}
\subsubsection{Planning} The PyPose library has been used to develop a novel end-to-end planning policy that maps the depth sensor inputs directly into kinodynamically feasible trajectories. As shown in \fref{fig:computing-time}, our method achieves around $3\times$ speedup on average compared to a traditional planning framework~\cite{cao2022autonomous}, which utilizes a combined pipeline of geometry-based terrain analysis and motion-primitives-based planning~\cite{zhang2020falco}. The experiment is conducted in the Matterport3D~\cite{Matterport3D} environment. The robot follows 20 manually specified waypoints for global guidance but plans fully autonomously by searching feasible paths and avoiding local obstacles. The trajectory executed by the proposed method is shown in \fref{fig:trajectory}. The efficiency of this method stems from both the end-to-end planning pipeline and the efficiency of the PyPose library for training and deployment. Furthermore, this end-to-end policy has been integrated and tested on a real legged robot system, ANYmal. A planning instance during the field test is shown in \fref{fig:field-test-traj} using the current depth observation, shown in \fref{fig:field-test-image}.
\subsubsection{Control}
\begin{figure}[t] \centering \subfloat[Model loss.\label{fig:model_loss}]{\includegraphics[width=0.48\linewidth]{figures/control_model_loss.pdf}} \subfloat[Trajectory loss.\label{fig:traj_loss}]{\includegraphics[width=0.48\linewidth]{figures/control_trajloss.pdf}} \\ \subfloat[Overall runtime.\label{fig:total_time}]{\includegraphics[width=0.48\linewidth]{figures/control_total_time.pdf}} \subfloat[Backwards runtime.\label{fig:backward_time}]{\includegraphics[width=0.48\linewidth]{figures/control_backward_time.pdf}} \\ \caption{An example of MPC with imitation learning using the PyPose library. (a) and (b) show the learning performance comparison of PyPose and Diff-MPC \cite{amos2018differentiable}. The two methods achieve the same learning performance. (c) and (d) show the computation cost comparison, where different colors represent different batch sizes.} \vspace{-10pt} \label{fig:control} \end{figure}
PyPose also integrates the dynamics and control tasks into the end-to-end learning framework. We demonstrate this capability using a learning-based MPC for an imitation learning problem, Diff-MPC~\cite{amos2018differentiable}, where both the expert and learner employ a linear-quadratic regulator (LQR) and the learner tries to recover the dynamics using only expert controls. We treat MPC as a generic policy class with parameterized cost functions and dynamics, which can be learned by automatic differentiation (AD) through the LQR optimization. Unlike existing methods, e.g., Diff-MPC, which employs the analytical derivative of LQR for AD, PyPose uses a \textit{problem-agnostic} AD implementation that performs the backward pass through an extra optimization iteration at the optimal point. The preliminary results are shown in \fref{fig:control} for a benchmarking problem~\cite{amos2018differentiable}.
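The principle behind this backward pass can be sketched generically as follows; this is an illustration of the idea rather than PyPose's exact implementation:
\begin{verbatim}
import torch

def optimize(step, x0, iters=50):
    # converge to the optimum without building a computation graph
    x = x0
    with torch.no_grad():
        for _ in range(iters):
            x = step(x)
    # one extra differentiable iteration at the optimal point:
    # backpropagating through this single step approximates the
    # implicit derivative of the solution w.r.t. the parameters
    # that `step` depends on, without unrolling the whole solver
    return step(x.detach())
\end{verbatim}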
\fref{fig:model_loss} and \fref{fig:traj_loss} show the model and trajectory losses, respectively, of four randomly sampled trajectories, and both methods achieve the same learning performance. The computational cost of both methods scales in a similar manner with respect to the number of states, as shown in Figs. \ref{fig:total_time} and \ref{fig:backward_time}. The overhead of the extra forward pass causes a longer overall runtime in PyPose; however, our backward pass is always faster because the method in~\cite{amos2018differentiable} needs to solve an additional LQR iteration in the backward pass. The computational advantage of our method may become more prominent as more complex optimization problems are involved in learning.
\subsubsection{IMU preintegration} IMU preintegration \cite{forster2015imu} is one of the most popular strategies for inertial navigation systems, e.g., VIO \cite{qin2018vins}, LIO \cite{zuo2020lic}, and multi-modal sensor fusion \cite{zhao2021super}. A growing trend is to leverage the IMU's physical prior in learning-based inertial navigation, e.g., IMU odometry networks \cite{chen2018ionet}, data-driven IMU calibration \cite{brossard2020denoising, zhang2021imu}, and EKF-fused IMU odometry \cite{liu2020tlio, sun2021idol}. To boost future research in this field, PyPose provides an \texttt{IMUPreintegrator} module for differentiable IMU preintegration with covariance propagation. It supports batched operation, cumulative product, and integration on the manifold. In the example shown in \fref{Fig:Integrate}, we train an IMU calibration network that denoises the IMU signals; we then integrate the denoised signals and compute the IMU state, expressed as a \texttt{LieTensor}, including position, orientation, and velocity. In order to learn the parameters of the deep neural network (DNN), we supervise the integrated pose and back-propagate the gradient through the integrator module. To validate the effectiveness of this method, we train the network on 5 sequences and evaluate on 5 testing sequences from the EuRoC dataset \cite{burri2016euroc}, reporting the root-mean-square error (RMSE) of position (\meter) and rotation (\rad). As shown in \tref{tab:imupreinte}, our method achieves a significant improvement in both orientation and translation compared to the traditional method \cite{forster2015imu}.
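A minimal sketch of the \texttt{IMUPreintegrator} interface is given below; the tensor shapes and argument names are assumed from the library documentation, and random signals stand in for the denoised IMU measurements:
\begin{verbatim}
import torch
import pypose as pp

F = 200   # number of IMU frames, e.g., 1 second at 200 Hz
integrator = pp.module.IMUPreintegrator(
    pos=torch.zeros(3), rot=pp.identity_SO3(), vel=torch.zeros(3))

state = integrator(dt=torch.full((1, F, 1), 1 / 200.),
                   gyro=torch.randn(1, F, 3),   # angular rates
                   acc=torch.randn(1, F, 3))    # accelerations
# state holds the integrated position, rotation (a LieTensor),
# velocity, and the propagated covariance
\end{verbatim}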
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/Fig_integrate4.png} \caption{The framework of the IMU calibration network using PyPose's \texttt{IMUPreintegrator} with \texttt{LieTensor}.} \label{Fig:Integrate} \end{figure}
\begin{table}[!t] \caption{The results (RMSE of position in \meter~and rotation in \rad) of IMU preintegration in 1\second~(200 frames) on the testing sequences of the EuRoC dataset.} \label{tab:imupreinte} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{Seq} & \multicolumn{2}{c}{\textbf{MH\_02\_easy}} & \multicolumn{2}{c}{\textbf{MH\_04\_difficult}} & \multicolumn{2}{c}{\textbf{V1\_03\_difficult}} & \multicolumn{2}{c}{\textbf{V2\_02\_medium}} & \multicolumn{2}{c}{\textbf{V1\_01\_easy}}\\ & pos & rot & pos & rot & pos & rot & pos & rot & pos & rot\\ \midrule Preinte \cite{forster2015imu} & 0.050 & 0.659 & 0.047 & 0.652 & 0.062 & 0.649 & 0.041 & 0.677 & 0.161 & 0.649\\ Ours & \textbf{0.012} & \textbf{0.012} & \textbf{0.017} & \textbf{0.016} & \textbf{0.025} & \textbf{0.036} & \textbf{0.038} & \textbf{0.054} & \textbf{0.124} & \textbf{0.056}\\ \bottomrule \end{tabular} } \end{table}
\section{Conclusions} We present PyPose, a novel pythonic open source library to boost the development of the next generation of robotics applications. PyPose enables end-to-end learning with physics-based optimization and provides an imperative programming style for the convenience of real-world robotics applications. PyPose is designed to be interpretable, user-friendly, and efficient with a tidy and well-organized architecture. It supports widely-used functionalities such as Lie group operations and \nth{2}-order optimizers. The experiments show that PyPose achieves $3$--$20\times$ faster computation than Theseus. To the best of our knowledge, PyPose is one of the first comprehensive pythonic libraries covering several sub-fields of robotics, including perception, planning, and control, where physics-based optimization is involved.
\bibliographystyle{IEEEtran}
\section{Introduction} With the rapid development of image editing techniques and tools (\emph{e.g.}, appearance adjustment, copy-paste), users can blend and edit existing source images to create fantastic images that are only limited by an artist's imagination. However, some manipulated regions in the created synthetic images may have inconsistent color and lighting statistics with the background, which could be attributed to careless editing or the difference among source images (\emph{e.g.}, capture condition, camera setting, artistic style). We refer to such regions as inharmonious regions~\cite{liang2021inharmonious}, which remarkably degrade the quality and fidelity of synthetic images. Recently, the task of inharmonious region localization~\cite{liang2021inharmonious} has been proposed to identify the inharmonious regions. When the inharmonious regions are identified, users can manually adjust them or employ image harmonization methods~\cite{tsai2017deep,cong2020dovenet,cun2020improving,cong2021bargainnet} to harmonize the inharmonious regions, yielding images with higher quality and fidelity. To the best of our knowledge, the only existing inharmonious region localization method is DIRL~\cite{liang2021inharmonious}, which attempted to fuse multi-scale features and avoid redundant information. However, DIRL is a rather general model that does not exploit the uniqueness of this task, that is, the discrepancy between the inharmonious region and the background. Besides, the performance of DIRL is still far from satisfactory when the inharmonious region is surrounded by a cluttered background or by objects that have similar shapes to the inharmonious region.
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/teaser.jpg} \caption{Examples of inharmonious synthetic images (top row) and their inharmonious region masks (bottom row).} \label{fig:teaser} \end{figure}
Considering the uniqueness of the inharmonious region localization task, we refer to each suite of color and illumination statistics as one domain following~\cite{cong2020dovenet,cong2021bargainnet}. Thus, the inharmonious region and the background belong to two different domains. In this work, we propose a novel method based on a simple intuition: \emph{can we transform the input image to another color space to magnify the domain discrepancy between the inharmonious region and the background, so that the model can identify the inharmonious region more easily?} To achieve this goal, we propose a framework composed of two components: a color mapping module and an inharmonious region localization network. First, the color mapping module transforms the input image to another color space. Then, the inharmonious region localization network detects the inharmonious region based on the transformed image. For the color mapping module, we extend HDRNet~\cite{gharbi2017deep} to an improved HDRNet (iHDRNet). HDRNet is popular and has achieved great success in previous works~\cite{zhou2021transfill, xia2020joint, wang2019underexposed}. Similar to HDRNet, iHDRNet learns region-specific and intensity-specific color transformation parameters, which are applied to transform each input image adaptively. After color transformation, we expect that the domain discrepancy between the inharmonious region and the background is magnified, so that the region localization network can identify the inharmonious region more easily.
To this end, we leverage an encoder to extract the domain-aware codes of the inharmonious region and the background before and after color transformation, where the domain-aware codes are expected to contain the color and illumination statistics. Then, we design a Domain Discrepancy Magnification (DDM) loss to ensure that the distance between the domain-aware codes of the inharmonious region and the background becomes larger after color transformation. Furthermore, we employ a Direction Invariance (DI) loss to regularize the domain-aware codes. For the inharmonious region localization network, we can choose any existing region localization network and place it under our framework. We refer to our framework as MadisNet (\textbf{Ma}gnifying \textbf{d}omain d\textbf{is}crepancy). We conduct experiments on the benchmark dataset iHarmony4~\cite{cong2020dovenet}, which show that our proposed framework outperforms DIRL~\cite{liang2021inharmonious} and the state-of-the-art methods from other related fields. Our contributions can be summarized as follows: \begin{itemize} \item We devise a simple yet effective inharmonious region localization framework which can accommodate any region localization method. \item We are the first to introduce adaptive color transformation to inharmonious region localization, in which an improved HDRNet is used as the color mapping module. \item We propose a novel domain discrepancy magnification loss to magnify the domain discrepancy between the inharmonious region and the background. \item Extensive experiments demonstrate that our framework outperforms existing methods by a large margin (\emph{e.g.}, IoU is improved from 67.85\% to 74.44\%). \end{itemize}
\section{Related Works} \subsection{Image Harmonization} Image harmonization, which aims to adjust the appearance of the foreground to match the background, is a long-standing research topic in computer vision. Prior works~\cite{cohen2006color, sunkavalli2010multi, jia2006drag, perez2003poisson, tao2010error} focused on transferring low-level appearance statistics from the background to the foreground. Recently, plenty of end-to-end solutions~\cite{tsai2017deep,cong2020dovenet,ling2021region,guo2021intrinsic,sofiiuk2021foreground} have been developed for image harmonization, including the first deep learning method~\cite{tsai2017deep}, domain translation based methods~\cite{cong2020dovenet,cong2021bargainnet}, and attention-based modules~\cite{cun2020improving,hao2020image}. Unfortunately, most of them require an inharmonious region mask as input; otherwise, the quality of the harmonized image is remarkably degraded. S2AM~\cite{cun2020improving} took blind image harmonization into account and predicted an inharmonious region mask. However, mask prediction is not the focus of \cite{cun2020improving} and the quality of the predicted masks is low. \subsection{Inharmonious Region Localization} Inharmonious region localization aims to spot the suspicious regions that are incompatible with the background from the perspective of color and illumination inconsistency. DIRL~\cite{liang2021inharmonious} was the first work on inharmonious region localization, which utilized bi-directional feature integration, mask-guided dual attention, and a global-context guided decoder to dig out inharmonious regions. Nevertheless, DIRL did not consider the uniqueness of this task and its performance awaits further improvement.
In this work, we propose a novel framework to magnify the discrepancy between the inharmonious region and the background, which can help the downstream detector distinguish the inharmonious region from the background. \subsection{Image Manipulation Localization} Another related topic is image manipulation localization, which aims to distinguish the tampered region from the pristine background. Copy-move, image splicing, removal, and enhancement are the four well-studied types in image manipulation localization, among which image splicing is the most related to our task. Traditional image manipulation localization methods relied heavily on prior knowledge or strong assumptions about the inconsistency between the tampered region and the background, such as noise patterns~\cite{pun2016multi}, Color Filter Array interpolation patterns~\cite{ferrara2012image}, and JPEG-related compression artifacts~\cite{amerini2014splicing}. Recently, deep learning based methods~\cite{wu2019mantra, bappy2019hybrid, kniaz2019point, yang2020constrained} attempted to tackle the image forgery problem by leveraging local patch comparison~\cite{bayar2016deep, rao2016deep, huh2018fighting, bappy2019hybrid}, forgery feature extraction~\cite{yang2020constrained, wu2019mantra,zhou2020generate}, adversarial learning~\cite{kniaz2019point}, and so on. Different from the above image manipulation localization methods, color and illumination inconsistency is the main focus of the inharmonious region localization task. \subsection{Learnable Color Transformation} In previous low-level computer vision tasks such as image enhancement, many color mapping techniques have been well explored, which meet our demand for color space manipulation. To name a few, HDRNet~\cite{gharbi2017deep} learned a guidance map and a bilateral grid to perform instance-aware linear color transformation. Zeng \emph{et al.}~\cite{zeng2020learning} exploited a 3D Look-Up Table (LUT) for color transformation. DCENet~\cite{guo2020zero} iteratively estimated color curve parameters to correct color. In this work, we adopt an improved version of HDRNet~\cite{gharbi2017deep} as the color mapping module to magnify the domain discrepancy between the inharmonious region and the background.
\section{Our Approach} Given an input synthetic image $\bm{I}$, inharmonious region localization aims to predict a mask $\hat{\bm{M}}$ that distinguishes the inharmonious region from the background region. Since the perception of the inharmonious region is attributed to color and illumination inconsistency, we expect to find a color mapping $\mathcal{F}: \bm{I} \mapsto \bm{I}'$ so that the downstream localization network $G$ can capture the discrepancy between the inharmonious region and the background more easily. As shown in Figure~\ref{fig:framework}, the whole framework consists of two stages: a color mapping stage and an inharmonious region localization stage. In the color mapping stage, we derive color transformation coefficients $\bm{A}$ from the color mapping module and apply the color transformation to the synthetic image $\bm{I}$ to produce the retouched image $\bm{I}'$. We expect the retouched image $\bm{I}'$ to expose a larger discrepancy between the inharmonious region and the background. To impose this constraint, we propose a domain discrepancy magnification loss and a direction invariance loss based on the extracted domain-aware codes of the inharmonious regions and background regions in $\bm{I}$ and $\bm{I}'$.
In the inharmonious region localization stage, the retouched image $\bm{I}'$ is delivered to the localization network $G$ to spot the inharmonious region, yielding the inharmonious mask $\hat{\bm{M}}$. We will detail the two stages in Section~\ref{sec:color_mapping} and Section~\ref{sec:localization_network}, respectively.
\begin{figure*}[!ht] \centering \includegraphics[width=1.0\linewidth]{figures/framework.jpg} \caption{The illustration of our proposed framework, which consists of a color mapping stage and an inharmonious region localization stage. Our color mapping module iHDRNet predicts the color transformation coefficients $\bm{A}$ for the input image $\bm{I}$, and the transformed image $\bm{I}'$ is fed into $G$ to produce the inharmonious region mask $\hat{\bm{M}}$. } \label{fig:framework} \end{figure*}
\subsection{Color Mapping Stage} \label{sec:color_mapping} \subsubsection{Color Manipulation:} In some localization tasks~\cite{panzade2016copy, roy2013face, cho2016canny, beniak2008automatic}, input images are first converted from the RGB color space to other color spaces (\emph{e.g.}, HSV~\cite{panzade2016copy, roy2013face}, YCrCb~\cite{cho2016canny, beniak2008automatic}), in which the chroma and illumination distributions are more easily characterized. However, these color mappings are fixed in advance and cannot satisfy the requirements of the inharmonious region localization task. Therefore, we seek to learn an instance-aware color mapping $\mathcal{F}: \bm{I} \mapsto \bm{I}'$ to promote the learning of the downstream localization network. Considering the popularity of HDRNet~\cite{gharbi2017deep} and its remarkable success in color manipulation tasks~\cite{zhou2021transfill, xia2020joint, wang2019underexposed}, we build our color mapping module in the spirit of HDRNet. HDRNet~\cite{gharbi2017deep} implements local and global feature integration to keep texture details, producing a bilateral grid. To preserve edge information, it also learns an intensity map, named the guidance map, and performs data-dependent lookups in the bilateral grid to generate region-specific and intensity-specific color transformation coefficients. For more technical details, please refer to \cite{gharbi2017deep}. We make two revisions to HDRNet. First, we use central difference convolution layers~\cite{yu2020searching} to extract local features, in which a hyperparameter $\theta$ trades off the contributions of vanilla convolution and central difference convolution. As claimed in \cite{yu2020searching}, introducing central difference convolution into vanilla convolution can enhance the generalization ability and modeling capacity. Then, we apply a self-attention layer \cite{zhang2019self} to aggregate global information, which is adept at capturing long-range dependencies between distant pixels. We use the processed features to produce the bilateral grid, and the remaining steps are the same as HDRNet. We refer to the improved HDRNet as iHDRNet. The detailed comparison between HDRNet and iHDRNet can be found in the Supplementary. Analogous to HDRNet, iHDRNet learns region-specific and intensity-specific color transformation coefficients $\bm{A} = [\bm{K}, \bm{b}] \in \mathbb{R}^{H\times W \times 3 \times 4}$ with $\bm{K}\in \mathbb{R}^{H\times W \times 3 \times 3}$ and $\bm{b}\in \mathbb{R}^{H\times W \times 3 \times 1}$, where $H$ and $W$ are the height and width of the input image $\bm{I}$, respectively.
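Applying these coefficients amounts to a per-pixel affine transform of the RGB values, as formalized next; a minimal PyTorch-style sketch is given below, assuming a channel-last tensor layout:
\begin{verbatim}
import torch

def apply_color_transform(image, K, b):
    # image: (H, W, 3); K: (H, W, 3, 3); b: (H, W, 3, 1)
    out = torch.einsum('hwij,hwj->hwi', K, image)
    return out + b.squeeze(-1)
\end{verbatim}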
With the color transformation coefficients $\bm{A}$, the inharmonious image $\bm{I}$ can be mapped to the retouched image $\bm{I}'$. Formally, for each pixel at location $p$, $\bm{I}'(p) = \bm{A}(p) \cdot [\bm{I}(p), \textbf{1}]^T = \bm{K}(p)\bm{I}(p) + \bm{b}(p)$, where $\bm{K}(p) \in \mathbb{R}^{3\times 3}, \bm{b}(p) \in \mathbb{R}^{3 \times 1}$ are the transformation coefficients at location $p$. \subsubsection{Domain Discrepancy Magnification:}\label{sec:ddm} We expect that the color and illumination discrepancy between the inharmonious region and the background is enlarged after color transformation. Following~\cite{cong2020dovenet,cong2021bargainnet}, we refer to each suite of color and illumination statistics as one domain. Then, we employ a domain encoder $E_\text{dom}$ to extract the domain-aware codes of the inharmonious region and the background separately from $\bm{I}$ and $\bm{I}'$. Note that we name the extracted code a domain-aware code instead of a domain code, because the extracted code is expected to contain the color/illumination statistics but may also contain content information (\emph{e.g.}, semantic layout). For the latent feature space, we select the commonly used intermediate features from the fixed pre-trained VGG-19~\cite{simonyan2014very} and feed them into partial convolution layers~\cite{Liu2018} to derive region-aware features. The domain encoder takes an image and a mask as input. Each partial convolutional layer performs the convolution operation only within the masked area, where the mask is updated by a fixed rule and information leakage from the unmasked area is avoided. At the end of $E_\text{dom}$, features are averaged along spatial dimensions and projected into a shape-independent domain-aware code. We denote the domain-aware code of the inharmonious region (\emph{resp.}, background) of $\bm{I}$ as $\bm{z}_f$ (\emph{resp.}, $\bm{z}_b$). Similarly, we denote the domain-aware code of the inharmonious region (\emph{resp.}, background) of $\bm{I}'$ as $\bm{z}_f'$ (\emph{resp.}, $\bm{z}_b'$). Note that the domain encoder $E_\text{dom}$ is only used in the training phase, and only the projector is trainable while the other components are frozen. \subsubsection{Domain Discrepancy Magnification Loss:} To ensure that the color/illumination discrepancy between the inharmonious region and the background is enlarged, we enforce the distance between the domain-aware codes of the inharmonious region and the background of the retouched image $\bm{I}'$ to be larger than that of the original image $\bm{I}$. To this end, we propose a novel Domain Discrepancy Magnification (DDM) loss as follows,
\begin{eqnarray} \label{eqn:loss_ddm} \mathcal{L}_{{ddm}} = \max{(d(\bm{z}_f, \bm{z}_b) - d({\bm{z}'_f},{\bm{z}'_b}) + m,0)}, \end{eqnarray}
where $d(\cdot, \cdot)$ measures the Euclidean distance between two domain-aware codes, and the margin $m$ is set as $0.01$ via cross-validation. In this way, the distance between $\bm{z}'_f$ and $\bm{z}'_b$ is enforced to be larger than the distance between $\bm{z}_f$ and $\bm{z}_b$ by at least the margin $m$. One issue is that the domain-aware codes may also contain content information (\emph{e.g.}, semantic layout). However, the content difference between the inharmonious region and the background remains unchanged after color transformation, so we can deem $d(\bm{z}_f, \bm{z}_b) - d({\bm{z}'_f},{\bm{z}'_b})$ as the change in domain difference after color transformation.
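For clarity, a PyTorch-style sketch of the DDM loss in Eq.~(\ref{eqn:loss_ddm}) is given below, assuming batched domain-aware codes:
\begin{verbatim}
import torch

def ddm_loss(zf, zb, zf_r, zb_r, m=0.01):
    # zf, zb: codes of the input image; zf_r, zb_r: codes of the
    # retouched image; all of shape (batch, d_dom)
    d = torch.norm(zf - zb, dim=-1)        # d(z_f, z_b)
    d_r = torch.norm(zf_r - zb_r, dim=-1)  # d(z_f', z_b')
    return torch.clamp(d - d_r + m, min=0).mean()
\end{verbatim}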
\subsubsection{Direction Invariance Loss:} In practice, we find that solely using (\ref{eqn:loss_ddm}) might lead to the corruption of the domain-aware code space without necessary regularization. Inspired by StyleGAN-NADA~\cite{gal2021stylegannada}, we calculate the domain discrepancy vector $\Delta \bm{z} = \bm{z}_f - \bm{z}_b$ (\emph{resp.}, $\Delta \bm{z}' = \bm{z}_f' - \bm{z}_b'$) between the inharmonious region and the background in the input (\emph{resp.}, retouched) image. Then, we align the direction of the domain discrepancy vector of the input image with that of the retouched image, using the following Direction Invariance (DI) loss:
\begin{eqnarray} \label{eqn:loss_di} \begin{aligned} \mathcal{L}_{{di}} = 1 - \langle \Delta \bm{z},\Delta \bm{z}' \rangle, \end{aligned} \end{eqnarray}
where $\langle \cdot, \cdot \rangle$ denotes the cosine similarity. Intuitively, we expect that the direction of the domain discrepancy roughly stays the same after color transformation. There could be other possible regularizers for the domain-aware codes, but we observe that the Direction Invariance (DI) loss in (\ref{eqn:loss_di}) empirically works well. \subsection{Inharmonious Region Localization Stage} \label{sec:localization_network} In the inharmonious region localization stage, the retouched image $\bm{I}'$ is delivered to the localization network $G$, which digs out the inharmonious region from $\bm{I}'$ and produces the inharmonious mask $\hat{\bm{M}}$. The focus of this paper is a novel inharmonious region localization framework that magnifies the domain discrepancy. This framework can accommodate an arbitrary localization network $G$, such as the inharmonious region localization method DIRL~\cite{liang2021inharmonious}, segmentation methods \cite{ronneberger2015u,chen2017rethinking}, and so on. In our experiments, we try using DIRL \cite{liang2021inharmonious} and UNet \cite{ronneberger2015u} as the localization network. After determining the region localization network, we wrap up its original loss terms (\emph{e.g.}, binary cross-entropy loss, intersection-over-union loss) as a localization loss $\mathcal{L}_{loc}$. Together with our proposed domain discrepancy magnification (DDM) loss in (\ref{eqn:loss_ddm}) and direction invariance (DI) loss in (\ref{eqn:loss_di}), the total loss of our framework can be written as
\begin{eqnarray}\label{eqn:total} \mathcal{L}_{total} = \lambda_{ddm} \mathcal{L}_{{ddm}} + \lambda_{di} \mathcal{L}_{di} + \mathcal{L}_{loc}, \end{eqnarray}
where the trade-off parameters $\lambda_{ddm}$ and $\lambda_{di}$ depend on the downstream localization network.
\section{Experiments} \subsection{Datasets and Implementation Details} Following \cite{liang2021inharmonious}, we conduct experiments on the image harmonization dataset iHarmony4~\cite{cong2020dovenet}, which provides inharmonious images with their corresponding inharmonious region masks. iHarmony4 is composed of four sub-datasets: HCOCO, HFlickr, HAdobe5K, and HDay2Night. For the HCOCO and HFlickr datasets, the inharmonious images are obtained by adjusting the color and lighting statistics of the foreground. For the HAdobe5K and HDay2Night datasets, the inharmonious images are obtained by overlaying the foreground with the counterpart of the same scene retouched with a different style or captured in a different condition. Therefore, the inharmonious images of the four sub-datasets give an inharmonious perception mainly due to color and lighting inconsistency, which conforms to our definition of the inharmonious region.
Moreover, as suggested by DIRL~\cite{liang2021inharmonious}, we discard the images whose foreground occupies more than 50\% of the image area, which avoids the ambiguity that the background could also be deemed the inharmonious region. Following \cite{liang2021inharmonious}, the training set and test set contain 64255 images and 7237 images, respectively. All experiments are conducted on a workstation with an Intel Xeon 12-core CPU (2.1 GHz), 128GB RAM, and a single Titan RTX GPU. We implement our method using Pytorch~\cite{paszke2019pytorch} with CUDA v10.2 on Ubuntu 18.04 and set the input image size as $256\times 256$. We choose the Adam optimizer~\cite{kingma2014adam} with an initial learning rate of 0.0001, a batch size of 8, and momentum parameters $\beta_1 = 0.5, \beta_2=0.999$. The hyper-parameters $\lambda_{ddm}$ and $\lambda_{di}$ in Eqn. (\ref{eqn:total}) are set as $0.01$ for DIRL~\cite{liang2021inharmonious} and $0.001$ for UNet~\cite{ronneberger2015u}, respectively. \emph{The detailed network architecture of the domain encoder and iHDRNet can be found in the Supplementary.} For quantitative evaluation, we calculate Average Precision (AP), $F_1$ score, and Intersection over Union (IoU) based on the predicted mask $\hat{M}$ and the ground-truth mask $M$ following~\cite{liang2021inharmonious}. \subsection{Baselines} To the best of our knowledge, DIRL~\cite{liang2021inharmonious} is the only existing method designed for inharmonious region localization. Therefore, we also consider works from related fields: 1) the blind image harmonization method S2AM~\cite{cun2020improving}; 2) image manipulation detection methods: MantraNet~\cite{wu2019mantra}, MFCN~\cite{salloum2018image}, MAGritte~\cite{kniaz2019point}, H-LSTM~\cite{bappy2019hybrid}, SPAN~\cite{hu2020span}; 3) salient object detection methods: F3Net~\cite{wei2020f3net}, GATENet~\cite{zhao2020suppress}, MINet~\cite{pang2020multi}; 4) semantic segmentation methods: UNet~\cite{ronneberger2015u}, DeepLabv3~\cite{chen2017rethinking}, HRNet-OCR~\cite{sun2019deep}.
\begin{table}[t] \centering \begin{tabular} {l|c|c|c } \toprule[1pt] \multirow{2}{1.0in}{\textbf{Methods}} & \multicolumn{3}{c}{\textbf{Evaluation Metrics}} \\ \cline{2-4} & \multicolumn{1}{c|}{AP(\%) $\uparrow$} & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c}{IoU(\%) $\uparrow$} \\ \hline \textbf{UNet} & 74.90 & 0.6717 & 64.74 \\ \textbf{DeepLabv3} & 75.69 & 0.6902 & 66.01 \\ \textbf{HRNet-OCR} & 75.33 & 0.6765 & 65.49 \\ \hline \textbf{MFCN} & 45.63 & 0.3794 & 28.54 \\ \textbf{MantraNet} & 64.22 & 0.5691 & 50.31 \\ \textbf{MAGritte} & 71.16 & 0.6907 & 60.14 \\ \textbf{H-LSTM} & 60.21 & 0.5239 & 47.07 \\ \textbf{SPAN} & 65.94 & 0.5850 & 54.27 \\ \hline \textbf{F3Net} & 61.46 & 0.5506 & 47.48 \\ \textbf{GATENet} & 62.43 & 0.5296 & 46.33 \\ \textbf{MINet} & 77.51 & 0.6822 & 63.04 \\ \hline \textbf{S2AM} & 43.77 & 0.3029 & 22.36 \\ \hline \textbf{DIRL} & 80.02 & 0.7317 & 67.85 \\ \hline \textbf{MadisNet(UNet)} & 81.15 & 0.7372 & 67.28 \\ \textbf{MadisNet(DIRL)} & \textbf{85.86} & \textbf{0.8022} & \textbf{74.44} \\ \bottomrule[1pt] \end{tabular} \caption{Quantitative comparison with baseline methods on the iHarmony4 dataset. The best results are denoted in boldface.} \label{tab:quantitative} \end{table}
\begin{figure*}[!ht] \centering \includegraphics[width=0.95\linewidth]{figures/mask_comparison.jpg} \caption{Qualitative comparison with baseline methods. GT is the ground-truth inharmonious region mask.
} \label{fig:qualitative} \end{figure*}
\subsection{Experimental Results} \subsubsection{Quantitative Comparison} The quantitative results are summarized in Table~\ref{tab:quantitative}. All of the baseline results are directly copied from \cite{liang2021inharmonious} except SPAN, GATENet, F3Net, and MINet, which we train from scratch for a fair comparison. One observation is that the image manipulation localization methods~\cite{wu2019mantra, kniaz2019point, bappy2019hybrid, hu2020span} are weak in localizing the inharmonious region. One possible explanation is that they focus on noise patterns and forgery feature extraction while paying less attention to the low-level statistics of color and illumination. We also notice that the salient object detection methods~\cite{wei2020f3net, zhao2020suppress} achieve worse performance than the semantic segmentation methods~\cite{ronneberger2015u, chen2017rethinking, sun2019deep}, while MINet~\cite{pang2020multi} beats all of the semantic segmentation methods in the AP metric. S2AM~\cite{cun2020improving} predicts an inharmonious region mask as a side product to indicate the region to be harmonized. Unfortunately, the quality of the predicted mask is far from satisfactory since image harmonization is its main focus. Another interesting observation is that typical segmentation methods achieve the most competitive performance among the methods that are not specifically designed for inharmonious region localization. This might be attributed to the fact that semantic segmentation methods are designed as general frameworks and generalize well to the inharmonious region localization task. Since our framework can accommodate any region localization network, we explore using UNet and DIRL under our framework, which are referred to as MadisNet(UNet) and MadisNet(DIRL), respectively. It can be seen that MadisNet(DIRL) (\emph{resp.}, MadisNet(UNet)) outperforms DIRL (\emph{resp.}, UNet). MadisNet(DIRL) beats the existing inharmonious region localization method and all of the state-of-the-art methods from related fields by a large margin, which verifies the effectiveness of our framework. \emph{In the remainder of the experiment section, we use DIRL as our default region localization network (i.e., ``MadisNet" is short for ``MadisNet(DIRL)"), unless otherwise specified.} \subsubsection{Qualitative Comparison} We show the visualization results of our method as well as the baselines in Figure~\ref{fig:qualitative}, which shows that our method can localize the inharmonious region correctly and preserve the boundaries accurately. In comparison, the baseline methods may locate the wrong object (row 4) or only detect an incomplete region (row 3). More visualization results can be found in the Supplementary. \subsection{Ablation Studies} \subsubsection{Loss Terms} First, we analyze the necessity of each loss term in Table~\ref{tab:losses}. One can learn that our proposed $\mathcal{L}_{ddm}$ and $\mathcal{L}_{di}$ are complementary to each other. Without our proposed $\mathcal{L}_{ddm}$ and $\mathcal{L}_{di}$, the performance is significantly degraded, which proves that $\mathcal{L}_{ddm}$ and $\mathcal{L}_{di}$ play important roles in inharmonious region localization.
\begin{table}[t] \centering \begin{tabular} {c|c | c|c|c } \toprule[1pt] \multicolumn{2}{c|}{\textbf{Components}} & \multicolumn{3}{c}{\textbf{Evaluation Metrics}}\\ \cline{1-5} \multicolumn{1}{c|}{Encoder } & \multicolumn{1}{c|}{Self Attention} & \multicolumn{1}{c|}{AP(\%) $\uparrow$} & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c}{IoU(\%) $\uparrow$} \\ \hline VC & & 81.05 & 0.7508 & 69.43 \\ VC & \checkmark & 83.54 & 0.7749 & 72.08 \\ CDC & & 82.80 & 0.7697 & 71.64 \\ CDC & \checkmark & \textbf{85.86} & \textbf{0.8022} & \textbf{74.44} \\ \bottomrule[1pt] \end{tabular} \caption{Ablation study on the components of improved HDRNet. ``VC" denotes the vanilla convolution layer and ``CDC" denotes the central difference convolution layer.} \label{tab:iHDRNet} \end{table}
\subsubsection{iHDRNet} Then, we conduct an ablation study to validate the effectiveness of the CDC layer and the self-attention layer in our iHDRNet. The results are summarized in Table~\ref{tab:iHDRNet}. By comparing row 1 (\emph{resp.}, 3) and row 2 (\emph{resp.}, 4), we can see that it is useful to employ the self-attention layer to capture long-range dependencies, which yields promising improvements. The comparison between row 2 and row 4 demonstrates that the CDC layer performs more favorably than the vanilla convolution layer, since the CDC layer can capture both intensity-level and gradient-level information.
\begin{table}[t] \centering \begin{tabular} {l |c|c|c } \toprule[1pt] \multirow{2}{0.8in}{\textbf{Loss Terms}} & \multicolumn{3}{c}{\textbf{Evaluation Metrics}} \\ \cline{2-4} & \multicolumn{1}{c|}{AP(\%) $\uparrow$} & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c}{IoU(\%) $\uparrow$} \\ \hline $\mathcal{L}_{loc}$ & 80.95 & 0.7401 & 68.81 \\ \hline $\mathcal{L}_{loc} + \mathcal{L}_{ddm}$ & 81.86 & 0.7533 & 69.84 \\ $\mathcal{L}_{loc} + \mathcal{L}_{di}$ & 83.18 & 0.7701 & 71.67 \\ $\mathcal{L}_{loc} + \mathcal{L}_{ddm} + \mathcal{L}_{di}$ & \textbf{85.86} & \textbf{0.8022} & \textbf{74.44} \\ \bottomrule[1pt] \end{tabular} \caption{The comparison among different loss terms.} \label{tab:losses} \end{table}
\subsection{Study on Color Manipulation Approaches} To find the best color manipulation approach for inharmonious region localization, we compare our color mapping module iHDRNet with non-learnable and learnable color transformations. For non-learnable color transformation, we convert the input RGB image to other color spaces (HSV, YCrCb). Besides, one might wonder whether the learnable color mapping is equivalent to applying random color jittering to the input image, so we also take color jittering augmentation into account. Because the above color transformation approaches do not involve learnable model parameters, we simply apply them to the input images and feed the transformed images into the region localization network; in this case, the DDM loss and DI loss are not used. For learnable color transformation, we compare with LUTs \cite{zeng2020learning}, DCENet~\cite{guo2020zero}, and HDRNet~\cite{gharbi2017deep}. We directly replace iHDRNet with these color transformation approaches while the other components of our proposed framework remain the same, in which the DDM loss and DI loss are used. The results are summarized in Table~\ref{tab:ddm_variants}. We also include the RGB baseline, which means that no color mapping is applied; its result is identical to DIRL's in Table~\ref{tab:quantitative}.
One can observe that the non-learnable color mapping methods achieve comparable or even worse results compared with the RGB baseline. We infer that they are unable to reveal the relationship between the inharmonious region and the background through simple traditional color transformations. Among the learnable color mapping methods, LUT achieves even worse scores than the RGB baseline. This might be because LUT only learns a global transformation for the whole image without considering local variation. HDRNet and DCENet slightly improve the performance. One possible explanation is that both HDRNet and DCENet are region-specific color manipulation methods, so they can learn color transformations for different regions adaptively, making it easier for the downstream localization module to discover the inharmonious region. Our iHDRNet achieves the best results, because the central difference convolution~\cite{yu2020searching} can help identify the color inconsistency in synthetic images and the self-attention layer can capture long-range dependencies between distant pixels.
\begin{table}[t] \centering \begin{tabular} {l |c|c|c } \toprule[1pt] \multirow{2}{1.1in}{\textbf{Color Mapping}} & \multicolumn{3}{c}{\textbf{Evaluation Metrics}} \\ \cline{2-4} & \multicolumn{1}{c|}{AP(\%) $\uparrow$} & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c}{IoU(\%) $\uparrow$} \\ \hline \textbf{RGB}(Baseline) & 80.02 & 0.7317 & 67.85 \\ \hline \textbf{HSV} & 79.86 & 0.7282 & 67.40 \\ \textbf{YCrCb} & 81.07 & 0.7484 & 69.35 \\ \textbf{ColorJitter} & 77.50 & 0.7068 & 65.40 \\ \hline \textbf{LUTs} & 78.39 & 0.7181 & 66.16 \\ \textbf{DCENet} & 81.90 & 0.7623 & 70.92 \\ \textbf{HDRNet} & 81.05 & 0.7508 & 69.43 \\ \hline \textbf{iHDRNet} & \textbf{85.86} & \textbf{0.8022} & \textbf{74.44} \\ \bottomrule[1pt] \end{tabular} \caption{The comparison among different color mapping methods. RGB(Baseline) means that no color mapping is applied.} \label{tab:ddm_variants} \end{table}
\subsection{Analyses of Domain Discrepancy}
\begin{table}[t] \centering \begin{tabular} {|c|c|c|} \hline & $d_{f,b} + m< d'_{f,b}$ & $d_{f,b} < d'_{f,b}$ \\ \hline Training set & 76.22\% & 99.74\% \\ \hline Test set & 77.38\% & 99.68\% \\ \hline \end{tabular} \caption{The percentage of images whose domain discrepancy is enlarged after color mapping. $d_{f,b}$ is short for $d(\bm{z}_f,\bm{z}_b)$ and $d'_{f,b}$ is short for $d(\bm{z}'_f,\bm{z}'_b)$. Here $m = 0.01$ as described in Section~\ref{sec:ddm}.} \label{tab:domain_discrepancy} \end{table}
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/failure.jpg} \caption{Failure cases of our method. ``GT" is the ground-truth inharmonious region mask.} \label{fig:failure} \end{figure}
We report the percentage of images whose domain discrepancy is magnified after color transformation in Table \ref{tab:domain_discrepancy}. For both the training set and the test set, we report two results: the percentage of $d(\bm{z}_f,\bm{z}_b)+m<d(\bm{z}'_f,\bm{z}'_b)$ and the percentage of $d(\bm{z}_f,\bm{z}_b)<d(\bm{z}'_f,\bm{z}'_b)$, in which the latter is a special case of the former obtained by setting $m=0$. From Table \ref{tab:domain_discrepancy}, we can see that the color mapping module learned on the training set generalizes to the test set very well. On the test set, the domain discrepancy of $77.38\%$ of the images is enlarged by at least the margin $m$ after color transformation. When we relax the requirement, \emph{i.e.}, $m=0$, the percentage is as high as $99.68\%$ on the test set.
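The percentages in Table~\ref{tab:domain_discrepancy} can be computed from batched domain-aware codes with a few lines, consistent with the notation above:
\begin{verbatim}
import torch

def enlarged_ratio(zf, zb, zf_r, zb_r, m=0.0):
    # fraction of images whose domain distance grows by at least m
    d = torch.norm(zf - zb, dim=-1)
    d_r = torch.norm(zf_r - zb_r, dim=-1)
    return (d + m < d_r).float().mean()
\end{verbatim}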
\subsection{Discussion on Limitation} Figure~\ref{fig:failure} shows three failure cases of our model. In row 1, our model treats the white pigeon at the bottom left of the image as the inharmonious region. We conjecture that the inharmonious region has a dark tone similar to the surrounding pigeons, so our model is misled by the white pigeon. In row 2, the white cup is recognized as the inharmonious region, probably because the ground-truth inharmonious region and the background share a warm color tone. In the last row, our model views the yellow light sign as the inharmonious region too, because the inharmonious region is brighter than the background. In summary, our model may be weak when the target inharmonious region is surrounded by objects with similar color or intensity. \subsection{Results on Four Sub-datasets and Multiple Inharmonious Regions} Because iHarmony4~\cite{cong2020dovenet} contains four sub-datasets, we show the results on the four sub-datasets in the Supplementary. Furthermore, this paper mainly focuses on one inharmonious region, but there could be multiple disjoint inharmonious regions in a synthetic image. Therefore, we also demonstrate the ability of our method to identify multiple disjoint inharmonious regions in the Supplementary.
\section{Conclusion} In this paper, we have proposed a novel framework to resolve the inharmonious region localization problem with a color mapping module and our designed domain discrepancy magnification loss. After processing by the color mapping module, the inharmonious region can be more easily discovered in the synthetic images. Extensive experiments on the iHarmony4 dataset have demonstrated the effectiveness of our approach.
\section*{Acknowledgements} This work is partially sponsored by National Natural Science Foundation of China (Grant No. 61902247), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Municipal Science and Technology Key Project (Grant No. 20511100300).
\bibliographystyle{aaai22}
\section{Network Architecture}\label{sec:implementation} In this section, we elaborate on the details of the domain encoder $E_\text{dom}$ and our improved HDRNet (iHDRNet). \subsection{Domain Encoder} We employ a domain encoder $E_\text{dom}$ to extract the domain-aware codes of the inharmonious region and the background separately. The domain encoder takes an image and a mask as input. The network structure of the domain encoder is shown in Figure~\ref{fig:E_dom}. We use the pre-trained VGG-19~\cite{simonyan2014very} to extract multi-scale features, in which the vanilla convolution layer is replaced by a partial convolution layer \cite{Liu2018}. Each partial convolutional layer performs the convolution operation only within the masked area, where the mask is updated by a fixed rule and information leakage from the unmasked area is prevented. For the technical details of partial convolution, please refer to \cite{Liu2018}. We take the \texttt{conv1\_2}, \texttt{conv2\_2}, and \texttt{conv3\_3} features, which separately pass through a global average pooling layer and a fully-connected layer to yield three shape-independent vectors $\{\mathbf{z}_i|^3_{i=1}\}, \mathbf{z}_i \in \mathbb{R}^{d_{dom}}$, with $d_{dom}$ being the latent code dimension. Here, we set $d_{dom} = 16$ following the configuration of BargainNet~\cite{cong2021bargainnet}.
Finally, we use learnable weights $\{w_i|^3_{i=1}\}$ combined with $\{\mathbf{z}_i|^3_{i=1}\}$ to produce the final domain-aware code $\mathbf{z}$, which can be formulated as $\mathbf{z} = \sum_{i=1}^3 w_i \cdot \mathbf{z}_i$. \subsection{Improved HDRNet} As shown in Figure~\ref{fig:iHDRNet}, we build up our improved HDRNet in the spirit of HDRNet~\cite{gharbi2017deep}. First, we replace the vanilla convolution used in the encoder with central difference convolution (CDC) layers~\cite{yu2020searching}. As indicated in \cite{yu2020searching}, the CDC layer merges a central difference term into the vanilla convolution layer, which can capture both intensity-level and gradient-level information; combining the advantages of the central difference term and vanilla convolution enhances the generalization ability and modeling capacity. In HDRNet~\cite{gharbi2017deep}, global features are acquired by global average pooling on local features. We argue that the extracted global features may not be capable of revealing the relations between regions, whereas inharmonious regions are determined by comparison with other regions in an image. Therefore, we replace the global feature branch with a self-attention layer~\cite{zhang2019self} to model long-range dependencies between distant pixels. In the self-attention layer, the input feature map $\textbf{X} \in \mathbb{R}^{H\times W \times C}$ is projected into three terms named the query $\textbf{Q} \in \mathbb{R}^{H\times W \times C'}$, key $\textbf{K} \in \mathbb{R}^{H\times W \times C'}$, and value $\textbf{V} \in \mathbb{R}^{H\times W \times C}$, respectively, where $H, W$ refer to the spatial size and $C$ (\emph{resp.}, $C'$) denotes the original (\emph{resp.}, reduced) channel dimension. Then, the output feature $\textbf{Y}$ augmented by self-attention is computed by:
\begin{eqnarray} \begin{aligned} \textbf{X}' = \text{Softmax}(\frac{\textbf{Q}\textbf{K}^{T}}{\sqrt{C'}})\cdot \textbf{V},\quad \textbf{Y} = \textbf{X} + \gamma \cdot \textbf{X}', \end{aligned} \end{eqnarray}
where $\gamma$ is a learnable parameter that reweights the contribution of the self-attention feature $\textbf{X}'$. Please refer to~\citet{zhang2019self} for more technical details.
\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/E_dom.jpg} \caption{The architecture of domain encoder $E_\text{dom}$. } \label{fig:E_dom} \end{figure}
\begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figures/transform_cmp.jpg} \caption{The impact of our color mapping module on localization results. ``Retouched Image" is the output of the color mapping module. ``GT" is the ground-truth inharmonious region mask. }\label{sec:transform_cmp} \label{fig:transform_cmp} \end{figure*}
\subsection{Analysis of Model Size} We report the model size of each module in our framework in Table~\ref{tab:model_size}, including the domain encoder $E_\text{dom}$, our iHDRNet, and the region localization network DIRL~\cite{liang2021inharmonious}. The comparison among the three modules shows that the extra modules (domain encoder and iHDRNet) added in our framework are relatively lightweight. Moreover, the domain encoder is only used in the training stage and is discarded at test time.
\begin{table}[t] \centering \begin{tabular} {c|c|c|c} \toprule[1pt] Module & iHDRNet & $E_\text{dom}$ & DIRL \\ \hline Size (M) & 1.47 & 1.78 & 53.47 \\ \bottomrule[1pt] \end{tabular} \caption{The model size of each module in our method.} \label{tab:model_size} \end{table} \section{Hyper-parameter Analyses}\label{sec:hyperparameter} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/hyperparameter.jpg} \caption{$F_1$ of our method when varying the hyper-parameters $m$, $\lambda_{ddm}$, and $\lambda_{di}$.} \label{fig:hyperparameter} \end{figure*} \begin{table}[t] \centering \setlength{\tabcolsep}{1.25mm}{ \scalebox{0.95}{ \begin{tabular} {|c| c|c|c| c|c|c|} \hline \textbf{Method} & \multicolumn{3}{c|}{\textbf{DIRL}} & \multicolumn{3}{c|}{\textbf{MadisNet(DIRL)}} \\ \hline \multicolumn{1}{|c|}{Metrics} & \multicolumn{1}{c|}{AP $\uparrow$ } & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c|}{IoU $\uparrow$} & \multicolumn{1}{c|}{AP $\uparrow$} & \multicolumn{1}{c|}{$F_1$ $\uparrow$} & \multicolumn{1}{c|}{IoU $\uparrow$} \\ \hline \text{HCOCO} & 74.25 & 0.6701 & 60.85 & 83.78 & 0.7741 & 70.50 \\ \hline \text{HAdobe5k} & 92.16 & 0.8801 & 84.02 & 92.45 & 0.8850 & 84.75 \\ \hline \text{HFlickr} & 84.21 & 0.7786 & 73.21 & 85.65 & 0.8032 & 75.49 \\ \hline \text{HDay2night} & 38.74 & 0.2396 & 20.11 & 57.40 & 0.4672 & 40.47 \\ \hline \end{tabular} } } \caption{Quantitative comparison with the baseline method DIRL on the four sub-datasets of iHarmony4.} \label{tab:sub_datasets} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/multi_objects.jpg} \caption{Visualization comparison on synthetic images with multiple inharmonious regions. ``GT" is the ground-truth inharmonious region mask.} \label{fig:multi_objects} \end{figure} In this section, we investigate the impact of the hyper-parameters $m$, $\lambda_{ddm}$, and $\lambda_{di}$. We vary $m$ (\emph{resp.}, $\lambda_{ddm}$, $\lambda_{di}$) in the range of [$10^{-4}$, $1$] (\emph{resp.}, [$10^{-3}$, $10^1$], [$10^{-3}$, $10^1$]) and plot the $F_1$ of our method in Figure \ref{fig:hyperparameter}, in which we tune the hyper-parameter reported on the horizontal axis and keep the others fixed. We use beam search for the hyper-parameters $\{\lambda_{ddm},\lambda_{di}\}$ and then fix the best $\{\lambda_{ddm},\lambda_{di}\}$ when tuning the hyper-parameter $m$. It can be seen that our method is relatively robust to the hyper-parameters $m$, $\lambda_{ddm}$, and $\lambda_{di}$ when they are set within a reasonable range. \section{More Visualization Results}\label{sec:visual} We show more examples of the comparison between our method and the baseline methods DIRL~\cite{liang2021inharmonious}, MINet~\cite{pang2020multi}, UNet~\cite{ronneberger2015u}, Deeplabv3~\cite{chen2017rethinking}, HRNet-OCR~\cite{sun2019deep}, and SPAN~\cite{hu2020span} in Figure \ref{fig:visualization}. In row 1, our method successfully identifies the right inharmonious swan, while most of the baselines struggle to identify it and even assign the inharmonious label to another, harmonious swan. In row 3, our method is capable of depicting the accurate boundary of the inharmonious region, while the other baselines suffer from shadow artifacts. In row 9, the input synthetic image is an extremely challenging scene, in which the inharmonious boat is surrounded by crowded boats.
Our method is powerful enough to distinguish the inharmonious boat from the background, whereas the other methods are unable to detect this region. These visualization results again demonstrate the superiority of our method. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/visualization.jpg} \caption{Qualitative comparison with baseline methods. ``GT" is the ground-truth inharmonious region mask. } \label{fig:visualization} \end{figure*} \section{The Impact of Color Transformation on Localization Results}\label{sec:color_local} In Figure~\ref{fig:transform_cmp}, we demonstrate the impact of our proposed color mapping module on the localization results. Here, ``DIRL" refers to the results of the DIRL~\cite{liang2021inharmonious} model trained directly on the RGB input. ``Retouched Image" is the output of the color mapping module. ``MadisNet" means that the results are derived from the DIRL model trained with the output of the color mapping module. From the localization results, we can see that in row 1, MadisNet removes the false alarm from which the original DIRL suffers. In rows 2 and 3, MadisNet localizes the inharmonious region correctly while the original DIRL is distracted by other misleading regions. The improvement of MadisNet is attributed to the color mapping module, which enlarges the domain discrepancy and makes it easier for the downstream detector to localize the inharmonious region. However, based on human perception, it may be arguable whether the domain discrepancy between the inharmonious region and the background is enlarged after color transformation (column 3 \emph{vs.} column 1). Here, we only provide some tentative interpretations as follows. In the retouched images, we can observe that the false alarm (the car in row 1) and the distractive objects (the giraffe in row 2 and the wood in row 3) are suppressed to some extent and the inharmonious region becomes more obtrusive. For example, in row 2, the distractive giraffe looks more similar to the middle giraffe while the inharmonious giraffe looks more different after transformation. In row 3, the distractive wood at the bottom right looks more harmonious with the wood on its left after transformation. \section{Results on Four Sub-datasets}\label{sec:four_datasets} The iHarmony4 \cite{cong2020dovenet} dataset consists of four sub-datasets: HCOCO, HFlickr, HAdobe5K, and HDay2night. We compare with the strongest baseline DIRL \cite{liang2021inharmonious} and report the results on the four sub-datasets in Table~\ref{tab:sub_datasets}. Among them, HDay2night is a challenging sub-dataset with real composite images and much fewer training images, leading to overall lower results. Nevertheless, our method outperforms DIRL significantly on all four sub-datasets. \section{Results for Multiple Inharmonious Regions} \label{sec:multiple_regions} In this paper, we mainly focus on one inharmonious region. However, in real-world applications, there could be multiple disjoint inharmonious regions in a synthetic image. Here, we demonstrate the ability of our method to identify multiple disjoint inharmonious regions, starting from constructing a usable dataset based on HCOCO. Specifically, in the HCOCO sub-dataset of iHarmony4 \cite{cong2020dovenet}, there exist some real images which have multiple paired synthetic images with different manipulated foregrounds. Based on the HCOCO test set, we composite 19,482 synthetic images with multiple foregrounds for evaluation, in which the number of foregrounds ranges from 2 to 9.
The results (AP, $F_1$, IoU) of DIRL and ours are (73.62, 0.5079, 37.05) and (77.39, 0.5761, 44.03), respectively. We also provide some visualization results in Figure~\ref{fig:multi_objects}, which show that our method can successfully localize multiple inharmonious regions. \bibliographystyle{aaai22}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Canonical algebras were introduced by C. M. Ringel \cite{Ringel:1984}. A \emph{canonical algebra} $\La$ of quiver type over a field $k$ is a quotient algebra of the path algebra of the quiver $Q$: $$\xymatrix @C +1.5pc @R-1.5pc{&\upcirc{\vx_1}\ar[r]^{\al_2^{(1)}}& \upcirc{2\vx_1}\ar[r]^{\al_3^{(1)}}&\cdots \ar[r]^{\al_{p_1-1}^{(1)}}&\upcirc{(p_1-1)\vx_1} \ar[rdd]^{\al_{p_1}^{(1)}}&\\ &\upcirc{\vx_2}\ar[r]^{\al_2^{(2)}}& \upcirc{2\vx_2}\ar[r]^{\al_3^{(2)}}&\cdots \ar[r]^{\al_{p_2-1}^{(2)}}&\upcirc{(p_2-1)\vx_2}\ar[rd]_{\al_{p_2}^{(2)}}&\\ \upcirc{{0}}\ar[ruu]^{\al_1^{(1)}} \ar[ru]_{\al_1^{(2)}} \ar[rdd]_{\al_1^{(t)}} &&&\vdots&&\upcirc\vc\\ && && &\\ &\upcirc{\vx_{t}}\ar[r]^{\al_2^{(t)}}& \upcirc{2\vx_{t}}\ar[r]^{\al_3^{(t)}}&\cdots \ar[r]^{\al_{p_{t}-1}^{({t})}}&\upcirc{(p_{t}-1)\vx_{t}}\ar[ruu]_{\al_{p_{t}}^{({t})}}&\\ }$$ modulo the ideal $I$ defined by the \emph{canonical relations} $$\al_{p_i}^{(i)}\dots\al_{2}^{(i)}\al_1^{(i)}=\al_{p_1}^{(1)}\dots \al_{2}^{(1)}\al_1^{(1)}+\lambda_i\al_{p_2}^{(2)}\dots \al_{2}^{(2)}\al_1^{(2)}\quad \text{for}\quad i=3,\dots,t,$$ where the $\la_i$ are pairwise distinct non-zero elements of $k$, called \emph{parameters}. The integers $p_i$ are at least $2$ and are called the \emph{weights}. Usually we assume that $k$ is algebraically closed, but for many results this is not necessary. The algebra $\La$ depends on a weight sequence $\pp=(p_1,\dots, p_t)$ and a sequence of parameters $\lala= (\la_2,\dots,\la_t)$. We can assume that $\la_2=0$ and $\la_3=1$. We write $\La=\La(\pp,\lala)$. Concerning the complexity of the module category over $\La$ there are three types of canonical algebras: domestic, tubular and wild. Recall that $\La$ is of wild type if and only if the Euler characteristic $\chi_\La= (2-t)+\sum_{i=1}^t1/{p_i}$ is negative. Denote by $Q_0$ the set of vertices and by $Q_1$ the set of arrows of the quiver $Q$. Then each finitely generated right module over $\La$ is given by finite-dimensional vector spaces $M_i$ for each vertex $i$ of $Q_0$ and by linear maps $M_\al:M_j\rightarrow M_i$ for the arrows $\al:i\rightarrow j$ of $Q_1$ such that the canonical relations are satisfied. We will usually identify the linear maps with matrices. We denote the category of all these modules by $\mod\La$. Our aim is to study the possible coefficients which can appear in the matrices of exceptional modules over wild canonical algebras. In many cases the matrices of special modules can be exhibited by $0$, $1$ matrices. This was shown by C. M. Ringel for exceptional representations of finite acyclic quivers \cite{Ringel:1998} and for indecomposable modules over representation-finite algebras, which is a result of P. Dr\"axler \cite{Draxler:2001}. In some special cases explicit $0$, $1$ matrices with few nonzero entries have been calculated, e.g. for indecomposable representations of Dynkin quivers by P. Gabriel \cite{Gabriel} and for indecomposable representations of representation-finite posets by M. Kleiner \cite{Kleiner} (see also a result of K. J. B\"ackstroem for orders over lattices \cite{Baeckstroem}). Among recent results we mention a paper of M. Grzecza, S. Kasjan and A. Mróz \cite{Grzecza:Kasjan:Mroz:2012}. The problem of determining matrices for indecomposable modules over canonical algebras has been solved in the domestic case.
In the case of a field of characteristic different from $2$, D. Kussin and the second author computed matrices having entries $0$, $\pm 1$ for all indecomposable modules, where the entry $-1$ appears only for very special regular modules \cite{Kussin:Meltzer:2007a}. Matrices of indecomposable modules over canonical algebras over an arbitrary field were described in \cite{Komoda:Meltzer:2008}. These results were used to determine matrices for exceptional representations of tame quivers \cite{Kussin:Meltzer:2007b}, \cite{Kedzierski:Meltzer:2011}. In the case of tubular canonical algebras it was shown in \cite{Meltzer:2007} that each exceptional module can be described by matrices having entries $0$, $\pm 1$ for the tubular types $(2,3,6)$, $(3,3,3)$, $(2,4,4)$, while for the weight type $(2,2,2,2)$ with a parameter $\la$ the entries $0$, $\pm 1$, $\pm \la$ and $1-\la$ appear. The proof uses universal extensions in the sense of K. Bongartz \cite{Bongartz}. Later P. Dowbor, A. Mróz and the second author developed an algorithm and a computer program for explicit calculations of matrices for exceptional modules over tubular canonical algebras \cite{Dowbor:Meltzer:Mroz:2010}. In general little is known about matrices of non-exceptional modules. However, in the case of tubular canonical algebras an algorithm for the computation of matrices of non-exceptional modules was developed in \cite{Dowbor:Meltzer:Mroz:2014bimodules}. Moreover, explicit formulas for these matrices were obtained in case the module is of integral slope \cite{Dowbor:Meltzer:Mroz:2014slope}. Recently the $0$, $1$ property was proved for exceptional objects in the category of nilpotent operators of vector spaces with one invariant subspace, where the nilpotency degree is bounded by $6$ \cite{Dowbor:Meltzer:Schmidmeier}, and for exceptional objects in the category of nilpotent operators of vector spaces with two incomparable invariant subspaces, where the nilpotency degree is bounded by $3$ \cite{Dowbor:Meltzer:2018}. Both problems are of tubular type and are related to the Birkhoff problem \cite{Birkhoff} and to recent results on stable vector space categories \cite{Kussin:Lenzing:Meltzer:2013a}, \cite{Kussin:Lenzing:Meltzer:2013b}, \cite{Kussin:Lenzing:Meltzer:2018}. The aim of this paper is to present the following result. \begin{mainthm} Let $\La=\La(\pp,\lala)$ be a wild canonical algebra of quiver type, with $\lala=(\la_2,\cdots,\la_t)$. Then \textquotedbl almost all\textquotedbl{} exceptional $\La-$modules can be exhibited by matrices involving as coefficients $\la_i-\la_j$, where $2\leq i,j\leq t$. \end{mainthm} The notion \textquotedbl almost all\textquotedbl{} means that in every $\tauX-$orbit of exceptional modules, from a certain place to the right, all modules have the expected matrices. We strongly believe that the theorem holds for all exceptional $\La-$modules, but the proof of this fact needs additional arguments. The theorem will be shown by induction on the rank of a module. Recall that matrices for modules of rank $0$ and $1$ are known \cite{Kussin:Meltzer:2007a}, \cite{Meltzer:2007}.
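Let us already record the entries occurring in these small-rank cases (see Section \ref{sec:samll_rank} below): they are $$0,\qquad \pm 1,\qquad \pm\la_i,\qquad \la_a-\la_b\quad (a,b\geq 2),$$ and with the normalization $\la_2=0$, $\la_3=1$ all of them can be written in the form $\la_i-\la_j$, e.g. $1=\la_3-\la_2$ and $\la_i=\la_i-\la_2$, which matches the statement of the Main Theorem.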
Next, by Schofield induction \cite{Schofield}, each exceptional $\La-$module $M$ of rank greater than or equal to $2$ can be obtained as the central term of a non-split sequence $$(\star)\quad 0\lra Y^{\oplus v}\lra M\lra X^{\oplus u}\lra 0,$$ where $(X,Y)$ is an orthogonal exceptional pair in the category $\coh\XX$ of coherent sheaves over the weighted projective line $\XX$ corresponding to $\La$, and $(u,v)$ is the dimension vector of an exceptional representation of the generalized Kronecker algebra having $\dim_k\ExtX XY$ arrows \cite{Kedzierski:Meltzer:2013}. Consequently, like C. M. Ringel in \cite{Ringel:1998}, we will study the category $\Ff(X,Y)$ which consists of all middle terms of short exact sequences $(\star)$ for $u,v\in\NN_0$. This category is equivalent to the module category of a generalized Kronecker algebra. Finally, using an alternative description of extension spaces, we will assign coefficients for exceptional modules over wild canonical algebras. The result is part of the PhD thesis of the first author at Szczecin University in 2017. The authors are thankful to C. M. Ringel for helpful discussions concerning the paper \cite{Ringel:1998}. \section{Notations and basic concepts} We recall the concept of a weighted projective line in the sense of Geigle-Lenzing \cite{Geigle:Lenzing:1987} associated to a canonical algebra $\La=\La(\pp,\lala)$. Let $\LL=\LL(\pp)$ be the rank one abelian group with generators $\vx_1,\dots,\vx_t$ and relations $p_1\vx_1=\cdots=p_t\vx_t=:\vc$, where $\vc$ is called the \emph{canonical element}. Moreover, each element $\vy$ of $\LL$ can be written in \emph{normal form} $\vy=a\vc+\sum_{i=1}^t a_i\vx_i$ with $a\in\ZZ$ and $0\leq a_i<p_i$. The polynomial algebra $k[x_1,\dots,x_t]$ is $\LL-$graded, where the degree of $x_i$ is $\vx_i$. Because the polynomials $f_i=x^{p_i}_i-x_1^{p_1}-\la_ix_2^{p_2}$ for $i=3,\dots,t$ are homogeneous, the quotient algebra $S=k[x_1,\dots,x_t]/\langle f_i\mid i=3,\dots,t\rangle$ is also $\LL-$graded. A \emph{weighted projective line} $\XX$ is by definition the projective spectrum of the $\LL-$graded algebra $S$. The category of coherent sheaves over $\XX$ will be denoted by $\coh\XX$. In other words, the category of coherent sheaves $\coh\XX$ is the Serre quotient $\mathrm{mod}^{\LL}(S)/ \mathrm{mod}^{\LL}_0 (S)$, where $\mathrm{mod}^{\LL} (S)$ is the category of finitely generated $\LL$-graded modules over $S$ and $\mathrm{mod}^{\LL}_0 (S)$ is the subcategory of modules of finite length. It is well known that each indecomposable sheaf in $\coh\XX$ is a locally free sheaf, called a \emph{vector bundle}, or a sheaf of finite length. Denote by $\vect\XX$ (resp. $\cohnull\XX$) the category of vector bundles (resp. finite length sheaves) on $\XX$. The category $\coh\XX$ is a $\mathrm{Hom}-$finite, abelian $k-$category. Moreover, it is hereditary, which means that $\Ext i\XX --=0$ for $i\geq 2$, and it has Serre duality in the form $\ExtX FG\iso D\HomX G{\tauX F}$, where the Auslander-Reiten translation $\tauX$ is given by the shift $F\mapsto F(\vw)$ with $\vw:=(t-2)\vc-\sum_{i=1}^t\vx_i$ the \emph{dualizing element}; equivalently, the category $\coh\XX$ has Auslander-Reiten sequences. Moreover, there is a tilting object $T$ composed of line bundles with $\EndX T=\La$, and it induces an equivalence of bounded derived categories $\mathcal{D}^b(\coh\XX)\stackrel{\iso}{\lra}\mathcal{D}^b(\mod\La)$.
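To illustrate these notions, consider for instance the weight type $\pp=(2,3,7)$ (this particular choice serves only as an example). Here $$\LL(\pp)=\langle \vx_1,\vx_2,\vx_3 \mid 2\vx_1=3\vx_2=7\vx_3=:\vc\rangle,\qquad \vw=\vc-\vx_1-\vx_2-\vx_3,$$ and the Euler characteristic from the introduction equals $\chi_\La=(2-3)+\tfrac{1}{2}+\tfrac{1}{3}+\tfrac{1}{7}=-\tfrac{1}{42}<0$, so the corresponding canonical algebra is indeed of wild type.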
For coherent sheaves there are the well-known invariants \emph{rank}, \emph{degree} and \emph{determinant}, which correspond to linear forms $\rk, \deg : K_0(\XX)\lra\ZZ$ and $\det:K_0(\XX)\lra \LL(\pp)$, again called rank, degree and determinant. Recall that a coherent sheaf $E$ over $\XX$ is called \emph{exceptional} if $\ExtX EE=0$ and $\EndX E$ is a division ring; in case $k$ is algebraically closed the latter means that $\EndX E=k$. A pair $(X,Y)$ in $\coh\XX$ is called \emph{exceptional} if $X$ and $Y$ are exceptional and $\HomX YX=0=\ExtX YX$. Finally, an exceptional pair is \emph{orthogonal} if additionally $\HomX XY=0$. The \emph{rank} of a $\La-$module is defined as $\rk M:= \dim_k M_{0}-\dim_k M_{\vc}$. The rank of a module in this sense equals the rank of the corresponding sheaf in the geometric meaning. We denote by $\modplus\La$ (respectively $\modminus\La$ or $\modzero\La$) the full subcategory consisting of all $\La-$modules whose indecomposable direct summands have positive (respectively negative or zero) rank. Similarly, by $\cohplus\XX$ (resp. $\cohminus\XX$) we denote the full subcategory of all vector bundles over $\XX$ on which the functor $\ExtX T-$ (resp. $\HomX T-$) vanishes. Under the equivalence $\mathcal{D}^b(\coh\XX)\stackrel{\iso}{\lra}\mathcal{D}^b(\mod\La)$ \begin{itemize} \item $\cohplus\XX$ corresponds to $\modplus\La$ by means of $E\mapsto \HomX TE$, \item $\cohnull\XX$ corresponds to $\modzero\La$ by means of $E\mapsto \HomX TE$, \item $\cohminus\XX[1]$ corresponds to $\modminus\La$ by means of $E[1]\mapsto \ExtX TE$, where $[1]$ denotes the suspension functor of the triangulated category $D^b(\coh\XX)$. \end{itemize} For simplicity we will often identify a sheaf $E$ in $\cohplus\XX$ or $\cohnull\XX$ with the corresponding $\La-$module $\HomX TE$. \section{Exceptional modules of small rank}\label{sec:samll_rank} First, we fix some matrix notation. For a natural number $n$ we denote by $I_n$ the identity matrix of degree $n$. For natural numbers $n$ and $k$ we denote by $X_{n+k,n}$ and $Y_{n+k,n}$ the following matrices: $$X_{n+k,n}:=\left[\begin{array}{ccc} &&\\ &I_n&\\ &&\\ \hline 0&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&0\\ \end{array}\right]\in M_{n+k,n}(k),\quad Y_{n+k,n}:=\left[\begin{array}{ccc} 0&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&0\\ \hline &&\\ &I_n&\\ &&\\ \end{array}\right]\in M_{n+k,n}(k).$$ A $\La-$module of rank zero is called \emph{regular}. It is well known that the Auslander-Reiten quiver of the regular $\La-$modules consists of a family of orthogonal regular tubes with $t$ exceptional tubes $\Tt_1,\dots, \Tt_t$ of rank $p_1,\dots, p_t$, respectively, while the other tubes are homogeneous. Moreover, an exceptional regular module lies in an exceptional tube and its quasi-length is less than the rank of the tube. We will use the description from \cite{Kussin:Meltzer:2007a} for the indecomposable regular modules. However, we will only quote the shape of the exceptional ones which lie in the tube $\Tt_i$ for $i \in \{3,...,t \}$. For the tubes $\Tt_1$ and $\Tt_2$ the description is similar. Following the notation from \cite{Kussin:Meltzer:2007a} we denote a regular module by $S_a^{[l]}$, where $l$ is the quasi-length of $S_a^{[l]}$ and $a$ indicates the position on the corresponding floor of the tube. For an exceptional module $S_a^{[l]}$ the quasi-length satisfies $l< p_i$, and so all vector spaces of $S_a^{[l]}$ are zero- or one-dimensional.
There are $3$ cases: \begin{itemize} \item[$(1)$] $1\leq a<p_i\quad\textnormal{and}\quad 0<l<p_i-a$, \item[$(2)$] $1\leq a<p_i\quad\textnormal{and}\quad p_i-a<l<p_i$, \item[$(3)$] $a=p_i\quad\textnormal{and}\quad 0<l<p_i$. \end{itemize} {\bf Case $(1)$.} Then $S_a^{[l]}$ has the form $$\xymatrix @R -15pt {& 0\ar[ldd] & 0\ar[l] && \ar[ll]\cdots& &\ar[ll]0& \ar[l]0 & \\ & 0\ar[ld] & 0\ar[l] && \ar[ll]\cdots& &\ar[ll]0& \ar[l]0 &\\ 0 && & & &&&&0, \ar[ld] \ar[ul]\ar[uul]\ar[ddl]\\ & \cdots\ar[lu] &\ar[l]0 &\ar[l] k & \cdots\ar[l]_-{\id} &\ar[l]_-{\id} k&\ar[l]0 & \cdots\ar[l] \\ & 0\ar[luu] & 0\ar[l] && \ar[ll]\cdots& &\ar[ll]0& \ar[l]0 & \\ }$$ where $0\longleftarrow k$ and $k\longleftarrow 0$ correspond to the arrows $\al_a^{(i)}$ and $\al_{a+l}^{(i)}$ in the $i-$th arm. {\bf Case $(2)$.} Let $s:=l-(p_i-a)$. Then $S_a^{[l]}$ is of the form $$\xymatrix @R -15pt{& k\ar[ldd]_-{-\la_i} & k\ar[l]_-{\id} && \ar[ll]_-{\id} \cdots& &\ar[ll]_-{\id} k& \ar[l]_-{\id} k & \\ & k\ar[ld]^-{\la_2-\la_i} & k\ar[l]_-{\id} && \ar[ll]_-{\id} \cdots& &\ar[ll]_-{\id} k& \ar[l]_-{\id} k &\\ k && & & &&&&k, \ar[ld]_-{\id} \ar[ul]^-{\id}\ar[uul]_-{\id}\ar[ddl]^-{\id}\\ & \cdots\ar[lu]_-{\id} &\ar[l]_-{\id} k &\ar[l] 0 & \cdots\ar[l] &\ar[l]0 &\ar[l]k & \cdots\ar[l]_-{\id} \\ & k\ar[luu]^-{\la_t-\la_i} & k\ar[l] && \ar[ll]\cdots& &\ar[ll]k& \ar[l]k & \\ }$$ where $k\longleftarrow 0$ and $0\longleftarrow k$ correspond to the arrows $\al_s^{(i)}$ and $\al_{a}^{(i)}$ in the $i-$th arm. {\bf Case $(3)$.} Then $S_a^{[l]}$ is of the form $$\xymatrix @R -15pt{& k\ar[ldd]_-{-\la_i} & k\ar[l]_-{\id} && \ar[ll]_-{\id} \cdots& &\ar[ll]_-{\id} k& \ar[l]_-{\id} k & \\ & k\ar[ld]^-{\la_2-\la_i} & k\ar[l]_-{\id} && \ar[ll]_-{\id} \cdots& &\ar[ll]_-{\id} k& \ar[l]_-{\id} k &\\ k && & & &&&&k, \ar[ld]_-{\id} \ar[ul]^-{\id} \ar[uul]_-{\id} \ar[ddl]^-{\id}\\ & \cdots\ar[lu]_-{\id} &\ar[l]_-{\id} k &\ar[l] 0 & \cdots\ar[l] & &\ar[ll]0 & k\ar[l] \\ & k\ar[luu]^-{\la_t-\la_i} & k\ar[l]_-{\id} && \ar[ll]_-{\id}\cdots& &\ar[ll]_-{\id} k& \ar[l]_-{\id} k & \\ }$$ where $k\longleftarrow 0$ and $0\longleftarrow k$ correspond to the arrows $\al_l^{(i)}$ and $\al_{p_i}^{(i)}$ in the $i-$th arm. For $\La-$modules of rank one there is the following characterization. \begin{Prop}[\cite{Meltzer:2007}] \label{thm:case:rank:one} Let $\La$ be a canonical algebra of quiver type and of arbitrary representation type and $M$ an exceptional $\La$-module of rank $1$. Then $M$ is isomorphic to one of the following modules. $$\xymatrix @C -1.8pc @R -15pt{ &&\ar[lldd]&\cdots&&&\ar[lll] M_{r_1\vx_1} && \ar[ll] M_{(r_1+1)\vx_1} && \ar[ll]\cdots&&\\ &&\ar[lld]&\cdots&&\ar[ll] M_{r_2\vx_2} & & \ar[ll] M_{(r_2+1)\vx_2} & & \ar[ll]\cdots&&\\ k^{n+1}&&& & & & &\cdots & &&&&&\ar[llluu] \ar[llllu] k^n \ar[llldd] \ar[lllld]\\ &&\ar[llu]&\cdots&&\ar[ll] M_{r_{t-1}\vx_{t-1}} & & \ar[ll] M_{(r_{t-1}+1)\vx_{t-1}} & &\ar[ll]\cdots&&\\ &&\ar[lluu]&\cdots&&&\ar[lll] M_{r_t\vx_t} & &\ar[ll] M_{(r_t+1)\vx_t} & &\ar[ll]\cdots&&\\ }$$ where $r_i$ is an integer such that $0\leq r_i<p_i$ for each $i=1,2,\dots,t$ and \begin{itemize} \item $M_{s\vx_i}=\left\{\begin{array}{ccl} k^{n+1} &\text{for} &0\leq s\leq r_i\\ k^{n} &\text{for} &r_i< s\leq p_i\\ \end{array}\right. $ \end{itemize} Further, the matrices of $M$ are given as follows: \begin{itemize} \item $M_{\al_{s}^{(i)}}=\left\{\begin{array}{ccl} I_{n+1} &\text{for}& 1<s<r_i\\ I_n &\text{for}& r_i<s\leq p_i \end{array}\right.\quad\text{for}\quad i=1,2,\dots,t.$
\item $M_{\al_{r_1}^{(1)}}=X_{n+1,n}$ \item $M_{\al_{r_2}^{(2)}}=Y_{n+1,n}$ \end{itemize} and for $i=3,\dots, t$ we distinguish two cases:\\ a) $r_i=0$ \begin{itemize} \item $M_{\al_{1}^{(i)}}=\left[\begin{array}{cccc} 1&0&\cdots& 0\\ \la_i&1&\cdots&0\\ \vdots&\ddots&\ddots&\vdots \\ 0&0&\cdots&1\\ 0&0&\cdots& \la_i\\ \end{array}\right]\in M_{n+1,n}(k).$ \end{itemize} b) $r_i>0$ \begin{itemize} \item $M_{\al_{1}^{(i)}}=\left[\begin{array}{ccccc} 1&0&\cdots&0& 0\\ \la_i&1&\cdots&0&0\\ \vdots&\ddots&\ddots&\vdots &\vdots\\ 0&0&\cdots&1&0\\ 0&0&\cdots&\la_i&1\\ \end{array}\right]\in M_{n+1}(k)$,\qquad $\bullet$ $M_{\al_{r_i}^{(i)}}=X_{n+1,n}.$ \end{itemize} \end{Prop} \section{Schofield induction from sheaves to modules} Let $M$ be an exceptional object in $\modplus\La$ of rank greater than or equal to $2$. Then there is a short exact sequence $$ (\star)\quad 0\lra Y^{\oplus v} \lra M \lra X^{\oplus u}\lra 0,$$ where $(X,Y)$ is an orthogonal exceptional pair in the category $\coh\XX$ such that $\rk X< \rk M $ and $\rk Y< \rk M $, and $(u,v)$ is the dimension vector of an exceptional representation of the generalized Kronecker algebra given by the quiver $\Theta(n)$: $$\xymatrix {1 \ar @/^1pc/[rr]^{\al_1} \ar @{}[rr]|{\vdots} \ar @/_1pc/[rr]_{\al_n} &&2,}$$ with $n:=\dim_k\ExtX XY$ arrows. This result is called Schofield induction \cite{Schofield} and was applied by C. M. Ringel in the situation of exceptional representations over finite acyclic quivers, hence of hereditary algebras \cite{Ringel:1998}. In the case that the rank of $X$ or $Y$ is at least $2$, we can apply Schofield induction again, and as a result we receive the following sequences $$0\lra Y_2^{\oplus v_2}\lra Y\lra X_2^{\oplus u_2}\lra 0 \quad \text{and}\quad 0\lra Y_3^{\oplus v_3}\lra X\lra X_3^{\oplus u_3}\lra 0.$$ Because the rank of the sheaves decreases with each successive use of Schofield induction, after a finite number of steps we receive pairs of exceptional sheaves of rank $0$ or $1$. This situation is illustrated by the following tree-shaped diagram. \begin{equation} \label{eq:induction_tree} \xymatrix @C -2pc @R-1pc{ &&&&&&&M\ar @{->}[lllld]\ar @{->}[rrrrd]\\ &&& Y\ar @{->}[lld]\ar @{->}[rrd] &&&&&&&& X\ar @{->}[lld]\ar @{->}[rrd] \\ & Y_2\ar @{-->}[ldd]\ar @{-->}[rdd]& & & & X_2\ar @{-->}[ld]\ar @{-->}[rd]& & & & Y_3\ar @{-->}[ldd]\ar @{-->}[rdd]& & & & X_3\ar @{-->}[lddd]\ar @{-->}[rddd]\\ & && &Y_{n_2}& &X_{n_2}& & & && & & &\\ Y_{n_1}& &X_{n_1}& && && & Y_{n_3}& &X_{n_3}& & & &\\ & && && && & & && & Y_{n_4}& &X_{n_4}\\ } \end{equation} Applying the functor $\ExtX T-$ to the exact sequence $(\star)$ we see that if $M$ is a $\La-$module, then each sheaf $X_{i_n}$ such that there is a path from $M$ to $X_{i_n}$ in the tree \eqref{eq:induction_tree} is also a $\La-$module. However, we do not know whether the sheaves $Y_*$ are $\La-$modules. The following lemma will allow us, by using the $\tauX$-translation, to shift the tree \eqref{eq:induction_tree} so that all its components become $\La-$modules. \begin{Lem} \label{lem:translation_line_bundle} Let $\{L_1,\dots,L_m\}$ be a family of line bundles over $\XX$. Then there is a natural number $N$ such that $\ExtX T{\tauX^n L_j}=0$ for $j=1,\dots,m$ and for all $n>N$. \end{Lem} \begin{pf} Let $L_j=\Oo\left( a_{j}\vc+\sum_{i=1}^{t}b_{j,i}\vx_i\right)$, where $a_j\in\ZZ$, $0\leq b_{j,i}\leq p_i-1$ for $j=1,\dots,m$ and $i=1,\dots,t$.
We put $N:=\max\Big\{\big\lfloor (1-a_j)(t-2)\big\rfloor+1\mid {1\leq j\leq m}\Big\}$. Then $\vc+\vw-\det \tauX^n L_j=\big(1-a_j-(n-1)(t-2)\big)\vc+\sum_{i=1}^t\big(n-1-b_{j,i} \big)\vx_i<0$ for all $n>N$ and $j=1,\dots,m$. Therefore, by Serre duality, $$(\triangle)\quad\ExtX {\Oo(\vc)}{\tauX^n L_j}\iso D\HomX {\tauX^n L_j}{\Oo(\vc+\vw)}=0 \quad \text{for}\quad j=1,\dots,m.$$ We have to show that if $n>N$, then $\ExtX{\Oo(\vx)}{\tauX^n L_j}=0$ for $0\leq\vx <\vc$ and $j=1,\dots,m$. Suppose that $\ExtX{\Oo(\vx)}{\tauX^nL_j}\neq 0$ for some $0\leq\vx<\vc$. Then using Serre duality we get $\det \tauX^nL_j\leq \vx+\vw$. Because $\vx+\vw\leq \vc+\vw$, it follows that $\det \tauX^nL_j\leq \vc+\vw$ and $\ExtX {\Oo(\vc)}{\tauX^n L_j}\iso D\HomX {\tauX^n L_j}{\Oo(\vc+\vw)}\neq 0$, which contradicts $(\triangle)$. \end{pf} Immediately from the lemma above we obtain the following corollary. \begin{Cor} \label{cor:translation_induction_tree} There is a natural number $N$ such that for $n>N$ all components of the tree \eqref{eq:induction_tree} shifted by $\tauX^n$ are $\La-$modules. \end{Cor} \begin{pf} First, note that if the sheaves $X$ and $Y$ in the sequence $(\star)$ are $\La-$modules, then the middle term $M$ is a $\La-$module. Next, because there are no nonzero morphisms from finite length sheaves to vector bundles, each finite length sheaf is a $\La-$module. Let $\Ll=\{L_i\}_{i\in I}$ be the set of all line bundles appearing in the tree \eqref{eq:induction_tree}. By Lemma \ref{lem:translation_line_bundle} applied to the family $\Ll$, there is a natural number $N$ such that for all natural numbers $n>N$ the line bundles $\tauX^nL_i$ are $\La-$modules for $i\in I$. So the vector bundles in the penultimate parts of the tree \eqref{eq:induction_tree} are also $\La-$modules. Moving up from the bottom, we get that all sheaves in the $\tauX^n$-shifted tree are $\La-$modules. \end{pf} \section{Description of extension spaces} Let $(X,Y)$ be an orthogonal exceptional pair in the category $\coh\XX$; this means that $\HomL XY=0=\HomL YX$, $\ExtL YX=0$ and $\ExtL XY=k^n$ is a nonzero space. Assume further that both sheaves $X$ and $Y$ in the sequence $(\star)$ are $\La-$modules. We consider the category $\Ff(X,Y)$ consisting of all right $\La-$modules $M$ that appear as the middle term of a short exact sequence $$0\lra Y^{\oplus v}\lra M\lra X^{\oplus u}\lra 0\quad\text{for some}\quad v,u\in \mathbb{N}_0.$$ It is well known that the category $\Ff(X,Y)$ is abelian and has only two simple objects $X$ and $Y$, where the first one is an injective simple and the second one is a projective simple \cite{Ringel:1976}. Acting like C. M. Ringel in the situation of modules over hereditary algebras \cite{Ringel:1998}, we show that the problem of classifying the objects in the category $\Ff(X,Y)$ can be reduced to the classification of the modules over the generalized Kronecker algebra with $n$ arrows. To do so, let $\eta_1$, \dots , $\eta_n$ be a basis of the vector space $\ExtL XY$. Thus we have short exact sequences $$ \eta_i:\quad 0\lra Y\lra Z_i\lra X \lra 0\quad \text{for}\quad i=1,2,\dots, n. $$ From the ``pull-back'' construction there is a commutative diagram $$\xymatrix{0\ar[r] & Y^{\oplus n} \ar[r]\ar @{=}[d] & Z \ar[r]\ar[d] & X\ar[r] \ar[d]^{[1_X,\cdots,1_X]^T}& 0\\ 0\ar[r] & Y^{\oplus n} \ar[r] & \bigoplus_{i=1}^nZ_i \ar[r] & X^{\oplus n}\ar[r] & 0},$$ where the upper sequence is a universal extension and $Z$ is an exceptional projective object in $\Ff(X,Y)$.
In addition, the projective module $Y\oplus Z$ is a progenerator of $\Ff(X,Y)$. Therefore the functor $\HomX{Y\oplus Z}-$ induces an equivalence between the category $\Ff(X,Y)$ and the category of modules over the endomorphism algebra $\End\La{Y\oplus Z}$, which is isomorphic to the generalized Kronecker algebra $k\Theta(n)$, where $n:=\dim_k\ExtL XY$. Now we need a more precise description of the above equivalence. Recall from \cite{Ringel:1998} the following description of the extension space between two quiver representations $X$ and $Y$. Let $C^0(X,Y)$ and $C^1(X,Y)$ be the vector spaces defined as follows: \begin{equation}\nonumber \begin{split} C^0(X,Y):=&\bigoplus_{0\leq \vx\leq \vc}\Hom {k}{X_\vx}{Y_\vx}, \\ C^1(X,Y):=&\bigoplus_{\al:\vx \to \vy}\Hom {k}{X_{\vy} }{Y_{\vx}}, \end{split} \end{equation} and let $\delta_{X,Y}:C^0(X,Y)\lra C^1(X,Y)$ be the linear map defined by $$\delta_{X,Y}\left(\big[f_\vx\big]_{0\leq \vx\leq \vc}\right)=\left[f_{\vy}X_{\al}- Y_{\al} f_{\vz}\right]_{\al:\vy\to \vz},$$ where $\al$ ranges over the set $Q_1$. For a path algebra $kQ$ the map $\delta_{X,Y}:C^0(X,Y)\lra C^1(X,Y)$ also gives a useful description of the extension space of $kQ-$modules \cite{Ringel:1998}. Indeed, in this case there is a $k-$linear isomorphism $$\Ext 1{kQ}XY\iso C^1(X,Y)/\im (\delta_{X,Y}).$$ For modules over a canonical algebra $\La=\La(\pp,\lala)$ we must additionally consider the canonical relations of the algebra $\La$. For this we take the subspace $U(X,Y)$ of $C^1(X,Y)$ containing all $\left[f_\al\right]_{\al\in Q_1} $ satisfying the following equations: \begin{equation}\nonumber \label{eq:condition_U_X_Y} \begin{split} &Y_{\omega_{1,p_i-1}^{(i)}}f_{\al_{p_i}^{(i)}} +Y_{\omega_{1,p_i-2}^{(i)}}f_{\al_{p_i-1}^{(i)}}X_{\al_{p_i}^{(i)}} +\cdots + Y_{\al_1^{(i)}}f_{\al_{2}^{(i)}}X_{\omega_{3,p_i}^{(i)}}+ f_{\al_{1}^{(i)}}X_{\omega_{2,p_i}^{(i)}}\\ =&Y_{\omega_{1,p_1-1}^{(1)}}f_{\al_{p_1}^{(1)}} +Y_{\omega_{1,p_1-2}^{(1)}}f_{\al_{p_1-1}^{(1)}}X_{\al_{p_1}^{(1)}} +\cdots + Y_{\al_1^{(1)}}f_{\al_{2}^{(1)}}X_{\omega_{3,p_1}^{(1)}}+ f_{\al_{1}^{(1)}}X_{\omega_{2,p_1}^{(1)}}+\\ +&\lambda_i\left( Y_{\omega_{1,p_2-1}^{(2)}}f_{\al_{p_2}^{(2)}}\right. +\left. Y_{\omega_{1,p_2-2}^{(2)}}f_{\al_{p_2-1}^{(2)}}X_{\al_{p_2}^{(2)}} +\cdots + Y_{\al_1^{(2)}}f_{\al_{2}^{(2)}}X_{\omega_{3,p_2}^{(2)}}+ f_{\al_{1}^{(2)}}X_{\omega_{2,p_2}^{(2)}} \right)\\ &\text{for}\quad i=3,4,...,t. \end{split} \end{equation} \begin{Lem}[\cite{Meltzer:2007}] \label{lem:przedstawienie_Ext} $\Ext 1{\Lambda}XY\iso U(X,Y)/\im(\delta_{X,Y})$. \end{Lem} We recall the definition of the isomorphism above. Choosing bases of the spaces $M_\vx$, we can assume that for each arrow $\al:i\lra j$ the corresponding map $M_{\al}: M_j\lra M_i$ has the shape $\left[\begin{array}{c|c} Y_\al&\varphi_\al\\ \hline 0&X_\al\\ \end{array}\right]$. Then an isomorphism $\phi:\ExtL XY\lra U(X,Y)/\im(\delta_{X,Y})$ is given by the formula $M=(M_i,M_\al)\stackrel{\phi}{\mapsto} (\varphi_\al)_{\al\in Q_1}+\im(\delta_{X,Y})$. Now we can describe the $\La-$modules contained in $\Ff(X,Y)$, using the matrices of $X$, $Y$ and the representation of the quiver $\Theta(n)$ which corresponds to the module $M$. Each module in $\Ff(X,Y)$ can be identified with an element of the extension space $\ExtL{X^{\oplus u}}{Y^{\oplus v}}$.
Because $X^{\oplus u}= X\otimes k^u$ and $Y^{\oplus v}= Y\otimes k^v$, the space $\ExtL{X^{\oplus u}}{Y^{\oplus v}}=\ExtL{X\otimes k^u}{Y\otimes k^v}$ is given by the map $\delta_{X\otimes k^u,Y\otimes k^v}$, where the tensor product is taken over the field $k$. In this situation the vector space $C^1(X\otimes k^u,Y\otimes k^v)=C^1(X,Y)\otimes\Hom {k}{k^u}{k^v}$ and also $U(X\otimes k^u,Y\otimes k^v)=U(X,Y)\otimes\Hom {k}{k^u}{k^v}$. Therefore, from Lemma \ref{lem:przedstawienie_Ext} and from the commutativity of the following diagram $$\xymatrix{C^0(X\otimes k^u,Y\otimes k^v)\ar @{=}[d]\ar[rr]^-{\delta_{X\otimes k^u,Y\otimes k^v}}&&U(X\otimes k^u,Y\otimes k^v)\ar @{=}[d] \\ C^0(X,Y)\otimes\Hom {k}{k^u}{k^v}\ar[rr]^-{\delta_{X,Y}\otimes 1_{\Hom {}{k^u}{k^v}}} && U(X,Y)\otimes\Hom {k}{k^u}{k^v} }$$ we obtain that $\ExtL{X\otimes k^u}{Y\otimes k^v}\iso \ExtL XY\otimes \Hom {k}{k^u}{k^v}$. Let $\phi_1,\dots,\phi_d$ be a basis of the space $U(X,Y)$, ordered so that $\phi_1+\im(\delta_{X,Y}) ,\dots, \phi_n+\im(\delta_{X,Y})$ form a basis of $\ExtL XY$. Now any element in $\ExtL{X^{\oplus u}}{Y^{\oplus v}}$ is given by an expression $\sum\limits_{k=1}^n\left(f_\al^{(k)}\otimes A_k\right)$, where $A_k\in\Hom k{k^u}{k^v}$ and $\phi_k=\left[f^{(k)}_\al\right]_{\al\in Q_1}$. Therefore an exceptional $\La-$module $M$ that appears in the sequence $0\lra Y^{\oplus v}\lra M\lra X^{\oplus u}\lra 0$ has the form $$M=\left(Y_{\vx}^{\oplus v}\oplus X_{\vx}^{\oplus u}, \left[\begin{array}{c|c} Y^{\oplus v}_\al&\varphi_\al\\ \hline 0&X^{\oplus u}_\al\\ \end{array}\right] \right)_{0\leq \vx\leq \vc,\ \al\in Q_1},\quad\text{where}\quad\varphi_\al=\sum\limits_{m=1}^n \left(f_{\al}^{(m)}\otimes A_m\right),$$ for an exceptional $\Theta(n)-$representation $\xymatrix {k^v &&\ar @/^1pc/[ll]^{A_n} \ar @{}[ll]|{\vdots} \ar @/_1pc/[ll]_{A_1} k^u}$ of the generalized Kronecker algebra. An explicit basis for the subspace $U(X,Y)$ will be constructed in the next section. Now we will focus on exceptional modules over the generalized Kronecker algebra. The exceptional modules in this case are known: they are preprojective or preinjective and can be exhibited by matrices having only $0$ and $1$ entries \cite{Ringel:1998}. For recent results concerning modules over the generalized Kronecker algebra we refer to \cite{Ringel:2013}, \cite{Ringel:2018}, \cite{Ringel:2016}, \cite{Weist}. \begin{Lem}\label{lem:Kronecker_niezerowe_wyrazy} Let $V=\xymatrix {k^v &&\ar @/^1pc/[ll]^{A_n} \ar @{}[ll]|{\vdots} \ar @/_1pc/[ll]_{A_1} k^u}$ be an exceptional representation of the quiver $\Theta(n)$ and let $A_m=\left[a_{i,j}^{(m)}\right]$ for $m=1,\dots,n$. Then for each pair of natural numbers $(i,j)$ there is at most one index $m$ such that the coefficient $a_{i,j}^{(m)}$ of the matrix $A_m$ is non-zero. \end{Lem} \begin{pf} We will use the description of the extension space to show that if for two matrices $A_1$ and $A_2$ of an exceptional representation $V$ of $\Theta(n)$ non-zero coefficients appear in the same row and the same column, then $\Ext 1{k\Theta(n)}VV\neq 0$. Consider the map $\delta=\delta_{V,V}:C^0(V,V)\lra C^1(V,V)$, where $C^0(V,V)=\Hom k{k^v}{k^v}\bigoplus \Hom k {k^u}{k^u}$ and $C^1(V,V)=\bigoplus_{m=1}^n\Hom k{k^u}{k^v}$. Then for $(f,g) \in C^0(V,V)$ we have $\delta(f,g)=\bigoplus_{m=1}^n(f A_m-A_m g)$. The vector space $C^0(V,V)$ has a basis of the form $(e_{i,j}^v,0)$ for $1\leq i,j\leq v$, and $(0,e_{i,j}^u)$ for $1\leq i,j\leq u$, where $e_{i,j}^*$ is an elementary matrix with one non-zero element (equal to $1$) in the $(i,j)-$place.
Because $A_m=\left[a_{i,j}^{(m)}\right]$ for $m=1,\dots,n$, the image $\im(\delta)$ is generated by the elements \begin{equation}\nonumber \begin{split} \delta(e_{i,j}^v,0)&=\bigoplus_{m=1}^n e_{i,j}^vA_m=\bigoplus_{m=1}^n\left[\begin{array}{ccccccc} 0&\cdots&0&a_{1,j}^{(m)}&0&\cdots&0\\ \vdots&&\vdots&\vdots&\vdots&&\vdots\\ 0&\cdots&0&a_{v,j}^{(m)}&0&\cdots&0\\ \end{array}\right]\\ \delta(0,e_{i,j}^u)&=\bigoplus_{m=1}^n A_me_{i,j}^u=\bigoplus_{m=1}^n\left[\begin{array}{cccc} 0&0&\cdots &0\\ \vdots&\vdots& &\vdots\\ 0&0&\cdots &0\\ a_{i,1}^{(m)}&a_{i,2}^{(m)}&\cdots&a_{i,u}^{(m)}\\ 0&0&\cdots &0\\ \vdots&\vdots& &\vdots\\ 0&0&\cdots &0\\ \end{array}\right]. \end{split} \end{equation} Without loss of generality we can assume that $a_{1,1}^{(m)}\neq 0$ for $m=1,2$. Then the element $x=e_{1,1}\oplus 0\oplus\cdots\oplus 0$ belongs to $C^1(V,V)$ and $x\notin\im(\delta)$. Therefore $\Ext 1{k\Theta(n)}VV\iso C^1(V,V)/\im(\delta)\neq 0$. \end{pf} \section{A construction of a basis for $U(X,Y)$} Let $\La$ be a canonical algebra of type $\pp=(p_1,\dots,p_t)$ and with parameters $\lala=(\la_2=0,\la_3=1,\dots, \la_t)$. A representation $M=\rep MQ{\vx}{\al}$ of an exceptional $\La-$module with positive rank is called \emph{acceptable} if it satisfies the following conditions. \begin{itemize} \item[\textbf{C1.}] The matrices $M_{\al_{1}^{(1)}}$, $M_{\al_{1}^{(3)}}$, $M_{\al_{1}^{(4)}}$, \dots, $M_{\al_{1}^{(t)}}$ have only entries of the form $\lambda_a-\lambda_b$ for some $a,b\geq 2$. \item[\textbf{C2.}] All other matrices have only $0$ and $1$ as their coefficients. \item[\textbf{C3.}] For each path $\omega_{u,v}^{(2)}:(u-1)\vx_2\lra v\vx_2$ the entries of the matrix $M_{\omega_{u,v}^{(2)}}$ are equal to $0$ or $1$. \item[\textbf{C4.}] For each path $\omega_{1,v}^{(i)}:0\lra v\vx_i$, where $i\neq 2$, the entries of the matrix $M_{\omega_{1,v}^{(i)}}$ are of the form $\lambda_a-\lambda_b$ for some $a,b\geq 2$. \item[\textbf{C5.}] For each path $\omega_{u,v}^{(i)}:(u-1)\vx_i\lra v\vx_i$, where $i\neq 2$ and $u\geq 2$, the entries of the matrix $M_{\omega_{u,v}^{(i)}}$ are equal to $0$ or $1$. \end{itemize} The following lemma \cite[Lemma 3.4]{Meltzer:2007} is useful. \begin{Lem} \label{lem:odwzorania_rkM>0} Let $M$ be an acceptable representation of a module in $\modplus\La$. Then by base change we can assume that $$M_{\al_{j}^{(i)}}=\left[\begin{array}{ccc} \\ &I_n&\\ \\ \hline 0&\cdots& 0\\ \vdots&\ddots&\vdots\\ 0&\cdots &0\\ \end{array}\right]\quad \text{for}\quad 2\leq j\leq p_i,\quad i=3,...,t.$$ In addition, the matrices $M_{\al_1^{(i)}}$ again have entries $\lambda_a-\lambda_b$ for some $a,b\geq 2$.\kwadracik \end{Lem} For an exceptional pair $(X,Y)$ with acceptable representations of $X$ and $Y$ we will construct a basis of the subspace $U(X,Y)$ for which each basis vector has only coefficients of the form $\lambda_a-\lambda_b$. In the case that $\rk X>0$ and $\rk Y>0$ this was done in \cite{Meltzer:2007}. \begin{Lem}\label{lem:base_for_positive_rank} Let $X$ and $Y$ be $\La$-modules in $\modplus\La$ with acceptable representations. Then there is a basis $F^{(1)},\dots, F^{(d)}$ of the subspace $U(X,Y),$ where $F^{(j)}=\left[f_\al^{(j)}\right]_{\al\in Q_1}$ satisfies the following properties: \begin{itemize} \item[$(i)$] The entries of the matrix $f_{\al_j^{(2)}}$ are equal to $0$ or $1$ for $1 \leq j\leq p_2$, \item[$(ii)$] The entries of the matrix $f_{\al_j^{(i)}}$ are equal to $0$ or $1$ for $2 \leq j\leq p_i$ and $i=1,3,4,\dots,t$,
\item[$(iii)$] The entries of the matrix $f_{\al_1^{(i)}}$ are of the form $\la_a-\la_b$ for $i=1,3,4,\dots,t$. \end{itemize} \end{Lem} Note that in the sequence $(\star)$ of Schofield induction the $\La-$module $Y$ always has positive rank, but $X$ can have rank zero. In this situation we need one more lemma. \begin{Lem} \label{lem:baze_for_regular} Let $Y$ and $X$ be exceptional $\La-$modules such that $Y\in\modplus\La$ and $X \in\modzero\La$. Assume that $X$ and $Y$ have acceptable representations and $X$ lies in the exceptional tube corresponding to the $i-$th arm of the canonical algebra. Then there is a basis $F^{(1)},\dots,F^{(d)}$ of the subspace $U(X,Y)$, where $F^{(j)}=\big[f_\al^{(j)}\big]_{\al\in Q_1}$ satisfies the following properties: \begin{itemize} \item[$(i)$] The entries of the matrix $f_{\al_j^{(2)}}$ are equal to $0$ or $1$ for $1\leq j\leq p_2$, \item[$(ii)$] The entries of the matrix $f_{\al_j^{(i)}}$ are equal to $0$ or $1$ for $1\leq j\leq p_i$, \item[$(iii)$] The entries of the matrix $f_{\al_1^{(m)}}$ are of the form $\lambda_a-\lambda_b$ for $m=1,3,4,\dots, t$, $m\neq i$, \item[$(iv)$] The entries of the matrix $f_{\al_j^{(m)}}$ are equal to $0$ or $1$ for $2\leq j\leq p_m$ and $m=3,4,\dots, t$. \end{itemize} \end{Lem} \begin{pf} Because $X$ lies in the exceptional tube corresponding to the $i-$th arm of the canonical algebra, it has a representation of the form $S_a^{[l]}$ from Section \ref{sec:samll_rank} such that \begin{itemize} \item[$(1)$] $1\leq a<p_i\quad\textnormal{and}\quad 0<l<p_i-a$, \item[$(2)$] $1\leq a<p_i\quad\textnormal{and}\quad p_i-a<l<p_i$, \item[$(3)$] $a=p_i\quad\textnormal{and}\quad 0<l<p_i$. \end{itemize} In particular, all vector spaces of $S_a^{[l]}$ are zero- or one-dimensional. {\bf Case $(1)$.} From the shape of $S_a^{[l]}$, any element of the subspace $U(X,Y)$ has the form $$F=\left[\begin{array}{ccccccccc} 0&\cdots& 0&0&\cdots&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&\cdots&0&0&\cdots&0\\ 0& \cdots&0& f_{\al_{a}^{(i)}} & \cdots & f_{\al_{a+l-1}^{(i)}}& 0&\cdots &0 \\ 0&\cdots& 0&0&\cdots&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&\cdots&0&0&\cdots&0\\ \end{array}\right].$$ In addition, the condition describing the subspace $U(X,Y)$ is trivially satisfied. Now we fix $j$ such that $a\leq j < a+l$. Denote by $e_{r}$ the matrix unit (the matrix with one coefficient equal to $1$, namely the coefficient in the row with index $r$, while the remaining coefficients are zero). Then $e_{r}$ is an element of $\Hom{k}{X_{j\vec{x}_i}}{Y_{(j-1)\vec{x}_i}}= \Hom{k}{k}{Y_{(j-1)\vec{x}_i}}$, where $1\leq r\leq \dim _kY_{(j-1)\vec{x}_i}$, and $$F_j^r=\left[\begin{array}{ccccccc} 0&\cdots& 0&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&0&\cdots&0\\ 0& \cdots&0& e_r & 0&\cdots &0 \\ 0&\cdots& 0&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&0&\cdots&0\\ \end{array}\right]$$ belongs to $U(X,Y)$ (here $e_{r}$ lies in the $j$-th column). It is easy to check that the elements $F_j^r$ for $1\leq r\leq \dim _kY_{(j-1)\vec{x}_i}$ and $a\leq j < a+l$ form a basis of the subspace $U(X,Y)$. {\bf Case $(2)$.}
Any element of the subspace $U(X,Y)$ has the form $$F=\left[\begin{array}{ccccccccc} f_{\al_{1}^{(1)}}&\cdots& f_{\al_{s-1}^{(1)}}& f_{\al_{s}^{(1)}}&\cdots & f_{\al_{a-1}^{(1)}}&f_{\al_{a}^{(1)}} &\cdots&f_{\al_{p_1}^{(1)}}\\ \vdots& &\vdots&\vdots& &\vdots&\vdots& &\vdots\\ f_{\al_{1}^{(i-1)}}&\cdots& f_{\al_{s-1}^{(i-1)}}& f_{\al_{s}^{(i-1)}}&\cdots & f_{\al_{a-1}^{(i-1)}}&f_{\al_{a}^{(i-1)}} &\cdots&f_{\al_{p_{i-1}}^{(i-1)}}\\ f_{\al_{1}^{(i)}}& \cdots& f_{\al_{s-1}^{(i)}}& 0 & \cdots& 0 &f_{\al_{a}^{(i)}} &\cdots&f_{\al_{p_i}^{(i)}} \\ f_{\al_{1}^{(i+1)}}&\cdots& f_{\al_{s-1}^{(i+1)}}& f_{\al_{s}^{(i+1)}}&\cdots & f_{\al_{a-1}^{(i+1)}}&f_{\al_{a}^{(i+1)}} &\cdots &f_{\al_{p_{i+1}}^{(i+1)}}\\ \vdots& &\vdots&\vdots& &\vdots&\vdots& &\vdots\\ f_{\al_{1}^{(t)}}&\cdots& f_{\al_{s-1}^{(t)}}& f_{\al_{s}^{(t)}}&\cdots & f_{\al_{a-1}^{(t)}}&f_{\al_{a}^{(t)}} &\cdots&f_{\al_{p_t}^{(t)}}\\ \end{array}\right],$$ where the condition describing $U(X,Y)$ takes the following shape: \begin{equation}\nonumber \begin{split} &Y_{\omega_{1,p_i-1}^{(i)}}f_{\al_{p_i}^{(i)}}+\cdots +Y_{\omega_{1,a-1 }^{(i)}}f_{\al_{a}^{(i)}}\\ =&Y_{\omega_{1,p_1-1}^{(1)}}f_{\al_{p_1}^{(1)}} +Y_{\omega_{1,p_1-2}^{(1)}}f_{\al_{p_1-1}^{(1)}} +\cdots + Y_{\al_1^{(1)}}f_{\al_{2}^{(1)}}+ f_{\al_{1}^{(1)}}\\ +&\lambda_i\left( Y_{\omega_{1,p_2-1}^{(2)}}f_{\al_{p_2}^{(2)}}\right. +\left. Y_{\omega_{1,p_2-2}^{(2)}}f_{\al_{p_2-1}^{(2)}} +\cdots + Y_{\al_1^{(2)}}f_{\al_{2}^{(2)}}+ f_{\al_{1}^{(2)}}\right) \end{split} \end{equation} and for $j\in\{3,4,\dots,t\}$ and $j\neq i$ we get \begin{equation}\nonumber \begin{split} &Y_{\omega_{1,p_j-1}^{(j)}}f_{\al_{p_j}^{(j)}} +Y_{\omega_{1,p_j-2}^{(j)}}f_{\al_{p_j-1}^{(j)}} +\cdots + Y_{\al_1^{(j)}}f_{\al_{2}^{(j)}}+ f_{\al_{1}^{(j)}}\\ =&Y_{\omega_{1,p_1-1}^{(1)}}f_{\al_{p_1}^{(1)}} +Y_{\omega_{1,p_1-2}^{(1)}}f_{\al_{p_1-1}^{(1)}} +\cdots + Y_{\al_1^{(1)}}f_{\al_{2}^{(1)}}+ f_{\al_{1}^{(1)}}\\ +&\lambda_j\left( Y_{\omega_{1,p_2-1}^{(2)}}f_{\al_{p_2}^{(2)}}\right. +\left. Y_{\omega_{1,p_2-2}^{(2)}}f_{\al_{p_2-1}^{(2)}} +\cdots + Y_{\al_1^{(2)}}f_{\al_{2}^{(2)}}+ f_{\al_{1}^{(2)}}\right). \end{split} \end{equation} We fix $j$ such that $2\leq j\leq p_1$. Again, $e_{r}$ denotes the matrix unit in $\Hom{k}{X_{j\vx_1}}{Y_{(j-1)\vx_1}}= \Hom{k}{k}{Y_{(j-1)\vx_1}}$ for $1\leq r\leq \dim _kY_{(j-1)\vx_1}$. Then the element $$F_{\al_j^{(1)}}^{r}=\left[\begin{array}{cccccccc} -Y_{\omega_{1,j-1}^{(1)}} e_{r} &0& \cdots & 0 & e_{r} &0 &\cdots&0\\ 0 &0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \end{array}\right]$$ (where $e_{r}$ lies in the $j$-th column) belongs to $U(X,Y)$. We fix $j$ such that $1\leq j\leq p_2$ and let $e_{r}$ belong to $\Hom{k}{X_{j\vec{x}_2}}{Y_{(j-1)\vec{x}_2}}=\Hom{k}{k}{Y_{(j-1)\vec{x}_2}}$ for $1\leq r\leq \dim _kY_{(j-1)\vec{x}_2}$.
Then the element $$F_{\al_j^{(2)}}^{r}=\left[\begin{array}{cccccccc} -\lambda_i Y_{\omega_{1,j-1}^{(2)}} e_{r} & 0&\cdots &0 &0&0&\cdots&0\\ 0 & 0& \cdots & 0 & e_{r} &0 &\cdots&0\\ (\la_3-\la_i) Y_{\omega_{1,j-1}^{(2)}} e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ (\lambda_{i-1}-\lambda_i) Y_{\omega_{1,j-1}^{(2)}} e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ (\lambda_{i+1}-\lambda_i) Y_{\omega_{1,j-1}^{(2)}} e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ (\lambda_t-\lambda_i) Y_{\omega_{1,j-1}^{(2)}} e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \end{array}\right]$$ belongs to $U(X,Y)$. Next, assume that $3\leq m\leq t$, $m\neq i$ and $1<j\leq p_m$. Let $e_{r}$ be the matrix unit in $\Hom{k}{X_{j\vec{x}_m}}{Y_{(j-1)\vec{x}_m}}=\Hom{k}{k}{Y_{(j-1)\vec{x}_m}}$ for $1\leq r\leq \dim _kY_{(j-1)\vec{x}_m}$. Then the element $$F_{\al_j^{(m)}}^{r}=\left[\begin{array}{cccccccc} 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 &0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ -Y_{\omega_{1,j-1}^{(m)}}e_{r} & 0& \cdots & 0 & e_{r} &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \end{array}\right]$$ belongs to $U(X,Y)$. Now we fix $j$ such that $j\in\{a, a+1,\dots, p_i \}$ and let $e_{r}$ be the matrix unit in $\Hom{k}{X_{j\vec{x}_i}}{Y_{(j-1)\vec{x}_i}}=\Hom{k}{k}{Y_{(j-1)\vec{x}_i}}$, where $1\leq r\leq \dim _kY_{(j-1)\vec{x}_i}$. Then the element $$F_{\al_j^{(i)}}^{r}=\left[\begin{array}{cccccccc} Y_{\omega_{1, j-1}^{(i)}}e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 & 0& \cdots & 0 & 0 &0 &\cdots&0\\ Y_{\omega_{1, j-1}^{(i)}}e_{r}& 0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ Y_{\omega_{1, j-1}^{(i)}}e_{r}& 0& \cdots & 0 & 0 &0 &\cdots&0\\ 0 & \cdots& \cdots & 0 & e_{r} &0 &\cdots&0\\ Y_{\omega_{1, j-1}^{(i)}}e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \vdots & \vdots& \ddots & \vdots & \vdots &\vdots &\ddots&\vdots\\ Y_{\omega_{1, j-1}^{(i)}}e_{r} & 0& \cdots & 0 & 0 &0 &\cdots&0\\ \end{array}\right]$$ belongs to $U(X,Y)$. Let $j$ be a natural number such that $j\in\{1, 2,\dots, s-1 \}$ and let $e_{r}$ belong to $\Hom{k}{X_{j\vec{x}_i}}{Y_{(j-1)\vec{x}_i}}=\Hom{k}{k}{Y_{(j-1)\vec{x}_i}}$ for $1\leq r\leq \dim _kY_{(j-1)\vec{x}_i}$. Then the element $$F_{\al_j^{(i)}}^r=\left[\begin{array}{ccccccc} 0&\cdots& 0&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&0&\cdots&0\\ 0& \cdots&0& e_r & 0&\cdots &0 \\ 0&\cdots& 0&0&0&\cdots&0\\ \vdots& &\vdots&\vdots&\vdots&&\vdots\\ 0&\cdots& 0&0&0&\cdots&0\\ \end{array}\right],$$ belongs to $U(X,Y)$, where $e_r$ lies in the $j-$th column and the $i-$th row. It is easy to check that the elements $F_{\al}^{r}$ form a basis of $U(X,Y)$. Finally, we must check that the matrices $f_{\al}^{r}$ of the basis vectors $F_{\al}^{r}$ have the desired entries. Because the representation of $Y$ is acceptable, the matrices $Y_{\omega_{u,v}^{(m)}}$ have only entries $0$, $\pm 1$, $\pm\la_a$, $\la_a-\la_b$. Hence the matrix $Y_{\omega_{1,j-1}^{(m)}}e_{r}$ has the same entries. In addition, for $m=2$ the coefficients of the matrices $Y_{\omega_{u,v}^{(2)}}$ are equal to $0$ or $1$. Therefore the matrices $(\la_a-\la_b)Y_{\omega_{1,j-1}^{(2)}}e_r$ have only entries $0$, $\pm 1$, $\pm \la_a$, $\lambda_a-\lambda_b$.
The case $(3)$ is similar to case $(2)$. \end{pf} Remark that coefficients of the form $\la_a-\la_b$ occur as coefficients of basis vectors of $U(X,Y)$ only if they appear in the acceptable representations of $X$ or $Y$. In particular, if $X$ and $Y$ are modules of rank $1$ as in Proposition \ref{thm:case:rank:one}, then all basis vectors of $U(X,Y)$ have only coefficients $0$, $\pm 1$ and $\pm\la_i$. \section{Proof of the main theorem} \begin{Prop}[Induction step] \label{Thm:induction_step} Let $M$ be an exceptional module over a canonical algebra $\La$ such that $\rk M\geq 2$. Let $(X,Y)$ be an orthogonal exceptional pair of $\La-$modules obtained from Schofield induction applied to $M$. If $X$ and $Y$ admit acceptable representations, then $M$ also admits an acceptable representation. \end{Prop} \begin{pf} We will use the basis $F^{(1)}=\left[f_{\al}^{(1)}\right]_{\al\in Q_1}$,\dots, $F^{(n)}=\left[f_{\al}^{(n)}\right]_{\al\in Q_1}$ of the subspace $U(X,Y)$ from Lemma \ref{lem:base_for_positive_rank} or Lemma \ref{lem:baze_for_regular}. Because $M$ belongs to $\Ff(X,Y)$, it has the following form: $$M=\left(Y_{\vx}^{\oplus v}\oplus X_{\vx}^{\oplus u}, \left[\begin{array}{c|c} Y^{\oplus v}_\al&\varphi_\al\\ \hline 0&X^{\oplus u}_\al\\ \end{array}\right] \right)_{0\leq \vx\leq \vc,\ \al\in Q_1},\quad\text{where}\quad\varphi_\al=\sum\limits_{m=1}^n \left(f_{\al}^{(m)}\otimes A_m\right),$$ for an exceptional $\Theta(n)-$representation $\xymatrix {k^v &&\ar @/^1pc/[ll]^{A_n} \ar @{}[ll]|{\vdots} \ar @/_1pc/[ll]_{A_1} k^u}$. Recall that all matrices $A_1$,\dots, $A_n$ have only entries $0$ and $1$ (see \cite{Ringel:1998}) and, moreover, by Lemma \ref{lem:Kronecker_niezerowe_wyrazy} non-zero coefficients in the matrices $A_1$,\dots,$A_n$ occur in different places. Therefore the matrix $\sum\limits_{m=1}^{n}(f^{(m)}_{\al}\otimes A_m)$ has the same entries as the matrices of the basis vectors $F^{(1)}$,\dots, $F^{(n)}$ of $U(X,Y)$. Therefore, because $X$ and $Y$ are acceptable, the matrix $M_{\al_1^{(i)}}$ has entries of the form $\lambda_a-\lambda_b$ for $i=1,3,4,\dots,t$, and for $i=2$ only $0$ and $1$ appear. Next, $M_{\al_j^{(i)}}$ is a zero-one matrix for $2\leq j\leq p_i$ and $i=1,2,\dots,t$. Now we must check that for each path $\omega_{l,m}^{(i)}=\al_m^{(i)}\dots\al_l^{(i)}$ the matrix $M_{\omega_{l,m}^{(i)}}=M_{\al_l^{(i)}}\dots M_{\al_m^{(i)}}$ has only the expected coefficients. After standard calculations we obtain that $$M_{\omega_{l,m}^{(i)}}= \left[\begin{array}{c|c} \left(Y_{\omega_{l,m}^{(i)}}\right)^{\oplus v}&\sum YfX_{\omega_{l,m}^{(i)}}\\ \\ \hline \\ 0& \left(X_{\omega_{l,m}^{(i)}}\right)^{\oplus u} \end{array}\right],$$ where by $\sum YfX_{\omega_{l,m}^{(i)}}$ we denote $$\sum\limits_{j=1}^{n} \left\{Y_{\omega_{l,m-1}^{(i)}} f^{(j)}_{{\al_m^{(i)}}}+ Y_{\omega_{l,m-2}^{(i)}} f^{(j)}_{\al_{m-1}^{(i)}} X_{\al_{m}^{(i)}}+ ...+ f^{(j)}_{\al_l^{(i)}} X_{\omega_{l+1,m}^{(i)}}\right\}\otimes A_j.$$ Because $X$ and $Y$ are acceptable, the matrices $Y_{\omega_{l,m}^{(i)}}$ and $X_{\omega_{l,m}^{(i)}}$ have only the desired entries. Again, $\sum YfX_{\omega_{l,m}^{(i)}}$ has the same coefficients as $$Y_{\omega_{l,m-1}^{(i)}} f^{(j)}_{{\al_m^{(i)}}}+ Y_{\omega_{l,m-2}^{(i)}} f^{(j)}_{\al_{m-1}^{(i)}} X_{\al_{m}^{(i)}}+ ...+ f^{(j)}_{\al_l^{(i)}} X_{\omega_{l+1,m}^{(i)}}.$$ Now the statement concerning the coefficients of the matrices $M_{\omega_{l,m}^{(i)}}$ follows from the explicit description of the elements $F^{(j)}$ by a case-by-case inspection.
\end{pf} Let us note that coefficients of the form $\la_a-\la_b$ appear only for regular modules. This means that if the tree \eqref{eq:induction_tree} contains only vector bundles, then each module in this tree (after translations) can be exhibited by matrices having coefficients $0$, $\pm 1$, $\pm\la_a$. \begin{pf}[Proof of Main Theorem] We prove the statement by induction on the rank of the exceptional module. Recall the description of exceptional modules of rank zero and one in Section \ref{sec:samll_rank}, which gives the start of the induction. Let $M$ be an exceptional $\La-$module of rank $r$ and assume that $r\geq 2$. Then $M$ corresponds to an exceptional vector bundle over the weighted projective line $\XX$ associated to $\La$. By repeated use of Schofield induction, we obtain the tree \eqref{eq:induction_tree} in the category $\coh\XX$ for $M$. Then by Corollary \ref{cor:translation_induction_tree} we can shift the whole tree so that each sheaf in the tree is a $\La-$module. Therefore, up to \textquotedbl almost all\textquotedbl{}, we can assume that the whole tree \eqref{eq:induction_tree} belongs to the category $\mod\La$. Because all tree components have smaller rank than $M$, they admit acceptable representations by the induction hypothesis. Therefore the claim follows from Proposition \ref{Thm:induction_step}. \end{pf} \bibliographystyle{amsplain}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} In the theory of vertex algebras and conformal field theory, the determination of fusion rules is one of the most important problems. By a result of Y. Z. Huang \cite{H}, for a rational vertex algebra fusion rules can be determined by using the Verlinde formula. However, although there are certain versions of the Verlinde formula for a broad class of non-rational vertex algebras, so far there is no proof that fusion rules for such algebras can be determined by using the Verlinde formula. One important example is the singlet vertex algebra for $(1,p)$--models, whose irreducible representations were classified in \cite{A-2003}. A Verlinde formula for fusion rules was also presented by T. Creutzig and A. Milas in \cite{CM}, but so far the proof was only given for the case $p=2$ in \cite{AdM-2017}. We should also mention that the fusion rules and intertwining operators for some affine and superconformal vertex algebras were studied in \cite{A-2001}, \cite{A-CMP}, \cite{CRTW} and \cite{KR-CMP}. In this paper we study the case of the Weyl vertex algebra, which we denote by $M$, also called the $\beta \gamma$ system in the physics literature. A Verlinde-type conjecture for its fusion rules was presented by S. Wood and D. Ridout in \cite{RW}. Here, we present a short proof of the Verlinde conjecture in this case. We prove the following fusion rules result: \begin{theorem} Assume that $\lambda , \mu, \lambda + \mu \in {\Bbb C} \setminus {\Bbb Z}$. Then we have: \begin{eqnarray} && \rho_{\ell_1} (M) \times \rho_{\ell_2 } (\widetilde{U(\lambda)} ) = \rho_{\ell_1+\ell_2 } (\widetilde{U(\lambda)}), \label{fusion-uv-1} \\ && \rho_{\ell_1} (\widetilde{U(\lambda )} ) \times \rho_{\ell_2 } (\widetilde{U(\mu )} ) = \rho_{\ell_1+\ell_2 } (\widetilde{U(\lambda + \mu)} ) + \rho_{\ell_1+\ell_2 -1} (\widetilde{U(\lambda + \mu)} ), \label{fus-uv-2} \end{eqnarray} where $\widetilde{U(\lambda )}$ is an irreducible weight module and $\rho_{\ell}$, $\ell \in {\Bbb Z}$, are the spectral flow automorphisms defined by (\ref{sepctral-flow-def}). \end{theorem} The fusion rule (\ref{fusion-uv-1}) was proved in Proposition \ref{sc-fusion-1}, and it is a direct consequence of the construction of H. Li \cite{Li-1997}. The main contribution of our paper is a vertex-algebraic proof of (\ref{fus-uv-2}), which uses the theory of intertwining operators for vertex algebras and the fusion rules for the affine vertex superalgebra $V_1(gl(1 \vert 1))$. We also prove a general irreducibility result which relates irreducible weight modules for the Weyl vertex algebra $M$ to irreducible weight modules for $V_1(gl(1 \vert 1))$ (see Theorem \ref{ired-general}). \begin{theorem} Assume that $\mathcal N$ is an irreducible weight $M$--module. Then $\mathcal N \otimes F$ is a completely reducible $V_1(gl(1 \vert 1))$--module. \end{theorem} The construction of the intertwining operators appearing in the fusion rules is based on two different embeddings of the Weyl vertex algebra $M$ into the lattice vertex algebra $\Pi(0)$. Then one $\Pi(0)$--intertwining operator gives two different $M$--intertwining operators; in this way both intertwining operators are realized as $\Pi(0)$--intertwining operators. Once we tensor the Weyl vertex algebra $M$ with the Clifford vertex algebra $F$, we can use the fusion rules for $V_1(gl(1 \vert 1))$ to calculate the fusion rules for $M$. It is known that fusion rules can be determined by using fusion rules for the singlet vertex algebra (cf. \cite{AdM-2017}, \cite{CM}).
However, we believe that our methods, which use $V_1(gl(1 \vert 1) )$, can be generalized to a wider class of vertex algebras. In our future work we plan to study the following related fusion rules problems:
\begin{itemize}
\item Connect fusion rules for the higher rank Weyl vertex algebra with fusion rules for $V_1(gl(n \vert m))$.
\item Extend the fusion ring with weight modules having infinite-dimensional weight spaces (cf. Subsection \ref{more-modules}) and possibly with irreducible Whittaker modules (cf. \cite{ALPY-2018}).
\end{itemize}
\vskip 5mm
{\bf Acknowledgment.} This work was done in part during the authors’ stay at the Research Institute for Mathematical Sciences (RIMS), Kyoto in July 2018. The results of this paper were also reported by V. P. at the Southeastern Lie Theory Workshop X, University of Georgia, Athens, GA, June 10--12, 2018. The authors are partially supported by the QuantiXLie Centre of Excellence, a project cofinanced by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (KK.01.1.1.01.0004).

\section{Fusion rules and intertwining operators}
In this section we recall the definition of intertwining operators and fusion rules. More details can be found in \cite{FZ}, \cite{FHL}, \cite{DLM}, \cite{CHY}. We also prove an important result on the action of certain automorphisms on intertwining operators. This result will enable us to produce new intertwining operators from existing ones.

Let $V$ be a conformal vertex algebra with the conformal vector $\omega$ and let $Y(\omega, z)= \sum_{n \in {\Bbb Z}} L(n) z^{-n-2}$. We assume that the derivation in the vertex algebra $V$ is $D=L(-1)$. A $V$-module (cf. \cite{LL}) is a vector space $M$ endowed with a linear map $Y_M$ from $V$ to the space of $End(M)$-valued fields $$ a\mapsto Y_M(a,z)=\sum_{n\in\Bbb Z}a^M_{(n)}z^{-n-1} $$ such that: \begin{enumerate} \item $Y_M(|0\rangle,z)=I_M$, \item for $a,b \in V$, \begin{align*}& z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_M(a,z_1)Y_M(b,z_2) - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)Y_M(b,z_2)Y_M(a,z_1)\\ &= z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)Y_M(Y(a,z_0)b,z_2). \end{align*} \end{enumerate} Given three $V$-modules $M_1$, $M_2$, $M_3$, an {\it intertwining operator of type $\Big(\begin{matrix}M_3\\M_1\ M_2\end{matrix}\Big)$} (cf.
\cite{FHL}, \cite{FZ}) is a map $I:a\mapsto I(a,z)=\sum_{n\in {\Bbb Z}}a^I_{(n)}z^{-n-1}$ from $M_1$ to the space of $Hom(M_2,M_3)$-valued fields such that: \begin{enumerate} \item for $a \in V$, $b \in M_1$, $c\in M_2$, the following Jacobi identity holds: \begin{align*}& z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_{M_3}(a,z_1)I(b,z_2)c - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)I(b,z_2)Y_{M_2}(a,z_1)c\\ &= z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)I(Y_{M_{1}}(a,z_0)b,z_2)c, \end{align*} \item for every $a \in M_1$, \[ I(L(-1)a,z) = \frac{d}{dz}I(a, z).\] \end{enumerate}

We let $I { M_3 \choose M_1 \ M_2}$ denote the space of intertwining operators of type ${ M_3 \choose M_1 \ M_2}$, and set $$N^{M_3}_{M_1,M_2}=\dim I\left(\begin{matrix}M_3\\M_1\ M_2\end{matrix}\right).$$ When $N^{M_3}_{M_1,M_2}$ is finite, it is usually called a {\it fusion coefficient}.\par

Assume that in the category $K$ of $L(0)$-diagonalizable $V$-modules, irreducible modules $\{M _i \ \vert \ i \in I\}$, where $I$ is an index set, have the following properties: \begin{itemize} \item[(1)] for every $i, j \in I$, $N^{M_k}_{M_i,M_j}$ is finite for any $k\in I$; \item[(2)] $N^{M_k}_{M_i,M_j} = 0 $ for all but finitely many $k \in I$. \end{itemize} Then the algebra with basis $\{e_i \ \vert \ i \in I \}$ and product $$e_i\cdot e_j = \sum_{k\in I } N^{M_k}_{M_i,M_j} e_k$$ is called the {\it fusion algebra} of $(V, K)$.
\vskip5pt
Let $K$ be a category of $V$-modules. Let $M_1$, $M_2$ be irreducible $V$-modules in $K$. Given an irreducible $V$-module $M_3$ in $K$, we will say that the fusion rule \begin{equation}\label{fr-sc} M_1 \times M_2=M_3 \end{equation} holds in $K$ if $N^{M_3}_{M_1,M_2}=1$ and $N^{R}_{M_1,M_2}=0$ for any other irreducible $V$-module $R$ in $K$ which is not isomorphic to $M_3$. We say that an irreducible $V$-module $M_1$ is a simple current in $K$ if $M_1$ is in $K$ and, for every irreducible $V$-module $M_2$ in $K$, there is an irreducible $V$-module $M_3$ in $K$, such that the fusion rule \eqref{fr-sc} holds in $K$ (see \cite{DLM}).

Recall that for any automorphism $g$ of $V$, and any $V$--module $(M,Y_M(\cdot, z))$, we have a new $V$--module $M \circ g=M^g$, such that $M^g \cong M$ as a vector space and the vertex operator $Y_M^g$ is given by $Y_M^g(v, z) := Y_M(g v,z)$, for $v \in V$. Namely, the only axiom we have to check is the Jacobi identity, and we have: \begin{align*} &z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_M^g(a,z_1)Y_M^g(b,z_2) - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)Y_M^g(b,z_2)Y_M^g(a,z_1)\\ &= z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_M(ga,z_1)Y_M(gb,z_2) - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)Y_M(gb,z_2)Y_M(ga,z_1) \\ & = z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)Y_M(Y(ga,z_0)gb,z_2)\\ & = z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)Y_M(gY(a,z_0)b,z_2) = z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)Y_M^g(Y(a,z_0)b,z_2), \end{align*} where in the last step we used that $g$ is an automorphism, i.e., $Y(ga,z_0)gb = gY(a,z_0)b$. Therefore, $M^g$ is a $V$--module.

The following proposition shows that the automorphism $g$ also produces a new intertwining operator. \begin{proposition}\label{auto} Let $g$ be an automorphism of the vertex algebra $V$ satisfying the condition \begin{eqnarray} \label{uvjet-autom} && \omega - g(\omega) \in \mbox{Im} (D).\end{eqnarray} Let $M_1$, $M_2$, $M_3$ be $V$--modules and $I(\cdot, z)$ an intertwining operator of type ${ M_3 \choose M_1 \ M_2}$. Then we have an intertwining operator $I^g$ of type ${ M_3 ^ g \choose M_1 ^ g \ M_2 ^ g}$, such that $I^g (b, z_1) = I(b, z_1)$, for all $b \in M_1$. Moreover, \[ N^{M_3}_{M_1,M_2} = N^{M_3 ^ g }_{M_1 ^ g ,M_2 ^ g}.
\] \end{proposition} \begin{proof} We have: \begin{align*} &z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_{M_3}^g(a,z_1)I^g(b,z_2)c - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)I^g(b,z_2)Y_{M_2}^g(a,z_1)c\\ &= z_0^{-1}\delta\Big(\frac{z_1-z_2}{z_0}\Big)Y_{M_3}(ga,z_1)I(b,z_2)c - z_0^{-1}\delta\Big(\frac{z_2-z_1}{-z_0}\Big)I(b,z_2)Y_{M_2}(ga,z_1)c\\ &= z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)I(Y_{M_{1}}(ga,z_0)b,z_2)c\\ &= z_2^{-1}\delta\Big(\frac{z_1-z_0}{z_2}\Big)I(Y_{M_{1}}^g(a,z_0)b,z_2)c. \end{align*} Set $$Y^g (\omega, z) = \sum_{n \in {\Bbb Z}} L(n) ^g z^{-n-1}. $$ Since $g(\omega) = \omega + D v$ for certain $v \in V$, we have that $$ g(\omega) _0 = \omega_0 + (Dv)_0 = \omega_0 = L(-1). $$ This implies that $ L(-1)^g = L(-1)$. Hence for $a \in M_1$ we have $$ I^g ( L(-1) ^g a, z) = I^g ( L(-1) a, z) =I (L(-1) a, z) = \frac{d}{d z} I(a, z) = \frac{d}{d z} I^g (a, z). $$ Therefore, $I^g$ has the $L(-1)$--derivation property and $I^g$ is an intertwining operator of type ${ M_3 ^ g \choose M_1 ^ g \ M_2 ^ g}$. \end{proof} \begin{remark} If $V$ is a vertex operator algebra and $g$ an automorphism of $V$, then $g(\omega)= \omega$ and the condition (\ref{uvjet-autom}) is automatically satisfied. In our applications, $g$ will only be a vertex algebra automorphism such that $g(\omega) \ne \omega$, yet the condition (\ref{uvjet-autom}) will be satisfied. \end{remark} \section{The Weyl vertex algebra} \label{Weyl} \subsection{The Weyl vertex algebra} The {\it Weyl algebra} $\widehat{\mathcal A}$ is an associative algebra with generators $$ a(n), a^{*} (n) \quad (n \in {\Bbb Z})$$ and relations \begin{eqnarray} && \ \ [a(n), a^{*} (m)] = \delta_{n+m,0}, \ \ [a(n), a(m)] = [a ^{*} (m), a ^* (n) ] = 0 \ \ (n,m \in {\Bbb Z}). \label{comut-Weyl} \end{eqnarray} Let $M$ denote the simple {\it Weyl module} generated by the cyclic vector ${\bf 1}$ such that $$ a(n) {\bf 1} = a ^* (n+1) {\bf 1} = 0 \quad (n \ge 0). $$ As a vector space, $$ M \cong {\Bbb C}[a(-n), a^*(-m) \ \vert \ n > 0, \ m \ge 0 ]. $$ There is a unique vertex algebra $(M, Y, {\bf 1})$ (cf. \cite{FB}, \cite{KR}, \cite{efren}) where the vertex operator map is $$ Y: M \rightarrow \mbox{End}(M) [[z, z ^{-1}]] $$ such that $$ Y (a(-1) {\bf 1}, z) = a(z), \quad Y(a^* (0) {\bf 1}, z) = a ^* (z),$$ $$ a(z) = \sum_{n \in {\Bbb Z} } a(n) z^{-n-1}, \ \ a^{*}(z) = \sum_{n \in {\Bbb Z} } a^{*}(n) z^{-n}. $$ In particular we have: $$ Y (a(-1) a^* (0) {\bf 1}, z) = a(z) ^+ a^* (z) + a ^* (z) a(z) ^- , $$ where $$a (z) ^+ = \sum_{n \le -1} a(n) z ^{-n-1}, \quad a (z) ^- = \sum_{n \ge 0} a(n) z ^{-n-1}. $$ Let $\beta := a(-1) a^* (0) {\bf 1}$. Set $\beta (z) = Y(\beta ,z) = \sum_{n \in {\Bbb Z} } \beta (n) z ^{-n-2}$. Then $\beta$ is a Heisenberg vector in $M$ of level $-1$. This means that the components of the field $\beta(z)$ satisfy the commutation relations $$[ \beta(n), \beta(m)] = - n \delta_{n+m,0} \quad (n, m \in {\Bbb Z}). $$ Also, we have the following formula $$ [\beta(n), a(m) ] = - a (n+m), \quad [\beta(n), a^* (m) ] = a^* (n+m). $$ The vertex algebra $M$ admits a family of Virasoro vectors $$ \omega_{\mu} = (1-\mu) a (-1) a^* (-1) {\bf 1} - \mu a (-2) a^{*} (0) {\bf 1} \quad (\mu \in {\Bbb C})$$ of central charge $c_{\mu} = 2 (6 \mu (\mu-1) +1)$. 
Let $$L^{\mu} (z) = Y(\omega_{\mu}, z) = \sum_{n \in {\Bbb Z} } L^{\mu} (n) z^{-n-2}.$$ This means that the components of the field $L^{\mu}(z)$ satisfy the relations $$[L^{\mu} (n), L^{\mu} (m) ] = (n-m) L^{\mu} (m+n) + \frac{n^3 - n} {12} \delta_{n+m,0} c_{\mu}.$$ For $\mu =0$ we write $\omega = \omega_{0}$ and $L(n) = L^{0} (n)$; then $c_{0} = 2$. Clearly $$\omega_{\mu} = \omega - \mu \beta(-2) {\bf 1}. $$ Since $(\beta(-2)\textbf{1})_0 = (D\beta)_0 = 0$, we have that \begin{eqnarray} && L^{\mu}(-1) = L(-1), \quad \mbox{for every} \ \mu \in \mathbb{C}. \label{der-mu} \end{eqnarray} For $n, m \in {\Bbb Z}$ we have $$ [L(n), a(m) ] = - m a(n+m), \quad [L(n), a^*(m) ] = - (m+ n) a^*(n+m). $$ In particular, $$[L(0), a(m) ] =-m a(m), \quad [L(0), a^*(m)]=-m a^*(m). $$

\begin{lemma} \label{classification-zhu} Assume that $W = \bigoplus_{\ell \in {\Bbb Z_{\ge 0} } } W(\ell) $ is a ${\Bbb Z}_{\ge 0}$--graded $M$--module with respect to $L(0)$. Then $$L(0) \equiv 0 \quad \mbox{on} \ W(0). $$ \end{lemma} \begin{proof} Since $W$ is ${\Bbb Z}_{\ge 0}$--graded with top component $W(0)$, the operators $a(n), a^*(n)$ must act trivially on $W(0)$ for all $n \in {\Bbb Z}_{>0}$. Since $$ L(z) = :\partial a^*(z) a (z): = \sum_{n \in {\Bbb Z} } L(n) z ^{-n-2}, $$ we have $$ L(0) = \sum_{n= 1} ^{\infty} n ( a ^* (-n) a (n) - a(-n) a^* (n)) \equiv 0 \quad (\mbox{on} \ W(0)). $$ The Lemma holds. \end{proof} \begin{remark} One can show that Zhu's algebra $A(M) \cong A_1$ and that $[\omega] = 0$ in $A(M)$. This can give a second proof of Lemma \ref{classification-zhu}. \end{remark}

\begin{definition} A module $W$ for the Weyl algebra $\widehat{\mathcal A}$ is called restricted if the following condition holds: \begin{itemize} \item For every $w \in W$, there is $N \in {\Bbb Z}_{\ge 0}$ such that $$ a(n) w = a^* (n) w = 0, \quad \mbox{for} \ n \ge N. $$ \end{itemize} \end{definition}

\subsection{Automorphisms of the Weyl algebra}
Denote by $\mbox{Aut} (\widehat{ {\mathcal A}})$ the group of automorphisms of the Weyl algebra $\widehat{ {\mathcal A}}$. For any $f \in \mbox{Aut} (\widehat{ {\mathcal A}})$ and any $\widehat{ {\mathcal A}}$--module $N$, one can construct an $\widehat{ {\mathcal A}}$--module $f(N)$ as follows: $$ f(N) := N \quad \mbox{as a vector space, with the action} \ \ x. v = f(x) v \quad (v \in N).$$ For $f, g \in \mbox{Aut} (\widehat{ {\mathcal A}})$, we have \begin{eqnarray} \label{composit-action} (f \circ g) (N) = g (f(N)). \end{eqnarray} For every $ s \in {\Bbb Z}$ the Weyl algebra $\widehat{ {\mathcal A}}$ admits the following automorphism: \begin{eqnarray} \rho_s ( a(n) ) = a (n+s), \quad \rho_s ( a^* (n) ) = a^* (n-s). \label{spectral-flow-def} \end{eqnarray} Indeed, $\rho_s$ preserves the defining relations (\ref{comut-Weyl}), since $[a(n+s), a^*(m-s)] = \delta_{n+m,0}$. The automorphism $\rho_s$, which can be lifted to an automorphism of the vertex algebra $M$, is called the spectral flow automorphism. Assume that $U$ is any restricted module for $\widehat{ {\mathcal A}}$. Then $\rho_s(U)$ is also a restricted module for $\widehat{ {\mathcal A}}$ and $\rho_s(U)$ is a module for the vertex algebra $M$.

Let $\mathcal K$ be the category of weight $M$--modules such that the operators $\beta(n)$, $n \ge 1$, act locally nilpotently on each module $N$ in $\mathcal K$. Applying the automorphism $\rho_s$ to the vertex algebra $M$, we get the $M$--module $\rho_s(M)$, which is a simple current in the category $\mathcal K$. The proof is essentially given by H. Li in \cite[Theorem 2.15]{Li-1997} in a slightly different setting.
\begin{proposition} \label{sc-fusion-1} \cite{Li-1997} Assume that $N$ is an irreducible weight $M$--module in the category $\mathcal K$. Then the following fusion rules hold: \begin{eqnarray} \rho_{s_1} (M) \times \rho_{s_2} (N) = \rho_{s_1+ s_2} (N) \quad (s_1, s_2 \in {\Bbb Z}). \label{fusion-sc} \end{eqnarray} \end{proposition} \begin{proof} First we notice that if $N$ is an irreducible $M$--module in $\mathcal K$, we have the following fusion rule: \begin{eqnarray} M \times N = N. \label{fusion-basic} \end{eqnarray} Using \cite{Li-1997}, one can prove that $\rho_s(M)$ is constructed from $M$ as: $$ (\rho_s(M), Y_s (\cdot,z)):= (M, Y( \Delta(-s \beta, z)\cdot, z)),$$ where $$ \Delta(v, z):=z^{v_0} \exp\left( \sum_{n=1}^{\infty} \frac{v_n}{-n} (-z) ^{-n}\right). $$ Assume that $N_i$, $i=1,2,3$, are irreducible modules in $\mathcal K$. By \cite[Proposition 2.4]{Li-1997}, from an intertwining operator $I(\cdot, z)$ of type ${ N_3 \choose N_1 \ N_2}$ one can construct an intertwining operator $I_{s_2}(\cdot, z)$ of type ${ \rho_{s_2} (N_3) \choose N_1 \ \rho_{s_2} (N_2)}$, where $$I_{s_2} (v, z):= I(\Delta(-s_2 \beta, z) v, z) \quad (v \in N_1 ). $$ Now, the fusion rule (\ref{fusion-sc}) follows easily from the above construction using (\ref{fusion-basic}). \end{proof}

Consider the following automorphism of the Weyl vertex algebra: \begin{eqnarray} g : && M \rightarrow M \nonumber \\ && a\mapsto -a^{*}, \ \ a^* \mapsto a \nonumber \end{eqnarray} Assume that $U$ is any $M$--module. Then $U^{g }= U \circ g $ is generated by the following fields: $$ a_{g} (z) = - a^*(z), \ a^ * _{g} (z) = a(z). $$ As an $\widehat {\mathcal A}$--module, $U^{g}$ is obtained from $U$ by applying the following automorphism $g$: \begin{eqnarray} && a(n) \mapsto - a^* (n+1), \ \ a^{*} (n) \mapsto a(n- 1). \label{autom-1} \end{eqnarray} This implies that \begin{eqnarray} && g = \rho_{-1} \circ \sigma = \sigma \circ \rho_{1}, \label{autom-2} \end{eqnarray} where $\sigma$ is the automorphism of $\widehat {\mathcal A}$ determined by \begin{eqnarray} && a(n) \mapsto - a^* (n), \ \ a^{*} (n) \mapsto a(n). \label{autom-3} \end{eqnarray} The automorphism $g$ is then a vertex algebra automorphism of order $4$. Denote by $\sigma_0 $ the restriction of $\sigma$ to $A_1$. Using (\ref{autom-1})--(\ref{autom-3}) we get the following result: \begin{lemma} \label{djelovanje-autom} Assume that $W$ is an irreducible $M$--module. Then $$W^g \cong \rho_{1} (\sigma (W) ). $$ \end{lemma} \begin{proof} We have that, as an $\widehat{\mathcal A}$--module, $$ W^ g = (\rho_{-1} \circ \sigma) (W). $$ Since $\rho_{-1} \circ \sigma = \sigma \circ \rho_1$, by applying (\ref{composit-action}) we get: $$ W^ g = (\sigma \circ \rho_1 )(W) = \rho_1 (\sigma (W)).$$ The proof follows. \end{proof}

\subsection{Weight modules for the Weyl vertex algebra}
\begin{definition} A module $W$ for the Weyl vertex algebra $M$ is called {\bf weight} if the operators $\beta(0)$ and $L(0)$ act semisimply on $W$. \end{definition} Clearly, the vertex algebra $M$ itself is a weight module. We will now construct a family of weight modules. \begin{itemize} \item Recall that the first Weyl algebra $A_1$ is generated by $x, \partial _x$ with the commutation relation $$[\partial_x, x ] = 1. $$ \item For every $\lambda \in {\Bbb C}$, $$U(\lambda) := x^{\lambda} {\Bbb C}[x, x^{-1}]$$ has the structure of an $A_1$--module. \item $U(\lambda)$ is irreducible if and only if $\lambda \in {\Bbb C} \setminus {\Bbb Z}$.
\item Note that $a(0), a^*(0)$ generate a subalgebra of the Weyl algebra which is isomorphic to the first Weyl algebra $A_1$. Therefore $U(\lambda)$ can be treated as an $A_1$--module by letting $a(0) = \partial_x$, $a^*(0) = x$. \item By applying the automorphism $\sigma_0$ to $U(\lambda)$ we get $$ \sigma_0 (U(\lambda)) \cong U(-\lambda). $$ Indeed, let $z_{-\mu-1} = \frac{x^{\mu}} {\Gamma(\mu+1)}$, where $\mu \in {\Bbb C}\setminus{\Bbb Z}$. Then $$ \sigma_0( a(0)). z_{-\mu} = - \frac{x^{\mu}} {\Gamma(\mu)} = - \mu \frac{x^{\mu}} {\Gamma(\mu +1)} = -\mu z_{-\mu -1}, $$ $$ \sigma_0( a^* (0)). z_{-\mu} = (\mu-1) \frac{x^{\mu-2}} {\Gamma(\mu)} = \frac{x^{\mu-2}} {\Gamma(\mu-1)} = z_{-\mu +1} . $$ \item Define the following subalgebras of $\widehat{\mathcal A}$: $$ \widehat{\mathcal A}_{\ge 0 } = {\Bbb C}[a( n), a^*( m) \ \vert \ n, m \in {\Bbb Z}_{\ge 0} ], $$ $$ \widehat{\mathcal A}_{< 0 } = {\Bbb C}[a( - n), a^*( - n) \ \vert \ n \in {\Bbb Z}_{\ge 1} ]. $$ \item The $A_1$--module structure on $U(\lambda)$ can be extended to a structure of an $\widehat{\mathcal A}_{\ge 0 }$--module by defining $$ \restr{a(n)}{U(\lambda)} = \restr{a^*(n)}{U(\lambda)} \equiv 0 \qquad (n \ge 1). $$ \item Then we have the induced module for the Weyl algebra: $$\widetilde{U(\lambda)} = \widehat{\mathcal A} \otimes _{\widehat{\mathcal A}_{\ge 0 } } U(\lambda)$$ \item[] which is isomorphic to $$ {\Bbb C}[a (-n), a^*(-n) \ \vert \ n \ge 1] \otimes U(\lambda)$$ as a vector space. \end{itemize}

\begin{proposition} For every $\lambda \in {\Bbb C} \setminus {\Bbb Z}$, $\widetilde{U(\lambda)}$ is an irreducible weight module for the Weyl vertex algebra $M$. \end{proposition} \begin{proof} The proof follows from Lemma \ref{classification-zhu} and the fact that $\widetilde{U(\lambda)}$ is a ${\Bbb Z_{\ge 0}}$--graded $M$--module whose top component is an irreducible module for $A_1$. \end{proof} Applying Lemma \ref{djelovanje-autom} we get: \begin{corollary} \label{ired-relaxed} For every $\lambda \in {\Bbb C} \setminus {\Bbb Z}$ and $s \in {\Bbb Z}$ we have $$\widetilde{U(\lambda)} ^{g} \cong \rho_{1} ( \widetilde{U(-\lambda)}), \quad \left( \rho_{-s+1} ( \widetilde{U(\lambda)}) \right)^g \cong \rho_{s} ( \widetilde{U(-\lambda)}). $$ \end{corollary}

\subsection{More general weight modules} \label{more-modules}
A classification of irreducible weight modules for the Weyl algebra $\widehat{\mathcal A}$ is given in \cite{FGM}. Let us describe here a family of weight modules having infinite-dimensional weight spaces. Take ${\lambda}, \mu \in {\Bbb C} \setminus {\Bbb Z}$. Let $$U(\lambda, \mu) = x_1 ^{\lambda} x_2 ^{\mu} {\Bbb C} [x_1, x_2, x_1 ^{-1}, x_2 ^{-1}]. $$ Then $U(\lambda, \mu)$ is an irreducible module for the second Weyl algebra $A_2$ generated by $\partial _1, \partial_2, x_1, x_2$. Note that $A_2$ can be realized as the subalgebra of $\widehat{\mathcal A}$ generated by $\partial_2 = a(1), \partial _1 = a(0), x_2 = a^*(-1), x_1 = a^*(0)$. Then we have the irreducible $\widehat{\mathcal A}$--module $\widetilde{U( \lambda, \mu)} $ as follows. Let $\mathcal B$ be the subalgebra of $\widehat{\mathcal A}$ generated by $a(i), a^*(j)$, $i \ge 0, j \ge -1$. Consider $U(\lambda, \mu)$ as a $\mathcal B$--module such that $ a(n) $, $a^*(m)$ act trivially for $n \ge 2$, $m \ge 1$. Then by \cite{FGM}, $$\widetilde{U( \lambda, \mu)} =\widehat{\mathcal A} \otimes _{\mathcal B} U(\lambda, \mu)$$ is an irreducible $\widehat{\mathcal A}$--module.
As a vector space: \begin{eqnarray} \widetilde{U( \lambda, \mu)} &\cong & {\Bbb C}[ a (-n-1), a^{*} (-m-2) \ \vert \ n,m \in {\Bbb Z}_{\ge 0}] \otimes U(\lambda, \mu) \nonumber \\ &\cong& a^* (0) ^{\lambda} a^*(-1) ^{\mu} {\Bbb C}[ a (-n-1), a^{*} (-m) \ \vert \ n,m \in {\Bbb Z}_{\ge 0} ]. \nonumber \end{eqnarray} Since $\widetilde{U( \lambda, \mu)}$ is a restricted $\widehat{\mathcal A}$--module we get: \begin{proposition} $\widetilde{U( \lambda, \mu)}$ is an irreducible weight module for the Weyl vertex algebra $M$. \end{proposition} One can see that the weight spaces of the module $\widetilde{U( \lambda, \mu)}$ are all infinite-dimensional with respect to $(\beta(0), L(0))$. In particular, the vectors $$a(-1) ^m a^ * (0) ^{\lambda + 2m } a^*(-1) ^{\mu- m}, \quad m \in {\Bbb Z}_{\ge 0}, $$ are linearly independent and they belong to the same weight space. \begin{remark} Note that the modules $\widetilde{U( \lambda, \mu)}$ are not in the category $\mathcal K$, and therefore Proposition \ref{sc-fusion-1} can not be applied in this case. \end{remark}

\section{The vertex algebra $\Pi(0)$ and the construction of intertwining operators } \label{lattice}
In this section we present a bosonic realization of the weight modules for the Weyl vertex algebra. We also construct intertwining operators using this bosonic realization.
\subsection{The vertex algebra $\Pi(0)$ and its modules}
Let $L$ be the lattice $$ L= {\Bbb Z} \alpha + {\Bbb Z}\beta, \ \langle \alpha , \alpha \rangle = - \langle \beta , \beta \rangle = 1, \quad \langle \alpha, \beta \rangle = 0, $$ and $V_L = M_{\alpha, \beta} (1) \otimes {\Bbb C} [L]$ the associated lattice vertex superalgebra, where $M_{\alpha, \beta}(1) $ is the Heisenberg vertex algebra generated by the fields $\alpha(z)$ and $\beta(z)$ and ${\Bbb C}[L]$ is the group algebra of $L$. We have its vertex subalgebra $$ \Pi (0) = M_{\alpha, \beta} (1) \otimes {\Bbb C} [\Bbb Z (\alpha + \beta) ] \subset V_L. $$ There is an injective vertex algebra homomorphism $f : M \rightarrow \Pi(0)$ such that $$ f(a) = e^{\alpha + \beta}, \ f(a^{*}) = -\alpha(-1) e^{-\alpha-\beta}. $$ We identify $a, a^*$ with their images in $\Pi(0)$. We have (cf. \cite{efren}) $$ M \cong \mbox{Ker}_{\Pi(0)} e ^{\alpha}_0.$$ The Virasoro vector $\omega$ is mapped to $$\omega = a(-1) a^* (-1) {\bf 1} = \frac{1}{2} (\alpha (-1) ^2 - \alpha(-2) - \beta(-1) ^2 + \beta(-2) ) {\bf 1}. $$ Note also that $$ g(\omega) = - a(-2) a^{*}(0) {\bf 1} = \omega_{1} = \frac{1}{2} (\alpha (-1) ^2 - \alpha(-2) - \beta(-1) ^2 - \beta(-2) ) {\bf 1}. $$ Since \begin{eqnarray} && \Pi(0) = {\Bbb C}[\Bbb Z (\alpha+\beta)] \otimes M_{\alpha, \beta} (1) \label{def-pi0} \end{eqnarray} is a vertex subalgebra of $V_L$, for every $\lambda \in {\Bbb C}$ and $r \in {\Bbb Z}$, $$\Pi _r (\lambda) ={\Bbb C}[r \beta + (\Bbb Z + \lambda )(\alpha+\beta)] \otimes M_{\alpha, \beta} (1) = \Pi(0). e^{r \beta + \lambda (\alpha + \beta)} $$ is an irreducible $\Pi(0)$--module. We have \begin{eqnarray} L (0) e^{ r \beta + (n + \lambda )(\alpha+\beta) } &=& \frac{1-r}{2} ( r + 2 (n + \lambda)) e^{ r \beta + (n + \lambda )(\alpha+\beta) }, \nonumber \end{eqnarray} and for $\mu = 1$ \begin{eqnarray} L^{\mu} (0) e^{ r \beta + (n + \lambda )(\alpha+\beta) } &=& -\frac{1}{2} r ( 1+ r + 2 (n + \lambda)) e^{ r \beta + (n + \lambda )(\alpha+\beta) }. \nonumber \end{eqnarray} \begin{proposition} \label{ired-weyl-1} Assume that $\ell \in {\Bbb Z}$, $\lambda \in {\Bbb C} \setminus {\Bbb Z}$.
Then as $M$--modules: \begin{itemize} \item[(1)] $\Pi_{\ell}( \lambda) \cong \rho_{-\ell +1} ( \widetilde{U(-\lambda)})$, \item[(2)] $\Pi_{\ell}( \lambda)^g \cong \rho_{\ell} ( \widetilde{U(\lambda)})$. \end{itemize} \end{proposition} \begin{proof} Assume first that $r=1$. Then $\Pi _1 (-\lambda)$ is a ${\Bbb Z}_{\ge 0}$--graded $M$--module whose lowest component is $$ \Pi _1 (-\lambda) (0) \cong {\Bbb C}[\beta + (\Bbb Z - \lambda )(\alpha+\beta)] \cong U(\lambda). $$ Now Lemma \ref{classification-zhu} and Corollary \ref{ired-relaxed} imply that $\Pi _1 (-\lambda)$ is an irreducible $M$--module isomorphic to $\widetilde{U(\lambda)}$. The modules $\Pi _{\ell} (-\lambda)$ can be obtained from $\Pi _1 (-\lambda)$ by applying the spectral flow automorphism $\rho_{-\ell+1}$, realized by $e ^{(\ell-1)\beta}$. Using Corollary \ref{ired-relaxed} we get \begin{eqnarray} \Pi_{\ell}( \lambda)^g &= & \left( \rho_{-\ell+1} ( \widetilde{U(-\lambda)} )\right)^g = \rho_1 \sigma \rho_{-\ell +1} ( \widetilde{U(- \lambda)} ) \nonumber \\ &=& \rho_{\ell} \sigma ( \widetilde{U(-\lambda)}) = \rho_{\ell} ( \widetilde{U(\lambda)}). \nonumber \end{eqnarray} The proof follows. \end{proof}

\subsection{Construction of intertwining operators}
\begin{proposition} \label{konstrukcija-int-op} For every $\ell _1, \ell _2 \in {\Bbb Z}$ and $\lambda , \mu \in {\Bbb C}$ there exist non-trivial intertwining operators of types \begin{eqnarray} {\rho_{\ell _1 + \ell _2-1 } ( \widetilde{U(\lambda + \mu )}) \choose \rho _{\ell _1} ( \widetilde{U(\lambda )}) \ \ \rho_{\ell_2 } ( \widetilde{U(\mu )}) }, \quad {\rho_{\ell _1 + \ell _2 } ( \widetilde{U(\lambda + \mu )}) \choose \rho _{\ell _1} ( \widetilde{U(\lambda )}) \ \ \rho_{\ell_2 } ( \widetilde{U(\mu )}) } \label{int-op} \end{eqnarray} in the category of weight $M$--modules. \end{proposition} \begin{proof} By using the explicit bosonic realization, as in \cite{DL}, one can construct a unique non-trivial intertwining operator $I(\cdot, z)$ of type \begin{eqnarray} &&{\Pi_{s_1 + s_2} (\lambda_1 + \lambda_2) \choose \Pi _{s_1} (\lambda_1) \ \ \Pi_{s_2} (\lambda_2) } \label{int-pi} \end{eqnarray} in the category of $\Pi(0)$--modules such that $$ e^{ s_1 \beta + \lambda_1 (\alpha+ \beta)} _{\nu} e^{ s_2 \beta + \lambda_2 (\alpha+ \beta)} = e^{ (s_1+ s_2) \beta + (\lambda_1 + \lambda_2) (\alpha+ \beta)} \quad \mbox{for suitable} \ \nu \in {\Bbb C}. $$ By restriction, this gives a non-trivial intertwining operator in the category of weight $M$--modules. Taking the embedding $f : M \rightarrow \Pi(0)$ and applying Proposition \ref{ired-weyl-1}, we conclude that the operator (\ref{int-pi}) gives the intertwining operator of type $$ {\rho_{-s_1 - s_2+1 } ( \widetilde{U(-\lambda_1 - \lambda_2)}) \choose \rho _{-s_1+1} ( \widetilde{U(-\lambda_1)}) \ \ \rho_{-s_2+1} ( \widetilde{U(-\lambda_2)}) }, $$ which for $\ell_1 = -s_1 +1$, $\ell_2 = -s_2+1$, $\lambda= -\lambda_1$, $\mu = -\lambda_2$ gives the first intertwining operator. By using the action of the automorphism $g$ and Proposition \ref{ired-weyl-1} we get the following intertwining operator $$ {\rho_{s_1 + s_2 } ( \widetilde{U(\lambda_1 + \lambda_2)}) \choose \rho _{s_1} ( \widetilde{U(\lambda_1)}) \ \ \rho_{s_2} ( \widetilde{U(\lambda_2)}) }, $$ which for $\ell_1 = s_1$, $\ell_2 = s_2$, $\lambda= \lambda_1$, $\mu = \lambda_2$ gives the second intertwining operator. \end{proof} \begin{remark} The intertwining operators in Proposition \ref{konstrukcija-int-op} are realized on irreducible $\Pi(0)$--modules.
This result can be read as $$ \Pi _{\ell _1} (\lambda) \times \Pi_{\ell_2} (\mu) \supseteq \Pi_{\ell _1 + \ell_2-1} (\lambda + \mu) + \Pi_{\ell _1 + \ell_2} (\lambda + \mu). $$ In the category of $M$--modules, we have non-trivial intertwining operators \begin{eqnarray} &&{\Pi_{\ell _1 + \ell_2-1} (\lambda + \mu ) \choose \Pi _{\ell _1} (\lambda) \ \ \Pi_{\ell_2} (\mu) }, \label{int-pi-2} \end{eqnarray} which are not $\Pi(0)$--intertwining operators. \end{remark} \begin{remark} Note that $(M, Y, \textbf{1})$ is a conformal vertex algebra with the conformal vector \[\omega = a(-1)a^{*}(-1)\textbf{1}.\] Note that the intertwining operators (\ref{int-op}) satisfy the $L(-1)$-derivative property. The intertwining operators (\ref{int-pi}) satisfy this property by the lattice realization as before, and the intertwining operators (\ref{int-pi-2}) satisfy it by Proposition \ref{auto}, using the facts that $g(\omega) = \omega_1$ and $L^{\mu}(-1) = L(-1)$ for $\mu = 1$. Moreover, using relation (\ref{der-mu}) we see that the $L^{\mu} (-1)$--derivation property holds for every $\mu \in \mathbb{C} $, for all intertwining operators constructed above. \end{remark}

\section{ The vertex algebra $V_1 (gl(1 \vert 1))$ and its modules}
\subsection{On the vertex algebra $V_1(gl(1 \vert 1))$.}
We now recall some results on the representation theory of $gl(1 \vert 1)$ and $\widehat{ gl(1\vert 1)}$. The terminology follows \cite[Section 5]{CR1}. Let $\frak g = gl(1 \vert 1)$ be the complex Lie superalgebra generated by two even elements $E$, $N$ and two odd elements $\Psi^{\pm}$ with the following (super)commutation relations: $$ [\Psi^+, \Psi^{-}] = E, \ [E, \Psi^{\pm}] = [E, N] = 0, \ [N, \Psi^{\pm}] = \pm \Psi^{\pm}. $$ All other (super)commutators are trivial. Let $ (\cdot, \cdot)$ be the invariant super-symmetric bilinear form such that $$ ( \Psi^+, \Psi^-) = ( \Psi^-, \Psi^+) = 1, \ (N, E) = (E, N) = 1,$$ with all other products equal to zero. Let $\widehat {\frak g} = \widehat{gl(1 \vert 1)}= {\frak g} \otimes {\Bbb C}[t,t^{-1}] + {\Bbb C} K$ be the associated affine Lie superalgebra with the commutation relations $$ [x (n), y(m)] = [x, y](n+m) + n (x , y) \delta_{n+m,0} K, $$ where $K$ is central and for $x \in {\frak g}$ we set $x(n) = x \otimes t^n$. Let $V_k(\frak g)$ be the associated simple affine vertex algebra of level $k$.

Let $\mathcal{V} _{r,s}$ be the Verma module for the Lie superalgebra $\frak g$ generated by the vector $v_{r,s}$ such that $ N v_{r,s} = r v_{r, s}$, $ E v_{r,s} = s v_{r,s}$. This module is $2$--dimensional, and it is irreducible if and only if $s \ne 0$. If $s= 0$, $\mathcal{V} _{r,s}$ has a $1$--dimensional irreducible quotient, which we denote by $\mathcal{A}_{r}$. We will need the following tensor product decompositions: \begin{eqnarray} && \mathcal{A}_{r_1} \otimes \mathcal{A}_{r_2} = \mathcal{A}_{r_1+ r_2} , \quad \mathcal{A}_{r_1} \otimes \mathcal{V} _{r_2,s_2} = \mathcal{V} _{r_1 + r_2 , s_2}, \label{dek-1} \\ && \mathcal{V} _{r_1,s_1} \otimes \mathcal{V} _{r_2,s_2} = \mathcal{V} _{r_1 + r_2 ,s_1+s_2} \oplus \mathcal{V} _{r_1+r_2 - 1 ,s_1 + s_2} \quad (s_1 + s_2 \ne 0), \label{dek-2} \\ && \mathcal{V} _{r_1,s_1} \otimes \mathcal{V} _{r_2,-s_1} = \mathcal{P} _{r_1 + r_2 }, \label{dek-3} \end{eqnarray} where $\mathcal{P}_r$ is the $4$--dimensional indecomposable module which appears in the following extension: $$ 0 \rightarrow \mathcal{V} _{r ,0} \rightarrow \mathcal{P}_r \rightarrow \mathcal{V} _{r -1,0} \rightarrow 0.
$$ Let $\widehat {\mathcal{V} } _{r,s}$ denote the Verma module of level $1$ induced from the $gl(1 \vert 1)$--module $\mathcal{V} _{r ,s }$. If $s\notin \Bbb Z$, then $\widehat {\mathcal{V} } _{r,s}$ is an irreducible $V_1 (gl(1 \vert 1))$--module. If $s\in {\Bbb Z}$, $\widehat {\mathcal{V} } _{r,s}$ is reducible and its structure is described in \cite[Section 5.3]{CR1}. By using the tensor product decomposition (\ref{dek-2}) we get the following result on fusion rules for $V_1( \frak g)$--modules: \begin{proposition} \label{non-gl11} Let $r_1, r_2, s_1, s_2 \in {\Bbb C}$, $s_1, s_2, s_1 + s_2 \notin {\Bbb Z}$. Assume that there is a non-trivial intertwining operator of type $$ { \widehat {\mathcal{V} } _{r_3,s_3} \choose \widehat {\mathcal{V} } _{r_1,s_1} \ \ \widehat {\mathcal{V} } _{r_2,s_2} } $$ in the category of $V_1(\frak g)$--modules. Then $s_3 = s_1 + s_2$, and $r_3 = r_1 + r_2$ or $r_3 = r_1 + r_2 -1$. \end{proposition}

Recall that the Clifford algebra is generated by $\psi(r), \psi^{*}(s)$, where $r, s \in \mathbb{Z} + \frac{1}{2}$, with relations \begin{gather} \begin{aligned} & [\psi(r), \psi^{*}(s)] = \delta_{r+s, 0}, \label{rel}\\ & [\psi(r), \psi(s)] = [\psi^{*}(r), \psi^{*}(s)] = 0, \text{ for all } r, s. \end{aligned} \end{gather} Note that the brackets in (\ref{rel}) are actually anticommutators, because both $\psi(r)$ and $\psi^{*}(s)$ are odd for every $r$ and $s$. The Clifford vertex algebra $F$ is generated by the fields \begin{align*} & \psi(z) = \sum_{n \in \mathbb{Z}} \psi(n+\frac{1}{2})z^{-n-1}, \\ & \psi^{*}(z) = \sum_{n \in \mathbb{Z}} \psi^{*}(n+\frac{1}{2})z^{-n-1}. \end{align*} As a vector space, \[F \cong \bigwedge\big(\big\{\psi(r), \psi^{*}(s)\ \big\vert\ r,s < 0\big\}\big).\] Let $ V_{\Bbb Z \gamma}$ be the lattice vertex algebra associated to the lattice $\Bbb Z \gamma\cong \Bbb Z $, $\langle \gamma, \gamma \rangle = 1$. By using the boson--fermion correspondence, we have that $F \cong V_{\Bbb Z \gamma}$, and we can identify the generators of the Clifford vertex algebra as follows (cf. \cite{K2}): \[ \psi:= e^{\gamma}, \quad \psi^{*} := e^{-\gamma}.\] Now we define the following vertex superalgebra: $$ S\Pi (0) = \Pi (0) \otimes F \subset V_L \otimes V_{{\Bbb Z} \gamma},$$ and its irreducible modules $$ S\Pi _r(\lambda) = \Pi_r (\lambda) \otimes F = S\Pi(0). e^{r \beta + \lambda (\alpha + \beta)}. $$ Let $\mathcal U = M \otimes F$. Using \cite[Section 5.8]{K2} we define the vectors \begin{eqnarray} && \Psi ^+ := e ^{\alpha + \beta + \gamma} = a(-1) \psi, \ \Psi^- :=- \alpha(-1) e ^{-\alpha - \beta - \gamma} = a^* (0) \psi ^*, \nonumber \\ && E := \gamma + \beta, \ N:= \frac{1}{2} (\gamma - \beta). \nonumber \end{eqnarray} Then the components of the fields $$ X (z) = Y(X , z) = \sum_{n \in {\Bbb Z}} X (n) z^{-n-1}, \ \ X\in \{ \Psi^+, \Psi^-, E, N\}$$ satisfy the commutation relations for the affine Lie superalgebra $\widehat {\frak g} = \widehat{gl(1 \vert 1)}$, so that $M \otimes F$ is a $\widehat {\frak g} $--module of level $1$. (See also \cite{A-2017} for a realization of $\widehat{gl(1 \vert 1)}$ at the critical level.)
The Sugawara conformal vector is \begin{eqnarray} \omega_{c=0} &= & \tfrac{1}{2} ( N (-1) E (-1) + E (-1) N (-1) - \Psi^+ (-1) \Psi^- (-1) + \nonumber \\ && \quad \Psi^- (-1) \Psi^+ (-1) + E(-1) ^2 ) {\bf 1} \label{expression-omega-1} \\ &=& \tfrac{1}{2} (\beta(-1) + \gamma(-1)) (\gamma(-1) - \beta(-1)) + \alpha(-1) ( \alpha (-1) + \beta(-1) + \gamma(-1)) \nonumber \\ & & - \tfrac{1}{2} ( ( \alpha (-1) + \beta(-1) + \gamma(-1) )^2 + (\alpha(-2) + \beta(-2) + \gamma(-2)) ) \nonumber \\ & & + \tfrac{1}{2} (\beta (-1) + \gamma(-1) ) ^2 + \tfrac{1}{2} (\beta (-2) + \gamma(-2) ) \nonumber \\ &= &\tfrac{1}{2} (\alpha(-1) ^2 - \alpha(-2) -\beta (-1) ^2 + \gamma(-1) ^2 ) \nonumber \\ &=&\omega_{c=-1} + \tfrac{1}{2} \gamma(-1) ^2 \quad (\omega_{c=-1} = \omega_{1/2}). \nonumber \end{eqnarray}

\subsection{Construction of irreducible $V_1 (\frak g)$--modules from irreducible $M$--modules.}
Let $V_1 (\frak g)$ be the simple affine vertex algebra of level $1$ associated to $\frak g$. We have the following gradation: $$ \mathcal U = \bigoplus \mathcal U ^{\ell}, \quad E(0) \vert _{ \mathcal U ^{\ell} } = \ell \ \mbox{Id}. $$ We will present an alternative proof of the following result: \begin{proposition} \label{simpl-gl11} \cite{K2} We have: $$ V_1 (\frak g) \cong \mathcal U^0 =\mbox{Ker}_{M \otimes F} E(0) . $$ \end{proposition} \begin{proof} Let $ \widetilde V_1 (\frak g)$ be the vertex subalgebra of $\mathcal U^0$ generated by $\frak g$. Assume that $ \widetilde V_1 (\frak g) \ne \mathcal U^0$. Then there is a subsingular vector $v_{r,s} \notin {\Bbb C}{\bf 1}$ for $\widehat {\frak g} $ of weight $(r, s)$ such that for $n > 0$: \begin{eqnarray} && \Psi ^+ (0) v_{r,s} \in \widetilde V_1 (\frak g), \quad X(n) v_{r,s} \in \widetilde V_1 (\frak g), \quad X \in \{ E, N, \Psi^{\pm}\},\nonumber \\ && E(0) v_{r,s} = s v_{r,s}, \ N(0) v_{r,s} = r v_{r,s}.\nonumber \end{eqnarray} In other words, $v_{r,s}$ is a singular vector in the quotient $\widetilde { \mathcal U ^0 }= \mathcal U ^0 / \widetilde V_1 (\frak g)$. Since $E(0)$ acts trivially on $\mathcal U ^{0}$, we conclude that $s=0$. Recalling the expression (\ref{expression-omega-1}) for the Sugawara conformal vector, we get that in $\widetilde { \mathcal U ^0 }$: $$ L^{c=0} (0) v_{r,0} = ( \omega_{c=0} )_1 v_{r,0} = \tfrac{1}{2} ( 2 N (0) E (0) - E(0) + E(0) ^2 ) v_{r,0} = 0. $$ This implies that $v_{r,0}$ has conformal weight $0$ and hence must be proportional to ${\bf 1}$, a contradiction. Therefore, $\mathcal U ^0 = \widetilde V_1 (\frak g)$. Since $\mathcal U^0$ is a simple vertex algebra, we have that $ \widetilde V_1 (\frak g) = V_1 (\frak g)$. \end{proof}

We can extend this irreducibility result to a wide class of weight modules. The proof is similar to the one given in \cite[Theorem 6.2]{A-2007}. \begin{theorem} \label{ired-general} Assume that $\mathcal N$ is an irreducible weight module for the Weyl vertex algebra $M$, such that $\beta(0)$ acts semisimply on $\mathcal N$: $$ \mathcal N = \bigoplus_{s \in {\Bbb Z} + \Delta} \mathcal N^s, \quad \beta(0) \vert_{ \mathcal N^s} \equiv s \, \mbox{Id} \quad (\Delta \in {\Bbb C}). $$ Then $\mathcal N \otimes F$ is a completely reducible $V_1(\frak g)$--module: $$ \mathcal N \otimes F = \bigoplus_{s \in {\Bbb Z}} \mathcal L_s(\mathcal N), \quad \mathcal L_s (\mathcal N) = \{ v \in \mathcal N \otimes F \ \vert \ E(0) v = ( s + \Delta) v \},$$ and each $ \mathcal L_s(\mathcal N) $ is an irreducible $V_1(\frak g)$--module. \end{theorem} \begin{proof} Clearly, $\mathcal L_s (\mathcal N)$ is a $\mathcal U^0 \ (= V_1(\frak g))$--module.
It suffices to prove that each vector $ w \in \mathcal L_s (\mathcal N) $ is cyclic. Since $ \mathcal N \otimes F$ is a simple $\mathcal U$--module, we have that $\mathcal U. w = \mathcal N \otimes F$. On the other hand, $ \mathcal N \otimes F $ is a ${\Bbb Z}$--graded $\mathcal U$--module, so that $$ \mathcal U ^{r} \cdot \mathcal L_{s} (\mathcal N) \subset \mathcal L_{r+ s}(\mathcal N) \quad (r, s \in {\Bbb Z}). $$ This implies that $\mathcal U^r . w \subset \mathcal L_{r+s} (\mathcal N)$ for each $r \in {\Bbb Z}$. Therefore $\mathcal U^0 . w = \mathcal L_{s} (\mathcal N)$. The proof follows. \end{proof}

As a consequence we get a family of irreducible $V_1 (\frak g)$--modules: \begin{corollary} Assume that $\lambda, \mu \in {\Bbb C} \setminus {\Bbb Z}$. Then for each $s \in {\Bbb Z}$ we have: \begin{itemize} \item[(1)] $\mathcal L_s (\widetilde{ U( \lambda)})$ is an irreducible $V_1(\frak g)$--module, \item[(2)] $\mathcal L_s (\widetilde{ U( \lambda, \mu )})$ is an irreducible $V_1(\frak g)$--module. \end{itemize} \end{corollary} We will prove in the next section that the modules $\mathcal L_s (\widetilde{ U( \lambda)})$ are irreducible highest weight modules, while one can see that the modules $\mathcal L_s (\widetilde{ U( \lambda, \mu )})$ have infinite-dimensional weight spaces. A detailed analysis of the structure of these modules will appear in our forthcoming papers (cf. \cite{AdP-2019}).

\section{The calculation of fusion rules}
In this section we will finish the calculation of fusion rules for the Weyl vertex algebra $M$. We will first identify certain irreducible highest weight $\widehat {\frak g} $--modules. \begin{lemma} \label{identif-lema} Assume that $r, n \in {\Bbb Z}$, $\lambda \in {\Bbb C} $, $\lambda+ n \notin {\Bbb Z}$. Then $e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }$ is a singular vector in $S\Pi _r(\lambda)$ and \begin{eqnarray} && U (\widehat {\frak g} ). e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } \cong \widehat{ \mathcal{V} }_{ r + \tfrac{1}{2} ( \lambda + n ), -\lambda -n} , \label{identif-h-w}\\ && L(0) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } = \frac{1}{2} (1-2 r) (n+\lambda) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }. \label{conf-w-m} \end{eqnarray} \end{lemma} \begin{proof} By using standard calculations in lattice vertex algebras we get, for $m \ge 0$, \begin{eqnarray} \Psi^+ (m ) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } &=& e^{\alpha+ \beta + \gamma}_{m} e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } = 0, \nonumber \\ \Psi^- (m+1) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } &=& \left( -\alpha(-1) e^{-\alpha- \beta - \gamma} \right)_{m+1} e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } = 0, \nonumber \\ E(m) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } &=& - (\lambda + n) \delta_{m ,0} e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }, \nonumber \\ N(m) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } &=& \frac{1}{2} ( 2 r + \lambda + n ) \delta_{m ,0} e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }. \nonumber \end{eqnarray} Therefore $e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } $ is a highest weight vector for $\widehat {\frak g} $ with highest weight $(r + \tfrac{1}{2} ( \lambda + n ), -\lambda -n)$ with respect to $(N(0), E(0))$. This implies that $U (\widehat {\frak g} ) . e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }$ is isomorphic to a certain quotient of the Verma module $\widehat{ \mathcal{V} }_{ r + \tfrac{1}{2} ( \lambda + n ), -\lambda -n} $.
But since $\lambda+ n \notin {\Bbb Z}$, the Verma module $\widehat{ \mathcal{V} }_{ r + \tfrac{1}{2} ( \lambda + n ), -\lambda -n} $ is irreducible, and therefore (\ref{identif-h-w}) holds. Relation (\ref{conf-w-m}) follows by applying the expression $\omega = \frac{1}{2} (\alpha(-1) ^2 - \alpha(-2) - \beta(-1) ^2 + \gamma(-1) ^2)$: \begin{eqnarray} && L(0) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } \nonumber \\ = && \frac{1}{2} \left( (\lambda+n) ^2 - (\lambda + n + r)^2 + r^2 + (\lambda + n) \right) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) } \nonumber \\ = && \frac{1}{2} (1-2 r) (n+\lambda) e ^{ r (\beta + \gamma) + (\lambda +n) (\alpha + \beta) }. \nonumber \end{eqnarray} \end{proof}

\begin{theorem} Assume that $r \in {\Bbb Z}$, $\lambda \in {\Bbb C} \setminus {\Bbb Z}$. Then we have: \begin{itemize} \item[(1)] $S\Pi _r(\lambda)$ is an irreducible $M \otimes F$--module, \item[(2)] $S\Pi _r(\lambda)$ is a completely reducible $\widehat{ gl(1\vert 1)}$--module: \begin{eqnarray} S\Pi _r(\lambda ) &\cong& \bigoplus _{ s \in {\Bbb Z} } U (\widehat {\frak g} ). e ^{ r (\beta + \gamma) + (\lambda +s) (\alpha + \beta) } \nonumber \\ &\cong& \bigoplus _{ s \in {\Bbb Z} } \widehat {\mathcal{V}} _{ r + \tfrac{1}{2} ( \lambda + s ), -\lambda -s} . \label{dec-int-12}\end{eqnarray} \end{itemize} \end{theorem} \begin{proof} The assertion (1) follows from the fact that $\Pi _r(\lambda)$ is an irreducible $M$--module (cf. Proposition \ref{ired-weyl-1}). Note next that the operator $E(0) = \beta (0) +\gamma(0)$ acts semi-simply on $M \otimes F$: $$ M \otimes F = \bigoplus _{s\in {\Bbb Z}} ( M \otimes F ) ^ {(s)}, \quad ( M \otimes F ) ^ {(s)} = \{ v \in M \otimes F \ \vert \ E(0) v = -s v \}. $$ In particular, $ ( M \otimes F ) ^ {(0)} \cong V_1 (\frak g)$ (cf. \cite{K2} and Proposition \ref{simpl-gl11}). But $E(0) $ also defines the following $\Bbb Z$--gradation on $S\Pi _r(\lambda)$: $$S\Pi _r(\lambda) = \bigoplus _{s\in {\Bbb Z}} S\Pi _r(\lambda) ^ {(s)}, \quad S\Pi _r(\lambda) ^ {(s)} = \{ v \in S\Pi _r(\lambda) \ \vert \ E(0) v = (-s - \lambda) v \}. $$ Applying Theorem \ref{ired-general} we see that each $S\Pi _r(\lambda) ^ {(s)}$ is an irreducible $(M \otimes F ) ^ {(0)} \cong V_1 (\frak g)$--module. Using Lemma \ref{identif-lema} we see that it is an irreducible highest weight $\widehat {\frak g} $--module with highest weight vector $e ^{ r (\beta + \gamma) + (\lambda +s) (\alpha + \beta) }$. The proof follows. \end{proof}

\begin{theorem} \label{fusion-rules-1} Assume that $\lambda_1, \lambda_2, \lambda_1 + \lambda_2 \in {\Bbb C}\setminus {\Bbb Z}$, $r_1, r_2, r_3 \in {\Bbb Z}$. Assume that there is a non-trivial intertwining operator of type $$ { S \Pi_{r_3} (\lambda_3) \choose S\Pi_{r_1} (\lambda_1) \ \ S\Pi_{r_2} (\lambda_2) } $$ in the category of $M\otimes F$--modules. Then $\lambda_3 = \lambda_1 + \lambda_2$, and $r_3 = r_1 + r_2$ or $r_3 = r_1 + r_2 - 1 $. \end{theorem} \begin{proof} Assume that $I$ is a non-trivial intertwining operator of type $$ { S \Pi_{r_3} (\lambda_3) \choose S\Pi_{r_1} (\lambda_1) \ \ S\Pi_{r_2} (\lambda_2) }. $$ Since the $S\Pi_r (\lambda)$ are simple $M \otimes F$--modules, we have that for every $s_1, s_2 \in {\Bbb Z}$: $$ I ( e ^{ r_1 (\beta + \gamma) + (\lambda_1 +s_1 ) (\alpha + \beta) } , z) e ^{ r_2 (\beta + \gamma) + (\lambda_2 +s_2 ) (\alpha + \beta) } \ne 0. $$ Here we use the well-known result which states that for every non-trivial intertwining operator $I$ between three irreducible modules we have that $I(v,z) w \ne 0$ (cf. \cite[Proposition 11.9]{DL}).
Note that $e ^{ r_i (\beta + \gamma) + \lambda_i(\alpha + \beta) }$ is a singular vector for $\widehat {\frak g} $ which generates the $V_1(\frak g)$--module $\widehat{\mathcal{V}}_{ r_i + \tfrac{1}{2} \lambda_i, -\lambda_i}$, $i=1,2$. The restriction of $I(\cdot,z)$ to $$ \widehat{\mathcal{V}}_{ r_1 + \tfrac{1}{2} \lambda_1, -\lambda_1} \otimes \widehat{\mathcal{V}}_{ r_2 + \tfrac{1}{2} \lambda_2, -\lambda_2} $$ gives a non-trivial intertwining operator $$ { S \Pi_{r_3} (\lambda_3) \choose \widehat{\mathcal{V}}_{ r_1 + \tfrac{1}{2} \lambda_1, -\lambda_1} \ \ \widehat{\mathcal{V}}_{ r_2 + \tfrac{1}{2} \lambda_2, -\lambda_2} } $$ in the category of $V_1(\frak g)$--modules. Proposition \ref{non-gl11} implies that then $$\widehat{\mathcal{V}}_{ r_1 + r_2 + \tfrac{1}{2} ( \lambda_1 + \lambda_2) , -\lambda_1- \lambda_2} \quad \mbox{or} \quad \widehat{\mathcal{V}}_{ r_1 + r_2 + \tfrac{1}{2} ( \lambda_1 + \lambda_2) -1 , -\lambda_1- \lambda_2} $$ has to appear in the decomposition of $S \Pi_{r_3} (\lambda_3) $ as a $V_1(\frak g)$--module. Using the decomposition (\ref{dec-int-12}) we get that there is $s \in {\Bbb Z}$ such that \begin{eqnarray} && r_1 + r_2 + \tfrac{1}{2} ( \lambda_1 + \lambda_2) = r_3 + \tfrac{1}{2} ( \lambda_3 + s ), \quad -\lambda_1- \lambda_2 = -\lambda_3 -s \label{jedn-prva} \end{eqnarray} or \begin{eqnarray} && r_1 + r_2- 1 + \tfrac{1}{2} ( \lambda_1 + \lambda_2) = r_3 + \tfrac{1}{2} ( \lambda_3 + s ), \quad -\lambda_1- \lambda_2 = -\lambda_3 -s . \label{jedn-druga} \end{eqnarray} The solution of (\ref{jedn-prva}) is $$ \lambda_3 + s= \lambda_1 + \lambda_2, \ r_3 =r_1 + r_2, $$ and that of (\ref{jedn-druga}) is $$ \lambda_3 + s= \lambda_1 + \lambda_2, \quad r_3 =r_1 + r_2-1.$$ Since $S\Pi _r(\lambda) \cong S\Pi _r(\lambda + s) $ for $ s \in {\Bbb Z}$, we can take $s = 0$. Thus, $\lambda_3 = \lambda_1 + \lambda_2$ and $r_3 = r_1 + r_2$ or $r_3 = r_1 + r_2 - 1$. The claim holds. \end{proof}

By using the following natural isomorphism of the spaces of intertwining operators (cf. \cite[Section 2]{ADL}): $$ \mbox{I}_{M \otimes F} { S \Pi_{r_3} (\lambda_3) \choose S\Pi_{r_1} (\lambda_1) \ \ S\Pi_{r_2} (\lambda_2) } \cong \mbox{I}_{M } { \Pi_{r_3} (\lambda_3) \choose \Pi_{r_1} (\lambda_1) \ \ \Pi_{r_2} (\lambda_2) },$$ Theorem \ref{fusion-rules-1} implies the following fusion rules result in the category of modules for the Weyl vertex algebra $M$ (see also \cite[Corollary 6.7]{RW} for a derivation of the same fusion rules using the Verlinde formula). \begin{corollary} Assume that $\lambda_1, \lambda_2, \lambda_1 + \lambda_2 \in {\Bbb C}\setminus {\Bbb Z}$, $r_1, r_2, r_3 \in {\Bbb Z}$. There exists a non-trivial intertwining operator of type $$ { \Pi_{r_3} (\lambda_3) \choose \Pi_{r_1} (\lambda_1) \ \ \Pi_{r_2} (\lambda_2) } $$ in the category of $M$--modules if and only if $\lambda_3 = \lambda_1 + \lambda_2$, and $r_3 = r_1 + r_2$ or $r_3 = r_1 + r_2 - 1 $. The fusion rules in the category of weight $M$--modules are given by $$ \Pi_{r_1} (\lambda_1) \times \Pi_{r_2} (\lambda_2) = \Pi_{r_1+r_2 } (\lambda_1+ \lambda_2) + \Pi_{r_1+ r_2-1} (\lambda_1 + \lambda_2). $$ \end{corollary}
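\begin{remark} It may be instructive to check that this corollary matches the fusion rules stated in the Introduction. By Proposition \ref{ired-weyl-1}(1) we have $\Pi_{r}(\lambda) \cong \rho_{1-r} ( \widetilde{U(-\lambda)})$. Substituting $\ell_i = 1 - r_i$ and $\mu_i = -\lambda_i$, $i=1,2$, and noting that $1 - r_1 - r_2 = \ell_1 + \ell_2 - 1$ and $2 - r_1 - r_2 = \ell_1 + \ell_2$, the fusion rule above becomes $$ \rho_{\ell_1} (\widetilde{U(\mu_1)}) \times \rho_{\ell_2} (\widetilde{U(\mu_2)}) = \rho_{\ell_1 + \ell_2 } (\widetilde{U(\mu_1 + \mu_2)}) + \rho_{\ell_1 + \ell_2 - 1} (\widetilde{U(\mu_1 + \mu_2)}), $$ with $\mu_1, \mu_2, \mu_1 + \mu_2 \in {\Bbb C} \setminus {\Bbb Z}$. This is exactly the fusion rule (\ref{fus-uv-2}). \end{remark}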
\section*{supplementary material}
See online Supplementary Material for further details on PEEM measurements, for the sample fabrication procedure and for the determination of the SiO$_2$, TiO$_2$ and hBN VB offsets.
\section*{acknowledgement}
S. U. acknowledges financial support from VILLUM FONDEN (Grant No. 15375). R. J. K. is supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD). D. S. acknowledges financial support from the Netherlands Organisation for Scientific Research under the Rubicon Program (Grant 680-50-1305). The Advanced Light Source is supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was supported by IBS-R009-D1. The work at NRL was supported by core programs and the Nanoscience Institute.
\section{Introduction}
The stellar initial mass function (IMF) describes the mass spectrum of stars at birth and plays a fundamental role in our understanding of galaxies. From the stellar feedback \citep[e.g.][]{Thales18,Barber18} to the chemical enrichment \citep[e.g.][]{Ferreras15,MN16,Philcox18,Barber18b}, baryonic processes in the Universe ultimately depend on the IMF. Moreover, observationally, our interpretation of the electromagnetic spectrum of unresolved stellar populations is heavily sensitive to the IMF. Beyond a few Mpc, where the properties of individual stars can no longer be resolved and measured, translating spectro-photometric measurements into physical quantities requires strong assumptions on the shape of the IMF. From star formation rate \citep[e.g.][]{Kennicutt98,Madau14} and stellar mass measurements \citep[e.g.][]{Mitchell13,Courteau14,Bernardi18} to chemical enrichment predictions \citep[e.g.][]{McGee14,Clauwens16}, the shape of the IMF has to be either fixed or modelled.

The pioneering work of \citet{Salp:55} showed that the IMF in the Milky Way can be described by a power law \begin{equation} \label{eq:imf} \Phi(\log M) = d N / d \log M \propto M^{-\Gamma} \end{equation} \noindent with a slope $\Gamma = 1.35$ for stellar masses above 1M$_\odot$. These initial measurements were later extended to lower-mass stars (M$\lesssim 0.5$M$_\odot$), where the slope of the IMF was found to be flatter ($\Gamma\sim0$) than for massive stars \citep[e.g.][]{Miller79}. The seminal works of \citet{mw,Kroupa} and \citet{Chabrier} consolidated the idea of a universal, Milky Way-like IMF, independent of the local star formation conditions \citep[e.g.][]{bastian}.

IMF measurements are, however, not limited to the relatively small local volume around the Milky Way. In unresolved stellar populations, two main approaches have been developed to constrain the IMF shape. The first approach makes use of the effect of the IMF on the mass-to-light (M/L) ratio. Low-mass stars constitute the bulk of stellar mass in galaxies, but high-mass stars dominate the light budget. Hence, a change in the IMF slope (i.e. in the relative number of low-mass to massive stars) will have a measurable impact on the expected M/L, and the shape of the IMF can be estimated by independently measuring the total mass (through dynamics or gravitational lensing) and luminosity of a stellar population \citep[see e.g.][]{Treu,thomas11}. This technique, however, suffers from a strong degeneracy between dark matter halo mass and IMF slope \citep[e.g.][]{auger}. Alternatively, the shape of the IMF can also be measured in unresolved stellar populations by analyzing their integrated absorption spectra. In particular, IMF-sensitive features subtly vary, at fixed effective temperature, with the surface gravity of stars, and therefore can be used to measure the dwarf-to-giant ratio, i.e., the slope of the IMF \citep[e.g.][]{vandokkum}. Although more direct than the dynamical/lensing approach, measuring the IMF from integrated spectra is observationally challenging, as the low-mass star contribution to the observed spectrum is usually outshined by the flux emitted by more massive stars.

The relative simplicity of the stellar populations (i.e. roughly coeval) in massive early-type galaxies (ETGs) has made them benchmark test cases to study IMF variations beyond the Local Group.
In addition, massive ETGs are more metal-rich, denser and have experienced more intense star formation events than the Milky Way, and thus, the universality of the IMF shape can be tested under much more extreme conditions. Over the last decade, IMF studies in ETGs have consistently indicated a non-universal IMF shape in massive ETGs, as the IMF slope becomes steeper (i.e. with a larger fraction of low-mass stars) with increasing galaxy mass. It is important to note that ETGs host very old stellar populations and thus, IMF measurements in these objects are restricted to long-lived stars, with masses $m\lesssim1$M$_\odot$, and therefore insensitive to variations in the high-mass end of the IMF. In general, the agreement between dynamics-based \citep{auger,cappellari,Dutton12,wegner12,Tortora13b,Lasker13,Tortora14,Corsini17} and stellar population-based \citep{vandokkum,spiniello12,Spiniello2013,Spiniello15,conroy12,Smith12,ferreras,labarbera,Tang17} studies supports a systematic variation in the IMF of massive ETGs. However, dynamical and stellar population studies do not necessarily agree on the details \citep[see e.g.][]{smith,Smith13,Newman17}. This seems to suggest that both approaches might be effectively probing different mass-scales of the IMF, although differences between dynamical and stellar population-based studies are not as striking after proper modeling of the systematics \citep{Lyubenova16}.

IMF variations with galaxy mass, however, provide limited information about the process(es) shaping the IMF as, in general, galaxy properties tend to scale with galaxy mass. Therefore, galaxy-wide IMF variations may be equally attributed to a number of different mechanisms \citep[e.g.][]{conroy12,labarbera,LB15}. This observational degeneracy can be partially broken by analyzing how the IMF shape changes as a function of radius, since different parameters like velocity dispersion, stellar density, metallicity or abundance pattern vary differently with galactocentric distance. Since first measured in the massive galaxy NGC\,4552 \citep{MN15a}, radial IMF gradients have been widely found in a large number of massive ETGs \citep{MN15b,LB16,Davis17,vdk17,Oldham,Parikh,Sarzi18,Vaughan18a}. These spatially resolved IMF studies have shown how IMF variations occur in the inner regions of massive ETGs, and that metallicity seems to be the local property that better tracks the observed IMF variations \citep{MN15c}. However, it is not clear whether the observed correlation with metallicity is sufficient to explain all IMF variations \citep[e.g.][]{Alexa17}, or even if there are massive ETGs with Milky Way-like IMF slopes \citep{McConnell16,Zieleniewski17,Alton18,Vaughan18b}.

The aim of this work is to take a step further in the observational characterization of the IMF by analyzing its two-dimensional (2D) variation in the massive ($M_B=-20.3$), fast-rotating ETG FCC\,167 (NGC\,1380), as part of the Fornax 3D project (F3D). Taking advantage of the unparalleled capabilities of the Multi Unit Spectroscopic Explorer (MUSE) integral-field spectrograph \citep{Bacon10}, we present here a 2D analysis of the stellar population properties of FCC\,167, showing for the first time the IMF map of a massive ETG. The paper is organized as follows: in \S~\ref{sec:data} we briefly present the data. Stellar population model ingredients are described in \S~\ref{sec:model}.
\S~\ref{sec:fif} contains a detailed explanation of the stellar population modeling and fitting method, and in \S~\ref{sec:results} the results of the stellar population analysis of FCC\,167 are presented. In \S~\ref{sec:discussion} we discuss our findings. Finally, in \S~\ref{sec:summary} we summarize the main conclusions of this work, briefly describing future IMF efforts within the Fornax 3D project.

\section{Data} \label{sec:data}
We based our stellar population analysis on MUSE data from F3D described in \citet{f3d}. In short, F3D is an IFU survey of 33 bright ($m_B < 15$) galaxies selected from the Fornax Cluster Catalog \citep[FCC, ][]{Ferguson89} within the virial radius of the Fornax cluster \citep{Drinkwater01}. The survey was carried out using the Wide Field Mode of the MUSE IFU \citep{Bacon10}, which provides a 1$\times$1 arcmin$^2$ field-of-view at a 0.2 arcsec pixel$^{-1}$ spatial scale. The wavelength range covers from 4650\AA \ to 9300\AA, with a spectral sampling of 1.25 \AA \ pixel$^{-1}$ and a nominal resolution of FWHM = 2.5\AA \ at $\lambda=7000$\AA.

This work is focused on the ETG (S0/a) galaxy FCC\,167, \mbox{located} at a distance of 21.2 Mpc \citep{Blakeslee09} and at 222 kpc from NGC\,1399, the brightest galaxy in the cluster. The total stellar mass of FCC\,167 is $9.85\times10^{10} M_\odot$, with an effective radius of $R_e=6.17$ kpc (60 arcsec) in the $i$ band \citep{Enrica18}. \citet{f3d} found that FCC\,167 has two embedded (thin and thick) discs based on its orbital decomposition. The thin disc is clearly seen in the photometric structure, along with some other interesting features. Nebular gas emission is present in the central regions \citep{Viaene19}, and the exquisite spatial resolution of the MUSE data allows us to systematically detect and characterise planetary nebulae within the MUSE field-of-view \citep{f3d}.

The final F3D data cube of FCC\,167 combines three different pointings, covering from the center of the galaxy out to $\sim$4 R$_e$, with a total exposure time per pointing of $\sim 1.2$ hours. Data reduction was done using the MUSE pipeline \citep{Weilbacher16} running under the ESO Reflex environment \citep{Freudling13}. The initial sky subtraction was done using either dedicated sky exposures or IFU spaxels free from galactic flux at the edge of the MUSE field of view. In a later step, the sky subtraction process was further improved by using the Zurich Atmospheric Purge algorithm \citep{Soto16}. More details on the observational strategy and data reduction process are given in the F3D presentation paper \citep{f3d}.

In order to measure IMF variation in the absorption spectra of galaxies it is necessary to accurately model variations of a few percent in the depth of gravity-sensitive features. We achieved the required precision level by spatially binning the MUSE data into Voronoi bins with a minimum signal-to-noise ratio (S/N) of $\sim$100 per bin \citep{voronoi}. This S/N threshold is similar to that used in previous IMF studies, but the exquisite spatial resolution and sensitivity of MUSE, combined with the 2D information, provide an unprecedented number of spatial bins: while IMF gradients have usually been measured using $\sim$ 10 data points \citep[e.g.][]{MN15a,vdk17}, our S/N=100 binned cube of FCC\,167 consists of more than 6000 independent Voronoi bins.
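For readers who wish to reproduce this step, the binning can be sketched with the {\tt vorbin} package, which implements the \citet{voronoi} algorithm. The snippet below is an illustration written for this description rather than the actual F3D pipeline: the cube file name, the use of the median flux as the signal estimate, and the simple co-addition of spectra are simplifying assumptions.
\begin{verbatim}
# Minimal sketch of the Voronoi binning step (not the F3D pipeline).
# Assumes the vorbin package (Cappellari & Copin 2003) and a reduced
# MUSE cube "FCC167_cube.fits" (hypothetical name) with flux and
# variance extensions of shape (n_wave, ny, nx).
import numpy as np
from astropy.io import fits
from vorbin.voronoi_2d_binning import voronoi_2d_binning

with fits.open("FCC167_cube.fits") as hdu:
    flux = hdu[1].data
    var = hdu[2].data

# Crude per-spaxel signal and noise from the wavelength axis
signal = np.nanmedian(flux, axis=0)
noise = np.sqrt(np.nanmedian(var, axis=0))

ny, nx = signal.shape
yy, xx = np.mgrid[:ny, :nx]
good = np.isfinite(signal) & (noise > 0)

# Bin the spaxels to the target S/N ~ 100 per bin
bin_num, x_node, y_node, x_bar, y_bar, sn, n_pix, scale = \
    voronoi_2d_binning(xx[good], yy[good], signal[good],
                       noise[good], 100, plot=False, quiet=True)

# Co-add the spectra of all spaxels assigned to each bin
binned = np.zeros((flux.shape[0], bin_num.max() + 1))
for b, i, j in zip(bin_num, yy[good], xx[good]):
    binned[:, b] += flux[:, i, j]
\end{verbatim}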
\section{Stellar population model ingredients} \label{sec:model} Our stellar population analysis relies on the most recent version of the MILES evolutionary stellar population synthesis models \citep{Vazdekis15}. These models are fed with the MILES stellar library of \citet{Pat06}, with a constant spectral resolution of 2.51 \AA \ \citep[FWHM, ][]{Jesus11}. The main difference between the \citet{Vazdekis15} single stellar population (SSP) models and previous MILES models \citep[e.g.][]{miles} is the treatment of the $\alpha$-element abundance ratio. In this new set of models, MILES stars are used to populate BaSTI isochrones \citep{basti1,basti2}, which were explicitly calculated at [$\alpha$/Fe]=0.4 and at the solar scale ([$\alpha$/Fe] = 0.0). To compute [$\alpha$/Fe] = 0.0 models at low metallicities, a regime where MILES stars are $\alpha$-enhanced \citep{Milone11}, a theoretical differential correction was applied on top of the fully-empirical SSP. The same procedure was followed to generate [$\alpha$/Fe]=0.4 SSP models at high metallicities. Therefore, this new version of the MILES models allows for a self-consistent treatment of the abundance pattern. In addition to variable [$\alpha$/Fe] (0.0 and +0.4), the \citet{Vazdekis15} MILES models fed with BaSTI cover a range in metallicity ([M/H]) from $-2.27$ to $+0.26$\footnote{Note that, although there are MILES/BaSTI models at [M/H]=+0.4, these predictions are not considered {\it safe} and therefore we do not include them as templates in our analysis.} and ages from 0.03 Gyr to 14 Gyr. The wavelength range of the [$\alpha$/Fe]-variable models is relatively short \citep{Vazdekis16}, from $\lambda$=3540 \AA \ to $\lambda$=7410 \AA, but it contains enough gravity-sensitive features to safely measure the effect of the IMF. For the IMF functional form, we assumed the so-called {\it bimodal} shape \citep{vazdekis96}. In this parametrization, the IMF is varied through the (logarithmic) high-mass end slope $\Gamma_\mathrm{B}$. The main difference between this IMF shape and a single power-law parametrization is that, for masses below $\sim0.5$M$_\odot$, the bimodal IMF flattens. This feature allows for a better agreement with dynamical IMF measurements \citep{Lyubenova16} while recovering a Milky Way-like behavior for $\Gamma_\mathrm{B} = 1.3$. Although $\Gamma_\mathrm{B}$ formally controls the high-mass end slope, the number of low-mass stars is effectively varied at the same time, since the integral of the IMF is normalized to 1 M$_\odot$. Note that a variation in the high-mass end of the IMF as presented here would be in tension with the observed chemical composition of massive ETGs, unless the slope of the IMF also changes with time \citep[][]{weidner:13,Ferreras15,MN16}. However, our stellar population analysis of FCC\,167 is completely insensitive to this potential issue (see details below). \subsection{IMF in quiescent galaxies: the $\xi$ parameter} Understanding IMF variations in unresolved, old stellar populations from optical and near-IR spectra requires the acknowledgement of two empirical limitations. First, only stars less massive than $\sim1$M$_\odot$ contribute to the light budget, and hence IMF measurements from integrated spectra are insensitive to variations of the high-mass end slope. Second, the contribution from very low-mass stars close to the hydrogen-burning limit ($m\sim0.1$M$_\odot$) is virtually unconstrained unless very specific near-IR spectral features are targeted \citep{conroy12,conroy17}.
The lack of constraints on the number of very low-mass stars explains why a single power-law IMF parametrization fits the observed spectra of massive ETGs equally well \citep{labarbera,Spiniello2013}, but dramatically overestimates the expected M/L ratios \citep{ferreras,Lyubenova16}. In practice, these two limitations imply that stellar population-based IMF measurements are mostly sensitive to the IMF slope for stars with masses 0.2$\lesssim m \lesssim$1M$_\odot$, regardless of the adopted IMF parametrization. Given the rising number of stellar population-based IMF measurements and the need for an unbiased comparison among them, we introduce here a new quantity, $\xi$, which is virtually independent of the IMF parametrization. The $\xi$ parameter, quantifying the mass fraction locked in low-mass stars, is defined as follows \begin{equation}\label{eq:shape} \xi \equiv \frac{\int_{m=0.2}^{m=0.5} \Phi(\log m) \ dm}{\int_{m=0.2}^{m=1} \Phi(\log m) \ dm} = \frac{\int_{m=0.2}^{m=0.5} m \cdot X(m) \ dm}{\int_{m=0.2}^{m=1} m \cdot X(m) \ dm} \end{equation} \noindent where in the second equality the IMF, $X(m)$, is expressed in linear mass units. This is similar to the definition of F$_{0.5}$ in \citet{labarbera}, but $\xi$ is normalized only to the mass contained in stars below 1M$_\odot$, while \citet{labarbera} normalized to the mass in stars below 100 M$_\odot$. Hence, $\xi$ does not depend on the number of massive stars, which in fact cannot be measured from the absorption spectra of ETGs. Moreover, since the denominator in Eq.~\ref{eq:shape} is roughly equivalent to the total stellar mass for old stellar populations, $\xi$ offers a quick conversion factor to transform the observed stellar mass of a galaxy into the total mass in low-mass stars. This definition of $\xi$ explicitly takes into account the fact that very low-mass stars ($m\lesssim0.2$M$_\odot$) are not strongly constrained by most optical/near-IR spectroscopic data. Note also that $\xi$ does not account for the amount of mass locked in stellar remnants. This {\it dark} stellar mass is only measurable through dynamical studies, and it heavily depends on the shape and slope of the high-mass end of the IMF. The $\xi$ parameter, as defined by Eq.~\ref{eq:shape}, is therefore a useful quantity to compare different IMF measurements, as shown in \S~\ref{sec:fif}, and it can even be applied to IMF functional forms without a well-defined low-mass end IMF slope \citep[e.g.][]{Chabrier14,conroy17}. Table~\ref{table:imf} shows how commonly used IMF functional forms can be translated into $\xi$ mass ratios. \begin{table} \caption{\label{table:imf} Conversion coefficients between commonly used IMF shapes and $\xi$ mass ratios. For a given IMF slope $\gamma$, the corresponding $\xi$ can be accurately approximated by a polynomial $\xi(\gamma) = c_0 + c_1\gamma + c_2\gamma^2 + c_3\gamma^3$.} \centering \begin{tabular}{cccc} \multicolumn{4}{c}{Single-power law ($\Gamma$)\tablefootmark{a}} \\ \hline\hline $c_0$ & $c_1$ & $c_2$ & $c_3$ \\ \hline 0.3751 & 0.1813 & 0.0223 & -0.0095 \\ \multicolumn{4}{c}{ } \\ \multicolumn{4}{c}{Broken-power law ($\Gamma_\mathrm{B}$)} \\ \hline\hline $c_0$ & $c_1$ & $c_2$ & $c_3$ \\ \hline 0.3739 & 0.1269 & 0.0000 & -0.0014 \\ \hline \end{tabular} \tablefoot{For a \citet{Chabrier} IMF, $\xi=0.4607$; for \citet{Kroupa}, $\xi=0.5194$; and for \citet{Salp:55}, $\xi=0.6370$. \\ \tablefoottext{a}{$\Gamma$ is in $\log$ units.
In linear mass units, the IMF slope is $\alpha=\Gamma+1$.} } \end{table} \section{Full-Index-Fitting: a novel approach}~\label{sec:fif} \begin{figure*} \centering \includegraphics[width=9cm]{Mgb5177_line.pdf} \includegraphics[width=9cm]{TiO2_line.pdf} \caption{The full-index-fitting (FIF) approach. The top panels show the Mgb\,5177 (left) and the TiO$_2$ (right) spectral features, normalized using the index pseudo-continua (blue shaded regions). In the FIF approach, every pixel within the central bandpass (grey area) is fitted to measure the stellar population parameters. The black line corresponds to a model with solar metallicity ([M/H]=0), [Mg/Fe]=0, [Ti/Fe]=0, and a Kroupa-like IMF, at a resolution of 200 \hbox{km s$^{-1}$}. The FIF approach breaks the degeneracies more efficiently than the standard line-strength analysis since every pixel responds differently to changes in the stellar population properties. Colors in the bottom panels show the relative change in the spectrum after varying different stellar population parameters. For reference, the IMF slope was varied by $\Delta \Gamma_\mathrm{B}=1$, the metallicity and abundance ratios by 0.2 dex, and the age by 2 Gyr.} \label{fig:indices} \end{figure*} Measuring detailed stellar population properties, and in particular IMF variations, from the absorption spectra of unresolved stellar populations requires a precise and reliable comparison between stellar population models and data. Two main approaches are usually followed. The standard line-strength analysis makes use of the equivalent width of well-defined spectral features to derive the stellar population properties of a given spectrum \citep[e.g.][]{Burstein84}. The main advantage of this method is that it focuses on relatively narrow spectral regions where the bulk of the information about the stellar population properties is encoded. Moreover, these narrow spectral features have been thoroughly studied and their dependence on the different stellar population parameters has been extremely well characterized \citep[e.g.][]{Worthey94,cat,TMB:03,Schiavon07,johansson12}. However, usually only a handful of these indices can be analysed simultaneously, which may lead to degeneracies in the recovered stellar population properties \citep{Pat11}. Additionally, thanks to the development of intermediate-resolution stellar population models, full spectral fitting techniques are now widely adopted \citep[e.g.][]{CF05,Ocvirk06,Conroy09,Cappellari17,Wilkinson17}. Instead of focusing on specific absorption features, this second approach aims to fit every pixel across a relatively wide wavelength range ($\sim1000$\AA). Since every pixel is treated as an independent measurement, S/N requirements are lower than for a line-strength analysis, and at a given S/N ratio, degeneracies in the recovered stellar population parameters tend to be weaker using full spectral fitting algorithms \citep{Pat11}. Despite these clear advantages over the use of line-strength indices, full spectral fitting involves a larger number of free parameters \citep[e.g.][]{Conroy18}, increasing the computational cost. Moreover, since every pixel in the spectrum is treated equally, the information about the stellar population properties, concentrated in narrow features, might get diluted \citep{labarbera}. A hybrid approach, in between line-strength analysis and full spectral fitting, is also possible.
The idea consists of selecting key absorption features, where the information about the stellar population properties is concentrated. Then, instead of calculating the equivalent widths, every pixel within the feature is fitted after normalizing the continuum using the index definition \citep{MN15d}. In practice, this is a generalization of the line-strength analysis: while in the standard approach the equivalent width is measured with respect to a well-defined continuum, the hybrid method quantifies the depth of each pixel with respect to the same continuum definition. Fig.~\ref{fig:indices} shows examples of the normalized Mgb\,5177 and TiO$_2$ spectral features. This hybrid approach, or full index fitting (FIF), presents significant advantages. First, only specific spectral features are fitted, where the information is concentrated and the behavior of the stellar population properties is well determined. This allows for a low number of free parameters, as in the standard line-strength analysis, reducing the computational time. This is a key feature given the large number of spatial bins provided by the MUSE spectrograph. Moreover, since the continuum is fitted by a straight line using the index definition, the FIF approach is insensitive to large-scale flux calibration issues in the data. Compared with the line-strength analysis, the main advantage of the FIF method is that the number of independent observables is significantly increased. For example, in a standard index-index diagram (e.g. $H_\beta$ vs Mgb\,5177), only two measurements are compared to the model predictions. However, using FIF, the same two indices lead to more than $\sim100$ measurements (at the MILES resolution). In practice, adjacent pixels in the observed spectrum of a galaxy are correlated, so the effective improvement in S/N does not necessarily scale as $\sqrt{N_{pix}}$. In addition to these practical advantages, the use of FIF has a crucial characteristic: each pixel in a given spectral feature depends differently on the stellar population parameters, as shown in Fig.~\ref{fig:indices}. This implies not only that the S/N requirements are lower, but also that degeneracies among stellar population properties become weaker (see Fig.~\ref{fig:corner}). \subsection{Application to F3D data} \label{sec:f3ddata} In order to apply the FIF method to the F3D data of FCC\,167, we first measured the stellar kinematics (mean velocity and velocity dispersion) of each individual spatial bin (see \S~\ref{sec:data}) using the pPXF code presented in \citet{ppxf}. For consistency, we fed pPXF with the same set of MILES models used for the stellar population analysis. Spectral regions potentially contaminated by ionized gas emission were masked in this first step of the fitting process. Note that we did not use the kinematics described in the survey presentation paper \citep{f3d}, as we made different assumptions in the modeling process. An important limitation of the MUSE spectrograph is the relatively {\it red} wavelength coverage, which starts at $\lambda=$4650\AA, and consequently, the only reliable age-sensitive feature in the observed wavelength range is the H$_\beta$ line. However, H$_\beta$ is known to depend not only on the age but also on some other stellar population properties. In particular, H$_\beta$ shows a significant sensitivity to the [C/Fe] abundance ratio \citep{conroy,LB16}. Unfortunately, there are no prominent C-sensitive features in the MUSE data, making the use of H$_\beta$ unreliable.
We overcome this problem by measuring the luminosity-weighted age of FCC\,167 using the pPXF regularization scheme \citep{Cappellari17}, which can provide robust stellar population measurements \citep{McDermid15}. Given the sensitivity of the recovered star formation histories to the assumed IMF slope \citep{FM13}, we regularized over the age--metallicity--IMF slope parameter space, and then we fixed the pPXF best-fitting age throughout the rest of the stellar population analysis. Given the wavelength coverage of the [$\alpha$/Fe]-variable MILES models ($\lambda\lambda = 3540-7410$\AA), we based our stellar population analysis of FCC\,167 on features bluewards of $\lambda\sim7000$\AA. This wavelength range shows a wide variety of stellar population-sensitive features and is much less affected by telluric absorption than the near-IR regime. To constrain the metallicity and [$\alpha$/Fe] ratio, we focused on the Fe\,5270, Fe\,5335, and Mgb\,5177 indices \citep{trager}. Although all $\alpha$ elements are varied in lock-step in the MILES models, our only [$\alpha$/Fe]-sensitive feature is the Mgb\,5177 absorption feature. Hence, it is only the [Mg/Fe] abundance ratio which is constrained by our analysis. In the MUSE wavelength range, the most important IMF-sensitive features are the aTiO \citep{Jorgensen94}, TiO$_1$, and TiO$_2$ absorptions \citep{serven}. We therefore also included the effect of [Ti/Fe] as an additional free parameter using the response functions of \citet{conroy}. The [Ti/Fe] is not treated in the same way as the [Mg/Fe] ratio, as the same response function is assumed irrespective of the value of the other stellar population parameters \citep[see][]{Spiniello15}. Effectively, [Ti/Fe] has a relatively mild effect on the selected features. Finally, because the effect of [C/Fe] in our set of features balances out that of the [$\alpha$/Fe] \citep{LB16}, and both ratios are expected to track each other \citep{johansson12}, we neglect the [$\alpha$/Fe] sensitivity of the MILES models beyond $\lambda=5400$\AA. In short, we follow the FIF stellar population fitting approach described in \S~\ref{sec:fif}, focusing on six spectral features (Fe\,5270, Fe\,5335, Mgb\,5177, aTiO, TiO$_1$, and TiO$_2$). We fit for four stellar population parameters, namely metallicity, [Mg/Fe], [Ti/Fe], and IMF slope ($\Gamma_\mathrm{B}$). The age was fixed to that measured using pPXF, and the effect of the [Mg/Fe] was only considered for wavelengths $\lambda\leq5400$\AA. The implications of these assumptions for the recovered stellar population parameters are shown and discussed in \S~\ref{sec:etg}. In order to compare models and data, we implemented the same scheme as in \citet{MN18}. We used the {\it emcee} Bayesian Markov chain Monte Carlo sampler \citep{emcee}, powered by the Astropy project \citep{astropya,astropyb}, to maximize the following objective function \begin{equation}~\label{eq:min} \ln ({\bf O} \, | \, {\bf S} ) = -\frac{1}{2} \mathlarger{\sum}_n \bigg[ \frac{(\mathrm{O}_n - \mathrm{M}_n)^2}{\sigma_n^2}-\ln \frac{1}{\sigma_n^2}\bigg] \end{equation} \noindent where {\bf S} = $\lbrace\mathrm{\Gamma_B,[M/H],[Mg/Fe],[Ti/Fe]}\rbrace$. The summation extends over all the pixels within the band-passes of the selected spectral features. O$_n$ and M$_n$ are the observed and the model flux\footnote{This model flux is obtained by linearly interpolating a grid of MILES models.} of the $n$th pixel, and $\sigma_n$ the measured uncertainty.
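To illustrate how Eq.~\ref{eq:min} can be evaluated and sampled with {\it emcee}, consider the minimal sketch below. Everything in it is a placeholder: the \texttt{model\_flux} function stands in for the actual linear interpolation of the MILES grid, and the observed fluxes and uncertainties are simulated rather than taken from the MUSE data.
\begin{verbatim}
import numpy as np
import emcee

# Synthetic stand-ins for the pixels within the selected
# band-passes of one Voronoi bin.
rng = np.random.default_rng(0)
n_pix = 120
flux_obs = 1.0 + 0.01 * rng.standard_normal(n_pix)
sigma_obs = np.full(n_pix, 0.01)

def model_flux(theta):
    # Placeholder for the interpolator over the MILES grid at
    # theta = (Gamma_B, [M/H], [Mg/Fe], [Ti/Fe]).
    return 1.0 + 1e-3 * theta @ np.ones((4, n_pix))

def log_prob(theta, obs, sig):
    # Gaussian log-likelihood of Eq. (eq:min).
    model = model_flux(theta)
    return -0.5 * np.sum((obs - model)**2 / sig**2
                         - np.log(1.0 / sig**2))

ndim, nwalkers = 4, 32
p0 = np.array([1.3, 0.0, 0.0, 0.0]) \
     + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(flux_obs, sigma_obs))
sampler.run_mcmc(p0, 2000)
\end{verbatim}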
Eq.~\ref{eq:min} is therefore just a Gaussian likelihood function, where the distance between data and models (scaled by the expected uncertainty) is minimized. For FCC\,167, model predictions M$_n$ were calculated at a common resolution of 250 \hbox{km s$^{-1}$} \ so they had to be calculated only once and not for every spatial bin. This resolution corresponds to the lowest one measured in FCC\,167. Before comparing models and data, every MUSE spectrum was smoothed to match the 250 \hbox{km s$^{-1}$} resolution of the models using the velocity dispersion measurements from pPXF. Fig.~\ref{fig:corner} shows how our approach is able to break the degeneracies among the different stellar population parameters when applied to the F3D data of FCC\,167. \begin{figure} \centering \includegraphics[width=\hsize]{FCC167_popfit_B4068_chemi_corner.pdf} \caption{Best-fitting stellar population properties. Posterior distributions for one of the spatial bins of FCC\,167, showing how our FIF approach is able to recover the stellar population properties with high precision, breaking the degeneracies among them. Vertical red lines indicate the best-fitting value (solid line) and the 1$\sigma$ uncertainty (dashed lines). These values are quoted on top of the posterior distributions. The luminosity-weighted age of this bin is 12.7 Gyr.} \label{fig:corner} \end{figure} Thanks to the Bayesian fitting scheme described above, we could further improve the robustness of our results by imposing informative priors. In particular, we used the best-fitting metallicity and IMF values from the regularized pPXF fit as Gaussian priors on the final solution. This final improvement does not bias the solution, but it increases the stability of the recovered stellar population parameters, as shown in Fig.~\ref{fig:sdss}. We assumed flat priors for the other free parameters (e.g. [Mg/Fe] and [Ti/Fe]). \subsection{ETG scaling relations}~\label{sec:etg} \begin{figure} \centering \includegraphics[width=8.9cm]{SDSS_trend_v2.pdf} \caption{From top to bottom, this figure shows the metallicity, [Mg/Fe], and IMF ($\xi$) scaling relations with galaxy velocity dispersion. The recovered trends agree with expectations from previous studies. In particular, the measured IMF-$\sigma$ relation is remarkably close to the results of \citet{labarbera} and \citet{Spiniello2013}, even though they rely on different model assumptions. Moreover, the use of $\xi$ as a proxy for the low-mass end IMF slope is able to unify different IMF parametrizations. The main difference between the trends shown in this figure and previous works is the fact that we measure a flatter trend between [Mg/Fe] and galaxy velocity dispersion. This is, however, not related to the FIF fitting, but due to a combination of the metallicity-dependent [Mg/Fe] effect in the MILES models and the sensitivity of the Mgb\,5177 line to changes in the IMF slope (see Fig.~\ref{fig:indices}).} \label{fig:sdss} \end{figure} \begin{figure*} \centering \includegraphics[width=9cm]{FCC167_age.pdf} \includegraphics[width=9cm]{FCC167_met.pdf} \includegraphics[width=9cm]{FCC167_alp.pdf} \includegraphics[width=9cm]{FCC167_IMF.pdf} \caption{Stellar population maps of FCC\,167. The top left panel shows the age map, as measured from pPXF, where the presence of a relatively older central component is clear.
The metallicity (top right) and the [Mg/Fe] (bottom left) maps show the clear signature of a chemically evolved disc, confined within a vertical height of $\sim10$ arcsec, coinciding with the kinematically cold component observed in this galaxy \citep{f3d}. The IMF map on the bottom right exhibits, however, a different two-dimensional structure, not following the metallicity variations but closely following the hot+warm orbits of this galaxy \citep{f3d}.} \label{fig:maps} \end{figure*} \noindent The stellar population properties of ETGs follow tight scaling relations with galaxy mass \citep[e.g.][]{Worthey92,Thomas05,Kuntschner10}. Therefore, before attempting the stellar population analysis of FCC\,167, we tested whether our FIF approach is able to recover these well-known trends. We made use of the ETG stacked spectra of \citet{labarbera}, based on the public Sloan Digital Sky Survey DR6 \citep{DR6}, and we fit them with our reference setup described above. In addition, to understand the effect of the different model assumptions on the recovered stellar population properties, we also performed a series of tests where we varied our fitting scheme. The first variation consisted of removing the informative priors. Second, we tested the effect of allowing the [Mg/Fe] variations to affect wavelengths beyond $\lambda=5400$\AA. Third, the robustness of the FIF approach, and its dependence on the set of indices, was put to the test by removing all IMF-sensitive features but TiO$_2$ from the analysis. The results of these tests are shown in Fig.~\ref{fig:sdss}. The top panel of Fig.~\ref{fig:sdss} shows the metallicity--velocity dispersion relation obtained with our FIF approach. The expected trend, with galaxies with higher velocity dispersion (more massive) being more metal-rich, is recovered \citep[e.g.][]{Thomas10}. Only the case where TiO$_2$ is our only IMF-sensitive feature departs from the rest, showing that metallicity is well constrained by our approach. The middle panel in Fig.~\ref{fig:sdss} shows how the [Mg/Fe] abundance ratio changes as a function of galaxy velocity dispersion. Again, as for the metallicity, all the different variations agree, and they show how more massive galaxies are more [Mg/Fe] enhanced. An important point should be noted: the [Mg/Fe]--velocity dispersion relation is flatter than expected \citep[e.g.][]{Thomas10,dlr11,LB14,Conroy14}. We checked that this flattening is not due to the FIF approach by repeating the analysis with the standard line-strength indices and finding the same result. The weak [Mg/Fe] trend with galaxy velocity dispersion results from the combination of two processes not explored in previous studies. First, the new MILES models consistently capture the effect of [Mg/Fe], which becomes weaker at sub-solar metallicities \citep{Vazdekis15}. This implies that for low-$\sigma$ galaxies, where metallicities are also low, a higher [Mg/Fe] is needed to match the data compared to other stellar population models. In consequence, low-$\sigma$ galaxies are found to have higher [Mg/Fe] than previously reported \citep{Aga17}. The second effect flattening the [Mg/Fe]-$\sigma$ relation has to do with the sensitivity of the Mgb\,5177 feature to the IMF, as shown for example in the left panels of Fig.~\ref{fig:indices}. A variation in the IMF slope of $\delta \Gamma_\mathrm{B}\sim1$, as typically observed in massive ETGs, leads to a change in the Mgb\,5177 index which is equivalent to an increment of $\delta$[Mg/Fe]$\sim0.1$.
Hence, the depth of the Mgb\,5177 absorption feature, traditionally interpreted as a change in the [Mg/Fe] \citep{Thomas05}, is also driven by a steepening in the IMF slope of massive ETGs \citep{conroy}. Finally, the bottom panel of Fig.~\ref{fig:sdss} shows the recovered trend between the IMF ($\xi$) and galaxy mass, in very good agreement with previous works. Shaded regions indicate the trends found by \citet{labarbera} and \citet{Spiniello2013} with the typical 1$\sigma$ uncertainty. It is clear from this panel that with our FIF approach we not only recover the expected trends, but do so with a smaller uncertainty. This is ultimately due to the larger number of pixels used in the analysis and to the effect of model systematics\footnote{\citet{labarbera} also included in their error budget uncertainties on the treatment of abundance patterns and the emission correction on Balmer lines.}. Even using a single IMF indicator, in this case the TiO$_2$ feature, we are able to robustly measure the IMF in ETGs. Moreover, Fig.~\ref{fig:sdss} also shows how our $\xi$ definition is able to unify IMF measurements based on different IMF parametrizations. It is also worth mentioning that our FIF approach and the works of \citet{labarbera} and \citet{Spiniello2013} are based on different sets of indices, and even on different stellar population model ingredients, making the agreement among the three approaches even more remarkable. \section{Results} \label{sec:results} Having validated the FIF approach with the \citet{labarbera} stacked spectra, we applied our stellar population fitting scheme to the F3D data cube of FCC\,167. As mentioned in \S~\ref{sec:data}, after spatially binning the three MUSE pointings of FCC\,167, we measured the stellar population properties in $\sim$6000 independent Voronoi bins, mapping the two-dimensional structure of the galaxy. The main results of our analysis are shown in Fig.~\ref{fig:maps}. In the top left panel of Fig.~\ref{fig:maps}, the age map of FCC\,167 shows how this galaxy hosts old stellar populations at all radii, although slightly older in the center. These relatively older stars seem to track the bulge-like structure of FCC\,167 \citep[as shown in Figure 10 of][]{f3d}. Note that ages shown in Fig.~\ref{fig:maps} are luminosity-weighted values derived using pPXF (where IMF and metallicity were also left as free parameters in a 20 (age) $\times$ 10 ([M/H]) $\times$ 10 (IMF) model grid, covering the same parameter space as described in \S~\ref{sec:model}). The metallicity (top right) and [Mg/Fe] (bottom left) maps show clear evidence of the presence of a chemically evolved thin disc. Interestingly, the age map appears partially decoupled from the chemical properties of FCC\,167. It is particularly striking how some of the oldest regions, where star formation ceased very early in the evolution of FCC\,167, show at the same time chemically evolved (i.e. metal-rich and [Mg/Fe]-poor) stellar populations. Finally, the bottom right panel in Fig.~\ref{fig:maps} shows, for the first time, the IMF map of an ETG. The overall behavior of the IMF is similar to what was found previously in massive ETGs, namely, it is only in the central regions where the fraction of low-mass stars appears enhanced with respect to the Milky Way expectations \citep[e.g.][]{MN15b,LB16,vdk17}. At a distance of $\sim$100 arcsec from the center of FCC\,167, a Milky Way-like IMF slope is found. However, an important difference with respect to previous studies should be noted.
The two-dimensional structure of the IMF in this galaxy is clearly decoupled from the chemical one, in particular from the metallicity map \citep{MN15c}. Instead of following a disc-like structure, the IMF map of FCC\,167 appears much less elongated. \section{Discussion} \label{sec:discussion} The ground-breaking potential of the F3D project to understand the formation and evolution of ETGs through their stellar population properties is clear from the two-dimensional maps shown in Fig.~\ref{fig:maps}. In spatially unresolved studies, the global properties of ETGs appear highly coupled, as more massive galaxies are also denser, more metal-rich, more [Mg/Fe]-enhanced, older, and with steeper (low-mass end) IMF slopes \citep[e.g.][]{Thomas10,labarbera}. In the same way, collapsing the information of a galaxy into a one-dimensional radial gradient loses valuable information about the stellar population parameters. For example, in FCC\,167, both IMF slope and metallicity decrease smoothly with radius, but it is only when studying its full 2D structure that their different behavior becomes evident. \subsection{The age -- [Mg/Fe] discrepancy} The [Mg/Fe] map of FCC\,167 is clearly anti-correlated with the age map (Fig.~\ref{fig:maps}), transitioning from old and low-[Mg/Fe] populations in the center towards relatively younger and more Mg-enhanced outskirts. Under the standard interpretation, age and [Mg/Fe] should tightly track each other, as lower [Mg/Fe] is reached by longer star formation histories \citep[e.g.][]{Thomas99}. In FCC\,167, the age map and its chemical properties seem to describe two different formation histories. The [Mg/Fe] map, in agreement with the metallicity distribution, suggests that the outer parts of the galaxy formed rapidly, which led to chemically immature stellar populations (high [Mg/Fe] and low metallicities). This picture for the formation of the outskirts of FCC\,167 is in agreement with the properties of the Milky Way halo \citep[e.g.][]{Venn04,Hayes18,Emma18} and other spiral galaxies \citep[e.g.][]{Vargas14,Molaeinezhad17}. The inner metal-rich, low-[Mg/Fe] regions would have formed during a longer period of time, leaving enough time to recycle stellar ejecta into new generations of stars. However, this scenario would imply a relatively younger inner disc, which is not evident from the age map. This apparent contradiction suggests that, in order to truly understand the stellar population properties of ETGs, {\em SSP-glasses} are not enough, and a more complex chemical evolution modeling is needed, taking into account both the time evolution of the different stellar population parameters and our limitations on the stellar population modeling side, in particular the coarse time resolution inherent to old stellar populations. This apparent tension between age and chemical composition properties is not unique to FCC\,167 and has been reported in previous IFU-based studies \citep[e.g.][]{MN18}. \subsection{The IMF -- metallicity relation} The complexity of understanding the stellar population properties of ETGs with the advent of IFU spectroscopy is further increased by the observed IMF variations. In Fig.~\ref{fig:met} we show how IMF and metallicity measurements compare in FCC\,167, where the dashed line shows the relation found by \citet{MN15c}.
The agreement between the FCC\,167 measurements and the empirical IMF--metallicity relation is remarkable given all the differences in the stellar population modeling between the two studies, further supporting the robustness of our approach. However, it is clear that the IMF--metallicity relation of \citet{MN15c} is not enough to explain the 2D stellar population structure of FCC\,167. The core of FCC\,167 agrees with the expectations, but it clearly departs in the outer (lower metallicity and $\xi$) regions. This is not surprising given the fact that the measurements of \citet{MN15c} are biased towards the central regions of their sample of ETGs from the CALIFA survey \citep{califa}. It is worth mentioning that \citet{Sarzi18} found a good agreement between the metallicity and the IMF gradients in M\,87, tightly following the IMF--metallicity relation found by \citet{MN15c}. The decoupling in FCC\,167 is likely due to the fact that its internal structure has been preserved over cosmic time, thanks to the lack of major disruptive merger events that would have washed out the observed differences. Hence, massive lenticular galaxies appear as ideal laboratories to study the origin of the observed IMF variations. \begin{figure} \centering \includegraphics[width=\hsize]{IMF_MET_dist_vstack.pdf} \caption{IMF -- metallicity relation. The individual bins of FCC\,167 are shown color-coded by their distance to the center of the galaxy, compared with the empirical relation of \citet{MN15c}, shown as a blue dashed line. This relation agrees with the FCC\,167 measurements in the central regions of the galaxy (top right corner), but it does not hold for the outskirts, where an additional parameter is needed to explain the observed variations in the IMF. The $\xi$ ratio of the Milky Way \citep{mw} is shown as a grey dashed line.} \label{fig:met} \end{figure} \begin{figure*} \centering \includegraphics[width=14cm]{isocompare.pdf} \caption{Iso-metallicity vs iso-IMF contours. The surface brightness map of FCC\,167 is shown, as measured from the F3D data cube, with the iso-metallicity (solid lines) and iso-IMF contours (dashed lines) over-plotted. As in Fig.~\ref{fig:maps}, this figure shows how the two-dimensional IMF map does not exactly follow the metallicity variations. IMF variations appear more closely related to the surface brightness distribution, in particular around the central bulge. The metallicity distribution, on the contrary, is structured in a more disc-like configuration.} \label{fig:iso} \end{figure*} The different behavior of the stellar population properties in FCC\,167 can also be seen by comparing the iso-metallicity and the iso-IMF contours. Fig.~\ref{fig:iso} shows the $r$-band surface brightness map of FCC\,167, as measured from the MUSE F3D datacubes. On top of the surface brightness map, the iso-metallicity and the iso-IMF contours are also shown. To generate the contours in Fig.~\ref{fig:iso}, we fitted the stellar population maps with a multi-Gaussian-expansion model \citep{Emsellem94,Cappellari02}, as generally done with photometric data. This allows a smooth modeling of the large-scale behavior of the stellar population maps, which can then be easily transformed into iso-contours. The decoupling in the two-dimensional structure of the IMF and metallicity maps appears clearly in Fig.~\ref{fig:iso}. As described above, the metallicity distribution follows a more disc-like structure, which is consistent with a long-lasting chemical recycling within the cold kinematic component of FCC\,167.
The IMF, on the other hand, follows a rounder, more symmetric distribution. \subsection{Stellar population properties vs orbital decomposition} In order to further investigate the connection between the internal structure of FCC\,167 and its stellar population properties, we fit the synthetic $r$-band image with a bulge plus exponential disc model using {\it Imfit} \citep{Erwin15}. The top panels of Fig.~\ref{fig:decomp} show how the metallicity and IMF maps of FCC\,167 compare with its photometric decomposition. It is clear that the disc component does not capture the observed structure of the metallicity map. The agreement between IMF variations and the bulge light distribution is slightly better, although the latter is much rounder. Thus, a simple bulge plus disc analysis is not able to capture the variation observed in the stellar population properties. \citet{f3d} presented the Schwarzschild orbit-based decomposition \citep{remco08,Ling18} of FCC\,167, where they roughly distinguished between three types of orbits: cold ($\lambda_z > 0.7$), warm ($0.2 < \lambda_z < 0.7$), and hot ($\lambda_z < 0.2$). The bottom three panels in Fig.~\ref{fig:decomp} present the comparison between the light distributions of these three types of orbits and the metallicity and IMF maps of FCC\,167. The coupling between the metallicity (and therefore the [Mg/Fe] ratio) maps and the spatial distribution of cold orbits is remarkable. This further supports the idea that the elongated structure shown by the chemical properties of FCC\,167 is indeed tracking a dynamically cold disc. A relatively more extended star formation history in this disc would naturally explain the high metallicities and low [Mg/Fe] ratios. IMF variations, on the contrary, seem to be closely tracking the distribution of warm orbits, particularly in the central regions of FCC\,167. This result is rather unexpected, as it has been extensively argued that IMF variations are associated with the extreme star formation conditions within the cores of massive ETGs \citep[e.g.][]{MN15a,MN15b,vdk17}. The comparison shown in Fig.~\ref{fig:decomp} suggests, however, that the IMF was set during the early formation of the warm (thick disc) component of FCC\,167, where the pressure and density conditions may have had an impact on the shape of the IMF \citep{Chabrier14,Jerabkova18}. The weak correlation between stellar population properties and hot orbits might have strong implications for our understanding of bulge formation, as it suggests that most of the stars belonging to this dynamically hot structure were not born hot, but must have been heated up at a later stage \citep[e.g.][]{Grand16,GK18}. From Fig.~\ref{fig:decomp} it becomes clear that the orbital decomposition offers a more meaningful and physically motivated framework than a standard photometric analysis \citep{Zhu18}. The connection with the metallicity and IMF distribution opens an alternative approach to understand the origin of the radial variations of the stellar populations, in particular in lenticular galaxies such as FCC\,167, with a rich internal structure. Note, however, that while the stellar population maps are integrated quantities, the orbital analysis shown in Fig.~\ref{fig:decomp} is a decomposition into different orbit types, and this has to be taken into account before any further interpretation. For example, the iso-IMF contours are more elongated than the isophotes of the warm component in the outskirts of FCC\,167 because at large radii the flux starts to be dominated by cold orbits.
A more quantitative comparison between stellar population properties and orbital distributions will be presented in an upcoming F3D paper. \begin{figure*} \centering \includegraphics[width=9cm]{isocompare_disk.pdf} \includegraphics[width=9cm]{isocompare_bulge.pdf} \includegraphics[width=9cm]{isocompare_cold.pdf} \includegraphics[width=9cm]{isocompare_warm.pdf} \includegraphics[width=9cm]{isocompare_hot.pdf} \caption{Internal structure vs stellar population maps. Top panels show the comparison between metallicity and the photometric disc (top left), and between IMF and the bulge (top right). Neither the disc nor the bulge seems to follow the structure of the stellar population maps. The three bottom panels compare the orbital decomposition of FCC\,167 with the stellar population properties. On the left, the agreement between the metallicity structure (white solid lines) and the spatial distribution of the cold orbits indicates that the chemically evolved (metal-rich and [Mg/Fe]-poor) structure observed in the stellar population maps corresponds to a cold stellar disc with an extended star formation history. IMF contours (white dashed lines), on the other hand, closely follow the distribution of warm orbits (right), suggesting that the IMF was set at high $z$ during the assembly of the thick disc. Hot orbits, on the contrary, are decoupled from the stellar population properties, which might indicate that the bulge of FCC\,167 was formed through stellar heating processes. } \label{fig:decomp} \end{figure*} \section{Summary and conclusions}\label{sec:summary} We have shown that the spatially resolved stellar population properties of ETGs can be robustly measured thanks to the high-quality MUSE IFU data from the Fornax 3D project, combined with a novel approach which benefits from the advantages of both line-strength analysis and full spectral fitting techniques. The analysis tools described in this work have allowed us to measure the two-dimensional stellar population property maps of the massive S0/a galaxy FCC\,167. The chemical properties (i.e. metallicity and [Mg/Fe]) show a clear disc-like structure, associated with the cold orbital component of FCC\,167. IMF variations roughly follow the radial metallicity variation, in agreement with previous studies, but with a clearly distinct spatial distribution. Iso-IMF contours are much rounder than the iso-metallicity ones, and seem to follow the distribution of hot and warm orbits in this galaxy. These results suggest that metallicity cannot be the only driver of the observed IMF variations in ETGs. The comparison between the orbital decomposition and the stellar population properties provides a physically meaningful framework that captures the underlying IMF and metallicity variations in FCC\,167 better than a standard photometric decomposition. Our analysis describes a scenario where the IMF was set during the early formation of the stars with relatively warm orbits. The formation of the cold orbital component took place during a more extended period of time, leading to metal-rich and [Mg/Fe]-poor stellar populations. We argue that the time difference between the assembly of these two components is too short to be measured in old stellar populations. The chemical properties of ETGs, regulated by the ejecta of massive stars, would therefore be a finer clock than SSP-equivalent ages.
Finally, the hot orbital component of FCC\,167 appears decoupled from the stellar population properties, suggesting that the formation of the bulge was likely due to a stellar heating process. The orbital-based analysis therefore appears as an insightful probe of the relation between galaxy structure and the emergence of the stellar population properties, and it will be explored in an upcoming work (Mart\'in-Navarro et al., {\it in prep}). With the complex IMF variations shown by this object, understanding the stellar population properties of ETGs in the IFU era requires a deep change in both our modeling and our analysis of these galaxies. Well-known scaling relations supporting our picture of galaxy formation and evolution might be partially biased by the lack of spatial resolution. Moreover, the stellar population properties and the IMF in these objects have likely evolved over time \citep[e.g.][]{weidner:13,DM18,Fontanot18}, and at $z\sim0$ we only see an integrated snapshot of their lives. The assumption of SSP-like stellar populations in ETGs is starting to break down under the pressure of wider and deeper spectroscopic data, and the on-going efforts within the F3D project will contribute to this change of paradigm \citep[e.g.][]{Pinna19}. In an upcoming paper, we will present and discuss the IMF variations for the whole sample of F3D galaxies, covering a wide range in masses and star formation conditions. \begin{acknowledgement} We would like to thank Bronwyn Reichardt Chu for her useful comments and for the fruitful discussions, and also the referee for a careful and efficient revision of the manuscript. IMN and JFB acknowledge support from the AYA2016-77237-C3-1-P grant from the Spanish Ministry of Economy and Competitiveness (MINECO). IMN acknowledges support from the Marie Sk\l odowska-Curie Individual {\it SPanD} Fellowship 702607. GvdV acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 724857 (Consolidator Grant ArcheoDyn). E.M.C. acknowledges financial support from Padua University through grants DOR1699945/16, DOR1715817/17, DOR1885254/18, and BIRD164402/16. \end{acknowledgement} \bibliographystyle{aa}
\section{Introduction} In nature, interactions between particles act locally, motivating the study of many-body Hamiltonians consisting only of terms involving particles spatially near each other. An important method that has emerged from this course of study is the Density Matrix Renormalization Group (DMRG) algorithm \cite{white1992density,white1993density}, which aims to find a description of the ground state of local Hamiltonians on a one-dimensional chain of sites. DMRG has been an indispensable tool for research in many-body physics, but the rationale for its empirical success did not become fully apparent until long after it was being widely used. Its eventual justification required two ingredients: first, that the method can be recast \cite{klumper1993matrix,weichselbaum2009variational} as a variational algorithm that minimizes the energy over the set of matrix product states (MPS), a tensor network ansatz for 1D systems; and second, that the MPS ansatz set actually contains a good approximation to the ground state, at least whenever the Hamiltonian has a nonzero spectral gap. More specifically, Hastings \cite{hastings2007area} showed that ground states of gapped local Hamiltonians on chains with $N$ sites can be approximated to within trace distance $\epsilon$ by an MPS with bond dimension (a measure of the complexity of the MPS) only $\text{poly}(N,1/\epsilon)$, exponentially smaller than what is needed to describe an arbitrary state on the chain. Even taking into account these observations, DMRG is a heuristic algorithm and is not guaranteed to converge to the global energy minimum as opposed to a local minimum; however, a recently developed alternative algorithm \cite{landau2015polynomial, arad2017rigorous,roberts2017implementation}, sometimes referred to as the ``Rigorous RG'' (RRG) algorithm, avoids this issue and provides a way to guarantee finding an $\epsilon$-approximation to the ground state in $\text{poly}(N,1/\epsilon)$ time. These are extremely powerful results, but their value breaks down when the chain becomes very long. The bond dimension required to describe the ground state grows relatively slowly, but it still diverges with $N$. Meanwhile, if we run the RRG algorithm on longer and longer chains, we will eventually encounter an $N$ too large to handle given finite computational resources. Indeed, often we wish to learn something about the ground state in the thermodynamic limit ($N\rightarrow \infty$), but in this case these results no longer apply. Analogues of DMRG for the thermodynamic limit \cite{mcculloch2008infinite,vidal2007classical,haegeman2011time,zauner2018variational,ostlund1995thermodynamic,vanderstraeten2019tangent} --- methods that, for example, optimize over the set of constant bond dimension ``uniform MPS'' consisting of the same tensor repeated across the entire infinite chain --- have been implemented with successful results, but these methods lack the second ingredient that justified DMRG: it is not clear how large we must make the bond dimension to guarantee that the set over which we are optimizing contains a good approximation to the ground state.
Progress toward this ingredient can be found in work by Huang \cite{huang2015computing} (and later by Schuch and Verstraete \cite{schuch2017matrix}), who showed that the ground state of a gapped local 1D Hamiltonian can be approximated \textit{locally} by a matrix product operator (MPO) --- a 1D tensor network object that corresponds to a (possibly mixed) density operator as opposed to a quantum state vector --- with bond dimension independent of $N$ and polynomial in the inverse local approximation error. Here their approximation sacrifices global fidelity with the ground state, which decays exponentially with the chain length, in exchange for \textit{constant} bond dimension, while retaining high fidelity with the ground state reduced density matrices on all segments of the chain with constant length. In other words, the statistics for measurements of local operators are faithfully captured by the MPO approximation, a notion of approximation that is often sufficient in practice since many relevant observables, such as the individual terms in the Hamiltonian, are local. However, the result does not provide the necessary ingredient to justify infinite analogues of DMRG because MPO do not make a good ansatz class for variational optimization algorithms. One can specify the matrix elements for an MPO, but the resulting operator will only correspond to a valid quantum state if it is positive semi-definite, and verifying that this is the case is difficult: it is $\mathsf{NP}$-hard for finite chains, and in the limit $N \rightarrow \infty$ it becomes undecidable \cite{kliesch2014matrix}. Thus, if we attempt to perform variational optimization over the set of constant bond dimension MPO, we can be certain that our search space contains a good local approximation to the ground state, but we have no way of restricting our search only to the set of valid quantum states; ultimately the minimal energy MPO we find may not correspond to any quantum state at all. In this work, we fix this problem by showing an analogous result for MPS instead of MPO. We show that for any gapped nearest-neighbor Hamiltonian on a 1D chain with $N$ sites, and for any parameters $k$ and $\epsilon$, there is an MPS representation of a state $\kettilde{\psi}$ with bond dimension $\text{poly}(k, 1/\epsilon)$ such that the reduced density matrix of $\kettilde{\psi}\bratilde{\psi}$ on any contiguous region of length $k$ is $\epsilon$-close in trace distance to that of the true ground state. Importantly, the bond dimension is independent of $N$. For general states (including ground states of non-gapped local Hamiltonians), we give a construction with bond dimension that is also independent of $N$ but exponential in $k/\epsilon$. This extends a previous result \cite{fannes1992abundance} that formally implied the existence of a uniform MPS approximation of this type when the state is translationally invariant and $N=\infty$, albeit without explicit attention paid to the dependence of the bond dimension on the locality $k$ or approximation error $\epsilon$, or to an improvement therein when the state is the ground state of a gapped Hamiltonian. Thus, we provide the missing ingredient for variational algorithms in the thermodynamic limit, as we show that a variational set of MPS with bond dimension independent of $N$ and polynomial in $1/\epsilon$ contains a state that simultaneously captures all the local properties of the ground state. We present two proofs of our claim about ground states of gapped Hamiltonians.
The first yields superior scaling of the bond dimension, which grows asymptotically slower than $(k/\epsilon)^{1+\delta}$ for any $\delta > 0$; however, it constructs an MPS approximation $\kettilde{\psi}$ that is long-range correlated and non-injective. In contrast, the second proof constructs an approximation that is injective and can be generated by a constant-depth quantum circuit with nearest-neighbor gates, while retaining $\text{poly}(k,1/\epsilon)$ bond dimension. The latter construction also follows merely from the assumption that the state has exponential decay of correlations. The proof idea originates with a strategy first presented in \cite{verstraete2006matrix} and constructs $\kettilde{\psi}$ by beginning with the true ground state $\ket{\psi}$ and applying three rounds of operations: first, a set of unitaries that, intuitively speaking, removes the short-range entanglement from the chain; second, a sequence of rank-1 projectors that zeroes out the long-range entanglement; and third, the set of inverse unitaries from step 1 to add back the short-range entanglement. Intuitively, the method works because ground states of gapped Hamiltonians have a small amount of long-range entanglement. The non-trivial part is arguing that the local properties are preserved even as the small errors induced in step 2 accumulate to bring the global fidelity with the ground state to zero. The fact that $\kettilde{\psi}$ can be produced by a constant-depth quantum circuit acting on an initial product state suggests the possibility of an alternative variational optimization algorithm using the set of constant-depth circuits (a strict subset of constant-bond-dimension MPS) as the variational ansatz. Additionally, we note that the disentangle-project-reentangle process that we utilize in our proof might be of independent interest as a method for truncating the bond dimension of MPS. We can bound the truncation error of this method when the state has exponentially decaying correlations. We also consider the question of whether these local MPS approximations can be rigorously found (\`a la RRG) more quickly than their globally accurate counterparts (and whether they can be found at all in the thermodynamic limit). We prove a reduction for ground states of translationally invariant Hamiltonians showing that finding approximations to local properties to even a fixed $O(1)$ precision implies being able to find an approximation to the ground state energy to $O(1)$ precision with only $O(\log(N))$ overhead. Since strategies for estimating the ground state energy typically involve constructing a globally accurate approximation to the ground state, this observation gives us an intuition that it may not be possible to find the local approximation much more quickly than the global approximation, despite the fact that the bond dimensions required for the two approximations are drastically different. \section{Background} \label{sec:background} \subsection{One-dimensional local Hamiltonians} In this paper, we work exclusively with gapped nearest-neighbor 1D Hamiltonians that have a unique ground state. Our physical system is a set of $N$ sites, arranged in one dimension on a line with open boundary conditions (OBC), each with its own Hilbert space $\mathcal{H}_i$ of dimension $d$. The Hamiltonian $H$ consists of terms $H_{i,i+1}$ that act non-trivially only on $\mathcal{H}_i$ and $\mathcal{H}_{i+1}$: \begin{equation} H = \sum_{i=1}^{N-1} H_{i,i+1}.
\end{equation} We will always require that $H_{i,i+1}$ be positive semi-definite and satisfy $\lVert H_{i,i+1} \rVert \leq 1$ for all $i$, where $\lVert \cdot \rVert$ is the operator norm. When this is not the case, it is always possible to rescale $H$ so that it is. We call $H$ translationally invariant if $H_{i,i+1}$ is the same for all $i$. We will also always assume that $H$ has a unique ground state $\ket{\psi}$ with energy $E$ and an energy gap $\Delta > 0$ to its first excited state. We let $\rho = \ket \psi \bra \psi$ refer to the (pure) density matrix representation of the ground state. For any density matrix $\sigma$ and subregion $X$ of the chain, we let $\sigma_X$ refer to $\text{Tr}_{X^c}(\sigma)$, the reduced density matrix of $\sigma$ after tracing out the complement $X^c$ of $X$. Theorems \ref{thm:improvedbd} and \ref{thm:mainthm} will make statements about efficiently approximating the ground state of such Hamiltonians with matrix product states, and Theorem \ref{thm:reduction} is a statement about algorithms that estimate the ground state energy $E$ or approximate the expectation $\bra{\psi} O \ket{\psi}$ of a local observable $O$ in the ground state. \subsection{Matrix product states and matrix product operators} It is often convenient to describe states with one-dimensional structure using the language of matrix product states (MPS). \begin{definition}[Matrix product state] A matrix product state (MPS) $\ket \eta$ on $N$ sites of local dimension $d$ is specified by $Nd$ matrices $A_j^{(i)}$ with $i = 1, \ldots, d$ and $j = 1, \ldots, N$. The matrices $A_1^{(i)}$ are $1 \times \chi$ matrices and $A_N^{(i)}$ are $\chi \times 1$ matrices, with the rest being $\chi \times \chi$. The state is defined as \begin{equation} \ket \eta = \sum_{i_1 = 1}^d \ldots \sum_{i_N = 1}^d A_1^{(i_1)}\ldots A_N^{(i_N)} \ket{i_1 \ldots i_N}. \end{equation} The parameter $\chi$ is called the \textit{bond dimension} of the MPS. \end{definition} The same physical state has many different MPS representations, although one may impose a canonical form \cite{perez2006matrix} to make the representation unique. The bond dimension of the MPS is a measure of the maximum amount of entanglement across any ``cut'' dividing the state into two contiguous parts. More precisely, if we perform a Schmidt decomposition on a state $\ket\eta$ across every possible cut, the maximum number of non-zero Schmidt coefficients (i.e.~Schmidt rank) across any of the cuts is equal to the minimum bond dimension we would need to exactly represent $\ket\eta$ as an MPS \cite{vidal2003efficient}. Thus, to show a state has an MPS representation with a certain bond dimension, it suffices to bound the Schmidt rank across all the cuts. This line of reasoning shows that a product state, which has no entanglement, can be written as an MPS with bond dimension 1. Meanwhile, a general state with any amount of entanglement can always be written as an MPS with bond dimension $d^{N/2}$. A cousin of the matrix product state is the matrix product operator (MPO). \begin{definition}[Matrix product operator] A matrix product operator (MPO) $\sigma$ on $N$ sites of local dimension $d$ is specified by $Nd^2$ matrices $A_j^{(i)}$ with $i = 1, \ldots, d^2$ and $j = 1, \ldots, N$. The matrices $A_1^{(i)}$ are $1 \times \chi$ matrices and $A_N^{(i)}$ are $\chi \times 1$ matrices, with the rest being $\chi \times \chi$.
The operator is defined as \begin{equation} \sigma = \sum_{i_1 = 1}^{d^2} \ldots \sum_{i_N = 1}^{d^2} A_1^{(i_1)}\ldots A_N^{(i_N)} \sigma_{i_1} \otimes \ldots \otimes \sigma_{i_N}, \end{equation} where $\{\sigma_{i}\}_{i=1}^{d^2}$ is a basis for operators on a single site. The parameter $\chi$ is called the \textit{bond dimension} of the MPO. \end{definition} However, MPO representations have the issue that specifying a set of matrices $A_j^{(i)}$ does not always lead to an operator $\sigma$ that is positive semi-definite, which is a requirement for the MPO to correspond to a valid quantum state. Checking positivity of an MPO in general is $\mathsf{NP}$-hard for chains of length $N$ and undecidable for infinite chains \cite{kliesch2014matrix}. \subsection{Notions of approximation} We are interested in the existence of an MPS that approximates the ground state $\ket \psi$. We will have both a global and a local notion of approximation, which we define here. We will employ two different distance measures at different points in our theorems and proofs, the purified distance \cite{gilchrist2005distance, tomamichel2009fully} and the trace distance. \begin{definition}[Purified distance] If $\sigma$ and $\sigma'$ are two normalized states on the same system, then \begin{equation} D(\sigma, \sigma') = \sqrt{1-\mathcal{F}(\sigma,\sigma')^2} \end{equation} is the purified distance between $\sigma$ and $\sigma'$, where $\mathcal{F}(\sigma,\sigma') = \text{Tr}(\sqrt{\sigma^{1/2}\sigma'\sigma^{1/2}})$ denotes the fidelity between $\sigma$ and $\sigma'$. \end{definition} \begin{definition}[Trace distance] If $\sigma$ and $\sigma'$ are two normalized states on the same system, then \begin{equation} D_1(\sigma, \sigma') = \frac{1}{2}\lVert \sigma - \sigma' \rVert_1 = \frac{1}{2}\text{Tr}(\lvert \sigma-\sigma' \rvert) \end{equation} is the trace distance between $\sigma$ and $\sigma'$. \end{definition} \begin{lemma}[\cite{tomamichel2009fully}]\label{lem:purifiedvstrace} \begin{equation} D_1(\sigma,\sigma') \leq D(\sigma,\sigma') \leq \sqrt{2D_1(\sigma,\sigma')}. \end{equation} \end{lemma} We also note that $D_1(\sigma,\sigma') = D(\sigma,\sigma')$ if $\sigma$ and $\sigma'$ are both pure. If the trace distance between $\rho$ and $\sigma$ is small then we would say $\sigma$ is a good global approximation to $\rho$. We are also interested in a notion of distance that is more local. \begin{definition}[$k$-local purified distance] If $\sigma$ and $\sigma'$ are two normalized states on the same system, then the $k$-local purified distance between $\sigma$ and $\sigma'$ is \begin{equation} D^{(k)}(\sigma, \sigma') = \max_{X: \lvert X \rvert = k}D(\sigma_X,\sigma'_X), \end{equation} where the max is taken over all contiguous regions $X$ consisting of $k$ sites. \end{definition} \begin{definition}[$k$-local trace distance] If $\sigma$ and $\sigma'$ are two normalized states on the same system, then the $k$-local trace distance between $\sigma$ and $\sigma'$ is \begin{equation} D_1^{(k)}(\sigma, \sigma') := \max_{X: \lvert X \rvert = k}D_1(\sigma_X,\sigma'_X), \end{equation} where the max is taken over all contiguous regions $X$ consisting of $k$ sites. 
\end{definition} Note that these quantities lack the property that $0=D^{(k)}(\sigma,\sigma')=D_1^{(k)}(\sigma,\sigma')$ implies $\sigma = \sigma'$,\footnote{To see this consider the simple counterexample where $k=2$, $\sigma = \ket{\eta}\bra{\eta}$, $\sigma' = \ket{\nu}\bra{\nu}$, with $\ket{\eta}= (\ket{000}+\ket{111})/\sqrt{2}$, $\ket{\nu} = (\ket{000}-\ket{111})/\sqrt{2}$. In fact here $\braket{\nu}{\eta}=0$. This counterexample can be generalized to apply for any $k$.} but they do satisfy the triangle inequality. It is also clear that taking $k=N$ recovers our notion of global distance: $D^{(N)}(\sigma,\sigma') = D(\sigma,\sigma')$ and $D_1^{(N)}(\sigma,\sigma') = D_1(\sigma,\sigma')$. \begin{definition}[Local approximation] We say a state $\sigma$ on a chain of $N$ sites is a $(k,\epsilon)$-local approximation to another state $\sigma'$ if $D_1^{(k)}\left(\sigma, \sigma'\right) \leq \epsilon$. \end{definition} As we discuss in the next subsection, previous results show that $\ket\psi$ has a good global approximation $\kettilde{\psi}$ that is an MPS with bond dimension that scales like a polynomial in $N$. We will be interested in the question of what bond dimension is required when what we seek is merely a good local approximation. \subsection{Previous results} \subsubsection{Exponential decay of correlations and area laws} A key fact shown by Hastings \cite{hastings2004lieb} (see also \cite{hastings2006spectral,hastings2004locality, nachtergaele2006lieb} for improvements and extensions) about nearest-neighbor 1D Hamiltonians with a non-zero energy gap is that the ground state $\ket{\psi}$ has exponential decay of correlations. \begin{definition}[Exponential decay of correlations] A pure state $\sigma = \ket \eta \bra \eta$ on a chain of sites is said to have $(t_0,\xi)$-exponential decay of correlations if for every $t \geq t_0$ and every pair of regions $A$ and $C$ separated by at least $t$ sites \begin{align} & \text{Cor}(A:C)_{\ket{\eta}} \nonumber\\ & := \max_{\lVert M \rVert, \lVert N \rVert \leq 1} \text{Tr}\left((M \otimes N) (\sigma_{AC}-\sigma_A \otimes \sigma_C)\right) \nonumber\\ & \leq \exp(-t/\xi). \end{align} The smallest $\xi$ for which $\sigma$ has $(t_0,\xi)$-exponential decay of correlations for some $t_0$ is called the correlation length of $\sigma$. \end{definition} \begin{lemma} If $\ket \psi$ is the unique ground state of a Hamiltonian $H = \sum_i H_{i,i+1}$ with spectral gap $\Delta$, then $\ket\psi$ has $(t_0,\xi)$-exponential decay of correlations for some $t_0 = O(1)$ and $\xi = O(1/\Delta)$. \end{lemma} \begin{proof} This statement is implied by Theorem 4.1 of \cite{nachtergaele2006lieb}. \end{proof} While the exponential decay of correlations holds for lattice models in any spatial dimension, the other results we discuss are only known to hold in one dimension. For example, in one dimension it has been shown that ground states of gapped Hamiltonians obey an area law, that is, the entanglement entropy of any contiguous region is bounded by a constant times the length of the boundary of that region, which in one dimension is just a constant. This statement was also first proven by Hastings in \cite{hastings2007area} where it was shown that for any contiguous region $X$ \begin{equation} S(\rho_X) \leq \exp(O(\log(d)/\Delta)), \end{equation} which is independent of the number of sites in $X$, where $S$ denotes the von Neumann entropy $S(\sigma) = -\text{Tr}(\sigma \log \sigma)$. 
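Both the correlation measure and the area-law quantity defined above can be probed directly on small instances. The following is a minimal numerical sketch (our own illustration in Python/NumPy; the transverse-field Ising chain and all parameter values are assumptions made for the example, not taken from the cited works). It computes a gapped ground state by exact diagonalization and tracks the upper bound $\text{Cor}(A:C) \leq \lVert \rho_{AC} - \rho_A \otimes \rho_C \rVert_1$, which holds because $\text{Tr}((M \otimes N)X) \leq \lVert M \rVert \lVert N \rVert \lVert X \rVert_1$.

\begin{verbatim}
import numpy as np

# Gapped test Hamiltonian: transverse-field Ising chain
# H = -sum_i Z_i Z_{i+1} - g sum_i X_i  (gapped for g != 1).
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron_list(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim(N, g):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        ops = [I2] * N; ops[i] = Z; ops[i + 1] = Z
        H -= kron_list(ops)
    for i in range(N):
        ops = [I2] * N; ops[i] = X
        H -= g * kron_list(ops)
    return H

def rdm(psi, keep, N):
    """Reduced density matrix of |psi><psi| on the sites in `keep`."""
    rest = [i for i in range(N) if i not in keep]
    M = psi.reshape((2,) * N).transpose(keep + rest)
    M = M.reshape(2**len(keep), 2**len(rest))
    return M @ M.conj().T

N, g = 10, 1.5
psi = np.linalg.eigh(tfim(N, g))[1][:, 0]   # ground state

# Cor(A:C) <= || rho_AC - rho_A (x) rho_C ||_1 for norm-1 observables.
for t in range(1, N - 1):
    A, C = [0], [t + 1]          # single-site regions separated by t sites
    diff = rdm(psi, A + C, N) - np.kron(rdm(psi, A, N), rdm(psi, C, N))
    print(t, np.linalg.norm(diff, 'nuc'))   # nuclear norm = trace norm

# Area-law quantity: entanglement entropy of the left half.
lam = np.linalg.eigvalsh(rdm(psi, list(range(N // 2)), N))
lam = lam[lam > 1e-12]
print("S(left half) =", float(-(lam * np.log(lam)).sum()))
\end{verbatim}

Away from the critical point $g = 1$ the printed correlation bounds fall off roughly geometrically in $t$, and the half-chain entropy stays $O(1)$, in line with the statements above.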
The area law has since been improved \cite{arad2013area, arad2017rigorous} to \begin{equation} S(\rho_X) \leq \tilde{O}(\log^3(d)/\Delta), \end{equation} where the $\tilde{O}$ signifies a suppression of factors that scale logarithmically with the quantity stated. It was also discovered that an area law follows merely from the assumption of exponential decay of correlations in one dimension: if a pure state $\rho$ has $(t_0,\xi)$-exponential decay of correlations, then it satisfies \cite{brandao2015exponential} \begin{equation} S(\rho_X) \leq t_0\exp(\tilde{O}(\xi \log(d))). \end{equation} \subsubsection{Efficient global MPS approximations} The area law is closely related to the existence of an efficient MPS approximation to the ground state. To make this implication concrete, one needs an area law using the $\alpha$-Renyi entropy for some value of $\alpha$ with $0 < \alpha < 1$ \cite{verstraete2006matrix}, where the Renyi entropy is given by $S_\alpha(\rho_X) = \log(\text{Tr}(\rho_X^\alpha))/(1-\alpha)$. An area law for the von Neumann entropy (corresponding to $\alpha=1$) is not alone sufficient \cite{schuch2008entropy}. However, for all of the area laws mentioned above, the techniques used are strong enough to also imply the existence of efficient MPS approximations, and, moreover, area laws have indeed been shown for the $\alpha$-Renyi entropy \cite{huang2014area} with $0 < \alpha < 1$. Hastings' \cite{hastings2007area} original area law implied the existence of a global $\epsilon$-approximation $\kettilde{\psi}$ for $\ket{\psi}$ with bond dimension \begin{equation} \chi = e^{\tilde{O}\left(\frac{\log(d)}{\Delta}\right)} \left(\frac{N}{\epsilon}\right)^{O\left(\frac{\log(d)}{\Delta}\right)}. \end{equation} The improved area law in \cite{arad2013area, arad2017rigorous} yields a better scaling for $\chi$ which is asymptotically sublinear in $N$: \begin{equation} \chi = e^{\tilde{O}\left(\frac{\log^3(d)}{\Delta}\right)} \left(\frac{N}{\epsilon}\right)^{\tilde{O}\left(\frac{\log(d)}{(\Delta\log(N/\epsilon))^{1/4}}\right)}. \end{equation} Finally, the bound implied by exponential decay of correlations alone \cite{brandao2015exponential} is \begin{equation} \chi = e^{t_0e^{\tilde{O}\left(\xi \log(d)\right)}} \left(\frac{N}{\epsilon}\right)^{\tilde{O}\left(\xi\log(d)\right)}. \end{equation} Crucially, if the local Hilbert space dimension $d$ and the gap $\Delta$ (or alternatively, the correlation length $\xi$) are taken to be constant, then all three results read $\chi = \text{poly}(N, 1/\epsilon)$. \subsubsection{Existence of MPS approximations in the thermodynamic limit} The aforementioned results, which describe explicit bounds on the bond dimension needed for good MPS approximations, improved upon important prior work that characterized which states can be approximated by MPS in the first place. Of course, any state on a finite chain can be exactly described by an MPS, but the question of states living on the infinite chain is more subtle.
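Before passing to the infinite chain, it is worth making the mechanism behind all of the finite-chain bounds above concrete. The following sketch (our own illustration; the W state and the bond-dimension values are arbitrary choices for the example) performs the standard left-to-right SVD sweep that converts a state vector into an MPS with a prescribed maximum bond dimension, reporting the discarded Schmidt weight at each level of truncation.

\begin{verbatim}
import numpy as np

def state_to_mps(psi, N, d, chi_max):
    """Left-to-right SVD sweep; keep at most chi_max Schmidt vectors per cut."""
    tensors, discarded = [], 0.0
    M = psi.reshape(1, -1)
    chi_left = 1
    for site in range(N - 1):
        U, S, Vh = np.linalg.svd(M.reshape(chi_left * d, -1),
                                 full_matrices=False)
        chi = min(chi_max, len(S))
        discarded += np.sum(S[chi:]**2)          # weight lost at this cut
        tensors.append(U[:, :chi].reshape(chi_left, d, chi))
        M = S[:chi, None] * Vh[:chi]             # absorb S to the right
        chi_left = chi
    tensors.append(M.reshape(chi_left, d, 1))
    return tensors, discarded

def mps_to_state(tensors):
    out = tensors[0]
    for T in tensors[1:]:
        out = np.tensordot(out, T, axes=([-1], [0]))
    return out.reshape(-1)

# W state on N qubits: Schmidt rank 2 across every cut, so chi_max = 2 is exact.
N, d = 8, 2
psi = np.zeros(d**N)
for i in range(N):
    psi[d**(N - 1 - i)] = 1.0    # basis state with a single 1 at site i
psi /= np.linalg.norm(psi)

for chi in (1, 2):
    mps, discarded = state_to_mps(psi, N, d, chi)
    fid = abs(np.vdot(mps_to_state(mps), psi))
    print(f"chi_max={chi}: fidelity={fid:.6f}, discarded={discarded:.2e}")
\end{verbatim}

Truncating every cut to the largest Schmidt coefficients is exactly the operation whose error the area laws above control: for a gapped ground state the discarded weight decays rapidly in $\chi$, which is what makes $\chi = \text{poly}(N,1/\epsilon)$ suffice globally.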
In \cite{fannes1992finitely}, the proper mathematical framework was developed to study MPS, which they call \textit{finitely correlated states}, in the limit of infinite system size, and in \cite{fannes1992abundance} it was shown that any translationally invariant state on the infinite chain can be approximated arbitrarily well by a uniform (translationally invariant) MPS in the following sense: for any translationally invariant pure state $\rho$ there exists a \textit{net} --- a generalization of a sequence --- of translationally invariant MPS $\rho_\alpha$ for which the expectation value $\text{Tr}(\rho_\alpha A)$ of any finitely supported observable $A$ converges to $\text{Tr}(\rho A)$. An implication of this is that if we restrict to observables $A$ with support on only $k$ contiguous sites, there exists a translationally invariant MPS that approximates the expectation value of all $A$ to arbitrarily small error $\epsilon$. Thus, they established that local approximations for translationally invariant states exist within the set of translationally invariant MPS, but provided no discussion of the bond dimension required for such an approximation, and did not explicitly consider the case where the state is the ground state of a gapped, local Hamiltonian. Our Theorem \ref{thm:improvedbd}, which is stated in the following section, may be viewed as a generalization and improvement on this work in several senses. Most importantly, we present a construction for which a bound on the bond dimension can be explicitly obtained. This bound scales like $\text{poly}(k,1/\epsilon)$ when the state is a ground state of a gapped nearest-neighbor Hamiltonian, and exponentially in $k/\epsilon$ when it is a general state. Furthermore, our method works for states on the finite chain that are not translationally invariant, where it becomes unclear how the methods of this previous work would generalize. \subsubsection{Constant-bond-dimension MPO local approximations} The problem of finding matrix product operator representations that capture all the local properties of a state has been studied before. Huang \cite{huang2015computing} showed the existence of a positive semi-definite MPO $\rho^\chi$ with bond dimension \begin{equation}\label{eq:HuangMPO} \chi = e^{\tilde{O}\left(\frac{\log^3(d)}{\Delta}+\frac{\log(d)\log^{3/4}(1/\epsilon)}{\Delta^{1/4}}\right)} = (1/\epsilon)^{o(1)} \end{equation} that is a $(2,\epsilon)$-local approximation to the true ground state $\rho$, where $o(1)$ indicates the quantity approaches 0 as $1/\epsilon \rightarrow \infty$. Crucially, this is independent of the length of the chain $N$. Additionally, because the Hamiltonian is nearest-neighbor, we have $\text{Tr}(H\rho^\chi)-\text{Tr}(H\rho) \leq (N-1)\epsilon$, i.e., the energy per site (energy density) of the state $\rho^\chi$ is within $\epsilon$ of the ground state energy density. Huang constructs this MPO explicitly and notes it is a convex combination over pure states which themselves are MPS with bond dimension independent of $N$. Thus, one of these MPS must have energy density within $\epsilon$ of the ground state energy density. However, it is not guaranteed (nor is it likely) that one of these constant-bond-dimension MPS is also a good local approximation to the ground state; thus our result may be viewed as an improvement on this front as we show the existence not only of a low-energy-density constant-bond-dimension MPS, but also one that is a good local approximation to the ground state. 
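For completeness, we record the one-line calculation behind the energy-density claim (our own restatement; it uses only the normalization $0 \leq H_{i,i+1} \leq I$ fixed in Section~\ref{sec:background}). If $\sigma$ is any $(2,\epsilon)$-local approximation to $\rho$, then
\begin{equation*}
\text{Tr}(H\sigma)-\text{Tr}(H\rho) = \sum_{i=1}^{N-1}\text{Tr}\left(H_{i,i+1}\left(\sigma_{\{i,i+1\}}-\rho_{\{i,i+1\}}\right)\right) \leq \sum_{i=1}^{N-1}D_1\left(\sigma_{\{i,i+1\}},\rho_{\{i,i+1\}}\right) \leq (N-1)\epsilon,
\end{equation*}
where the first inequality holds because $\text{Tr}(A(\tau-\tau')) \leq D_1(\tau,\tau')$ for any operator $0 \leq A \leq I$ and any pair of states $\tau, \tau'$.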
An alternative MPO construction achieving the same task was later given in \cite{schuch2017matrix}. In this case, the MPO is a $(k,\epsilon)$-local approximation to the ground state and has bond dimension \begin{equation} \chi=(k/\epsilon)e^{\tilde{O}\left(\frac{\log^3(d)}{\Delta}+\frac{\log(d)\log^{3/4}(k/\epsilon^3)}{\Delta^{1/4}}\right)} = (k/\epsilon)^{1+o(1)}. \end{equation} The idea they use is simple. They break the chain into blocks of size $l$, which is much larger than $k$. On each block they construct a constant-bond-dimension MPO that closely approximates the reduced density matrix of the ground state on that block, which is easy since each block has constant length and they must make only a constant number of bond truncations to the exact state. The tensor product of these MPO will be an MPO on the whole chain that is a good approximation on any length-$k$ region that falls within one of the larger length-$l$ blocks, but not on a region that crosses the boundary between blocks. To remedy this, they take the mixture of MPO formed by considering all $l$ translations of the boundaries between the blocks. Now as long as $l$ is much larger than $k$, any region of length $k$ will only span the boundary between blocks for a small fraction of the MPO that make up this mixture, and the MPO will be a good local approximation. This same idea underlies our proof of Theorem \ref{thm:improvedbd}, with the complication that we seek a pure state approximation and cannot take a mixture of several MPS. Instead, we combine the translated MPS in superposition, which brings new but manageable challenges. \section{Statement of results} \label{sec:results} \subsection{Existence of local approximation} \begin{theorem}\label{thm:improvedbd} Let $\ket \psi$ be a state on a chain of $N$ sites of local dimension $d$. For any $k$ and $\epsilon$ there exists an MPS $\kettilde{\psi}$ with bond dimension at most $\chi$ such that \begin{enumerate} [(1)] \item $\kettilde{\psi}$ is a $(k,\epsilon)$-local approximation to $\ket{\psi}$ \item $ \chi = e^{O(k\log(d)/\epsilon)} $ \end{enumerate} provided that $N$ is larger than some constant $N_0 = O(k^3/\epsilon^3)$ that is independent of $\ket{\psi}$. If $\ket{\psi}$ has $(t_0,\xi)$-exponential decay of correlations, then the bound on the bond dimension can be improved to \begin{enumerate} [(2')] \item $ \chi = e^{t_0e^{\tilde{O}\left(\xi\log(d)\right)}} (k/\epsilon^3)^{O\left(\xi\log(d)\right)} $ \end{enumerate} with $N_0 = O(k^2/\epsilon^2) + t_0\exp(\tilde{O}(\xi \log(d)))$, and if, additionally, $\ket{\psi}$ is the unique ground state of a nearest-neighbor 1D Hamiltonian $H$ with spectral gap $\Delta$, it can be further improved to \begin{enumerate}[(2'')] \item $\chi = (k/\epsilon)e^{\tilde{O}\left(\frac{ \log^{3}(d)}{\Delta}+\frac{\log(d)}{\Delta^{1/4}}\log^{3/4}(k/\epsilon^3)\right)}$ \end{enumerate} with $N_0 = O(k^2/\epsilon^2) + \tilde{O}(\log(d)/\Delta^{3/4})$. Here $\chi$ is asymptotically equivalent to $(k/\epsilon)^{1+o(1)}$ where $o(1)$ indicates that the quantity approaches 0 as $(k/\epsilon) \rightarrow \infty$. \end{theorem} However, the state $\kettilde{\psi}$ that we construct in the proof of Theorem \ref{thm:improvedbd} is long-range correlated and cannot be generated by a constant-depth quantum circuit. Thus, while $\kettilde{\psi}$ is a good local approximation to the ground state $\ket{\psi}$ of $H$, it is not the exact ground state of any gapped local 1D Hamiltonian.
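The block-and-translate mechanism described above, which also drives our proof of Theorem~\ref{thm:improvedbd}, is easy to check numerically. The sketch below is our own toy version (a Haar-random state stands in for $\ket{\psi}$, blocks do not roll over, and we take a classical mixture rather than a superposition): each translate reproduces the exact reduced state inside its blocks, and mixing over the $l$ translates dilutes the boundary errors down to $(k-1)/l$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, d, l, k = 8, 2, 4, 2    # chain length, local dim, block length, region size

def rdm_state(psi, keep):
    rest = [i for i in range(N) if i not in keep]
    M = psi.reshape((d,) * N).transpose(list(keep) + rest)
    M = M.reshape(d**len(keep), d**len(rest))
    return M @ M.conj().T

def rdm_dm(rho, keep):
    rest = [i for i in range(N) if i not in keep]
    perm = list(keep) + rest + [N + i for i in keep] + [N + i for i in rest]
    T = rho.reshape((d,) * (2 * N)).transpose(perm)
    T = T.reshape(d**len(keep), d**len(rest), d**len(keep), d**len(rest))
    return np.einsum('arbr->ab', T)     # trace out the `rest` sites

psi = rng.normal(size=d**N) + 1j * rng.normal(size=d**N)
psi /= np.linalg.norm(psi)             # generic pure state in place of |psi>

def blocks(j):
    """Blocks of length l with boundaries shifted by j (no roll-over)."""
    cuts = sorted({0, N} | {b for b in range(j, N, l) if 0 < b < N})
    return [list(range(cuts[a], cuts[a + 1])) for a in range(len(cuts) - 1)]

# Mixture over the l translates of the product of exact block marginals.
sigma = np.zeros((d**N, d**N), dtype=complex)
for j in range(l):
    prod = np.eye(1)
    for B in blocks(j):
        prod = np.kron(prod, rdm_state(psi, B))
    sigma += prod / l

# k-local trace distance: max over contiguous length-k regions.
dists = [0.5 * np.linalg.norm(rdm_dm(sigma, list(range(x, x + k)))
                              - rdm_state(psi, list(range(x, x + k))), 'nuc')
         for x in range(N - k + 1)]
print(f"max k-local trace distance {max(dists):.4f} <= (k-1)/l = {(k-1)/l}")
\end{verbatim}

The printed maximum respects the $(k-1)/l$ bound; the extra difficulty in Theorem~\ref{thm:improvedbd} is that we need a pure state, so the mixture over $j$ must be replaced by a superposition of mutually orthogonal $\ket{\phi_j}$.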
Next, we show that it remains possible to approximate the state even when we require the approximation to be produced by a constant-depth quantum circuit; the bond dimension grows more quickly with $k$ and $1/\epsilon$, but it is still polynomial. \begin{theorem}\label{thm:mainthm} Let $\ket \psi$ be a state on a chain of $N$ sites of local dimension $d$. If $\ket{\psi}$ has $(t_0,\xi)$-exponential decay of correlations, then, for any $k$ and $\epsilon$, there is an MPS $\kettilde{\psi}$ with bond dimension at most $\chi$ such that \begin{enumerate}[(1)] \item $\kettilde{\psi}$ is a $(k,\epsilon)$-local approximation to $\ket{\psi}$ \item $ \chi = e^{t_0e^{\tilde{O}\left(\xi\log(d)\right)}} \left(k/\epsilon^2\right)^{O\left(\xi^2\log^2(d)\right)}$ \item $\kettilde{\psi}$ can be prepared from the state $\ket{0}^{\otimes N}$ by a quantum circuit that has depth $\tilde{O}(\chi^2)$ and consists only of unitary gates acting on neighboring pairs of sites \end{enumerate} If, additionally, $\ket{\psi}$ is the unique ground state of a nearest-neighbor 1D Hamiltonian with spectral gap $\Delta$, then the bound on the bond dimension can be improved to \begin{enumerate}[(2')] \item $\chi = e^{\tilde{O}\left(\frac{\log^{4}(d)}{\Delta^{2}}\right)} \left(k/\epsilon^2\right)^{O\left(\frac{\log(d)}{\Delta}\right)}$ \end{enumerate} \end{theorem} The sort of constant-depth quantum circuit that can generate the state $\kettilde{\psi}$ in Theorem \ref{thm:mainthm} is shown in Figure \ref{fig:constantdepthcircuit}. Proof summaries as well as full proofs of Theorems \ref{thm:improvedbd} and \ref{thm:mainthm} appear in Section \ref{sec:proofs}. We also note that, unlike Theorem \ref{thm:improvedbd}, Theorem \ref{thm:mainthm} does not require that the chain be longer than some threshold $N_0$; the statement holds regardless of the chain length, although this should be considered a technical detail and not an essential aspect of the constructions. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{constant_depth_circuit_figure.png} \caption{ \label{fig:constantdepthcircuit}Constant-depth quantum circuit that constructs the $(k,\epsilon)$-local approximation $\kettilde{\psi}$ in Theorem \ref{thm:mainthm} starting from the initial state $\ket{0}^{\otimes N}$. It is drawn here as a depth-2 circuit where unitaries $U_j$ act on segments consisting of $O(\xi^2 \log(d) \log(k/{\epsilon^2}))$ contiguous sites. Each of these unitaries could itself be decomposed into a sequence of nearest-neighbor gates with depth $\text{poly}(k,1/\epsilon)$.} \end{figure} \subsection{Reduction from estimating energy density to finding local properties} The previously stated results show that there exists a state that is both a $(k,\epsilon)$-local approximation and an MPS with bond dimension $\text{poly}(k,1/\epsilon)$. They say nothing of the algorithmic complexity required to \textit{find} a $(k,\epsilon)$-local approximation. The proofs describe how to construct the local approximation from a description of the exact ground state, but following this strategy would require first finding a description of the exact ground state (or perhaps a global approximation to it). One might hope that a different strategy would allow the local approximation to be found much more quickly than the global approximation, since the bond dimension needed to represent the approximation is much smaller.
However, the following result challenges the validity of this intuition, at least in the case that the Hamiltonian is translationally invariant, by showing a relationship between the problem of finding a local approximation and the problem of estimating the energy density. \begin{problem}[Estimating energy density]\label{prob:energydensity} Given a nearest-neighbor translationally invariant 1D Hamiltonian $H$ on $N\geq 2$ sites and error parameter $\epsilon$, produce an estimate $\tilde{u}$ such that $\lvert u- \tilde{u}\rvert \leq \epsilon$, where $u = E/(N-1)$ is the ground state energy density. \end{problem} \begin{problem}[Approximating local properties]\label{prob:localprops} Given a nearest-neighbor translationally invariant 1D Hamiltonian $H$, an error parameter $\delta$, and an operator $O$ whose support is contained within a contiguous region of length $k$, produce an estimate $\tilde{v}$ such that $\lvert v - \tilde{v} \rvert \leq \delta$, where $v = \bra \psi O \ket \psi/ \lVert O \rVert$ is the expectation value of the operator $O/\lVert O \rVert$ in the ground state $\ket{\psi}$ of $H$. \end{problem} Problem \ref{prob:energydensity} is the restriction of Problem \ref{prob:localprops} to the case where $k=2$ and the operator $O$ is the interaction term $H_{i,i+1}$. Thus, there is a trivial reduction from Problem \ref{prob:energydensity} to Problem \ref{prob:localprops} with $\delta = \epsilon$. However, the next theorem, whose proof is presented in Section \ref{sec:proofs}, states a much more powerful reduction. \begin{theorem}\label{thm:reduction} Suppose one has an algorithm that solves Problem \ref{prob:localprops} for any single-site ($k=1$) operator $O$ and $\delta = 0.9$ in $f(\Delta,d,N)$ time, under the promise that the Hamiltonian $H$ has spectral gap at least $\Delta$. Here $d$ denotes the local dimension of $H$ and $N$ the length of the chain. Then there is an algorithm for Problem \ref{prob:energydensity} under the same promise that runs in time \begin{equation} f\left(\frac{\min(2\Delta,(N-1)\epsilon,2)}{12},2d,N\right)O(\log(1/\epsilon)). \end{equation} \end{theorem} Estimating the energy density to precision $\epsilon$ is equivalent to measuring the total energy to precision $\epsilon(N-1)$, so the quantity $\min(2\Delta,(N-1)\epsilon,2)$ is equivalent to the global energy resolution, twice the gap, or two, whichever is smallest. Thus, one may take $\epsilon = O(1/N)$ and understand the theorem as stating that finding local properties to within $O(1)$ precision can be done at most a factor of $O(\log(N))$ faster than finding an estimate of the total ground state energy to $O(1)$ precision. If local properties can be found in time independent of $N$ (i.e.~there is an $N$-independent upper bound to $f$), then the ground state energy can be estimated to $O(1)$ precision in time $O(\log(N))$, which would be optimal since the ground state energy scales extensively with $N$, and $\Omega(\log(N))$ time would be needed simply to write down the output. Another way of understanding the significance of the theorem is in the thermodynamic limit. Here it states that if one could estimate expectation values of local observables in the thermodynamic limit to $O(1)$ precision in some finite amount of time (for constant $\Delta$ and $d$), then one could compute the ground state energy density of such Hamiltonians to precision $\epsilon$ in $O(\log(1/\epsilon))$ time.
This would be an exponential speedup over the best-known algorithm for computing the energy density given in \cite{huang2015computing}, which has runtime $\text{poly}(1/\epsilon)$. Taking the contrapositive, if one could show that $\text{poly}(1/\epsilon)$ time is necessary for computing the energy density, this would imply that Problem \ref{prob:localprops} with $\delta = O(1)$ is in general uncomputable in the thermodynamic limit, even given the promise that the input Hamiltonian is gapped. It is already known that Problem \ref{prob:localprops} is uncomputable when there is no such promise \cite{bausch2018undecidability}. It is not clear whether an $O(\log(1/\epsilon))$-time algorithm for computing the energy density is possible. The $\text{poly}(1/\epsilon)$ algorithm in \cite{huang2015computing} works even when the Hamiltonian is not translationally invariant, but it is not immediately apparent to us how one might exploit translational invariance to yield an exponential speedup. \section{Proofs}\label{sec:proofs} \subsection{Important lemmas for Theorems \ref{thm:improvedbd} and \ref{thm:mainthm}} The two lemmas stated here are used in the proofs of both Theorem \ref{thm:improvedbd} and Theorem \ref{thm:mainthm}. The first lemma captures the essence of the area laws stated previously, and will be essential when we want to bound the error incurred by truncating a state along a certain cut. \begin{lemma}[Area laws \cite{brandao2015exponential}, \cite{arad2013area}]\label{lem:lowrank} If $\sigma = \ket{\psi}\bra{\psi}$ has $(t_0, \xi)$-exponential decay of correlations then for any $\chi$ and any region of the chain $A$, there is a state $\tilde{\sigma}_A$ with rank at most $\chi$ such that \begin{equation}\label{eq:lemmaarealaw} D(\sigma_A, \tilde{\sigma}_A) \leq C_1\exp\left(-\frac{\log(\chi)}{8\xi\log(d)}\right), \end{equation} where $C_1 = \exp(t_0\exp(\tilde{O}(\xi\log(d))))$ is a constant independent of $N$. If $\sigma$ is the unique ground state of a nearest-neighbor Hamiltonian with spectral gap $\Delta$, then this can be improved to \begin{equation}\label{eq:lemmabetterarealaw} D(\sigma_A, \tilde{\sigma}_A) \leq C_2\exp\left(-\tilde{O}\left(\frac{\Delta^{1/3}\log^{4/3}(\chi)}{\log^{4/3}(d)}\right)\right), \end{equation} where $C_2 = \exp(\tilde{O}(\log^{8/3}(d)/\Delta))$. \end{lemma} \begin{proof} The first part follows from the main theorem of \cite{brandao2015exponential}. The second follows from the 1D area law presented in \cite{arad2013area}, with the $\log(d)$ dependence explicitly stated in \cite{arad2017rigorous}. \end{proof} In both proofs we will also utilize the well-known fact that Schmidt ranks cannot differ by more than a factor of $d$ between any two neighboring cuts on the chain. \begin{lemma}\label{lem:rankrelations} Any state $\sigma_{AB}$ on a bipartite system $AB$ satisfies the following relations. \begin{align} \text{rank}(\sigma_{AB})\text{rank}(\sigma_B) &\geq \text{rank}(\sigma_A) \\ \text{rank}(\sigma_{AB})\text{rank}(\sigma_A) &\geq \text{rank}(\sigma_B) \\ \text{rank}(\sigma_A) \text{rank}(\sigma_B) &\geq \text{rank}(\sigma_{AB}) \end{align} \end{lemma} \begin{proof} We can purify $\sigma_{AB}$ with an auxiliary system $C$ into the state $\ket{\eta}$. We can let $\sigma = \ket{\eta}\bra{\eta}$ and note that $\text{rank}(\sigma_{AB}) = \text{rank}(\sigma_C)$. Thus each of these three equations says the same thing with permutations of $A$, $B$, and $C$. We will show the first equation.
Write the Schmidt decomposition \begin{equation} \ket{\eta} = \sum_{j=1}^{\text{rank}(\sigma_{AB})} \lambda_j \ket{\nu_j}_{AB} \otimes \ket{\omega_j}_C \end{equation} and then decompose $\ket{\nu_j}$ to find \begin{equation} \ket{\eta} = \sum_{j=1}^{\text{rank}(\sigma_{AB})}\sum_{k=1}^{\text{rank}(\sigma_B)} \lambda_j \gamma_{jk}\ket{\tau_{jk}}_A \otimes \ket{\mu_k}_{B} \otimes \ket{\omega_j}_C, \end{equation} where $\{\ket{\mu_k}\}_{k=1}^{\text{rank}(\sigma_B)}$ are the eigenvectors of $\sigma_{B}$. This shows that the support of $\sigma_A$ is spanned by the set of $\ket{\tau_{jk}}$ and thus its rank can be at most $\text{rank}(\sigma_{AB})\text{rank}(\sigma_B)$. \end{proof} \begin{corollary}\label{cor:schmidtrankincrease} If $\ket{\eta}$ is a state on a chain of $N$ sites with local dimension $d$, and the Schmidt rank of $\ket{\eta}$ across the cut between sites $m$ and $m+1$ is $\chi$, then the Schmidt rank of $\ket{\eta}$ across the cut between sites $m'$ and $m'+1$ is at most $\chi d^{\lvert m - m' \rvert}$. \end{corollary} \begin{proof} Without loss of generality, assume $m \leq m'$. The reduced density matrix of $\ket{\eta}\bra{\eta}$ on sites $[m+1,m']$ has rank at most $d^{\lvert m'-m\rvert}$ since this is the dimension of the entire Hilbert space on that subsystem. Meanwhile the rank of the reduced density matrix on sites $[1,m]$ is $\chi$. So by the previous lemma, the rank over sites $[1,m']$ is at most $\chi d^{\lvert m'-m\rvert}$. \end{proof} \subsection{Proof of Theorem \ref{thm:improvedbd}} First we state and prove a lemma that will be essential for showing the first part of Theorem \ref{thm:improvedbd}. Then we provide a proof summary of Theorem \ref{thm:improvedbd}, followed by its full proof. \begin{lemma}\label{lem:absorbentropy} Given two quantum systems $A$ and $B$ and states $\tau_A$ on $A$ and $\tau_B$ on $B$, there exists a state $\sigma_{AB}$ on the joint system $AB$ such that $\sigma_A = \tau_A$, $\sigma_B = \tau_B$, and $\text{rank}(\sigma_{AB}) \leq \max(\text{rank}(\tau_A), \text{rank}(\tau_B))$. \end{lemma} \begin{proof} We will apply an iterative procedure. For round 1 let $\alpha_1 = \tau_A$ and $\beta_1 = \tau_B$. In round $j$, write the spectral decompositions \begin{align} \alpha_j &= \sum_{i=1}^{a_j} \lambda_{j,i} \ket{s_{j,i}}\bra{s_{j,i}}_A\\ \beta_j &= \sum_{i=1}^{b_j} \mu_{j,i} \ket{r_{j,i}}\bra{r_{j,i}}_B, \end{align} where $a_j$ and $b_j$ are the ranks of states $\alpha_j$ and $\beta_j$, eigenvectors $\{s_{j,i}\}_{i=1}^{a_j}$ and $\{r_{j,i}\}_{i=1}^{b_j}$ form orthonormal bases of the Hilbert spaces of systems $A$ and $B$, respectively, and eigenvalues $\{\lambda_{j,i}\}_{i=1}^{a_j}$ and $\{\mu_{j,i}\}_{i=1}^{b_j}$ are non-decreasing with increasing index $i$ (i.e.~smallest eigenvalues first). Then define \begin{equation}\label{eq:uj} \ket{u_j} = \sum_{i=1}^{\min(a_j,b_j)} \sqrt{\min(\lambda_{j,i},\mu_{j,i})} \ket{s_{j,i}}_A \otimes \ket{r_{j,i}}_B, \end{equation} which may not be a normalized state. Define the recursion relation \begin{align} \alpha_{j+1} &= \alpha_j - \text{Tr}_{B}(\ket{u_j}\bra{u_j})\nonumber \\ \beta_{j+1} &= \beta_j - \text{Tr}_{A}(\ket{u_j}\bra{u_j}) \label{eq:recursion} \end{align} and repeat until round $m$ when $\alpha_{m+1} = \beta_{m+1} = 0$. Let \begin{equation}\label{eq:sigmaAB} \sigma_{AB} = \sum_{j=1}^m \ket{u_j} \bra{u_j}. \end{equation} Clearly $\text{rank}(\sigma_{AB}) \leq m$. We claim that $m \leq \max(\text{rank}(\tau_A), \text{rank}(\tau_B))$.
To show this we note that \begin{equation}\label{eq:rankdecrease} a_{j+1} + b_{j+1} \leq \max(a_j, b_j). \end{equation} We can see this is true by inspecting the $i$th term in the Schmidt decomposition of $\ket{u_j}$ in Eq.~\eqref{eq:uj}, and noting that either its reduced density matrix on system $A$ is $\lambda_{j,i}\ket{s_{j,i}} \bra{s_{j,i}}$ or its reduced density matrix on system $B$ is $\mu_{j,i}\ket{r_{j,i}} \bra{r_{j,i}}$ (or both). So when the reduced density matrices of $\ket{u_j}\bra{u_j}$ are subtracted from $\alpha_j$ and $\beta_j$ to form $\alpha_{j+1}$ and $\beta_{j+1}$ in Eqs.~\eqref{eq:recursion}, each of the $\min(a_j, b_j)$ terms causes the combined rank $a_{j+1} + b_{j+1}$ to decrease by at least one in comparison to $a_j+b_j$. This alone implies that $m \leq \max(a_1, b_1)+1$, since by Eq.~\eqref{eq:rankdecrease}, the sequence $\{a_j+b_j\}_j$ must decrease by at least $\min(a_1,b_1)$ after the first round, and then by at least $1$ in every other round, reaching 0 when $j = \max(a_1,b_1)+1$. However, we can also see that the last round must see a decrease by at least 2, because it is impossible for $a_{m} = 0$ and $b_m =1$ or vice versa (since $\text{Tr}(\alpha_j)$ must equal $\text{Tr}(\beta_j)$ for all $j$). Thus $m \leq \max(a_1,b_1)$. Moreover, Eqs.~\eqref{eq:recursion} and \eqref{eq:sigmaAB} imply that $\sigma_A = \alpha_1 = \tau_A$ and $\sigma_B = \beta_1 = \tau_B$. \end{proof} \begin{figure}[ht] \includegraphics[width=\linewidth]{proof_figure_thm1.png} \caption{\label{fig:improvedbd}Schematic overview of the proof of Theorem \ref{thm:improvedbd}. Many states $\ket{\phi_j}$ are constructed with staggered divisions between regions $M_{j,i}$ of length $l$, then the $\ket{\phi_j}$ are summed in superposition. Properties supported within a length-$k$ region $X$ are faithfully captured by $\ket{\phi_j}$ for values of $j$ such that $X$ does not overlap the boundaries between regions $M_{j,i}$. Most values of $j$ qualify under this criterion as long as $l$ is much larger than $k$. Additional structure is defined (the regions $B_{j,j'}$ in Part 1, and $B_{j,i}$ in Part 2) in order to force $\braket{\phi_j}{\phi_{j'}} =\delta_{jj'}$, but this structure is not depicted in the schematic.} \end{figure} \begin{proof}[Proof summary of Theorem \ref{thm:improvedbd}] In Ref.~\cite{schuch2017matrix}, an MPO that is a $(k,\epsilon)$-local approximation to a given state $\ket{\psi}$ was formed by dividing the chain into many length-$l$ segments, tensoring together a low-bond-dimension approximation of the exact state reduced to each segment, and then summing over (as a mixture) translations of locations for the divisions between the segments. We follow the same idea but for pure states: for each integer $j = 0,\ldots, l-1$, we divide the state into many length-$l$ segments and create a pure state approximation $\ket{\phi_j}$ that captures any local properties that are supported entirely within one of the segments. Then to form $\kettilde{\psi}$, we sum in superposition over all the $\ket{\phi_j}$, where each $\ket{\phi_j}$ has boundaries between segments occurring in different places (see Figure \ref{fig:improvedbd}). Thus, for any length-$k$ region $X$, a large fraction of the terms $\ket{\phi_j}$ in the superposition are individually good approximations on region $X$. The fact that a small fraction of the terms are not necessarily a good approximation creates additional, but small, error in the local approximation.
In order to avoid interaction between different terms in the superposition, we add additional structure to make the $\ket{\phi_j}$ in the superposition exactly orthogonal to one another. In our construction for general states, this additional structure consists of a set of disjoint, sparsely distributed, single-site regions $B_{j,j'}$, one for each pair of integers $j \neq j'$ with $0\leq j, j' < l$. We force $\ket{\phi_j}$ to be the pure state $\ket{0}$ and $\ket{\phi_{j'}}$ to be $\ket{1}$ when reduced to $B_{j,j'}$ to guarantee that $\braket{\phi_j}{\phi_{j'}}=0$. Our construction for states that have exponential decay of correlations is similar: we define a series of regions $B_{j,i}$ and for each pair $(j,j')$ force $\ket{\phi_j}$ and $\ket{\phi_{j'}}$ to have orthogonal supports when reduced to one of these regions. Our approach for constructing $\ket{\phi_j}$ differs if $\ket{\psi}$ is a general state (Part 1), or if it is a state that either has exponentially decaying correlations or (additionally) is the ground state of a nearest-neighbor Hamiltonian (Part 2). In the latter case, we examine each length-$l$ segment individually and truncate the bonds of the exact state on all but a few of the rightmost sites within that segment. The area law implies these truncations have minimal effect. We use those few rightmost sites to purify the mixed state on the rest of the segment. Then $\ket{\phi_j}$ is a tensor product over pure states on each of the segments. The bond dimension can be bounded within each segment of $\ket{\phi_j}$, which is sufficient to bound the bond dimension of $\kettilde{\psi}$. This does not work for general states because without an area law, bond truncations result in too much error, and without the truncations we do not have enough room to purify the state. For a general state, there is simply too much entropy in the length-$l$ segment to fully absorb with only a few sites at the edge of the segment. Instead, we have the various segments absorb each other's entropy by developing a procedure to engineer entanglement between different length-$l$ segments that exactly preserves the reduced density matrix on each segment and keeps the Schmidt rank constant (albeit exponential in $k/\epsilon$) across any cut. The crux of this procedure is captured in Lemma \ref{lem:absorbentropy}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:improvedbd}] \textit{Part 1: Items (1) and (2)} \\ First we consider the case of a general state $\ket{\psi}$ and construct an approximation $\kettilde{\psi}$ satisfying items (1) and (2). A different construction is given in Part 2 to show items (2') and (2''), but it is similar in approach. Throughout this proof, we use a convention where sites are numbered $0$ to $N-1$, which differs from the rest of the paper. We choose an integer $l$, to be specified later. We require $l > k$. To construct $\kettilde{\psi}$, we will first construct states $\ket{\phi_j}$ for $j=0,\ldots, l-1$ and then sum these states in superposition. Any reference to the index $j$ will be taken mod $l$. Consider a fixed value of $j$. To construct $\ket{\phi_j}$ we break the chain into $n = \lfloor N/l \rfloor$ blocks where $n-1$ blocks $\{M_{j,i}\}_{i=1}^{n-1}$ have length exactly $l$ and the final block $M_{j,0}$ has length at least $l$ and less than $2l$.
We arrange the blocks so that the leftmost site of block $M_{j,1}$ is site $j$; thus block $M_{j,i}$ contains the sites $[j+l(i-1),j+li-1]$ for $i=1,\ldots, n-1$, and the last block $M_{j,0}$ ``rolls over'' to include sites at both ends of the chain: sites $[j+l(n-1),N-1]$ and $[0,j-1]$. A schematic of the arrangement is shown in Figure \ref{fig:improvedbd}. Any reference to the index $i$ will be taken mod $n$. We also define $l^2-l$ single-site blocks on the chain which we label $B_{j,j'}$ for all pairs $j \neq j'$. $B_{j,j'}$ consists of the site with index $3l^2(j+j')+l(j-j')+2l^2$. This definition is possible as long as there are sufficiently many sites: $N \geq N_0 = O(l^3)$. It can be verified that since $0 \leq j,j' \leq l-1$, the distance between any $B_{j,j'}$ and $B_{j'',j'''}$ for any distinct pairs $(j,j')$ and $(j'',j''')$ is at least $l$. For each $j$, let $B_{j} = \cup_{j' \neq j} (B_{j,j'} \cup B_{j',j})$. For each $j,i$, let $A_{j,i} = M_{j,i} \setminus (M_{j,i}\cap B_j)$. Thus, in most cases $A_{j,i} = M_{j,i}$ since $B_j$ is a relatively small set of sites. Let $A_j = \cup_{i=0}^{n-1} A_{j,i} = B_j^c$, the complement of $B_j$. The state $\ket{\phi_j}$ will have the form \begin{equation}\label{eq:phij1} \ket{\phi_j} = \ket{Q_{j}}_{A_j} \otimes \bigotimes_{j'\neq j}\left( \ket{0}_{B_{j,j'}}\otimes \ket{1}_{B_{j',j}}\right), \end{equation} where $\ket{0}$ and $\ket{1}$ are two of the $d$ computational basis states located on a single site. In other words $\ket{\phi_j}$ is a product state over all single-site regions $B_{j,j'}$ with some other (yet to be specified) state $\ket{Q_j}$ on the remainder of the chain $A_j$. To construct $\ket{Q_j}$, we apply Lemma \ref{lem:absorbentropy} iteratively as follows. We let $\sigma_1 = \rho_{A_{j,j+2}}$ (the reduced density matrix of the exact state $\rho$ on region $A_{j,j+2}$). We combine $\sigma_1$ with $\rho_{A_{j,j+3}}$ using Lemma \ref{lem:absorbentropy} to form a state $\sigma_2$ on region $A_{j,j+2}A_{j,j+3}$ such that $\text{rank}(\sigma_2) \leq \max( \text{rank}(\sigma_1), \text{rank}(\rho_{A_{j,j+3}}))$. For any $j,i$, the rank of the state on any region $A_{j,i}$ is at most $d^{2l}$ since any region contains at most $2l$ sites. So if we apply this process iteratively, forming $\sigma_{p+1}$ by combining $\sigma_p$ and the state on region $A_{j,j+p+2}$ ($j+p+2$ is taken mod $n$), then we end up with a state $\sigma_{n-2}$ with rank at most $d^{2l}$ defined over all of $A_j$ except $A_{j,j}$ and $A_{j,j+1}$. Since by construction $B_j$ contains no sites with index smaller than $2l^2$ and $A_{j,j}$ and $A_{j,j+1}$ are contained within the first $2l^2$ sites, we have $A_{j,j}=M_{j,j}$ and $A_{j,j+1} = M_{j,j+1}$, meaning each of these two regions contains at least $l$ sites and the total dimension of the Hilbert space over $A_{j,j}A_{j,j+1}$ is at least $d^{2l}$. Thus we may use regions $A_{j,j}$ and $A_{j,j+1}$ to purify the state $\sigma_{n-2}$. We let $\ket{Q_j}$ be any such purification. The key observation is that the state $\ket{\phi_j}$, as defined by Eq.~\eqref{eq:phij1}, will get any local properties exactly correct as long as they are supported entirely within a segment $A_{j,i}$ for some $i \neq j, j+1$. As long as $l$ is large, this will be the case for most length-$k$ regions, but it will not be the case for some regions that cross the boundaries between regions $A_{j,i}$ or for regions that contain one of the single-site regions $B_{j,j'}$ or $B_{j',j}$.
To fix this we sum in \textit{superposition} over the states $\ket{\phi_j}$ for each value of $j$. The motivation to do this is so that every length-$k$ region will be contained within $A_{j,i}$ for some value of $i$ in most, but not all, of the terms in the superposition. We will show that most is good enough. We let \begin{equation} \kettilde{\psi} = \frac{1}{\sqrt{l}}\sum_{j=0}^{l-1}\ket{\phi_j}. \end{equation} We note that $\braket{\phi_j}{\phi_{j'}} = \delta_{jj'}$ since $\ket{\phi_j}$ is simply $\ket{0}$ when reduced to region $B_{j,j'}$ and $\ket{1}$ when reduced to region $B_{j',j}$, while $\ket{\phi_{j'}}$ is $\ket{1}$ when reduced to region $B_{j,j'}$ and $\ket{0}$ when reduced to $B_{j',j}$. Thus $\kettilde{\psi}$ is normalized: \begin{equation} \langle \tilde{\psi} | \tilde{\psi} \rangle = \frac{1}{l}\sum_{j,j'} \braket{\phi_j}{\phi_{j'}} = 1. \end{equation} This completes the construction of the approximation. We now wish to show it has the desired properties. To show item (1), we compute the (local) distance from $\ket{\psi}$ to $\kettilde{\psi}$. Let $\tilde{\rho} = \kettilde{\psi}\bratilde{\psi}$ and consider an arbitrary length-$k$ region $X$. We may write \begin{align} & D_1(\rho_X, \tilde{\rho}_X) \nonumber\\ ={}&\frac{1}{2}\left\lVert\left(\frac{1}{l} \sum_{j=0}^{l-1}\sum_{j'=0}^{l-1}\text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_{j'}})\right)-\text{Tr}_{X^c}(\ket{\psi}\bra{\psi})\right\rVert_1. \end{align} First we examine terms in the sum for which $j \neq j'$. Since $B_{j,j'}$ and $B_{j',j}$ are separated by at least $l$ sites and $l > k$, $X^c$ must include either $B_{j,j'}$ or $B_{j',j}$ (or both). Since $\ket{\phi_j}$ and $\ket{\phi_{j'}}$ have orthogonal support on both those regions, and at least one of them is traced out, the term vanishes. Thus we have \begin{align} D_1(\rho_X, \tilde{\rho}_X) &= \frac{1}{2}\left\lVert\frac{1}{l} \sum_{j=0}^{l-1}\text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_{j}}-\ket{\psi}\bra{\psi})\right\rVert_1 \nonumber\\ &\leq\frac{1}{l} \sum_{j=0}^{l-1}D_1\left(\rho_X, \text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_j})\right). \label{eq:tracesumj2} \end{align} For a particular $j$, there are two cases. Case 1 includes values of $j$ for which $X$ falls completely within the $A_{j,i}$ for some $i$ with $i \neq j, j+1$. For these values of $j$ the term vanishes because the reduced density matrix of $\ket{\phi_j}\bra{\phi_j}$ on $X$ is exactly $\rho_X$. Case 2 includes all other values of $j$. For this to be the case, either $X$ spans the boundary between two regions $M_{j,i}$ and $M_{j,i+1}$ (at most $k-1$ different values of $j$), $X$ contains a site $B_{j,j'}$ or $B_{j',j}$ for some $j'$ (at most 2 values of $j$, since the separation between sites $B_{j,j'}$ implies only one may lie within $X$), or $X$ is contained within $A_{j,j}$ or $A_{j,j+1}$ (at most 2 values of $j$). In this case, the term will not necessarily be close to zero, but we can always upper bound the trace distance by 1. The number of terms in the sum that qualify as Case 2 is therefore at most $k+3$, and the total error can be bounded: \begin{equation} D_1(\rho_X, \tilde{\rho}_X) \leq \frac{k+3}{l}. \end{equation} Choosing $l = (k+3)/\epsilon$ shows item (1), that $\kettilde{\psi}$ is a $(k,\epsilon)$-local approximation to $\ket{\psi}$. To show item (2), we bound the Schmidt rank of the state $\kettilde{\psi}$ across every cut that bipartitions the chain into two contiguous regions. 
Since $\kettilde{\psi}$ is a superposition over $l$ terms $\ket{\phi_j}$, the Schmidt rank can be at most $l$ times greater than that of an individual $\ket{\phi_j}$. Fix some value of $j$ and some cut of the chain at site $s$. Since $\ket{\phi_j}$ is a product state between regions $A_j$ and $B_j$, and moreover it is a product state on each individual site in $B_j$, we may ignore $B_j$ when calculating the Schmidt rank (it has no entanglement), and focus merely on $\ket{Q_j}_{A_j}$. We constructed the state $\ket{Q_j}$ by building up mixed states $\sigma_p$ on region $A_{j,j+2}\ldots A_{j,j+p+1}$ until $p = n-2$, then purifying with the remaining two regions. Each $\sigma_p$ has $\text{rank}(\sigma_p) \leq d^{2l}$. Now consider an integer $b$ with $1 \leq b \leq n-1$ and $b \neq j,j+1$. Denote $\sigma = \sigma_{n-2}$ and note that \begin{align} &\text{rank}(\sigma_{A_{j,1}\ldots A_{j,b}}) \nonumber\\ \leq{}&\text{rank}(\sigma_{A_{j,j+2}\ldots A_{j,b}})\text{rank}(\sigma_{A_{j,j+2}\ldots A_{j,0}}) \nonumber\\ ={}&\text{rank}(\sigma_{b-j-1})\text{rank}(\sigma_{n-j-1}) \leq d^{4l}, \end{align} where the first inequality follows from Lemma \ref{lem:rankrelations}. Moreover, we may choose $b$ such that the cut between $A_{j,b}$ and $A_{j,b+1}$ falls within $l$ sites of site $s$. We find that the region $A_{j,1}\ldots A_{j,b}$ differs from the region containing sites $[0,s]$ by at most $l$ sites at each edge. Thus the Schmidt rank on the region left of the cut can be at most a factor of $d^{2l}$ larger than that of $A_{j,1}\ldots A_{j,b}$ (Corollary \ref{cor:schmidtrankincrease}), giving a bound of $d^{6l}$ for the Schmidt rank of $\ket{\phi_j}$. This implies the Schmidt rank of $\kettilde{\psi}$ is at most $ld^{6l}$, which proves item (2). This applies whenever $N \geq N_0 = O(l^3) = O(k^3/\epsilon^3)$, a bound which must be satisfied in order for the chain to be long enough to fit all the regions $B_{j,j'}$ as defined above. This completes Part 1. \vspace{12 pt} \textit{Part 2: Items (2') and (2'')} \\ This construction is mostly similar to the previous one with a few key differences. We choose integers $l$, $t$, and $\chi'$ to be specified later. We require that $t$ be even and that $l \geq 2k$ and $l \geq 2t$. We assume $N \geq N_0 = 2l^2$. We also require that $d \geq 4$; if this is not the case, we coarse-grain the system by combining neighboring sites, so henceforth we may assume $d \geq 4$. As in Part 1, to construct $\kettilde{\psi}$, we will first construct states $\ket{\phi_j}$ for $j=0,\ldots, l-1$ and then sum these states in superposition. Consider a fixed value of $j$. To construct $\ket{\phi_j}$ we break the chain into $n = \lfloor N/l \rfloor$ blocks and we arrange them exactly as in Part 1. Now the construction diverges from Part 1: the state $\ket{\phi_j}$ will be a product state over each of these blocks \begin{equation} \ket{\phi_j} = \ket{\phi_{j,0}}_{M_{j,0}} \otimes \ldots \otimes \ket{\phi_{j,n-1}}_{M_{j,n-1}} \end{equation} with states $\ket{\phi_{j,i}}$ for $i=0,\ldots, n-1$ that we now specify. The idea is to create a state $\ket{\phi_{j,i}}$ that has nearly the same reduced density matrix as $\ket{\psi}$ on the leftmost $l-t$ sites of region $M_{j,i}$. It uses the rightmost $t$ sites to purify the reduced density matrix on the leftmost $l-t$ sites. First we denote the leftmost $l-t$ sites of block $M_{j,i}$ by $A_{j,i} = [j+l(i-1),j+li-t-1]$ and the rightmost $t$ sites by $B_{j,i} = [j+li-t,j+li-1]$ (or appropriate ``roll over'' definitions when $i=0$).
We write $\ket{\psi}$ as an exact MPS with exponential bond dimension and form $\ket{\psi_{j,i}}$ by truncating to bond dimension $\chi'$ (i.e.~projecting onto the span of the right Schmidt vectors associated with the largest $\chi'$ Schmidt coefficients, then normalizing the state) at the cut to the left of region $A_{j,i}$, every cut within $A_{j,i}$, and at the cut to the right of $A_{j,i}$, for a total of at most $2l-t$ truncations (recall the final region $M_{j,0}$ may have as many as $2l-1$ sites). We denote the pure density matrix of this state by $\rho^{(j,i)} = \ket{\psi_{j,i}}\bra{\psi_{j,i}}$. We can bound the effect of these truncations using the area law given by Lemma \ref{lem:lowrank}: \begin{equation} D(\rho^{(j,i)}, \rho) \leq \sqrt{2l-t}\epsilon^{\chi'}, \end{equation} where $\epsilon^{\chi'}$ is the cost (in purified distance) of a single truncation, given by the right hand side of Eq.~\eqref{eq:lemmaarealaw}, or by Eq.~\eqref{eq:lemmabetterarealaw} in the case that $\ket{\psi}$ is the ground state of a gapped nearest-neighbor 1D Hamiltonian. These truncations were not possible in Part 1 because we could not invoke the area law for general states. Because of the truncations, we can express the reduced density matrix $\rho_{A_{j,i}}^{(j,i)}$ as a mixture of $\chi'^2$ pure states $\{\ket{\phi_{j,i,z}}\}_{z=0}^{\chi'^2-1}$, each of which can be written as an MPS with bond dimension $\chi'$: \begin{equation} \rho_{A_{j,i}}^{(j,i)} = \sum_{z=0}^{\chi'^2-1}p_{j,i,z} \ket{\phi_{j,i,z}}\bra{\phi_{j,i,z}}, \end{equation} for some probability distribution $\{p_{j,i,z}\}_{z=0}^{\chi'^2-1}$. We now form $\ket{\phi_{j,i}}$ by purifying $\rho_{A_{j,i}}^{(j,i)}$ onto the region $M_{j,i}$ using the space $B_{j,i}$, which contains $t$ sites, as the purifying subspace: \begin{equation} \ket{\phi_{j,i}}_{M_{j,i}}= \sum_{z=0}^{\chi'^2-1}\sqrt{p_{j,i,z}} \ket{\phi_{j,i,z}}_{A_{j,i}} \otimes \ket{r_{j,i,z}}_{B_{j,i}}, \end{equation} where the set of states $\{\ket{r_{j,i,z}}\}_{z=0}^{\chi'^2-1}$ is an orthonormal set defined on region $B_{j,i}$. This purification will only be possible if the dimension of $B_{j,i}$ is sufficiently large, and we comment later on this fact, as well as how exactly to choose the set $\{\ket{r_{j,i,z}}\}_{z=0}^{\chi'^2-1}$. The key observation is that the state $\ket{\phi_j}$ will get any local properties approximately correct as long as they are supported entirely within a segment $A_{j,i}$ for some $i$, and as long as $\chi'$ is large enough that the $2l-t$ truncations do not have much effect on the reduced density matrix there. Thus we will choose our parameters so that $\chi'$ is large (but independent of $N$), such that $l$ is much larger than $t$ and $k$ (so that most regions fall within a region $A_{j,i}$), and such that $t$ is large enough that it is possible to purify states on $A_{j,i}$ onto $M_{j,i}$. But, as in Part 1, we have the issue that some regions will not be contained entirely within region $A_{j,i}$ for some $i$. We again deal with this issue by summing in superposition: \begin{equation} \kettilde{\psi} = \frac{1}{\sqrt{l}}\sum_{j=0}^{l-1}\ket{\phi_j}. \end{equation} To complete the construction we also must specify the orthonormal states $\{\ket{r_{j,i,z}}\}_{z=0}^{\chi'^2-1}$ defined on the $t$ sites in region $B_{j,i}$. We choose a set that satisfies the following requirements.
\begin{enumerate}[(1)] \item The reduced density matrix of $\ket{r_{j,i,z}}$ on any single site among the leftmost $t/2$ sites of $B_{j,i}$ (recall we have assumed $t$ is even) is entirely supported on basis states $1, \ldots, \lfloor d/2 \rfloor$. \item The reduced density matrix on any single site among the rightmost $t/2$ sites is entirely supported on basis states $\lfloor d/2 \rfloor +1, \ldots, d$. \item Let $j' = j+i \mod l$, and let $i_0 = i \mod l$. If $t \leq i_0 \leq l-t$ then for all $z$, $\ket{r_{j,i,z}}$ is orthogonal to the support of the reduced density matrix of $\ket{\phi_{j'}}$ on region $B_{j,i}$. \end{enumerate} We assess how large $t$ must be for it to be possible to satisfy these three conditions. The third item specifically applies only for values of $i$ that lead to values of $j'$ that are at least $t$ away from $j$ (modulo $l$) so that the purifying system $B_{j,i}$ does not overlap with $B_{j',i'}$ for any $i'$. The support of the reduced density matrix of any $\ket{\phi_{j'}}$ on region $B_{j,i}$ has dimension at most $\chi'^2$. Thus, if the dimension of $B_{j,i}$ is at least $2\chi'^2$ it will always be possible to choose an orthonormal set $\{\ket{r_{j,i,z}}\}_{z=0}^{\chi'^2-1}$ satisfying the third condition. The first and second conditions cut the accessible part of the local dimension of the purifying system in half, so a purification that satisfies all three conditions will be possible if $\lfloor d/2 \rfloor^t \geq 2\chi'^2$. Any choice of set that meets all three conditions is equally good for our purposes. We now demonstrate that the three conditions imply that for any pair $(j,j')$ there are regions of the chain on which the supports of the reduced density matrices of states $\ket{\phi_j}$ and $\ket{\phi_{j'}}$ are orthogonal. If it is the case that $j-j' \mod l < t$ or $j'-j \mod l < t$, then for every $i$ the region $B_{j,i}$ overlaps with $B_{j',i'}$ for some $i'$. Because $B_{j,i} \neq B_{j',i'}$, there will be some site that is in the right half of one of the two regions, but in the left half of the other, and items 1 and 2 imply that the two states will be orthogonal when reduced to this site. If this is not the case, then as long as there is some $i$ for which $j' = j+i \mod l$, item 3 implies the orthogonality of the supports of $\ket{\phi_j}$ and $\ket{\phi_{j'}}$. In fact because $n \geq 2l$, there will be at least 2 such values of $i$. We conclude that $\braket{\phi_j}{\phi_{j'}} = \delta_{jj'}$, which implies that $\kettilde{\psi}$ is normalized as shown by the computation \begin{equation} \langle \tilde{\psi} | \tilde{\psi} \rangle = \frac{1}{l}\sum_{j,j'} \braket{\phi_j}{\phi_{j'}} = 1. \end{equation} We have now shown how to define the approximation $\kettilde{\psi}$ and discussed the conditions for the parameters $t$, $\chi'$, and $d$ that make the construction possible. Now we assess the error in the approximation (locally). Let $\tilde{\rho} = \kettilde{\psi}\bratilde{\psi}$ and consider an arbitrary length-$k$ region $X$. We may write \begin{align} & D_1(\rho_X, \tilde{\rho}_X) \nonumber\\ ={}&\frac{1}{2}\left\lVert\left(\frac{1}{l} \sum_{j=0}^{l-1}\sum_{j'=0}^{l-1}\text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_{j'}})\right)-\text{Tr}_{X^c}(\ket{\psi}\bra{\psi})\right\rVert_1. \end{align} For the same reason that led to the conclusion $\braket{\phi_j}{\phi_{j'}} = \delta_{jj'}$, we can conclude that $\text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_{j'}}) = 0$ whenever $j \neq j'$, so long as $k$ is smaller than $l/2$.
To see that this holds, it is sufficient to show that there is a region lying completely outside of $X$ with the property that $\ket{\phi_j}$ and $\ket{\phi_{j'}}$ share no support on the region. Since $k = \lvert X \rvert \leq l/2$ and $\lvert B_{j,i}\rvert = t \leq l/2 $, for any $j$, $X$ can overlap the region $B_{j,i}$ for at most one value of $i$. We showed before that for any $j$ there would be at least two values of $i$ for which a subregion of $B_{j,i}$ has this property, implying one of them must lie outside $X$. Thus we have \begin{align} D_1(\rho_X, \tilde{\rho}_X) &= \frac{1}{2}\left\lVert\frac{1}{l} \sum_{j=0}^{l-1}\text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_{j}}-\ket{\psi}\bra{\psi})\right\rVert_1 \nonumber\\ &\leq\frac{1}{l} \sum_{j=0}^{l-1}D_1\left(\rho_X, \text{Tr}_{X^c}(\ket{\phi_j}\bra{\phi_j})\right). \label{eq:tracesumj} \end{align} For a particular $j$, there are two cases. Case 1 occurs if $X$ falls completely within the $A_{j,i}$ for some $i$, in which case the only error is due to the $2l-t$ truncations to bond dimension $\chi'$. Since the trace distance $D_1$ is smaller than the purified distance $D$ (Lemma \ref{lem:purifiedvstrace}), the contribution for these values of $j$ is at most $\sqrt{2l-t}\epsilon^{\chi'}$. Case 2 includes values of $j$ for which $X$ does not fall completely within a region $A_{j,i}$ for any $i$. In this case, the term will not necessarily be close to zero, but we can always upper bound the trace distance by 1. The number of terms in the sum that qualify as Case 1 is of course bounded by $l$ (there are only $l$ terms) and the number of Case 2 terms is at most $t+k-1$. Thus the total error can be bounded: \begin{align} & D_1(\rho_X, \tilde{\rho}_X) \nonumber \\ \leq{}&\frac{1}{l} (l\sqrt{2l-t}\epsilon^{\chi'} + (t+k-1)) \nonumber\\ \leq{} & \sqrt{2l} \epsilon^{\chi'} + (k+t)/l. \end{align} We will choose parameters so this quantity is less than $\epsilon$. Parameters $l$ and $t$ will be related to $\chi'$ as follows. \begin{align} t &= \log(2\chi'^2)/\log(\lfloor d/2\rfloor)\\ l &= 2(k+t)/\epsilon \end{align} If $\ket{\psi}$ is known only to have exponentially decaying correlations, then we choose \begin{equation} \log(\chi') = 16 \xi \log(d)\log(16C_1\sqrt{ \xi \log(d)(k+3)}/\epsilon^{3/2}), \end{equation} where $C_1 = \exp(t_0\exp(\tilde{O}(\xi\log(d))))$ is the constant from Eq.~\eqref{eq:lemmaarealaw}. We note that $t \leq 3\log(\chi')$, so we can bound \begin{align} D_1(\rho_X, \tilde{\rho}_X) &\leq \frac{\sqrt{4(k+t)}C_1}{\sqrt{\epsilon}}e^{-\frac{\log(\chi')}{8\xi \log(d)}}+\frac{\epsilon}{2} \nonumber \\ &\leq \frac{\sqrt{4(k+3)\log(\chi')}C_1}{\sqrt{\epsilon}}e^{-\frac{\log(\chi')}{8\xi \log(d)}}+\frac{\epsilon}{2} \nonumber \\ &\leq \frac{\sqrt{64\xi\log(d)(k+3)}C_1}{\sqrt{\epsilon}}e^{-\frac{\log(\chi')}{16\xi \log(d)}}+\frac{\epsilon}{2} \nonumber \\ &\leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon, \end{align} where in the third line we have used the (crude) bound $\sqrt{u} \leq e^u$ with $u = \log(\chi')/(16\xi\log(d))$. In the case that $\ket{\psi}$ is known to be the ground state of a gapped local Hamiltonian, we may choose \begin{equation} \log(\chi') = \tilde{O}(\Delta^{-1/4} \log(d)\log^{3/4}(C_2 \sqrt{k+3}/\epsilon^{3/2})), \end{equation} where $C_2 = \exp(\tilde{O}(\log^{8/3}(d)/\Delta))$ is the constant in Eq.~\eqref{eq:lemmabetterarealaw} and the same analysis will follow. This proves item (1) of the theorem for the construction in Part 2.
Items (2') and (2'') assert that $\kettilde{\psi}$ can be written as an MPS with constant bond dimension, which we now show. Each state $\ket{\phi_j}$ is a product state with pure state $\ket{\phi_{j,i}}$ on each block $M_{j,i}$, and $\ket{\phi_{j,i}}$ has bond dimension at most $2\chi'^2$. Thus, if we cut the state $\ket{\phi_j}$ at a certain site, the bond dimension will be at most $4 \chi'^2$ (recall that block $M_{j,0}$ may have sites at both ends of the chain and can contribute to the bond dimension). Since $\kettilde{\psi}$ is a sum over $\ket{\phi_j}$ for $l$ values of $j$, the bond dimension $\chi$ of $\kettilde{\psi}$ is at most $4l \chi'^2$. For the case of exponentially decaying correlations, this evaluates to \begin{align} \chi &= 4l(256C_1^2 \xi \log(d)(k+3)/\epsilon^3)^{16\xi\log(d)} \nonumber\\ &\leq e^{t_0e^{\tilde{O}(\xi\log(d))}} (k/\epsilon^3)^{O(\xi\log(d))}, \end{align} proving item (2'). For the case of the ground state of a gapped Hamiltonian, we find \begin{align} \chi &= 4l\exp(\tilde{O}(\Delta^{-1/4} \log(d)\log^{3/4}(C_2\sqrt{k+3}/\epsilon^{3/2}))) \nonumber\\ &\leq (k/\epsilon)e^{\tilde{O}\left(\frac{\log^3(d)}{\Delta}+\frac{\log(d)}{\Delta^{1/4}}\log^{3/4}(k/\epsilon^3)\right)}, \end{align} proving item (2''), where the second factor is asymptotically $(k/\epsilon)^{o(1)}$. For completeness, we note that if we combined neighboring sites because $d < 4$, we can now uncombine them, possibly incurring a factor of 2 or 3 increase in the bond dimension, which has no effect on the stated asymptotic forms for $\chi$. These results hold as long as $N \geq 2 l^2$, which translates to $N \geq O(k^2/\epsilon^2) + t_0\exp(\tilde{O}(\xi \log (d)))$ in the case of exponentially decaying correlations, and $N \geq O(k^2/\epsilon^2) + \tilde{O}(\log(d)/\Delta^{3/4})$ in the case that $\ket{\psi}$ is a ground state of a gapped local Hamiltonian. This completes the proof. \end{proof} Now we demonstrate that the state $\kettilde{\psi}$ constructed in the proof of Theorem \ref{thm:improvedbd}, Part 2, is long-range correlated. Given an integer $m$, consider the pair of regions $A = [0,l(l-1)-1]$ and $C=[l(l-1+m),N-1]$, which are separated by $ml$ sites. Assume $n \geq 2l+m$, so that $A$ and $C$ both contain at least $l(l-1)$ sites. Define the following operators. \begin{align} Q_1 =& \ket{\phi_{0,1}}\bra{\phi_{0,1}}\otimes \ldots \otimes \ket{\phi_{0,l}}\bra{\phi_{0,l}} \nonumber \\ Q_2 =& \ket{\phi_{0,l+m}}\bra{\phi_{0,l+m}}\otimes \ldots \nonumber \\ & \ldots \otimes \ket{\phi_{0, n-1}}\bra{\phi_{0, n-1}} \otimes \ket{\phi_{0,0}}\bra{\phi_{0,0}} \end{align} The operator $Q_1$ is supported on $A$ and $Q_2$ is supported on $C$. Since $A$ and $C$ each contain blocks $M_i$ for at least $l$ values of $i$, conditions (1), (2), and (3) above imply that $Q_1\kettilde{\psi} = Q_2\kettilde{\psi} = \ket{\phi_{0}}/\sqrt{l}$. Thus \begin{align} \text{Cor}(A:C)_{\kettilde{\psi}} \geq& \text{Tr}((Q_1 \otimes Q_2)(\tilde{\rho}_{AC}-\tilde{\rho}_A \otimes \tilde{\rho}_C)) \nonumber\\ =& \text{Tr}((Q_1 \otimes Q_2) \kettilde{\psi}\bratilde{\psi}) \nonumber \\ &- \text{Tr}(Q_1 \kettilde{\psi}\bratilde{\psi})\text{Tr}(Q_2 \kettilde{\psi}\bratilde{\psi}) \nonumber \\ =&1/l-1/l^2. \end{align} The choice of $l$ is independent of the chain length $N$, so the above quantity is independent of $N$ and independent of the parameter $m$ measuring the distance between $A$ and $C$. Thus, the correlation certainly does not decay exponentially in the separation between the regions.
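For completeness, we record the standard MPS fact used in the bond dimension count above: a superposition of MPS can be represented with block-diagonal tensors whose bond dimension is the sum of those of the summands. Concretely, if each $\ket{\phi_j}$ has site tensors $A^{s,(j)}$ with bond dimension $\chi_j$, then the tensors
\begin{equation}
B^{s} = \bigoplus_{j=0}^{l-1} A^{s,(j)} = \begin{pmatrix} A^{s,(0)} & & \\ & \ddots & \\ & & A^{s,(l-1)} \end{pmatrix},
\end{equation}
with boundary vectors formed by concatenating those of the summands (weighted here by the uniform amplitudes $1/\sqrt{l}$), represent $\kettilde{\psi} = \frac{1}{\sqrt{l}}\sum_{j}\ket{\phi_j}$ with bond dimension $\sum_j \chi_j$.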
\subsection{Proof of Theorem \ref{thm:mainthm}} In this section, we first state a pair of lemmas that will be essential for the proof of Theorem \ref{thm:mainthm}, then we give a proof summary of Theorem \ref{thm:mainthm}, and finally we provide the full proof. First, an important and well-known tool we use is Uhlmann's theorem \cite{uhlmann1976transition}, which expresses the fact that if two states are close, then their purifications are equally close up to a unitary acting on the purifying auxiliary space. \begin{lemma}[Uhlmann's theorem \cite{uhlmann1976transition}]\label{lem:uhlmann} Suppose $\tau_A$ and $\sigma_A$ are states on system $A$. Suppose $B$ is an auxiliary system and $\ket{T}_{AB}$ and $\ket{S}_{AB}$ are purifications of $\tau_A$ and $\sigma_A$, respectively. Then \begin{equation} D(\tau_A,\sigma_A) = \min_{U}\sqrt{1-\lvert\bra{S}_{AB}(I_A \otimes U) \ket{T}_{AB} \rvert^2}, \end{equation} where the minimum is taken over unitaries on system $B$. \end{lemma} Second, we prove the following essential statement about states with exponential decay of correlations. \begin{lemma}\label{lem:tracebound} If $L$ and $R$ are disjoint regions of a 1D lattice of $N$ sites and the state $\tau = \ket\eta \bra \eta$ has $(t_0, \xi)$-exponential decay of correlations, then \begin{equation} D(\tau_{LR}, \tau_L \otimes \tau_R) \leq C_3\exp(-\text{dist}(L,R)/\xi') \end{equation} whenever $\text{dist}(L,R) \geq t_0$, where $\xi' = 16\xi^2\log(d)$ and $C_3=\exp(t_0\exp(\tilde{O}(\xi\log(d))))$. If $\tau$ is the unique ground state of a gapped nearest-neighbor 1D Hamiltonian with spectral gap $\Delta$, then this can be improved to \begin{equation} D(\tau_{LR}, \tau_L \otimes \tau_R) \leq C_4\exp(-\text{dist}(L,R)/\xi') \end{equation} whenever $\text{dist}(L,R) \geq \Omega(\log^4(d)/\Delta^2)$, where $\xi' = O(1/\Delta)$ and $C_4 = \exp(\tilde{O}(\log^3(d)/\Delta))$. \end{lemma} For pure states $\sigma$, we call $\sigma$ a Markov chain for the tripartition $L/M/R$ if $\sigma_{LR} = \sigma_L \otimes \sigma_R$. Thus Lemma \ref{lem:tracebound} states that exponential decay of correlations implies that the violation of the Markov condition, as measured by the purified distance (or, alternatively, the trace distance), decays exponentially with the size of $M$. \begin{proof}[Proof of Lemma \ref{lem:tracebound}] The goal is to show that an exponential decay of correlations in $\tau = \ket{\eta}\bra{\eta}$ implies that $\tau_{LR}$ is close to $\tau_L \otimes \tau_R$. We will do this by truncating the rank of $\tau$ on the region $L$ to form $\sigma$, arguing that $\sigma_{LR}$ is close to $\sigma_L \otimes \sigma_R$, and finally using the triangle inequality to show the same holds for $\tau$. Lemma \ref{lem:lowrank} says that there is a state $\sigma_L$ with rank $\chi$ defined on region $L$ such that \begin{equation} D(\tau_L,\sigma_L) \leq C_1 e^{-\frac{\log(\chi)}{8 \xi \log(d)}}. \end{equation} In fact, the choice of $\sigma_L$ of rank $\chi$ that minimizes the distance to $\tau_L$ is the state $P_L\tau_L/\text{Tr}(P_L\tau_L)$, where $P_L$ is the projector onto the eigenvectors of $\tau_L$ associated with the largest $\chi$ eigenvalues. Accordingly, we define a normalization constant $q=1/\text{Tr}(P_L \tau_L)$ and let $\ket{\nu} = \sqrt{q}P_L\ket{\eta}$ and $\sigma = \ket{\nu}\bra{\nu} = qP_L \tau P_L$ be normalized states.
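Before proceeding, we record a short computation behind a fact used below. Because $P_L$ is a spectral projector of $\tau_L$ and hence commutes with it,
\begin{equation}
F(\tau_L,\sigma_L) = \text{Tr}\sqrt{\sqrt{\tau_L}\,\sigma_L\sqrt{\tau_L}} = \sqrt{q}\,\text{Tr}(P_L\tau_L) = \sqrt{\text{Tr}(P_L\tau_L)},
\end{equation}
so that $D(\tau_L,\sigma_L)^2 = 1-F(\tau_L,\sigma_L)^2 = 1 - 1/q$, i.e., $q = 1/(1-D(\tau_L,\sigma_L)^2)$.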
We first need to show that \begin{equation} \text{Cor}(L:R)_{\ket{\nu}} := \max_{\lVert A \rVert, \lVert B \rVert \leq 1}\text{Tr}((A \otimes B) (\sigma_{LR}-\sigma_L \otimes \sigma_R)) \end{equation} is small, given only that $\text{Cor}(L:R)_{\ket{\eta}}$ is small. Suppressing tensor product symbols, we can write \begin{align} &{}\text{Tr}((A B) (\sigma_{LR}-\sigma_L \sigma_R)) \nonumber\\ &= \bra\nu AB \ket \nu - \bra\nu A \ket\nu\bra\nu B \ket\nu \nonumber \\ &= q\bra\eta P_LABP_L \ket \eta - q^2\bra\eta P_LAP_L \ket\eta\bra\eta P_L B P_L\ket\eta \nonumber \\ &= q\bra\eta (P_LAP_L)B \ket \eta - q^2\bra\eta P_LAP_L \ket\eta\bra\eta P_L B \ket\eta \nonumber \\ &= q\bra\eta (P_LAP_L)B\ket\eta \nonumber \\ &\hspace{6pt}-q^2\bra\eta P_LAP_L \ket\eta\left(\bra\eta P_L B \ket\eta-\bra\eta P_L\ket\eta \bra\eta B \ket\eta\right) \nonumber\\ &\hspace{6pt}- q^2\bra\eta P_LAP_L \ket\eta\ \bra\eta P_L\ket\eta \bra\eta B \ket\eta \nonumber\\ &=q\left(\bra\eta (P_LAP_L)B\ket\eta - \bra\eta P_LAP_L\ket\eta \bra\eta B \ket\eta\right) \nonumber \\ &\hspace{6pt}-q^2\bra\eta P_LAP_L \ket\eta\left(\bra\eta P_L B \ket\eta-\bra\eta P_L\ket\eta \bra\eta B \ket\eta\right), \end{align} from which we can conclude \begin{equation} \text{Cor}(L:R)_{\ket{\nu}} \leq (q+q^2) \text{Cor}(L:R)_{\ket{\eta}}. \end{equation} The normalization constant $q$ is $1/(1-D(\tau_L,\sigma_L)^2)$, which will be close to 1 as long as $\chi$ is sufficiently large. If we choose $\log(\chi) = 8 \xi \log(d)(1+\log(C_1))$ or larger, then $q$ will certainly be smaller than 2 and $q+q^2 \leq 6$. The combination of the fact that $\sigma_L$ has small rank and that $\sigma$ has small correlations between $L$ and $R$ will allow us to show that $\sigma_{LR}$ is close to $\sigma_L \otimes \sigma_R$. We do this by invoking Lemma 20 of \cite{brandao2015exponential}, although we reproduce the argument below. We can express the trace norm as \begin{align} &\lVert \sigma_{LR}-\sigma_L \otimes \sigma_R \rVert_1 \nonumber\\ ={}& \max_{\lVert T \rVert \leq 1} \text{Tr}(T(\sigma_{LR}-\sigma_L \otimes \sigma_R)) \nonumber\\ ={}& \max_{\lVert T \rVert \leq 1} \text{Tr}(P_LTP_L(\sigma_{LR}-\sigma_L \otimes \sigma_R)) , \end{align} where the second equality follows from the fact that $P_L$ fixes the state $\sigma$. We can perform a Schmidt decomposition of the operator $P_LTP_L$ into a sum of at most $\chi^2$ terms which are each a product operator across the $L/R$ partition \begin{equation} P_LTP_L = \sum_{j=1}^{\chi^2} T_{L,j} \otimes T_{R,j} \end{equation} and also such that $\lVert T_{L,j} \rVert, \lVert T_{R,j} \rVert \leq 1$ (see Lemma 20 of \cite{brandao2015exponential} for full justification of this). Then we may write \begin{align} &\lVert \sigma_{LR}-\sigma_L \otimes \sigma_R \rVert_1 \nonumber\\ \leq{}& \max_{\lVert T \rVert \leq 1} \text{Tr}\left(\left(\sum_{j=1}^{\chi^2}T_{L,j} \otimes T_{R,j}\right)(\sigma_{LR}-\sigma_L \otimes \sigma_R)\right) \nonumber \\ \leq{}& \sum_{j=1}^{\chi^2}\max_{\lVert T_{L,j} \rVert,\lVert T_{R,j} \rVert \leq 1} \text{Tr}\left(\left(T_{L,j} \otimes T_{R,j}\right)(\sigma_{LR}-\sigma_L \otimes \sigma_R)\right) \nonumber \\ \leq{}& \chi^2 \text{Cor}(L:R)_{\ket{\nu}} \nonumber \\ \leq{}& 6\chi^2 \text{Cor}(L:R)_{\ket{\eta}} \leq 6 \chi^2 \exp(-\text{dist}(L,R)/\xi) \end{align} as long as $\log(\chi) \geq 8 \xi \log(d)(1+\log(C_1))$ and $\text{dist}(L,R) \geq t_0$.
Moreover, the purified distance is bounded by the square root of the trace norm of the difference (Lemma \ref{lem:purifiedvstrace}), allowing us to say \begin{equation} D(\sigma_{LR}, \sigma_L \otimes \sigma_R) \leq \sqrt{6}\chi \exp(-\text{dist}(L,R)/(2\xi)). \end{equation} Then, by the triangle inequality, we can bound \begin{align} &D(\tau_{LR},\tau_L \otimes \tau_R) \nonumber\\ \leq{}& D(\tau_{LR}, \sigma_{LR}) + D(\sigma_{LR},\sigma_L \otimes \sigma_R) \nonumber \\ &+D(\sigma_L \otimes \sigma_R, \tau_L \otimes \tau_R) \nonumber \\ \leq{}& D(\tau_{LR}, \sigma_{LR}) + D(\sigma_{LR},\sigma_L \otimes \sigma_R) \nonumber \\ &+D(\sigma_L, \tau_L) + D( \sigma_R , \tau_R) \nonumber\\ \leq{}& 3C_1\exp\left(-\frac{\log(\chi)}{8\xi\log(d)}\right)+\sqrt{6}\chi \exp\left(-\frac{\text{dist}(L,R)}{2\xi}\right). \end{align} We can choose \begin{equation} \log(\chi) = 8\xi\log(d)(1+\log(3C_1)) + \text{dist}(L,R)/(4\xi). \end{equation} Then each term can be bounded so that \begin{equation} D(\tau_{LR},\tau_L \otimes \tau_R) \leq 2\sqrt{6}(3eC_1)^{8\xi\log(d)}e^{-\frac{\text{dist}(L,R)}{32\xi^2\log(d)}}, \end{equation} which proves the first part of the lemma. If $\tau$ is the unique ground state of a gapped Hamiltonian, then we may use the second part of Lemma \ref{lem:lowrank}, and bound \begin{align} &D(\tau_{LR},\tau_L \otimes \tau_R) \nonumber\\ \leq{}& 3C_2 e^{-\tilde{O}\left(\frac{\Delta^{1/3}\log^{4/3}(\chi)}{\log^{4/3}(d)}\right)}+\sqrt{6}\chi e^{-O\left(\Delta\text{dist}(L,R)\right)}. \end{align} Here we can choose \begin{equation} \log(\chi) = O\left(\frac{\log(d)(1+\log(3C_2))^{\frac{3}{4}}}{\Delta^{1/4}} + \Delta\text{dist}(L,R)\right), \end{equation} and then each term is small enough to make the bound \begin{align} &D(\tau_{LR},\tau_L \otimes \tau_R)\nonumber\\ \leq{}& e^{-\tilde{O}\left(\frac{\Delta^{5/3}\text{dist}(L,R)^{4/3}}{\log^{4/3}(d)}\right)}\nonumber\\ & +\sqrt{6}e^{O\left(\frac{\log(d)\log^{3/4}(3eC_2)}{\Delta^{1/4}}\right)}e^{-O\left(\Delta\text{dist}(L,R)\right)} \nonumber \\ \leq{}& e^{\tilde{O}\left(\frac{\log^3(d)}{\Delta}\right)}e^{-O\left(\Delta\text{dist}(L,R)\right)} \end{align} as long as $\text{dist}(L,R) \geq \Omega(\log^4(d)/\Delta^2)$, so that the first term in the second line is dominated by the second term. Also note we have used $\log(C_2) = \tilde{O}(\log^{8/3}(d)/\Delta)$ in the last line. This proves the second part of the lemma. \end{proof} \begin{figure}[ht] \centering \includegraphics[width = \columnwidth]{proof_figure_thm2.png} \caption{Schematic for the proof of Theorem \ref{thm:mainthm}. The chain is divided into regions $M_i$ of length $l$, which are themselves divided into left and right halves $M_i^L$ and $M_i^R$. The state $\kettilde{\psi}$ is constructed by starting with $\ket{\psi}$, applying unitaries that act only on $M_i$ for each $i$ to disentangle the state across the $M_i^L/M_i^R$ cut, projecting onto a product state across those cuts, and finally applying the inverse unitaries on regions $M_i$.} \label{fig:mainthm} \end{figure} \begin{proof}[Proof summary for Theorem \ref{thm:mainthm}] First, we make the following observation about tripartitions of the chain into contiguous regions $L$, $M$, and $R$: since $\ket{\psi}$ has exponential decay of correlations, the quantity $D(\rho_{LR}, \rho_L \otimes \rho_R)$ is exponentially small in $\lvert M \rvert / \xi'$, where $\xi' = O(\xi^2 \log(d))$. This is captured in Lemma \ref{lem:tracebound} and requires the area law result from \cite{brandao2015exponential}.
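In symbols, and anticipating the notation of the full proof below, the three steps of this summary compose (up to normalization) as
\begin{equation}
\kettilde{\psi} \;\propto\; \underbrace{U_1^\dagger \cdots U_n^\dagger}_{\text{step 3}}\; \underbrace{\Pi'_n \cdots \Pi'_1}_{\text{step 2}}\; \underbrace{U_n \cdots U_1}_{\text{step 1}}\, \ket{\psi},
\end{equation}
where the $U_i$ act on disjoint regions $M_i$ (so their relative ordering is immaterial) and $\Pi'_i$ denotes the rank-1 projector $\Pi_i$ conjugated by the unitaries $U_{i+1},\ldots,U_n$; the full proof produces the same state in the interleaved form $\Pi_n U_n \cdots \Pi_1 U_1 \ket{\psi}$, followed by the inverse unitaries.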
One can truncate $\rho_L$ and $\rho_R$ to rank $d^{\lvert M \rvert/2}$, incurring small error, then purify $\rho_L$ into $\ket\alpha$ using the left half of region $M$ as the purifying auxiliary space and $\rho_R$ into $\ket\beta$ using the right half. Since $\ket{\alpha} \otimes \ket{\beta}$ and $\ket{\psi}$ are nearly the same state after tracing out $M$, there is a unitary $U$ acting only on $M$ that nearly disentangles $\ket\psi$ across the central cut of $M$, with $U\ket{\psi}\approx \ket{\alpha}\otimes\ket{\beta}$ (Uhlmann's theorem, Lemma \ref{lem:uhlmann}). The proof constructs the approximation $\kettilde{\psi}$ by applying three steps of operations on the exact state $\ket{\psi}$. First, the chain is broken up into many regions $\{M_i\}_{i=0}^{n+1}$ of length $l$, and disentangling unitaries $U_i$ as described above are applied to each region $M_i$ in parallel. The state is close to, but not exactly, a product state across the center cut of each region $M_i$. To make it an exact product state, the second step is to apply rank-1 projectors $\Pi_i$ onto the right half of the state across each of these cuts, starting with the leftmost cut and working our way down the chain. Then, the third step is to apply the reverse unitaries $U_i^\dagger$ that we applied in step 1. The projection step is the cause of the error between $\ket{\psi}$ and $\kettilde{\psi}$. The number of projections is $O(N)$, but the error accrued locally is only a constant. This follows from the fact that the projectors are rank 1, so once we apply projector $\Pi_i$, region $M_j$ is completely decoupled from $M_{j'}$ when $j<i$ and $j' > i$. Thus any additional projections, which act only on the regions to the right of $M_i$, have no effect on the reduced density matrix on $M_j$ (except to reduce its norm). Using this logic, we show that the number of errors that actually affect the state locally on a region $X$ is proportional to the number of sites in $X$, and not the number of sites in the whole chain. To make this error less than $\epsilon$ for any region of at most $k$ sites, we can choose $l = O(\xi'\log(k/\epsilon))$. After step 2, the state is a product state on blocks of $l$ sites each, and in step 3, unitaries are applied that couple neighboring blocks, so the maximum Schmidt rank across any cut cannot exceed $d^l$. This yields the scaling for $\chi$. The result is improved when $\ket{\psi}$ is the ground state of a gapped local Hamiltonian by using the improved area law \cite{arad2013area,arad2017rigorous} in the proof of Lemma \ref{lem:tracebound} and in the truncation of the states $\rho_L$ and $\rho_R$ before purifying into $\ket{\alpha}$ and $\ket{\beta}$. Finally, it can be seen that $\kettilde{\psi}$ is formed by a constant-depth quantum circuit with two layers of unitaries, where each unitary acts on $l$ sites. The first layer prepares the product state over blocks of length $l$ that is attained after applying projections in step 2, and the second layer applies the inverse unitaries from step 3. Each unitary in this circuit can be decomposed into a sequence of nearest-neighbor gates with depth $\tilde{O}(d^{2l})$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainthm}] We fix an even integer $l$, which we will specify later, and divide the $N$ sites of the chain into $n+2$ segments of length $l$, which we label, from left to right: $M_0,M_1,\ldots, M_{n+1}$. If $l$ does not divide $N$ evenly, then we allow segment $M_{n+1}$ to have fewer than $l$ sites.
For $i \in [1,n]$ let $L_i$ be the sites to the left of region $M_i$ and let $R_i$ be the sites to the right of region $M_i$. Lemma \ref{lem:tracebound} tells us that, since $\ket \psi$ has $(t_0,\xi)$-exponential decay of correlations, for any $i \in [1,n]$, $\rho_{L_iR_i}$ is close to $\rho_{L_i} \otimes \rho_{R_i}$ when $l$ is much larger than $\xi'$; that is, \begin{equation} D(\rho_{L_iR_i},\rho_{L_i} \otimes \rho_{R_i}) \leq C_3\exp(-l/\xi') \end{equation} whenever $l \geq t_0$. We also choose $\chi = d^{l}$ and for each $i$ define $\rho'_{L_i}$ and $\rho'_{R_i}$, each with rank at most $\sqrt\chi$, by taking $A = L_i$ and $A = R_i$ in Lemma \ref{lem:lowrank}. Thus we have \begin{align} D(\rho_{L_i}, \rho'_{L_i}) &\leq C_1 \exp(-l/(16 \xi)), \\ D(\rho_{R_i}, \rho'_{R_i}) &\leq C_1 \exp(-l/(16 \xi)). \end{align} Then by the triangle inequality we have \begin{align} &D(\rho_{L_iR_i},\rho'_{L_i} \otimes \rho'_{R_i}) \nonumber\\ \leq{}& D(\rho_{L_iR_i},\rho_{L_i} \otimes \rho_{R_i}) + D(\rho_{L_i} \otimes \rho_{R_i},\rho'_{L_i} \otimes \rho_{R_i}) \nonumber\\ & + D(\rho'_{L_i} \otimes \rho_{R_i},\rho'_{L_i} \otimes \rho'_{R_i}) \nonumber\\ ={}& D(\rho_{L_iR_i},\rho_{L_i} \otimes \rho_{R_i}) + D(\rho_{L_i},\rho'_{L_i}) + D(\rho_{R_i},\rho'_{R_i}) \nonumber\\ \leq{}& C \exp(-l/\xi''), \end{align} where $C\leq 2C_1+C_3$ and $\xi'' = \max(\xi', 16 \xi)$, whenever $l \geq t_0$. Note that $\ket{\psi}_{L_iM_iR_i}$ can be viewed as a purification of $\rho_{L_iR_i}$, with $M_i$ the purifying auxiliary system. Divide region $M_i$ in half, forming $M_i^L$ (left half) and $M_i^R$ (right half); see Figure \ref{fig:mainthm} for a schematic. Each of these subsystems has total dimension $d^{l/2}$ and thus can act as the purifying auxiliary system for $\rho'_{L_i}$ or $\rho'_{R_i}$. Let $\ket{\alpha_i}_{L_iM_i^L}$ be a purification of $\rho'_{L_i}$ and $\ket{\beta_i}_{M_i^RR_i}$ be a purification of $\rho'_{R_i}$. Thus, $\ket{\alpha_i} \otimes \ket{\beta_i}$, which is defined over the entire original chain, is a purification of $\rho'_{L_i} \otimes \rho'_{R_i}$. Uhlmann's theorem (Lemma \ref{lem:uhlmann}) shows how these purifications are related by a unitary on the purifying auxiliary system: for each $M_i$ with $i \in [1,n]$, there is a unitary $U_i$ acting non-trivially on region $M_i$ and as the identity on the rest of the chain such that $U_i\ket{\psi}$ is very close to $\ket{\alpha_i} \otimes \ket{\beta_i}$, a product state across the cut between $M_i^L$ and $M_i^R$. In other words, $U_i$ disentangles $L_i$ from $R_i$, up to some small error, by acting only on $M_i$. Formally, we say that \begin{equation} \left\lvert\bra{\alpha_i}_{L_iM_i^L}\otimes \bra{\beta_i}_{M_i^RR_i}U_i \ket{\psi}_{L_iM_iR_i}\right\rvert = \sqrt{1-\delta_i^2}, \end{equation} where $\delta_i \leq C\exp(-l/\xi'')$ for all $i$. An equivalent way to write this fact is \begin{align}\label{eq:Vipsi} U_i\ket{\psi}_{L_iM_iR_i} ={}& \sqrt{1-\delta_i^2}\ket{\alpha_i}_{L_iM_i^L}\otimes \ket{\beta_i}_{M_i^RR_i} \nonumber \\ &+ \delta_i\ket{\phi_i'}_{L_iM_i^LM_i^RR_i}, \end{align} where $\ket{\phi_i'}$ is a normalized state orthogonal to $\ket{\alpha_i} \otimes \ket{\beta_i}$. We can define the projector \begin{equation} \Pi_i = I_{L_iM_i^L}\otimes \ket{\beta_i}\bra{\beta_i}_{M_i^RR_i}, \end{equation} whose rank is 1 when considered as an operator acting only on $M_i^R R_i$. We notice that $\Pi_iU_i\ket{\psi}_{L_iM_iR_i}$ is a product state across the $M_i^L/M_i^R$ cut and has a norm close to 1.
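To quantify ``close to 1'': since $\Pi_i$ fixes $\ket{\alpha_i}\otimes\ket{\beta_i}$, Eq.~\eqref{eq:Vipsi} together with the Cauchy--Schwarz inequality gives
\begin{equation}
\lVert \Pi_i U_i \ket{\psi} \rVert \;\geq\; \left\lvert \left(\bra{\alpha_i}\otimes\bra{\beta_i}\right)\Pi_i U_i \ket{\psi} \right\rvert = \sqrt{1-\delta_i^2}.
\end{equation}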
Suppose we alternate between applying disentangling operations $U_i$ and projections $\Pi_i$ onto a product state as we move down the chain. Each $\Pi_i$ will reduce the norm of the state, but we claim the norm will never vanish completely (we defer the proof for clarity of argument). \begin{claim}\label{prop:normnonzero} If $l \geq \xi'' \log(3C)$, then \begin{equation} \lVert \Pi_nU_n \ldots \Pi_1 U_1 \ket{\psi} \rVert \neq 0. \end{equation} \end{claim} This allows us to define \begin{equation} \kettilde{\phi} = \frac{\Pi_nU_n \ldots \Pi_1 U_1 \ket{\psi}}{\lVert \Pi_nU_n \ldots \Pi_1 U_1 \ket{\psi} \rVert}. \end{equation} Note that, to put our proof in line with what is described in the introduction and proof summary earlier, we may act with all the unitaries prior to the projectors if we conjugate the projectors: \begin{equation} \kettilde{\phi} \propto \Pi'_n\ldots \Pi'_1 U_n \ldots U_1 \ket{\psi}, \end{equation} where $\Pi'_i = U_n\ldots U_{i+1}\Pi_iU_{i+1}^\dagger\ldots U_{n}^\dagger$, which still acts only on the region $M_i^RR_i$. This can be compared with the state $\ket{\phi}$ defined by applying the disentangling operations without projecting: \begin{equation} \ket{\phi} = U_n \ldots U_1 \ket{\psi}. \end{equation} We claim that $\kettilde{\phi}$ is a good local approximation to $\ket{\phi}$ (again, we defer the proof). \begin{claim}\label{prop:philocalapprox} For any integer $k'$, $\kettilde{\phi}$ is a $(k', \epsilon')$-local approximation to $\ket{\phi}$ with $\epsilon'=C\sqrt{k'/l+3}\exp(-l/\xi'')$. \end{claim} Next we can define \begin{equation}\label{eq:tildepsifromtildephi} \kettilde{\psi} = U_n^\dag \ldots U_1^\dag \kettilde{\phi}, \end{equation} which parallels the relationship \begin{equation} \ket{\psi} = U_n^\dag \ldots U_1^\dag \ket{\phi}. \end{equation} Now suppose $X$ is a contiguous region of the chain of length $k$. Then there is a region $X'$ of the chain of length at most $k'=k+2l$ that contains $X$ and is made up of regions $M_j$ where $j \in [a',b']$. Then \begin{align}\label{eq:psitildelocalapprox} &\lVert \text{Tr}_{X^c}(\kettilde{\psi}\bratilde{\psi}-\ket{\psi}\bra{\psi})\rVert_1 \nonumber\\ \leq{}& \lVert \text{Tr}_{X'^c}(\kettilde{\psi}\bratilde{\psi}-\ket{\psi}\bra{\psi})\rVert_1 \nonumber\\ ={}&\lVert \text{Tr}_{X'^c}(\kettilde{\phi}\bratilde{\phi}-\ket{\phi}\bra{\phi})\rVert_1 \nonumber \\ \leq{}& C\sqrt{k/l+5}\exp(-l/\xi'') \nonumber \\ \leq{}& C\sqrt{6k}\exp(-l/\xi''), \end{align} where the third line follows from the fact that $\ket{\phi}$ and $\ket{\psi}$ are related to $\kettilde{\phi}$ and $\kettilde{\psi}$, respectively, by a unitary that does not couple region $X'$ and region $X'^c$, and the fourth line follows from Claim \ref{prop:philocalapprox}. If we choose $l = \max(t_0,\xi''\log(3C\sqrt{k}/\epsilon))$, then the requirements of Claim \ref{prop:normnonzero} are satisfied, and we can see from Eq.~\eqref{eq:psitildelocalapprox} that $\kettilde{\psi}$ is a $(k,\epsilon)$-local approximation to $\ket{\psi}$, proving item (1) of the theorem. Item (2) states that $\kettilde{\psi}$ can be written as an MPS with constant bond dimension. This can be seen by the following logic. The Schmidt rank of $\kettilde{\phi}$ across any cut $M_i^L/M_i^R$ is 1, as discussed in the proof of Claim \ref{prop:philocalapprox} (see Eq.~\eqref{eq:phitildeproduct}), since the projector $\Pi_i$ projects onto a product state across that cut and unitaries $U_j$ with $j>i$ act trivially across the cut.
By Corollary \ref{cor:schmidtrankincrease}, this implies that the Schmidt rank across any cut $M_i^R/M_{i+1}^L$ can be at most $d^{l/2}$. Acting with the inverse unitaries $U_j^\dag$ on $\kettilde{\phi}$ to form $\kettilde{\psi}$ preserves the Schmidt rank across the cut $M_i^R/M_{i+1}^L$, since none of them couples both sides of the cut. Because any cut is at most distance $l/2$ from some $M_i^R/M_{i+1}^L$ cut, the Schmidt rank across an arbitrary cut can be at most a factor of $d^{l/2}$ greater, again by Corollary \ref{cor:schmidtrankincrease}, meaning the maximum Schmidt rank across any cut of $\kettilde{\psi}$ is $\chi=d^l$. Given our choice of $l=\max(t_0,\xi''\log(3C\sqrt{k}/\epsilon))$, we find that the state can be represented by an MPS with bond dimension \begin{align} \chi = d^l &\leq d^{t_0}(3C)^{\xi''\log(d)}(k/\epsilon^2)^{\xi''\log(d)/2} \nonumber\\ &\leq e^{e^{\tilde{O}(\xi \log(d))}}(k/\epsilon^2)^{O(\xi^2\log^2(d))}. \end{align} This proves item (2). Note that, in the case that our choice of $l$ exceeds $N$, it is not possible to form the construction we have described. However, in this case $d^l$ will exceed $d^{N}$ and we may take $\kettilde{\psi} =\ket{\psi}$, which is a local approximation for any $k$ and $\epsilon$ and has bond dimension in line with item (2) or (2'). Item (2') follows by using the same equations with $C=2C_2+C_4 = \exp(\tilde{O}(\log^{3}(d)/\Delta))$ and $\xi'' =O(1/\Delta)$. For Lemma \ref{lem:tracebound} to apply, we must have $l \geq \Omega(\log^4(d)/\Delta^2)$, but this will be satisfied for sufficiently large choices of $k/\epsilon^2$. Thus the final analysis yields \begin{equation} \chi = e^{\tilde{O}(\log^{4}(d)/\Delta^{2})}(k/\epsilon^2)^{O\left(\log(d)/\Delta\right)}. \end{equation} Item (3) states that $\kettilde{\psi}$ can be formed from a low-depth quantum circuit. In the proof of Claim \ref{prop:philocalapprox}, we show how the state $\kettilde{\phi}$ is a product state across divisions $M_i^L/M_{i}^R$, as in Eq.~\eqref{eq:phitildeproduct}. Thus the state $\kettilde{\phi}$ can be created from $\ket{0}^{\otimes N}$ by acting with non-overlapping unitaries on regions $M_i^RM_{i+1}^L$ in parallel. Each of these unitaries is supported on $l$ sites. Then, $\kettilde{\psi}$ is related to $\kettilde{\phi}$ by another set of non-overlapping unitaries supported on $l$ sites, as shown in Eq.~\eqref{eq:tildepsifromtildephi}. We conclude that $\kettilde{\psi}$ can be created from the trivial state by two layers of parallel unitary operations where each unitary is supported on $l$ sites, as illustrated in Figure \ref{fig:constantdepthcircuit}. In \cite{brennen2005efficient}, it is shown how any $l$-qudit unitary can be decomposed into $O(d^{2l}) = O(\chi^2)$ two-qudit gates, with no need for ancillas. We can guarantee that these gates are all spatially local by spending at most depth $O(l)$ performing swap operations to move any two sites next to each other, a factor only logarithmic in the total depth. This proves the theorem. \end{proof} \begin{proof}[Proof of Claim \ref{prop:normnonzero}] We proceed by induction. Let $|\tilde{\phi}_j\rangle = \Pi_jU_j \ldots \Pi_1U_1 \ket{\psi}$. Note that $\Pi_i U_i\ket{\psi}$ is non-zero for all $i$, so in particular $|\tilde{\phi}_1\rangle$ is non-zero. Furthermore we note that, if it is non-zero, $|\tilde{\phi}_j\rangle$ can be written as a product state $\ket{\alpha'_j}_{L_jM_j^L}\otimes \ket{\beta_j}_{M_j^RR_j}$ for some unnormalized but non-zero state $\ket{\alpha'_j}$, and the reduced density matrix of $|\tilde{\phi}_j\rangle$ on $R_j$ is $\rho'_{R_j}$ (up to normalization).
If we assume $|\tilde{\phi}_j\rangle$ is non-zero then we can write \begin{align} |\tilde{\phi}_{j+1}\rangle &= \Pi_{j+1}U_{j+1} |\tilde{\phi}_j\rangle \nonumber \\ &= \ket{\alpha_j'}\otimes \Pi_{j+1}U_{j+1} \ket{\beta_j} \end{align} and \begin{align} \lVert |\tilde{\phi}_{j+1}\rangle\rVert^2 &= \lVert \ket{\alpha_j'}\otimes \Pi_{j+1}U_{j+1} \ket{\beta_j}\rVert^2 \nonumber\\ &= \lVert \ket{\alpha'_j}\rVert^2\lVert \Pi_{j+1}U_{j+1} \ket{\beta_j}\rVert^2 \nonumber\\ &= \lVert \ket{\alpha'_j}\rVert^2\text{Tr}(\Pi_{j+1}U_{j+1}\ket{\beta_j}\bra{\beta_j}U_{j+1}^\dag \Pi_{j+1}) \nonumber\\ &= \lVert \ket{\alpha'_j}\rVert^2\text{Tr}(\Pi_{j+1}U_{j+1}\rho'_{R_{j}}U_{j+1}^\dag \Pi_{j+1})\nonumber \\ &\geq \lVert \ket{\alpha'_j}\rVert^2\text{Tr}(\Pi_{j+1}U_{j+1}\rho_{R_j} U_{j+1}^\dag \Pi_{j+1}) \nonumber\\ &-\lVert \ket{\alpha'_j}\rVert^2\lVert \rho_{R_j}- \rho'_{R_j}\rVert_1 \nonumber\\ &\geq \lVert \ket{\alpha'_j}\rVert^2\text{Tr}(\Pi_{j+1}U_{j+1}\rho U_{j+1}^\dag \Pi_{j+1}) \nonumber\\ &-\lVert \ket{\alpha'_j}\rVert^2\lVert \rho_{R_j}- \rho'_{R_j}\rVert_1 \nonumber\\ &\geq \lVert \ket{\alpha'_j}\rVert^2\lVert\Pi_{j+1}U_{j+1}\ket{\psi} \rVert^2 \nonumber\\ &-\lVert \ket{\alpha'_j}\rVert^2\lVert \rho_{R_j}- \rho'_{R_j}\rVert_1 \nonumber\\ &\geq \lVert \ket{\alpha'_j}\rVert^2(1-C\exp(-l/\xi''))^2 \nonumber\\ &- \lVert \ket{\alpha'_j}\rVert^2C\exp(-l/\xi'') \nonumber\\ & > 0 \nonumber \end{align} as long as $l \geq \xi'' \log(3C)$. \end{proof} \begin{proof}[Proof of Claim \ref{prop:philocalapprox}] First, consider the cut $M_i^L/M_i^R$ during the formation of the state $\kettilde{\phi}$. When the projector $\Pi_i$ is applied, the state becomes a product state across this cut. The remaining operators are $U_j$ and $\Pi_j$ with $j>i$, and thus they have no effect on the Schmidt rank across the $M_i^L/M_i^R$ cut, meaning $\kettilde{\phi}$ is a product state across each of these cuts, or in other words \begin{align}\label{eq:phitildeproduct} \kettilde{\phi} ={}& \ket{\phi_1}_{M_0M_1^L}\otimes \ket{\phi_2}_{M_1^RM_2^L} \otimes \ldots \nonumber \\ &\ldots \otimes \ket{\phi_n}_{M_{n-1}^RM_n^L} \otimes \ket{\phi_{n+1}}_{M_n^RM_{n+1}}. \end{align} Given an integer $k$ and a contiguous region $X$ of length $k$, we can find integers $a$ and $b$ such that $Y = M_a^RM_{a+1}\ldots M_{b-1}M_b^L$ contains $X$ and $\lvert b - a \rvert \leq k/l+2$. Then \begin{align} &\text{Tr}_{Y^c}(\kettilde{\phi}\bratilde{\phi}) \nonumber \\ ={}& \ket{\phi_{a+1}}\bra{\phi_{a+1}} \otimes \ldots \otimes \ket{\phi_{b}}\bra{\phi_{b}} \nonumber\\ \propto{}& \text{Tr}_{L_aM_a^LM_b^RR_b}\left(\Pi_bU_b\ldots\Pi_aU_a\ket{\psi}\bra{\psi}U_a^\dag\Pi_a\ldots U_b^\dag \Pi_b\right). \end{align} The advantage here is that all of the $U_i$ and $\Pi_i$ for which $i \not\in [a,b]$ have disappeared. On the other hand, we have \begin{align} &\text{Tr}_{Y^c}(\ket{\phi}\bra{\phi}) \nonumber\\ ={}& \text{Tr}_{Y^c}(U_n\ldots U_1\ket{\psi}\bra{\psi}U_1^\dag \ldots U_n^\dag) \nonumber\\ ={}& \text{Tr}_{L_aM_a^LM_b^RR_b}\left(U_b\ldots U_{a}\ket{\psi}\bra{\psi}U_{a}^\dag\ldots U_{b}^\dag\right). 
\end{align} Note that, since \begin{equation} U_i \ket{\psi} = \sqrt{1-\delta_i^2}\ket{\alpha_i} \otimes \ket{\beta_i} + \delta_i \ket{\phi_i'}, \end{equation} we can say that \begin{equation} \Pi_i U_i \ket{\psi} = U_i\ket{\psi} - \delta_i(I-\Pi_i)\ket{\phi_i'}, \end{equation} and thus \begin{align} &\Pi_bU_b\ldots \Pi_aU_a\ket{\psi} \nonumber \\ ={}& U_b\ldots U_a \ket{\psi} - \nonumber\\ &\sum_{j=a}^b \delta_j(\Pi_bU_b\ldots \Pi_{j+1}U_{j+1})(I-\Pi_j)\ket{\phi_j'} \nonumber\\ \equiv{}& U_b\ldots U_a \ket{\psi} - \delta \ket{\phi'}, \end{align} where $\delta \leq \sqrt{\sum_{j=a}^b\delta_j^2}$ and $\ket{\phi'}$ is normalized. This implies \begin{equation} \frac{\lvert\bra{\psi} U_a^\dagger \ldots U_b^\dagger \Pi_b U_b \ldots \Pi_a U_a \ket{\psi}\rvert}{\lVert \Pi_b U_b \ldots \Pi_a U_a \ket{\psi}\rVert} \geq \sqrt{1-\delta^2}, \end{equation} which shows that $D_1(\tau, \tau') \leq \delta$, where \begin{align} &\tau = U_b\ldots U_{a}\ket{\psi}\bra{\psi}U_{a}^\dag\ldots U_{b}^\dag \nonumber\\ &\tau' = \frac{\Pi_bU_b\ldots\Pi_aU_a\ket{\psi}\bra{\psi}U_a^\dag\Pi_a\ldots U_b^\dag \Pi_b}{\lVert \Pi_b U_b \ldots \Pi_a U_a \ket{\psi}\rVert^2} \end{align} and hence \begin{align} &D_1( \text{Tr}_{X^c}(\ket{\phi}\bra{\phi}),\text{Tr}_{X^c}(\kettilde{\phi}\bratilde{\phi}))\nonumber \\ &\leq D_1( \text{Tr}_{Y^c}(\ket{\phi}\bra{\phi}),\text{Tr}_{Y^c}(\kettilde{\phi}\bratilde{\phi}))\nonumber \\ &\leq D_1(\tau_Y,\tau'_Y) \leq D_1(\tau,\tau') \leq \delta \nonumber \\ &\leq C\sqrt{k/l+3}\exp(-l/\xi''). \end{align} This holds for any region $X$ of length $k$, so this proves the claim. \end{proof} \subsection{Proof of Theorem \ref{thm:reduction}} First we state a lemma that will do most of the legwork needed for Theorem \ref{thm:reduction}. Then we prove Theorem \ref{thm:reduction}. \begin{lemma}\label{lem:Kcombined} Suppose, for $j=0,1$, $H^{(j)}$ is a translationally invariant Hamiltonian defined on a chain of length $N$ and local dimension $d$. Further suppose that $|\psi^{(j)}\rangle$ is the unique ground state of $H^{(j)}$ with energy $E_0^{(j)}$, and let $\Delta^{(j)}$ be the spectral gap of $H^{(j)}$. We may form a new chain with local dimension $2d$ by adding an ancilla qubit to each site of the chain. Then there is a Hamiltonian $K$ defined on this chain such that \begin{enumerate}[(1)] \item The ground state energy of $K$ is \begin{equation} E_0^K = \frac{1}{3}\min_jE_0^{(j)}. \end{equation} \item If $E_0^{(0)} < E_0^{(1)}$, then the ground state of $K$ is $\ket{0^N}_A \otimes |\psi^{(0)}\rangle$ and if $E_0^{(1)} < E_0^{(0)}$, then the ground state is $\ket{1^N}_A \otimes |\psi^{(1)}\rangle$, where $A$ refers to the $N$ ancilla registers collectively. \item If $E_0^{(0)} < E_0^{(1)}$, then the spectral gap of $K$ is at least $\min(\Delta^{(0)},E_0^{(1)} - E_0^{(0)},1)/3$ and if $E_0^{(1)} < E_0^{(0)}$, then the spectral gap of $K$ is at least $\min(\Delta^{(1)},E_0^{(0)} - E_0^{(1)},1)/3$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:Kcombined}] Note that a variant of this lemma is employed in \cite{bausch2018undecidability,cubitt2015undecidability,bausch2018size} to show the undecidability of certain properties of translationally invariant Hamiltonians. Since $H^{(j)}$ is translationally invariant, it is specified by its single interaction term $H^{(j)}_{i,i+1}$: \begin{equation} H^{(j)} = \sum_{i=1}^{N-1} H^{(j)}_{i,i+1}.
\end{equation} The Hamiltonian $K$ will be defined over a new chain where we attach to each site an ancilla qubit, increasing the local Hilbert space dimension by a factor of 2. We refer to the ancilla associated with site $i$ by the subscript $A_i$, and we refer to the collection of ancillas together with the subscript $A$. Operators or states without a subscript are assumed to act on the original $d$-dimensional part of the local Hilbert spaces. Let \begin{equation} K = \sum_{i=1}^{N-1} K_{i,i+1}, \end{equation} where \begin{align} K_{i,i+1} ={}& \;\;\; \frac{1}{3}H^{(0)}_{i,i+1} \otimes \ket{00}\bra{00}_{A_iA_{i+1}} \\ &+ \frac{1}{3}H^{(1)}_{i,i+1} \otimes \ket{11}\bra{11}_{A_iA_{i+1}} \\ &+ I_{i,i+1} \otimes\left(\ket{01}\bra{01}+\ket{10}\bra{10}\right)_{A_iA_{i+1}} \end{align} with $I_{i,i+1}$ denoting the identity operation on sites $i$ and $i+1$. In this form it is clear that $K$ is a nearest-neighbor translationally invariant Hamiltonian and that each interaction term has operator norm at most 1 (a requirement under our treatment of 1D Hamiltonians). The picture we get is that if two neighboring ancillas are both $\ket 0$, then $H^{(0)}_{i,i+1}/3$ is applied, if both are $\ket 1$ then $H^{(1)}_{i,i+1}/3$ is applied, and if the ancillas are different, then $I_{i,i+1}$ is applied. Following this intuition, we can rewrite $K$ as follows. \begin{equation} K = \sum_{x=0}^{2^N-1} \left(\ket{x} \bra{x}_A\otimes \sum_{i=1}^{N-1} K_{x,i}\right), \end{equation} where the first sum is over all settings of the ancillas and the operator $K_{x,i}$ acts on the non-ancilla portion of sites $i$ and $i+1$, with \begin{equation} K_{x,i}= \begin{cases} H^{(0)}_{i,i+1}/3 & \text{ if } x_i=x_{i+1}=0 \\ H^{(1)}_{i,i+1}/3 & \text{ if }x_i=x_{i+1}=1 \\ I_{i,i+1} & \text{ if }x_i \neq x_{i+1} \end{cases} \end{equation} We analyze the spectrum of $K$. If $H^{(j)}$ has eigenvalues $E^{(j)}_n$ with corresponding eigenvectors $|\phi^{(j)}_n\rangle$ (where $E^{(j)}_n$ is non-decreasing with increasing integer $n$), then the states $\ket{0^N}_A \otimes |\phi^{(0)}_n\rangle$ and $\ket{ 1^N}_A \otimes |\phi^{(1)}_n\rangle$ are eigenstates of $K$ with eigenvalues $E^{(0)}_n/3$ and $E^{(1)}_n/3$, respectively. Recall that the eigenvectors of a Hamiltonian can be chosen to span the whole Hilbert space over which the Hamiltonian is defined. Therefore, the eigenvectors of $K$ listed above span the entire sectors of the Hilbert space associated with the ancillas set to $\ket{0^N}_A$ or $\ket{ 1^N}_A$. Suppose $\ket{\phi}$ is another eigenvector of $K$. Since it is orthogonal to all of the previously listed eigenvectors, $\ket{\phi}$ can be written \begin{equation} \ket{\phi} = \sum_{x=1}^{2^N-2} \alpha_x \ket x_A \otimes \ket{\eta_x} \end{equation} for some set of complex coefficients $\alpha_x$ with $\sum_x \lvert \alpha_x \rvert^2=1$ and some set of normalized states $\ket{\eta_x}$. The sum explicitly leaves out the binary strings $x=0$ (i.e.,~$0^N$) and $x=2^N-1$ (i.e.,~$1^N$) because states with those ancilla configurations lie in the subspace spanned by the eigenstates already listed. We wish to lower bound the energy of the state $\ket{\phi}$, i.e.,~the quantity \begin{align} &\bra \phi K\ket \phi\nonumber\\ &= \sum_{x=1}^{2^N-2}\sum_{y=1}^{2^N-2} \alpha^*_x\alpha_y \bra{x}_A \bra{\eta_x}K\ket{y}_A \ket{ \eta_y} \nonumber\\ &=\sum_{x=1}^{2^N-2}\sum_{y=1}^{2^N-2}\sum_{z=0}^{2^N-1} \alpha^*_x\alpha_y \braket{x}{z}\braket{z}{y}_A \bra{\eta_x}\sum_{i=1}^{N-1} K_{z,i}\ket{ \eta_y} \nonumber\\ &=\sum_{x=1}^{2^N-2} \lvert \alpha_x\rvert^2 \bra{\eta_x}\sum_{i=1}^{N-1} K_{x,i}\ket{\eta_x}.
\end{align} We make the following claim: \begin{claim}\label{claim:partialchain} For any state $\ket{\eta}$ and any $1 \leq a < b \leq N$, \begin{equation} \bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1} \ket{\eta} \geq \frac{b-a}{N-1}E_0^{(j)}-1. \end{equation} \end{claim} \begin{proof}[Proof of Claim \ref{claim:partialchain}] First we prove it in the case that $M:=b-a$ divides $N$. Let region $Y$ refer to sites $[a,b-1]$, let $\rho = \text{Tr}_{Y^c}(\ket{\eta}\bra{\eta})$, and let $\sigma = \rho \otimes\ldots \otimes \rho$ be $N/M$ copies of $\rho$, which covers all $N$ sites. Then \begin{align} \text{Tr}(H^{(j)}\sigma) =& \frac{N}{M}\bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1}\ket\eta \nonumber \\ &+ \sum_{k=1}^{N/M-1}\text{Tr}\left(H^{(j)}_{kM,kM+1}\sigma\right) \nonumber\\ \leq& \frac{N}{M}\bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1}\ket\eta+\left(\frac{N}{M}-1\right), \end{align} where the last line follows from the fact that the interaction strength $\lVert H^{(j)}_{i,i+1} \rVert \leq 1$. Moreover, by the variational principle, $\text{Tr}(H^{(j)}\sigma) \geq E_0^{(j)}$. These observations together yield \begin{equation} \bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1}\ket\eta \geq (M/N) (E_0^{(j)}+1)-1, \end{equation} which implies the statement of the claim. Now suppose $M$ does not divide $N$. We decompose $N = s M +r$ for non-negative integers $s$ and $r < M$. Let $\sigma = \rho \otimes\ldots \otimes \rho \otimes \ket{\nu}\bra{\nu}$ where there are $s$ copies of $\rho$ and $\ket{\nu}$ is the exact ground state of $\sum_{i=N-r+1}^{N-1} H^{(j)}_{i,i+1}$, which has energy $E_r$. Then \begin{equation} \text{Tr}(H^{(j)}\sigma) \leq s\bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1}\ket{\eta}+ E_r + s. \end{equation} Here we invoke the variational principle twice. First, note that the expectation value of $\sum_{i=N-r+1}^{N-1} H^{(j)}_{i,i+1}$ in the state $|\phi_0^{(j)}\rangle$ (the exact ground state of the whole chain) is exactly $(r-1)E_0^{(j)}/(N-1)$. Since $\ket{\nu}$ is the exact ground state of that Hamiltonian, $E_r$ must be no larger than this quantity. Second, as before, $\text{Tr}(H^{(j)}\sigma) \geq E^{(j)}_0$. Combining these observations yields \begin{align} \bra{\eta} \sum_{i=a}^{b-2}H^{(j)}_{i,i+1} \ket{\eta} &\geq \frac{1}{s}E_0^{(j)}\left(1-\frac{r-1}{N-1}\right)-1 \nonumber \\ &= \frac{M}{N-1}E_0^{(j)}-1. \end{align} \end{proof} Now we use this claim to complete the proof of Lemma \ref{lem:Kcombined}. For any binary string $x$ we can associate a sequence of indices $1 =i_0 < i_1 < \ldots < i_{m} < i_{m+1} = N+1$ such that $x_i=x_{i_k}$ for all $k = 0,\ldots,m$ and all $i$ in the interval $[i_k, i_{k+1}-1]$. Moreover we require $x_{i_{k-1}} \neq x_{i_{k}}$ for $k = 1,\ldots,m$. In other words, $x$ can be decomposed into substrings of consecutive 0s and consecutive 1s, with $i_k$ (for $k \geq 1$) representing the index of the ``domain wall'' that separates a substring of 0s from a substring of 1s. The parameter $m$ is the number of domain walls. Using this notation, and letting $E_0^K = \min_j E_0^{(j)}/3$, we can rewrite \begin{align} \bra{\eta_x}\sum_{i=1}^{N-1} K_{x,i}\ket{\eta_x} ={}& \sum_{k=0}^{m} \bra{\eta_x}\sum_{i=i_k}^{i_{k+1}-2} \frac{1}{3}H_{i,i+1}^{(x_{i_k})}\ket{\eta_x}\nonumber\\ &+\sum_{k=1}^m \bra{\eta_x} I_{i_k-1,i_k}\ket{\eta_x} \nonumber\\ \geq{}&\sum_{k=0}^{m}\left( \frac{i_{k+1}-i_k}{3(N-1)}E_0^{(x_{i_k})}-\frac{1}{3}\right)+m\nonumber\\ \geq{}&E_0^K+\frac{2m-1}{3}. \end{align} For any $x$ other than $0^N$ and $1^N$, there is at least one domain wall and $m \geq 1$.
Thus we can say \begin{equation}\label{eq:gapbound} \bra \phi K \ket \phi \geq E_0^K + 1/3. \end{equation} We have shown that any state orthogonal to the states $\ket{0^N}_A \otimes |\phi^{(0)}_n\rangle$ and $\ket{1^N}_A \otimes |\phi^{(1)}_n\rangle$ will have energy at least $1/3$ larger than the ground state energy of the system. Without loss of generality, suppose $E_0^{(0)} \leq E_0^{(1)}$. Then, the ground state energy is $E_0^K = E_0^{(0)}/3$ and the ground state is $\ket{0^N}_A \otimes |\phi^{(0)}_0\rangle$ (note that in the statement of the Lemma we have $\ket{\psi^{(0)}} = \ket{\phi_0^{(0)}}$). The first excited state is either $\ket{0^N}_A \otimes |\phi^{(0)}_1\rangle$, $\ket{1^N}_A \otimes |\phi^{(1)}_0\rangle$, or lies outside the sector associated with ancillas $\ket{0^N}$ and $\ket{1^N}$, whichever has lowest energy. The three cases lead to spectral gaps of $\Delta^{(0)}/3$, $(E_0^{(1)}-E_0^{(0)})/3$, and something larger than $1/3$ (due to Eq.~\eqref{eq:gapbound}), respectively. This proves all three items of the Lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:reduction}] We begin by specifying a family of Hamiltonians, parameterized by $t \in [0,1/2]$ and defined over a chain of length $N$ with local dimension $d$. \begin{equation} H^Z(t) = \sum_{i=1}^{N-1} \left[ I_i \otimes I_{i+1}-(1-t)\ket 0\bra 0_i \otimes \ket 0 \bra 0_{i+1}\right] \end{equation} The ground state of $H^Z(t)$ is the trivial product state ${\ket 0}^{\otimes N}$ with ground state energy $t(N-1)$, and thus energy density $t$. The interaction strength is bounded by $1$ and the spectral gap is $1-t \geq 1/2$. Now we construct an algorithm for Problem \ref{prob:energydensity}. We are given $H$ as input, with associated parameters $N$ and $d$, and a lower bound $\Delta$ on the spectral gap. Let the true ground state energy for $H$ be $E$ and let $u = E/(N-1)$. We choose a value of $s$ between 0 and $1$, and we apply Lemma \ref{lem:Kcombined} to construct a Hamiltonian $K$ combining Hamiltonians $H/2$ and $H^Z(s/2)$. $K$ acts on $N$ sites, has local dimension $2d$, and has spectral gap at least $\min(\Delta, \lvert s-u \rvert (N-1),1)/6$. We are given a procedure to solve Problem \ref{prob:localprops} with $\delta = 0.9$ for a single site, i.e.~we can estimate the expectation value of any single-site observable in the ground state of $K$. If $s < u$, the true reduced density matrix of $K$ will have its ancilla bits all set to 1. If $s > u$, it corresponds to the reduced density matrix of the ground state of $H$ with all ancilla bits set to 0. Thus we can choose our single-site operator to be the $Z_A$ operator that has eigenvalue $1$ for states whose ancilla bit is $\ket{0}$ and eigenvalue $-1$ for states whose ancilla bit is $\ket{1}$. If we have a procedure to determine $\bra{\psi}Z_A\ket{\psi}$ to precision 0.9, then we can determine the setting of one of the ancilla bits in the ground state and thus determine whether $u$ is larger or smaller than $s$. The time required to make this determination is $f(\min(\Delta, (N-1)\lvert s-u \rvert,1)/6,2d,N)$. Because we have control over $s$, we can use this procedure to binary search for the value of $u$. We assume we are given a lower bound on $\Delta$, but since we do not know $u$ \textit{a priori}, we have no lower bound on $\lvert s-u \rvert$, so we may not know how long to run the algorithm for Problem \ref{prob:localprops} in each step of the binary search.
If our desired precision is $\epsilon$, we will impose a maximum runtime of $f(\min(\Delta, (N-1)\epsilon/2,1)/6,2d,N)$ for each step. Thus, if we choose a value of $s$ for which $\lvert s-u \rvert < \epsilon/2$, the output of this step of the binary search may be incorrect. After such a step, our search window will be cut in half and the correct value of $u$ may no longer be within the window. However, $u$ will still lie within $\epsilon/2$ of one edge of the window. Throughout the binary search, some element of the search window will always lie within $\epsilon/2$ of $u$, so if we run the search until the window has width $\epsilon$ and output the value $\tilde{u}$ at the center of the search window, we are guaranteed that $\lvert u - \tilde{u} \rvert \leq \epsilon$. The number of steps required is $O(\log(1/\epsilon))$ and the time for each step is $f(\min(2\Delta, (N-1)\epsilon,2)/12,2d,N)$, yielding the statement of the theorem. \end{proof} \section{Discussion} Our results paint an interesting landscape of the complexity of approximating ground states of gapped nearest-neighbor 1D Hamiltonians locally. On the one hand, we show that all $k$-local properties of the ground state can be captured by an MPS with only constant bond dimension, an improvement over the $\text{poly}(N)$ bond dimension required to represent the global approximation. This constant scales like a polynomial in $k$ and $1/\epsilon$, when parameters like $\Delta$, $\xi$, and $d$ are taken as constants. On the other hand, we give evidence that, at least for the case where the Hamiltonian is translationally invariant, finding the local approximation may not offer a significant speedup over finding the global approximation: we have shown that the ability to find even a constant-precision estimate of local properties would allow one to learn a constant-precision estimate of the ground state energy with only $O(\log(N))$ overhead. This reduction does not allow one to learn any global information about the state besides the ground state energy, so it falls short of giving a concrete relationship between the complexity of the global and local approximations. Nonetheless, the reduction has concrete consequences. In particular, at least one of the following must be true about translationally invariant gapped Hamiltonians on chains of length $N$: \begin{enumerate}[(1)] \item The ground state energy can be estimated to $O(1)$ precision in $O(\log(N))$ time. \item Local properties of the ground state cannot be estimated to $O(1)$ precision in time independent of $N$. \end{enumerate} Notably, the second item, if true, would seem to imply that, in the translationally invariant case as $N \rightarrow \infty$, local properties cannot be estimated at all. Indeed, it is when the chain is very long, or when we consider the thermodynamic limit directly, that our results are most relevant. In the translationally invariant case as $N\rightarrow \infty$, our first proof method (Theorem \ref{thm:improvedbd}) yields a local approximation that is a translationally invariant MPS. However, the MPS is non-injective and the state is a macroscopic superposition on the infinite chain. Thus the bulk tensors alone do not uniquely define the state, and specification of a boundary tensor at infinity is also required \cite{vanderstraeten2019tangent,zauner2018variational}.
Our second proof method (Theorem \ref{thm:mainthm}), on the other hand, yields a periodic MPS (with period $O(\log(k/\epsilon^2))$) that is injective and can be constructed by a constant-depth quantum circuit made from spatially local gates. If we allow the locality of the gates to be $O(\log(k/\epsilon^2))$, then the circuit can have depth 2, as in Figure \ref{fig:constantdepthcircuit}. If we require the locality of the gates to be only a constant, say 2, then the circuit can have depth $\text{poly}(k,1/\epsilon)$. The fact that the approximation is injective perhaps makes the latter method more powerful. Injective MPS are the exact ground states of local gapped Hamiltonians \cite{fannes1992finitely, perez2006matrix}. Additionally, non-injective MPS form a set of measure zero within the MPS manifold, so variational algorithms that explore the whole manifold are most compatible with an injective approximation. In fact, since the approximation can be generated from a constant-depth circuit, the result justifies a more restricted variational ansatz using states of that form. This ansatz could provide several advantages over MPS in terms of the number of parameters needed and the ability to quickly calculate local observables, like the energy density. However, algorithms that perform variational optimization of the energy density generally suffer from two issues, regardless of the ansatz they use. First, they do not guarantee convergence to the global minimum within the ansatz set, and second, even when they do find the global minimum, the output does not necessarily correspond to a good local approximation. This stems from the fact that a state that is $\epsilon$-close to the ground state in energy density may actually be far from, or even orthogonal to, the actual ground state. Therefore, even a brute-force optimization over the ansatz set cannot be guaranteed to give any information about the ground state, other than its energy density. This leaves open many questions regarding the algorithmic complexity of gapped local 1D Hamiltonians. For the general case on a finite chain, can one find a local approximation to the ground state faster than the global approximation? For translationally invariant chains, can one learn the ground state energy to $O(1)$ precision in $O(\log(N))$ time, and can one learn local properties in time independent of the chain length? Relatedly, in the thermodynamic limit, can one learn an $\epsilon$-approximation to the ground state energy density in $O(\log(1/\epsilon))$ time, and can one learn local properties at all? These are interesting questions to consider in future work. We would like to conclude by drawing the reader's attention to independent work studying the same problem by Huang \cite{huang2019approximating}, which appeared simultaneously with our own. \begin{acknowledgments} We thank Thomas Vidick for useful discussions about this work and its algorithmic implications. AMD gratefully acknowledges support from the Dominic Orr Fellowship and the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1745301. This work was supported by NSF and Samsung. The Institute for Quantum Information and Matter (IQIM) is an NSF Physics Frontiers Center. \end{acknowledgments} \bibliographystyle{abbrvnat}
It is based on the widely used \texttt{article} document class and designed to allow a seamless transition from documents typeset with \texttt{article}, \texttt{revtex4-1} and the \texttt{elsarticle} document classes. As Quantum is an arXiv overlay journal, all papers have to be submitted to the arXiv. To make this submission process as user-friendly as possible, the quantumarticle document class implements a number of arXiv-specific checks, which however can be deactivated via the option \texttt{noarxiv}. An example for this is a check that is meant to make sure that the arXiv produces a PDF file and that hyperlinks are correctly broken across multiple lines. To ensure this, the arXiv \href{https://arxiv.org/help/submit_tex}{recommends} putting \begin{verbatim} \pdfoutput=1 \end{verbatim} within the first 5 lines of your main LaTeX file. By default, the quantumarticle document class will throw an error if this line is missing. Just like other similar checks, this can be deactivated by means of the \texttt{noarxiv} option, or only this specific check can be deactivated via the option \texttt{nopdfoutputerror}. Giving the \texttt{noarxiv} option also disables a number of other features of quantum article and removes any Quantum related branding from the document. Authors who would like to keep the checks active but still use this document class for manuscripts not intended for submission to Quantum and therefore without Quantum branding can use the \texttt{unpublished} option. One feature deactivated by both \texttt{unpublished} and \texttt{noarxiv} for example, is the ``title click feature'' of quantumarticle. As this document class can be used for arbitrary documents, Quantum implements a feature where readers can, by clicking on the title of a manuscript, verify whether this manuscript was actually published by Quantum. Obviously this is an unwanted feature in any manuscript not accepted in Quantum and it can thus be deactivated with the \texttt{unpublished} option. \section{Typesetting documents} The following are guidelines for the usage of the quantumarticle document class for manuscripts to be submitted to or accepted in Quantum. A detailed description of the functionality and options of the document class follow in Section~\ref{sec:options}. \subsection{Recommendations on structure} In the \texttt{twocolumn} layout and without the \texttt{titlepage} option a paragraph without a previous section title may directly follow the abstract. In \texttt{onecolumn} format or with a dedicated \texttt{titlepage}, this should be avoided. Longer articles should include a section that, early on, explains the main results, their limitations, and assumptions. This section can be used to, for example, present the main theorem, or provide a summary of the results for a wider audience. \subsection{Title information} The title of the document is given via the common \texttt{title} command. Note that clicking the title performs a search for that title on \href{http://quantum-journal.org}{quantum-journal.org}. In this way readers can easily verify whether a work using the \texttt{quantumarticle} class was actually published in Quantum. By giving the \texttt{accepted=YYYY-MM-DD} option, with \texttt{YYYY-MM-DD} the acceptance date, the note ``Accepted in Quantum YYYY-MM-DD, click title to verify'' can be added to the bottom of each page to clearly mark works that have been accepted in Quantum. You should call \texttt{\textbackslash{}maketitle} before your running text starts. 
\subsubsection{Authors and affiliations} You can provide information on authors and affiliations in the common format also used by \texttt{revtex}: \begin{verbatim} \author{Author 1} \author{Author 2} \affiliation{Affiliation 1} \author{Author 3} \affiliation{Affiliation 2} \author{Author 4} \affiliation{Affiliation 1} \affiliation{Affiliation 3} \end{verbatim} In this example affiliation 1 will be associated with authors 1, 2, and 4, affiliation 2 with author 3 and affiliation 3 with author 4. Repeated affiliations are automatically recognized and typeset in \texttt{superscriptaddress} style. Alternatively you can use a format similar to that of the \texttt{authblk} package and the \texttt{elsarticle} document class to specify the same affiliation relations as follows: \begin{verbatim} \author[1]{Author 1} \author[1]{Author 2} \author[2]{Author 3} \author[1,3]{Author 4} \affil[1]{Affiliation 1} \affil[2]{Affiliation 1} \affil[3]{Affiliation 1} \end{verbatim} \subsubsection{Other author related information} The quantumarticle document class supports further commands that are author specific: \begin{commands} \command{email}{% E-Mail address of the author, displayed in the bottom of the page. } \command{homepage}{% Homepage of the author, displayed in the bottom of the page. } \command{thanks}{% Additional text that is displayed in the bottom of the page. } \command{orcid}{% If the ORCiD of the author is given, his name will become a link to his ORCiD profile. } \end{commands} \subsection{Abstract} The abstract is typeset using the common \texttt{abstract} environment. In the standard, \texttt{twocolumn}, layout the abstract is typeset as a bold face first paragraph. In \texttt{onecolumn} layout the abstract is placed above the text. Both can be combined with the \texttt{titlepage} option to obtain a format with dedicated title and abstract pages that are not included in the page count. This format can be more suitable for long articles. The \texttt{abstract} environment can appear both before and after the \texttt{\textbackslash{}maketitle} command and calling \texttt{\textbackslash{}maketitle} is optional, as long as there is an \texttt{abstract}. Both \texttt{abstract} and \texttt{\textbackslash{}maketitle} however must be placed after all other \texttt{\textbackslash{}author}, \texttt{\textbackslash{}affiliation}, etc.\ commands. \subsection{Sectioning} Sections, subsections, subsubsections, and paragraphs should be typeset with the standard LaTeX commands. The paragraph is the smallest unit of sectioning. Feel free to end the paragraph title with a full stop if you find this appropriate. \subsection{Equations} You can use the standard commands for equations. For multi-line equations \texttt{align} is preferable over \texttt{eqnarray}, please refrain from using the latter. For complex equations you may want to consider using the \texttt{IEEEeqnarray} environment from the \texttt{IEEEtrantools} package. How you refer to equations is up to you, but please be consistent and use the \texttt{\textbackslash{}eqref\{\dots\}} command instead of writing \texttt{(\textbackslash{}ref\{\dots\})}. As a courtesy for your readers and referees, please suppress equation numbers only if there is a specific reason to do so, to not make it unnecessarily difficult to refer to individual results and steps in derivations. Very wide equations can be shown expanding over both columns using the \texttt{widetext} environment. In \texttt{onecolumn} mode, the \texttt{widetext} environment has no effect. 
To enable this feature in \texttt{twocolumn} mode, \texttt{quantumarticle} relies on the package \texttt{ltxgrid}. Unfortunately this package has a bug that leads to a sub-optimal placement of extremely long footnotes. \subsection{Floats} Every floating element must have an informative caption and a number. The caption can be placed above, below, or to the side of the figure, as you see fit. Feel free to place floats at the top or bottom of the page, or in the middle of a paragraph. Try to place them on the same page as the text referring to them. A figure on the first page can help readers remember and recognize your work more easily. \subsubsection{Figures} Figures are typeset using the standard \texttt{figure} environment for single-column figures and \texttt{figure*} for multi-column figures. \subsubsection{Tables} Tables are typeset using the standard \texttt{table} environment for single-column tables and \texttt{table*} for multi-column tables. \subsection{Plots} Quantum provides a Jupyter notebook to create plots that integrate seamlessly with \texttt{quantumarticle}. \subsection{Footnotes} Footnotes are typeset using the \texttt{footnote} command. They will appear at the bottom of the page. Please only use footnotes when appropriate and do not mix footnotes with references. \subsection{References} Citations to other works should appear in the References section at the end of the work. \paragraph{Important:} As Quantum is a member of Crossref, all references to works that have a DOI \emph{must} be hyperlinked according to the DOI. Those links must start with \texttt{https://doi.org/} (preferred), or \texttt{http://dx.doi.org/}. Direct links to the website of the publisher are not sufficient. This can be achieved in several ways, depending on how you are formatting your bibliography. \subsubsection{Manual bibliography} Suppose the DOI of an article that you want to cite is \texttt{10.22331/idonotexist}. If you are formatting your bibliography manually, you can cite this work using the following in your \texttt{thebibliography} environment:
\begin{verbatim}
\bibitem{examplecitation}
  Name Surname,
  \href{https://doi.org/10.22331/
  idonotexist}{Quantum \textbf{123}, 123456 (1916).}
\end{verbatim}
\paragraph{Important:} If you are formatting your bibliography manually, please do not group multiple citations into one \texttt{\textbackslash{}bibitem}. Having to search through multiple references to find the cited result makes your work less accessible for readers, and grouping references can screw up our automatic extraction of citations. \subsubsection{BibTeX bibliography} We encourage the use of BibTeX to generate your bibliography from the BibTeX meta-data provided by publishers. For DOI linking to work, the BibTeX file must contain the \texttt{doi} field as for example in:
\begin{verbatim}
@article{examplecitation,
  author = {Surname, Name},
  title = {Title},
  journal = {Quantum},
  volume = {123},
  page = {123456},
  year = {1916},
  doi = {10.22331/idonotexist},
}
\end{verbatim}
Several authors had problems because of Unicode characters in their BibTeX files. Be advised that \href{http://wiki.lyx.org/BibTeX/Tips}{BibTeX does not support Unicode characters}. All special characters must be input via their respective LaTeX commands.
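If, for example, an author name contains an accented letter, it should be entered with the corresponding LaTeX accent command in the BibTeX field, as in the following made-up author field:
\begin{verbatim}
author = {M{\"u}ller, Name and Jord{\'a}n, Name},
\end{verbatim}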
\paragraph{natbib} If you are using BibTeX, you can load the \texttt{natbib} package by putting
\begin{verbatim}
\usepackage[numbers,sort&compress]{natbib}
\end{verbatim}
in the preamble of your document and then use the \texttt{plainnat} citation style by including your BibTeX bibliography \texttt{mybibliography.bib} where you want the bibliography to appear as follows:
\begin{verbatim}
\bibliographystyle{plainnat}
\bibliography{mybibliography}
\end{verbatim}
\section{Figures}
\begin{figure}[t] \centering \includegraphics{example-plot.pdf} \caption{Every figure must have an informative caption and a number. The caption can be placed above, below, or to the side of the figure, as you see fit. The same applies for tables, boxes, and other floating elements. Quantum provides a Jupyter notebook to create plots that integrate seamlessly with \texttt{quantumarticle}, described in Section \ref{sec:plots}. Figures spanning multiple columns can be typeset with the usual \texttt{figure*} environment.} \label{fig:figure1} \end{figure}
See Fig.~\ref{fig:figure1} for an example of how to include figures. Feel free to place them at the top or bottom of the page, or in the middle of a paragraph as you see fit. Try to place them on the same page as the text referring to them. A figure on the first page can help readers remember and recognize your work more easily. \section{Sectioning and equations} Sections, subsections, subsubsections, and paragraphs should be typeset with the standard LaTeX commands. You can use the standard commands for equations.
\begin{align}
\label{emc} E &= m\,c^2\\
a^2 + b^2 &= c^2\\
H\,|\psi\rangle &= E\,|\psi\rangle\\
(\openone \otimes A)\,(B \otimes \openone) &= B \otimes A
\end{align}
For multi-line equations \texttt{align} is \href{http://tex.stackexchange.com/questions/196/eqnarray-vs-align}{preferable} over \texttt{eqnarray}. Please refrain from using the latter. For complex equations you may want to consider using the \texttt{IEEEeqnarray} environment from the \texttt{IEEEtrantools} package. Whether you prefer to refer to equations as Eq.~\eqref{emc}, Equation~\ref{emc}, or just \eqref{emc} is up to you, but please be consistent and use the \texttt{\textbackslash{}eqref\{\dots\}} command instead of writing \texttt{(\textbackslash{}ref\{\dots\})}. As a courtesy for your readers and referees, please suppress equation numbers only if there is a specific reason to do so, to not make it unnecessarily difficult to refer to individual results and steps in derivations. \paragraph{Paragraphs} The paragraph is the smallest unit of sectioning. Feel free to end the paragraph title with a full stop if you find this appropriate. \subsection{References and footnotes} \label{sec:subsec1} Footnotes\footnote{Only use footnotes when appropriate.} appear at the bottom of the page. Please do not mix them with your references. Citations to other works should appear in the References section at the end of the work. \begin{theorem}[DOI links are required] Important: As Quantum is a member of Crossref, all references to works that have a DOI must be hyperlinked according to the DOI. Those links must start with \texttt{https://doi.org/} (preferred), or \texttt{http://dx.doi.org/}. Direct links to the website of the publisher are not sufficient. \end{theorem} This can be achieved in several ways, depending on how you are formatting your bibliography. Suppose the DOI of an article \cite{examplecitation} that you want to cite is \texttt{10.22331/idonotexist}.
If you are formatting your bibliography manually, you can cite this work using the following in your \texttt{thebibliography} environment:
\begin{verbatim}
\bibitem{examplecitation}
  Name Surname,
  \href{https://doi.org/10.22331/
  idonotexist}{Quantum \textbf{123}, 123456 (1916).}
\end{verbatim}
\begin{theorem}[One citation per bibitem] Important: If you are formatting your bibliography manually, please do not group multiple citations into one \texttt{\textbackslash{}bibitem}. Having to search through multiple references to find the cited result makes your work less accessible for readers, and grouping references can screw up our automatic extraction of citations. \end{theorem} We encourage the use of BibTeX to generate your bibliography from the BibTeX meta-data provided by publishers. For DOI linking to work, the BibTeX file must contain the \texttt{doi} field as for example in:
\begin{verbatim}
@article{examplecitation,
  author = {Surname, Name},
  title = {Title},
  journal = {Quantum},
  volume = {123},
  page = {123456},
  year = {1916},
  doi = {10.22331/idonotexist},
}
\end{verbatim}
Several authors had problems because of Unicode characters in their BibTeX files. Be advised that \href{http://wiki.lyx.org/BibTeX/Tips}{BibTeX does not support Unicode characters}. All special characters must be input via their respective LaTeX commands. If you are using BibTeX, you can load the \texttt{natbib} package by putting
\begin{verbatim}
\usepackage[numbers,sort&compress]{natbib}
\end{verbatim}
in the preamble of your document and then use the \texttt{plainnat} citation style by including your BibTeX bibliography \texttt{mybibliography.bib} where you want the bibliography to appear as follows:
\begin{verbatim}
\bibliographystyle{plainnat}
\bibliography{mybibliography}
\end{verbatim}
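Works listed in the bibliography are then cited in the running text with the standard \texttt{\textbackslash{}cite} command; for example, reusing the entry from above:
\begin{verbatim}
As shown in Ref.~\cite{examplecitation}, ...
\end{verbatim}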
\section{Introduction} We consider graphs and digraphs which may contain parallel edges and arcs respectively but no loops, and generally follow the terminology in \cite{bang2009}. Our point of departure is the following theorem of Tutte, which characterizes graphs that contain $k$ edge-disjoint spanning trees. \begin{theorem} \cite{tutteJLMS36} \label{thm:tutte} A graph $G=(V,E)$ has $k$ edge-disjoint spanning trees if and only if, for every partition ${\cal F}$ of $V$, $e_{{\cal F}}\geq k(|{\cal F}|-1)$ where $e_{{\cal F}}$ is the number of edges with end vertices in different sets of ${\cal F}$. \end{theorem} Using matroid techniques, one can obtain a polynomial algorithm which either finds a collection of $k$ edge-disjoint spanning trees of a given graph $G$ or a partition $\cal F$ for which $e_{{\cal F}} < k(|{\cal F}|-1)$, which shows that no such collection exists in $G$ (see e.g. \cite{recski1989}). Let $D = (V,A)$ be a digraph and $r$ be a vertex of $D$. An {\bf out-branching} (respectively, {\bf in-branching}) in $D$ is a spanning subdigraph $B^+_r$ (respectively, $B^-_r$) of $D$ in which each vertex $v \neq r$ has precisely one entering (respectively, leaving) arc and $r$ has no entering (respectively, leaving) arc. The vertex $r$ is called the {\bf root} of $B^+_r$ (respectively, $B^-_r$). It follows from the definition that the arc set of an out-branching (respectively, in-branching) of $D$ induces a spanning tree in the underlying graph of $D$. It is also easy to see that $D$ has an out-branching $B^+_r$ (respectively, an in-branching $B^-_r$) if and only if there is a directed path from $r$ to $v$ (respectively, from $v$ to $r$) for every vertex $v$ of $D$. A well-known result due to Edmonds \cite{edmonds1973} shows that it can be decided in polynomial time whether a digraph contains $k$ arc-disjoint out-branchings or $k$ arc-disjoint in-branchings. Somewhat surprisingly, Thomassen proved that the problem of deciding whether a digraph contains a pair of arc-disjoint out-branching and in-branching is NP-complete (see \cite{bangJCT51}). This problem has since been studied for various classes of digraphs, giving rise to NP-completeness results as well as polynomial time solutions \cite{bangJCT51,bangJCT102,bangDAM161a,bangJGT42,bangC24,bangTCS438}. In particular, it is proved in \cite{bangJGT42} that the problem is polynomial time solvable for acyclic digraphs, and in \cite{bangJCT102} that every 2-arc-strong locally semicomplete digraph contains a pair of arc-disjoint out-branching and in-branching. It turns out that acyclic digraphs which contain a pair of arc-disjoint out-branching and in-branching admit a nice characterization. Suppose that $D = (V,A)$ is an acyclic digraph and that $B^+_s,B^-_t$ are a pair of arc-disjoint out-branching and in-branching rooted at $s,t$ respectively in $D$. Then $s$ must be the unique vertex of in-degree zero and $t$ the unique vertex of out-degree zero in $D$. Let $X \subseteq V \setminus \{s\}$ and let $X^-$ denote the set of vertices with at least one out-neighbour in $X$. Since each vertex $x \in X$ has an entering arc in $B^+_s$ and each vertex $x' \in X^-$ has a leaving arc in $B^-_t$, we must have \begin{equation} \label{inoutacyc} \sum_{x\in X^-}(d^+(x)-1)\geq |X|. \end{equation} The following theorem shows that these necessary conditions are also sufficient for the digraph $D$ to have such a pair $B^+_s,B^-_t$.
\begin{theorem}\cite{bangJGT42} \label{acyclicin-outbr} Let $D=(V,A)$ be an acyclic digraph in which $s$ is the unique vertex of in-degree zero and $t$ is the unique vertex of out-degree zero. Then $D$ contains a pair of arc-disjoint out-branching and in-branching rooted at $s$ and $t$ respectively if and only if (\ref{inoutacyc}) holds for every $X \subseteq V \setminus \{s\}$. Furthermore, there exists a polynomial algorithm which either finds the desired pair of branchings or a set $X$ which violates (\ref{inoutacyc}). \end{theorem} Every graph has an acyclic orientation. A natural way of obtaining an acyclic orientation of a graph $G$ is to orient the edges according to a vertex ordering $\prec$ of $G$, that is, each edge $uv$ of $G$ is oriented from $u$ to $v$ if and only if $u \prec v$. In fact, every acyclic orientation of $G$ can be obtained in this way. Given a vertex ordering $\prec$ of $G$, we use $D_{\prec}$ to denote the acyclic orientation of $G$ resulting from $\prec$, and call $\prec$ {\bf good} if $D_{\prec}$ contains a pair of arc-disjoint out-branching and in-branching. We also call an orientation $D$ of $G$ {\bf good} if $D = D_{\prec}$ for some good ordering $\prec$ of $G$. Thus a graph has a good ordering if and only if it has a good orientation. We call such graphs {\bf good} graphs. By Theorem \ref{acyclicin-outbr}, one can check in polynomial time whether a given ordering $\prec$ of $G$ is good and return a pair of arc-disjoint branchings in $D_{\prec}$ if $\prec$ is good. However, no polynomial time recognition algorithm is known for graphs that have good orderings. An obvious necessary condition for a graph $G$ to have a good ordering is that $G$ contains a pair of edge-disjoint spanning trees. This condition alone implies the existence of a pair of arc-disjoint out-branching and in-branching in an orientation of $G$. But such an orientation may never be made acyclic for certain graphs, which means that $G$ does not have a good ordering. On the other hand, to certify that a graph has a good ordering, it suffices to exhibit an acyclic orientation of $G$, often in the form of $D_{\prec}$, and show that it contains a pair of arc-disjoint out-branching and in-branching. In this paper we focus on the study of edge-minimal graphs which have good orderings (or equivalently, good orientations). \begin{definition} A graph $G = (V,E)$ is a {\bf 2T-graph} if $E$ is the union of two edge-disjoint spanning trees. \end{definition} Clearly, a graph has a good ordering if and only if it contains a spanning 2T-graph which has a good ordering. A 2T-graph on $n$ vertices has exactly $2n-2$ edges. The following theorem, due to Nash-Williams, implies a characterization of when a graph on $n$ vertices and $2n-2$ edges is a 2T-graph. For a graph $G=(V,E)$ and $X\subseteq V$, the subgraph of $G$ {\bf induced} by $X$ is denoted by $G[X]$. \begin{theorem}\cite{nashwilliamsJLMS39} \label{thm:NWcover2T} The edge set of a graph $G$ is the union of two forests if and only if \begin{equation} \label{sparse} |E(G[X])|\leq 2|X|-2 \end{equation} \noindent{}for every non-empty subset $X$ of $V$. \end{theorem} \begin{corollary} \label{NW2T} A graph $G=(V,E)$ is a 2T-graph if and only if $|V| \geq 2$, $|E|=2|V|-2$, and (\ref{sparse}) holds. \end{corollary} Generic circuits (see definition below) are important in rigidity theory for graphs.
A celebrated theorem of Laman \cite{lamanJEM4} implies that, for any graph $G$, the generic circuits are exactly the circuits of the so-called {\bf generic rigidity matroid} on the edges of $G$. Generic circuits have also been studied by Berg and Jord\'an \cite{bergJCT88}, who proved that every generic circuit is 2-connected and gave a full characterization of 3-connected generic circuits (see Theorem \ref{thm:connelly}). \begin{definition} A graph $G=(V,E)$ is a {\bf generic circuit} if it satisfies the following conditions: \begin{enumerate} \item[(i)] $|E|=2|V|-2>0$, and \item [(ii)] $|E(G[X])|\leq 2|X|-3$, for every $X\subset V$ with $2\leq |X| \leq |V|-1$. \end{enumerate} \end{definition} Generic circuits are building blocks for 2T-graphs. According to Corollary \ref{NW2T}, each generic circuit is a 2T-graph on two or more vertices with the property that no proper induced subgraph with two or more vertices is a 2T-graph. The only two-vertex generic circuit is the one having two parallel edges. Since no proper subgraph of a generic circuit is a generic circuit, every generic circuit on more than two vertices is a simple graph (i.e., containing no parallel edges). There is no generic circuit on three vertices and the only four-vertex generic circuit is $K_4$. The wheels\footnote{The {\bf wheel} $W_k$ is the graph that one obtains from a cycle of length $k$ by adding a new vertex and an edge from this vertex to each vertex of the cycle.} $W_k$, $k\geq 4$, are all (3-connected) generic circuits. Berg and Jord\'an \cite{bergJCT88} proved that every 3-connected generic circuit can be reduced to $K_4$ by a series of so-called Henneberg moves (see definition below). We shall use this to prove that every generic circuit has a good ordering. This paper is organized as follows. In Section \ref{2Tliftsec} we begin with some preliminary results on generic circuits from \cite{bergJCT88} and then prove a technical lemma that shows how to lift a good orientation of a 2T-graph resulting from a Henneberg move (Lemma \ref{lem:liftgoodor}). The lemma will be used in Section \ref{OneGCsec} for the proof of a statement which implies that every generic circuit has a good ordering (Theorem \ref{thm:GChasgoodor}). Section \ref{2TGCsec} is devoted to the study of the structure of 2T-graphs. We show that every 2T-graph is built from generic circuits and is reducible to a single vertex by a sequence of contractions of generic circuits (Theorem \ref{thequotient}). We also describe a polynomial algorithm which identifies all generic circuits of a 2T-graph (Theorem \ref{findallGC}). This implies that the problem of deciding whether a graph is a disjoint union of generic circuits is polynomial time solvable for 2T-graphs (Theorem \ref{decomposeintoGC}). We also show that the problem is NP-complete in general (Theorem \ref{npc}). In Section \ref{good2T} we explore properties of 2T-graphs which have good orderings and identify a forbidden structure for these graphs (Theorem \ref{lem:pathconflict}). In Section \ref{decomposeintoGCsec} we restrict our study to 2T-graphs which are disjoint unions of generic circuits. We prove that if the edges connecting the different generic circuits form a matching, then one can always produce a good ordering (Theorem \ref{matchingcase}) and we also characterize when such an ordering exists if the graph reduces to a double tree by contraction (Theorem \ref{thm:doubleTchar}).
Finally, in Section \ref{remarksec} we list some open problems and show that the problem of finding a so-called $(s,t)$-ordering of a digraph is NP-complete (Theorem \ref{thm:storderNPC}). \section{Lifting good orientations of a 2T-graph}\label{2Tliftsec} \begin{definition} \label{Henneberg} Let $G=(V,E)$ be a generic circuit and let $z$ be a vertex of degree 3 with three distinct neighbours $u,v,w$. A {\bf Henneberg move} from $z$ is the operation that deletes $z$ and its three incident edges from $G$ and adds precisely one of the edges $uv,uw,vw$. A Henneberg move is {\bf admissible} if the resulting graph, which we denote by $G^{uv}_z$, where $uv$ is the edge we added to $G-z$, is a generic circuit, and it is {\bf feasible} if it is admissible and $G^{uv}_z$ is a 3-connected graph. \end{definition} \begin{theorem} \cite{bergJCT88} \label{thm:atleast3} Let $G=(V,E)$ be a 3-connected generic circuit on $n\geq 5$ vertices. Then either $G$ has four distinct degree 3 vertices from which we can perform an admissible Henneberg move, or $G$ has 3 pairwise non-adjacent vertices, each of degree 3, so that we can perform a feasible Henneberg move from each of these. \end{theorem} \begin{theorem}\cite{bergJCT88} \label{thm:connelly} A 3-connected graph $G=(V,E)$ is a generic circuit if and only if $G$ can be reduced to (built from) $K_4$ by applying a series of feasible Henneberg moves (a series of Henneberg extensions\footnote{This is the inverse operation of a Henneberg move.}). \end{theorem} It is easy to see that if $z$ is a vertex of degree three in a 2T-graph $G$ then we can obtain a new 2T-graph $G'$ by performing a Henneberg move from $z$. The following lemma shows that when the three neighbours of the vertex $z$ that we remove in a Henneberg move are distinct, we can lift back a good orientation of $G'$ to a good orientation of $G$. \begin{figure}[h!t] \begin{center} \scalebox{0.9}{\input{figure1.pdf_t}} \caption{How to lift a good ordering to a Henneberg extension as in Lemma \ref{lem:liftgoodor}. In-branchings are displayed solid, out-branchings are dashed. The first line displays the three possible orders of the relevant vertices (increasing left to right) as they occur in the proof. The second line displays the ordering and the modification of the branchings in the extension.} \label{F1} \end{center} \end{figure} \begin{lemma} \label{lem:liftgoodor} Let $G$ be a 2T-graph on $n$ vertices and let $z$ be a vertex of degree 3 with three distinct neighbours $u,v,w$ from which we can perform an admissible Henneberg move to get $G^{uv}_z$. If $G^{uv}_z$ is good, then $G$ is also good. \end{lemma} \hspace{5mm}{\bf Proof: } Let ${\prec}'=(v_1,v_2,\ldots{},v_{n-1})$ be a good ordering of $G^{uv}_z$ and let $\tilde{B}^+_{v_1},\tilde{B}^-_{v_{n-1}}$ be arc-disjoint branchings of $D_{\prec'}$. Assume without loss of generality that $u=v_i$ and $v=v_j$ where $i<j$ (if this is not the case then consider the reverse ordering $\stackrel{\leftarrow}{\prec'}$ which is also good). Let $k\in [n-1]$ be the index of $w$ ($w=v_k$) and recall that $k\neq i,j$. Now there are 6 possible cases depending on the position of $w$ and which of the two branchings the arc $uv$ belongs to. In all cases we explain how to insert $z$ in the ordering ${\prec'}$ and update the two branchings, which certifies that the new ordering $\prec$ is good. \begin{itemize} \item $uv$ is in $\tilde{B}^-_{v_{n-1}}$ and $j<k$.
In this case we obtain ${\prec}$ from ${\prec'}$ by inserting $z$ anywhere between $v=v_j$ and $w=v_k$, replacing the arc $v_iv_j$ by the arcs $v_iz,zv_k$ and adding the arc $v_jz$ to $\tilde{B}^+_{v_1}$. \item $uv$ is in $\tilde{B}^-_{v_{n-1}}$ and $i<k<j$. In this case we obtain ${\prec}$ from ${\prec'}$ by inserting $z$ anywhere between $w=v_k$ and $v=v_j$, replacing the arc $v_iv_j$ by the arcs $v_iz,zv_j$ and adding the arc $v_kz$ to $\tilde{B}^+_{v_1}$. \item $uv$ is in $\tilde{B}^-_{v_{n-1}}$ and $k<i$. In this case we obtain ${\prec}$ from ${\prec'}$ by inserting $z$ anywhere between $u=v_i$ and $v=v_j$, replacing the arc $v_iv_j$ of $\tilde{B}^-_{v_{n-1}}$ by the arcs $v_iz,zv_j$ and adding the arc $v_kz$ to $\tilde{B}^+_{v_1}$. \end{itemize} The argument in the remaining three cases is obtained by considering $\stackrel{\leftarrow}{{\prec'}}$ and noting that this switches the roles of the in- and out-branchings. \hspace*{\fill} $\Box$ \section{Generic circuits are all good}\label{OneGCsec} In this section we show that every generic circuit has a good ordering. In fact we prove a stronger statement on generic circuits which turns out to be very useful in the study of 2T-graphs that have good orderings. Let $H=(V,E)$ be 2-connected and let $\{u,v\}$ be a pair of non-adjacent vertices such that $H-\{u,v\}$ is not connected. Then there exist $X,Y\subset V$ such that $X\cap Y=\{u,v\}$, $X\cup Y=V$ and there are no edges between $X-Y$ and $Y-X$. A {\bf 2-separation} of $H$ along the cutset $\{u,v\}$ is the process which replaces $H$ by the two graphs $H[X]+e$ and $H[Y]+e$, where $e$ is a new edge connecting $u$ and $v$. It is easy to show the following. \begin{lemma}\cite{bergJCT88} \label{lem:2sepgeneric} Let $G=(V,E)$ be a generic circuit. Then $G$ is 2-connected. Moreover, if $G-\{a,b\}$ is not connected, with connected components $X',Y'$, then $ab\not\in E$ and both of the graphs $G_1=G[X'\cup\{a,b\}]+ab$ and $G_2=G[Y'\cup\{a,b\}]+ab$ are generic circuits. \end{lemma} \iffalse The inverse operation of 2-separation is that of a 2-sum: Given disjoint graphs $H_i=(V_i,E_i)$, $i=1,2$ and two prescribed edges $e_1=u_1v_1\in E_1$ and $e_2=u_2v_2\in E_2$, the {\bf 2-sum} $H_1\oplus{}H_2$ of $H_1,H_2$ along the pairs $u_1,u_2$ and $v_1,v_2$ is the graph we obtain from $H_1,H_2$ by deleting the edges $e_1,e_2$ and identifying $u_1$ with $u_2$ and $v_1$ with $v_2$. \begin{lemma}\cite{bergJCT88} Let $G_i=(V_i,E_i)$, $i=1,2$ be generic circuits and let $u_iv_i\in E_i$, $i=1,2$ be edges. Then the 2-sum $G_1\oplus{}G_2$ along the pairs $u_1,u_2$ and $v_1,v_2$ is a generic circuit. \end{lemma} \fi \begin{theorem} \label{thm:GChasgoodor} Let $G=(V,E)$ be a generic circuit, let $s,t$ be distinct vertices of $G$ and let $e$ be an edge incident with at least one of $s,t$. Then the following holds: \begin{enumerate} \item[(i)] $G$ has a good ordering $\prec$ with corresponding branchings $B^+,B^-$ in which $s$ is the root of $B^+$, $t$ is the root of $B^-$ and $e$ belongs to $B^+$. \item[(ii)] $G$ has a good ordering $\prec$ with corresponding branchings $B^+,B^-$ in which $s$ is the root of $B^+$, $t$ is the root of $B^-$ and $e$ belongs to $B^-$. \end{enumerate} \end{theorem} \hspace{5mm}{\bf Proof: } The statement is clearly true when $G$ has two vertices. So assume that $G$ has more than two vertices. The proof is by induction on $n$, the number of vertices of $G$. The smallest generic circuit on $n > 2$ vertices is $K_4$ and we prove that the statement holds for $K_4$.
By symmetry (reversing all arcs) it suffices to consider the case when $e$ is incident with $s$ (see Figure~\ref{F4}). It is possible to order the vertices of $K_4$ as $s=v_1,v_2,v_3,v_4=t$ such that $e \not =sv_2$, implying that $e=sv_3$ or $e=st$. Let $B^+_{s,1}$ and $B^+_{s,2}$ be the out-branchings at $s$ formed by the arcs $sv_2,v_2v_3,st$ and by $sv_2,v_2t,sv_3$, respectively. The three remaining edges form in-branchings $B^-_{t,1},B^-_{t,2}$ at $t$, respectively. Since $st \in B^+_{s,1}$ and $sv_3 \in B^+_{s,2}$, while $sv_3 \in B^-_{t,1}$ and $st \in B^-_{t,2}$, we find the desired branchings containing $e$ as in (i) and (ii), respectively. \begin{figure}[htbp] \begin{center} \scalebox{0.9}{\input{figure4.pdf_t}} \caption{Illustrating the base case of the inductive proof of Theorem \ref{thm:GChasgoodor}. All arcs are oriented from left to right, the prescribed edge is either $sv_3$ or $st$, and the picture shows that we may force $e$ to be in the (dashed) out-branching as well as in the (solid) in-branching.} \label{F4} \end{center} \end{figure} \iffalse Consider an acyclic orientation of $K_4$ (it will be the transitive tournament on 4 vertices) with the acyclic ordering $v_1,v_2,v_3,v_4$ such that $s=v_1$, $t=v_4$ and the second end vertex of $e$ is either $v_3$ or $t$. Suppose first that $e=st$. To get a pair of arc-disjoint branchings where $e$ belongs to $B^+$ we let $B^+$ consist of the arcs $sv_2,v_2v_3,st$ and let $B^-$ consist of the arcs $sv_3,v_2t,v_3t$. To get a pair of arc-disjoint branchings where $e$ belongs to $B^-$ we let $B^+$ consist of the arcs $sv_2,sv_3,v_2t$ and $B^-$ consist of the arcs $st,v_2v_3,v_3t$. Suppose now that $e\neq st$. If we want $e$ to be in $B^+$, then we let $B^+$ consist of the arcs $sv_2,sv_3,v_2t$ and let $B^-$ consist of the arcs $st,v_2v_3,v_3t$. If we want $e$ to be in $B^-$, then we let $B^+$ consist of the arcs $st,sv_2,v_2v_3$ and $B^-$ consist of the arcs $sv_3,v_2t,v_3t$. \fi Assume below that $n>4$ and that the statement holds for every generic circuit on at most $n-1$ vertices. Suppose that $G$ is not 3-connected. Then it has a separating set $\{u,v\}$ of size 2 (recall that, by Lemma \ref{lem:2sepgeneric}, $G$ is 2-connected). Let $G_1,G_2$ be obtained from $G,u,v$ by 2-separation. Then each of $G_1,G_2$ is a smaller generic circuit, so the theorem holds by induction for each of these. Note that $e$ is not the edge $uv$ as this edge does not belong to $G$ by Lemma \ref{lem:2sepgeneric}. \\ Suppose first that $s,t$ are both vertices of the same $G_i$, say w.l.o.g. $G_1$. Then $e$ is also an edge of $G_1$ and there are two cases depending on whether we want $e$ to belong to the out-branching or the in-branching. We give the proof for the first case; the proof of the latter is analogous. By induction there is a good ordering ${\prec}_1$ of $V(G_1)$ and arc-disjoint branchings $B^+_{s,1},B^-_{t,1}$ so that $e$ belongs to $B^+_{s,1}$. By interchanging the names of $u,v$ if necessary, we can assume that the edge $uv$ is oriented from $u$ to $v$ in $D_{\prec_1}$. Suppose first that the arc $u\rightarrow v$ is used in $B^+_{s,1}$. By induction, by specifying the vertices $u,v$ as roots and $e^+=uv$ as the edge, $G_2$ has a good ordering ${\prec}^+_2$ such that $D_{\prec^+_2}$ has arc-disjoint branchings $B^+_{u,2},B^-_{v,2}$ where the arc $uv$ is in $B^-_{v,2}$. Now it is easy to check that $B^+_s,B^-_t$ form a solution in $G$ if we let $A(B^+_s)=A(B^+_{s,1}-uv)\cup A(B^+_{u,2})$ and $A(B^-_t)=A(B^-_{t,1})\cup A(B^-_{v,2}-uv)$.
Here we used that there is no edge between $u$ and $v$ in $G$, so $e$ is not the removed arc above. The corresponding good ordering ${\prec}$ is obtained from ${\prec}_1,{\prec}^+_2$ by inserting all vertices of $V(G_2)-\{u,v\}$ just after $u$ in ${\prec}_1$. Suppose now that the arc $u\rightarrow v$ is used in $B^-_{t,1}$. By induction, by specifying the vertices $u,v$ as roots and $e^-=uv$ as the edge, $G_2$ has a good ordering ${\prec}^-_2$ and arc-disjoint branchings $B^+_{u,2},B^-_{v,2}$ such that the arc $uv$ is in $B^+_{u,2}$. Again we obtain the solution in $G$ by combining the two orderings and the branchings. By similar arguments we can show that there is also a good ordering such that the edge $e$ belongs to $B^-_t$. Suppose now that only one of the vertices $s,t$, say w.l.o.g. $s$, is a vertex of $G_1$ and $t$ is in $G_2$. Note that this means that $s,t\not\in \{u,v\}$. Consider the graph $G_1$ with specification $s,v,e$. By induction $G_1$ has a good orientation $D_1$ with arc-disjoint branchings $B^+_{s,1},B^-_{v,1}$ so that $e$ belongs to $B^+_{s,1}$. Note that, as $v$ is the root of the in-branching, the edge $uv$ is oriented from $u$ to $v$ in $D_1$. If the arc $uv$ belongs to $B^+_{s,1}$, then we consider $G_2$ with specification $u,t,uv$ where $uv$ should belong to the in-branching. By induction there exists a good orientation $D_2$ of $G_2$ with arc-disjoint branchings $B^+_{u,2},B^-_{t,2}$ such that the arc $uv$ is in $B^-_{t,2}$. Now we obtain the desired acyclic orientation and arc-disjoint branchings by setting $A(B^+_s)=A(B^+_{s,1}-uv)\cup A(B^+_{u,2})$ and $A(B^-_t)=A(B^-_{v,1})\cup A(B^-_{t,2}-uv)$. To see that we do not create any directed cycles by combining the acyclic orientations $D_1$ and $D_2$ it suffices to observe that $u$ has no arc entering in $D_2$ and $v$ has no arc leaving in $D_1$. If the arc $uv$ belongs to $B^-_{v,1}$, then we consider $G_2$ with specification $u,t,uv$ where $uv$ should belong to the out-branching. Again, by induction, there exists an acyclic orientation $D_2$ of $G_2$ with good branchings and combining the two orientations and the branchings as above we obtain the desired acyclic orientation of $G$ and good in- and out-branchings. By similar arguments we can show that there is also a good ordering such that the edge $e$ belongs to $B^-_t$. It remains to consider the case when $G$ is 3-connected. By Theorem \ref{thm:atleast3} there is an admissible Henneberg move $G\rightarrow G^{uv}_z$ from a vertex $z\not\in\{s,t\}$ which is not incident with $e$. (Such a vertex exists: the excluded vertices are $s$, $t$ and the end vertices of $e$, which is at most three vertices in total since $e$ is incident with $s$ or $t$; and if Theorem \ref{thm:atleast3} only provides three candidate vertices, these are pairwise non-adjacent, so they cannot all be excluded, as two of the excluded vertices are joined by the edge $e$.) Consider $G^{uv}_z$ with specification $s,t,e$, where $e$ should belong to the out-branching. By induction there is an acyclic orientation $D'$ of $G^{uv}_z$ and arc-disjoint branchings $B^+_s,B^-_t$ so that $e$ is in $B^+_s$. Now apply Lemma \ref{lem:liftgoodor} to obtain an acyclic orientation $D$ of $G$ in which $s$ is the root of an out-branching $B^+$ which contains $e$ and $t$ is the root of an in-branching $B^-$ which is arc-disjoint from $B^+$. The proof of the case when $e$ must belong to $B^-_t$ is analogous. \hspace*{\fill} $\Box$ We will see in Section \ref{decomposeintoGCsec} that Theorem \ref{thm:GChasgoodor} is very useful when studying good orderings of 2T-graphs.
The result below shows that it can also be applied to an infinite class of graphs which are not 2T-graphs. \iffalse \begin{corollary} \label{LineGofcubic} Let $H$ be a simple cubic graph which is essentially 4-edge-connected (that is, the only 3 edge-cuts are formed by the 3 edges incident with some vertex $v$). Let $G$ be the line graph of $H$. Then $G$ has a good ordering. \end{corollary} \hspace{5mm}{\bf Proof: } Observe that $G$ is 4-regular and every edge is contained in a 3-cycle. Furthermore, it follows from the fact that $H$ is essentially 4-edge-connected that $G$ is 4-connected. Suppose first that $H$ is the complete graph $K_4$ on 4 vertices. Then $G=L(H)$ is the graph we obtain from a 4-cycle $v_1v_2v_3v_4v_1$, by adding two new vertices $v_5,v_6$ and joining these completely to $\{v_1,v_2,v_3,v_4\}$. If we delete the two edges $v_2v_6$ and $v_4v_5$ from $G$ then we obtain a graph $G'$ which is a generic circuit since $G'$ is the graph we obtain by performing a 2-sum of two disjoint copies of $K_4$. By Theorem \ref{thm:GChasgoodor}, $G'$ has a good ordering $\prec$. Clearly $\prec$ is also a good ordering of $G$. Suppose now that $|V(H)|>4$. Let $e,f$ be an arbitrary pair of distinct edges of $G$ and denote by $G''$ the graph we obtain by deleting $e$ and $f$ from $G$. Then $G''$ has $n$ vertices and $2n-2$ edges. It follows from the fact that $G$ is 4-connected that (\ref{sparse}) holds in $G''$, so by Theorem \ref{thm:NWcover2T}, $G''$ is a 2T-graph. We claim that $G''$ has a good ordering, which again will imply that $G$ has one.\\ If $G''$ is a generic circuit we are done by Theorem \ref{thm:GChasgoodor}, so suppose that $G''$ contains a proper subgraph $G''[X]$ which is a generic circuit. Thus $|E(G''[X])|=2|X|-2$. As $G$ is 4-regular we have $\sum_{v\in X}d_{G}(v)=4|X|$. As $G$ is 4-connected and contains $G''$ as a subgraph, this implies that there are exactly 4 edges between $X$ and $V-X$ in $G$ and they form a matching, unless $V-X$ is just one vertex $v$. Suppose first that $|V-X|>1$ in which case the edges between $X$ and $V-X$ form a matching $M$ of size 4. Note that none of the edges of $M$ can belong to a 3-cycle in $G$, contradicting the fact that every edge of $G$ is in a 3-cycle. In the remaining case let ${\prec'}$ be a good ordering of $G''-v$ (by Theorem \ref{thm:GChasgoodor}) and let $v_{i_1},v_{i_2}$ be two of the neighbours of $v$ in $X$ so that $v_{i_1}<_{{\prec'}} v_{i_2}$. Now we obtain a good ordering of $G''$ by inserting $v$ just after $v_{i_1}$ (we can add the arc $v_{i_1}v$ to the out-branching and the arc $vv_{i_2}$ to the in-branching). \hspace*{\fill} $\Box$\fi \begin{theorem} \label{4reg4con} Let $G$ be a $4$-regular $4$-connected graph in which every edge is on a triangle. Then $G-\{e,f\}$ is a spanning generic circuit for any two disjoint edges $e,f$. In particular, $G$ admits a good ordering. \end{theorem} {\bf Proof.} Observe that $G$ is simple, as it is $4$-regular and $4$-connected. By Tutte's theorem (Theorem \ref{thm:tutte}), $H:=G-\{e,f\}$ is a 2T-graph. Suppose, to the contrary, that it contains a 2T-graph $C$ as a proper subgraph. Then elementary counting shows that $C$ is an induced subgraph of $G$ whose edge-neighborhood $N$ consists of exactly four edges. (In particular, neither $e$ nor $f$ connects two vertices from $V(C)$.) The endpoints of the edges from $N$ in $V(C)$ are pairwise distinct since $|V(C)| \geq 4$ and $G$ is $4$-connected.
Since $G-\{h,g\}$ is a 2T-graph for $h \not= g$ from $N$ we see that $\overline{C}:=G-V(C)$ is a 2T-graph or consists of a single vertex only. If it is a 2T-graph then the endpoints of the edges from $N$ in $V(\overline{C})$ are pairwise distinct, too, contradicting the assumption that every edge is on at least one triangle. If, otherwise, $\overline{C}$ consists of a single vertex only then it is incident with both $e$ and $f$, contradicting the assumption that $e,f$ are disjoint. \hspace*{\fill}$\Box$ Thomassen conjectured that every $4$-connected line graph is Hamiltonian \cite{thomassenJGT10}; more generally, Matthews and Sumner conjectured that every $4$-connected claw-free graph (that is, a graph without $K_{1,3}$ as an induced subgraph) is Hamiltonian \cite{matthewsJGT8}. These conjectures are, indeed, equivalent \cite{ryjacekJCT70}, and it suffices to consider $4$-connected line graphs of cubic graphs \cite{kocholJCT78}. Theorem \ref{4reg4con} shows that such graphs have a spanning generic circuit (that is, a spanning cycle in the rigidity matroid). \section{Structure of generic circuits in 2T-graphs}\label{2TGCsec} Every 2T-graph $G$ on two or more vertices contains a generic circuit as an induced subgraph. Indeed, any minimal set $X$ with $|X| \geq 2$ and $|E(G[X])| = 2|X| -2$ induces a generic circuit in $G$. We say that $H$ is a {\bf generic circuit of} a graph $G$ if $H$ is a generic circuit and an induced subgraph of $G$. \begin{proposition} \label{atmostoneincommon} Let $G=(V,E)$ be a 2T-graph. Suppose that $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ are distinct generic circuits of $G$. Then $|V_1 \cap V_2|\leq 1$ and hence $|E_1 \cap E_2| = 0$. In the case when $|V_1 \cap V_2| = 0$, there are at most two edges between $G_1$ and $G_2$. \end{proposition} \hspace{5mm}{\bf Proof: } Suppose to the contrary that $|V_1 \cap V_2| \geq 2$. Since $G_1$ and $G_2$ are generic circuits, $|E_1| = 2|V_1| - 2$ and $|E_2| = 2|V_2| - 2$. Since $V_1 \cap V_2 \subset V_1$, we must have $|E_1 \cap E_2| = |E(G[V_1 \cap V_2])| \leq 2|V_1 \cap V_2| - 3$. But then \begin{eqnarray*} |E(G[V_1\cup V_2])|&\geq & |E_1|+|E_2|-|E_1 \cap E_2|\\ &=& (2|V_1|-2)+(2|V_2|-2)-|E_1 \cap E_2|\\ &\geq& 2(|V_1|+|V_2|)-4-(2|V_1 \cap V_2| - 3)\\ &=& 2(|V_1| + |V_2| - |V_1 \cap V_2|)-1\\ &=& 2|V_1 \cup V_2| -1, \end{eqnarray*} contradicting that $G$ is a 2T-graph and hence satisfies (\ref{sparse}) (see Corollary \ref{NW2T}). Hence $|V_1 \cap V_2| \leq 1$. Suppose that $|V_1 \cap V_2| = 0$ (i.e., $G_1$ and $G_2$ have no vertex in common). Let $k$ denote the number of edges between $G_1$ and $G_2$. Then \begin{eqnarray*} k&=& |E(G[V_1\cup V_2])| - |E_1\cup E_2| \\ &\leq& (2|V_1 \cup V_2|-2) - (|E_1| + |E_2|)\\ &=& 2(|V_1|+|V_2|) -2 -(2|V_1|-2 + 2|V_2|-2)\\ &=& 2. \end{eqnarray*} \hspace*{\fill} $\Box$ \begin{proposition} \label{noedgebetween} Let $r \geq 2$ and $G_i = (V_i,E_i)$ where $1 \leq i \leq r$ be generic circuits of a 2T-graph $G = (V,E)$. Suppose that $|V_i \cap V_j| = 1$ if and only if $|i-j| = 1$. Then there is no edge with one end in $V_1 \setminus V_r$ and the other end in $V_r \setminus V_1$. \end{proposition} \hspace{5mm}{\bf Proof: } Let $k$ be the number of edges with one end in $V_1 \setminus V_r$ and the other end in $V_r \setminus V_1$. We prove $k = 0$ by induction on $r$.
When $r = 2$, \begin{eqnarray*} k&=& |E(G[V_1\cup V_2])| - |E_1\cup E_2| \\ &\leq& (2|V_1 \cup V_2|-2) - (|E_1| + |E_2|)\\ &=& 2(|V_1|+|V_2| -1) -2-(2|V_1|-2 + 2|V_2|-2)\\ &=& 0. \end{eqnarray*} Assume $r > 2$ and that there are no edges with one end in $V_i \setminus V_j$ and the other end in $V_j \setminus V_i$ for all $1 \leq |i-j| \leq r-2$. By assumption $|V_i \cap V_j| = 1$ if and only if $|i-j| = 1$ and in particular $|V_i \cap V_j| = 0$ if $|i-j| > 1$. Hence \begin{eqnarray*} k&=& |E(G[V_1\cup \cdots \cup V_r])| - |E_1\cup \cdots \cup E_r| \\ &\leq& (2|V_1 \cup \cdots \cup V_r|-2) - (|E_1| + \cdots + |E_r|)\\ &=& 2(|V_1|+ \cdots + |V_r| -(r-1)) -2-(2|V_1|-2 + \cdots + 2|V_r|-2)\\ &=& 0. \end{eqnarray*} This completes the proof. \hspace*{\fill} $\Box$ \begin{proposition} \label{hyperforest} Let $G_i = (V_i,E_i)$ where $1 \leq i \leq r$ be the collection of generic circuits of a 2T-graph $G=(V,E)$ and let ${\cal G}=(V,\cal E)$ be the hypergraph where ${\cal E} = \{V_i:\ 1 \leq i \leq r\}$. Then $\cal G$ is a hyperforest. \end{proposition} \hspace{5mm}{\bf Proof: } Suppose to the contrary that $\cal G$ is not a hyperforest. Then there exist $V_{i_1},V_{i_2},\ldots{},V_{i_\ell}$ for some $\ell \geq 3$ such that $|V_{i_j}\cap V_{i_k}|=1$ if and only if $|j-k|=1$ or $\ell-1$, and moreover the common vertices between the hyperedges on the hypercycle are pairwise distinct. Thus $$\sum_{j=1}^{\ell}|V_{i_j}| = |V_{i_1}\cup{}V_{i_2}\cup\cdots{}\cup{}V_{i_\ell}| + \ell.$$ By Proposition \ref{noedgebetween}, there is no edge with one end in $V_{i_j}\setminus V_{i_k}$ and the other end in $V_{i_k} \setminus V_{i_j}$ for all $j \neq k$. Hence \begin{eqnarray*} |E(G[V_{i_1}\cup{}V_{i_2}\cup\cdots{}\cup{}V_{i_\ell}])|&=&\sum_{j=1}^{\ell}(2|V_{i_j}|-2)\\ &=& 2|V_{i_1}\cup{}V_{i_2}\cup\cdots{}\cup{}V_{i_\ell}|, \end{eqnarray*} \noindent{}contradicting that $G$ is a 2T-graph and hence satisfies (\ref{sparse}). \hspace*{\fill} $\Box$ Let $G = (V,E)$ be a 2T-graph and let ${\cal G}=(V,\cal E)$ be the hypergraph defined in Proposition \ref{hyperforest}. Two generic circuits of $G$ are {\bf connected} if their vertex sets are in the same hypertree of ${\cal G}$. Not every vertex of $G$ needs to be in a generic circuit of $G$. A {\bf generic component} of $G$ is either a set consisting of a single vertex which is not in any generic circuit of $G$ or the union of a maximal set of connected generic circuits. A generic component is called {\bf trivial} if it consists of a single vertex and {\bf non-trivial} otherwise. An edge of $G$ is {\bf external} if it is not contained in any generic circuit. By Proposition \ref{noedgebetween} there is no external edge in a generic component. Thus each generic component is a 2T-graph. Two generic components do not have a vertex in common. A proof similar to that of Proposition \ref{atmostoneincommon} shows that there can be at most two external edges between two generic components. We summarize these properties below. \begin{proposition} \label{genericomponents} Let $G = (V,E)$ be a 2T-graph. Then the following statements hold: \begin{enumerate} \item there is no external edge in a generic component; \item each generic component is a 2T-graph; \item two generic components are vertex-disjoint; \item there are at most two external edges between two generic components. \hspace*{\fill} $\Box$ \end{enumerate} \end{proposition} Thus every 2T-graph $G$ partitions uniquely into pairwise vertex-disjoint generic components.
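As a small illustration (a toy example of our own, not taken from the literature): let $G$ be obtained from two disjoint copies of $K_4$ by adding two edges between them with four distinct end vertices. Then $|V| = 8$ and $|E| = 6+6+2 = 2|V|-2$, and (\ref{sparse}) is easily verified, so $G$ is a 2T-graph. Its generic circuits are exactly the two copies of $K_4$; these are also its generic components, the two added edges are external, and all four statements of Proposition \ref{genericomponents} can be checked directly on this example.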
The {\bf quotient graph} $\tilde{G}$ of $G$ is the graph obtained from $G$ by contracting each generic component to a single vertex (and deleting the loops resulting from the contractions). It follows from Proposition \ref{genericomponents} that every 2T-graph can be reduced to $K_1$ by successively taking quotients. \begin{theorem} \label{thequotient} Let $G$ be a 2T-graph. Then there is a sequence of 2T-graphs $G_0, G_1, \dots, G_k$ where $G_0 = G$, $G_k = K_1$, and $G_i = \tilde{G}_{i-1}$ for each $i = 1, 2, \dots, k$. In particular, $\tilde{G}$ is a 2T-graph. \hspace*{\fill} $\Box$ \end{theorem} \begin{theorem} \label{findallGC} There exists a polynomial algorithm $\cal A$ which given a 2T-graph $G=(V,E)$ as input finds the collection $G_1,G_2,\ldots{},G_r$, $r\geq 1$ of generic circuits of $G$. \end{theorem} \hspace{5mm}{\bf Proof: } This follows from the fact that the subset system $M=(E,{\cal I})$ is a matroid, where $E'\subseteq E$ is in $\cal I$ precisely when every non-empty subset $E''\subseteq E'$ satisfies $|E''|\leq 2|V(E'')|-3$, where $V(E'')$ is the set of vertices spanned by the edges in $E''$ (in particular, $\emptyset\in{\cal I}$). See \cite{bergLNCS2832} for a description of a polynomial independence oracle. The circuits of $M$ are precisely the generic circuits of $G$. Recall from matroid theory that an element $e\in E$ belongs to a circuit of $M$ precisely when there exists a base of $M$ in $E-e$. Thus we can produce all the circuits by considering each edge $e\in E$ one at a time. If there is a base $B\subset E-e$, then $B\cup\{e\}$ contains a unique circuit $C_e$ which also contains $e$ and we can find $C_e$ in polynomial time by using independence tests in $M$. Since the generic circuits are edge-disjoint, by Proposition \ref{atmostoneincommon}, we will find all generic circuits by the process above. \hspace*{\fill} $\Box$ \begin{corollary} There exists a polynomial algorithm for deciding whether a 2T-graph $G$ is a generic circuit. \end{corollary} \begin{theorem} \label{decomposeintoGC} There exists a polynomial algorithm for deciding whether the vertex set of a 2T-graph $G=(V,E)$ decomposes into vertex-disjoint generic circuits. Furthermore, if there is such a decomposition, then it is unique. \end{theorem} \hspace{5mm}{\bf Proof: } We first use the algorithm $\cal A$ of Theorem \ref{findallGC} to find the set $G_1,G_2,\ldots{},G_r$ of generic circuits of $G$. If $r=1$, then the desired decomposition exists if and only if $G_1$ spans all of $V$, that is, $G$ is itself a generic circuit, and we are done. So assume now that $r\geq 2$ and form the hypergraph $\cal G$ from $G_1,G_2,\ldots{},G_r$. Initialize $H_1$ as the graph $G$ and ${\cal G}_1$ as the hypergraph $\cal G$. By Proposition \ref{hyperforest}, ${\cal G}_1$ is a hyperforest and hence, by Proposition \ref{atmostoneincommon}, it has an edge which has at most one vertex in common with the rest of the edges of ${\cal G}_1$. Let $G_{i_1}$ be a generic circuit corresponding to such an edge. Note that, as $|V(G_{i_1})|\geq 2$, the generic circuit $G_{i_1}$ must be part of any decomposition of $V$ into generic circuits. Now let $V_2=V-V(G_{i_1})$ and consider the induced subgraph $H_2=G[V_2]$ of $G$ and the hypergraph ${\cal G}_2=(V_2,{\cal E}_2)$ that we obtain from ${\cal G}_1$ by deleting the vertices of $V(G_{i_1})$ as well as every hyperedge that contains a vertex from $V(G_{i_1})$. If ${\cal G}_2$ has at least one edge, we can again find one which intersects the rest of the edges in at most one vertex. Let $G_{i_2}$ denote the corresponding generic circuit and add this to our collection. Form $H_3,{\cal G}_3$ as above.
Continuing this way we will either find the desired decomposition or we reach a situation where the current hypergraph ${\cal G}_k$ has at least one vertex but no edges. In this case $G$ has no decomposition into generic circuits, since the generic circuits we have removed so far are the only ones that could cover the corresponding vertex sets. As the number, $r$, of generic circuits in $G$ is bounded by $|E|/2$ since generic circuits are edge-disjoint, the process above will terminate in a polynomial number of steps, and each step also takes polynomial time.\hspace*{\fill} $\Box$ \medskip The proof above made heavy use of the structure of generic circuits in 2T-graphs. For general graphs the situation is much worse. \begin{theorem} \label{npc} It is NP-complete to decide if the vertex set of a graph admits a partition whose members induce generic circuits. \end{theorem} \hspace{5mm}{\bf Proof: } Recall the problem {\sc exact cover by 3-sets} which is as follows: given a set $X$ with $|X|=3q$ for some integer $q$ and a collection ${\cal C}=Y_1,\ldots{},Y_k$ of 3-element subsets of $X$; does $\cal C$ contain a collection of $q$ disjoint sets $Y_{i_1},\ldots{},Y_{i_q}$ such that each element of $X$ is in exactly one of these sets? {\sc exact cover by 3-sets} is NP-complete \cite[Page 221]{garey1979}. Let {\sc exact cover by 4-sets} be the same problem as above, except that $|X|=4q$ and each set in $\cal C$ has size 4.
It is easy to see that {\sc exact cover by 3-sets} polynomially reduces to {\sc exact cover by 4-sets}: Given an instance $X, \cal C$ of {\sc exact cover by 3-sets} we extend $X$ to $X'$ by adding $q$ new elements $z_1,z_2,\ldots{},z_q$ and construct ${\cal C}'$ by including the $q$ sets $Y\cup\{z_i\}$, $i\in [q]$, in ${\cal C}'$ for each set $Y\in \cal C$. It is easy to check that $X,{\cal C}$ is a yes-instance of {\sc exact cover by 3-sets} if and only if $X',{\cal C}'$ is a yes-instance of {\sc exact cover by 4-sets}, so the latter problem is also NP-complete. Now given an instance $X',{\cal C}'$ of {\sc exact cover by 4-sets} we construct the graph $G$ as follows: the vertex set of $G$ consists of two sets $V_1$ and $V_2$. The set $V_1$ contains a vertex $v_x$ for each element $x\in X'$ and $V_2$ contains 4 vertices $u_{j,1},u_{j,2},u_{j,3},u_{j,4}$ for each set $Y_j\in {\cal C}'$ so that all these vertices are distinct. The edge set of $G$ is constructed as follows: for each $j\in [|{\cal C}'|]$, $E(G)$ contains the edges of a $K_4$ on $u_{j,1},u_{j,2},u_{j,3},u_{j,4}$, and for each set $Y_j\in {\cal C}'$ with $Y_j=\{x_1,x_2,x_3,x_4\}$, $E(G)$ contains two copies of each of the edges $x_1u_{j,1},x_2u_{j,2},x_3u_{j,3},x_4u_{j,4}$. Clearly we can construct $G$ in polynomial time. We claim that $G$ has a vertex partition into the vertex sets of disjoint generic circuits if and only if $X',{\cal C}'$ is a yes-instance of {\sc exact cover by 4-sets}.\\ Suppose first that $X',{\cal C}'$ is a yes-instance and let $Y_{i_1},\ldots{},Y_{i_q}$ be sets that form an exact cover of $X'$. For each $s\in [q]$ we include the 4 generic circuits of size 2 that connect the vertices $u_{i_s,1},u_{i_s,2},u_{i_s,3},u_{i_s,4}$ to the vertices corresponding to $Y_{i_s}$, and for every other set $Y_j$ of ${\cal C}'$ (not in the exact cover) we include the generic circuit on the vertices $u_{j,1},u_{j,2},u_{j,3},u_{j,4}$. This gives a vertex partition of $V(G)$ into vertex sets of disjoint generic circuits. Suppose now that $G_1,\ldots{},G_p$ is a collection of vertex disjoint generic circuits such that $V(G)$ is the union of their vertex sets. Then we obtain the desired exact cover of $X'$ by including $Y_j\in {\cal C}'$ in the cover precisely when the generic circuit on $u_{j,1},u_{j,2},u_{j,3},u_{j,4}$ is not one of the $G_i$'s. Note that vertices of $V_1$ can only be covered by generic circuits of size 2 (parallel edges), so the sets we put in the cover will cover $X'$, and they will do so precisely once since $G_1,\ldots{},G_p$ covered each vertex of $G$ precisely once. Hence the chosen $Y_j$'s form an exact cover of $X'$.\hspace*{\fill} $\Box$ \section{Properties of good 2T-graphs} \label{good2T} Let $G$ be a 2T-graph. For simplicity we shall call a generic circuit of $G$ a {\bf circuit of} $G$. Recall from Section \ref{2TGCsec} that each generic component of $G$ consists of either a single vertex or a set of circuits that form a hypertree in ${\cal G}=(V,\cal E)$. We call a generic component of $G$ a {\bf hyperpath} if its circuits $G_1, G_2, \dots, G_k$ ($k \geq 1$) satisfy the property that for all distinct $i, j$, $G_i$ and $G_j$ have a common vertex if and only if $|i-j| = 1$. Note that the common vertices between consecutive circuits are pairwise distinct; in particular, a generic component consisting of one circuit is a hyperpath. We call $G$ {\bf linear} if every non-trivial generic component of $G$ is a hyperpath.
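The hyperpath condition is easy to test mechanically once the circuits of a generic component are known (e.g. via Theorem \ref{findallGC}). The following Python sketch is ours, for illustration only; it assumes that circuits are encoded as vertex sets and are already listed in the candidate path order, and it uses the fact that two circuits share at most one vertex (Proposition \ref{atmostoneincommon}).
\begin{verbatim}
from itertools import combinations

def is_hyperpath(circuits):
    # circuits: list of vertex sets of the circuits of one generic
    # component, listed in the candidate path order. Two circuits may
    # share a vertex if and only if they are consecutive, and then the
    # shared vertex is unique (at most one vertex in common).
    for i, j in combinations(range(len(circuits)), 2):
        common = circuits[i] & circuits[j]
        if j - i == 1 and len(common) != 1:
            return False   # consecutive circuits must share exactly one vertex
        if j - i > 1 and common:
            return False   # non-consecutive circuits must be disjoint
    return True

def is_linear(components):
    # A 2T-graph is linear when every non-trivial generic component
    # (each given as an ordered list of vertex sets) is a hyperpath.
    return all(is_hyperpath(c) for c in components)
\end{verbatim}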
\begin{proposition} \label{hyperpath} Let $G$ be a 2T-graph which has one non-trivial generic component and no trivial generic component. Then $G$ has a good ordering if and only if $G$ is linear (i.e., $G$ is a hyperpath). \end{proposition} \hspace{5mm}{\bf Proof: } Suppose that $G$ is a hyperpath formed by circuits $G_1, G_2, \dots, G_k$. For each $i = 1, 2, \dots, k-1$, let $v_i$ be the common vertex of $G_i$ and $G_{i+1}$. Arbitrarily pick a vertex $v_0$ from $G_1$ distinct from $v_1$ and a vertex $v_k$ from $G_k$ distinct from $v_{k-1}$. The assumption that $G$ is a hyperpath and the choice of $v_0, v_k$ ensure that $v_0, v_1, \dots, v_k$ are pairwise distinct. By Theorem \ref{thm:GChasgoodor}, each $G_i$ has a good ordering $\prec_i$ that begins with $v_{i-1}$ and ends with $v_i$. It is easy to see that the concatenation of these $k$ orderings gives a good ordering of $G$. On the other hand, suppose that $G$ is not a hyperpath but has an acyclic orientation with arc-disjoint branchings $B^+_s, B^-_t$. Since $G$ is not a hyperpath, either there are three circuits intersecting at the same vertex or there are three pairwise non-intersecting circuits each intersecting with a fourth circuit. In either case, one of the three circuits contains neither $s$ nor $t$. This would imply that the union of the arc sets of $B^+_s, B^-_t$ restricted to this circuit contains a directed cycle, a contradiction to the fact that the orientation of $G$ is acyclic. \hspace*{\fill} $\Box$ \begin{proposition} \label{eachhyperpath} If a 2T-graph $G$ has a good ordering, then $G$ is linear. \end{proposition} \hspace{5mm}{\bf Proof: } Suppose that $D$ is a good orientation of $G$ with arc-disjoint branchings $B^+_s, B^-_t$. Consider a non-trivial generic component $H$ of $G$ and its orientation $D'$ induced by $D$, which is clearly acyclic. Since $H$ has $2|V(H)|-2$ edges, $A(D') \cap A(B^+_s)$ and $A(D') \cap A(B^-_t)$ induce arc-disjoint branchings in $D'$, certifying that $D'$ is a good orientation of $H$. By Proposition \ref{hyperpath}, $H$ is a hyperpath. Hence every non-trivial generic component of $G$ is a hyperpath and therefore $G$ is linear. \hspace*{\fill} $\Box$ In view of Proposition \ref{eachhyperpath}, we only need to consider linear 2T-graphs for possible good orderings or good orientations. Suppose that $D$ is a good orientation of a 2T-graph $G$ with arc-disjoint branchings $B^+_s, B^-_t$. Let $H$ be a generic component of $G$. Then the proof of Proposition \ref{eachhyperpath} shows that $D'=D[V(H)]$ is a good orientation of $H$ with arc-disjoint branchings $B^+_{s'}, B^-_{t'}$ which are the restrictions of $B^+_s, B^-_t$ to $V(H)$. We refer to $s, t$ as {\bf global} roots and to $s',t'$ as {\bf local} roots (of the corresponding branchings in $H$). The {\bf external degree} of a vertex $x$ in $G$ is the number of external edges incident with $x$ in $G$ and the {\bf external degree} of $H$ is the sum of external degrees of the vertices of $H$. \begin{lemma} \label{localroots} Let $G$ be a 2T-graph which has a good orientation with arc-disjoint branchings $B^+, B^-$. Then every non-trivial generic component has distinct local roots. Suppose that $H, H'$ are generic components of $G$ and $xy$ is an external edge where $x, y$ are vertices in $H, H'$ respectively.
Then one of the following holds: \begin{itemize} \item[(a)] $xy \in A(B^-)$ and $x$ is the local root of the in-branching $B^-_H$ in $H$ which is the restriction of $B^-$ to $V(H)$; \item[(b)] $xy \in A(B^+)$, $y$ is the local root of the out-branching $B^+_{H'}$ in $H'$ which is the restriction of $B^+$ to $V(H')$, and if the external degree of $x$ is one, then $x$ is either the root of $B^+$ (and hence the local root of $B^+_H$ which is the restriction of $B^+$ to $V(H)$) or not a local root; \item[(c)] $yx \in A(B^+)$ and $x$ is the local root of the out-branching $B^+_H$ in $H$ which is the restriction of $B^+$ to $V(H)$; \item[(d)] $yx \in A(B^-)$, $y$ is the local root of the in-branching $B^-_{H'}$ in $H'$ which is the restriction of $B^-$ to $V(H')$, and if the external degree of $x$ is one, then $x$ is either the root of $B^-$ (and hence the local root of $B^-_H$ which is the restriction of $B^-$ to $V(H)$) or not a local root. \end{itemize} In particular, if the external degrees of $x, y$ are both one, and neither $H$ nor $H'$ contains a global root, then either $x$ is a local root in $H$ or $y$ is a local root in $H'$ but not both. \end{lemma} \hspace{5mm}{\bf Proof: } Suppose that $D$ is a good orientation of $G$ with arc-disjoint branchings $B^+, B^-$. Then, as we mentioned above, for every non-trivial generic component $H$ the restrictions of $B^+, B^-$ to $V(H)$ form a pair of arc-disjoint branchings in $D[V(H)]$, and since $D$ is acyclic, the roots of these branchings must be distinct. Thus the first part of the lemma holds. This implies that the digraph $\tilde{D}$ that we obtain by contracting each non-trivial generic component to one vertex is a good orientation of the quotient $\tilde{G}$ of $G$ and the digraphs $\tilde{B}^+, \tilde{B}^-$ that we obtain from $B^+, B^-$ via this contraction are arc-disjoint out- and in-branchings of $\tilde{D}$. As every vertex which is not the root of an in-branching (out-branching) has exactly one arc leaving it (entering it), this implies that if some arc $uv$ of $B^+$ ($B^-$) enters (leaves) a non-trivial generic component, then $v$ ($u$) is the local out-root (in-root) of that component. Now it is easy to see that (a)-(d) hold. The last claim is a direct consequence of these and the fact that $B^+$ and $B^-$ are arc-disjoint. \hspace*{\fill} $\Box$ We say that a subset $X\subset V$ with $2\leq |X|\leq |V|-2$ is {\bf pendant at $x$ in $G$} if all edges between $X$ and $V(G)-X$ are incident with $x$. Note that $X$ is pendant at $x$ in $G$ if and only if $V-X$ is pendant at $x$ in $G$. \begin{lemma} \label{lem:rootinpendant} If $X$ is pendant at $x$ in a good 2T-graph $G$, then every good orientation $D$ of $G$ will have $|X\cap\{s,t\}|=1$, where $s$ and $t$ are the roots of arc-disjoint branchings $B^+_s,B^-_t$ that certify that $D$ is good. That is, $X$ contains precisely one global root. \end{lemma} \hspace{5mm}{\bf Proof: } Let $B^+_s,B^-_t$ be a pair of arc-disjoint branchings that certify that $D$ is good and suppose that none of $s,t$ are in $X$. Let $z\in X-x$ (such a vertex exists as $|X|>1$). As $X$ is pendant at $x$, the $(s,z)$-path in $B^+_s$ passes through $x$ and the $(z,t)$-path in $B^-_t$ also passes through $x$, but then $D$ contains a directed cycle, contradicting that it is acyclic.
Since $V-X$ is also pendant at $x$, we see that $|X\cap \{s,t\}|=1$ must hold. \hspace*{\fill} $\Box$ Let $G$ be a 2T-graph. Suppose that $H$ is a generic component of $G$ which is a hyperpath formed by circuits $G_1, G_2, \dots, G_k$. Then $H$ is called {\bf pendant} if one of the following conditions holds: \vspace{-3mm} \begin{itemize} \item $V(H)$ is a pendant set in $G$; \item all vertices of $G_1$ have external degree zero or all vertices of $G_k$ have external degree zero. \end{itemize} \begin{corollary} \label{pendantcomponent} If $H$ is a pendant generic component of a 2T-graph $G$, then $H$ must contain a global root. \end{corollary} \hspace{5mm}{\bf Proof: } If $H$ is the only generic component in $G$, then clearly it contains a global root. So assume that $G$ has at least two generic components. We show that $V(H)$ contains a pendant set. If $V(H)$ is itself a pendant set, there is nothing to show. If all vertices of $G_1$ have external degree zero, then $H$ has at least two circuits and $V(G_1)$ is a pendant set in $G$. Similarly, if all vertices of $G_k$ have external degree zero, then $V(G_k)$ is a pendant set in $G$. In any case $V(H)$ contains a pendant set and hence a global root by Lemma \ref{lem:rootinpendant}. \hspace*{\fill} $\Box$ \begin{figure}[h!tbp] \begin{center} \scalebox{0.7}{\input{figure5.pdf_t}} \caption{Example of a 2T-graph $G$ whose vertex set is partitioned into circuits but which has no good ordering. By Corollary \ref{pendantcomponent}, in any good orientation of $G$, the global roots $s,t$ are necessarily contained in $G_1$ and $G_4$. Now Lemma \ref{localroots} implies that the two vertices of attachment of $G_2,G_3$ must be local roots (of $G_2,G_3$, respectively) but not global roots in any good orientation. However, if two such local roots from distinct circuits are each incident with only one external edge, then, by Lemma \ref{localroots}, these edges cannot be the same, implying that $G$ has no good ordering.} \label{F5} \end{center} \end{figure} \begin{corollary} \label{cor:3pendant} If $G$ contains three or more pairwise disjoint pendant subsets $X_1,X_2,X_3$, then $G$ has no good orientation. In particular, if a 2T-graph has a good ordering, then it contains at most two pendant generic components. \end{corollary} \hspace{5mm}{\bf Proof: } This follows immediately from Lemma~\ref{lem:rootinpendant} and Corollary \ref{pendantcomponent}. \hspace*{\fill} $\Box$ \begin{theorem} \label{lem:pathconflict} Suppose that there are vertex-disjoint circuits $G_{i_0},G_{i_1},\ldots{},G_{i_p}$, $p\geq 1$, of a 2T-graph $G$ such that \begin{itemize} \item each $G_{i_j}$ has external degree 3; \item some vertex $x_0\in V(G_{i_0})$ has external degree 2 and the third external edge goes between a vertex $y_0\in V(G_{i_0})-x_0$ and a vertex $z_1\in V(G_{i_1})$; \item some vertex $x_p\in V(G_{i_p})$ has external degree 2 and the third external edge goes between a vertex $y_p\in V(G_{i_p})-x_p$ and a vertex $z_{p-1}\in V(G_{i_{p-1}})$, where $z_{p-1}\neq x_0$ if $p=1$; \item for each $j\in [p-1]$ there is exactly one external edge between $V(G_{i_j})$ and $V(G_{i_{j+1}})$: $y_jz_{j+1}$ with $y_j\in V(G_{i_j})$ and $z_{j+1}\in V(G_{i_{j+1}})$.
\end{itemize} If $G$ has a good ordering $\prec$, then some vertex of $V(G_{i_0})\cup V(G_{i_1})\cup\ldots\cup V(G_{i_p})$ is the first or the last vertex according to $\prec$ (that is, at least one of the global roots $s,t$ belongs to that vertex set). \end{theorem} \begin{figure}[h!tbp] \begin{center} \scalebox{0.7}{\input{figure6.pdf_t}} \caption{The figure above shows part of a graph $G$ whose vertex set is partitioned into circuits, together with all the external edges connecting them to other circuits. Assume that we have a good ordering and that the seven circuits displayed in the configuration do not contain any of the two global roots. Consider the external edge $xy$ between $C$ and $C'$. By Lemma \ref{localroots}, exactly one of $x$ and $y$ must be a local root. Say, w.l.o.g., that $x$ is a local root, so $y$ cannot be a local root of $C'$. We encode this fact by a white arrow from $C$ to $C'$ in the quotient graph (lower left figure). Now the other two vertices displayed in $C'$ must be its local roots, so that, following our drawing convention, we need to orient the remaining two edges incident with $C'$ in the quotient away from it. Processing all six circuits in the upper row in this way, we get the lower right figure and deduce that, finally, there is no way to place two local roots in the circuit $C''$. Thus the conclusion is that if $G$ has a good ordering, then at least one of the global roots must be a vertex of one of the circuits in the upper part of the figure.} \label{F6} \end{center} \end{figure} \vspace{-5mm} \hspace{5mm}{\bf Proof: } Assume that $V(G_{i_0})\cup V(G_{i_1})\cup\ldots\cup V(G_{i_p})$ does not contain any global root. The two local roots of $G_{i_0}$ are $x_0$ and $y_0$. So $z_1$ cannot be a local root of $G_{i_1}$. Then $y_1$ is a local root of $G_{i_1}$ and $z_2$ is not a local root of $G_{i_2}$. Continuing this argument, $z_p$ is not a local root of $G_{i_p}$, but then $G_{i_p}$ has only one local root, a contradiction. \hspace*{\fill} $\Box$ We call $G_{i_0},G_{i_1},\ldots{},G_{i_p}$ as above a {\bf conflict} of $G$. We say that two conflicts are {\bf disjoint} if no circuit is involved in both of them. The following is immediate from Corollary \ref{cor:3pendant} and Theorem \ref{lem:pathconflict}. For an example, see Figure \ref{F5}. \begin{corollary} \label{cor:atmost2conflicts} Let $G$ be a 2T-graph. If $G$ has 3 disjoint conflicts, then $G$ has no good ordering. \end{corollary} Even if the graph has no conflict, it may still have no good orientation. Indeed, using the example in Figure \ref{F6} we can now construct a more complex example in Figure \ref{F7} below of a 2T-graph whose vertex set partitions into vertex sets of disjoint circuits such that $G$ has no good ordering. Note that, for the conclusion that each of the three locally identical pieces of the graph must contain a global root, it is necessary that all of the circuits at the rim have exactly three vertices that are incident with external edges. \begin{figure}[htbp] \begin{center} \scalebox{0.5}{\input{figure3a.pdf_t}} \caption{Example of a $3$-connected 2T-graph $G$ such that the set of external edges almost forms a matching and $G$ has no good ordering. The solid and dashed edges illustrate two spanning trees along the external edges which can be extended arbitrarily into the circuits. Note that there are 22 circuits and 42 external edges connecting these, so all of these edges are needed by Theorem \ref{thm:tutte}.
It also follows from Theorem \ref{thm:tutte}, applied to the partition consisting of the seven circuits appearing from (roughly) 2 o'clock to 6 o'clock in the figure and the union of the remaining 15 circuits, that the 4 external edges between these two collections are all needed, and since they are incident with only 3 vertices of the seven circuits, there will be two external edges incident with the same vertex. One gets further examples by enlarging the three paths on the rim of the figure.} \label{F7} \end{center} \end{figure} \section{2T-graphs which are disjoint unions of circuits} \label{decomposeintoGCsec} In this section we consider 2T-graphs whose generic components are circuits. When we speak of a good orientation $D_{\prec}$ of a 2T-graph $G$, we use $s$ to denote the root of the out-branching $B^+$ and $t$ to denote the root of the in-branching $B^-$, where $B^+,B^-$ certify that $D_{\prec}$ is good (so $s$ is the first and $t$ is the last vertex in the ordering $\prec$). A circuit $H$ of $G$ is called a {\bf leaf} if there are exactly two external edges between $H$ and some other circuit, that is, $H$ corresponds to a vertex in $\tilde{G}$ incident with two parallel edges; otherwise $H$ is called {\bf internal}. \begin{theorem} \label{matchingcase} Let $G=(V,E)$ be a 2T-graph whose generic components are circuits. If the external edges in $G$ form a matching, then $G$ has a good ordering. \end{theorem} \hspace{5mm}{\bf Proof: } Let $G_1,G_2,\ldots{},G_k$ be the circuits of $G$. We prove the theorem by induction on $k$. When $k=1$, $G$ is itself a circuit and the result follows from Theorem \ref{thm:GChasgoodor}. So assume $k \geq 2$. Suppose first that some circuit $G_i$ is a leaf. By relabelling the circuits we may assume that $i=k$ and that $G_k$ is connected to $G_{k-1}$ by a matching of 2 edges $uv,zw$, where $u,z\in V(G_{k-1})$ and $v,w\in V(G_k)$. By induction $G-V(G_k)$ has a good ordering ${\prec'}$. By renaming if necessary we can assume $u {\prec'} z$. By Theorem \ref{thm:GChasgoodor}, $G_k$ has a good ordering ${\prec''}$ such that $v$ is the first vertex and $w$ the last vertex of ${\prec''}$. Now we obtain a good ordering by inserting all the vertices of ${\prec''}$ just after $u$ in ${\prec'}$. Note that this corresponds to taking the union of the branchings $B_s^+,B_t^-$ that correspond to ${\prec'}$ and the branchings $\hat{B}_v^+,\hat{B}_w^-$ that correspond to ${\prec''}$ by letting $A(\tilde{B}_s^+)=A(B_s^+)\cup A(\hat{B}_v^+)\cup\{uv\}$ and $A(\tilde{B}_t^-)=A(B^-_t)\cup A(\hat{B}^-_w)\cup\{wz\}$.\\ Suppose now that every $G_i$, $i\in [k]$, is internal. As $G$ and hence its quotient $\tilde{G}$ is a 2T-graph, there is a circuit $G_j$ such that there are exactly 3 edges $u_1v_1,u_2v_2,u_3v_3$, with $v_i\in V(G_j)$, connecting $V(G_j)$ to $V-V(G_j)$. Again we may assume that $j=k$. We may also assume w.l.o.g. that for some pair of spanning trees $T_1,T_2$ of $\tilde{G}$, the edges $u_1v_1,u_2v_2$ belong to $T_1$ and $u_3v_3$ belongs to $T_2$ (so the vertex in $\tilde{G}$ corresponding to $G_k$ is a leaf in $T_2$). Note that this means that $u_1,u_2$ belong to different circuits $G_a,G_b$. Now let $H$ be obtained from $G$ by deleting the vertices of $V(G_k)$ and adding the edge $u_1u_2$. Then $V(H)$ decomposes into a disjoint union of vertex sets of circuits and the set of edges connecting these forms a matching. By induction there is a good ordering $\prec$ of $H$.
Let $B^+_{s,0},B^-_{t,0}$ be a pair of arc-disjoint branchings that certify that $\prec$ is a good ordering of $H$. We are going to show how to insert the vertices of $V(G_k)$ so that we obtain a good ordering of $G$. By renaming $u_1,u_2,v_1,v_2$ and possibly considering the reverse ordering $\stackrel{\leftarrow}{\prec}$ if necessary, we can assume that $u_1 {\prec} u_2$ and that the arc $u_1u_2$ belongs to $B^-_{t,0}$. We now consider the three possible positions of $u_3$ in the ordering ${\prec}$ (see Figure~\ref{F2}). \begin{itemize} \item $u_3{\prec}u_1{\prec}u_2$. By Theorem \ref{thm:GChasgoodor}, $G_k$ has a good ordering ${\prec}_1$ such that $v_3$ is the initial vertex and $v_2$ is the terminal vertex of ${\prec}_1$. Let $B^+_{v_3,1},B^-_{v_2,1}$ be arc-disjoint branchings (on $V(G_k)$) certifying that ${\prec}_1$ is good. Then we obtain a good ordering of $G$ by inserting all the vertices of ${\prec}_1$ just after $u_1$ in $\prec$ and we obtain the desired branchings $B^+_s,B^-_t$ by letting $A(B^+_s)=A(B^+_{s,0})\cup A(B^+_{v_3,1})\cup\{u_3v_3\}$ and $A(B^-_t)=A(B^-_{t,0}-u_1u_2)\cup A(B^-_{v_2,1})\cup\{u_1v_1,v_2u_2\}$. \item $u_1{\prec}u_2{\prec}u_3$. By Theorem \ref{thm:GChasgoodor}, $G_k$ has a good ordering ${\prec}_2$ such that $v_2$ is the initial vertex and $v_3$ is the terminal vertex of ${\prec}_2$. Let $B^+_{v_2,2},B^-_{v_3,2}$ be arc-disjoint branchings (on $V(G_k)$) certifying that ${\prec}_2$ is good. Then we obtain a good ordering of $G$ by inserting all the vertices of ${\prec}_2$ just after $u_2$ in $\prec$ and we obtain the desired branchings $B^+_s,B^-_t$ by letting $A(B^+_s)=A(B^+_{s,0})\cup A(B^+_{v_2,2})\cup\{u_2v_2\}$ and $A(B^-_t)=A(B^-_{t,0}-u_1u_2)\cup A(B^-_{v_3,2})\cup\{u_1v_1,v_3u_3\}$. \item $u_1{\prec}u_3{\prec}u_2$. Consider again the good ordering ${\prec}_1$ above and the branchings $B^+_{v_3,1},B^-_{v_2,1}$. Then we obtain a good ordering of $G$ by inserting all the vertices of ${\prec}_1$ just after $u_3$ in $\prec$ and we obtain the desired branchings $B^+_s,B^-_t$ by letting $A(B^+_s)=A(B^+_{s,0})\cup A(B^+_{v_3,1})\cup\{u_3v_3\}$ and $A(B^-_t)=A(B^-_{t,0}-u_1u_2)\cup A(B^-_{v_2,1})\cup\{u_1v_1,v_2u_2\}$. \end{itemize} As we saw, in all the possible cases we obtain a good ordering of $G$ together with a pair of arc-disjoint branchings which certify that the ordering is good, so the proof is complete. \hspace*{\fill} $\Box$ Figure \ref{F7} shows an example of a 2T-graph $G$ whose vertex set partitions into vertex sets of generic circuits such that the set of edges between different circuits almost forms a matching and the graph $G$ has no good ordering. \begin{figure}[h!] \begin{center} \scalebox{0.9}{\input{figure2.pdf_t}} \caption{How to lift a good ordering to a new circuit as in the proof of Theorem \ref{matchingcase}. In-branchings are displayed solid, out-branchings are dashed. The first line displays the three possible orders of the relevant vertices (increasing left to right) as they occur in the proof. The second line displays the ordering of the augmented graph and how the branchings lead into and out of the new circuit; its local out- and in-root is the leftmost and rightmost $v_i$, respectively.} \label{F2} \end{center} \end{figure} A {\bf double tree} is any graph that one can obtain from a tree $T$ by adding one parallel edge for each edge of $T$. A {\bf double path} is a double tree whose underlying simple graph is a path.
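Double trees are easy to generate programmatically; the following sketch (ours, using the networkx package purely for illustration) doubles every edge of a tree. Since the two copies of the tree are edge-disjoint spanning trees, the result has $2n-2$ edges and is a 2T-graph.
\begin{verbatim}
import networkx as nx

def double_tree(tree: nx.Graph) -> nx.MultiGraph:
    # Duplicate every edge of the input tree. The two copies of the
    # tree are edge-disjoint spanning trees, so the resulting
    # multigraph on n vertices has 2n - 2 edges, i.e. it is a 2T-graph.
    G = nx.MultiGraph()
    G.add_nodes_from(tree.nodes)
    for u, v in tree.edges:
        G.add_edge(u, v)
        G.add_edge(u, v)   # parallel copy
    return G

# A double path on 4 vertices:
P = double_tree(nx.path_graph(4))
assert P.number_of_edges() == 2 * P.number_of_nodes() - 2
\end{verbatim}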
Recall that a subset $X\subset V$ with $2\leq |X|\leq |V|-2$ is pendant at $x$ in $G$ if all edges between $X$ and $V(G)-X$ are incident with $x$. \begin{definition} Let $G$ be a 2T-graph whose quotient graph is a double tree $T$. An {\bf obstacle} in $G$ is a subgraph $H$ consisting of a subset of the circuits of $G$ and the edges between these, such that the quotient graph of $G[V(H)]$ is a double path $T_H$ of $T$ and the following hold: \begin{itemize} \item $H$ contains circuits $C,C'$, possibly equal, and vertices $x\in C,y\in C'$, such that $x=y$ if $C=C'$ and there is an $(x,y)$-path $P$ in $H$ which uses only external edges of $H$ (so $P$ is also a path in $T_H$ between the two vertices corresponding to the circuits $C,C'$). \item $T-V(T_H)$ has at least two connected components $A,B$ and $V_A$ is pendant at $x$ and $V_B$ is pendant at $y$ in $G$, where $V_A$ (resp. $V_B$) is the union of those circuits of $G$ that correspond to the vertex set $A$ (resp. $B$) in $T$. \end{itemize} \end{definition} \begin{theorem} \label{thm:doubleTchar} Let $G$ be a 2T-graph whose quotient is a double tree $T$. Then $G$ has a good ordering if and only if \begin{itemize} \item[(i)] $G$ has at most two pendant circuits and \item[(ii)] $G$ contains no obstacle. \end{itemize} \end{theorem} \hspace{5mm}{\bf Proof: } By Corollary \ref{cor:3pendant} we see that (i) must hold if $G$ has a good ordering. Suppose that $G$ contains an obstacle $H$ but there exists a good ordering ${\prec}$ with associated branchings $B^+_s,B^-_t$ in $D=D_{\prec}$. Let $x,y$ be the special vertices according to the definition. Suppose first that $x=y$ and let $C$ be the circuit that contains $x$, let $v_C$ be the vertex of $T$ that corresponds to $C$ and let $V_A$, $V_B$ be the union of the vertex sets of circuits of $G$ so that these correspond to distinct connected components $A,B$ of $T-v_C$ and both $V_A$ and $V_B$ are pendant at $x$ in $G$. By Lemma \ref{lem:rootinpendant}, we may assume w.l.o.g. that $s\in V_A$ and $t\in V_B$. Then it is easy to see that $D$ contains two arcs $a_1x,a_2x$ from $V_A$ to $x$ and two arcs $xb_1,xb_2$ from $x$ to $V_B$, and precisely one of the arcs $a_1x,a_2x$ is in $B^+_s$ and the other is in $B^-_t$, and the same holds for the arcs $xb_1,xb_2$. Now consider a vertex $z\in C-x$. The $(s,z)$-path in $B^+_s$ contains $x$ and the $(z,t)$-path in $B^-_t$ also contains $x$, so $D$ is not acyclic, a contradiction. Hence we must have $x\neq y$ and $x,y$ are in different circuits (so $C\neq C'$). Again we let $V_A$ be the union of vertices of circuits of $G$ so that $V_A$ is pendant at $x$ and similarly let $V_B$ be the union of vertices of circuits of $G$ so that $V_B$ is pendant at $y$. Again by Lemma \ref{lem:rootinpendant}, we may assume w.l.o.g. that $s\in V_A$ and $t\in V_B$. As above, $D$ must contain two arcs $a_1x,a_2x$ from $V_A$ to $x$ and two arcs $yb_1,yb_2$ from $y$ to $V_B$, and precisely one of the arcs $a_1x,a_2x$ is in $B^+_s$ and the other is in $B^-_t$, and the same holds for the arcs $yb_1,yb_2$. Let $C_1,C_2,\ldots{},C_r$, $r\geq 2$, be circuits of $G$ so that $C=C_1,C'=C_r$ and $v_{C_1}v_{C_2}\ldots{}v_{C_r}$ is a path in $T$ which corresponds to the $(x,y)$-path $P=x_1x_2\ldots{}x_r$, where $x=x_1,y=x_r$, that uses only edges between different circuits in $G$ (by the definition of an obstacle).
As $s\in V_A$ and $t\in V_B$, the path $P$ must be a directed $(x_1,x_r)$-path in $D$ and, using that $D[V(C_1)]$ and $D[V(C_r)]$ are acyclic, we can conclude as above that the arc $x_1x_2$ is an arc of $B^+_s$ and the arc $x_{r-1}x_r$ is an arc of $B^-_t$ (if $x_1x_2$ were not an arc of $B^+_s$, then $D[V(C_1)]$ would contain a directed path from $x_1$ to the end vertex $z$ of the other arc leaving $V(C_1)$ and also a directed path from $z$ to $x_1$, implying that $D[V(C_1)]$ would not be acyclic). Thus it follows that for some index $1<j<r$ the arc $x_{j-1}x_j$ is an arc of $B^+_s$ and the arc $x_jx_{j+1}$ is an arc of $B^-_t$. This implies that for every $z\in C_j$ the $(s,z)$-path of $B^+_s$ and the $(z,t)$-path of $B^-_t$ contain $x_j$, contradicting that $D$ is acyclic. Suppose now that $G$ satisfies (i) and (ii). We shall prove by induction on the number, $k$, of circuits in $G$ that $G$ has a good orientation. The base case $k=1$ follows from Theorem \ref{thm:GChasgoodor}, so we may proceed to the induction step.\\ Suppose first that $G$ has a leaf circuit $G_h$ that is not pendant. Let $v_{h'}$ be the neighbour of $v_h$ in $\tilde{G}$ and let $G_{h'}$ be the circuit of $G$ corresponding to $v_{h'}$. As $G_h$ is not pendant, the two edges $xx',zz'$ between $G_h$ and $G_{h'}$ have distinct end vertices $x,z$ in $V(G_h)$ and distinct end vertices $x',z'$ in $V(G_{h'})$. By induction $G-G_h$ has a good orientation $D'$ and we may assume, by reversing all arcs if necessary, that $x'$ occurs before $z'$ in the ordering ${\prec'}$ that induces $D'$. By Theorem \ref{thm:GChasgoodor}, $G_h$ has a good orientation $D''$ where $x$ is the out-root and $z$ is the in-root. Now we obtain a good orientation $D$ by adding the two arcs $x'x$ and $zz'$ and using the first in the out-branching and the latter in the in-branching. \\ Thus we can assume from now on that every leaf circuit of $G$ is pendant, and now it follows from Corollary \ref{cor:3pendant} that $G$ is a double path whose circuits we can assume are ordered as $G_1,G_2,\ldots{},G_k$ in the ordering that the corresponding vertices $v_1,v_2,\ldots{},v_k$ appear in the quotient $\tilde{G}$.\\ We prove the following stronger statement which will imply that $G$ has a good orientation. \begin{claim} \label{claim1} Let $G$ be a double path having no obstacle and whose circuits are ordered as $G_1,G_2,\ldots{},G_k$. Let $s\in V(G_1)$ be any vertex, except $a$ if $G_1$ is pendant at $a\in V(G_1)$, and let $t\in V(G_k)$ be any vertex except $b$ if $G_k$ is pendant at $b\in V(G_k)$ (such vertices are called {\bf candidates for roots}). Then $G$ has a good orientation $D_{\prec}$ such that $s$ is the first vertex (root of the out-branching) and $t$ is the last vertex in $\prec$ if and only if none of the following hold. \begin{enumerate} \item[(a)] There is an $(s,t)$-path $P$ in $G$ which uses only external edges. \item[(b)] $t$ is an end vertex of one of the edges from $G_{k-1}$ to $G_k$, there is an index $i\in [k-1]$ so that the two edges from $G_i$ to $G_{i+1}$ are incident with the same vertex $x$ of $G_{i+1}$ and there is an $(x,t)$-path in $G$ which uses only external edges. \item[(c)] $s$ is an end vertex of one of the edges from $G_1$ to $G_2$, there is an index $j\in [k]\setminus \{1\}$ so that the two edges from $G_{j-1}$ to $G_j$ are incident with the same vertex $y$ of $G_{j-1}$ and there is an $(s,y)$-path in $G$ which uses only external edges.
\end{enumerate} \end{claim} \noindent{}{\bf Proof of claim:} Note that if ${\prec}:\ v_1,\ldots{},v_n$ is a good ordering with $s=v_1$ and $t=v_n$, then, in the corresponding acyclic orientation $D_{\prec}$, the two edges between $G_i$ and $G_{i+1}$ are both oriented towards $G_{i+1}$ and, for every pair of arc-disjoint branchings $B^+_s,B^-_t$ in $D$, exactly one of these arcs belongs to $B^+_s$ and the other to $B^-_t$.\\ We first show that if $G,s,t$ satisfy any of (a)-(c), then there is no good ordering $v_1,\ldots{},v_n$ with $s=v_1$ and $t=v_n$.\\ Suppose that $G$ has a good ordering $v_1,\ldots{},v_n$ with $s=v_1$ and $t=v_n$ and let $B^+_s,B^-_t$ be a pair of arc-disjoint branchings in the acyclic digraph $D=D_{\prec}$. If (a) holds, then let $P=x_1x_2\ldots{}x_k$ be a path from $s=x_1$ to $t=x_k$ so that each edge $x_ix_{i+1}$, $i\in [k-1]$, has one end vertex in $G_i$ and the other in $G_{i+1}$. As $B^+_s$ induces an out-branching from $s$ in the acyclic digraph $D[V(G_1)]$, we must have that the arc $sx_2$ belongs to $B^+_s$. By a similar argument, the arc $x_{k-1}t$ belongs to $B^-_t$. Hence there is an index $1<i<k$ such that the arc $x_{i-1}x_i$ is in $B^+_s$ and the arc $x_ix_{i+1}$ is in $B^-_t$. However this implies that $x_i$ is both an out-root and an in-root in $D[V(G_i)]$, contradicting that $D$ is acyclic. So (a) cannot hold if there is a good ordering.\\ If (b) holds, then let $x\in V(G_{i+1})$ be the vertex incident with both edges between $G_{i}$ and $G_{i+1}$ and let $x_{i+1}x_{i+2}\ldots{}x_{k-1}x_k$, where $x_{i+1}=x$ and $x_k=t$, be an $(x,t)$-path in $G$ so that each edge $x_jx_{j+1}$, $i+1\leq j\leq k-1$, has one end vertex in $G_j$ and the other in $G_{j+1}$. As above, we conclude that the arc $x_{k-1}t$ belongs to $B^-_t$ and that there exists an index $j$ with $i+1\leq j\leq k-1$ so that $x_j$ is the head of an arc of $B^+_s$ coming from $G_{j-1}$ and the tail of an arc of $B^-_t$ going to $G_{j+1}$. As above this again contradicts that $D$ is acyclic with branchings $B^+_s,B^-_t$. Analogously we see that (c) cannot hold when there is a good ordering.\\ Suppose now that none of (a)-(c) hold. We prove the existence of a good orientation by induction on $k$. For $k=1$ the claim follows from Theorem \ref{thm:GChasgoodor}. Suppose next that $k=2$. Let $u_1u_2$ and $v_1v_2$, with $u_1,v_1\in V(G_1)$, be the two edges between $G_1$ and $G_2$. Suppose first that $t\not\in \{u_2,v_2\}$. Since $G_1$ is not pendant at $s$, we can assume w.l.o.g. that $s\neq v_1$. By Theorem \ref{thm:GChasgoodor}, there is a good orientation of $G_1$ in which $s$ is the out-root and $v_1$ is the in-root and a good orientation of $G_2$ in which $u_2$ is the out-root and $t$ is the in-root. Thus we obtain the desired orientation by adding the arc $u_1u_2$ to the union of the two out-branchings and the arc $v_1v_2$ to the union of the two in-branchings. Suppose now that $t\in \{u_2,v_2\}$. Without loss of generality $t=u_2$. Since (a) does not hold, we know that $s\neq u_1$.
By Theorem \ref{thm:GChasgoodor}, $G_1$ has a good orientation with $s$ as out-root and $u_1$ as in-root and $G_2$ has a good orientation with $v_2$ as out-root and $t$ as in-root. Now we obtain the desired branchings by adding the arc $u_1u_2$ to the union of the two in-branchings and the arc $v_1v_2$ to the union of the two out-branchings. Assume that $k\geq 3$ and that the claim holds for all double paths which satisfy none of (a)-(c) and have fewer than $k$ circuits. Let $s\in V(G_1),t\in V(G_k)$ be candidates for roots and let $xx',zz'$ be the two edges between $G_1$ and $G_2$. Without loss of generality we have $s\neq z$. Note that (b) cannot hold for $s',t$ in $G'=G-G_1$ when $s'\in\{x',z'\}$ because $G'$ is an induced subgraph of $G$. Suppose that (a) holds for $(G',x',t)$. Then $z'\neq x'$ as (b) does not hold for $G$. Now (a) cannot hold for $(G',z',t)$ as this would imply that (b) holds in $G$. For the same reason (c) cannot hold for $(G',z',t)$. Thus if (a) holds for $(G',x',t)$, then none of (a)-(c) hold for $(G',z',t)$. If (c) holds for $(G',x',t)$, we conclude that none of (a),(c) hold for $(G',z',t)$, because both would imply that $G$ contains an obstacle. Let $s'=x'$ unless one of (a)-(c) holds for $x'$; in that case $s\neq x$ must hold and we let $s'=z'$. By the arguments above, none of (a)-(c) hold for $(G',s',t)$.\\ By induction $G'$ has a good orientation where $s'$ is the out-root and $t$ is the in-root and, by Theorem \ref{thm:GChasgoodor}, $G_1$ has a good orientation in which $s$ is the out-root and the in-root is $z$ if $s'=x'$ (recall that $s\neq z$) and $x$ if $s'=z'$ (in which case $s\neq x$, as noted above). Let $a$ be the arc $xx'$ if $s'=x'$ and otherwise let $a$ be the arc $zz'$. Now adding $a$ to the union of the two out-branchings and the other arc from $G_1$ to $G_2$ to the union of the two in-branchings, we obtain the desired good orientation. This completes the proof of Claim \ref{claim1}.\\ Now we are ready to conclude the proof of Theorem \ref{thm:doubleTchar}. As $G_1,G_k$ are circuits, they both have at least 2 vertices. If we can choose $s\in V(G_1)$ and $t\in V(G_k)$ so that neither of these two vertices is incident with edges to the other circuits, then we are done by Claim \ref{claim1}, so we may assume that $|V(G_1)|=2$ or $|V(G_k)|=2$ (or both). Suppose w.l.o.g. that $|V(G_1)|=2$ and that the two edges from $G_1$ to $V-G_1$ are incident with different vertices $u,v\in V(G_1)$. As $V(G_1)$ is pendant, these two edges end in the same vertex $x$. If we can choose $t\in V(G_k)$ so that it is not incident with any of the edges between $G_{k-1}$ and $G_k$, then we are done, so we may assume that we also have $V(G_k)=\{z,w\}$ and that the edges between $G_{k-1}$ and $G_k$ are $yz,yw$ for some $y\in V(G_{k-1})$ ($G_k$ is pendant). Now it follows from the fact that (ii) holds that every $(x,y)$-path in $G$ uses an edge which is inside some $G_i$. Hence we can take $s$ and $t$ freely among $u,v$ and $z,w$, respectively, and conclude by the claim (none of (a)-(c) can hold). \hspace*{\fill} $\Box$
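Conditions (a)-(c) of Claim \ref{claim1} are purely combinatorial and can be tested by a forward walk along the external edges. The following Python sketch is ours, for illustration only; it assumes the double path is encoded as a list \texttt{ext}, where \texttt{ext[i]} holds the two external edges \texttt{(u, v)} between circuits $i$ and $i+1$ (0-indexed, with \texttt{u} in circuit $i$).
\begin{verbatim}
def reaches(ext, start, i0, i1, target):
    # Forward walk along external edges: frontier of vertices in each
    # successive circuit reachable from `start` (in circuit i0) using
    # external edges only, stopping in circuit i1.
    frontier = {start}
    for edges in ext[i0:i1]:
        frontier = {v for (u, v) in edges if u in frontier}
    return target in frontier

def violated_condition(ext, s, t):
    # Return the first condition among (a)-(c) violated by the
    # candidate roots s (in the first circuit) and t (in the last),
    # or None if both are admissible.
    k = len(ext) + 1
    if reaches(ext, s, 0, k - 1, t):                        # condition (a)
        return 'a'
    if any(t == v for _, v in ext[-1]):                     # condition (b)
        for i, ((u1, v1), (u2, v2)) in enumerate(ext):
            if v1 == v2 and reaches(ext, v1, i + 1, k - 1, t):
                return 'b'
    if any(s == u for u, _ in ext[0]):                      # condition (c)
        for j, ((u1, v1), (u2, v2)) in enumerate(ext):
            if u1 == u2 and reaches(ext, s, 0, j, u1):
                return 'c'
    return None
\end{verbatim}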
\section{Remarks and open problems}\label{remarksec} Let us start by recalling that the following is an immediate consequence of Theorem \ref{thm:GChasgoodor}, as we first find a good ordering of the circuit and then orient the remaining edges according to that ordering. \begin{corollary} \label{cor:containsGC} Every graph which contains a circuit as a spanning subgraph has a good ordering. \end{corollary} \begin{conjecture} There exists a polynomial algorithm for deciding whether a 2T-graph has a good ordering. \end{conjecture} \begin{problem} What is the complexity of deciding whether a given graph has a good ordering? \end{problem} Two of the authors of the current paper proved the following generalization of Theorem \ref{4reg4con}. Note that its proof is more complicated than that of Theorem \ref{4reg4con}. \begin{theorem}\cite{bang4reg4con} \label{all4reg4congood} Every 4-regular 4-connected graph has a good orientation. \end{theorem} Let $D=(V,A)$ be a digraph and let $s,t$ be distinct vertices of $V$. An {\bf $(s,t)$-ordering} of $D$ is an ordering ${\prec}:\ v_1,v_2,\ldots{},v_n$ with $v_1=s,v_n=t$ such that every vertex $v_i$ with $i<n$ has an arc to some $v_j$ with $i<j$ and every vertex $v_r$ with $r>1$ has an arc from some $v_p$ with $p<r$. It is easy to see that $D$ has such an ordering if and only if it has a spanning acyclic subdigraph with branchings $B^+_s,B^-_t$. These branchings are not necessarily arc-disjoint, but it is clear that if $D$ has a good ordering with $s$ as the initial and $t$ as the terminal vertex, then this ordering is also an $(s,t)$-ordering. Hence having an $(s,t)$-ordering is a necessary condition for having a good ordering with $s$ as the initial and $t$ as the terminal vertex. \begin{theorem} \label{thm:storderNPC} It is NP-complete to decide whether a digraph $D=(V,A)$ with prescribed vertices $s,t\in V$ has an $(s,t)$-ordering. \end{theorem} \hspace{5mm}{\bf Proof: } The so-called {\sc betweenness} problem is as follows: given a set $S$ and a collection of triples $(x_i,y_i,z_i)$, $i\in [m]$, consisting of three distinct elements of $S$; is there a total order on $S$ (called a betweenness order on $S$) so that for each of the triples we have either $x_i<y_i<z_i$ or $z_i<y_i<x_i$? {\sc Betweenness} is NP-complete \cite{opatrnySJC8}. Given an instance $[S, (x_i,y_i,z_i), i\in [m]]$ of {\sc betweenness} we construct the following digraph $D$.
The vertex set $V$ of $D$ is constructed as follows: first take $5m$ vertices $$a_1,\ldots{},a_m,b_1,\ldots{},b_m,c_1,\ldots{},c_m,d_1,\ldots{},d_m,e_1,\ldots{},e_m$$ where $\{a_i,b_i,c_i\}$ corresponds to the triple $(x_i,y_i,z_i)$ and then identify those vertices in the set $\{a_1,\ldots{},a_m,b_1,\ldots{},b_m,c_1,\ldots{},c_m\}$ that correspond to the same element of $S$. Then, add two more vertices: $s$ and $t$. The arc set of $D$ consists of an arc from $s$ to each vertex of $\{a_1,\ldots{},a_m,c_1,\ldots{},c_m\}$, an arc from each such vertex to $t$, and the following $6m$ arcs which model the betweenness conditions: for each triple $(x_i,y_i,z_i)$, $D$ contains the arcs $a_id_i,c_id_i,d_ib_i,$ $b_ie_i,e_ia_i,e_ic_i$. Clearly $D$ can be constructed in polynomial time. We claim that $D$ has an $(s,t)$-ordering if and only if there is a betweenness total ordering of $S$. Suppose first that $D$ has an $(s,t)$-ordering. The vertices $d_i,b_i,e_i$ must occur in that order as $b_i$ is the unique out-neighbour (in-neighbour) of $d_i$ ($e_i$). As $a_i,c_i$ are the only in-neighbours (out-neighbours) of $d_i$ ($e_i$) in $D$, the vertices $a_i,c_i$ cannot both occur after (before) $d_i$ ($e_i$), so the vertices in $\{a_i,b_i,c_i\}$ will occur either in the order $a_i,b_i,c_i$ or in the order $c_i,b_i,a_i$. Thus taking the same order for the elements in $S$ as for the corresponding vertices of $D$, we obtain a betweenness total order. Conversely, if we are given a betweenness total order for $S$, we just place the vertices in $\{a_1,\ldots{},a_m,b_1,\ldots{},b_m,c_1,\ldots{},c_m\}$ in the order that the corresponding elements of $S$ occur and then insert each vertex $d_i$ ($e_i$) anywhere between $a_i$ and $b_i$ ($b_i$ and $c_i$) if the triple $(x_i,y_i,z_i)$ is ordered as $x_i<y_i<z_i$, and otherwise we insert $d_i$ ($e_i$) anywhere between $c_i$ and $b_i$ ($b_i$ and $a_i$). Finally, insert $s$ as the first element and $t$ as the last element. Now every vertex different from $s,t$ has an earlier in-neighbour and a later out-neighbour, so it is an $(s,t)$-ordering. \hspace*{\fill} $\Box$ If $D$ is a semicomplete digraph, that is, a digraph with no pair of non-adjacent vertices, then $D$ has an $(s,t)$-ordering for a given pair of distinct vertices $s,t$ if and only if $D$ has a Hamiltonian path from $s$ to $t$ \cite{bangJGT20a,thomassenJCT28}. It was shown in \cite{bangJA13} that there exists a polynomial algorithm for deciding the existence of such a path in a given semicomplete digraph, so for semicomplete digraphs the $(s,t)$-ordering problem is polynomially solvable. \begin{corollary} It is NP-complete to decide if a strong digraph $D=(V,A)$ has a $(p,q)$-ordering for some choice of distinct vertices $p,q\in V$. \end{corollary} \hspace{5mm}{\bf Proof: } Let $D'$ be the digraph that we obtain from the digraph $D$ in the proof above by adding the arc $ts$.
Then $D'$ is strong and it is easy to see that the only possible pair for which there could exist a $(p,q)$-ordering is the pair $p=s,q=t$: for each triple $(x_i,y_i,z_i)$ the corresponding vertices in $D$ must occur either in the order $a_i,d_i,b_i,e_i,c_i$ or in the order $c_i,d_i,b_i,e_i,a_i$, and in both cases $s$ must be before all these vertices and $t$ must be after all these vertices.\hspace*{\fill} $\Box$ \begin{problem} What is the complexity of deciding whether a digraph which has a pair of arc-disjoint branchings $B^+_s,B^-_t$ has such a pair whose union (of the arcs) is an acyclic digraph? \end{problem}
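To make the reduction in the proof of Theorem \ref{thm:storderNPC} concrete, the following Python sketch (ours, for illustration) builds the arc set of the digraph $D$ for $m$ triples and tests the $(s,t)$-ordering property; the identification of vertices corresponding to the same element of $S$ is left as a post-processing step.
\begin{verbatim}
def betweenness_digraph(m):
    # Arcs of the digraph D built from m betweenness triples.
    # Vertices a_i, b_i, c_i corresponding to the same element of S
    # still have to be identified afterwards; we keep them separate.
    arcs = []
    for i in range(1, m + 1):
        a, b, c, d, e = (f"{x}{i}" for x in "abcde")
        arcs += [("s", a), ("s", c), (a, "t"), (c, "t")]          # arcs via s, t
        arcs += [(a, d), (c, d), (d, b), (b, e), (e, a), (e, c)]  # 6 gadget arcs
    return arcs

def is_st_ordering(order, arcs):
    # Every vertex but the last needs a later out-neighbour and
    # every vertex but the first an earlier in-neighbour.
    pos = {v: i for i, v in enumerate(order)}
    has_later = {u for u, v in arcs if pos[u] < pos[v]}
    has_earlier = {v for u, v in arcs if pos[u] < pos[v]}
    return (all(v in has_later for v in order[:-1])
            and all(v in has_earlier for v in order[1:]))
\end{verbatim}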
\section{Introduction} The coupling between the super-massive black hole (SMBH) energy output and the interstellar and circum-galactic medium (ISM and CGM) of the host-galaxy is still an open issue, particularly relevant for hyper-luminous quasi-stellar objects (QSOs) with SMBH mass $\geq$ 10$^{9}$ M$_\odot$ and bolometric luminosity $L_{\rm Bol}> 10^{47}$ {erg~s$^{-1}$}, i.e. at the brightest end of the active galactic nuclei (AGN) luminosity function. Mechanical and radiative QSO-driven feedback processes have been implemented in models of galaxy evolution to prevent massive galaxies from over-growing, change their colours, heat both ISM and CGM and enrich them with metals \citep[e.g.][and references therein]{Croton06, Sijacki07, Voit15, Gaspari17, Choi18}. The impressive growth in the number of QSO-driven outflows discovered in the last decade represents a great advance in our comprehension of the feedback process. These outflows have been detected in all gas phases and at all spatial scales \citep[sub-pc to several kpc, see][and references therein]{Fiore17}, and provide a very promising mechanism to efficiently deposit energy and momentum into the surrounding gas \citep[e.g.][]{Faucher-Giguere12, Zubovas&King12}, with the most powerful ones exhibiting a kinetic power up to a few percent of {$L_{\rm Bol}$}\ \citep[e.g.][]{Feruglio10,Maiolino12,Cicone14,Aalto15,Bischetti17}. In several AGN, mainly in the cold molecular and neutral gas phases, mass outflow rates far exceeding the star formation rate have been measured \citep[e.g.][]{Feruglio13b, Alatalo15a, Alatalo15, Cicone15, Fluetsch19}, indicating that these outflows may affect the evolution of the host galaxy. Ultra-fast outflows (UFOs) of highly ionised gas observed at sub-pc scales \citep{Reeves03,Tombesi12} have been proposed as the likely origin of galaxy-wide outflows, interpreted as the result of the impact of UFOs on the ISM \citep[][and references therein]{King&Pounds15}. Furthermore, both models and observations of kpc-scale outflows seem to indicate a UFO-ISM interaction in an energy-conserving regime, whereby the swept-up gas expands adiabatically. So far the co-existence of a massive molecular outflow with a nuclear UFO has been confirmed in a handful of AGN with {$L_{\rm Bol}$}$\sim10^{44}- 10^{46}$ {erg~s$^{-1}$}\ \citep{Tombesi15,Feruglio15,Longinotti15} and in APM 08279$+$5255 \citep{Feruglio17}, which is a gravitationally-lensed QSO at $z$ $\sim$ 4 with an estimated intrinsic {$L_{\rm Bol}$}\ of a few times 10$^{47}$ {erg~s$^{-1}$}\ \citep{Saturni18}. In all these sources the momentum boost (i.e. the momentum flux of the wind normalised to the AGN radiative momentum output, {$L_{\rm Bol}$}/$c$) of the UFO is $\sim$ 1, while the momentum rate of the molecular outflow is usually $\gg$ 1, in qualitative agreement with the theoretical predictions for an energy-conserving expansion \citep{Faucher-Giguere12,Costa14}. However, these results are still limited to a very small sample and suffer from large observational uncertainties, mostly due to the relatively low signal-to-noise ratio of the UFO- or outflow-related features detected in the spectra, or to the limited spatial resolution of sub-mm observations.
Recent works increasing the statistics of sources with detection of molecular outflows have widened the range of measured energetics \citep[e.g.][]{GarciaBurillo14,Veilleux17,Feruglio17,Brusa18,BarcosMunoz18,Fluetsch19}, consistent with driving mechanisms alternative to the energy-conserving expansion, such as direct radiation pressure onto the host-galaxy ISM \citep[e.g.][]{Ishibashi&Fabian14,Ishibashi18,Costa18a}. In order to study the interplay between UFOs and large-scale outflows in the still little-explored high-{$L_{\rm Bol}$}\ regime, we have targeted with ALMA the QSO PDS~456, which is the most luminous, radio-quiet AGN ({$L_{\rm Bol}$}\ $\sim10^{47}$ {erg~s$^{-1}$}) in the local Universe at $z\simeq0.18$ \citep{Torres97,Simpson99}. This has allowed us to probe the molecular gas reservoir in a hyper-luminous QSO with unprecedented spatial resolution ($\sim700$ pc). PDS~456\ exhibits the prototype of a massive and persistent UFO detected in the X-rays, identified as a quasi-spherical wind expanding with a velocity of $\sim0.3c$ and kinetic power of $\sim20-30$\% of $L_{\rm Bol}$ \citep{Nardini15, Luminari18}, arising at $\sim$ 0.01 pc from the SMBH. \citet{Reeves16} have reported the discovery of a complex of soft X-ray broad absorption lines, possibly associated with a lower ionisation, decelerating ($\sim$ 0.1$c$) phase of the UFO out to pc scales. Moreover, \citet{Hamann18} have recently claimed the presence of a highly blueshifted CIV absorption line in the {\it Hubble Space Telescope} UV spectra of PDS~456, tracing an outflow with a velocity of 0.3$c$, similar to that measured for the UFO. Given its uniqueness in terms of the presence of very fast outflows observed in several wavebands and its high luminosity, which makes it a local counterpart of the hyper-luminous QSOs shining at $z\sim2-3$, PDS~456\ stands out as one of the best targets to investigate the presence of a molecular outflow and the effects of the QSO activity on the host-galaxy ISM. Nonetheless, the properties of the molecular gas of PDS~456\ have been poorly studied so far, being based on a low-resolution ($7\times4.8$ arcsec$^2$) and low-sensitivity observation performed with the OVRO array \citep[][hereafter Y04]{Yun04}. The detection of a CO(1-0) emission line with a FWHM = 180 {km~s$^{-1}$}\ and line flux of $\sim$ 1.5 Jy {km~s$^{-1}$}\ implies a molecular gas reservoir of a few times $10^{9}$ {$M_\odot$}, which is an intermediate value between those typically measured for {\it blue} Palomar-Green QSOs and local ultra-luminous infrared galaxies (ULIRGs) \citep[e.g.][]{Solomon97,Evans06,Xia12}. The $K$-band image obtained at the \textit{Keck} Telescope shows three compact sources detected at $\sim3$ arcsec from the QSO, suggesting the possible presence of companions at a projected distance of $\sim9$ kpc (Y04). The paper is organised as follows. In Sect. 2 we describe the ALMA observation of PDS~456\ and the data reduction procedure. Our analysis and results are presented in Sect. 3. We discuss our findings in Sect. 4 and summarise our conclusions in Sect. 5. At the redshift of PDS~456, the physical scale is $\sim3.1$ kpc arcsec$^{-1}$, given a $H_0=69.6$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}=0.286$, and $\Omega_{\rm \Lambda}=0.714$ cosmology. \begin{figure*}[thb] \centering \includegraphics[width = 1\textwidth]{images/mappa-tot-pdsco.png} \caption{Panel \textbf{\textit{(a):}} map of the continuum-subtracted {CO(3$-$2)}\ emission of PDS~456, integrated over a linewidth of 320 {km~s$^{-1}$}.
Black contours indicate the [-3,-2, 2, 3, 2$^n$]$\sigma$ significance levels ($n\geq2$ and $\sigma=0.013$ Jy beam$^{-1}$ {km~s$^{-1}$}) of the {CO(3$-$2)}\ emission. Blue contours indicate the (rest-frame) $\sim340$ GHz continuum [-3,-2, 2, 3, 2$^n$]$\sigma$ levels (with $\sigma=9.6$ $\mu$Jy beam$^{-1}$). The ALMA synthetic beam is indicated by the grey ellipse. Panel \textbf{\textit{(b):}} map of the line and continuum emitters detected in the ALMA field of view at $\gtrsim5\sigma$. } \label{fig:CO-cont-map} \end{figure*} \section{ALMA observation and data reduction}\label{sect:datared} We present in this work the ALMA Cycle 4 observation (project 2016.1.01156.S, P.I. E. Piconcelli) of PDS~456, performed on 5 May 2017 with a total on-source integration time of 4.1 hours. The ALMA array was arranged in the C40-5 configuration, with a maximum projected baseline of $\sim1.1$ km. We used the ALMA band 7 receiver and the frequency division mode of the ALMA correlator, providing us with four spectral windows of 1.875 GHz width and a spectral resolution of 31.25 MHz ($\sim30$ {km~s$^{-1}$}). One spectral window (spw0) was centred at 292 GHz to cover the expected frequency of the {CO(3$-$2)}\ emission (rest frequency 345.796 GHz), based on the [Fe II] redshift $z_{\rm [FeII]}=0.184$ from \citet{Simpson99}. The second spectral window (spw1) was set adjacent to the first with $\sim300$ MHz overlap on the lower frequency side to accurately estimate the continuum. The sideband not including the {CO(3$-$2)}\ emission, with the two remaining spectral windows, was set at $\sim280$ GHz. Visibilities were calibrated by using the CASA 4.7.2 software \citep{McMullin07} in the pipeline mode and the default calibrators provided by the Observatory: bandpass calibrator J175132$+$093958 (band 7 flux 1.42$\pm$0.07 Jy), flux and phase calibrator J173302$-$130445 (band 7 flux 1.12$\pm$0.06 Jy), water vapour radiometer calibrator J173811$-$150300 (band 3 flux 0.11$\pm$0.01 Jy). The absolute flux accuracy is better than 10\%. To estimate the rest-frame $\sim340$ GHz continuum emission, we averaged the visibilities in the four spectral windows, excluding the spectral range covered by the {CO(3$-$2)}\ emission ($\sim1$ GHz). Moreover, to accurately model the continuum emission close to the {CO(3$-$2)}\ line, we performed a combined fit of only spw0 and spw1 in the UV plane. We did not include the lower sideband to avoid introducing systematics usually associated with the relative calibration of distant spectral windows. The relative flux calibration of spw0 and spw1 was verified for all calibrators and, for PDS~456, in the overlap range of the two spectral windows. The agreement of the continuum levels in the overlap region is better than 2\%. As the intrinsic QSO continuum variation across spw0 and spw1 is expected to be less than 1\%, we fitted a zero-order model in the UV plane to the continuum channels ($|v|>1000$ {km~s$^{-1}$}\ from the peak of the {CO(3$-$2)} emission line). A first-order polynomial fit to the continuum emission did not significantly change our results. We subtracted this fit from the global visibilities and created continuum-subtracted {CO(3$-$2)}\ visibilities. We investigated different cleaning procedures to produce the continuum-subtracted cube of {CO(3$-$2)}. We preferred the Högbom algorithm and the application of interactive cleaning masks for each channel and cleaning iteration.
Using the Clark cleaning algorithm does not significantly affect the properties of the QSO emission but increases the number of negative artefacts in the ALMA maps, while non-interactive cleaning (without masks) results in positive residuals at the location of the QSO. We chose natural weighting, a cleaning threshold equal to the rms per channel, a pixel size of 0.04 arcsec and a spectral resolution of $\sim30$ {km~s$^{-1}$}. The final beam size of the observation is ($0.23\times0.29$) arcsec$^2$ at position angle $\rm PA= -70$ deg. The 1$\sigma$ rms in the final cube is $\sim 0.083$ mJy beam$^{-1}$ for a channel width of 30 {km~s$^{-1}$}. By adopting the same deconvolution procedure as explained above, we obtain a continuum map with a synthetic beam of ($0.24\times0.30$) arcsec$^2$ and an rms of 9.6 $\mu$Jy beam$^{-1}$ in the aggregated bandwidth. We also produced a {CO(3$-$2)}\ data-cube with increased angular resolution by applying Briggs weighting to the visibilities in our ALMA observation with robust parameter $b=-0.5$, resulting in an ALMA beam size of $0.16\times0.19$ arcsec$^2$ and an rms sensitivity of $\sim 0.16$ mJy beam$^{-1}$ for a 30 {km~s$^{-1}$}\ channel width. \section{Results}\label{sect:results} \begin{figure}[htb] \caption{Continuum-subtracted spectrum of the {CO(3$-$2)}\ emission line in PDS~456, extracted from a circular region of 1 arcsec radius centred on the QSO position. Panel \textbf{\textit{(a)}} shows the integrated flux density as a function of velocity. The channel width is 30 {km~s$^{-1}$}. The inset \textbf{\textit{(b)}} shows a zoomed-in view of the high-velocity wings in the {CO(3$-$2)}\ line profile. } \label{fig:spectrum} \includegraphics[width=1\columnwidth]{images/PDS456-spectrum-tot-zoom.pdf} \end{figure} The continuum-subtracted, velocity integrated {CO(3$-$2)}\ emission in the velocity range $v \in$ [-160,160] {km~s$^{-1}$}\ of PDS~456\ is shown in Fig. \ref{fig:CO-cont-map}a. The contours of the 340 GHz continuum emission are also plotted. The peak of the {CO(3$-$2)}\ emission is located at RA (17:28:19.79 $\pm$ 0.01), Dec (-14:15:55.86 $\pm$ 0.01), consistent with the position of PDS~456\ based on VLA data (Y04). The {CO(3$-$2)}\ emission line is detected with a significance of $\sim350\sigma$ (with $\sigma= 0.013$ Jy beam$^{-1}$ {km~s$^{-1}$}). The bulk of the emission is located within $\sim1$ arcsec from the QSO, with some extended, fainter clumps located at larger distances, detected with a statistical significance of $\sim4\sigma$. Both the {CO(3$-$2)}\ and continuum emission are resolved by the ALMA beam in our observation. Specifically, a two-dimensional Gaussian fit of {CO(3$-$2)}\ in the image plane results in a deconvolved FWHM size of (0.28$\pm$0.02) $\times$ (0.25$\pm$0.02) arcsec$^2$, which corresponds to a physical size of $\sim0.9$ kpc. A fit of the continuum map gives a deconvolved FWHM size of (0.19$\pm$0.02) $\times$ (0.17$\pm$0.02) arcsec$^2$ and a flux density of $0.69\pm0.02$ mJy. In addition to PDS~456, three line emitters (CO-1, CO-2, CO-3) and three continuum emitters (Cont-1, Cont-2, Cont-3) are detected at $\gtrsim$5$\sigma$ in the ALMA primary beam ($\sim20$ arcsec), as displayed in Fig. \ref{fig:CO-cont-map}b. The proximity in sky frequency of the line emitters suggests that these are {CO(3$-$2)}\ emitters located at approximately the same redshift as the QSO.
A detailed analysis of the galaxy overdensity around PDS~456\ will be presented in a forthcoming paper (Piconcelli et al. 2019, in prep.). \begin{figure*}[htb] \centering \includegraphics[width = 0.515\textwidth]{images/mask_3sigma_contours.pdf} \includegraphics[width = 0.475\textwidth]{images/spec-diffuse.pdf} \caption{Panel \textit{\textbf{(a)}}: velocity integrated map of the blue-shifted ($v<-250$ {km~s$^{-1}$}) {CO(3$-$2)}\ emission obtained by integrating the emission detected at $>3\sigma$ in each 30 {km~s$^{-1}$}\ spectral channel, for at least four contiguous channels (i.e. over a velocity range of $\geq$ 120 {km~s$^{-1}$}). White contours show the systemic {CO(3$-$2)}\ emission (same as Fig. 1a). Panel \textit{\textbf{(b)}}: {CO(3$-$2)}\ spectra of the extended outflowing clumps A and B shown in panel (a), together with their best-fit multi-Gaussian component model. The spectrum extracted at the position of clump A (top panel), located at $\sim1.8$ kpc from the QSO, shows {CO(3$-$2)}\ emission centred at $v\sim0$ (systemic emission from the QSO, green curve) plus two components with blueshifted velocities of $v\sim-300$ and $\sim-700$ {km~s$^{-1}$}. The spectrum extracted at the position of clump B (bottom panel) shows no contamination from the {CO(3$-$2)}\ systemic emission, while blueshifted emission is detected at $v\sim-400$ {km~s$^{-1}$}. } \label{fig:mappa-broad} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=1\columnwidth]{images/moments_rotaxis.png} \caption{Panel \textit{\textbf{(a)}} shows the velocity map of the {CO(3$-$2)} emission detected at $\gtrsim5\sigma$ in the host galaxy of PDS~456, resolved with $\sim12$ ALMA beams (indicated by the grey ellipse). The main kinematic axis is indicated by the black line. Panel \textit{\textbf{(b)}} displays the velocity dispersion map, characterised by values of $\sigma\lesssim80$ {km~s$^{-1}$}.} \label{fig:core-moments} \end{figure} As detailed in Sect. \ref{sect:datared}, for an accurate study of the {CO(3$-$2)}\ emission line profile and estimation of the underlying continuum, we combine two adjacent spectral windows in order to exploit the largest possible spectral coverage (i.e. $\sim$ 3.8 GHz). In Fig. \ref{fig:spectrum} we present the continuum-subtracted spectrum of the {CO(3$-$2)}\ emission in PDS~456, extracted from a circular region of 1 arcsec radius. By fitting the line with a single Gaussian component, we measure a peak flux density S$_{\rm 3-2} = 63.6 \pm 0.7$ mJy and a FWHM $= 160\pm30$ {km~s$^{-1}$}. The line peak corresponds to a redshift $z_{\rm CO} = 0.1850\pm0.0001$, consistent with the CO(1$-$0) based redshift from Y04, but significantly larger than the value $z_{\rm [FeII]} = 0.1837\pm0.0003$ derived from the [FeII] emission line in the near-IR spectrum \citep{Simpson99}. We find a line brightness ratio S$_{\rm 3-2}/$S$_{\rm 1-0}\sim8$ (computed by using the $S_{\rm 1-0}$ flux density reported by Y04\footnote{We estimate a possible contamination to the CO(1$-$0) flux due to the companion sources CO-1 and CO-2 to be $\lesssim2$\%, once a CO excitation ladder similar to the Galactic one is assumed \citep{Carilli13}.}) in agreement with the CO excitation ladder typically found in QSOs \citep{Carilli13}. This suggests that our ALMA observation is able to recover the bulk of the {CO(3$-$2)}\ emission in PDS~456.
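As a quick numerical cross-check (not part of the analysis pipeline), the following Python sketch, which assumes only the adopted cosmology and the standard \citet{Solomon05} line-luminosity conversion, reproduces both the angular-to-physical scale quoted in Sect. 1 and the {CO(3$-$2)}\ line luminosity derived below:
\begin{verbatim}
# Sanity check of the angular scale and CO(3-2) line luminosity, assuming
# the adopted cosmology and the Solomon & Vanden Bout (2005) conversion;
# illustrative only, not the actual reduction/analysis code.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)
z = 0.185                                # CO-based redshift of PDS 456

# Physical scale (quoted as ~3.1 kpc/arcsec in the Introduction)
print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))

# L'_CO = 3.25e7 * S dv * nu_obs^-2 * D_L^2 * (1+z)^-3   [K km/s pc^2]
S_dv = 10.6                              # integrated intensity [Jy km/s]
nu_obs = 345.796 / (1 + z)               # observed CO(3-2) frequency [GHz]
D_L = cosmo.luminosity_distance(z).to(u.Mpc).value
L_co = 3.25e7 * S_dv * nu_obs**-2 * D_L**2 * (1 + z)**-3
print(f"L'_CO(3-2) ~ {L_co:.1e} K km/s pc^2")  # ~2e9, cf. 2.1e9 in the text
\end{verbatim}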
According to our observation setup, the largest recoverable scale is $2.2$ arcsec ($\sim$6.6 kpc), significantly larger than the size of the CO emission measured in local luminous infrared galaxies and QSOs \citep[e.g.][]{Bryant&Scoville99,Solomon05,Moser16}. We derive an integrated intensity S$\Delta$v$_{\rm 3-2} = 10.6 \pm 0.2$ Jy {km~s$^{-1}$}. This translates into a luminosity L$^\prime$CO$_{3-2}= 2.1\times10^9$ K {km~s$^{-1}$}\ pc$^2$ and a luminosity ratio L$^\prime$CO$_{3-2}/$L$^\prime$CO$_{1-0}\sim 0.85$. The line profile of the {CO(3$-$2)}\ emission exhibits a blue tail, indicating the presence of emitting material with velocities down to $\sim$ $-$1000 {km~s$^{-1}$}\ (see Fig. \ref{fig:spectrum}b), which we interpret as associated with outflowing gas. Conversely, no emission on the red side of {CO(3$-$2)}\ is detected at $v>600$ {km~s$^{-1}$}. The spatial resolution of our ALMA observation allows us to map the morphology of the outflow in great detail, as shown in Fig. \ref{fig:mappa-broad}a. Specifically, the outflow in PDS~456\ shows several components: a bright inner component located at a radial distance $R\lesssim1.2$ kpc, plus an extended component consisting of several clumps with different blueshifted bulk velocities located at radii $R\sim1.8-5$ kpc. \subsection{Extended outflow} Fig. \ref{fig:mappa-broad}a shows the velocity integrated map of the {CO(3$-$2)}\ clumps in the velocity range $v\in[-1000,-250]$ {km~s$^{-1}$}, obtained by integrating the emission detected at $>3\sigma$ in each 30 {km~s$^{-1}$}\ spectral channel, for at least four contiguous channels (i.e. over a velocity range $\geq$ 120 {km~s$^{-1}$}). The colour map shows that the outflowing gas is distributed in several clumps located at different distances from the QSO, up to $\sim5$ kpc, while the white contours refer to the quiescent molecular gas traced by the {CO(3$-$2)}\ core. Two examples of the clump spectra are given in Fig. \ref{fig:mappa-broad}b, showing that each clump emits over a typical range of $\sim200$ {km~s$^{-1}$}. Specifically, the {CO(3$-$2)}\ spectrum at the position of clump A (located at $\sim0.6$ arcsec = 1.8 kpc from the nucleus) is characterised by an emission component centred at $v\sim0$, which is associated with the quiescent gas. It also shows an excess of blue-shifted emission at $v\sim-300$ and $v\sim-750$ {km~s$^{-1}$}. By contrast, the spectrum of clump B at a larger separation ($\sim1.6$ arcsec = 5 kpc) lacks systemic emission but shows a blue-shifted component centred at $v\sim-350$ {km~s$^{-1}$}. We model the spectrum of each extended clump with multiple Gaussian components (also shown in Fig. \ref{fig:mappa-broad}b). \begin{figure}[htb] \centering \includegraphics[width=1\columnwidth]{images/pv-highres.png} \caption{Panel \textit{\textbf{(a)}} shows the position-velocity diagram, extracted from a 0.3 arcsec slit centred on the QSO location along the major kinematic axis (see Fig. \ref{fig:core-moments}a), corresponding to a PA of 165 deg (measured anti-clockwise from north). Black contours refer to the [2, 3, 4,...2$^n$]$\sigma$ significance levels of the {CO(3$-$2)}\ emission, with $\sigma = 0.083$ mJy beam$^{-1}$ and $n>2$. The contours associated with the best-fit $^{\rm 3D}$BAROLO model of the kinematics are shown in blue.
Panels \textit{\textbf{(b)}} and \textit{\textbf{(c)}} are a zoom-in of the velocity range $v\in[-500,+600]$ {km~s$^{-1}$}\ with increased angular resolution ($0.16\times0.19$ arcsec$^2$), extracted along and perpendicular to the major kinematic axis, respectively. Contours are as in the top panel, with $\sigma = 0.16$ mJy beam$^{-1}$.} \label{fig:pv} \end{figure} \begin{figure}[htb] \centering \includegraphics[width = 1\columnwidth]{images/resspec-3c.pdf} \caption{{CO(3$-$2)}\ spectrum of the central $1\times1$ arcsec$^2$ region centred on PDS~456. Data are shown in grey together with the total best-fit model resulting from the pixel-by-pixel decomposition in the $v\in[-500,+600$] {km~s$^{-1}$}\ range, obtained by adding the best-fit models of each pixel. Yellow and green histograms indicate the two rotation components used to model the {CO(3$-$2)}\ core, while the emission from H$^{13}$CN at $v\sim+390$ {km~s$^{-1}$}\ is shown in purple. The red histogram represents the best-fit of the low-velocity outflow component. The red shaded area represents the emission remaining after the subtraction of the rotation and H$^{13}$CN components, which we consider to be associated with the outflow; it indicates the presence of a blue-shifted, high-velocity ($v\sim-800$ {km~s$^{-1}$}) outflow component.} \label{fig:decomposition} \end{figure} For a single high-velocity clump, the mass outflow rate is computed as: \begin{equation}\label{eq:mdot} \dot{M}_{\rm mol} = \frac{M_{\rm mol}^{\rm out}\times v_{\rm 98}}{R} \end{equation} where $M_{\rm mol}^{\rm out}$ is the clump mass, $v_{\rm 98}$ is the velocity enclosing 98\% of the cumulative velocity distribution of the outflowing gas, and $R$ is the projected distance of the clump from the QSO. $M_{\rm mol}^{\rm out}$ is derived from the {CO(3$-$2)}\ luminosity of the clumps detected at $>3\sigma$ in the velocity range $v\in[-1000,-250]$ {km~s$^{-1}$}. We use the $L^\prime$CO$_{3-2}/$L$^\prime$CO$_{1-0}$ luminosity ratio measured for the systemic emission, and a luminosity-to-gas-mass conversion factor $\alpha_{\rm CO} = 0.8$ {$M_\odot$}\ (K {km~s$^{-1}$}\ pc$^2$)$^{-1}$, typical of star-forming QSO hosts (see Sect. 4 for further details). By adding together the contribution of all clumps, we estimate the outflowing molecular gas mass, the mass outflow rate and the momentum flux of the extended outflow detected in PDS~456\ (see Table \ref{tab:outflow_prop}). \subsection{Central outflow}\label{sect:central} Fig. \ref{fig:core-moments} shows the velocity and velocity dispersion map of the {CO(3$-$2)}\ emission of the inner 1 arcsec$^2$ region. The latter is resolved into about 12 independent beams, which allows us to detect a projected velocity gradient in approximately the north-west to south direction with a rather small total range ($v\in[-50,+40]$ {km~s$^{-1}$}). Emission with a flat velocity distribution around $v=0$ {km~s$^{-1}$}\ smears the velocity gradient in an arc-like region extending from the QSO position to $\sim[+0.3,-0.3]$ arcsec. The maximum of the velocity dispersion is observed in the central region ($\sigma_{\rm vel}\sim80$ {km~s$^{-1}$}), where beam-smearing effects are more prominent \citep{Davies11,Tacconi13}. A more reliable estimate of $\sigma_{\rm vel}$ is provided by the average $\sigma_{\rm vel}\sim40-50$ {km~s$^{-1}$} in an annulus with $0.2<R<0.4$ arcsec. The kinematics in the central region of PDS~456\ is more complex than that of a rotating disk, as further supported by the position-velocity diagram shown in Fig.
\ref{fig:pv}a, extracted along the maximum velocity gradient direction from a 0.3 arcsec slit. A rotation pattern can be identified, with a velocity gradient $\Delta v_{\rm blue-red}\sim200$ {km~s$^{-1}$}, which is modified by the presence of an excess of emission due to gas with velocity $v\in[-1000,+600]$ {km~s$^{-1}$}\ roughly centred at the position of the QSO. This appears more evidently in Fig. \ref{fig:pv}b,c showing zoom-in position-velocity diagrams of the $v\in[-500,+600]$ {km~s$^{-1}$}\ velocity range with an increased angular resolution of $0.16\times0.19$ arcsec$^2$ (see Sect. \ref{sect:datared}), extracted along and perpendicular to the major kinematic axis direction, respectively. We fit a 3D tilted-ring model with $^{\rm 3D}$BAROLO \citep{DiTeodoro15} to the data to provide a zeroth-order description of the kinematics. We exclude from the fit the region with an angular separation $\lesssim0.15$ arcsec from the nucleus, where the high-velocity gas perturbs the kinematics. This results in an inclination $i=25\pm10$ deg, consistent with the value of $\sim25$ deg derived from the projected axes ratio, and an intrinsic circular velocity $v_{\rm rot}=1.3\times\Delta v_{\rm blue-red}/(2\sin i)\sim280$ {km~s$^{-1}$}\ \citep[e.g.][]{Tacconi13}. The implied virial dynamical mass is $M_{\rm dyn}={Dv_{\rm rot}^2/2G}\sim1.0\times10^{10}$ {$M_\odot$}, where $D\sim1.3$ kpc is the source size estimated as 1.5$\times$ the deconvolved major axis of the {CO(3$-$2)}\ emission. A comparable value, i.e. $M_{\rm dyn}(i = 25\ \rm deg) \sim 1.2\times10^{10}$ {$M_\odot$}\ is derived by using the relation $M_{\rm dyn} = 1.16\times10^{5}\times(0.75\times {\rm FWHM}/\sin i)^2 \times D$ \citep{Wang13,Bischetti18}. Using the inferred dynamical mass we derive an escape velocity from the central 1.3 kpc of PDS~456\ of $\sim280$ {km~s$^{-1}$}. By subtracting the $^{\rm 3D}$BAROLO model from the ALMA cube we find that strong ($\sim8$ \% of the total {CO(3$-$2)} flux) positive residuals are present in the velocity range $v\in[-500,+600]$ {km~s$^{-1}$}. It is likely that these residuals are due to an inner emission component associated with the outflow described in Sect. 3.1. Therefore, we perform an accurate modelling of the spectrum of the central region to better disentangle the contribution provided by the outflow from the total {CO(3$-$2)}\ emission. Specifically, we use a pixel-by-pixel spectral decomposition in the range $v\in[-500,+600]$ {km~s$^{-1}$}\ with a combination of four Gaussian components to model: (a) the disk rotation (two components\footnote{The normalisation of the first component is initially set to the peak of the emission in each pixel, while that of the second component is set to be 1/10 of the first one.}, needed to account for the partially resolved velocity distribution, i.e. nearby emitting regions with different rotation velocities within the ALMA beam); (b) the outflow (one component with $\sigma>90$ {km~s$^{-1}$}, i.e. above the maximum value measured in the velocity dispersion map of the {CO(3$-$2)}\ emission); (c) the possible contamination by H$^{13}$CN(4$-$3) emission (rest frequency $\nu_{\rm rest} = 345.34$ GHz) to the red wing of the {CO(3$-$2)}\ emission line \citep{Sakamoto10}. This component has a fixed velocity offset of $+390$ {km~s$^{-1}$}, corresponding to the spectral separation between H$^{13}$CN and {CO(3$-$2)}, and a line width equal to or larger than that of the main rotation component. Fig.
\ref{fig:decomposition} shows the spectrum of the $1\times1$ arcsec$^2$ central region with the different components, obtained by adding together the best-fit models from the pixel-by-pixel fit. We then subtract from the total spectrum the components due to disk rotation and H$^{13}$CN. The residuals (red histogram in Fig. \ref{fig:decomposition}) may be associated with emission from outflowing gas. It is worth noting that a spectral decomposition without the outflow component (i.e. maximising the contribution from the emission due to rotation) is able to account for at most $\sim$50\% of these residuals. The bulk of this emission is due to a low-velocity ($|v|\lesssim500$ {km~s$^{-1}$}) component. Maps of the integrated flux density, velocity and velocity dispersion of this low-velocity emission component are shown in Fig. \ref{fig:outflow-moments}. This emission peaks at an offset of $\sim0.05$ arcsec ($\sim160$ pc) west from the QSO position (marked by a cross). After deconvolution from the beam, the low-velocity outflow has a total projected physical scale of $\sim2.4$ kpc. A fraction of $\sim40$ \% of this emission is unresolved in the present ALMA observation. A velocity gradient is detected along the east-west direction (see Fig. \ref{fig:outflow-moments}b), i.e. roughly perpendicular to the north-south gradient in the velocity map of the total {CO(3$-$2)}\ emission (see Fig. \ref{fig:core-moments}a). This molecular gas is also characterised by a high velocity dispersion (see Fig. \ref{fig:outflow-moments}c), with a peak value of $\sigma\sim360$ {km~s$^{-1}$}\ and an average \citep{Davies11, Tacconi13} $\sigma\sim200$ {km~s$^{-1}$}, suggesting highly turbulent gas close to the nucleus. All these pieces of evidence, in combination with the position-velocity diagram shown in Fig. \ref{fig:pv}, strongly suggest the presence of molecular gas whose kinematics is associated with an outflow. Beyond the velocity range $v\in[-500,+600]$ {km~s$^{-1}$}\ covered by the spectral decomposition mentioned above, the {CO(3$-$2)}\ spectrum of the central $1\times1$ arcsec$^2$ region exhibits an excess of blue-shifted emission between $-500$ and $-1000$ {km~s$^{-1}$}\ (see Fig. \ref{fig:decomposition}). This high-velocity component can be modelled with a Gaussian line centred at $-800\pm80$ {km~s$^{-1}$}, with flux density $0.25\pm0.08$ mJy and $\sigma=180\pm70$ {km~s$^{-1}$}, and is visible in Fig. \ref{fig:mappa-broad}a at the position of the QSO. Based on its large velocity, this emission can also be associated with the molecular outflow in PDS~456. Accordingly, the red shaded area in Fig. \ref{fig:decomposition} represents the combination of the low- and high-velocity components for which we measure the outflow parameters (i.e. outflow mass, mass outflow rate and momentum flux) listed in Table \ref{tab:outflow_prop}. To avoid any possible contamination from H$^{13}$CN we exclude the spectral region in the range $v\in[310,560]$ {km~s$^{-1}$}. As the central outflow is only marginally resolved by our ALMA observation, we infer its {$\dot{M}_{\rm mol}$}\ by considering the simple scenario of a spherically/biconically symmetric, mass-conserving flow with constant velocity and uniform density up to $R\sim1.2$ kpc (Fig. \ref{fig:outflow-moments}), similarly to the geometry assumed for the molecular outflows detected in other luminous AGN, e.g. \citet{Vayner17,Feruglio17,Brusa18}. This corresponds to multiplying the {$\dot{M}_{\rm mol}$}\ value inferred from Eq. \ref{eq:mdot} by a factor of three.
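To make the use of Eq. \ref{eq:mdot} and of this geometrical correction explicit, a minimal sketch follows; the outflow velocity adopted here ($v_{98}\sim500$ {km~s$^{-1}$}) is an illustrative value of the order of the measured outflow velocities, not the exact figure entering Table \ref{tab:outflow_prop}:
\begin{verbatim}
# Mass outflow rate of the central component: Eq. (1) times a factor 3 for
# the assumed spherical/biconical, mass-conserving geometry. v98 ~ 500 km/s
# is an illustrative value, not the exact one used in the analysis.
import astropy.units as u

M_out = 1.5e8 * u.Msun           # outflowing gas mass (Table 1, central)
v98 = 500 * u.km / u.s           # velocity enclosing 98% of the outflow
R = 1.2 * u.kpc                  # outflow radius

Mdot = 3 * (M_out * v98 / R).to(u.Msun / u.yr)
print(Mdot)                      # ~190 Msun/yr, cf. ~180 in Table 1
\end{verbatim}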
Alternative outflow models considering a time-averaged thin shell geometry \citep[e.g.][]{Cicone15,Veilleux17} or a density profile scaling as $R^{-2}$ \citep{Rupke05} predict instead a mass outflow rate consistent with the value of {$\dot{M}_{\rm mol}$}\ derived from Eq. \ref{eq:mdot}. \begin{figure*}[htb] \centering \includegraphics[width = 1\textwidth]{images/outflow_moments.png} \caption{Intensity \textit{\textbf{(a)}}, velocity \textit{\textbf{(b)}} and velocity dispersion \textit{\textbf{(c)}} maps of the central, low-velocity outflow component resulting from the pixel-by-pixel decomposition of the {CO(3$-$2)}\ spectrum in the velocity range $v\in[-500,+600]$ {km~s$^{-1}$}\ (red histogram in Fig. \ref{fig:decomposition}). The ALMA beam is shown by the grey ellipse, while the black cross indicates the QSO position. } \label{fig:outflow-moments} \end{figure*} \begin{table*}[htb] \centering \setlength{\tabcolsep}{3pt} \caption{Main parameters of the molecular outflow detected in PDS~456. The outflowing gas mass, the mass outflow rate and the momentum flux of the outflow (computed by using an $\alpha_{\rm CO}=0.8$ {$M_\odot$}(K {km~s$^{-1}$}\ pc$^2$)$^{-1}$) are indicated for the different outflow components identified in the data. Brackets indicate the range of variation of each parameter considering 1$\sigma$ statistical errors and systematics associated with the spectral decomposition and uncertainty in the outflow physical size. } \begin{tabular}{lcccccccc} \toprule Outflow component & $R$ & & {$M_{\rm mol}^{\rm out}$}\ & {$\dot{M}_{\rm mol}$}\ & {$\dot{P}_{\rm mol}$}\ \\ & [kpc] & & [$10^8$ {$M_\odot$}] & [{$M_{\odot}$~yr$^{-1}$}] & [10$^{35}$ g cm s$^{-2}$] \\ \midrule Extended & 1.8$-$5 & & $0.78[0.72-0.84]$ & $50[45-55]$ & $2.1[1.9-2.3]$ \\ Central & $\lesssim1.2$ & $\left\{ \begin{tabular}{l} $v\in[-500,+600]$ km s$^{-1}$\\ $v<-500$ km s$^{-1}$ \end{tabular}\right.\kern-\nulldelimiterspace$ & \begin{tabular}{c} $1.5[0.74^a-1.7]$ \\ $0.21[0.15^a-0.27]$ \end{tabular} & \begin{tabular}{c} $180[90-530^b]$ \\ $60[40-180^b]$ \end{tabular} & \begin{tabular}{c} $5.5[2.8-16^b]$ \\ $4.4[3.1-13^b]$ \end{tabular} \\ \midrule Total & & & $2.5[1.6-2.8]$ & $290[180-760]$ & $12.0[7.8-32]$\\ \bottomrule \end{tabular} \flushleft \small $^{a}$ The lower limit on {$M_{\rm mol}^{\rm out}$}\ is computed from the residuals obtained by subtracting from the total CO spectrum a best-fit pixel-by-pixel spectral decomposition that includes only disk rotation and H$^{13}$CN emission.\\ $^{b}$ The upper limits on {$\dot{M}_{\rm mol}$}\ and {$\dot{P}_{\rm mol}$}\ are derived assuming that the unresolved fraction ($\sim40$ \%) of the central outflow component has a minimum size of 160 pc. This value corresponds to $\sim1/4$ of the mean beam axis and to the spatial offset measured between the peaks of the central outflow and the total {CO(3$-$2)}\ emission (Sect. \ref{sect:central}). \label{tab:outflow_prop} \end{table*} \begin{figure*} \centering \includegraphics[width = 1\textwidth]{images/scalings_parabolic_twoparts.pdf} \caption{\textit{\textbf{(a):}} Mass outflow rate as a function of {$L_{\rm Bol}$}\ for PDS~456\ (red star) and a compilation of AGN with outflow detection from \citet{Fiore17} and \citet{Fluetsch19}, \citet{Zchaechner16, Feruglio17,Querejeta17,Vayner17,Veilleux17,Brusa18,Longinotti18,HerreraCamus19}. The blue (green) dashed line shows the best-fit parabolic function for the molecular (ionised) phase, while the shaded area indicates the rms scatter of the data from the relation.
\textit{\textbf{(b):}} Ratio $\mu=\dot{M}_{\rm ion}/\dot{M}_{\rm mol}$ (black solid curve) inferred from the best-fit relations in panel (a), as a function of {$L_{\rm Bol}$}. The shaded area represents the uncertainty on $\mu$, given the scatter of these relations. Data points indicate the position in the $\mu-L_{\rm Bol}$ plane of the AGN with available molecular and ionised mass outflow rates \citep{Vayner17,Brusa18,HerreraCamus19}. We also include the sources with AGN contribution to {$L_{\rm Bol}$}\ $>10$ \% from \citet{Fluetsch19}. } \label{fig:scalings} \end{figure*} \section{Discussion} The ALMA observation of the {CO(3$-$2)}\ emission line in PDS~456\ reveals high-velocity molecular gas which traces a clumpy molecular outflow, extended out to $\sim$ 5 kpc from the nucleus in this hyper-luminous QSO. The molecular outflow discovered in PDS~456\ is the first reported for a radio-quiet, non-lensed QSO in the poorly-explored brightest end ($L_{\rm Bol}\gtrsim10^{47}$ {erg~s$^{-1}$}) of the AGN luminosity distribution. The total mass of the outflowing molecular gas is {$M_{\rm mol}^{\rm out}$}\ $\sim2.5\times 10^{8}$ {$M_\odot$}\ (for an $\alpha_{\rm CO}=0.8$ {$M_\odot$}(K {km~s$^{-1}$}\ pc$^2$)$^{-1}$), of which $\sim$ 70 \% is located within the $\sim$ $2\times2$ kpc$^2$ inner region. We stress that the high spatial resolution of our ALMA observation has been crucial to disentangle the outflow-related emission from the dominant emission of the quiescent gas. The ratio between {$M_{\rm mol}^{\rm out}$}\ and the total molecular gas mass for PDS~456\ is $\sim12$\%, which is comparable to ratios typically measured for other molecular outflows \citep[$\sim10-20$\%, e.g.][]{Feruglio13a,Cicone14,Feruglio15,Brusa18}. We note that the estimate of the molecular gas masses strongly depends on the assumed $\alpha_{\rm CO}$. Given that (i) PDS~456\ exhibits a {$L_{\rm Bol}$}\ comparable to that of high-z QSOs with available CO measurements \citep{Carilli13} and (ii) the host galaxy of PDS~456\ shows both a compact size and a SFR comparable to local luminous infrared galaxies, we expect an $\alpha_{\rm CO}\sim0.8$ {$M_\odot$} (K {km~s$^{-1}$}\ pc$^2$)$^{-1}$ in the central region, where the bulk of the outflowing gas lies \citep{Downes&Solomon93,Bolatto13}. Similarly to \citet{HerreraCamus19}, we adopt the same conversion factor also for the extended outflow. We note that the {CO(3$-$2)}\ emission in the extended outflow clumps may become optically thin because of the large velocity dispersion \citep{Bolatto13}. This would imply a lower $\alpha_{\rm CO}\sim0.34$ {$M_\odot$} (K {km~s$^{-1}$}\ pc$^2$)$^{-1}$ and, in turn, a smaller mass of the extended outflow by a factor of $\sim2.5$. On the other hand, assuming an $\alpha_{\rm CO}\sim2$ {$M_\odot$} (K {km~s$^{-1}$}\ pc$^2$)$^{-1}$ as recently derived for the extended neutral outflow in NGC6240, i.e. a merging LIRG hosting two AGNs \citep{Cicone18}, would imply a larger total mass of the outflowing gas by a factor of $\sim2.5$. By adding together the mass outflow rates of the inner and outer outflow components discovered by ALMA in PDS~456, we find a total {$\dot{M}_{\rm mol}$}\ $\sim290$ {$M_{\odot}$~yr$^{-1}$}. This translates into a depletion timescale $\tau_{\rm dep} = M_{\rm mol}/\dot{M}_{\rm mol}\sim8$ Myr for the molecular gas reservoir in PDS~456, suggesting a potential quenching of the star formation within a short time.
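This timescale follows directly from the measured quantities (a trivial consistency check):
\begin{verbatim}
# Depletion timescale of the molecular gas reservoir, tau = M_mol/Mdot_mol,
# using the values measured in this work.
M_mol = 2.5e9       # total molecular gas mass [Msun]
Mdot_mol = 290.0    # total mass outflow rate [Msun/yr]
print(f"tau_dep ~ {M_mol / Mdot_mol / 1e6:.1f} Myr")  # ~8.6 Myr
\end{verbatim}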
Such a $\tau_{\rm dep}$ is comparable to the Salpeter time for the mass growth rate of the SMBH in PDS~456\ and close to the typical QSO lifetime, indicating that this system will likely evolve into a passive galaxy with a dormant SMBH at its centre. Moreover, by including the measured rest-frame $\sim1$ mm continuum emission in a broad-band, UV-to-FIR fitting of the spectral energy distribution in PDS~456, we are able to measure a SFR$\sim30-80$ {$M_{\odot}$~yr$^{-1}$}\ in the QSO host galaxy (Vignali, in prep.). In this process we avoid the contamination from the companions, which account for the bulk of the FIR luminosity derived from previous, low-resolution observations, which provided only upper limits in the FIR range above 100 $\mu$m \citep{Yun04}. This implies that $\tau_{\rm dep}$ is a factor of $\sim4-10$ shorter than the time needed for the molecular gas to be converted into stars ($\tau_{\rm SF}$), indicating that the detected outflow is potentially able to affect the evolution of the host galaxy of PDS~456. A value of $\tau_{\rm dep}< \tau_{\rm SF}$ has been similarly observed for other molecular outflows observed in AGN \citep[e.g.][]{Cicone14,Veilleux17,Brusa18,HerreraCamus19}. Given the large uncertainties both on $\tau_{\rm dep}$ and $\tau_{\rm SF}$, it is not possible to exclude a starburst contribution to the outflow acceleration unless $\tau_{\rm dep} \ll \tau_{\rm SF}$. However, for a given far-infrared luminosity, the estimate of the SFR depends on the assumed initial mass function (IMF) and star formation history. The SFR in the host galaxy of PDS~456\ is estimated assuming a continuous star formation burst of $10-100$ Myr and a Salpeter IMF \citep{Kennicutt98}, for solar metallicity. A different IMF (i.e. Larson or Chabrier) would translate into a smaller SFR by a factor of $\sim2$ \citep{Conroy09,Valiante14} and, hence, a larger $\tau_{\rm SF}$. Fig. \ref{fig:scalings}a shows the mass outflow rate {$\dot{M}_{\rm of}$}\ as a function of {$L_{\rm Bol}$}\ for PDS~456\ and a compilation of molecular and ionised AGN-driven outflows from \citet{Fiore17}. We also include the molecular outflows recently revealed in CO emission by \citet{Zchaechner16, Feruglio17, Querejeta17, Veilleux17, Brusa18, Longinotti18} and those detected in both the molecular and ionised phase by \citet{Vayner17} and \citet{HerreraCamus19}, which have been identified as AGN-driven. In addition to these outflows, we consider those reported by \citet{Fluetsch19} in systems where the AGN contributes to $\gtrsim20$ \% of {$L_{\rm Bol}$}, and that discovered by \citet{Pereira-Santaella18} in IRAS 14348$-$1447, for which the PA and the high ($\sim$ 800 {km~s$^{-1}$}) velocity of the outflow suggest an AGN origin, for a total of 23 (60) molecular (ionised) outflows. To minimise the systematic differences from sample to sample, all values have been recomputed from the tabulated data according to the same assumptions, following Eq. B.2 by \citet{Fiore17}. Nevertheless, some scatter between various samples may still be present because of the different assumptions in the literature on $\alpha_{\rm CO}$ and, therefore, on the outflow mass. This updated compilation allows us to populate the luminosity range above $10^{46}$ {erg~s$^{-1}$}, poorly sampled by both the \citet{Fiore17} and \citet{Fluetsch19} samples.
The relation for the molecular mass outflow rates as a function of {$L_{\rm Bol}$}\ by \cite{Fiore17} predicts values of {$\dot{M}_{\rm mol}$}\ much larger than those measured for the sources with $L_{\rm Bol}>10^{46}$ {erg~s$^{-1}$}. Accordingly, in order to model a likely flattening of the relation between the molecular mass outflow rate and {$L_{\rm Bol}$}\ in this high-luminosity range, we fit the molecular data with a parabolic function defined as: \begin{equation}\label{eq:scaling} {\rm Log}\left(\frac{\dot{M}}{M_{\odot}\,{\rm yr}^{-1}}\right) = \alpha\times {\rm Log}^2\left(\frac{L_{\rm Bol}}{L_0}\right) + \beta\times {\rm Log}\left(\frac{L_{\rm Bol}}{L_0}\right) + \gamma \end{equation} The best-fit relation is given by $\alpha_{\rm mol} = -0.11$, $\beta_{\rm mol} = 0.80$, $\gamma_{\rm mol} = 1.78$ and $L_{\rm 0, mol}=10^{44.03}$ {erg~s$^{-1}$}, with an associated scatter of $\sim0.37$ dex, computed as the rms of the molecular data points with respect to the relation. Our modelling suggests a molecular mass outflow rate $\dot{M}_{\rm mol}\sim1000$ {$M_{\odot}$~yr$^{-1}$}\ for {$L_{\rm Bol}$}\ in the range $10^{46}-10^{48}$ {erg~s$^{-1}$}. By fitting the ionised data with Eq. \ref{eq:scaling}, we find $\alpha_{\rm ion} = -0.21$, $\beta_{\rm ion} = 1.26$, $\gamma_{\rm ion} = 2.14$ and $L_{\rm 0, ion}=10^{46.07}$ {erg~s$^{-1}$}, and an rms scatter of 0.91 dex. According to our best-fit relation, the ionised mass outflow rate $\dot{M}_{\rm ion}$ keeps increasing up to $L_{\rm Bol}\sim10^{48}$ {erg~s$^{-1}$}. \begin{figure*}[htb] \centering \includegraphics[width = 1\textwidth]{images/pmol-prad.pdf} \caption{Ratio between the outflow (molecular or UFO) momentum flux and the radiative momentum flux as a function of the outflow velocity. Star = PDS~456; Blue symbols = AGN with {$L_{\rm Bol}$}$<10^{46}$ {erg~s$^{-1}$}\ \citep{Feruglio15, Longinotti18}; red symbols = AGN with {$L_{\rm Bol}$}$\sim10^{46}-10^{48}$ {erg~s$^{-1}$}\ \citep[][and this work]{Tombesi15,Veilleux17,Feruglio17}. Filled symbols = molecular outflows; open symbols = UFOs. The dashed line is the expectation for a momentum-driven outflow. The dot-dashed line represents the prediction for an energy-driven outflow with $\dot{P}_{\rm mol}/\dot{P}_{\rm rad} = v_{\rm UFO}/v$. The solid lines show the expected $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ for a given luminosity and for different $\mu(L) = \dot{M}_{\rm ion}/\dot{M}_{\rm mol}$. The red (blue) shaded area shows the uncertainty on $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ at $L_{\rm Bol}\sim10^{47}(10^{45})$ {erg~s$^{-1}$}, given the rms scatter on $\dot{M}_{\rm ion}$ according to Eq. \ref{eq:scaling}.} \label{fig:energetics} \end{figure*} Fig. \ref{fig:scalings}b shows the ratio between the two parabolic functions described above which reproduce the ionised and molecular mass outflow rate trends with {$L_{\rm Bol}$}, i.e. $\mu(L_{\rm Bol})=\dot{M}_{\rm ion}/\dot{M}_{\rm mol}$, in the luminosity range $L_{\rm Bol}\in[10^{43}-10^{48}]$ {erg~s$^{-1}$}. Similarly to what was previously noted by \citet{Fiore17}, we find that, although with a large scatter of about one order of magnitude, the ratio $\mu$ increases with {$L_{\rm Bol}$}. The mean expected value varies between $\mu\sim10^{-3}$ at {$L_{\rm Bol}$}\ $\sim10^{44}$ {erg~s$^{-1}$}\ and $\mu\sim1$ at {$L_{\rm Bol}$}\ $\sim10^{47}$ {erg~s$^{-1}$}, suggesting a comparable contribution of the molecular and ionised gas phases to the outflow in PDS~456.
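For clarity, the sketch below evaluates Eq. \ref{eq:scaling} with the best-fit coefficients reported above and derives the corresponding $\mu(L_{\rm Bol})$; it is a direct transcription of the fitted relations, not an independent analysis:
\begin{verbatim}
import numpy as np

def log_mdot(L_bol, alpha, beta, gamma, logL0):
    # Eq. (2): log10 of the mass outflow rate [Msun/yr]; L_bol in erg/s
    x = np.log10(L_bol) - logL0
    return alpha * x**2 + beta * x + gamma

# Best-fit coefficients reported in the text
mol = dict(alpha=-0.11, beta=0.80, gamma=1.78, logL0=44.03)
ion = dict(alpha=-0.21, beta=1.26, gamma=2.14, logL0=46.07)

for L in (1e44, 1e47):
    mdot_mol = 10**log_mdot(L, **mol)
    mdot_ion = 10**log_mdot(L, **ion)
    print(f"L={L:.0e}: Mdot_mol~{mdot_mol:.0f} Msun/yr, "
          f"mu~{mdot_ion / mdot_mol:.1e}")
# mu ~ 1e-3 at 1e44 erg/s and ~1 at 1e47 erg/s, as discussed above.
\end{verbatim}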
In our analysis we do not take into account the contribution of the neutral atomic gas phase to the total mass outflow rate. However, for the few moderate-luminosity AGN with spatially-resolved measurements of the outflow in both the molecular and neutral gas phase, the latter seems to represent a fraction $\lesssim 30\%$ of the molecular one \citep[e.g.][]{Rupke&Veilleux13, Rupke17, Brusa18}. In Fig. \ref{fig:scalings}b the sources with combined detection of the outflow in the ionised and molecular gas phases are also shown. In contrast to \citet{Fluetsch19}, who explored the luminosity range $L_{\rm Bol}\in[10^{44}-10^{46}]$ {erg~s$^{-1}$}\ finding an anti-correlation between $\mu$ and {$L_{\rm Bol}$}, we observe that a positive trend is likely present over a wider luminosity range, although the limited statistics \citep[three more objects with respect to][]{Fluetsch19} does not allow us to draw any firm conclusion on this. However, we note that values of $\mu$ close to or even larger than unity have already been reported for moderately-luminous QSOs \citep[e.g.][]{Vayner17,Brusa18}, suggesting that sources with comparable molecular and ionised mass outflow rates may span a wide range of {$L_{\rm Bol}$}. In Fig. \ref{fig:energetics} we plot the outflow momentum boost {$\dot{P}_{\rm of}$}/$\dot{P}_{\rm rad}$, where $\dot{P}_{\rm rad}=L_{\rm Bol}/c$, as a function of the outflow velocity\footnote{Similarly to what has been done for the molecular outflows, all $\dot{P}_{\rm UFO}$ have been homogeneously recomputed according to Eq. 2 in \citet{Nardini18}.}. This plot has often been used to compare different models of energy transfer between UFOs and galaxy-scale outflows, assuming that most of the outflow mass is carried by the molecular phase, i.e. {$\dot{P}_{\rm of}$}\ $\sim$ {$\dot{P}_{\rm mol}$}\ \citep{Tombesi15,Feruglio15}. This may not be true, especially in the high {$L_{\rm Bol}$}\ regime, as in the case of PDS~456\ (Fig. \ref{fig:energetics}). The ratio {$\dot{P}_{\rm mol}$}/$\dot{P}_{\rm rad}$ $\sim0.36$ estimated using CO for the galaxy-scale outflow in PDS~456\ is significantly smaller than those measured in other AGN, typically showing {$\dot{P}_{\rm mol}$}/$\dot{P}_{\rm rad}$ $\sim$ $5-50$. Interestingly, it is of the order of the $\dot{P}_{\rm UFO}$/$\dot{P}_{\rm rad}$ found by \citet{Nardini15,Luminari18}. The dot-dashed line indicates the expected {$\dot{P}_{\rm of}$}/$\dot{P}_{\rm rad}$ for an energy-conserving expansion assuming that most of the outflow mass is traced by the molecular phase. As suggested by Fig. \ref{fig:scalings}, this is likely not the case in the high {$L_{\rm Bol}$}\ regime, where the ionised outflow can be as massive as the molecular one. We thus probably detect in the molecular phase only a fraction of the total outflowing mass in PDS~456. Therefore, when comparing the expectation for the energy-conserving scenario with the results of ALMA observations we need to take into account that using the molecular phase alone to estimate the outflow mass may lead us to underestimate the total mass of the outflow (i.e. the y-position of the red star marking PDS~456\ in Fig. \ref{fig:energetics} should be considered as a lower limit). We thus derive an empirical estimate of the molecular momentum flux $\dot{P}_{\rm mol}$ based on the scaling relations given by Eq. \ref{eq:scaling}.
Specifically, the ratio between the total momentum flux of the large-scale outflow and that of the UFO for an energy-conserving expansion is related to the UFO and outflow velocities ($v_{\rm UFO}$ and $v_{\rm of}$) by the following relation: \begin{equation} \frac{\dot{P}_{\rm of}}{\dot{P}_{\rm UFO}} = \frac{v_{\rm UFO}}{v_{\rm of}} \end{equation} that, by assuming $\dot{P}_{\rm of}\sim\dot{P}_{\rm mol}+\dot{P}_{\rm ion}$, translates into a ratio $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ given by: \begin{equation}\label{eq:pdot} \frac{\dot{P}_{\rm mol}}{\dot{P}_{\rm rad}} = \frac{v_{\rm UFO}}{v_{\rm mol}}\times\frac{1}{1+k\times\mu(L_{\rm Bol})} \end{equation} where $v_{\rm mol}$ is the velocity of the molecular outflow and $k = v_{\rm ion}/v_{\rm mol}$ is the ratio between the velocity of the ionised outflow and $v_{\rm mol}$. For our calculations, we assume $v_{\rm mol}\sim1000$ {km~s$^{-1}$}\ and $k\sim2$ \citep{Fiore17}. Solid lines plotted in Fig. \ref{fig:energetics} represent the relations inferred from Eq. \ref{eq:pdot} for a luminosity of $L_{\rm Bol}\sim10^{45}$ and $\sim10^{47}$ {erg~s$^{-1}$}, respectively. We note that for AGN at relatively low luminosity (such as Mrk 231 and IRAS 17020$+$4544) the relation has a similar slope to the classic energy-conserving model, for which $\mu(L_{\rm Bol})\ll1$, because the bulk of the outflowing mass is due to molecular gas. Conversely, for hyper-luminous AGN, the empirical relation for $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ is less steep, as expected when $\mu(L_{\rm Bol})$ increases. This effect reduces the discrepancy between the observed $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ and the expectation for a "luminosity-corrected" energy-conserving scenario. So far, there is no available observation of the outflow in the ionised gas phase for these hyper-luminous sources. However, it is interesting to note that a massive ionised outflow characterised by a $\dot{M}_{\rm ion}\gtrsim10^3$ {$M_{\odot}$~yr$^{-1}$}, as inferred from Eq. \ref{eq:scaling} at such high luminosities, would be required to fit the measured $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ in IRAS F11119$+$3257 and APM 08279$+$5255 (red shaded area in Fig. \ref{fig:energetics}). Remarkably, in the case of PDS~456, even a $\dot{M}_{\rm ion}$ as large as $10^4$ {$M_{\odot}$~yr$^{-1}$}\ (i.e. the maximum value allowed for an ionised outflow by Eq. \ref{eq:scaling} and its associated scatter) would likely still be insufficient to match the expectation for an energy-conserving outflow. On the other hand, the small value of the momentum boost derived for PDS~456\ may be an indication that the gas shocked by the UFO preferentially expands along a direction not aligned with the disc plane and is not able to sweep up large amounts of ISM (Menci et al. 2019, submitted). Alternatively, the results of our analysis can be interpreted as an indication of forms of outflow driving mechanisms in high-luminosity AGN different from the UFO-related energy-driving. Models based on a mechanism for driving galaxy-scale outflows via radiation pressure on dust indeed predict $\dot{P}_{\rm mol}/\dot{P}_{\rm rad}$ values around unity, and may offer a viable explanation for the observed energetics of the outflow in PDS~456\ \citep{Ishibashi&Fabian14,Thompson15,Costa18a, Ishibashi18}. On the other hand, large-scale ($\gtrsim$ a few hundred pc) outflows cannot be explained by a momentum-conserving expansion, which predicts a rapid cooling of the shocked wind \citep[e.g.][]{King&Pounds15}.
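To quantify the gap discussed above, the following sketch evaluates Eq. \ref{eq:pdot} with the values adopted in this section ($v_{\rm UFO}=0.3c$, $v_{\rm mol}\sim1000$ {km~s$^{-1}$}, $k\sim2$) and $\mu\sim1$, appropriate for $L_{\rm Bol}\sim10^{47}$ {erg~s$^{-1}$}:
\begin{verbatim}
# Momentum boost expected for a "luminosity-corrected" energy-conserving
# expansion (Eq. 4), with the illustrative values adopted in the text.
c = 2.998e5                      # speed of light [km/s]
v_ufo = 0.3 * c                  # UFO velocity [km/s]
v_mol = 1000.0                   # molecular outflow velocity [km/s]
k, mu = 2.0, 1.0                 # v_ion/v_mol and Mdot_ion/Mdot_mol

boost = (v_ufo / v_mol) / (1.0 + k * mu)
print(f"P_mol/P_rad ~ {boost:.0f}")  # ~30, vs. ~0.36 observed in PDS 456
\end{verbatim}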
\begin{figure} \centering \includegraphics[width=1\columnwidth]{images/tau-of.pdf} \caption{$\tau_{\rm of}=E_{\rm of}/\dot{E}_{\rm UFO}$ as a function of {$L_{\rm Bol}$}\ for PDS~456\ and a compilation of AGN as in Fig.~\ref{fig:energetics}. $E_{\rm of}$ is computed under the assumption that the bulk of the outflowing gas mass is in the molecular phase. Blue (red) symbols correspond to AGN with $L_{\rm Bol}<10^{46}$ {erg~s$^{-1}$}\ ($L_{\rm Bol}>10^{46}$ {erg~s$^{-1}$}).} \label{fig:tau} \end{figure} Fig. \ref{fig:tau} shows $\tau_{\rm of} = E_{\rm of}/\dot{E}_{\rm UFO}$, which represents the time needed for the relativistic wind to provide the mechanical energy of the galaxy-scale outflow integrated over its flow time, i.e. $E_{\rm of}=0.5\times M_{\rm of}v_{\rm of}^2$, as a function of {$L_{\rm Bol}$}. Being a function of $E_{\rm of}$, $\tau_{\rm of}$ allows us to constrain the UFO efficiency in producing the observed kpc-scale outflow without any assumption on its morphology and size \citep{Nardini18}. For AGN with $L_{\rm Bol}<10^{46}$ {erg~s$^{-1}$}, $\tau_{\rm of}\sim10^5-10^6$ yr, while it drops to $\sim10^3$ yr in hyper-luminous QSOs such as PDS~456, suggesting a high efficiency of the UFO launched in these sources. We note that $E_{\rm of}$ should in principle be derived by including all gas phases at a given radius \citep{Nardini18}, while in Fig. \ref{fig:tau} we only consider the molecular gas. This is because complementary information on both the molecular and ionised phases, as traced by e.g. CO and [O III], is typically unavailable. Therefore, the observed trend of a decreasing $\tau_{\rm of}$ with increasing {$L_{\rm Bol}$}\ may be a further indication of a smaller molecular gas contribution to the total energy carried by the outflow in the high luminosity regime. However, since $\tau_{\rm of}$ is the ratio between a time-averaged quantity ($E_{\rm of}$) and an instantaneous quantity ($\dot{E}_{\rm UFO}$), a small value may also be explained in terms of an "outburst" phase of the UFO in the two sources with $L_{\rm Bol}\sim10^{47}$ {erg~s$^{-1}$}\ considered here (i.e., PDS 456 and APM 08279$+$5255). Alternatively, a small coupling of the UFO with the host-galaxy ISM can be invoked to account for the short $\tau_{\rm of}$ observed in these QSOs in a scenario where the kpc-scale outflow is undergoing an energy-conserving expansion. \section{Summary and Conclusions} In this work, we report on the ALMA observation of the 1~mm continuum and {CO(3$-$2)}\ line emission in PDS~456\ ($z_{\rm CO}=0.185$). These data enable us to probe with unprecedented spatial resolution ($\sim700$ pc) the ISM in the host galaxy of a hyper-luminous ($L_{\rm Bol}\sim10^{47}$ {erg~s$^{-1}$}) QSO. We provide the first detection of a molecular outflow in a radio-quiet, non-lensed source at the brightest end of the AGN luminosity function. Our observation highlights the importance of the combined high spatial resolution and high sensitivity provided by ALMA in revealing broad wings much weaker than the core of the CO emission line, and in disentangling the relative contribution of outflowing and quiescent molecular gas to the emission from the innermost regions around QSOs.
Our main findings can be summarised as follows: \begin{itemize} \item We detect at $\sim350\sigma$ significance the {CO(3$-$2)}\ emission from the host galaxy of PDS~456, finding that the bulk of the molecular gas reservoir is located in a rotating disk with compact size ($\sim1.3$ kpc) seen at a low inclination ($i\sim25$ deg), with an intrinsic circular velocity of $\sim280$ {km~s$^{-1}$}. We measure a molecular gas mass of $M_{\rm mol}\sim2.5\times10^9$ {$M_\odot$}\ and a dynamical mass of $M_{\rm dyn}\sim1\times10^{10}$ {$M_\odot$}. \item The {CO(3$-$2)}\ emission line profile shows a blue-shifted tail (whose flux density is about $1/60$ of the line peak), extending to $v\sim-1000$ {km~s$^{-1}$}, and a red-shifted wing at $v\lesssim600$ {km~s$^{-1}$}, associated with molecular outflowing gas. The outflow is characterised by a complex morphology, as several clumps with blue-shifted velocity are detected over a wide region out to $\sim5$ kpc from the nucleus, in addition to a bright, compact outflow component with velocity $v\in[-1000,500]$ {km~s$^{-1}$}\ located within $\sim1.2$ kpc. \item By adding together all outflow components, we measure a total mass {$M_{\rm mol}^{\rm out}$}\ $\sim2.5\times10^8$ {$M_\odot$}\ and a mass outflow rate {$\dot{M}_{\rm mol}$}\ $\sim290$ {$M_{\odot}$~yr$^{-1}$}. This is a remarkably weak outflow for such a hyper-luminous QSO hosting one of the fastest and most energetic UFOs ever detected. Nevertheless, the measured {$\dot{M}_{\rm mol}$}\ implies a depletion timescale $\tau_{\rm dep}\sim8$ Myr for the molecular gas in PDS~456, which is a factor of 4 $-$ 10 shorter than the time needed to convert the molecular gas into stars ($\tau_{\rm SF}$). This suggests a possible quenching of the star-formation activity in the host galaxy within a short time. \item The momentum boost of the molecular outflow with respect to the AGN radiative momentum output is {$\dot{P}_{\rm mol}$}/$\dot{P}_{\rm rad}\sim0.36$, which represents the smallest value reported so far for sources exhibiting both a UFO and a molecular outflow. This result improves our understanding of the {$\dot{P}_{\rm of}$}/$\dot{P}_{\rm rad}$ versus {$L_{\rm Bol}$}\ relation and indicates that the relation between UFOs and galaxy-scale molecular outflows is very complex and may significantly differ from the typical expectations of models of energy-conserving expansion \citep[e.g.][]{Faucher-Giguere12,Zubovas&King12}, i.e. {$\dot{P}_{\rm of}$}/$\dot{P}_{\rm rad}$ $\gg$ 1. \item We calculate updated scaling relations between the mass outflow rate and {$L_{\rm Bol}$}\ for both the molecular and ionised gas phase. Thanks to our detection of the molecular outflow in PDS~456, combined with other recent results, we can extend the modelling of the {$\dot{M}_{\rm mol}$}\ vs {$L_{\rm Bol}$}\ relation by one order of magnitude in luminosity. Our best-fit relations indicate that the molecular mass outflow rate flattens at {$L_{\rm Bol}$}\ $>10^{46}$ {erg~s$^{-1}$}, while the ionised one keeps increasing up to {$L_{\rm Bol}$}\ $\sim10^{48}$ {erg~s$^{-1}$}. Although with a large scatter, the contributions of the two gas phases appear comparable at {$L_{\rm Bol}$}\ $\sim10^{47}$ {erg~s$^{-1}$}, suggesting that in luminous QSOs the ionised gas phase cannot be neglected to properly evaluate the impact of AGN-driven feedback. Planned high-resolution \textit{VLT}-MUSE observations will offer us an excellent opportunity to shed light on this by probing the ionised gas phase in PDS~456\ with unprecedented detail.
\item We derive an empirical relation to compute the luminosity-corrected {$\dot{P}_{\rm mol}$}\ for an energy-conserving scenario, as a function of {$L_{\rm Bol}$}. Accordingly, we predict smaller $\dot{P}_{\rm mol}$/$\dot{P}_{\rm rad}$ in luminous QSOs compared to the "classic" energy-conserving scenario. However, in the case of PDS~456, even the largest $\dot{P}_{\rm mol}$ predicted by our analysis (corresponding to a $\dot{M}_{\rm ion}\sim10^{4}$ {$M_{\odot}$~yr$^{-1}$}) still falls short of matching the expectations for an efficient energy-conserving expansion, unless the gas shocked by the UFO leaks out along a direction that intercepts a small fraction of the molecular disc. Remarkably, the small momentum boost measured for the molecular outflow in PDS~456\ lends support to a driving mechanism alternative to or concurrent with energy-driving, i.e. AGN radiation pressure on dust, which predicts momentum ratios close to unity. \item The time necessary for the UFO to supply the energy measured for the molecular outflow in PDS~456, i.e. $\tau_{\rm of}\sim10^{-3}$ Myr, is about two orders of magnitude shorter than those derived for AGN at lower {$L_{\rm Bol}$}. Such a small value of $\tau_{\rm of}$ may suggest that the molecular phase is not representative of the total outflow energy in hyper-luminous sources, or that the UFO in PDS~456\ is caught in an "outburst" phase. Alternatively, it may be an indication of AGN radiative feedback at work in luminous QSOs. All these hypotheses suggest a very complex interplay between nuclear activity and its surroundings, with important implications for evaluating and simulating the impact and role of AGN-driven feedback in the evolution of massive galaxies. \end{itemize} \begin{acknowledgements} This paper makes use of the ALMA data from project ADS/JAO.ALMA\#2016.1.01156.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank R. Valiante, C. Ramos-Almeida and N. Menci for helpful discussions, and E. Di Teodoro and M. Talia for their assistance in the usage of the $^{\rm 3D}$BAROLO model. M. Bischetti, E. Piconcelli, A. Bongiorno, L. Zappacosta and M. Brusa acknowledge financial support from ASI and INAF under the contract 2017-14-H.0 ASI-INAF. C. Feruglio, E. Piconcelli and F. Fiore acknowledge financial support from INAF under the contract PRIN INAF 2016 FORECAST. R. Maiolino acknowledges ERC Advanced Grant 695671 "QUENCH" and support by the Science and Technology Facilities Council (STFC). C. Cicone and E. Nardini acknowledge funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 664931. \end{acknowledgements} \bibpunct{(}{)}{;}{a}{}{,} \bibliographystyle{aa}
\section{Introduction} Hyperspectral imaging is the technique that captures the reflectance of scenes with extremely high spectral resolution (\eg, 10$nm$)~\cite{chakrabarti2011statistics}. The captured hyperspectral image (HSI) often contains hundreds or thousands of spectral bands, and each pixel has a spectrum~\cite{chakrabarti2011statistics,zhang2018cluster}. Profiting from the abundant spectral information, HSIs have been widely applied to various tasks, \eg, classification~\cite{akhtar2018nonparametric}, detection~\cite{manolakis2002detection} and tracking~\cite{van2010tracking} \etc However, the cost of obtaining such spectral information is an increase in the pixel size on the sensor, which inevitably limits the spatial resolution of HSIs~\cite{lanaras2015hyperspectral}. Thus, it is crucial to investigate how to generate high-spatial-resolution (HSR) HSIs. Different from conventional HSI super-resolution~\cite{mei2017hyperspectral,zhang2018exploiting}, which directly improves the spatial resolution of a given HSI, spectral super-resolution (SSR)~\cite{arad2016sparse,xiong2017hscnn} takes an alternative route and attempts to produce an HSR HSI by increasing the spectral resolution of a given RGB image with satisfactory spatial resolution. Early SSR methods~\cite{arad2016sparse,aeschbacher2017defense,jia2017rgb} often formulate SSR as a linear inverse problem, and exploit the inherent low-level statistics of HSR HSIs as priors. However, due to the limited expressive capacity of their handcrafted prior models, these methods fail to generalize well to challenging cases. Recently, witnessing the great success of deep convolutional neural networks (DCNNs) in a wide range of tasks~\cite{simonyan2014very,he2016deep,he2017mask}, increasing effort has been invested in learning a DCNN based mapping function that directly transforms the RGB image into an HSI~\cite{alvarez2017adversarial,arad2017filter,shi2018hscnn+,fu2018joint}. These methods essentially involve mapping the RGB context within a size-specific receptive field centered at each pixel to its spectrum in the HSI, as shown in Figure~\ref{fig:idea}. The focus thereon is to appropriately determine the receptive field size and establish the mapping function from RGB context to the corresponding spectrum. Due to differences in category or spatial position, pixels in HSIs often require different RGB context and different recovery schemes for SSR. Therefore, to obtain an accurate DCNN based SSR approach, it is crucial to adaptively determine the receptive field size and the RGB-to-spectrum mapping function for each pixel. However, most existing DCNN based SSR methods treat all pixels in HSIs equally and learn a universal mapping function with a fixed-size receptive field, as shown in Figure~\ref{fig:idea}. In this study, we present a pixel-aware deep function-mixture network for SSR, which can flexibly determine the receptive field size and the mapping function for each pixel. Specifically, we first develop a new module, termed the function-mixture (FM) block. Each FM block consists of several parallel DCNN based subnets, among which one is termed the {\textit{mixing function}} and the remaining are termed {\textit{basis functions}}. The basis functions take different-sized receptive fields and learn distinct mapping schemes; the mixing function generates pixel-wise weights to linearly mix the outputs of the basis functions.
In this way, the pixel-wise weights can determine a specific information flow for each pixel and consequently help the network choose the appropriate RGB context as well as the mapping function for spectrum recovery. Then, we stack several such FM blocks to further improve the flexibility of the network in learning the pixel-wise mapping. Furthermore, to encourage feature reuse, the intermediate features generated by the FM blocks are fused at a late stage, which proves to be effective for boosting the SSR performance. Experimental evaluation on three benchmark HSI datasets shows the superiority of the proposed approach for SSR. In summary, we mainly contribute in three aspects. \textbf{\textit{i)}} We present an effective pixel-aware deep function-mixture network for SSR, which can flexibly learn the pixel-wise RGB-to-spectrum mapping. To the best of our knowledge, this is the first attempt to explore this in SSR. \textbf{\textit{ii)}} We design a new FM module, which can be flexibly plugged into any modern DCNN architecture. \textbf{\textit{iii)}} We demonstrate new state-of-the-art performance on three benchmark SSR datasets. \section{Related Work} We first review the existing approaches for SSR and then introduce some techniques related to this work. \vspace{-0.4cm} \paragraph{Spectral Super-resolution} Early methods mainly focus on exploiting appropriate image priors to regularize the linear inverse SSR problem. For example, Arad \etal and Aeschbacher \etal~\cite{arad2016sparse,aeschbacher2017defense} investigated the sparsity of the latent HSI on a pre-trained over-complete spectral dictionary. Jia \etal~\cite{jia2017rgb} considered the manifold structure of HSIs in a low-dimensional space. Recently, most methods turn to learning a deep mapping function from the RGB image to an HSI. For example, Alvarez-Gila \etal~\cite{alvarez2017adversarial} implemented the mapping function using an U-Net architecture~\cite{ronneberger2015u} and trained it based on both the mean-square-error (MSE) loss and the adversarial loss~\cite{goodfellow2014generative}. Shi \etal~\cite{shi2018hscnn+} developed a deep residual network consisting of residual blocks to learn the mapping function. Despite obtaining impressive performance for SSR, these methods are limited by learning a universal RGB-to-spectrum mapping function for all pixels in HSIs. This leaves space for learning more flexible and adaptive mapping functions. \vspace{-0.4cm} \paragraph{Receptive Field in DCNNs} The receptive field is an important concept in DCNNs, as it determines the sensing space of a convolutional neuron. There are many efforts dedicated to adjusting the size or shape of the receptive field~\cite{yu2015multi,wei2017learning,dai2017deformable} to meet the requirements of the specific task at hand. Among them, dilated convolution~\cite{yu2015multi} and kernel separation~\cite{seif2018large} are often utilized to enlarge the receptive field. Recently, Wei \etal~\cite{wei2017learning} changed the receptive field by inflating or shrinking the feature maps using two affine transformation layers. Dai \etal~\cite{dai2017deformable} proposed to adaptively determine the context within the receptive field by estimating the offsets of pixels to the central pixel using an additional convolution layer. In contrast, we take a totally different direction and learn the pixel-wise receptive field size by mixing basis functions with different receptive field sizes.
\begin{figure*} \centering \includegraphics[height=1.1in, width=6.2in]{./img/flow.pdf} \caption{Architecture of the proposed pixel-aware deep function-mixture network. FMB denotes the function-mixture block.} \label{fig:FMNet} \vspace{-0.3cm} \end{figure*} \vspace{-0.4cm} \paragraph{Multi-column Network} The multi-column network~\cite{cirecsan2012multi} is a special type of network that feeds the input into several parallel DCNNs (\ie, columns) and then aggregates their outputs for the final prediction. With the ability to use more context information, the multi-column network (MCNet) often shows better generalization capacity than a single-column network in various tasks, \eg, classification~\cite{cirecsan2012multi}, image processing~\cite{agostinelli2013adaptive}, counting~\cite{zhang2016single} \etc. Although we also adopt a similar multi-column structure in our module design, the proposed network is clearly different from these existing multi-column networks~\cite{cirecsan2012multi,zhang2016single,agostinelli2013adaptive}. First, MCNet employs a separation-and-aggregation architecture which processes the input with parallel columns and then aggregates the outputs of all columns for the final prediction. In contrast, we adopt a recursive separation-and-aggregation architecture by stacking multiple FM modules, each of which can be viewed as an enhanced multi-column module, as shown in Figures~\ref{fig:idea} and~\ref{fig:FM}. Second, when applied to SSR, MCNet still learns a universal mapping function and fails to flexibly handle each pixel in an explicit way. In contrast, the proposed FM block incorporates a mixing function to generate pixel-wise weights and mix the outputs of all basis functions. This makes it possible to flexibly customize the pixel-wise mapping function. In addition, we fuse the intermediate features generated by the FM blocks in the network for feature reuse. \section{Proposed Network} In this section, we present the technical details of the proposed pixel-aware deep function-mixture network, as shown in Figure~\ref{fig:FMNet}. The proposed network adopts a global residual architecture as in~\cite{kim2016accurate}. Its backbone is constructed by stacking multiple FM blocks and fusing the intermediate features generated by the preceding FM blocks with skip connections. In the following, we first introduce the basic FM block. Then, we describe how multiple FM blocks and intermediate feature fusion are incorporated into the proposed network for performance enhancement. \subsection{Function-mixture Block}\label{subsec:FMB} The proposed network essentially establishes an end-to-end mapping function from an RGB image to its HSI counterpart, and thus each FM block plays the role of a mapping subfunction. In this study, we attempt to utilize the FM block to adaptively determine the receptive field size and the mapping function for each pixel, \ie, to obtain a pixel-dependent mapping subfunction. To this end, a direct solution is to introduce an additional hypernetwork~\cite{ha2016hypernetworks,jia2016dynamic} to adaptively generate the subfunction parameters for each pixel. However, this would greatly increase the computational complexity as well as the training difficulty~\cite{ha2016hypernetworks}. To avoid this problem, we borrow the idea of function approximation~\cite{cybenko1989approximation} and assume that all pixel-dependent subfunctions can be accurately approximated by mixing some {\textit{basis functions}} with pixel-wise weights.
Since they are shared by all subfunctions, these basis functions can be learned by DCNNs, while the pixel-wise mixing weights can be viewed as a pixel-wise channel attention~\cite{Sato2014Deep} and thus can also be directly generated by a DCNN. Following this idea, we construct the FM block with a separation-and-aggregation structure, as shown in Figure~\ref{fig:FM}. First, a convolutional block, \ie a convolutional layer followed by a Rectified Linear Unit (ReLU)~\cite{nair2010rectified}, is utilized for initial feature representation. Then, the obtained features are fed into multiple parallel subnets. Among them, one subnet is utilized to generate the pixel-wise mixing weights; for simplicity, we term it the {\textit{mixing function}}. The remaining subnets represent the basis functions. Finally, the outputs of all basis functions are linearly mixed based on the generated pixel-wise weights. Let $\mathbf{x}^{u-1}$ denote the input for the $u$-th FM block $\mathcal{F}^{u}$ and $n$ denote the number of basis functions in $\mathcal{F}^{u}$. The output $\mathbf{x}^{u}$ of $\mathcal{F}^{u}$ can be formulated as \begin{equation}\label{eq:eq1} \begin{aligned} &\mathbf{x}^{u}=\mathcal{F}^{u}(\mathbf{x}^{u-1})= \sum\nolimits^n_{i=1} f^{u}_{i}(\bar{\mathbf{x}}^{u},\theta^{u}_i)\odot w^{u}(\bar{\mathbf{x}}^{u},\vartheta^u)[i]\\ &{\rm{s.t.}} {\kern 2pt} \bar{\mathbf{x}}^{u}=\mathcal{G}^u(\mathbf{x}^{u-1}, \omega^{u}),\\ &{\kern 14pt}\sum\nolimits^n_{i=1} w^{u}(\bar{\mathbf{x}}^{u}, \vartheta^u)[i] = \mathbf{1}, w^{u}(\bar{\mathbf{x}}^{u},\vartheta^u)\succeq 0, \end{aligned} \end{equation} where $f^{u}_i(\cdot,\theta^u_i)$ denotes the $i$-th basis function parameterized by $\theta^u_i$ and $w^{u}(\cdot, \vartheta^u)$ represents the mixing function parameterized by $\vartheta^u$. When $f^{u}_{i}(\bar{\mathbf{x}}^{u},\theta^{u}_i)$ is of size $c\times h\times w$ (\ie, channel $\times$ height $\times$ width), $w^{u}(\bar{\mathbf{x}}^{u}, \vartheta^u)$ is of size $n\times h \times w$, and $w^{u}(\bar{\mathbf{x}}^{u}, \vartheta^u)[i]$ represents the mixing weights of size $h\times w$ generated for all pixels corresponding to the $i$-th basis function. $\odot$ denotes the element-wise product, with the weight map broadcast along the channel dimension. $\bar{\mathbf{x}}^u$ denotes the features output by the convolutional block $\mathcal{G}^u(\cdot,\omega^{u})$ in $\mathcal{F}^{u}$, and $\omega^{u}$ represents the convolutional filters. Inspired by~\cite{everitt2005finite}, we also require the mixing weights to be non-negative and to sum to one across all basis functions, as shown in Eq.~\eqref{eq:eq1}. \begin{figure} \centering \includegraphics[height=2.2in, width=3.5in]{./img/FM.pdf} \caption{Architecture of the proposed function-mixture block, where $k_i$ ($i=1,\cdots,n$) denotes the convolutional filter size in the $i$-th basis function $f^u_i$.} \label{fig:FM} \vspace{-0.3cm} \end{figure} In this study, we implement the basis functions as well as the mixing function by stacking $m$ consecutive convolutional blocks, as shown in Figure~\ref{fig:FM}. Moreover, we equip the basis functions with different-sized convolutional filters to ensure that they take different-sized receptive fields and learn distinct mapping schemes. For the mixing function, we introduce a Softmax unit at the end to comply with the constraints in Eq.~\eqref{eq:eq1}. Thanks to such a pixel-wise mixture architecture, the proposed FM block is able to determine the appropriate receptive field size and the mapping function for each pixel.
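To make the construction above concrete, we sketch one FM block in PyTorch. This is a minimal illustration written for this exposition rather than the actual implementation: the class and variable names are invented, and the channel width and kernel sizes merely mirror the settings reported in the implementation details below.
\begin{verbatim}
import torch
import torch.nn as nn


def conv_block(channels, k):
    # convolutional block: conv layer followed by a ReLU
    return nn.Sequential(
        nn.Conv2d(channels, channels, k, padding=k // 2),
        nn.ReLU(inplace=True))


class FMBlock(nn.Module):
    """One function-mixture block (Eq. (1)): n basis functions with
    different kernel sizes are mixed per pixel by softmax weights."""

    def __init__(self, channels=64, kernel_sizes=(3, 7, 11), m=2):
        super().__init__()
        # shared convolutional block G^u for the initial representation
        self.shared = conv_block(channels, 3)
        # basis functions f^u_i: m stacked blocks with filter size k_i
        self.bases = nn.ModuleList(
            nn.Sequential(*[conv_block(channels, k) for _ in range(m)])
            for k in kernel_sizes)
        # mixing function w^u: m blocks plus a conv producing n maps;
        # the softmax over dim=1 enforces the constraints in Eq. (1)
        self.mixing = nn.Sequential(
            *[conv_block(channels, 3) for _ in range(m)],
            nn.Conv2d(channels, len(kernel_sizes), 3, padding=1))

    def forward(self, x):
        s = self.shared(x)                        # \bar{x}^u
        w = torch.softmax(self.mixing(s), dim=1)  # n x h x w weights
        # pixel-wise linear mixture of the basis-function outputs
        return sum(f(s) * w[:, i:i + 1] for i, f in enumerate(self.bases))
\end{verbatim}
For instance, \texttt{FMBlock()(torch.rand(1, 64, 32, 32))} returns a tensor of the same shape, with the per-pixel softmax weights realising the non-negativity and sum-to-one constraints of Eq.~\eqref{eq:eq1}.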
\subsection{Multiple FM Blocks}\label{subsec:MFMB} As shown in Figure~\ref{fig:FMNet}, in the proposed network, we first introduce an individual convolutional block and then stack multiple FM blocks for the intermediate feature representation and the ultimate output. For an input RGB image $\mathbf{x}$, the output of the network with $p$ FM blocks is given as \begin{equation}\label{eq:eq2} \begin{aligned} &\mathbf{y} = \mathbf{x} + \mathcal{F}^{p}\left(\mathcal{F}^{p-1}\left(\cdots\mathcal{F}^2\left(\mathcal{F}^{1}\left(\mathbf{x}^0\right)\right)\right)\right),\\ &{\rm{s.t.}} \quad\mathbf{x}^0 = \mathcal{G}^{0}\left(\mathbf{x},\omega^{0}\right), \end{aligned} \end{equation} where $\mathbf{y}$ denotes the generated HSI and $\mathbf{x}^0$ represents the output of the first convolutional block $\mathcal{G}^{0}(\cdot,\omega^{0})$ parameterized by $\omega^{0}$. It is worth noting that in this study we increase the spectral resolution of $\mathbf{x}$ to that of $\mathbf{y}$ by bilinear interpolation, so that the residual connection in Eq.~\eqref{eq:eq2} is well-defined. In addition, $\mathcal{F}^{1}, \cdots,\mathcal{F}^{p-1}$ share the same architecture, while the output of $\mathcal{F}^{p}$ is adjusted according to the number of spectral bands in $\mathbf{y}$. It has been shown that the layers in a DCNN from bottom to top take increasingly larger receptive fields and extract different levels of features from the input signal~\cite{zhou2016learning}. Therefore, by stacking multiple FM blocks, we can further increase the flexibility of the proposed network in learning the pixel-wise mapping, viz., adjust the receptive field size and the mapping function for each pixel at multiple levels. In addition, considering that each FM block defines a mapping subfunction for each pixel, the ultimate mapping function obtained by stacking $p$ FM blocks can be viewed as a composition of $p$ subfunctions. Since each subfunction is approximated by a mixture of $n$ basis functions, the ultimate mapping function can be viewed as a mixture of $n^p$ basis functions, which shows a much larger expressive capacity than a single FM block in fitting an appropriate mapping function for each pixel. \subsection{Intermediate Features Fusion}\label{subsec:IFF} As previously mentioned, the FM blocks in the proposed network extract different levels of features from the input. Inspired by~\cite{kim2016deeply,zhang2018residual}, to reuse these intermediate features for performance enhancement, we employ skip connections to aggregate the intermediate features generated by each FM block before the ultimate output block with a concatenation operation, as shown in Figure~\ref{fig:FMNet}. To better utilize all of these features for pixel-wise representation, we introduce an extra FM block $\mathcal{F}_{c}$ to fuse the concatenation result. With such an intermediate feature fusion operation, the output of the proposed network can be reformulated as \begin{equation}\label{eq:eq3} \begin{aligned} \mathbf{y} = \mathbf{x} + \mathcal{F}^{p}\left(\mathcal{F}_c\left(\left[\mathcal{F}^{p-1}\left(\cdots\mathcal{F}^{1}\left(\mathbf{x}^{0}\right)\right),\cdots,\mathcal{F}^{1}\left(\mathbf{x}^{0}\right)\right]\right)\right). \end{aligned} \end{equation} \begin{table*}\small \caption{Numerical results of different methods on three benchmark SSR datasets.
The best results are in bold.} \label{table:numerical} \renewcommand{\arraystretch}{1.1} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{NTIRE2018} & \multicolumn{4}{c|}{CAVE} & \multicolumn{4}{c}{Harvard}\\ \cline{2-13} & {RMSE} & {PSNR} & {SAM} & {SSIM} & {RMSE} & {PSNR} & {SAM} & {SSIM} & {RMSE} & {PSNR} & {SAM} & {SSIM}\\ \hline BI~\cite{hou1978cubic} & 15.41 & 25.73 & 15.30 & 0.8397 & 26.60 & 21.49 & 34.38 & 0.7382 & 30.86 & 19.44 & 39.04 & 0.5887\\ Arad~\cite{arad2016sparse} & 4.46 & 35.63 & 5.90 & 0.9082 & 10.09 & 28.96 & 19.54 & 0.8695 & 7.85 & 31.30 & 8.32 & 0.8490\\ Aitor~\cite{alvarez2017adversarial} & 1.97 & 43.30 & 1.80 & 0.9907 & 6.80 & 32.53 & 17.50 & 0.8768 & 3.29 & 39.21 & 4.93 & 0.9671\\ HSCNN+~\cite{xiong2017hscnn} & 1.55 & 45.38 & 1.63 & 0.9931 & 4.97 & 35.66 & 8.73 & 0.9529 & 2.87 & 41.05 & 4.28 & 0.9741\\ \hline DCNN & 1.23 & 47.40 & 1.30 & 0.9939 & 5.77 & 34.09 & 11.35 & 0.9275 & 2.88 & 40.83 & 4.24 & 0.9724\\ MCNet & 1.11 & 48.43 & 1.13 & 0.9951 & 4.84 & 35.92 & 8.98 & 0.9555 & 2.83 & 40.70 & 4.26 & 0.9689\\ \hline Ours & {\textbf{1.03}} & {\textbf{49.29}} & {\textbf{1.05}} & {\textbf{0.9955}} & {\textbf{4.54}} & {\textbf{36.33}} & {\textbf{7.07}} & {\textbf{0.9611}} & {\textbf{2.54}} & {\textbf{41.54}} & {\textbf{3.76}} & {\textbf{0.9796}}\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \end{table*} \section{Experiment} In this section, we conduct extensive comparison experiments and carry out an ablation study to demonstrate the effectiveness of the proposed method for SSR. \subsection{Experimental Setting} \paragraph{Datasets} In this study, we adopt three benchmark HSI datasets: NTIRE2018~\cite{timofte2018ntire}, CAVE~\cite{yasuma2010generalized} and Harvard~\cite{chakrabarti2011statistics}. The NTIRE2018 dataset is the benchmark for the SSR challenge in NTIRE2018. It contains 255 paired HSIs and RGB images with the same spatial resolution, \eg, 1392 $\times$ 1300. Each HSI consists of $31$ successive spectral bands ranging from 400$nm$ to 700$nm$ with a 10$nm$ interval. The CAVE dataset contains 32 HSIs of indoor objects. Similar to NTIRE2018, each HSI contains $31$ spectral bands ranging from 400$nm$ to 700$nm$ with a 10$nm$ interval, but with a spatial resolution of 512 $\times$ 512. The Harvard dataset is another common benchmark for HSIs. It consists of 50 HSIs with spatial resolution 1392$\times$1040. Each image contains 31 spectral bands captured from 420$nm$ to 720$nm$ with a 10$nm$ interval. For the CAVE and Harvard datasets, inspired by~\cite{dong2016hyperspectral,zhang2018exploiting}, we adopt the spectral response function of a Nikon D700 camera~\cite{dong2016hyperspectral} to generate the corresponding RGB image for each HSI. In the following experiments, we randomly select 200 paired images from the NTIRE2018 dataset as the training set and use the remaining 55 paired images for testing. For the CAVE dataset, we randomly choose 22 paired images for training and the remaining 10 paired images for testing. For the Harvard dataset, 30 paired images are randomly chosen as the training set and the remaining 20 paired images are utilized for testing.
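Note that the RGB simulation for the CAVE and Harvard datasets amounts to projecting each spectrum onto the three response curves of the camera. The following numpy sketch illustrates this step; the random placeholder response and the simple peak normalization are assumptions for illustration only and would be replaced by the actual Nikon D700 curves from~\cite{dong2016hyperspectral}.
\begin{verbatim}
import numpy as np

def hsi_to_rgb(hsi, srf):
    # hsi: (h, w, 31) reflectance cube; srf: (31, 3) camera spectral
    # response function; returns an RGB image normalized to [0, 1]
    rgb = np.einsum('hwb,bc->hwc', hsi, srf)
    return rgb / rgb.max()

hsi = np.random.rand(64, 64, 31)   # toy cube
srf = np.random.rand(31, 3)        # placeholder for the D700 curves
print(hsi_to_rgb(hsi, srf).shape)  # (64, 64, 3)
\end{verbatim}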
\vspace{-0.4cm} \paragraph{Comparison Methods} In this study, we compare the proposed method with six competing methods, including bilinear interpolation (BI)~\cite{hou1978cubic}, Arad~\cite{arad2016sparse}, Aitor~\cite{alvarez2017adversarial}, HSCNN+~\cite{xiong2017hscnn}, a deep convolutional neural network (DCNN) and a multi-column network (MCNet). Among them, BI utilizes bilinear interpolation to increase the spectral resolution of the input RGB image. Arad is a conventional sparsity-based SSR method. Aitor and HSCNN+ are two recent DCNN based state-of-the-art SSR methods. DCNN and MCNet are two baselines for the proposed method. DCNN is a variant of the proposed method obtained by replacing each FM block with a convolutional block. MCNet is implemented following the basic architecture in~\cite{cirecsan2012multi,zhang2016single} with convolutional blocks. Moreover, its column number is set as $n$ and the convolutions in the $n$ columns are equipped with $n$ kinds of different-sized filters, similar to the proposed method. We further control the depth of each column to make sure that the model complexity of MCNet is comparable to that of the proposed method. By doing this, the only difference between MCNet and the proposed network is the network architecture. For a fair comparison, all DCNN based competitors and the spectral dictionary in Arad~\cite{arad2016sparse} are retrained on the training set utilized in the experiments. \vspace{-0.4cm} \paragraph{Evaluation Metrics} To objectively evaluate the SSR performance of each method, we employ four commonly utilized metrics: the root-mean-square error (RMSE), the peak signal-to-noise ratio (PSNR), the spectral angle mapper (SAM) and the structural similarity index (SSIM). The RMSE and PSNR measure the numerical difference between the reconstructed image and the reference image. The SAM computes the average spectral angle between the two spectra of the reconstructed image and the reference image at the same spatial position, and thus indicates the reconstruction accuracy of the spectrum (see the code sketch below). The SSIM is often utilized to measure the spatial structure similarity between two images. In general, a larger PSNR or SSIM and a smaller RMSE or SAM indicate better performance. \vspace{-0.4cm} \paragraph{Implementation Details} In the proposed method, we adopt 4 FM blocks (\ie, $p$=3 FM blocks plus $\mathcal{F}_c$ for feature fusion), and each block contains $n=3$ basis functions. The basis functions and the mixing functions consist of $m$=2 convolutional blocks. Each convolutional block contains 64 filters. In each FM block, the three basis functions are equipped with three different-sized filters for convolution, \ie, 3$\times$3, 7$\times$7 and 11$\times$11, while the filter size in all other convolutional blocks is fixed as 3$\times$3. We implement the proposed method on the PyTorch platform~\cite{ketkar2017introduction} and train the network by solving \begin{equation}\label{eq:eq4} \begin{aligned} \min\limits_{\theta} \frac{1}{N}\sum\nolimits^N_{i=1} \|\mathbf{y}_i - f(\mathbf{x}_i,\theta)\|_1, \end{aligned} \end{equation} where $(\mathbf{y}_i, \mathbf{x}_i)$ denotes the $i$-th pair of HSI and RGB image, $N$ denotes the number of training pairs, $f$ denotes the ultimate mapping function defined by the proposed network and $\theta$ represents all involved parameters. $\|\cdot\|_1$ represents the $\ell_1$ norm based loss.
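For reference, the following numpy sketch shows how the RMSE, PSNR and SAM scores used above can be computed. It is our own illustration: the peak value of 255 and the degree convention for SAM are assumptions that may differ from the exact evaluation code used in the experiments.
\begin{verbatim}
import numpy as np

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, peak=255.0):
    return 20.0 * np.log10(peak / rmse(x, y))

def sam(x, y, eps=1e-8):
    # x, y: (h, w, bands); mean spectral angle in degrees
    dot = np.sum(x * y, axis=-1)
    norms = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1)
    ang = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(ang).mean())
\end{verbatim}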
In the training stage, we employ the Adam optimizer~\cite{kingma2014adam} with a weight decay of 1e-6. The learning rate is initially set to 1e-4 and halved every 20 epochs. The batch size is 128. We terminate the optimization at the $100$-th epoch. \begin{figure*}[htbp] \centering \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_chazhi_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_A_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_MSCNN_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_HSCNN_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_full_conv_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_MC_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_ours_channel31.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/blank.pdf} \\ \vspace{-0.18cm} \subfigure[BI~\cite{hou1978cubic}]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_chazhi.pdf}} \hspace{-0.15cm} \subfigure[Arad~\cite{arad2016sparse}]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_A.pdf}} \hspace{-0.15cm} \subfigure[Aitor~\cite{alvarez2017adversarial}]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_MSCNN.pdf}} \hspace{-0.15cm} \subfigure[HSCNN+~\cite{xiong2017hscnn}]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_HSCNN.pdf}} \hspace{-0.15cm} \subfigure[DCNN]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_full_conv.pdf}} \hspace{-0.15cm} \subfigure[MCNet]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_MC.pdf}} \hspace{-0.15cm} \subfigure[Ours]{\includegraphics[height=1.0in, width=0.9in]{./error/BGU_HS_00043_ours.pdf}} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/colorbar_30.pdf} \caption{Visual super-resolution results of the 31st band and the reconstruction error maps of an example image from the NTIRE2018 dataset for different methods. The reconstruction error is obtained by computing the mean-square error between two spectrum vectors from the super-resolution result and the ground truth at each pixel.
Best viewed on the screen.} \label{fig:visual-ntr} \vspace{-0.3cm} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_chazhi_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_A_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_MSCNN_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_HSCNN_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_full_conv_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_MC_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/face_ms_ours_channel28.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/blank.pdf} \\ \vspace{-0.18cm} \subfigure[BI~\cite{hou1978cubic}]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_chazhi.pdf}} \hspace{-0.15cm} \subfigure[Arad~\cite{arad2016sparse}]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_A.pdf}} \hspace{-0.15cm} \subfigure[Aitor~\cite{alvarez2017adversarial}]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_MSCNN.pdf}} \hspace{-0.15cm} \subfigure[HSCNN+~\cite{xiong2017hscnn}]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_HSCNN.pdf}} \hspace{-0.15cm} \subfigure[DCNN]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_full_conv.pdf}} \hspace{-0.15cm} \subfigure[MCNet]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_MC.pdf}} \hspace{-0.15cm} \subfigure[Ours]{\includegraphics[height=1.0in, width=0.9in]{./error/face_ms_ours.pdf}} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/colorbar_30.pdf} \caption{Visual super-resolution results of the 28th band and the reconstruction error maps of an example image from the CAVE dataset for different methods. The reconstruction error is obtained by computing the mean-square error between two spectrum vectors from the super-resolution result and the ground truth at each pixel.
Best viewed on the screen.} \label{fig:visual-cave} \vspace{-0.3cm} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_chazhi_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_A_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_MSCNN_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_HSCNN_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_full_conv_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_MC_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.9in]{./error/imgb5_ours_channel18.pdf} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/blank.pdf} \\ \vspace{-0.18cm} \subfigure[BI~\cite{hou1978cubic}]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_chazhi.pdf}} \hspace{-0.15cm} \subfigure[Arad~\cite{arad2016sparse}]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_A.pdf}} \hspace{-0.15cm} \subfigure[Aitor~\cite{alvarez2017adversarial}]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_MSCNN.pdf}} \hspace{-0.15cm} \subfigure[HSCNN+~\cite{xiong2017hscnn}]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_HSCNN.pdf}} \hspace{-0.15cm} \subfigure[DCNN]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_full_conv.pdf}} \hspace{-0.15cm} \subfigure[MCNet]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_MC.pdf}} \hspace{-0.15cm} \subfigure[Ours]{\includegraphics[height=1.0in, width=0.9in]{./error/imgb5_ours.pdf}} \hspace{-0.15cm} \includegraphics[height=1.0in, width=0.2in]{./error/colorbar_50.pdf} \caption{Visual super-resolution results of the 18th band and the reconstruction error maps of an example image from the Harvard dataset for different methods. The reconstruction error is obtained by computing the mean-square error between two spectrum vectors from the super-resolution result and the ground truth at each pixel. Best viewed on the screen.} \label{fig:visual-harvard} \vspace{-0.3cm} \end{figure*} \begin{figure*}[htbp] \centering \subfigure[NTIRE2018]{\includegraphics[height=1.0in, width=0.9in]{./spectrum/BGU_HS_00110_rgb.pdf}} \hspace{-0.15cm} \subfigure[Spectra]{\includegraphics[height=1.0in, width=1.3in]{./spectrum/BGU_HS_00110_ours_plot.pdf}} \hspace{-0.15cm} \subfigure[CAVE]{\includegraphics[height=1.0in, width=0.9in]{./spectrum/pompoms_ms_rgb.pdf}} \hspace{-0.15cm} \subfigure[Spectra]{\includegraphics[height=1.0in, width=1.3in]{./spectrum/pompoms_ms_ours_plot.pdf}} \hspace{-0.15cm} \subfigure[Harvard]{\includegraphics[height=1.0in, width=0.9in]{./spectrum/imge7_rgb.pdf}} \hspace{-0.15cm} \subfigure[Spectra]{\includegraphics[height=1.0in, width=1.3in]{./spectrum/imge7_ours_plot.pdf}} \caption{Recovered spectra from the super-resolution results of the proposed method on three example images chosen from the three datasets.
In each image, we select four different positions and plot the curves of the recovered spectra (denoted by dashed lines) and the corresponding ground truth spectra (denoted by solid lines).} \label{fig:spectrum} \vspace{-0.3cm} \end{figure*} \begin{figure}[htbp] \centering \includegraphics[height=0.7in, width=0.75in]{./mask/1_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/2_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/c_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/o_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \\ \includegraphics[height=0.7in, width=0.75in]{./mask/1_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/2_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/c_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/o_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \\ \vspace{-0.18cm} \subfigure[$\mathcal{F}^{1}$]{\includegraphics[height=0.7in, width=0.75in]{./mask/1_3.pdf}} \hspace{-0.15cm} \subfigure[$\mathcal{F}^{2}$]{\includegraphics[height=0.7in, width=0.75in]{./mask/2_3.pdf}} \hspace{-0.15cm} \subfigure[$\mathcal{F}_{c}$]{\includegraphics[height=0.7in, width=0.75in]{./mask/c_3.pdf}} \hspace{-0.15cm} \subfigure[$\mathcal{F}^{3}$]{\includegraphics[height=0.7in, width=0.75in]{./mask/o_3.pdf}} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \caption{Pixel-wise weights generated by the mixing function in different FM blocks of the proposed network. Figures in each column show the weight maps for the three basis functions (from top to bottom: $f^u_1$, $f^u_2$ and $f^u_3$). For visualization convenience, we normalize each weight map into the range [0,1] using the maximum and minimum values within each map.} \label{fig:one-weight} \vspace{-0.3cm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00104_clean.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/2_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/2_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/2_3.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \\ \includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00008_clean.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00008_2_1.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00008_2_2.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00008_2_3.pdf} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \\ \vspace{-0.18cm} \subfigure[RGB image]{\includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00043_clean.pdf}} \hspace{-0.15cm} \subfigure[$f^2_1$]{\includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00043_2_1.pdf}} \hspace{-0.15cm} \subfigure[$f^2_2$]{\includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00043_2_2.pdf}} \hspace{-0.15cm} \subfigure[$f^2_3$]{\includegraphics[height=0.7in, width=0.75in]{./mask/BGU_HS_00043_2_3.pdf}} \hspace{-0.15cm} \includegraphics[height=0.7in, width=0.18in]{./mask/colorbar_1.pdf} \caption{Pixel-wise weights generated by the mixing function in the FM block $\mathcal{F}^{2}$ on different images.
In each row, figures from left to right denote the input RGB image and the three generated weight maps corresponding to the basis functions $f^2_1$, $f^2_2$ and $f^2_3$, respectively. For visualization convenience, we normalize each weight map into the range [0,1] using the maximum and minimum values within each map.} \label{fig:multiple-weight} \vspace{-0.3cm} \end{figure} \subsection{Performance Evaluation} \paragraph{Performance comparison} Under the same experimental settings, we evaluate all these methods on the testing set of each benchmark dataset. The numerical results are reported in Table~\ref{table:numerical}. It can be seen that the DCNN based methods often produce more accurate results than the interpolation based or sparsity based SSR methods. For example, on the NTIRE2018 dataset, the RMSEs of Aitor and HSCNN+ are below 2.0, while those of BI and Arad are above 4.0. Nevertheless, the proposed method clearly outperforms these DCNN based competitors. For example, compared with the state-of-the-art HSCNN+, the proposed method reduces the RMSE by 0.43 and improves the PSNR by 0.67dB on the CAVE dataset. On the NTIRE2018 dataset, the decrease in RMSE is even up to 0.52 and the improvement in PSNR is up to 3.91dB. This stems from the ability of the proposed method to adaptively determine the receptive field size and the mapping function for each pixel. With such an ability, the proposed method can handle each pixel more flexibly. Moreover, since various mapping functions can be approximated by mixtures of the learned basis functions, the proposed method generalizes better to unknown pixels. In addition, as shown in Table~\ref{table:numerical}, the proposed method also performs better than the two baselines, \ie, DCNN and MCNet. For example, on the NTIRE2018 dataset, the PSNR obtained by the proposed method is higher than that of DCNN by 1.89dB and higher than that of MCNet by 0.86dB. Since the only difference between the proposed method and DCNN is the discrepancy between the convolutional block and the proposed FM block, the superiority of the proposed method demonstrates that the FM block is much more powerful than the plain convolutional block for SSR. Similarly, the advantage of the proposed method over MCNet shows that the proposed network architecture is more effective than the multi-column architecture for SSR. To further support these conclusions, we plot some visual super-resolution results of different methods on the three datasets in Figure~\ref{fig:visual-ntr}, Figure~\ref{fig:visual-cave} and Figure~\ref{fig:visual-harvard}. As can be seen, the super-resolution results of the proposed method contain more details and show less reconstruction error than those of the competitors. In addition, we sketch the recovered spectrum curves of the proposed method in Figure~\ref{fig:spectrum}. It can be seen that the spectra produced by the proposed method are very close to the ground truth. \vspace{-0.4cm} \paragraph{Pixel-wise mixing weights} In this study, we mix the outputs of the basis functions with pixel-wise weights to adaptively learn the pixel-wise mapping. To validate that the proposed method can effectively produce the pixel-wise weights as expected, we choose an example image from the NTIRE2018 dataset and visualize the pixel-wise weights produced in each FM block, as shown in Figure~\ref{fig:one-weight}. We find that: i) pixels from different categories or spatial positions are often given different weights.
For example, in the second weight map generated by $\mathcal{F}^1$, the weights for the pixels from 'road' are obviously smaller than those for the pixels from 'tree'. ii) Pixels from the same category tend to be given similar weights. For example, pixels from 'road' are given similar weights in each weight map in Figure~\ref{fig:one-weight} (a)(b). To further verify these two observations, we visualize the weight maps generated by the FM block $\mathcal{F}^2$ on some other images in Figure~\ref{fig:multiple-weight}, where a similar phenomenon can be observed. iii) Among the intermediate FM blocks (\ie, $\mathcal{F}^1$ and $\mathcal{F}^2$ in Figure~\ref{fig:one-weight}), the higher-level block (\eg, $\mathcal{F}^2$) can distinguish finer differences between pixels than the lower-level block (\eg, $\mathcal{F}^1$), viz., only highly similar pixels are assigned similar weights. iv) Since the ultimate output block $\mathcal{F}^3$ is forced to match the output, the weight difference between pixels from various categories in its weight maps is not as obvious as in the previous FM blocks (\eg, $\mathcal{F}^1$ and $\mathcal{F}^2$), as shown in Figure~\ref{fig:one-weight}(a)(b)(d). According to the above observations, we conclude that the proposed network can effectively generate the pixel-wise mixing weights and is thus able to determine the receptive field size and the mapping function for each pixel. \subsection{Ablation study} In this part, we carry out an ablation study on the NTIRE2018 dataset to demonstrate the effect of the different ingredients, the number of basis functions and the number of FM blocks on the proposed network. \begin{figure}[htbp] \centering \includegraphics[height=1.4in, width=1.6in]{./img/Train_curve.pdf} \includegraphics[height=1.4in, width=1.6in]{./img/Test_curve.pdf} \caption{Curves of the training loss and the test PSNR for the proposed method (\ie, 'Ours') and its two variants (\ie, 'Ours w/o mix', 'Ours w/o fusion') during training on the NTIRE2018 dataset.
(Ours w/o mix: without pixel-wise mixture; Ours w/o fusion: without intermediate feature fusion)} \label{fig:curves} \end{figure} \begin{table}\small \caption{Effect of the different ingredients (\ie, pixel-wise mixture \& intermediate feature fusion) in the proposed network.} \label{table:ingredient} \renewcommand{\arraystretch}{1.1} \begin{center} \begin{tabular}{l|c|c|c|c} \hline Methods & RMSE & PSNR & SAM & SSIM\\ \hline Ours w/o mix & 1.10 & 48.44 & 1.16 & 0.9950\\ Ours w/o fusion & 1.05 & 48.97 & 1.09 & 0.9953\\ Ours & {\textbf{1.03}} & {\textbf{49.29}} & {\textbf{1.05}} & {\textbf{0.9955}}\\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \end{table} \begin{table}\small \caption{Effect of the number $n$ of basis functions.} \label{table:n_num} \renewcommand{\arraystretch}{1.1} \begin{center} \begin{tabular}{l|c|c|c|c} \hline Methods & RMSE & PSNR & SAM & SSIM\\ \hline Ours ($n=$1) & 1.47 & 45.82 & 1.57 & 0.9913\\ Ours ($n=$2) & 1.08 & 48.76 & 1.10 & 0.9952\\ Ours ($n=$3)& 1.03 & 49.29 & 1.05 & 0.9955\\ Ours ($n=$5) & {\textbf{0.98}} & {\textbf{49.87}} & {\textbf{1.00}} & {\textbf{0.9958}}\\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \end{table} \begin{table}\small \caption{Effect of the number $p$ of FM blocks.} \label{table:p_num} \renewcommand{\arraystretch}{1.1} \begin{center} \begin{tabular}{l|c|c|c|c} \hline Methods & RMSE & PSNR & SAM & SSIM\\ \hline Ours ($p=$2) & 1.05 & 48.95 & 1.09 & 0.9954\\ Ours ($p=$3) & 1.03 & 49.29 & 1.05 & 0.9955\\ Ours ($p=$4)& 1.05 & 49.42 & 1.05 & 0.9954\\ Ours ($p=$6) & {\textbf{1.00}} & {\textbf{49.59}} & {\textbf{1.02}} & {\textbf{0.9956}}\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \end{table} \vspace{-0.4cm} \paragraph{Effect of Different Ingredients} In the proposed FM network, there are two important ingredients, namely the pixel-wise mixture and the intermediate feature fusion. To demonstrate their effect, we compare the proposed method with two variants. One (\ie, 'Ours w/o mix') disables the pixel-wise mixture in the proposed network, which amounts to mixing the outputs of the basis functions with equal weights; the other (\ie, 'Ours w/o fusion') disables the intermediate feature fusion, \ie, removes the skip connections as well as the FM block $\mathcal{F}_c$. We plot the training loss curves and the testing PSNR curves of these three methods in Figure~\ref{fig:curves}. As can be seen, the proposed method obtains the smallest training loss and the highest testing PSNR. More numerical results are reported in Table~\ref{table:ingredient}, where the proposed method again clearly outperforms the two variants. This demonstrates that both the pixel-wise mixture and the intermediate feature fusion are crucial for the proposed network. \vspace{-0.4cm} \paragraph{Effect of the Number of Basis Functions} In the above experiments, we fix the number of basis functions in each FM block as $n=3$. Intuitively, increasing $n$ will enlarge the expressive capacity of the basis functions and thus lead to better performance, and vice versa. To validate this, we evaluate the proposed method on the NTIRE2018 dataset using different $n$, \ie, $n=$1, 2, 3 and 5. The obtained numerical results are provided in Table~\ref{table:n_num}. As can be seen, the reconstruction accuracy gradually increases as the number $n$ of basis functions increases.
When $n=$1, the proposed method degenerates to a network of plain convolutional blocks, which shows the lowest reconstruction accuracy in Table~\ref{table:n_num}. When $n$ increases to $5$, the obtained RMSE is even lower than 1.0 and the PSNR is close to 50dB. However, there is no free lunch in our case: a larger $n$ often results in higher computational complexity. Therefore, we can balance accuracy and efficiency by tuning $n$, which makes it possible to customize the proposed network for a specific device. \vspace{-0.4cm} \paragraph{Effect of the Number of FM Blocks} In addition to the number of basis functions, the model complexity of the proposed method also depends on the number $p$ of FM blocks. To demonstrate the effect of $p$, we evaluate the proposed method on the NTIRE2018 dataset using different numbers of FM blocks, \ie, $p$=2, 3, 4 and 6. The obtained numerical results are reported in Table~\ref{table:p_num}. Similar to the case of $n$, the performance of the proposed method gradually improves as the number $p$ of FM blocks increases. Interestingly, we find that increasing $n$ may be more effective than increasing $p$ in terms of boosting the performance of the proposed method. \section{Conclusion} In this study, to flexibly handle the pixels from different categories or spatial positions in HSIs and consequently improve the performance, we present a pixel-aware deep function-mixture network for SSR, which is composed of multiple FM blocks. Each FM block consists of one mixing function and some basis functions, which are implemented as parallel DCNN based subnets. The basis functions take different-sized receptive fields and learn distinct mapping schemes, while the mixing function generates pixel-wise weights to linearly mix the outputs of all these basis functions. This makes it possible to determine the receptive field size and the mapping function for each pixel. Moreover, we stack several such FM blocks in the network to further increase its flexibility in learning the pixel-wise mapping. To boost the SSR performance, we also fuse the intermediate features generated by the FM blocks for feature reuse. With extensive experiments on three benchmark SSR datasets, the proposed method shows superior performance over several existing state-of-the-art competitors. It is worth noting that this study employs a linear mixture to approximate the pixel-wise mapping function. In the future, it would be interesting to exploit non-linear mixtures. In addition, it is promising to generalize the idea of this study to other tasks requiring pixel-wise modelling, \eg, semantic segmentation, colorization \etc {\small \bibliographystyle{ieee}
\section{Introduction} As a far-reaching generalisation of the situation in $3$-dimensional real elliptic geometry, H.~Karzel, H.-J.~Kroll and K.~S\"{o}rensen coined the notion of a \emph{projective double space}, that is, a projective space ${\mathbb P}$ together with a \emph{left parallelism} $\mathrel{\parallel_{\ell}}$ and a \emph{right parallelism} $\mathrel{\parallel_{r}}$ on the line set of ${\mathbb P}$ such that---loosely speaking---all ``mixed parallelograms'' are closed \cite{kks-73}, \cite{kks-74}. It is common to address the given parallelisms as the \emph{Clifford parallelisms} of the projective double space. We shall not be concerned with the particular case where ${\mathrel{\parallel_{\ell}}} = {\mathrel{\parallel_{r}}}$, which can only happen over a ground field of characteristic two. All other projective double spaces are three-dimensional and they can be obtained algebraically in terms of a quaternion skew field $H$ with centre $F$ by considering the projective space ${\mathbb P}(H_F)$ on the vector space $H$ over the field $F$ and defining $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$ via left and right multiplication in $H$. (See \cite{blunck+p+p-10a}, \cite{havl-15}, \cite{havl-16a}, \cite[pp.~75--76]{karz+k-88} and the references given there.) In their work \cite{blunck+p+p-10a} about generalisations of Clifford parallelism, A.~Blunck, S.~Pianta and S.~Pasotti pointed out that a projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$ may be equipped in a natural way with so-called \emph{Clifford-like} parallelisms, namely parallelisms for which each equivalence class is either a class of left parallel lines or a class of right parallel lines. The exposition of this topic in \cite{havl+p+p-19a} serves as a major basis for this article. \par Our main objective is to describe the group of all collineations that preserve a given Clifford-like parallelism $\parallel$ of a projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$. Since we work most of the time in terms of vector spaces, we shall consider instead the underlying group $\ensuremath{\Gamma_\parallel}$ of all $\parallel$-preserving semilinear transformations of the vector space $H_F$, which we call the \emph{automorphism group} of the given parallelism. In a first step we focus on the \emph{linear automorphisms} of $\parallel$. We establish in Theorem~\ref{thm:cl-aut-lin} that the group of all these linear automorphisms does not depend on the choice of $\parallel$ among all Clifford-like parallelisms of $\bigl(\bPH,{\lep},{\rip}\bigr)$. Since $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$ are also Clifford-like, it is impossible to characterise Clifford parallelism in terms of its linear automorphism group in our general setting of an arbitrary quaternion skew field. On the other hand, there are projective double spaces in which there are no Clifford-like parallelisms other than their Clifford parallelisms. This happens, for instance, if $H$ is chosen to be the skew field of Hamilton's quaternions over the real numbers. (It is worth noting that D.~Betten, R.~L\"{o}wen and R.~Riesinger characterised Clifford parallelism among the topological parallelisms of the $3$-dimensional real projective space by its (linear) automorphism group in \cite{bett+l-17a}, \cite{bett+r-14a}, \cite{loew-17y}, \cite{loew-18z}, \cite{loew-17z}.) The next step is to consider the (full) automorphism group $\ensuremath{\Gamma_\parallel}$.
Here the situation is more intricate, since in general the group depends on the underlying quaternion skew field as well as the choice of $\parallel$. We know from previous work of S.~Pianta and E.~Zizioli (see \cite{pian-87b} and \cite{pz90-coll}) that the left and right Clifford parallelism of $\bigl(\bPH,{\lep},{\rip}\bigr)$ share the same automorphism group, say $\Autle$. According to Corollary~\ref{cor:aut-not-ex}, $\Autle$ cannot be a proper subgroup of $\ensuremath{\Gamma_\parallel}$. In Section~\ref{sect:exa}, we construct a series of examples showing that over certain quaternion skew fields it is possible to choose $\parallel$ in such a way that $\ensuremath{\Gamma_\parallel}$ is either properly contained in $\ensuremath{\Gamma_{\ell}}$ or coincides with $\ensuremath{\Gamma_{\ell}}$ even though ${\parallel}\neq{\mathrel{\parallel_{\ell}},\mathrel{\parallel_{r}}}$. \par One open problem remains: Is there a projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$ that admits a Clifford-like parallelism $\parallel$ for which none of the groups $\ensuremath{\Gamma_\parallel}$ and $\ensuremath{\Gamma_{\ell}}$ is contained in the other one? \section{Basic notions and results}\label{sect:basics} Let ${\mathbb P}$ be a projective space with line set ${\mathcal L}$. We recall that a \emph{parallelism} on ${\mathbb P}$ is an equivalence relation on ${\mathcal L}$ such that each point of ${\mathbb P}$ is incident with precisely one line from each equivalence class. We usually denote a parallelism by the symbol $\parallel$. For each line $M\in{\mathcal L}$ we then write ${\mathcal S}(M)$ for the equivalence class of $M$, which is also addressed as the \emph{parallel class} of $M$. Any such parallel class is a spread (of lines) of ${\mathbb P}$, that is, a partition of the point set of ${\mathbb P}$ by lines. When dealing with several parallelisms at the same time we add some subscript or superscript to the symbols $\parallel$ and ${\mathcal S}$. The seminal book \cite{john-10a} covers the literature about parallelisms up to the year 2010. For the state of the art, various applications, connections with other areas of geometry and historical remarks, we refer also to \cite{betta-16a}, \cite{bett+r-12a}, \cite{cogl-15a}, \cite{havl+r-17a}, \cite{karz+k-88}, \cite{loew-18z}, \cite{topa+z-18z} and the references therein. \par The following simple observation, which seems to be part of the folklore, will be useful. \par \begin{lem}\label{lem:invar} Let\/ ${\mathbb P}$ and\/ ${\mathbb P}'$ be projective spaces with parallelisms\/ $\parallel$ and $\parallel'$, respectively. Suppose that $\kappa$ is a collineation of\/ ${\mathbb P}$ to\/ ${\mathbb P}'$ such that any two $\parallel$-parallel lines go over to $\parallel'$-parallel lines. Then $\kappa$ takes any\/ $\parallel$-class to a\/ $\parallel'$-class. \end{lem} \begin{proof} In ${\mathbb P}'$, the $\kappa$-image of any $\parallel$-class is a spread that is contained in a spread, namely some $\parallel'$-class. Any proper subset of a spread fails to be a spread, whence the assertion follows. \end{proof} Let $H$ be a quaternion skew field with centre $F$; see, for example, \cite[pp.~103--105]{draxl-83} or \cite[pp.~46--48]{tits+w-02a}. If $E$ is a subfield of $H$ then $H$ is a left vector space and a right vector space over $E$. These spaces are written as ${}_E H$ and $H_E$, respectively. We do not distinguish between $_E H$ and $H_E$ whenever $E\subseteq F$. Given any $x\in H$ we denote by $\ol x$ the \emph{conjugate quaternion} of $x$. 
Then $x=\ol x$ holds precisely when $x\in F$. We write $\tr(x)=x+\ol x\in F$ for the \emph{trace} of $x$ and $N(x)=\ol x x= x\ol x\in F$ for the \emph{norm} of $x$. We have the identity \begin{equation}\label{eq:x1} x^2-\tr(x)x+N(x) = 0 . \end{equation} In $H_F$, the symmetric bilinear form associated to the quadratic form $N\colon H\to F$ is \begin{equation}\label{eq:<,>} \li\,\cdot\,,\cdot\,\re\colon H\x H\to F\colon (x,y)\mapsto \langle x,y\rangle =\tr(x\ol{y})=x\ol{y}+y\ol{x} . \end{equation} \par Let $\alpha$ be an automorphism of the quaternion skew field $H$. Then $\alpha(F)=F$ and so $\alpha$ is a semilinear transformation of the vector space $H_{F}$ with $\alpha_{|F}\colon F\to F$ being its accompanying automorphism. Furthermore, \begin{equation}\label{eq:tr+N} \forall\, x\in H\colon \tr\bigl(\alpha(x)\bigr)=\alpha\bigl(\tr(x)\bigr),\; N\bigl(\alpha(x)\bigr)=\alpha\bigl(N(x)\bigr), \; \ol{\alpha(x)}=\alpha(\ol{x}) . \end{equation} This is immediate for all $x\in F$, since here $\tr(x)=2x$, $N(x)=x^2$, and $\ol x = x$. For all $x\in H\setminus F$ the equations in \eqref{eq:tr+N} follow by applying $\alpha$ to \eqref{eq:x1} and by taking into account that $\alpha(x^2)=\alpha(x)^2$ can be written in a unique way as an $F$-linear combination of $\alpha(x)$ and $1$. \par The \emph{projective space} $\bPH$ is understood to be the set of all subspaces of $H_F$ with \emph{incidence} being symmetrised inclusion. We adopt the usual geometric terms: \emph{points}, \emph{lines} and \emph{planes} are the subspaces of $H_F$ with vector dimension one, two, and three, respectively. We write $\cLH$ for the line set of $\bPH$. The \emph{left parallelism} $\mathrel{\parallel_{\ell}}$ on $\cLH$ is defined by letting $M_1\mathrel{\parallel_{\ell}} M_2$ precisely when there is a $g\in H^*:=H\setminus\{0\}$ with $gM_1=M_2$. The \emph{right parallelism} $\mathrel{\parallel_{r}}$ is defined in the same fashion via $M_1g=M_2$. Then $\bigl(\bPH,{\lep},{\rip}\bigr)$ is a \emph{projective double space} with $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$ being its \emph{Clifford parallelisms} (see \cite{kks-73}, \cite{kks-74}, \cite[pp.~75--76]{karz+k-88}). A parallelism $\parallel$ of $\bigl(\bPH,{\lep},{\rip}\bigr)$ is \emph{Clifford-like} if each $\parallel$-class is a left or a right parallel class (see Def.~3.2 of \cite{havl+p+p-19a}, where the construction of Clifford-like parallelisms appears frequently in the more general framework of ``blending''; this point of view will be disregarded here). Any Clifford-like parallelism $\parallel$ of $\bigl(\bPH,{\lep},{\rip}\bigr)$ admits the following explicit description: \begin{thm}[{see \cite[Thm.~4.10]{havl+p+p-19a}}]\label{thm:cliff-like} In $\bigl(\bPH,{\lep},{\rip}\bigr)$, let $\cA(H_F)\subset\cLH$ denote the star of lines with centre $F1$, let ${\mathcal F}$ be any subset of $\cA(H_F)$, and define a relation $\parallel$ on ${\mathcal L}(H_F)$ by taking the left parallel classes of all lines in $\cal F$ and the right parallel classes of all lines in $\cA(H_F)\setminus{\mathcal F}$. This will be an equivalence relation (and hence, a parallelism) if, and only if, the defining set $\cal F$ is invariant under the inner automorphisms of $H$. \end{thm} We note that ---from an algebraic point of view--- the lines from $\cA(H_F)$ are precisely the maximal subfields of the quaternion skew field $H$. \par Let $\parallel$ be any parallelism on $\bPH$.
We denote by $\ensuremath{\Gamma_\parallel}$ the set of all mappings from $\GammaL(H_F)$ that act on $\bPH$ as $\parallel$-preserving collineations. By Lemma~\ref{lem:invar}, $\ensuremath{\Gamma_\parallel}$ is a subgroup of $\GammaL(H_F)$ and we shall call it the \emph{automorphism group} of the parallelism $\parallel$. Even though we are primarily interested in the group of all $\parallel$-preserving collineations of $\bPH$, which is a subgroup of $\PGammaL(H_F)$, we investigate instead the corresponding group $\ensuremath{\Gamma_\parallel}$. The straightforward task of rephrasing our findings about $\ensuremath{\Gamma_\parallel}$ in projective terms is usually left to the reader. \par The Clifford parallelisms of the projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$ give rise to automorphism groups $\Gamma_{\mathrel{\parallel_{\ell}}}=:\ensuremath{\Gamma_{\ell}}$ and $\Gamma_{\mathrel{\parallel_{r}}}=:\ensuremath{\Gamma_{r}}$. We recall from \cite[p.~166]{pian-87b} that \begin{equation}\label{eq:le=ri} \ensuremath{\Gamma_{\ell}} = \ensuremath{\Gamma_{r}}. \end{equation} Equation~\eqref{eq:le=ri} is based on the following noteworthy geometric result. In $\bigl(\bPH,{\lep},{\rip}\bigr)$, the right (left) parallelism can be defined in terms of incidence, non-incidence and left (right) parallelism. See, for example, \cite[pp.~75--76]{karz+k-88} or make use of the (much more general) findings in \cite[\S6]{herz-77a}, which are partly summarised in \cite{herz-77b} and \cite{herz-80a}. In order to describe the group $\Autle$ more explicitly, we consider several other groups. First, the group of all \emph{left translations} $\lambda_g\colon H\to H\colon x\mapsto g x$, $g\in H^*$, is precisely the group $\ensuremath{\GL(H_H)}$. The group $\ensuremath{\GL(H_H)}$ is contained in $\GL(H_F)$ and it acts regularly on $H^*$. Next, the automorphism group $\Aut(H)$ of the skew field $H$ is a subgroup of $\GammaL(H_F)$. Finally, we write $\inner{H^*}$ for the group of all inner automorphisms $\tilde{h} \colon H\to H \colon x\mapsto h^{-1}x h$, $h\in H^*$, and so $\inner {H^*}$ is a subgroup of $\GL(H_F)$. According to \cite[Thm.~1]{pian-87b} and \cite[Prop.~4.1 and 4.2]{pz90-coll}\footnote{We wish to note here that Prop.~4.3 of \cite{pz90-coll} is not correct, since the group $\overbracket{K}$ from there in general is not a subgroup of $\Aut(H)$.}, \begin{equation}\label{eq:semidir1} \Autle = \ensuremath{\GL(H_H)} \rtimes \Aut(H) = \GammaL(H_H). \end{equation} By symmetry of `left' and `right', \eqref{eq:semidir1} implies $\ensuremath{\Gamma_{r}} = \GL({}_H H) \rtimes \Aut(H)=\GammaL({}_H H)$, where $\GL({}_H H)$ is the group of \emph{right translations}. Note that $\GammaL(H_H)=\GammaL({}_H H)$. From this fact \eqref{eq:le=ri} follows once more and in an algebraic way. By virtue of the Skolem-Noether theorem \cite[Thm.~4.9]{jac-89}, the $F$-linear skew field automorphisms of $H$ are precisely the inner automorphisms. We therefore obtain from \eqref{eq:semidir1} that \begin{equation}\label{eq:semidir2} \Autle\cap \GL(H_F) = \ensuremath{\GL(H_H)} \rtimes \inner{H^*}. \end{equation} The subgroups of $\Autle$ and $\Autle\cap\GL(H_F)$ that stabilise $1\in H$ are the groups $\Aut(H)$ and $\inner{H^*}$, respectively. \begin{rem} The natural homomorphism $\GL(H_F)\to\PGL(H_F)$ sends the group from \eqref{eq:semidir2} to the group of all $\mathrel{\parallel_{\ell}}$-preserving projective collineations of $\bPH$. 
This collineation group can be written as the \emph{direct product} of two (isomorphic) subgroups, namely the image of the group of left translations $\GL(H_H)$ and the image of the group of right translations $\GL({}_H H)$ under the natural homomorphism. \end{rem} \par If $\alpha\colon H\to H$ is an antiautomorphism of the quaternion skew field $H$, then $\alpha\in\GammaL(H_F)$ and $\alpha$ takes left (right) parallel lines to right (left) parallel lines. In particular, the conjugation $\overline{(\cdot)}\colon H\to H$ is an $F$-linear antiautomorphism of $H$. Therefore, the set \begin{equation}\label{eq:semilinswap} \bigl(\ensuremath{\GL(H_H)} \rtimes \Aut(H)\bigr)\circ{\overline{(\cdot)}} \end{equation} comprises precisely those mappings in $\GammaL(H_F)$ that interchange the left with the right Clifford parallelism. The analogous subset of $\GL(H_F)$ is given by \begin{equation*}\label{eq:linswap} \bigl(\ensuremath{\GL(H_H)} \rtimes \inner{H^*}\bigr)\circ{\overline{(\cdot)}} . \end{equation*} Alternative proofs of the previous results can be found in \cite[Sect.~4]{blunck+k+s+s-17z}. \section{Automorphisms} Throughout this section, we always assume $\parallel$ to be a Clifford-like parallelism of $\bigl(\bPH,{\lep},{\rip}\bigr)$ as described in Section~\ref{sect:basics}. Our aim is to determine the group $\ensuremath{\Gamma_\parallel}$ of automorphisms of $\parallel$. In a first step we focus on the transformations appearing in \eqref{eq:semidir1} and \eqref{eq:semilinswap}. \begin{prop}\label{prop:par-preserv} Let\/ $\parallel$ be a Clifford-like parallelism of\/ $\bigl(\bPH,{\lep},{\rip}\bigr)$. Then the following assertions hold. \begin{enumerate} \item\label{prop:par-preserv.a} An automorphism $\alpha\in\Aut(H)$ preserves\/ $\parallel$ if, and only if, $\alpha({\mathcal F})={\mathcal F}$. \item\label{prop:par-preserv.b} An antiautomorphism $\alpha$ of the quaternion skew field $H$ preserves\/ $\parallel$ if, and only if, $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. \item\label{prop:par-preserv.c} For all $h\in H^*$, the inner automorphism $\tilde{h}$ preserves\/ $\parallel$. \item\label{prop:par-preserv.d} For all $g\in H^*$, the left translation $\lambda_g$ preserves\/ $\parallel$. \item\label{prop:par-preserv.e} If $\beta\in\GL(H_F)$ preserves $\mathrel{\parallel_{\ell}}$, then $\beta$ preserves also $\parallel$. \end{enumerate} \end{prop} \begin{proof} \eqref{prop:par-preserv.a} We read off from $\alpha(1)=1$ that $\alpha\bigl(\cA(H_F)\bigr)=\cA(H_F)$ and from \eqref{eq:semidir1} that $\alpha\in\Aut(H)\subset\Autle$. The assertion now is an immediate consequence of Theorem \ref{thm:cliff-like}. \par \eqref{prop:par-preserv.b} The proof follows the lines of \eqref{prop:par-preserv.a} taking into account that $\alpha$ interchanges the left with the right parallelism. \par \eqref{prop:par-preserv.c} \cite[Thm.~4.10]{havl+p+p-19a} establishes $\tilde{h}({\mathcal F})={\mathcal F}$. Applying \eqref{prop:par-preserv.a} we get $\tilde{h}\in\ensuremath{\Gamma_\parallel}$. \par \eqref{prop:par-preserv.d} Choose any $\parallel$-class, say ${\mathcal S}(L)$ with $L\in\cA(H_F)$. In order to verify that $\lambda_g\bigl({\mathcal S}(L)\bigr)$ is also a $\parallel$-class, we first observe that \eqref{eq:semidir1} gives $\lambda_g\in\ensuremath{\GL(H_H)}\subset\Autle$. Next, we distinguish two cases. 
If $L\in{\mathcal F}$, then, by Theorem \ref{thm:cliff-like}, ${\mathcal S}(L)={\mathcal S}_{\ell}(L)$ and so $\lambda_g\bigl({\mathcal S}(L)\bigr)=\lambda_g\bigl({\mathcal S}_{\ell}(L)\bigr) ={\mathcal S}_{\ell}(g L)={\mathcal S}_{\ell}(L)={\mathcal S}(L)$. If $L\in\cA(H_F)\setminus{\mathcal F}$, then, by Theorem \ref{thm:cliff-like}, ${\mathcal S}(L)={\mathcal S}_{r}(L)$. Furthermore, \eqref{prop:par-preserv.c} gives $g L g^{-1}\in\cA(H_F)\setminus{\mathcal F}$. By virtue of these results and \eqref{eq:le=ri}, we obtain $\lambda_g\bigl({\mathcal S}(L)\bigr)=\lambda_g\bigl({\mathcal S}_{r}(L)\bigr) ={\mathcal S}_{r}(g L)={\mathcal S}_{r}( g L g^{-1})={\mathcal S}(g L g^{-1})$. \par \eqref{prop:par-preserv.e} By \eqref{eq:semidir2}, there exist $g,h\in H^*$ such that $\beta=\lambda_g\circ\tilde{h}$. We established already in \eqref{prop:par-preserv.d} and \eqref{prop:par-preserv.c} that $\lambda_g,\tilde{h}\in\ensuremath{\Gamma_\parallel}$, which entails $\beta\in\ensuremath{\Gamma_\parallel}$. \end{proof} We proceed with a lemma that, apart from the quaternion formalism, follows easily from \cite[Thm.~1.10, Thm~1.11]{luen-80a}; those theorems are about spreads, their kernels and their corresponding translation planes. We follow instead the idea of proof used in \cite[Thm.~4.3]{blunck+k+s+s-17z}. \begin{lem}\label{lem:preauto} Let $L\in\cA(H_F)$ and $\alpha\in\GammaL(H_F)$ be given such that $\alpha(1)=1$ and such that $\alpha$ takes one of the two parallel classes ${\mathcal S}_{\ell}(L)$, ${\mathcal S}_{r}(L)$ to one of the two parallel classes ${\mathcal S}_{\ell}\bigl(\alpha(L)\bigr)$, ${\mathcal S}_{r}\bigl(\alpha(L)\bigr)$. Then \begin{equation}\label{eq:preauto1234} \forall\,x\in H,\;z\in L \colon \left\{\renewcommand\arraystretch{1.05} \begin{array}{l} \alpha(xz)= \left\{ \begin{array}{l@{\mbox{~~~if~~~}}l} \alpha(x)\alpha(z) & \alpha\bigl({\mathcal S}_{\ell}(L)\bigr)={\mathcal S}_{\ell}\bigl(\alpha(L)\bigr);\\ \alpha(z)\alpha(x) & \alpha\bigl({\mathcal S}_{\ell}(L)\bigr)={\mathcal S}_{r}\bigl(\alpha(L)\bigr); \end{array} \right.\\ \alpha(zx)= \left\{ \begin{array}{l@{\mbox{~~~if~~~}}l} \alpha(x)\alpha(z) & \alpha\bigl({\mathcal S}_{r}(L)\bigr)={\mathcal S}_{\ell}\bigl(\alpha(L)\bigr);\\ \alpha(z)\alpha(x) & \alpha\bigl({\mathcal S}_{r}(L)\bigr)={\mathcal S}_{r}\bigl(\alpha(L)\bigr). \end{array} \right. \end{array} \right. \end{equation} \end{lem} \begin{proof} First, let us suppose that $\alpha$ takes the \emph{left} parallel class ${\mathcal S}_{\ell}(L)$ to the \emph{left} parallel class ${\mathcal S}_{\ell}\bigl(\alpha(L)\bigr)$. We consider $H$, on the one hand, as a $2$-dimensional \emph{right} vector space $H_{L}$ and, on the other hand, as a $2$-dimensional \emph{right} vector space $H_{\alpha(L)}$. By our assumption, $\alpha$ takes ${\mathcal S}_{\ell}(L)=\{g L \mid g\in H^*\}$ to ${\mathcal S}_{\ell}\bigl(\alpha(L)\bigr)=\{g'\alpha(L)\mid g'\in H^*\}$, \emph{i.e.}, the set of one-dimensional subspaces of $H_{L }$ goes over to the set of one-dimensional subspaces of $H_{\alpha(L)}$. Since $\alpha$ is additive, it is a collineation of the affine plane on $H_{L }$ to the affine plane on $H_{\alpha(L)}$. From $\alpha(0)=0$ and the Fundamental Theorem of Affine Geometry, $\alpha$ is a semilinear transformation of $H_{L }$ to $H_{\alpha(L)}$. Let $\phi_{L}\colon L \to \alpha(L)$ be its accompanying isomorphism of fields. 
From $\alpha(1)=1$, we obtain $\alpha(z)=\alpha(1z)=\alpha(1)\phi_{L}(z) =\phi_{L}(z)$ for all $z\in L$, whence the $\phi_{L}$-semilinearity of $\alpha$ can be rewritten as \begin{equation}\label{eq:preauto} \forall\,x\in H,\;z\in L \colon \alpha(xz)=\alpha(x)\alpha(z) . \end{equation} \par Next, suppose that $\alpha$ takes the \emph{left} parallel class ${\mathcal S}_{\ell}(L)$ to the \emph{right} parallel class ${\mathcal S}_{r}\bigl(\alpha(L)\bigr)$. We proceed as above except for $H_{\alpha(L)}$, which is replaced by the 2-dimensional \emph{left} vector space $_{\alpha(L)}H$. In this way all products of $\alpha$-images have to be rewritten in reverse order so that the equation in \eqref{eq:preauto} changes to $\alpha(xz)=\alpha(z)\alpha(x)$. \par There remain the cases when $\alpha$ takes ${\mathcal S}_{r}(L)$ to ${\mathcal S}_{\ell}\bigl(\alpha(L)\bigr)$ or ${\mathcal S}_{r}\bigl(\alpha(L)\bigr)$. Accordingly, the equation in \eqref{eq:preauto} takes the form $\alpha(zx)=\alpha(x)\alpha(z)$ or $\alpha(zx)=\alpha(z)\alpha(x)$. \end{proof} We now establish that any $\alpha\in\ensuremath{\Gamma_\parallel}$ fixing $1$ satisfies precisely one of the two properties concerning $\alpha({\mathcal F})$, as appearing in Proposition~\ref{prop:par-preserv}~\eqref{prop:par-preserv.a} and \eqref{prop:par-preserv.b}. Afterwards, we will be able to show that any such $\alpha$ is actually an automorphism or antiautomorphism of the skew field $H$. \begin{prop}\label{prop:oneline} Let $\alpha\in\ensuremath{\Gamma_\parallel}$ be such that $\alpha(1)=1$. If there exists a line $L\in\cA(H_F)$ such that ${\mathcal S}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)$ are of the same kind, that is, both are left or both are right parallel classes, then $\alpha({\mathcal F})={\mathcal F}$. Similarly, if there exists a line $L\in\cA(H_F)$ such that ${\mathcal S}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)$ are of different kind, then $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. \end{prop} \begin{proof} First, let us suppose that ${\mathcal S}(L)={\mathcal S}_{\ell}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)={\mathcal S}_{\ell}\bigl(\alpha(L)\bigr)$. This means that $L$ and $\alpha(L)$ are in ${\mathcal F}$. We proceed by showing $\alpha({\mathcal F})\subseteq{\mathcal F}$. If this were not the case, then a line $L'\in{\mathcal F}$ would exist such that $\alpha(L')\in\cA(H_F)\setminus{\mathcal F}$, that is, ${\mathcal S}\bigl(\alpha(L')\bigr)={\mathcal S}_{r}\bigl(\alpha(L')\bigr)$. Furthermore, there would exist quaternions $e\in L\setminus F$, $e'\in L'\setminus F$ and we would have $e'e\neq e e'$. By Lemma~\ref{lem:preauto}, applied to $L$ and also to $L'$, we would finally obtain $\alpha(e'e)=\alpha(e')\alpha(e)=\alpha(e e')$, which is absurd due to $\alpha$ being injective. The same kind of reasoning can be applied to $\alpha^{-1}\in\ensuremath{\Gamma_\parallel}$, whence $\alpha^{-1}({\mathcal F})\subseteq {\mathcal F}$. Summing up, we have shown $\alpha({\mathcal F})={\mathcal F}$ in our first case. \par The case when ${\mathcal S}(L)={\mathcal S}_{r}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)={\mathcal S}_{r}\bigl(\alpha(L)\bigr)$ can be treated in an analogous way and leads us to $\alpha\bigl(\cA(H_F)\setminus{\mathcal F}\bigr)=\cA(H_F)\setminus{\mathcal F}$. Clearly, this is equivalent to $\alpha({\mathcal F})={\mathcal F}$. 
\par Let us now suppose that ${\mathcal S}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)$ are of different kind, that is, one of them is a left and the other one is a right parallel class. Then, by making the appropriate changes in the reasoning above, we obtain $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. \end{proof} On the basis of our previous results, we now establish our two main theorems. \begin{thm}\label{thm:cl-aut} Let\/ $\parallel$ be a Clifford-like parallelism of\/ $\bigl(\bPH,{\lep},{\rip}\bigr)$. Then a semilinear transformation $\beta\in\GammaL(H_F)$ preserves\/ $\parallel$ if, and only if, it can be written in the form \begin{equation}\label{eq:cl-aut} \beta = \lambda_{\beta(1)}\circ \alpha, \end{equation} where $\lambda_{\beta(1)}$ denotes the left translation of $H$ by $\beta(1)$ and $\alpha$ either is an automorphism of the quaternion skew field $H$ satisfying $\alpha({\mathcal F})={\mathcal F}$ or an antiautomorphism of $H$ satisfying $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. \end{thm} \begin{proof} If $\beta$ can be factorised as in \eqref{eq:cl-aut}, then $\beta\in\ensuremath{\Gamma_\parallel}$ follows from Proposition~\ref{prop:par-preserv}~\eqref{prop:par-preserv.a}, \eqref{prop:par-preserv.b}, and \eqref{prop:par-preserv.d}. \par In order to verify the converse, we define $\alpha:=\lambda_{\beta(1)}^{-1}\circ\beta$. Then $\alpha(1)=1$ and $\alpha\in\ensuremath{\Gamma_\parallel}$ by Proposition \ref{prop:par-preserv}~\eqref{prop:par-preserv.d}. We now distinguish two cases. \par \emph{Case}~(i). There exists a line $L\in\cA(H_F)$ such that ${\mathcal S}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)$ are of the same kind. We claim that under these circumstances $\alpha\in\Aut(H)$. \par First, we confine ourselves to the subcase ${\mathcal S}(L)={\mathcal S}_{\ell}(L)$. By the theorem of Cartan-Brauer-Hua \cite[(13.17)]{lam-01a}, there is an $h\in H^*$ such that $L':=h^{-1}Lh\neq L$. From Proposition~\ref{prop:par-preserv}~{\eqref{prop:par-preserv.c}}, ${\mathcal S}(L')=\tilde{h}\bigl({\mathcal S}(L)\bigr)$ is a left parallel class and, from Proposition~\ref{prop:oneline}, the same holds for $\alpha\bigl({\mathcal S}(L')\bigr)$. There exists an $e'\in L'\setminus L$ and, consequently, the elements $1,e'$ constitute a basis of $H_{L}$. Given arbitrary quaternions $x,y$ we may write $y=z_0+e' z_1$ with $z_0,z_1\in L$. By virtue of Lemma~\ref{lem:preauto}, we obtain the intermediate result \begin{equation}\label{eq:z-e'} \forall\, x\in H,\; z\in L \colon \alpha(xz)=\alpha(x)\alpha(z),\; \alpha(x e')=\alpha(x)\alpha(e'). \end{equation} Using repeatedly the additivity of $\alpha$ and \eqref{eq:z-e'} gives \begin{equation}\label{eq:auto} \begin{aligned} \alpha(xy)& =\alpha(x z_0) + \alpha\bigl((xe')z_1\bigr) =\alpha(x)\alpha(z_0) + \alpha(xe')\alpha(z_1)\\ & =\alpha(x)\bigl(\alpha(z_0) + \alpha(e')\alpha(z_1)\bigr) =\alpha(x)\bigl(\alpha(z_0) + \alpha(e' z_1)\bigr) =\alpha(x)\alpha(y). \end{aligned} \end{equation} Thus $\alpha$ is an automorphism of $H$. \par The subcase ${\mathcal S}(L)={\mathcal S}_{r}(L)$ can be treated in an analogous way. It suffices to replace $H_L$ with ${}_L H$ and to revert the order of the factors in all products appearing in \eqref{eq:z-e'} and \eqref{eq:auto}. \par \emph{Case}~(ii). There exists a line $L\in\cA(H_F)$ such that ${\mathcal S}(L)$ and $\alpha\bigl({\mathcal S}(L)\bigr)$ are of different kind. 
Then, by reordering certain factors appearing in Case~(i) in the appropriate way, the mapping $\alpha$ turns out to be an antiautomorphism of $H$. \par Altogether, since there exists a line in $\cA(H_F)$, $\alpha$ is an automorphism or an antiautomorphism of $H$. Accordingly, from Proposition~\ref{prop:par-preserv}~\eqref{prop:par-preserv.a} or \eqref{prop:par-preserv.b}, $\alpha({\mathcal F})={\mathcal F}$ or $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. \end{proof} \begin{thm}\label{thm:cl-aut-lin} Let\/ $\parallel$ be a Clifford-like parallelism of\/ $\bigl(\bPH,{\lep},{\rip}\bigr)$. Then the group\/ ${\ensuremath{\Gamma_\parallel}}\cap\GL(H_F)$ of linear transformations preserving\/ $\parallel$ coincides with the group\/ ${\ensuremath{\Gamma_{\ell}}}\cap\GL(H_F)$ of linear transformations preserving the left Clifford parallelism\/ $\mathrel{\parallel_{\ell}}$. \end{thm} \begin{proof} In view of Proposition~\ref{prop:par-preserv}~\eqref{prop:par-preserv.e} it remains to show that any $\beta\in{\ensuremath{\Gamma_\parallel}}\cap\GL(H_F)$ is contained in ${\ensuremath{\Gamma_{\ell}}}\cap\GL(H_F)$. From \eqref{eq:cl-aut}, we deduce $\beta=\lambda_{\beta(1)}\circ\alpha$, where $\alpha\in\GL(H_F)$ is an automorphism of $H$ such that $\alpha({\mathcal F})={\mathcal F}$ or an antiautomorphism of $H$ such that $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. There are two possibilities. \par \emph{Case}~(i). $\alpha$ is an automorphism. By the Skolem-Noether theorem, $\alpha$ is inner. Consequently, \eqref{eq:le=ri} and \eqref{eq:semidir2} give $\beta\in{\Autle}\cap\GL(H_F)$. \par \emph{Case}~(ii). $\alpha$ is an antiautomorphism. Again by Skolem-Noether, the product $\alpha':=\alpha\circ\overline{(\cdot)}$ of the given $\alpha$ and the conjugation is in $\inner{H^*}$. The conjugation fixes $1$ and sends any $x\in H$ to $\ol{x}=\tr(x) -x\in F1+Fx$. Therefore, all lines of the star $\cA(H_F)$ remain fixed under conjugation. The inner automorphism $\alpha'$ fixes ${\mathcal F}$ as a set \cite[Thm.~4.10]{havl+p+p-19a}. This gives $\alpha({\mathcal F})={\mathcal F}$ and contradicts $\alpha({\mathcal F})=\cA(H_F)\setminus{\mathcal F}$. So, the second case does not occur. \end{proof} Theorem~\ref{thm:cl-aut-lin} may be rephrased in the language of projective geometry as follows: if a \emph{projective collineation} of $\bPH$ preserves a \emph{single} Clifford-like parallelism $\parallel$ of $\bigl(\bPH,{\lep},{\rip}\bigr)$, then \emph{all} Clifford-like parallelisms of $\bigl(\bPH,{\lep},{\rip}\bigr)$ (including $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$) are preserved. This means that a characterisation of the Clifford-parallelisms of $\bigl(\bPH,{\lep},{\rip}\bigr)$ by their common group of linear automorphisms \big(or by the corresponding subgroup of the projective group $\PGL(H_F)$\big) is out of reach whenever there exist Clifford-like parallelisms of $\bigl(\bPH,{\lep},{\rip}\bigr)$ other than $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$. (Cf.\ the beginning of Section~\ref{sect:exa}.) Indeed, by \cite[Thm.~4.15]{havl+p+p-19a}, any Clifford-like parallelism of this kind is not Clifford with respect to any projective double space structure on $\bPH$. \begin{cor}\label{cor:skolem-noether-pasotti} Let $\alpha_1\in\ensuremath{\Gamma_\parallel}$ be a fixed automorphism of $H$. Then the following assertions hold. 
\begin{enumerate} \item\label{cor:skolem-noether-pasotti.a} All automorphisms $\alpha$ of the skew field $H$ satisfying $(\alpha_1)_{|F}=\alpha_{|F}$ are in the group\/ $\ensuremath{\Gamma_\parallel}$. \item\label{cor:skolem-noether-pasotti.b} All antiautomorphisms $\alpha$ of the skew field $H$ satisfying $(\alpha_1)_{|F}=\alpha_{|F}$ are not in the group\/ $\ensuremath{\Gamma_\parallel}$. \end{enumerate} The whole statement remains true if the words ``automorphism'' and ``antiautomorphism'' are switched. \begin{proof} \eqref{cor:skolem-noether-pasotti.a} By the Skolem-Noether theorem, $\alpha^{-1}\circ\alpha_1$ is an inner automorphism of $H$. Thus, from Proposition~\ref{prop:par-preserv}~\eqref{prop:par-preserv.c}, $\alpha^{-1}\circ\alpha_1\in\ensuremath{\Gamma_\parallel}$, which implies $\alpha\in\ensuremath{\Gamma_\parallel}$. \par \eqref{cor:skolem-noether-pasotti.b} The conjugation $\overline{(\cdot)}$ is $F$-linear. We therefore can apply \eqref{cor:skolem-noether-pasotti.a} to $\alpha\circ\overline{(\cdot)}$ and in this way we obtain $\alpha\circ\overline{(\cdot)}\in \ensuremath{\Gamma_\parallel}$. The proof of Theorem~\ref{thm:cl-aut-lin}, Case~(ii), gives $\overline{(\cdot)}\notin\ensuremath{\Gamma_\parallel}$. Hence $\alpha\notin\ensuremath{\Gamma_\parallel}$ as well. \end{proof} Theorem~\ref{thm:cl-aut} and Corollary~\ref{cor:skolem-noether-pasotti} (with $\alpha_1:=\id$) together entail that \begin{equation*} \bigl\{\alpha\in\ensuremath{\Gamma_\parallel}\mid\alpha(1)=1\bigr\}\subset\Aut(H)\circ\bigl\{\id_H,\overline{(\cdot)}\bigr\}. \end{equation*} In particular, for all $h\in H^*$, the inner automorphism $\tilde{h}$ is in $\ensuremath{\Gamma_\parallel}$, whereas the antiautomorphism $\tilde h\circ\overline{(\cdot)}$ of the skew field $H$ does not belong to $\ensuremath{\Gamma_\parallel}$. \par Theorem~\ref{thm:cl-aut-lin} motivates us to compare the automorphism groups $\ensuremath{\Gamma_\parallel}$ and $\ensuremath{\Gamma_{\ell}}$ with respect to inclusion. This leads to four (mutually exclusive) possibilities as follows: \begin{gather} \ensuremath{\Gamma_\parallel} = \ensuremath{\Gamma_{\ell}} ,\label{eq:aut-equal}\\ \ensuremath{\Gamma_\parallel}\subset\ensuremath{\Gamma_{\ell}} , \label{eq:aut-proper}\\ \ensuremath{\Gamma_\parallel}\supset\ensuremath{\Gamma_{\ell}} , \label{eq:aut-not-ex}\\ \ensuremath{\Gamma_\parallel}\not\subseteq\ensuremath{\Gamma_{\ell}}\mbox{~and~}\ensuremath{\Gamma_\parallel}\not\supseteq\ensuremath{\Gamma_{\ell}} . \label{eq:aut-open} \end{gather} In Section~\ref{sect:exa}, it will be shown, by giving illustrative examples, that each of \eqref{eq:aut-equal} and \eqref{eq:aut-proper} is satisfied by some Clifford-like parallelisms. The situation in \eqref{eq:aut-not-ex} does not occur due to Corollary~\ref{cor:aut-not-ex} below. Whether or not there exists a Clifford-like parallelism subject to \eqref{eq:aut-open} remains an open problem. \begin{cor}\label{cor:aut-not-ex} In\/ $\bigl(\bPH,{\lep},{\rip}\bigr)$, there exists no Clifford-like parallelism\/ $\parallel$ satisfying\/ \eqref{eq:aut-not-ex}. \end{cor} \begin{proof} If \eqref{eq:aut-not-ex} holds for some Clifford-like parallelism $\parallel$, then, by Theorem~\ref{thm:cl-aut}, there exists an antiautomorphism $\alpha_1$ of $H$ such that $\alpha_1\in\ensuremath{\Gamma_\parallel}$. Corollary~\ref{cor:skolem-noether-pasotti}~\eqref{cor:skolem-noether-pasotti.b} shows $\alpha_1\circ\overline{(\cdot)}\in\Aut(H)\setminus\ensuremath{\Gamma_\parallel}$.
But \eqref{eq:semidir1} and \eqref{eq:aut-not-ex} force $\alpha_1\circ\overline{(\cdot)}\in\Aut(H)\subset\ensuremath{\Gamma_{\ell}}\subset\ensuremath{\Gamma_\parallel}$, an absurdity. \end{proof} \begin{rem}\label{rem:correl} For any Clifford-like parallelism $\parallel$ of $\bigl(\bPH,{\lep},{\rip}\bigr)$ there are also correlations that preserve $\parallel$. We just give one example. The orthogonality relation $\perp$ that stems from the non-degenerate symmetric bilinear form \eqref{eq:<,>} determines a projective polarity of $\bPH$ by sending any subspace $S$ of $H_F$ to its orthogonal space $S^\perp$. Using \cite[Cor.~4.4]{havl+p+p-19a} or \cite[(2.6)]{kk-75} one obtains that ${\mathcal S}_{\ell}(M) \cap {\mathcal S}_{r}(M) = \{ M, M^\perp \}$ for all lines $M\in\cLH$. So, for all $M\in\cLH$, we have $ M\mathrel{\parallel_{\ell}} M^\perp$ and $M\mathrel{\parallel_{r}} M^\perp$, which implies $M\parallel M^\perp$. In other words, the polarity $\perp$ fixes all parallel classes of the parallelisms $\mathrel{\parallel_{\ell}}$, $\mathrel{\parallel_{r}}$ and $\parallel$. Consequently, each of the parallelisms $\mathrel{\parallel_{\ell}}$, $\mathrel{\parallel_{r}}$ and $\parallel$ is preserved under the action of $\perp$ on the line set $\cLH$. \end{rem} \section{Examples}\label{sect:exa} We first turn to equation~\eqref{eq:aut-equal}, that is, ${\ensuremath{\Gamma_\parallel}}={\ensuremath{\Gamma_{\ell}}}$. In any projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$, this equation has two trivial solutions, namely ${\parallel}={\mathrel{\parallel_{\ell}}}$ and, by \eqref{eq:le=ri}, ${\parallel}={\mathrel{\parallel_{r}}}$. According to \cite[Thm.~4.12]{havl+p+p-19a}, which relies on \cite{fein+s-76a}, a projective double space $\bigl(\bPH,{\lep},{\rip}\bigr)$ admits no Clifford-like parallelisms other than $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$ precisely when $F$ is a formally real Pythagorean field and $H$ is the ordinary quaternion skew field over $F$. (See also \cite[Thm.~9.1]{blunck+k+s+s-17z}.) Thus, when looking for non-trivial solutions of \eqref{eq:aut-equal}, we have to avoid this particular class of quaternion skew fields. \begin{exa}\label{exa:F-Aut-invar} Let $H$ be any quaternion skew field of characteristic two. From \cite[Ex.~4.13]{havl+p+p-19a}, there exists a Clifford-like parallelism $\parallel$ of $\bigl(\bPH,{\lep},{\rip}\bigr)$ such that ${\mathcal F}$ comprises \emph{all} lines $L\in\cA(H_F)$ that are---in an algebraic language---separable extensions of $F$. The set ${\mathcal F}$ is fixed under all automorphisms of $H$, since any $L'\in\cA(H_F)\setminus {\mathcal F}$ is an inseparable extension of $F$. Equation~\eqref{eq:semidir1} and Theorem~\ref{thm:cl-aut} together give ${\ensuremath{\Gamma_{\ell}}}\subseteq{\ensuremath{\Gamma_\parallel}}$. As \eqref{eq:aut-not-ex} cannot apply, we get ${\ensuremath{\Gamma_{\ell}}}={\ensuremath{\Gamma_\parallel}}$. Each of the sets ${\mathcal F}$ and $\cA(H_F)\setminus{\mathcal F}$ is non-empty; see, for example, \cite[pp.~103--104]{draxl-83} or \cite[pp.~46--48]{tits+w-02a}. Hence $\parallel$ does not coincide with $\mathrel{\parallel_{\ell}}$ or $\mathrel{\parallel_{r}}$. \end{exa} \begin{exa}\label{exa:aut=inner} Let $H$ be a quaternion skew field that admits only inner automorphisms. Then all automorphisms and all antiautomorphisms of $H$ are in $\GL(H_F)$. 
By Theorem~\ref{thm:cl-aut-lin}, ${\ensuremath{\Gamma_{\ell}}}$ is the common automorphism group of all Clifford-like parallelisms of $\bigl(\bPH,{\lep},{\rip}\bigr)$. \par In particular, any quaternion skew field $H$ with centre ${\mathbb Q}$ admits only inner automorphisms by the Skolem-Noether theorem. Since ${\mathbb Q}$ is not Pythagorean, we may infer from \cite[Thm.~4.12]{havl+p+p-19a} that any $\bigl({\mathbb P}(H_{\mathbb Q}),{\mathrel{\parallel_{\ell}}},{\mathrel{\parallel_{r}}}\bigr)$ possesses Clifford-like parallelisms other than $\mathrel{\parallel_{\ell}}$ and $\mathrel{\parallel_{r}}$. (See \cite[Ex.~4.14]{havl+p+p-19a} for detailed examples.) \end{exa} In order to establish the existence of Clifford-like parallelisms $\parallel$ that satisfy \eqref{eq:aut-proper}, we shall consider certain quaternion skew fields admitting an outer automorphism of order two. The idea to use this kind of automorphism stems from the theory of \emph{involutions of the second kind} \cite[\S2,~2.B.]{knus+m+r+t-98a}. Indeed, for each of the automorphisms $\alpha$ from Examples~\ref{exa:root3}, \ref{exa:c2-sep}, \ref{exa:c2-sep-old}, \ref{exa:c2-insep} and \ref{exa:c2-insep-old} the product $\alpha\circ\overline{(\cdot)}$ is such an involution. Also, we shall use the following auxiliary result. \begin{lem}\label{lem:orbit} Let $L$ be a maximal commutative subfield of $H$, let $\alpha\in \Aut(H)$, and let $h\in H^*$. Furthermore, assume that $\alpha(L) = h^{-1} L h$. \begin{enumerate} \item\label{lem:orbit.a} If\/ $\Char H \neq 2$, then for each $q\in L\setminus\{0\}$ with\/ $\tr(q)=0$ there exists an element $c\in F^*$ such that \begin{equation}\label{eq:c} c^2 = N\bigl(\alpha(q)\bigr) \, N(q)^{-1} . \end{equation} \item\label{lem:orbit.b} If\/ $\Char H = 2$ and $L$ is separable over $F$, then for each $q\in L$ with $\tr(q)=1$ there exists an element $d\in F$ such that \begin{equation}\label{eq:d} d^2 +d = N\bigl(\alpha(q)\bigr) + N(q) . \end{equation} \item\label{lem:orbit.c} If\/ $\Char H = 2$ and $L$ is inseparable over $F$, then for each $q\in L\setminus F$ there exist elements $c\in F^*$, $d\in F$ such that \begin{equation}\label{eq:e} d^2= N\bigl(\alpha(q)\bigr)+c^2N(q). \end{equation} \end{enumerate} \end{lem} \begin{proof} \eqref{lem:orbit.a} From \eqref{eq:tr+N}, applied first to $\alpha$ and then to the inner automorphism $\tilde{h}$, we obtain $\tr\bigl(\alpha(q)\bigr)=0=\tr(h^{-1}qh)$. The elements of $\alpha(L)$ with trace zero constitute a one-dimensional $F$-subspace of $\alpha(L)$. Hence there exists an element $c\in F^*$ with $\alpha(q)= c (h^{-1}qh)$. Application of the norm function $N$ establishes \eqref{eq:c}. \par \eqref{lem:orbit.b} Like before, \eqref{eq:tr+N} implies $\tr\bigl(\alpha(q)\bigr)=1=\tr(h^{-1}qh)$. The elements of $\alpha(L)$ with trace $1$ constitute the set $\alpha(q)+F \subset \alpha(L)$. Hence there exists an element $d\in F$ with $\alpha(q)+ d = h^{-1}qh$. Taking the norm on both sides gives $N\bigl(d+\alpha(q)\bigr)=N(q)$. This equation can be rewritten as in \eqref{eq:d}, which follows from $N\bigl(\alpha(q)+ d\bigr) = \bigl(\alpha(q)+ d\bigr)\bigl(\overline{\alpha(q)+ d}\bigr) = \bigl(\alpha(q)+ d\bigr)\bigl(\alpha(q)+ d+1\bigr)$. \par \eqref{lem:orbit.c} Since both $L$ and $\alpha(L)$ are inseparable over $F$, for any $x\in L\cup\alpha(L)$ it follows $\tr(x)=0$ and, by \eqref{eq:x1}, $N(x)=x^2$. Thus, in particular, $\tr\bigl(\alpha(q)\bigr)=0=\tr(h^{-1}qh)$. 
Since $\alpha(q)$ belongs to $\alpha(L)$, which is a $2$-dimensional $F$-vector space spanned by $h^{-1}qh$ and $1$, there exist $c,d\in F$ such that $\alpha(q)=c(h^{-1}qh)+d$. Note that $c\neq0$ since $\alpha(q)\notin F$. Taking the norm on both sides of the previous equation gives $N\bigl(\alpha(q)\bigr)=N\bigl(c(h^{-1}qh)+d\bigr) = \bigl(c(h^{-1}qh)+d\bigr)^2=c^2N(q)+d^2$, which entails \eqref{eq:e}. \end{proof} \begin{exa}\label{exa:root3} Let $F={\mathbb Q}\bigl(\sqrt{3}\bigr)$ and denote by $H$ the ordinary quaternions over $F$ with the usual $F$-basis $\{1,i,j,k\}$. The mapping $v + w \sqrt{3}\mapsto v - w \sqrt{3}$, $v,w\in {\mathbb Q}$, is an automorphism of $F$. It can be extended to a unique $F$-semilinear transformation, say $\alpha\colon H\to H$, such that $\{1,i,j,k\}$ is fixed elementwise. This $\alpha$ is an automorphism of the skew field $H$, since all structure constants of $H$ with respect to the given basis are in ${\mathbb Q}$, and so all of them are fixed under $\alpha$. \par Following Lemma~\ref{lem:orbit}, we define $q:=i+\bigl(1+\sqrt{3}\bigr)j$ and $L:=F1\oplus Fq$. Then $\tr(q)=q+\overline{q}=0$, \begin{equation*} N(q) = 1 + \bigl(1+\sqrt{3}\bigr)^2 = 5+2\sqrt{3}, \quad N\bigl(\alpha(q)\bigr)=\alpha\bigl(N(q)\bigr)=5-2\sqrt{3} \end{equation*} and $N\bigl(\alpha(q)\bigr)\,N(q)^{-1} = \bigl(5-2\sqrt{3}\bigr)^2 /13 \neq c^2$ for all $c\in F^*$, since $13$ is not a square in $F$ (see the remark at the end of this section). By Lemma~\ref{lem:orbit}~\eqref{lem:orbit.a}, there is no $h\in H^*$ such that $\alpha(L)=h^{-1}Lh$. \par We now apply the construction from \cite[Thm.~4.10~(a)]{havl+p+p-19a} to the set ${\mathcal D}:=\{L\}$. This gives a Clifford-like parallelism $\parallel$ with the property ${\mathcal F}=\{h^{-1}L h\mid h\in H^*\}$. Under the action of the group of inner automorphisms, $\inner{H^*}$, the star $\cA(H_F)$ splits into orbits of the form $\{h^{-1}L'h\mid h\in H^*\}$ with $L'\in\cA(H_F)$. One such orbit is ${\mathcal F}$ and, due to $\alpha(L)\notin{\mathcal F}$, another one is $\alpha({\mathcal F})$. The automorphism $\alpha$ interchanges these two distinct orbits, but it fixes the $\inner{H^*}$-orbit of the line $F1\oplus Fi$. Therefore, $\cA(H_F)\setminus{\mathcal F}$ contains at least two distinct $\inner{H^*}$-orbits. Consequently, there is no antiautomorphism of $H$ taking ${\mathcal F}$ to $\cA(H_F)\setminus{\mathcal F}$. So, by Theorem~\ref{thm:cl-aut}, $\ensuremath{\Gamma_\parallel}\subseteq\ensuremath{\Gamma_{\ell}}$. From \eqref{eq:semidir1}, Theorem~\ref{thm:cl-aut} and $\alpha(L)\notin{\mathcal F}$, it follows that $\alpha\in\ensuremath{\Gamma_{\ell}}\setminus\ensuremath{\Gamma_\parallel}$. Summing up, we have $\ensuremath{\Gamma_\parallel}\subset\ensuremath{\Gamma_{\ell}}$, as required. \end{exa} \begin{exa}\label{exa:c2-sep} Let ${\mathbb F}_2$ be the Galois field with two elements, and let $F={\mathbb F}_2(t,u)$, where $t$ and $u$ denote independent indeterminates over ${\mathbb F}_2$. \par First, we collect some facts about the polynomial algebra ${\mathbb F}_2[t,u]$ over ${\mathbb F}_2$. Let ${\mathbb N}$ denote the set of non-negative integers. The monomials of the form \begin{equation}\label{eq:basis} t^\gamma u^\delta \mbox{~~with~~}(\gamma,\delta)\in{\mathbb N}\times{\mathbb N} \end{equation} constitute a basis of the ${\mathbb F}_2$-vector space ${\mathbb F}_2[t,u]$. Each non-zero polynomial $p\in{\mathbb F}_2[t,u]$ can be written in a unique way as a non-empty sum of basis elements from \eqref{eq:basis}.
Among the elements in this sum there is a unique one, say $t^m u^n$, such that $(m,n)$ is maximal w.r.t.\ the lexicographical order on ${\mathbb N}\times {\mathbb N}$. We shall refer to $(m,n)$ as the \emph{$t$-leading pair} of $p$. (In this definition the indeterminates $t$ and $u$ play different roles, because of the lexicographical order. Due to this lack of symmetry the degree of $p$ can be strictly larger than $m+n$.) If $p_1,p_2\in{\mathbb F}_2[t,u]$ are non-zero polynomials with $t$-leading pairs $(m_1,n_1)$ and $(m_2,n_2)$, then $p_1p_2$ is immediately seen to have the $t$-leading pair $(m_1+m_2,n_1+n_2)$. \par Next, we construct a quaternion algebra with centre $F$. We follow the notation from \cite{blunck+p+p-10a} and \cite[Rem.~3.1]{havl-16a}. Let $K:=F(i)$ be a separable quadratic extension of $F$ with defining relation $i^2+i+1=0$. Furthermore, we define $b:=t+u$. The quaternion algebra $(K/F,b)$ has a basis $\{1,i,j,k\}$ such that its multiplication is given by the following table: \begin{equation*}\label{eq:table}\small \begin{array}{c|ccc} \cdot &i & j & k \\ \hline i &1+i& k & j+k \\ j &j+k& t+u & (t+u)(1+i) \\ k &j & (t+u)i& t+u \end{array} \end{equation*} The conjugation $\overline{(\cdot)}\colon H\to H$ sends $i\mapsto\ol i = i+1$ and fixes both $j$ and $k$. \par In order to show that $(K/F,b)$ is a skew field we have to verify $b\notin N(K)$. Assume to the contrary that there are polynomials $p_1$, $p_2\neq 0$, $p_3$, and $p_4\neq 0$ in ${\mathbb F}_2[t,u]$ such that \begin{equation*} \begin{aligned} N\big( p_1 / p_2 + (p_3 /p_4) i\big) &= \big( p_1 /p_2 + (p_3 /p_4) i\big) \big( p_1 / p_2 + (p_3 /p_4)(i+1)\big)\\ &= p_1^2 / p_2^2 + (p_1 p_3) / (p_2 p_4) + p_3^2 / p_4^2\\ &= t + u . \end{aligned} \end{equation*} Consequently, \begin{equation}\label{eq:cond} ( p_1 p_4 )^2 + p_1 p_2 p_3 p_4 + ( p_2p_3 )^2 + (t + u)(p_2p_4)^2 = 0. \end{equation} We cannot have $p_1=0$ or $p_3=0$, since then the left hand side of \eqref{eq:cond} would reduce to a sum of two terms, with one being a square in ${\mathbb F}_2[t,u]$ and the other being a non-square. We define $(m_s,n_s)$ to be the $t$-leading pair of $p_s$, $s\in\{1,2,3,4\}$. So, the $t$-leading pairs of the first three summands on the left hand side of \eqref{eq:cond} are \begin{equation*}\label{eq:t-leaders} \begin{aligned} &\bigl(2(m_1+m_4),2(n_1+n_4)\bigr),\; (m_1+m_2+m_3+m_4,n_1+n_2+n_3+n_4),\;\\ &\bigl(2(m_2+m_3),2(n_2+n_3)\bigr). \end{aligned} \end{equation*} Let us expand each of the four summands on the left hand side of \eqref{eq:cond} in terms of the monomial basis \eqref{eq:basis}. All monomials in the fourth expansion have odd degree. There are three possibilities. \par \emph{Case}~(i). $m_1+m_4\neq m_2+m_3$. Then, for example, $m_1+m_4 >m_2+m_3$. From \begin{equation}\label{eq:greater} 2(m_1+m_4)>m_1+m_2+m_3+m_4 > 2(m_2+m_3) , \end{equation} the monomial $t^{2(m_1+m_4)}u^{2(n_1+n_4)}$ appears in the expansion of $(p_1p_4)^2$, but not in the expansions of $p_1p_2p_3p_4$ and $(p_2p_3)^2$. This monomial remains unused in the expansion of $(t + u)(p_2p_4)^2$, since both of its exponents are even numbers. So, the left hand side of \eqref{eq:cond} does not vanish, whence this case cannot occur. \par \emph{Case}~(ii). $m_1+m_4 = m_2+m_3$ and $n_1+n_4\neq n_2+n_3$. Then, for example, $n_1+n_4>n_2+n_3$. Formula~\eqref{eq:greater} remains true when replacing $m_s$ by $n_s$, $s\in\{1,2,3,4\}$. 
We can now deduce, as in Case~(i), that the monomial $t^{2(m_1+m_4)}u^{2(n_1+n_4)}$ appears precisely once when expanding the four summands on the left hand side of \eqref{eq:cond} in the monomial basis. So, this case is impossible. \par \emph{Case}~(iii). $m_1+m_4 = m_2+m_3$ and $n_1+n_4 = n_2+n_3$. Then $t^{2(m_1+m_4)}u^{2(n_1+n_4)}$ appears precisely three times when expanding the four summands on the left hand side of \eqref{eq:cond} and, due to $1+1+1\neq 0$, this case cannot happen either. \par Since none of the Cases~(i)--(iii) applies, we end up with a contradiction. \par There is a unique automorphism of $F$ that interchanges the indeterminates $t$ and $u$. It can be extended to a unique $F$-semilinear transformation, say $\alpha\colon H\to H$, such that $\{1,i,j,k\}$ is fixed elementwise. This $\alpha$ is an automorphism of $H$, because $\alpha(t+u)=u+t=t+u$. \par Following Lemma~\ref{lem:orbit}, we define $q:=i+ u j$ and $L:=F1\oplus Fq$. Then $\tr(q)=1$, $N(q)=1 +u^2(t+u)$, and $N\bigl(\alpha(q)\bigr)=1+t^2(u+t)$. \par We claim that \begin{equation*} N\bigl(\alpha(q)\bigr) + N(q) = (u+t)^3 \neq d^2+d \mbox{~~for all~~}d\in F. \end{equation*} Let us assume, by way of contradiction, that there are polynomials $d_1$ and $d_2\neq 0$ in ${\mathbb F}_2[t,u]$ satisfying $(u+t)^3 = d_1^2 / d_2^2 + d_1 / d_2$. Hence $d_1\neq 0$ and \begin{equation}\label{eq:poly=0} (u+t)^3 d_2^2 + d_1^2 + d_1d_2 = 0 . \end{equation} We expand the first summand in \eqref{eq:poly=0} in terms of the monomial basis \eqref{eq:basis}. This gives a sum of monomials all of which have odd degree. Likewise, the expansion of the second summand in \eqref{eq:poly=0} results in a sum of monomials all of which have even degree. Let us also expand the third summand in \eqref{eq:poly=0} to a sum of monomials and let us then collect all monomials with odd (resp.\ even) degree. In this way we get precisely the monomials appearing in the first (resp.\ second) sum from above. Thus, with $n_1:=\deg d_1$, $n_2:=\deg d_2$, we obtain that the degrees of the summands in \eqref{eq:poly=0} satisfy the inequalities \begin{equation*} 3+2 n_2 \leq n_1+n_2 ,\; 2 n_1\leq n_1+n_2. \end{equation*} These inequalities imply $3+2n_2\leq n_1+n_2\leq n_2+n_2$, which is absurd. \par By Lemma~\ref{lem:orbit}~\eqref{lem:orbit.b}, there is no $h\in H^*$ such that $\alpha(L)=h^{-1}Lh$. \par We now repeat the reasoning from the end of Example~\ref{exa:root3}. This shows that the Clifford-like parallelism $\parallel$ that arises from ${\mathcal D}:=\{L\}$ satisfies $\ensuremath{\Gamma_\parallel}\subset\ensuremath{\Gamma_{\ell}}$. \end{exa} \begin{exa}\label{exa:c2-sep-old} Let $H=(K/F,b)$, $\alpha\in\Aut(H)$ and $L$ be given as in Example~\ref{exa:c2-sep}. We know from Example~\ref{exa:F-Aut-invar} that \begin{equation}\label{eq:E-insep} {\mathcal E}_{\mathrm{insep}}:=\bigl\{L'\in\cA(H_F)\mid L'/F \mbox{~is~inseparable}\bigr\} \neq \emptyset. \end{equation} In contrast to Example~\ref{exa:c2-sep}, we adopt an alternative definition of ${\mathcal D}$, namely ${\mathcal D}:=\{L\}\cup {\mathcal E}_{\mathrm{insep}}$. The construction from \cite[Thm.~4.10~(a)]{havl+p+p-19a} applied to this ${\mathcal D}$ gives a Clifford-like parallelism $\parallel$ with the property ${\mathcal F}=\{h^{-1}L h\mid h\in H^*\}\cup{\mathcal E}_{\mathrm{insep}}$. The set ${\mathcal E}_{\mathrm{insep}}$ remains fixed under any antiautomorphism of $H$. Consequently, there is no antiautomorphism of $H$ taking ${\mathcal F}$ to $\cA(H_F)\setminus{\mathcal F}$.
So, by Theorem~\ref{thm:cl-aut}, $\ensuremath{\Gamma_\parallel}\subseteq\ensuremath{\Gamma_{\ell}}$. From \eqref{eq:semidir1}, Theorem \ref{thm:cl-aut} and $\alpha(L)\notin{\mathcal F}$, it follows that $\alpha\in\ensuremath{\Gamma_{\ell}}\setminus\ensuremath{\Gamma_\parallel}$. Summing up, we have $\ensuremath{\Gamma_\parallel}\subset\ensuremath{\Gamma_{\ell}}$, as required. \end{exa} \begin{exa}\label{exa:c2-insep} Consider the same quaternion skew field $H=(K/F,b)$ and the same automorphism $\alpha\in\Aut(H)$ as in Example~\ref{exa:c2-sep}. However, now we define $q:=j+uk$ and $L:=F1\oplus Fq$. Then $L$ is inseparable over $F$, $\tr(q)=0$, $N(q)=(j+uk)^2=(u+t)(1+u+u^2)$ and $N\bigl(\alpha(q)\bigr)=(u+t)(1+t+t^2)$. Equation~\eqref{eq:e} of Lemma~\ref{lem:orbit} is \begin{equation*} (u+t)(1+t+t^2)+c^2(u+t)(1+u+u^2)=d^2 \end{equation*} which, upon writing $c={c_1}/{c_2}$ and $d={d_1}/{d_2}$ with $c_1,c_2,d_1,d_2\in {\mathbb F}_2[t,u]$ and $c_1,c_2,d_2\neq 0$, is equivalent to \begin{equation}\label{eq:insep1} d_2^2(u+t)(c_2^2+c_2^2t^2+c_1^2+c_1^2u^2)+d_2^2(u+t)(c_2^2t+uc_1^2)=d_1^2c_2^2. \end{equation} All the monomials in the first summand of the left hand side of equation~\eqref{eq:insep1} are of odd degree, while the monomials in the second one are of even degree. Since $d_1^2 c_2^2$ is a sum of monomials of even degree this entails \begin{equation}\label{eq:insep2}\renewcommand{\arraystretch}{1.2} \left\{\begin{array}{l} d_2^2(u+t)(c_2^2+c_2^2t^2+c_1^2+c_1^2u^2)=0, \\ d_2^2(u+t)(c_2^2t+uc_1^2)=d_1^2c_2^2. \end{array}\right. \end{equation} The second equation in \eqref{eq:insep2} yields $ut(1+c)^2=(d+t+uc)^2$, and since $c=1$ (\emph{i.e.}, $c_1=c_2$) is not a solution of the first equation in \eqref{eq:insep2}, we can assume $1+c\neq0$, thus $ut=\big((d+t+uc)/(1+c)\big)^2$. This equation cannot be satisfied, since $ut$ is not a square in $F$. Thus we can conclude by Lemma \ref{lem:orbit}~\eqref{lem:orbit.c} that there exists no $h\in H^*$ such that $\alpha(L)=h^{-1}Lh$. \par The final step is to define a Clifford-like parallelism subject to \eqref{eq:aut-proper}. This can be done as in Example~\ref{exa:c2-sep} using ${\mathcal D}:=\{L\}$. \end{exa} \begin{exa}\label{exa:c2-insep-old} Let $H=(K/F,b)$, $\alpha\in\Aut(H)$ and $L$ be given as in Example~\ref{exa:c2-insep}. Then a Clifford-like parallelism that satisfies \eqref{eq:aut-proper} can be obtained along the lines of Example~\ref{exa:c2-sep-old} by replacing everywhere the set ${\mathcal E}_{\mathrm{insep}}$ from \eqref{eq:E-insep} with ${\mathcal E}_{\mathrm{sep}}:= \bigl\{L'\in\cA(H_F)\mid L'/F \mbox{~is~separable}\bigr\}$. \end{exa}
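\begin{rem} For the reader's convenience, we sketch a direct verification of the arithmetic fact used in Example~\ref{exa:root3}, namely that $13$ is not a square in $F={\mathbb Q}\bigl(\sqrt{3}\bigr)$. If $c=v+w\sqrt{3}$ with $v,w\in{\mathbb Q}$ satisfies $c^2=13$, then \begin{equation*} v^2+3w^2+2vw\sqrt{3}=13 . \end{equation*} Since $\{1,\sqrt{3}\}$ is a basis of $F$ over ${\mathbb Q}$, this forces $vw=0$, whence $v^2=13$ or $3w^2=13$. Neither equation has a rational solution, as comparing the exponents of the prime $13$ on both sides shows. \end{rem}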
\section{Introduction} Given a Hermitian manifold $(M,J,h)$ it is well-known that there exists a family of metric connections leaving the complex structure $J$ parallel (see \cite{G}). Among these the Chern connection is particularly interesting and provides different Ricci tensors which can be used to define several meaningful parabolic metric flows preserving the Hermitian condition and generalizing the classical Ricci flow in the non-K\"ahler setting. In \cite{Gi} Gill introduced an Hermitian flow on a compact complex manifold involving the first Chern-Ricci tensor, namely the one whose associated 2-form represents the first Chern class of $M$ (see also \cite{TW} for further related results). In \cite{ST} Streets and Tian introduced a family of Hermitian curvature flows (HCFs) involving the second Ricci tensor $S$ together with an arbitrary symmetric Hermitian form $Q(T)$ which is quadratic in the torsion $T$ of the Chern connection: $$ h_t'=-S+Q(T). $$ For any admissible $Q(T)$ the corresponding flow is strongly parabolic and the short time existence of the solution is established. This family includes geometrically interesting flows, as for instance the {\em pluriclosed flow}, which was previously introduced in \cite{ST2} and preserves the pluriclosed condition $\partial\overline{\partial}\omega=0$. More recently Ustinovskiy focused on a particular choice of $Q(T)$ obtaining another remarkable flow, which we will call $\text{\it HCF}_{\text{\it U}}$ for brevity, with several geometrically relevant features (see \cite{U}). In particular Ustinovskiy proves that the $\text{HCF}_{\text{U}}$ on a compact Hermitian manifold preserves Griffiths non-negativity of the Chern curvature, generalizing the classical result that the K\"ahler-Ricci flow preserves the positivity of the holomorphic bisectional curvature (see e.g. \cite{M}). In \cite{U3} the author could prove stronger results showing that the $\text{HCF}_{\text{U}}$ preserves several natural curvature positivity conditions besides Griffiths positivity. In \cite{U2} Ustinovskiy focuses on complex homogeneous manifolds and proves that the finite dimensional space of induced metrics (which are not necessarily invariant) is preserved by the $\text{HCF}_{\text{U}}$. Given a connected complex Lie group $G$ acting transitively, effectively and holomorphically on a complex manifold $M$, a simple observation shows that $M$ does not carry any $G$-invariant Hermitian metric unless the isotropy is finite. More recently in \cite{LPV} Lafuente, Pujia and Vezzoni considered the behaviour of a general HCF in the space of left-invariant Hermitian metrics on a complex unimodular Lie group. In this work we focus on C-spaces, namely compact simply connected complex manifolds $M$ which are homogeneous under the action of a compact semisimple Lie group $G$. By classical results $M$ fibers over a flag manifold $N$ with a complex torus as a typical fiber $F$; in particular we consider the case where $N$ is a product of compact Hermitian symmetric spaces and $F$ is non-trivial (so that $M$ does not carry any K\"ahler metric). While the analysis of a generic HCF on these manifolds seems to be out of reach, some special flows may deserve attention.
In view of the classification results obtained in \cite{FGV} for C-spaces carrying invariant SKT metrics, the pluriclosed flow can be investigated only in very particular cases of such spaces (see also \cite{B} for the analysis of the pluriclosed flow on compact locally homogeneous surfaces and \cite{AL} for the case of left-invariant metrics on Lie groups). On the other hand the $\text{HCF}_{\text{U}}$ is geometrically meaningful and can be dealt with more easily. Actually we are able to write down the flow equations in the space of $G$-invariant metrics in a surprisingly simple way. In particular we prove the remarkable fact that the flow can be described by an induced flow on the base, which depends only on the initial conditions on the base itself, and an induced flow on the fiber. Moreover the maximal existence domain is bounded above and we can provide a precise description of the limit metric. Indeed we see that the kernel of the limit metric defines an integrable distribution whose leaves coincide with the orbits of the complexification of a suitable normal subgroup $S$ of $G$. When the leaves of this foliation are closed, we can prove the Gromov-Hausdorff convergence of the space to a lower dimensional Riemannian homogeneous space. We are also able to establish the existence and the uniqueness up to homotheties of invariant static metrics for the $\text{HCF}_{\text{U}}$. These metrics turn out to be particularly meaningful when $S=G$, as in this case the normalized flow with constant volume converges to one of them. The work is organized as follows. In Section 2 we give some preliminary notions on C-spaces and the $\text{HCF}_{\text{U}}$. In Section 3 we compute the invariant tensors involved in the flow equations and in Section 4 we prove our main result, which is summarized in Theorem \ref{Main} and Proposition \ref{GH}. \subsection*{Notation} Throughout the following we will denote Lie groups by capital letters and the corresponding Lie algebras by gothic letters. The Cartan-Killing form of a Lie algebra will be denoted by $\kappa$. \par \medskip \subsection*{Acknowledgements} The authors heartily thank Yury Ustinovskiy and Marco Radeschi for many valuable discussions and remarks as well as Luigi Vezzoni and Daniele Angella for their interest. \section{Preliminaries} We consider a compact simply connected complex manifold $(M,J)$ with $\dim_{\small{\mathbb{C}}} M = m$ and we assume that it is homogeneous under the action of a compact semisimple Lie group $G$ of biholomorphisms, namely $M = G/L$ for some compact subgroup $L\subset G$. By the Tits fibration theorem the manifold $M$ fibers $G$-equivariantly onto a flag manifold $N:= G/K$, say $\pi:M\to G/K$, and the manifold $N$ can be endowed with a $G$-invariant complex structure $I$ so that the fibration $\pi: (M,J)\to (N,I)$ is holomorphic. Since $M$ is supposed to be simply connected, the typical fiber $F:= K/L$ is a complex torus of complex dimension $k$ (see e.g. \cite{A}). Such a homogeneous complex manifold, which will be called a simply connected C-space (see \cite{W}), is not K\"ahler if the fiber $F$ is not trivial. \par In this work we will assume that the base of the Tits fibration is a product of irreducible Hermitian symmetric spaces of complex dimension at least two.
More precisely, if we write $G$ as the (local) product of its simple factors, say $G=G_1\cdot\ldots\cdot G_{s}$, then $K$ also splits accordingly as $K = K_1\cdot\ldots\cdot K_{s}$ with $K_i\subset G_i$ and $(G_i,K_i)$ is a Hermitian symmetric pair with $\dim_{\small{\mathbb{C}}} G_i/K_i = n_i$. Note that $m = k + \sum_{i=1}^{s} n_i$. \par At the level of Lie algebras we can write the Cartan decomposition $\gg_i = \mathfrak{k}_i\oplus \mathfrak{n}_i$ for each $i=1,\ldots,s$. We recall that the center of $\mathfrak{k}_i$ is one-dimensional and spanned by an element $Z_i$ so that $\mathfrak{k}_i = \mathfrak{s}_i \oplus \mathbb{R} Z_i$, $\mathfrak{s}_i$ being the semisimple part of $\mathfrak{k}_i$, and $\ad(Z_i) = I|_{\mathfrak{n}_i}$. We can now write the following decompositions $$\mathfrak{l} = \bigoplus_{i=1}^s \mathfrak{s}_i \oplus \mathfrak{b}, \quad \gg = \mathfrak{l} \oplus \mathfrak{f} \oplus \mathfrak{n},$$ where $\mathfrak{n}:= \bigoplus_{i=1}^s\mathfrak{n}_i$ and $\mathfrak{b},\mathfrak{f}$ are abelian subspaces of $\mathfrak{z}(\mathfrak{k}) = \bigoplus_{i=1}^s \mathbb{R} Z_i$ with $\kappa(\mathfrak{b},\mathfrak{f}) =0$. Note that $\mathfrak{f}$ and $\mathfrak{m} := \mathfrak{f} \oplus \mathfrak{n}$ identify with the tangent spaces to the fiber and to $M$ respectively. Since the fibration $\pi:M\to N$ is holomorphic, the complex structure $J\in \End(\mathfrak{m})$ can be written as $J = I_F + I$, where $I_F$ is an arbitrary complex structure on the fibre $F$. \par We now consider a $G$-invariant Hermitian metric $h$ on $M$, which can be seen as an $\ad(\mathfrak{l})$-invariant Hermitian inner product on $\mathfrak{m}$. As the $\mathfrak{s}_i$'s are not trivial, we have that $\mathfrak{l}$ acts non-trivially on $\mathfrak{n}$ and trivially on $\mathfrak{f}$, therefore $h(\mathfrak{f},\mathfrak{n})=0$. In particular the restriction $h|_{\mathfrak{f}\times \mathfrak{f}}$ is an arbitrary Hermitian metric. \par Moreover, the $\ad(\mathfrak{l})$-modules $\mathfrak{n}_i$ are mutually non-equivalent, hence $h(\mathfrak{n}_i,\mathfrak{n}_j)=0$ if $i\neq j$. \par If $\mathfrak{n}_i$ is $\mathfrak{s}_i$-irreducible, then Schur's Lemma implies that $$h|_{\mathfrak{n}_i\times \mathfrak{n}_i} := -h_i\kappa_i,$$ where $h_i\in \mathbb{R}^+$ and $\kappa_i$ denotes the Cartan-Killing form on $\gg_i$. Note that this is always the case unless $\gg_i = \mathfrak{so}(n+2)$ and $\mathfrak{k}_i = \mathfrak{so}(2) \oplus \mathfrak{so}(n)$ ($n\geq 3$). Throughout the following we will assume that none of the Hermitian factors of the base $N$ is a complex quadric. \par Given a $G$-invariant Hermitian metric $h$ on $M$, we can consider the associated Chern connection $\nabla$, which is the unique metric connection ($\nabla h = 0$) that leaves $J$ parallel ($\nabla J=0$) and whose torsion $T$ satisfies for $X,Y$ tangent vectors $$T(JX,Y)= T(X,JY) = JT(X,Y).$$ The curvature tensor $R_{XY} = [\nabla_X,\nabla_Y]-\nabla_{[X,Y]}$ of $\nabla$ gives rise to different Ricci tensors and we are mainly interested in the second Chern-Ricci tensor $S$ which is given by $$S(X,Y) = \sum_{a=1}^{2m} h(JR_{e_a,Je_a}X,Y),$$ where $\{e_1,\ldots,e_{2m}\}$ denotes an $h$-orthonormal basis. In \cite{ST} Streets and Tian have introduced a family of Hermitian curvature flows on any complex manifold given by \begin{equation}\label{flow} h_t' = -S(h) + Q(T),\end{equation} where $Q(T)$ is a symmetric, $J$-invariant tensor which is an arbitrary quadratic expression involving the torsion $T$.
In \cite{ST} the authors proved the short time existence for all these flows for any initial Hermitian metric. In \cite{U} Ustinovskiy considered a special Hermitian flow where the quadratic term $Q(T)$ is given in complex coordinates by \begin{equation}\label{Q}Q(T)_{i\overline j} = -\frac 12 h^{m\overline n}h^{p\overline s}T_{mp\overline j}T_{\overline n \overline s i}.\end{equation} For the corresponding Hermitian curvature flow ($\text{HCF}_{\text{U}}$) Ustinovskiy could prove several important properties, in particular that it preserves the Griffiths positivity of the initial metric. \par We now focus on this Hermitian flow on the class of homogeneous compact complex manifolds $(M,J)$ we have introduced in this section. In particular we note that the flow evolves along invariant Hermitian metrics whenever the initial metric is so. \section{The computation of the tensors} In this section we compute the Ricci tensor $S$ and the quadratic expression $Q(T)$ in \eqref{Q} for a $G$-invariant Hermitian metric $h$ on the complex manifold $M = G/L$. Throughout the following we keep the notation introduced in the previous section. \par We choose a maximal abelian subalgebra $\mathfrak{a}_i\subset \mathfrak{k}_i$ of $\gg_i$ ($i=1,\ldots, s$). The complexification $\mathfrak{a}^{c}\subset \gg^{c}$, where $\mathfrak{a} := \bigoplus_{i=1}^{s}\mathfrak{a}_i$, gives a Cartan subalgebra of $\gg^c$ and we will denote by $R$ the associated root system of $\gg^c$. We denote by $R_i$ the subset of $R$ given by all the roots whose corresponding root vectors belong to $\gg_i^c$ for $i=1,\ldots, s$, so that $R= R_1\cup \ldots\cup R_{s}$. For each $i=1,\ldots, s$ we have $$\mathfrak{k}_i^c = \mathfrak{a}_i^c\oplus \bigoplus_{\a\in R_{\mathfrak{k}_i}}\gg_\a,\qquad \mathfrak{n}_i^c = \bigoplus_{\a\in R_{\mathfrak{n}_i}} \gg_\a$$ so that $R_i = R_{\mathfrak{k}_i}\cup R_{\mathfrak{n}_i}$. Moreover the invariant complex structure $I|_{\mathfrak{n}_i}$ on $\mathfrak{n}_i$ determines an invariant ordering of $R_{\mathfrak{n}_i} = R_{\mathfrak{n}_i}^+\cup R_{\mathfrak{n}_i}^-$ with $I(v) = \pm \sqrt{-1} v$ for every $v\in \gg_\a$ with $\a\in R_{\mathfrak{n}_i}^{\pm}$. We will use a standard Chevalley basis $\{E_\a\}_{\a\in R}$ of $\gg^c$ with $$\gg^c = \mathfrak{a}^c \oplus \bigoplus_{\a\in R} \mathbb{C} E_\a,\quad \overline{E_\a} = - E_{-\a},\quad \kappa(E_\a,E_{-\a}) = 1, $$ $$[E_\a,E_{-\a}] = H_\a,$$ where $\kappa(H_\a,v) = \a(v)$ for every $v\in \mathfrak{a}^c$.\par Let $h$ be an invariant Hermitian metric on $M$, i.e. an $\ad(\mathfrak{l})$-invariant symmetric Hermitian inner product on $\mathfrak{m}$. We recall that $h|_{\mathfrak{n}_i\times \mathfrak{n}_i} := -h_i\kappa_i$ and by the $\ad(\mathfrak{a})$-invariance we have $$ h(E_\a,\overline{E_\b}) = \begin{cases} 0 & \text{if $\a\neq\b$} \\ h_i & \text{if $\a=\b\in R_i$.}\end{cases}$$ In the complexified tangent space $\mathfrak{f}^c$ of the fiber we fix a complex basis $\mathcal V := \{V_1,\ldots,V_k\}$ of $\mathfrak{f}^{10}$. We also put $h_{a\bar b} := h(V_a,\overline{V_b})$ and $H:= (h_{a\bar b})_{a,b=1,\ldots,k}$.
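Before proceeding we illustrate the root-theoretic notation above in one admissible special case; the specific choices below serve only as an example. If a factor of the base is $G_i/K_i=SU(3)/S(U(2)\times U(1))$, i.e. the complex projective plane, then $\gg_i^c=\mathfrak{sl}(3,\mathbb{C})$ and we may take $\mathfrak{a}_i^c$ to be the subalgebra of traceless diagonal matrices, so that the roots are $\varepsilon_p-\varepsilon_q$ ($p\neq q$), where $\varepsilon_p$ extracts the $p$-th diagonal entry. For a suitable invariant complex structure we then have $$R_{\mathfrak{k}_i}=\{\pm(\varepsilon_1-\varepsilon_2)\},\qquad R_{\mathfrak{n}_i}^{+}=\{\varepsilon_1-\varepsilon_3,\ \varepsilon_2-\varepsilon_3\},$$ whence $n_i=2$; moreover $Z_i=\mathrm{diag}\bigl(\tfrac{\sqrt{-1}}{3},\tfrac{\sqrt{-1}}{3},-\tfrac{2\sqrt{-1}}{3}\bigr)$ satisfies $\a(Z_i)=\sqrt{-1}$ for every $\a\in R_{\mathfrak{n}_i}^{+}$, in accordance with $\ad(Z_i)=I|_{\mathfrak{n}_i}$.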
If we split $\mathfrak{m}^{c} = \mathfrak{m}^{10}\oplus \mathfrak{m}^{01}$ with respect to the extension of $J$ on $\mathfrak{m}^c$, the torsion $T$ can be seen as an element of $\Lambda^2(\mathfrak{m}^*)\otimes \mathfrak{m}$ and it satisfies \begin{equation} \label{T}T(\mathfrak{m}^{10},\mathfrak{m}^{01}) = 0.\end{equation} The Chern connection $\nabla$ is completely determined by the corresponding Nomizu operator $\Lambda\in \mathfrak{m}^*\otimes \End(\mathfrak{m})$ (see e.g.\ \cite{KN}), which is defined as follows: for $X,Y\in\mathfrak{m}$ we set $\Lambda(X)Y\in \mathfrak{m}$ to be such that at $o:= [L]\in M$ $$(\Lambda(X)Y)^*|_o = (\nabla_{X^*}Y^*-[X^*,Y^*])|_o,$$ where for every $Z\in \mathfrak{m}$ we denote by $Z^*$ the vector field on $M$ induced by the one-parameter subgroup $\exp(tZ)$. Since $\nabla$ preserves $J$, the Nomizu operator satisfies $\Lambda(v)(\mathfrak{m}^{10})\subseteq \mathfrak{m}^{10}$ for every $v\in \mathfrak{m}^c$. Now we recall that the torsion $T$ can be expressed as follows: for $X,Y\in \mathfrak{m}$ (see e.g. \cite{KN}) \begin{equation}\label{tor} T(X,Y) = \L(X)Y-\L(Y)X - [X,Y]_\mathfrak{m}.\end{equation} Therefore using \eqref{T} and \eqref{tor} we see that \begin{equation}\label{Nomizu} \Lambda(A)\overline B = [A,\overline B]^{01}\quad \forall\ A,B\in \mathfrak{m}^{10}.\end{equation} \begin{lemma}\label{Lambda} Given $v\in\mathfrak{f}^c$, $w\in\mathfrak{f}^{10}$, $\alpha,\beta\in R_{\mathfrak{n}_i}^+$, $\gamma\in R_{\mathfrak{n}_j}^+$ with $i,j=1,\dots,s$, $i\neq j$, we have \begin{itemize} \item[a)] $\Lambda(E_\alpha)E_\beta = 0$,\quad $\Lambda(E_\alpha)E_{\pm\gamma} = 0$ and $\Lambda(E_\alpha)\overline w=0 $; \item[b)] $\Lambda(E_\alpha)E_{-\beta} = 0$ for $\beta\neq \alpha$;\quad $\Lambda(E_\alpha)E_{-\alpha} =H_\a^{01}$; \item[c)] $\Lambda(E_\alpha)w= \frac{1}{h_i}h(w,H_\alpha) E_\alpha$; \item[d)] $\Lambda(v) = \ad(v)$. \end{itemize} \end{lemma} \begin{proof} a) If $A\in\mathfrak{n}^{10}$ and $w\in\mathfrak{f}^{10}$, we have by (\ref{Nomizu}) and using the fact that $[\mathfrak{n}_i,\mathfrak{n}_i]\subseteq\mathfrak{k}_i$: \begin{equation*} \begin{split} &h(\L(E_\a)E_\b,\overline{A})=-h(E_\b,\L(E_\a)\overline{A})=-h(E_\b,[E_\a,\overline{A}]^{01})=0,\\ &h(\L(E_\a)E_\b,\overline{w})=-h(E_\b,\L(E_\a)\overline{w})=-h(E_\b,[E_\a,\overline{w}]^{01})=\a(\overline{w})h(E_\b,E_\a^{01})=0, \end{split} \end{equation*} therefore $\L(E_\a)E_\b=0$. Similarly we get $\L(E_\a)E_{\pm\gamma}=0$. Finally $$ \L(E_\a)\overline{w}=[E_\a,\overline{w}]^{01}=-\a(\overline{w})E_\a^{01}=0. $$ b) We have $\L(E_\a)E_{-\b}=[E_\a,E_{-\b}]^{01}$. If $\a\neq \b$, then $[E_\a,E_{-\b}]$ lies in the semisimple part of $\mathfrak{k}_i^c$, hence $[E_\a,E_{-\b}]^{01}= 0$. If $\a=\b$, $[E_\a,E_{-\a}]^{01}=H_\a^{01}$. c) Since we have, by a) and b), $h(\L(E_\a)w,E_{-\a})=-h(w,\L(E_\a)E_{-\a})=-h(w,H_\a)$, and $h(\L(E_\a)w,E_{-\b})=h(\L(E_\a)w,E_{-\gamma})=h(\L(E_\a)w,\overline{w'})=0$ (here $\b\in R_{\mathfrak{n}_i}^+$, $\b\neq\a$, $w'\in\mathfrak{f}^{10}$), the assertion follows. d) First we have, if $w,w'\in\mathfrak{f}^{10}$, $\L(w)\overline{w'}=[w,\overline{w'}]^{01}=0=\ad(w)\overline{w'}$ and $\L(w)E_{-\a}=[w,E_{-\a}]^{01}=-\a(w)E_{-\a}=\ad(w)E_{-\a}$.
Furthermore, \begin{equation*} \begin{split} & h(\L(w)E_{\a},\overline{w'})=-h(E_\a,\L(w)\overline{w'})=0=\a(w)h(E_\a,\overline{w'})=h(\ad(w)E_\a,\overline{w'}),\\ & h(\L(w)E_\a,E_{-\b})=-h(E_\a,\L(w)E_{-\b})=\b(w)h(E_\a,E_{-\b})=\d_{\a,\b}\a(w)h(E_\a,E_{-\a})\\ &\qquad\qquad\qquad\qquad=\a(w)h(E_\a,E_{-\b})=h(\ad(w)E_\a,E_{-\b}), \end{split} \end{equation*} therefore $\L(w)E_\a=\ad(w)E_\a$. At this point it is easily seen that $\L(w)w'=0=\ad(w)w'$, so $\L(w)=\ad(w)$. Conjugation yields also $\L(\overline{w})=\ad(\overline{w})$, hence assertion d) follows. \end{proof} Using the previous Lemma we can compute the torsion tensor as follows. \begin{lemma}\label{Tor2} Given $\alpha,\beta\in R_{\mathfrak{n}_i}^+$, $\gamma\in R_{\mathfrak{n}_j}^+$ with $i\neq j$, and $w,w'\in\mathfrak{f}^{10}$ we have \begin{itemize} \item[a)] $T(E_\a,E_\gamma) = T(E_\a,E_{\b}) = 0$; \item[b)] $T(w,E_\a) = -\frac{1}{h_i}h(w,H_\a) E_\a$; \item[c)] $T(w,w') = 0$. \end{itemize} \end{lemma} \begin{proof} a) From Lemma \ref{Lambda}(a) and (\ref{tor}), we see that $$ T(E_\a,E_\gamma)=-[E_\a,E_\gamma]_{\mathfrak{m}^c},\qquad T(E_\a,E_\b)=-[E_\a,E_\b]_{\mathfrak{m}^c}. $$ Now $\a+\gamma$ cannot be a root, therefore $[E_\a,E_\gamma]=0$. On the other hand, since $G_i/K_i$ is symmetric, $[E_\a,E_\b]\in\mathfrak{k}_i\cap \mathfrak{n}_i=0$. This proves the assertion. b) By Lemma \ref{Lambda}(c)-(d) and (\ref{tor}) we have \begin{equation*} \begin{split} T(w,E_\a)&=\L(w)E_\a-\L(E_\a)w-[w,E_\a]_{\mathfrak{m}^c}\\ &=\a(w)E_\a-\frac{1}{h_i}h(w,H_\a)\,E_\a-\a(w)E_\a=-\frac{1}{h_i}h(w,H_\a)\,E_\a. \end{split} \end{equation*} c) It follows immediately from Lemma \ref{Lambda}(d). \end{proof} \noindent In order to compute the curvature tensor $R$, we use the general formula given in \cite[p. 192]{KN} (cf. also \cite{P}): for $X,Y\in \mathfrak{m}$ \begin{equation}\label{Chern_Curvature} R(X,Y)=[\L(X),\L(Y)]-\L([X,Y]_{\mathfrak{m}})-\ad([X,Y]_\mathfrak{l}). \end{equation} \begin{lemma}\label{Curvature} Given $\alpha,\beta\in R_{\mathfrak{n}_i}^+$, $\gamma\in R_{\mathfrak{n}_j}^+$, $i\neq j$, $v_1,v_2\in\mathfrak{f}^c$, $w\in\mathfrak{f}^{10}$, we have \begin{itemize} \item[a)] $R(E_\a,\overline{E_\a})E_\b=\L(E_\a)\L(\overline{E_\a})E_\b+\b(H_\a)E_\b$, \item[b)] $R(E_\a,\overline{E_\a})E_\gamma=0$, \item[c)] $R(E_\a,\overline{E_\a})w=\frac{1}{h_i}h(w,H_\a)\,\overline{H_\a^{01}}$, \item[d)] $R(v_1,v_2)=0$. \end{itemize} \end{lemma} \begin{proof} a) Using (\ref{Chern_Curvature}) and Lemma \ref{Lambda} we have \begin{equation*} \begin{split} R(E_\a,\overline{E_\a})E_\b&=[\L(E_\a),\L(\overline{E_\a})]E_\b-\L([E_\a,\overline{E_\a}]_{\mathfrak{m}^c})E_\b-\ad([E_\a,\overline{E_\a}]_{\mathfrak{l}^c})E_\b\\ &=\L(E_\a)\L(\overline{E_\a})E_\b+\L((H_\a)_{\mathfrak{m}^c})E_\b+\ad((H_\a)_{\mathfrak{l}^c})E_\b\\ &=\L(E_\a)\L(\overline{E_\a})E_\b+[H_\a,E_\b]\\ &=\L(E_\a)\L(\overline{E_\a})E_\b+\b(H_\a)E_\b. \end{split} \end{equation*} \indent b) is proved similarly. c-d) We have, using (\ref{Chern_Curvature}), Lemma \ref{Lambda} and the fact that $\mathfrak{f}^c$ is abelian: \begin{equation*} \begin{split} R(E_\a,\overline{E_\a})w&=-\L(\overline{E_\a})\L(E_\a)w+\L((H_\a)_{\mathfrak{m}^c})w+\ad((H_\a)_{\mathfrak{l}^c})w\\ &=-\frac{1}{h_i}h(w,H_\a)\,\L(\overline{E_\a})E_\a+[H_\a,w] =\frac{1}{h_i}h(w,H_\a)\,\overline{H_\a^{01}} \end{split} \end{equation*} and, for similar reasons, $$ R(v_1,v_2)=[\L(v_1),\L(v_2)]=[\ad(v_1),\ad(v_2)]=\ad([v_1,v_2])=0. $$ \end{proof} We can now compute the second Chern-Ricci tensor $S$. 
If $\b\in R_{\mathfrak{n}_i}^+$ we have by Lemmas \ref{Lambda}, \ref{Curvature}: \begin{equation*} \begin{split} S(E_\b,&\overline{E_\b})=\sum_{j=1}^{s}\sum_{\a\in R_{\mathfrak{n}_j}^+}\frac{1}{h_j}h(R(E_\a,\overline{E_\a})E_\b,\overline{E_\b}) \\ &=\sum_{\a\in R_{\mathfrak{n}_i}^+}\frac{1}{h_i}h(R(E_\a,\overline{E_\a})E_\b,\overline{E_\b}) =-\sum_{\a\in R_{\mathfrak{n}_i}^+}\frac{1}{h_i}h(\L(\overline{E_\a})E_\b,\L(E_\a)\overline{E_\b})+\sum_{\a\in R_{\mathfrak{n}_i}^+}\b(H_\a)\\ &=-\frac{1}{h_i}h(\overline{\L(E_\b)E_{-\b}},\L(E_\b)E_{-\b})+\sum_{\a\in R_{\mathfrak{n}_i}^+}\b(H_\a) =-\frac{1}{h_i}h(\overline{H_\b^{01}},H_\b^{01})+\sum_{\a\in R_{\mathfrak{n}_i}^+} \b(H_\a)\\ &=-\frac{1}{h_i}h(\overline{H_\b^{01}},H_\b^{01})+\frac{1}{2} \end{split} \end{equation*} where we have used that $\sum_{\a\in R_{\mathfrak{n}_i}^+}H_\a=-\frac{\sqrt{-1}}{2}Z_i$ and that $\b(Z_i)=\sqrt{-1}$. Similarly, for $a,b=1,\dots,k$, \begin{equation*} \begin{split} S(V_a,\overline{V_b})&=\sum_{j=1}^{s}\sum_{\a\in R_{\mathfrak{n}_j}^+}\frac{1}{h_j}h(R(E_\a,\overline{E_\a})V_a,\overline{V_b}) \\ &=\sum_{j=1}^{s}\sum_{\a\in R_{\mathfrak{n}_j}^+}\frac{1}{h_j^2}h(V_a,H_\a)h(\overline{V_b},\overline{H^{01}_\a}) =\sum_{j=1}^{s}\sum_{\a\in R_{\mathfrak{n}_j}^+}\frac{1}{h_j^2}h(V_a,H_\a)\overline{h(V_b,H_\a)}. \end{split} \end{equation*} We now compute the tensor $Q(T)$. For $\a\in R^+_{\mathfrak{n}_i}$, $i=1,\dots,s$, set $$ e_\a:=\frac{E_\a}{\sqrt{h_i}}.$$ We fix an $h$-orthonormal basis $\{e_a\}_{a=1,\ldots,k}$ of $\mathfrak{f}^{10}$. In the following, we shall use the Greek letters $\a,\b,\dots$ as indices varying among the positive roots, while the lowercase Latin letters $a,b,\dots$ will denote indices varying in the set $\{1,\dots,k\}$. Finally, Latin capital letters $A,B,\dots$ will vary both among the positive roots and the elements of the set $\{1,\dots,k\}$. So we have, if $\b\in R^+_{\mathfrak{n}_i}$, \begin{equation*} \begin{split} Q(T)(e_\b,\overline{e_\b})&=-\frac{1}{2}\sum_{A,B} T^\b_{AB}\overline{T}^\b_{AB} =-\frac{1}{2}\sum_{\a,b} T^\b_{\a b}\overline{T}^\b_{\a b}-\frac{1}{2}\sum_{\a,b}T^\b_{b\a}\overline{T}^\b_{b\a}\\ &=-\sum_{\a,b}T^\b_{\a b}\overline{T}^\b_{\a b}=-\sum_b\frac 1{h_i^2}|h(e_b,H_\b)|^2, \end{split} \end{equation*} whence $$Q(T)(E_\b,\overline{E_\b})=-\frac{1}{h_i}\sum_b|h(e_b,H_\b)|^2 = -\frac{1}{h_i}h(\overline{H_\b^{01}},H_\b^{01}).$$ The last equality in the previous formula holds noting that, if one writes $H_\b^{01}=\sum_a\l_a\overline{e}_a$ for suitable $\l_a\in{\mathbb C}$, then $$\sum_{b=1}^k |h(e_b,H_\b)|^2= \sum_{b=1}^k|\l_b|^2= h(\overline{H_\b^{01}},H_\b^{01}). $$ Moreover, using Lemma \ref{Tor2} we immediately see that $Q(T)(\mathfrak{f},\mathfrak{f}) = 0$ and $Q(T)(\mathfrak{f},\mathfrak{n})=0$.\par We can now write an expression for the tensor $$\mathcal{K}(h) := -S(h) + Q(T),$$ which governs the flow \eqref{flow}. Namely, \begin{equation} \label{K} \begin{cases} \mathcal{K}(h)(E_\a,\overline{E_\a}) = -\frac 12 ,\quad \a\in R_{\mathfrak{n}}^+;\\ \mathcal{K}(h)(\mathfrak{f},\mathfrak{n}) =0; \\ \mathcal{K}(h)(V_a,\overline{V_b}) = -\sum_{j=1}^{s}\frac{1}{h_j^2}\sum_{\a\in R_{\mathfrak{n}_j}^+}h(V_a,H_\a)\overline{h(V_b,H_\a)}. \end{cases}\end{equation} \section{The analysis of the flow and static metrics} Starting from an invariant Hermitian metric $h_o$, the unique solution to the flow equation \eqref{flow} consists of $G$-invariant Hermitian metrics on $M$.
Therefore using \eqref{K} we can write the flow equations as follows \begin{equation}\label{InvHCF_gen} \begin{cases} h_{a\overline{b}}'=-\sum_{j=1}^{s}\frac{1}{h_j^2}\sum_{\a\in R_{\mathfrak{n}_j}^+}h(V_a,H_\a)\overline{h(V_b,H_\a)},\quad\text{for $a,b=1,\dots,k$},\\ h_i'=-\frac{1}{2},\quad\text{for $i=1,\dots,s$}. \end{cases} \end{equation} In order to write the equations in (\ref{InvHCF_gen}) relative to the fiber in a nicer form, set $R^+_{\mathfrak{n}_j}:=\{\a^j_1,\dots,\a^j_{n_j}\}$ for $j=1,\dots,s$. We note that for every $\a\in R_{\mathfrak{n}_j}^+$ ($j=1,\ldots,s$) we have $H_\a = -\frac{\sqrt{-1}}{2n_j}Z_j\ ({\rm{mod}}\ \mathfrak{s}_j)$, hence we can find coefficients $c^j_{l}$, $j=1,\ldots,s$, $l=1,\ldots,k$, so that \begin{equation}\label{cijl} H_{\a^j_i}^{01}=\sum_{l=1}^k c^j_{l}\overline{V_{l}}\qquad i=1,\ldots,n_j,\ j=1,\ldots,s. \end{equation} Thus \begin{equation*} h_{a\bar b}'= -\sum_{j=1}^s \frac 1{h_j^2}\sum_{l,m=1}^k \left( n_j c^j_{l}\overline{c^j_{m}}\right) h_{a\bar l} h_{m\bar b} = -\sum_{j=1}^s \frac 1{h_j^2}\left( H\Gamma^j H\right)_{a\bar b}, \end{equation*} where we have set for $j=1,\ldots,s$ $$(\Gamma^j)_{l\bar m} := n_j c^j_{l}\overline{c^j_{m}}.$$ Therefore we can write (\ref{InvHCF_gen}) as \begin{equation}\label{InvHCF_gen_matrix} \begin{cases} H'=-H\Gamma H\\ h_i'=-\frac{1}{2},\quad\text{for $i=1,\dots,s$}, \end{cases} \end{equation} where $$\Gamma(t) := \sum_{j=1}^s \frac 1{h_j^2(t)} \Gamma^j.$$ We note that $\Gamma$ is positive semidefinite as each $\Gamma^j$ is so, $j=1,\ldots,s$. The metric $h_o$ can be fully described by $s$ positive numbers $A_1,\ldots,A_{s}$ where \begin{equation}\label{Ai}h_o|_{\mathfrak{n}_i\times \mathfrak{n}_i} = -A_i\kappa\end{equation} and by a positive definite Hermitian $k\times k$ matrix $H_o$, which represents $h_o|_{\mathfrak{f}\times\mathfrak{f}}$ w.r.t. the basis $\mathcal V$. From \eqref{InvHCF_gen_matrix} we immediately see that \begin{equation}\label{solhi} h_i(t) = -\frac 12 t + A_i,\quad i=1,\ldots, s.\end{equation} If we set $A:= \min_{i=1,\ldots,s}\{A_i\}$, we see that $h_i(t)$ are all positive when $t\in [0,2A)$. The flow equation boils down to \begin{equation} \begin{cases} H' = -H\Gamma H\\ H(0) = H_o\end{cases}\end{equation} which can be explicitly integrated (noting that $(H^{-1})' = -H^{-1}H'H^{-1} = \Gamma$) to \begin{equation}\label{H} H^{-1}(t) = H_o^{-1} + \int_0^t \Gamma(u)\, du.\end{equation} Note that the right-hand side of \eqref{H} is positive definite for all $t\in [0,2A)$, therefore the solution $h(t)$ to the $\text{HCF}_{\text{U}}$ exists on $[0,2A)$. Moreover we notice that the maximal existence domain of $h(t)$ is of the form $(-r,2A)$, where $r\in (0,+\infty]$. \par In order to analyze the behaviour of the metric along the fiber when $t$ approaches the limit $2A$, we simply observe that \eqref{InvHCF_gen} implies \begin{equation}\label{decreasing} h(v,\bar v)'=-\sum_{j=1}^s\frac{1}{h_j^2}\sum_{\a\in R^+_{\mathfrak{n}_j}}|h(v,H_\a)|^2\leq 0,\qquad\text{for any $v\in\mathfrak{f}^{10}$}, \end{equation} therefore $ \lim_{t\rightarrow 2A}h(v,\bar v) $ exists and is non-negative. Thus when $t\rightarrow 2A$ the metric along the fiber converges to a positive semidefinite Hermitian form $\hat h$. \begin{proposition}\label{prop} There is a compact normal subgroup $U$ of $G$ such that the orbits of $U^c$ are the leaves of the distribution defined by the kernels of $\hat{h}$. \end{proposition} \begin{proof} We consider the distribution $\mathcal Q$ on $M$ which is defined for $x\in M$ by $\mathcal Q_x :=\{v\in T_xM|\ \hat h_x(v,w)=0,\ \forall w\in T_xM\}$.
It is clear that $\mathcal Q$ is $G$-invariant and $J$-stable, so that it is enough to study it at $o:= [L]\in M=G/L$, where we can see $\mathfrak{q}:= \mathcal Q_o$ as a $J$-stable subspace of $\mathfrak{m}$. We write $\mathfrak{q}^c = \mathfrak{q}^{10}\oplus \mathfrak{q}^{01}$. We rearrange the indices so that $A=A_1=\ldots=A_p<A_i$ for $i=p+1,\dots,s$ and we define $\mathcal{Z}_p$ to be the complex subspace of $\mathfrak{f}^{01}$ generated by $Z_1^{01},\dots,Z_p^{01}$. The limit form $\hat h$ is described by a pair $(\hat h^N,\hat h^F)$, where $\hat h^N$ is an $\Ad(L)$-invariant Hermitian form on $\mathfrak{n}$ whose kernel is given by $\mathfrak{n}_1\oplus\ldots\oplus \mathfrak{n}_p$ and $\hat h^F$ is a positive semidefinite Hermitian form on $\mathfrak{f}$. \begin{lemma}\label{q} $\mathfrak{q}^c = \mathfrak{n}_1^c\oplus\ldots\oplus \mathfrak{n}_p^c\oplus \mathcal Z_p\oplus \overline{\mathcal Z_p}$. \end{lemma} \begin{proof} It is enough to prove that $\mathcal Z_p = (\ker \hat h^F)^{01}$. Throughout the following we will identify $\mathfrak{f}^{01}$ with ${\mathbb C}^k$ by means of the basis $\overline{\mathcal{V}}$. Observe that \eqref{H} reads $$ H^{-1}(t) = H_o^{-1} -\frac{2t}{A(t-2A)}\Theta_p +\sum_{j=p+1}^s\left(\int_0^t\frac{1}{h_j^2(u)}du\right)\Gamma^j, $$ where $\Theta_p:=\sum_{j=1}^p\Gamma^j$. Since the image of $\Gamma^j$ is spanned by $Z_j^{01}$ for $j=1,\dots,s$, we see that $\Theta_p(\mathcal{Z}_p)\subseteq\mathcal{Z}_p$. Moreover, as each $\Gamma^j$ is Hermitian positive semidefinite, we have that $\ker \Theta_p = \bigcap_{j=1}^p \ker \Gamma^j = \mathcal Z_p^\perp$, where the orthogonal space $\mathcal Z_p^\perp$ is taken with respect to the standard Hermitian structure on $\mathbb C^k$. This implies that $\ker \Theta_p \cap \mathcal Z_p = \{0\}$ and therefore $\Theta_p(\mathcal Z_p) = \mathcal Z_p$. We also observe that $\Theta_p$ is diagonalizable and therefore its image $\mathcal Z_p$ is the sum of all eigenspaces with non-zero eigenvalues. If we set $q:= \dim_{\small{\mathbb{C}}}\mathcal Z_p$, there exists $U\in {\mathrm U}(k)$ so that $$\Delta:= U\Theta_p \bar U^T$$ is the diagonal matrix ${\rm{diag}}(\mu_1,\ldots,\mu_q,0,\ldots,0)$, $\mu_i\neq 0$ for $i=1,\ldots,q$. Then if we set $H_u(t) := UH(t)\bar U^T$ we have for $t\in [0,2A)$ \begin{equation}\label{flow_u} H_u^{-1}(t) = \Lambda(t) - \frac{2t}{A(t-2A)}\Delta,\end{equation} where $\Lambda(t)$ is positive definite for all $t\in[0,2A)$ and, when $t\to 2A$, it converges to a positive definite matrix $$\hat \Lambda := \lim_{t\rightarrow 2A} \Lambda(t).$$ We have $$H_u(t) = \frac{ {\rm {adj}}(H_u^{-1}(t))}{\det H_u^{-1}(t) }$$ and using \eqref{flow_u} we see that $$ \lim_{t\rightarrow 2A}(2A-t)^q \det H_u^{-1}(t) =4^q\mu_1\cdot\ldots\cdot\mu_q \det \hat{\L}_o> 0 $$ where $\hat{\L}_o$ is the minor of $\hat{\L}$ obtained by intersecting the last $k-q$ rows and the last $k-q$ columns. Moreover $$ \lim_{t\rightarrow 2A}(2A-t)^q {\rm {adj}}(H_u^{-1}(t))_{a\bar b}=0,\quad\forall\,a\in\{1,\dots,q\},\;\forall\,b\in\{1,\dots,k\}, $$ hence $$ \lim_{t\rightarrow 2A} (H_u(t))_{a\bar b}=0,\quad\forall\,a\in\{1,\dots,q\},\;\forall\,b\in\{1,\dots,k\}, $$ while for $a,b=q+1,\dots,k$, $$ \lim_{t\rightarrow 2A} (H_u(t))_{a\bar b} =(\hat{\L}_o^{-1})_{(a-q)\overline{(b-q)}}. $$ Thus we have that $$ \lim_{t\rightarrow 2A}H_u(t) = \left(\! \begin{array}{cc} 0 & 0\\ 0 & \hat{\L}_o^{-1} \end{array}\! \right)\!, $$ where $\hat{\L}_o^{-1}$ is $(k-q)\times (k-q)$ and positive definite. The claim follows.
\end{proof} We define $U:= G_1\cdot\ldots\cdot G_p$. We observe that the universal complexification $U^c$ acts on $M$ and that the $U^c$-orbits define a $G$-invariant foliation. At the point $o\in M$ we have that $T_o(U^c\cdot o) = \mathfrak{n}_1\oplus\ldots\oplus\mathfrak{n}_p\oplus \Span\{ (Z_1)_\mathfrak{f},\ldots,(Z_p)_\mathfrak{f}, I_F((Z_1)_\mathfrak{f}),\ldots,I_F((Z_p)_\mathfrak{f})\} = \mathfrak{q}$ by Lemma \ref{q}. Therefore the $U^c$-orbits coincide with the leaves of $\mathcal Q$. \end{proof} \begin{remark} Since the complex structure $I_F$ along the fiber is totally arbitrary, the $U^c$-orbits are not necessarily closed. Note that $G$ acts transitively on the set of $U^c$-orbits and therefore one such orbit is closed if and only if all are so. \end{remark} \subsection{Static metrics} We say that an invariant Hermitian metric $h$ on $M$ is {\em static} for the $\text{HCF}_{\text{U}}$ if there exists $\l\in{\mathbb R}$ such that \begin{equation}\label{static} -\mathcal{K}(h) = \l h. \end{equation} In terms of algebraic data on $\gg$ this equation becomes \begin{equation}\label{static_matrix} \begin{cases} \l H=H\Gamma H,\\ \l h_i=\frac 12, & \text{for $i=1,\dots,s$}. \end{cases} \end{equation} This immediately implies that \eqref{static} has no solution if $\l\leq 0$. On the other hand, if $\l>0$, the second equation in \eqref{static_matrix} gives \begin{equation}\label{static1} h_i=\frac{1}{2\l}>0,\quad\text{for $i=1,\dots,s$}. \end{equation} Then the first equation in \eqref{static_matrix} reads \begin{equation}\label{static2} H=\frac{1}{4\l}\Theta_s^{-1}, \end{equation} where we note that $\Theta_s$ is invertible as it is shown in the proof of Lemma \ref{q}. Using \eqref{static1}, \eqref{static2} we can therefore define, for any $\l>0$, a static metric satisfying \eqref{static} for the chosen $\l$. Suppose now that in \eqref{Ai} the initial conditions satisfy $A=A_1=\cdots=A_s$. Clearly in this case we have $ \lim_{t\rightarrow 2A}h_i(t)=0$ for $i=1,\dots,s $ and $$\int_0^t \Gamma(u)du = \frac{2t}{A(2A-t)}\Theta_s,\qquad t\in [0,2A).$$ On the other hand we have $$ \lim_{t\rightarrow 2A} H(t) =\lim_{t\rightarrow 2A}\frac{{\rm{adj}}(H^{-1}(t))}{\det H^{-1}(t)}=0, $$ as one can easily verify that \begin{equation}\label{lim}\begin{cases} \lim_{t\rightarrow 2A} (2A-t)^k \det H^{-1}(t) =4^k\det\Theta_s>0,\\ \lim_{t\rightarrow 2A} (2A-t)^{k-1}{{\rm{adj}}(H^{-1}(t))}=4^{k-1}{\rm{adj}}(\Theta_s). \end{cases}\end{equation} Thus $$ \lim_{t\rightarrow 2A}h(t)=0. $$ We now consider the Hermitian metric $\tilde h(t)$ which is homothetic to $h(t)$ and has unitary volume, namely $\tilde h(t):= c(t) h(t)$ with $c(t) = (\vol_{h(t)}(M))^{-1/m}$. Now we see that there exists a positive constant $V$ so that $$ \vol_{h(t)}(M)=V\cdot \det H(t)\cdot \prod_{i=1}^s(2A-t)^{n_i} =V\cdot\frac{(2A-t)^{m}}{(2A-t)^k\det H^{-1}(t)},$$ where we have used that $m=k+\sum_{i=1}^sn_i$. We can then write $$c(t)=\frac{\xi(t)}{2A-t},\qquad \xi(t):=\left(V^{-1}(2A-t)^k\det H^{-1}(t)\right)^{1/m} $$ and note that by \eqref{lim}, $$ \lim_{t\rightarrow 2A}\xi(t)=\left(V^{-1}4^k\det\Theta_s\right)^{1/m}:= \xi>0. $$ We now have, for $i=1,\dots,s$, $$ \lim_{t\rightarrow 2A}c(t)h_i(t)=\lim_{t\rightarrow 2A}\frac{\xi(t)}{2A-t}\cdot\frac{2A-t}{2}=\frac{\xi}{2}. 
$$ Meanwhile, using \eqref{lim}, $$ \lim_{t\rightarrow 2A}c(t)H(t) =\frac{\xi}{4}\cdot \frac{{\rm {adj}}(\Theta_s)}{\det \Theta_s} = \frac \xi 4\cdot \Theta_s^{-1}. $$ Therefore the solution $\tilde h(t)$ to the normalized flow converges to the static metric satisfying \eqref{static} with $\l=1/\xi$.\par \medskip We can then formulate our first result as follows. \begin{theorem}\label{Main} Let $M=G/L$ be a simply connected C-space with Tits fibration over a Hermitian symmetric space $N$ whose irreducible factors have complex dimension at least two and are not complex quadrics. Any $G$-invariant Hermitian metric $h$ on $M$ is determined by a pair $(h^N,h^F)$, where $h^N$ is an $\Ad(L)$-invariant Hermitian metric on $\mathfrak{n}$ and $h^F$ is an arbitrary Hermitian metric on the fiber $\mathfrak{f}$.\par Given any initial invariant Hermitian metric $(h^N_o,h^F_o)$ on $M$, we have: \begin{itemize} \item[i)] the solution $h(t)$ to the $\text{HCF}_{\text{U}}$ is given by the pair $(h^N(t),h^F(t))$, where $h^N(t)$ depends only on $h^N_o$; \item[ii)] the maximal existence domain of $h^N(t)$ is an interval $(-\infty,T)$ with $0<T< +\infty$ and the solution $h^F(t)$ exists and is positive definite on an interval $(-r,T)$ with $0<r\leq+\infty$. When $t\to T$, the base $N$ collapses to a product of some Hermitian symmetric spaces and the metric $h(t)$ converges to a positive semidefinite Hermitian bilinear form $\hat h$. The distribution given by the kernel of $\hat h$ is integrable and its leaves coincide with the $U^c$-orbits for a suitable compact connected normal subgroup $U\subseteq G$; \item[iii)] the manifold $M$ admits a unique (up to homotheties) invariant Hermitian metric which is static for the $\text{HCF}_{\text{U}}$. If $h^N(t)\rightarrow 0$ when $t\rightarrow T$, then $h(t)\rightarrow 0$ and the normalized flow with constant volume converges to a static metric. \end{itemize} \end{theorem} The following proposition can be thought of as a complement to the main Theorem \ref{Main} when the $U^c$-orbits are closed. We keep the same notation as in Theorem \ref{Main} and we consider the foliation, again denoted by $\mathcal Q$, given by the $U^c$-orbits. If $d_t$ denotes the distance on $M$ induced by the metric tensor $h_t$ ($t\in [0,T)$), we describe the Gromov-Hausdorff limit of the metric spaces $(M,d_t)$ when $t\to T$. \begin{proposition}\label{GH} If the $U^c$-orbits are closed, the leaf space $\overline M := M/\mathcal Q$ has the structure of a smooth homogeneous manifold $G/\hat U$ for some closed subgroup $\hat U\subseteq G$. Moreover the positive semidefinite tensor $\hat h$ induces a $G$-invariant Riemannian metric $\overline h$ on $\overline M$ with induced distance $\bar d$ and the metric spaces $(M,d_t)$ Gromov-Hausdorff converge to $(\overline M,\bar d)$ when $t\to T$. \end{proposition} \begin{proof} Note that $G$ acts transitively on the set of leaves of the $G$-invariant foliation $\mathcal Q$. The closedness of the leaves implies that $\overline M$ is Hausdorff and it can be expressed as a coset space $G/\hat U$ for some closed subgroup $\hat U$ that contains both $U$ and $L$. At the level of Lie algebras we can write the decomposition $$\gg = \hat\mathfrak{u} \oplus \hat\mathfrak{m},$$ where the subspace $\hat\mathfrak{m}$ is defined as the $\kappa$-orthocomplement of $\hat\mathfrak{u}$. As $\mathfrak{l}\subset \hat\mathfrak{u}$ we have that $\hat\mathfrak{m}\subset \mathfrak{m}$.
Moreover $\mathfrak{u} = \bigoplus_{i=1}^p\gg_i\subset \hat\mathfrak{u}$ implies that $\hat\mathfrak{m}\subseteq \bigoplus_{j=p+1}^s\gg_j$. We now note that the $G$-equivariant projection $\pi:M\to N$ maps the orbit $U^c\cdot o$ onto the $U$-orbit $\prod_{i=1}^pN_i\times \prod_{j=p+1}^s\{[K_j]\}$, where $N_i:= G_i/K_i$ for $i=1,\ldots,s$, whence $\hat U\subseteq \prod_{i=1}^p G_i \times \prod_{j=p+1}^sK_j$. Then we can write $$\hat \mathfrak{m} = \bigoplus_{j=p+1}^s\mathfrak{n}_j \oplus \hat \mathfrak{f},\quad \hat\mathfrak{f} \subseteq \bigoplus_{j=p+1}^s\mathbb R Z_j.$$ This implies in particular that $\Ad(\hat U)|_{\hat\mathfrak{f}} = \mbox{Id}$. Therefore the restriction $\hat h|_{\hat\mathfrak{m}\times\hat\mathfrak{m}}$ gives a positive definite $\Ad(\hat U)$-invariant inner product which descends to a $G$-invariant Riemannian metric $\bar h$ on $\overline M$. \par We now prove the last statement, namely that for $x,y\in M$ we have $\lim_{t\to T}d_t(x,y) = \bar d(p(x),p(y))$, where $\bar d$ is the distance induced by $\bar h$ and $p:M\rightarrow\overline{M}$ is the projection. We denote by $\gamma^t$ a minimizing geodesic for $h_t$ joining $x$ with $y$. As \eqref{solhi} and \eqref{decreasing} imply that $h_t(v,v)$ is a non-increasing function of $t$ for every tangent vector $v$, we see that \begin{equation*}\begin{split} d_t(x,y) &= \int_0^1 h_t(\frac{d\gamma^t}{ds},\frac{d\gamma^t}{ds})^{1/2}\ ds\geq \int_0^1\hat h(\frac{d\gamma^t}{ds},\frac{d\gamma^t}{ds})^{1/2}\ ds\\ &= \int_0^1\bar h(\frac{d(p\circ\gamma^t)}{ds},\frac{d(p\circ\gamma^t)}{ds})^{1/2}\ ds \geq \bar d(p(x),p(y)). \end{split}\end{equation*} This means that $$\liminf_{t\to T}d_t(x,y) \geq \bar d(p(x),p(y)). $$ On the other hand let $\gamma$ be a minimizing geodesic for $\bar h$ connecting $p(x)$ with $p(y)$. Let $\tilde \gamma$ be a lift of $\gamma$ starting at $x$ with ending point $\tilde y$. We choose a path $\eta$ in $p^{-1}(p(y))$ connecting $y$ with $\tilde y$. Then $$d_t(x,y)\leq d_t(x,\tilde y)+d_t(\tilde y,y)\leq \int_0^1h_t(\tilde \gamma'(s),\tilde \gamma'(s))^{1/2} ds + \int_0^1 h_t(\eta'(s),\eta'(s))^{1/2}ds.$$ Now $$\lim_{t\to T} \int_0^1 h_t(\eta'(s),\eta'(s))^{1/2} ds = 0 ,$$ while \begin{equation*}\begin{split} \lim_{t\to T} \int_0^1h_t(\tilde \gamma'(s),\tilde \gamma'(s))^{1/2}ds &= \int_0^1\hat h(\tilde \gamma'(s),\tilde \gamma'(s))^{1/2}ds\\ &=\int_0^1\bar h(\gamma'(s),\gamma'(s))^{1/2}ds = \bar d(p(x),p(y)). \end{split}\end{equation*} This implies that $$\limsup_{t\to T} d_t(x,y) \leq \bar d(p(x),p(y))$$ and this concludes the proof. \end{proof} \begin{remark} Note that the homogeneous space $\overline{M}$ might not carry any ($G$-invariant) complex structure. \end{remark} \subsection{Example} A C-space $M$ will be called of {\it Calabi-Eckmann type} if $M = G/L$, where $G = G_1\cdot G_2$, $L = L_1\cdot L_2$ and $L_i$ is the semisimple part of $K_i$ for $i=1,2$ (see \cite{CE},\cite{W}). The space $M$, which is a $T^2$-bundle over a product of two Hermitian symmetric spaces, can be endowed with many invariant complex structures as described in Section 2. We consider now a more general C-space $M$ which is given by a product of C-spaces of Calabi-Eckmann type $M_1,M_3,\ldots,M_{2k-1}$ with $M_i = G_i\cdot G_{i+1}/L_i\cdot L_{i+1}$, endowed with the invariant complex structure $J$ given by the product of invariant complex structures on each factor $M_i$, $i=1,3,\ldots,2k-1$. \par We now consider an initial datum $(h_o^N,h_o^F)$, where $h_o^N$ is determined by a sequence $(A_1,\ldots,A_{2k})$.
We can rearrange the indices as in the proof of Proposition \ref{prop}, i.e. $A=A_1=\dots=A_p < A_j$ for all $j=p+1,\ldots,2k$. Note that there is an involution $\sigma$ of the set $\{1,\ldots,2k\}$ so that for each index $1\leq l\leq 2k$ we have $JZ_l\in \Span\{Z_l,Z_{\sigma(l)}\}$. Hence $$T_o(U^c\cdot o) = \mathfrak{n}_1\oplus\ldots\oplus \mathfrak{n}_p \oplus \Span\{Z_1,\ldots, Z_p,Z_{\sigma(1)},\ldots, Z_{\sigma(p)}\}.$$ Note that in this case the $U^c$-orbits are closed. When $t\to 2A$ the manifold $M$ collapses to a product of Hermitian symmetric spaces and a lower-dimensional C-space $M'$ of Calabi-Eckmann type. Indeed, if we set $\mathcal I_1 := \{p+1,\ldots,2k\}\cap \sigma(\{p+1,\ldots,2k\})$ and $\mathcal I_2 := \{p+1,\ldots,2k\}\setminus \mathcal I_1$, then the manifold collapses to $$\left(\prod_{i\in \mathcal I_2} G_i/K_i\right) \times M',\qquad M':= \prod_{i\in \mathcal I_1} G_i/L_i. $$
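As a simple illustration, which follows immediately from the formulas above: take $k=1$, so that $M = G_1\cdot G_2/L_1\cdot L_2$ is a single space of Calabi-Eckmann type, and suppose $A_1 < A_2$, i.e. $p=1$. Then $\sigma(1)=2$, so $\mathcal I_1 = \{2\}\cap\sigma(\{2\}) = \{2\}\cap\{1\} = \emptyset$ and $\mathcal I_2 = \{2\}$: when $t\to 2A_1$ the whole fiber collapses together with $\mathfrak{n}_1$, and $M$ collapses to the Hermitian symmetric space $G_2/K_2$.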
\section{Introduction} Let $(\Omega,\mathcal{F},P)$ be a probability space where all the random objects of this paper will be defined. The expectation of a random variable $X$ with values in a Euclidean space will be denoted by $E[X]$. We consider the following optimization problem \begin{equation}\label{eq_prob} F^*: = \min_{x \in \mathbb{R}^d} F(x),\mbox{ where } F(x):= E\left[ f(x,Z) \right] = \int_{\mathcal{Z}}{f(x,z) \mu(dz)} ,\ x\in\mathbb{R}^d \end{equation} and $Z$ is a random element in some measurable space $\mathcal{Z}$ with an unknown probability law $\mu$. The function $x \mapsto f(x,z)$ is assumed continuously differentiable (for each $z$) but it can possibly be non-convex. Suppose that one has access to i.i.d.\ samples $\mathbf{Z} = (Z_1,...,Z_n)$ drawn from $\mu$, where $n \in \mathbb{N}$ is fixed. Our goal is to compute an approximate minimizer $X^{\dagger}$ such that the \textit{population risk} $$E[F(X^{\dagger})] - F^*$$ is minimized, where the expectation is taken with respect to the training data $\mathbf{Z}$ and the additional randomness generating $X^{\dagger}$. Since the distribution of $Z_i, i \in \mathbb{N}$ is unknown, we consider the \textit{empirical risk minimization} problem \begin{equation}\label{eq_prob_emp} \min_{x \in \mathbb{R}^d} F_{\mathbf{z}}(x), \text{ where } F_{\mathbf{z}}(x):= \frac{1}{n} \sum_{i=1}^n f(x,z_i) \end{equation} using the dataset $\mathbf{z} := \{z_1,...,z_n\}$.

Stochastic gradient algorithms based on Langevin Monte Carlo have gained more attention in recent years. Two popular algorithms are Stochastic Gradient Langevin Dynamics (SGLD) and Stochastic Gradient Hamiltonian Monte Carlo (SGHMC). First, we summarize the use of SGLD in optimization, as presented in \cite{raginsky}. Consider the overdamped Langevin stochastic differential equation \begin{equation}\label{langevin} dX_t = - \nabla F_{\mathbf{z}}(X_t) dt + \sqrt{2 \beta^{-1}} dB_t, \end{equation} where $(B_t)_{t \ge 0}$ is the standard Brownian motion in $\mathbb{R}^d$ and $\beta >0$ is the inverse temperature parameter. Under suitable assumptions on $f$, the SDE (\ref{langevin}) admits the Gibbs measure $\pi_{\mathbf{z}}(dx) \propto \exp(-\beta F_{\mathbf{z}}(x))$ as its unique invariant distribution. In addition, it is known that for sufficiently large $\beta$, the Gibbs distribution concentrates around the global minimizers of $F_{\mathbf{z}}$. Therefore, one can use the value of $X_t$ from (\ref{langevin}) (or from its discretized counterpart, SGLD) as an approximate solution to the empirical risk problem, provided that $t$ is large and the temperature is low. In this paper, we consider the underdamped (second-order) Langevin diffusion \begin{eqnarray} dV_t &=& - \gamma V_tdt - \nabla F_{\mathbf{z}}(X_t)dt + \sqrt{2 \gamma \beta^{-1}}dB_t, \label{eq_V}\\ dX_t &=& V_tdt \label{eq_X}, \end{eqnarray} where $(X_t)_{t \ge 0}, (V_t)_{t \ge 0}$ model the position and the momentum of a particle moving in a field of force $F_{\mathbf{z}}$ with random force given by Gaussian noise. It is known that, under suitable conditions on $F_{\mathbf{z}}$, the Markov process $(X,V)$ is ergodic and has a unique stationary distribution $$\pi_{\mathbf{z}}(dx,dv) = \frac{1}{\Gamma_{\mathbf{z}}} \exp\left( -\beta\left( \frac{1}{2}\|v\|^2 + F_{\mathbf{z}}(x) \right) \right) dx dv $$ where $\Gamma_{\mathbf{z}}$ is the normalizing constant $$\Gamma_{\mathbf{z}} = \left( \frac{2 \pi}{\beta} \right)^{d/2} \int_{\mathbb{R}^d} {e^{-\beta F_{\mathbf{z}}(x)}dx}.
$$ It is easy to observe that the $x$-marginal distribution of $\pi_{\mathbf{z}}(dx,dv)$ is the invariant distribution $\pi_{\mathbf{z}}(dx)$ of (\ref{langevin}). We consider the first-order Euler discretization of (\ref{eq_V}), (\ref{eq_X}), also called Stochastic Gradient Hamiltonian Monte Carlo (SGHMC), given as follows \begin{eqnarray} \overline{V}^{\lambda}_{k+1} &=& \overline{V}^{\lambda}_k - \lambda[\gamma \overline{V}^{\lambda}_k + \nabla F_{\mathbf{z}}(\overline{X}^{\lambda}_k)] + \sqrt{2 \gamma \beta^{-1} \lambda} \xi_{k+1}, \qquad \overline{V}^{\lambda}_0 = v_0,\label{eq_V_dis_aver}\\ \overline{X}^{\lambda}_{k+1} &=& \overline{X}^{\lambda}_k + \lambda \overline{V}^{\lambda}_{k}, \qquad \overline{X}^{\lambda}_0 = x_0, \label{eq_X_dis_aver} \end{eqnarray} where $\lambda>0$ is a step size parameter and $(\xi_k)_{k \in \mathbb{N}}$ is a sequence of i.i.d.\ standard Gaussian random vectors in $\mathbb{R}^d$. The initial condition $v_0,x_0$ may be random, but independent of $(\xi_k)_{k \in \mathbb{N}}$. In certain contexts, full knowledge of the gradient $\nabla F_{\mathbf{z}}$ is not available; however, using the dataset $\mathbf{z}$, one can construct unbiased estimates of it. In what follows, we adopt the general setting given by \cite{raginsky}. Let $\mathcal{U}$ be a measurable space, and $g: \mathbb{R}^d \times \mathcal{U} \to \mathbb{R}^d$ such that for any $\mathbf{z} \in \mathcal{Z}^n$, \begin{equation}\label{eq_unbiased} E\left[g(x,U_{\mathbf{z}}) \right] = \nabla F_{\mathbf{z}}(x), \forall x \in \mathbb{R}^d, \end{equation} where $U_{\mathbf{z}}$ is a random element in $\mathcal{U}$ with probability law $Q_{\mathbf{z}}$. Conditionally on $\mathbf{Z} = \mathbf{z}$, the SGHMC algorithm is defined by \begin{eqnarray} V^{\lambda}_{k+1} &=& V^{\lambda}_k - \lambda[\gamma V^{\lambda}_k + g(X^{\lambda}_k,U_{\mathbf{z},k})] + \sqrt{2 \gamma \beta^{-1} \lambda} \xi_{k+1}, \qquad V^{\lambda}_0 =v_0, \label{eq_V_dis_appr}\\ X^{\lambda}_{k+1} &=& X^{\lambda}_k + \lambda V^{\lambda}_k, \qquad X^{\lambda}_0 = x_0,\label{eq_X_dis_appr} \end{eqnarray} where $(U_{\mathbf{z},k})_{k \in \mathbb{N}}$ is a sequence of i.i.d.\ random elements in $\mathcal{U}$ with law $Q_{\mathbf{z}}$. We also assume from now on that $v_0, x_0, (U_{\mathbf{z},k})_{k \in \mathbb{N}}, (\xi_k)_{k \in \mathbb{N}}$ are independent. Our ultimate goal is to find approximate global minimizers of the problem (\ref{eq_prob}). Let $X^{\dagger} := X^{\lambda}_k$ be the output of the algorithm (\ref{eq_V_dis_appr}),(\ref{eq_X_dis_appr}) after $k \in \mathbb{N}$ iterations, and $(\widehat{X}^*_{\mathbf{z}},\widehat{V}^*_{\mathbf{z}})$ be such that $\mathcal{L}(\widehat{X}^*_{\mathbf{z}},\widehat{V}^*_{\mathbf{z}}) = \pi_{\mathbf{z}}$. The excess risk is decomposed as follows, see also \cite{raginsky}, \begin{eqnarray} E[F(X^{\dagger})] - F^* &=& \underbrace{\left( E[F(X^{\dagger})] - E[F(\widehat{X}^*_{\mathbf{z}})] \right)}_{\mathcal{T}_1} + \underbrace{\left( E[F(\widehat{X}^*_{\mathbf{z}})] - E[F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}})] \right)}_{\mathcal{T}_2} \nonumber \\ && + \underbrace{\left(E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}}) - F^* \right]\right) }_{\mathcal{T}_3}.\label{eq_decom_risk0} \end{eqnarray} The remainder of the paper is devoted to bounding these three terms. Section \ref{sec_main} summarizes the technical conditions and the main results. Comparison of our contributions to previous studies is discussed in Section \ref{sec_related}. Proofs are given in Section \ref{sec_proof}.
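To fix ideas, the following is a minimal, purely illustrative sketch of the SGHMC recursion (\ref{eq_V_dis_appr})--(\ref{eq_X_dis_appr}) in Python/NumPy, together with a minibatch gradient estimate of the kind discussed after the assumptions below. The function names, the toy objective and all parameter values are our own choices for illustration and are not part of the analysis.
\begin{verbatim}
import numpy as np

def sghmc(grad_est, x0, v0, n_iter, lam, gamma, beta, rng):
    # One realization of the SGHMC recursion:
    #   V_{k+1} = V_k - lam*(gamma*V_k + g(X_k, U_k))
    #             + sqrt(2*gamma*lam/beta) * xi_{k+1}
    #   X_{k+1} = X_k + lam*V_k
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    noise = np.sqrt(2.0 * gamma * lam / beta)
    for _ in range(n_iter):
        x_next = x + lam * v                      # uses the current velocity V_k
        v = (v - lam * (gamma * v + grad_est(x))  # gradient estimated at X_k
             + noise * rng.standard_normal(x.shape))
        x = x_next
    return x, v

def minibatch_grad(grad_f, data, batch_size, rng):
    # Unbiased estimate of grad F_z(x): average of grad f(x, z_i) over a
    # minibatch drawn uniformly with replacement from the dataset z.
    def grad_est(x):
        idx = rng.integers(0, len(data), size=batch_size)
        return np.mean([grad_f(x, data[i]) for i in idx], axis=0)
    return grad_est

# Toy example: f(x, z) = 0.5*||x - z||^2, so F_z is minimized at the sample mean.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=(100, 2))
grad_est = minibatch_grad(lambda x, z: x - z, data, batch_size=10, rng=rng)
x_out, _ = sghmc(grad_est, x0=np.zeros(2), v0=np.zeros(2),
                 n_iter=5000, lam=1e-2, gamma=1.0, beta=1e3, rng=rng)
\end{verbatim}
For small $\lambda$ and large $\beta$, the output concentrates near the minimizer of $F_{\mathbf{z}}$ (here the sample mean), in line with the Gibbs-measure heuristic recalled above.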
\emph{Notation and conventions.} For $l\geq 1$, the scalar product in $\mathbb{R}^{l}$ is denoted by $\langle \cdot,\cdot\rangle$. We use $\| \cdot \|$ to denote the Euclidean norm (where the dimension of the space may vary). $\mathcal{B}(\mathbb{R}^{l})$ denotes the Borel $\sigma$-field of $\mathbb{R}^{l}$. For any $\mathbb{R}^{l}$-valued random variable $X$ and for any $1\leq p<\infty$, let us set $\Vert X\Vert_p:=E^{1/p}\|X\|^p$. We denote by $L^p$ the set of $X$ with $\Vert X\Vert_p<\infty$. The Wasserstein distance of order $p \in [1,\infty)$ between two probability measures $\mu$ and $\nu$ on $\mathcal{B}(\mathbb{R}^{l})$ is defined by \begin{equation}\label{w_dist} \mathcal{W}_p(\mu,\nu) = \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^l \times \mathbb{R}^l} \Vert x-y\Vert^p d\pi(x,y) \right)^{1/p}, \end{equation} where $\Pi(\mu,\nu)$ is the set of couplings of $(\mu, \nu)$, see e.g. \cite{v}. For two $\mathbb{R}^l$-valued random variables $X$ and $Y$, we denote $\mathfrak{W}_2(X,Y):= \mathcal{W}_2(\mathcal{L}(X),\mathcal{L}(Y))$, where $\mathcal{L}(X)$ is the law of $X$. We do not indicate $l$ in the notation and it may vary. \section{Assumptions and main results}\label{sec_main} The following conditions are required throughout the paper. \begin{assumption}\label{as_f_bound} The function $f$ is continuously differentiable, takes non-negative values, and there are constants $A_0,B \ge 0$ such that for any $z \in \mathcal{Z}$, $$|f(0,z)| \le A_0, \qquad \|\nabla f(0,z)\| \le B.$$ \end{assumption} \begin{assumption}\label{as_lip} There is $M>0$ such that, for each $z \in \mathcal{Z}$, \begin{equation*} \| \nabla f(x_1,z) - \nabla f(x_2,z) \| \le M\|x_1 - x_2\|, \qquad \forall x_1,x_2 \in \mathbb{R}^d. \end{equation*} \end{assumption} \begin{assumption}[Dissipative]\label{as_dissip} There exist constants $m >0, b \ge 0$ such that \begin{equation*} \left\langle x, \nabla f(x,z) \right\rangle \ge m\|x\|^2 -b, \qquad \forall x \in \mathbb{R}^d, z \in \mathcal{Z}. \end{equation*} \end{assumption} \begin{assumption}\label{as_lip_g} For each $u\in \mathcal{U}$, it holds that $\|g(0,u)\| \le B$ and \begin{equation*} \|g(x_1, u) - g(x_2, u)\| \le M \| x_1 - x_2 \|, \qquad \forall x_1, x_2 \in \mathbb{R}^d. \end{equation*} \end{assumption} \begin{assumption}\label{as_variance} There exists a constant $\delta >0$ such that for every $\mathbf{z} \in \mathcal{Z}^n$, $$E\|g(x,U_{\mathbf{z}}) - \nabla F_{\mathbf{z}}(x) \|^2 \le 2\delta(M^2 \|x\|^2 + B^2). $$ \end{assumption} \begin{assumption}\label{ass_exp_moment} The law $\mu_0$ of the initial state $(x_0,v_0)$ satisfies $$ \int_{\mathbb{R}^{2d}}{ e^{\mathcal{V}(x,v)} d\mu_0(x,v) } < \infty,$$ where $\mathcal{V}$ is the Lyapunov function defined in (\ref{eq:lyapunov}) below. \end{assumption} \begin{remark} If the set of global minimizers is bounded, we can always redefine the function $f$ to be quadratic outside a compact set containing the origin while maintaining its minimizers. Hence, Assumption \ref{as_dissip} can be satisfied in practice. Assumption \ref{as_lip_g} means that the estimated gradient is also Lipschitz when using the same training dataset. For example, at each iteration of SGHMC, we may sample uniformly with replacement a random minibatch of size $\ell$. Then we can choose $U_{\mathbf{z}} = (z_{I_1},...,z_{I_{\ell}})$ where $I_1,...,I_{\ell}$ are i.i.d.\ random variables having distribution $\text{Uniform}(\{1,...,n\})$.
The gradient estimate is thus $$g(x,U_{\mathbf{z}}) = \frac{1}{\ell} \sum_{j=1}^{\ell} \nabla f(x, z_{I_j}),$$ which is clearly unbiased, and Assumption \ref{as_lip_g} will be satisfied whenever Assumptions \ref{as_lip} and \ref{as_f_bound} are in force. Assumption \ref{as_variance} controls the variance of the gradient estimate. \end{remark} An auxiliary continuous-time process is needed in the subsequent analysis. For a step size $\lambda >0$, denote by $B^{\lambda}_t := \frac{1}{\sqrt{\lambda}} B_{\lambda t}$ the scaled Brownian motion. Let $\widehat{V}(t,s,(v,x)), \widehat{X}(t,s,(v,x))$ be the solutions of \begin{eqnarray} \qquad d\widehat{V}(t,s,(v,x)) &=& - \lambda \left( \gamma \widehat{V}(t,s,(v,x)) + \nabla F_{\mathbf{z}}(\widehat{X}(t,s,(v,x)))\right) dt + \sqrt{2 \gamma \lambda \beta^{-1}}dB^{\lambda}_t, \label{eq_V_hat}\\ \qquad d\widehat{X}(t,s,(v,x)) &=& \lambda \widehat{V}(t,s,(v,x))dt \label{eq_X_hat}, \end{eqnarray} with initial condition $\widehat{V}_s = v, \widehat{X}_s = x$ where $v,x$ may be random but independent of $(B^{\lambda}_t)_{t \ge 0}$. Our first result tracks the discrepancy between the SGHMC algorithm (\ref{eq_V_dis_appr}), (\ref{eq_X_dis_appr}) and the auxiliary processes (\ref{eq_V_hat}), (\ref{eq_X_hat}). \begin{theorem}\label{thm_algo_aver} Let $1 \le p \le 2$. There exists a constant $\tilde{C} > 0$ such that for all $k \in \mathbb{N}$, \begin{equation} \mathfrak{W}_p((V^{\lambda}_k, X^{\lambda}_k) , (\widehat{V}(k,0,(v_0,x_0)), \widehat{X}(k,0,(v_0,x_0)))) \le \tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}). \end{equation} \end{theorem} \begin{proof} The proof of this theorem is given in Section \ref{proof_thm_algo_aver}. \end{proof} The following is the main result of the paper. \begin{theorem}\label{thm_main} Let $1 < p \le 2$. Suppose that the SGHMC iterates $(V^{\lambda}_k,X^{\lambda}_k)$ are defined by (\ref{eq_V_dis_appr}), (\ref{eq_X_dis_appr}). The expected population risk can be bounded as $$E[F(X^{\lambda}_k)] - F^* \le \mathcal{B}_1 + \mathcal{B}_2 + \mathcal{B}_3,$$ where \begin{align*} \mathcal{B}_1 &:=(M \sigma + B) \left( \tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}) + C_* (\mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}}))^{1/p} \exp(- c_* k \lambda ) \right),\\ \mathcal{B}_2 &:= \frac{4\beta c_{LS}}{n}\left( \frac{M^2}{m}(b+d/\beta) + B^2 \right) ,\\ \mathcal{B}_3 &:= \frac{d}{2\beta} \log\left( \frac{eM}{m}\left( \frac{b\beta}{d} + 1 \right) \right), \end{align*} where $\tilde{C}, C_*, c_*, c_{LS}$ are appropriate constants and $\mathcal{W}_{\rho}$ is the semimetric defined in (\ref{eq_dist_aux}) below. \end{theorem} \begin{proof} The proof of this theorem is given in Section \ref{proof_thm_main}.
\end{proof} \begin{corollary}\label{cor} Let $1 \le p \le 2$ and $\varepsilon > 0$. We have $$\mathcal{W}_p(\mathcal{L}(X^{\lambda}_k), \pi_{\mathbf{z}}) \le \varepsilon$$ whenever $$ (\lambda^{1/(2p)} + \delta^{1/(2p)}) \le \frac{1}{2\tilde{C}}\varepsilon, \qquad k \ge \frac{(2\tilde{C})^{2p}}{c_*} \frac{1}{\varepsilon^{2p}} \log\left( \frac{C_* (\mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}}))^{1/p}}{\varepsilon} \right) .$$ \end{corollary} \begin{proof} From the proof of Theorem \ref{thm_main}, or more precisely from (\ref{sampling_err}), we need to choose $\lambda$ and $k$ such that $$\tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}) + C_* (\mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}}))^{1/p} \exp(- c_* k \lambda ) \le \varepsilon.$$ First, we choose $\lambda$ and $\delta$ so that $\tilde{C}(\lambda^{1/(2p)} + \delta^{1/(2p)}) < \varepsilon/2$ and then $$C_* (\mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}}))^{1/p} \exp(- c_* k \lambda ) \le \varepsilon/2$$ will hold for $k$ large enough. \end{proof} \section{Related work and our contributions}\label{sec_related} Non-asymptotic convergence rates of Langevin-dynamics-based algorithms for approximately sampling log-concave distributions have been studied intensively in recent years. For example, overdamped Langevin dynamics are discussed in \cite{welling2011bayesian}, \cite{dalalyan2017theoretical}, \cite{durmus2016high}, \cite{dalalyan2017user}, \cite{dm} and others. Recently, \cite{six} treats the case of non-i.i.d.\ data streams with a certain mixing property. Underdamped Langevin dynamics are examined in \cite{chen2014stochastic}, \cite{neal2011mcmc}, \cite{cheng2017underdamped}, etc. Further analysis of HMC is given in \cite{betancourt2017geometric}, \cite{betancourt2017conceptual}. Subsampling methods are applied to speed up HMC for large datasets, see \cite{dang2017hamiltonian}, \cite{quiroz2018speeding}. The use of momentum to accelerate optimization methods is discussed intensively in the literature, see for example \cite{attouch2016rate}. In particular, the performance of SGHMC has been shown experimentally to be better than that of SGLD in many applications, see \cite{chen2015convergence}, \cite{chen2014stochastic}. An important advantage of the underdamped SDE is that convergence to its stationary distribution is faster than that of the overdamped SDE in the $2$-Wasserstein distance, as shown in \cite{ear2}. Finding an approximate minimizer is similar to sampling from distributions that concentrate around the true minimizer. This well-known connection gives rise to the study of simulated annealing algorithms, see \cite{hwang1980laplace}, \cite{gidas1985nonstationary}, \cite{hajek1985tutorial}, \cite{chiang1987diffusion}, \cite{holley1989asymptotics}, \cite{gelfand1991recursive}, \cite{gelfand1993metropolis}. Recently, many studies have further investigated this connection by means of non-asymptotic convergence of Langevin-based algorithms in stochastic non-convex optimization and large-scale data analysis, see \cite{chen2016bridging}, \cite{dalalyan2017further}. Relaxing convexity is a more challenging issue. In \cite{cheng2018sharp}, the problem of sampling from a target distribution proportional to $\exp(-F(x))$, where $F$ is $L$-smooth everywhere and $m$-strongly convex outside a ball of finite radius, is considered. They provide upper bounds on the number of steps needed for the HMC algorithm to be within a given precision level $\varepsilon$ of the equilibrium distribution in the $1$-Wasserstein distance.
In a similar setting, \cite{majka2018non} obtains bounds in both the $\mathcal{W}_1$ and $\mathcal{W}_2$ distances for overdamped Langevin dynamics with stochastic gradients. \cite{xu2018global} studies the convergence of the SGLD algorithm and the variance-reduced SGLD to global minima of nonconvex functions satisfying the dissipativity condition. Our work continues these lines of research; the setting most similar to ours is that of the recent paper \cite{gao}. We summarize our contributions below: \begin{itemize} \item Diffusion approximation. In Lemma 10 of \cite{gao}, the upper bound for the 2-Wasserstein distance between the SGHMC algorithm at step $k$ and the underdamped SDE at time $t = k \lambda$ is (up to constants) given by $$ (\delta^{1/4} + \lambda^{1/4}) \sqrt{k \lambda} \sqrt{\log(k\lambda)},$$ which depends on the number of iterations $k$. Therefore obtaining a precision $\varepsilon$ requires a careful choice of $k, \lambda$ and even $k\lambda$. By introducing the auxiliary SDEs (\ref{eq_V_hat}), (\ref{eq_X_hat}), we are able to achieve the rate \begin{equation*} (\delta^{1/4} + \lambda^{1/4}), \end{equation*} see Theorem \ref{thm_algo_aver} for the case $p=2$. This upper bound is better in the number of iterations and hence improves Lemma 10 of \cite{gao}. Our analysis of the variance of the algorithm is also different: the iteration does not accumulate mean squared errors as the number of steps goes to infinity. \item Our proof of Theorem \ref{thm_algo_aver} is relatively simple and we do not need to adopt the techniques of \cite{raginsky} which involve heavy functional analysis; e.g.\ the weighted Csisz\'ar--Kullback--Pinsker inequalities of \cite{bolley2005weighted} are not needed. \item If we consider the $p$-Wasserstein distance for $1<p\le 2$, in particular when $p \to 1$, Theorem \ref{thm_main} gives tighter bounds compared to Theorem 2 of \cite{gao}. \item The dependence structure of the dataset in the sampling mechanism can be \textit{arbitrary}, see the proof of Theorem \ref{thm_algo_aver}. The i.i.d.\ assumption on the dataset is used only for the generalization error. We could also incorporate non-i.i.d.\ data in our analysis, see Remark \ref{re_extend_T2}, but this is left for further research. \end{itemize} \section{Proofs}\label{sec_proof} \subsection{A contraction result} In this section, we recall a contraction result of \cite{ear2}. First, it should be noticed that the constant $u$ and the function $U$ in their paper are $\beta^{-1}$ and $\beta F_{\mathbf{z}}$ in the present paper, respectively. Here, the subscript $c$ stands for ``contraction''. Using the upper bound of Lemma \ref{lem_quad} for $f$ below, there exist constants $\lambda_c \in \left(0, \min\{1/4, m/(M + 2B + \gamma^2/2)\} \right)$ small enough and $A_c \ge \frac{\beta}{2}(b + 2B + A_0)$ such that \begin{equation}\label{eq:drift} \left\langle x ,\nabla F_{\mathbf{z}} (x) \right\rangle \ge m\|x\|^2 - b \ge 2 \lambda_c ( F_{\mathbf{z}}(x) + \gamma^2 \|x\|^2/4 ) - 2A_c/\beta.
\end{equation} Therefore, Assumption 2.1 of \cite{ear2} is satisfied, noting that $L_c:= \beta M$ and $$\|\nabla F_{\mathbf{z}} (x) - \nabla F_{\mathbf{z}}(y)\| \le \beta^{-1} L_c\|x -y\|.$$ We define the Lyapunov function \begin{equation}\label{eq:lyapunov} \mathcal{V}(x,v) = \beta F_{\mathbf{z}}(x) + \frac{\beta}{4} \gamma^2 \left( \|x + \gamma^{-1}v \|^2 + \| \gamma^{-1} v\|^2 - \lambda_c \|x\|^2 \right). \end{equation} For any $(x_1, v_1), (x_2, v_2) \in \mathbb{R}^{2d}$, we set \begin{eqnarray} r((x_1,v_1), (x_2,v_2)) &=& \alpha_c \|x_1 - x_2 \| + \| x_1 - x_2 + \gamma^{-1}(v_1 - v_2) \|, \label{eq:r}\\ \rho((x_1, v_1),(x_2,v_2)) &=& h(r((x_1, v_1),(x_2,v_2))) \left( 1 + \varepsilon_c \mathcal{V}(x_1,v_1) + \varepsilon_c \mathcal{V}(x_2,v_2) \right), \label{eq:rho} \end{eqnarray} where $\alpha_c, \varepsilon_c >0$ are suitable positive constants to be fixed later and $h:[0, \infty) \to [0, \infty)$ is a continuous, non-decreasing concave function such that $h(0) = 0$, $h$ is $C^2$ on $(0,R_1)$ for some constant $R_1 > 0$ with right-sided derivative $h'_+(0) = 1$ and left-sided derivative $h'_-(R_1) > 0$ and $h$ is constant on $[R_1, \infty)$. For any two probability measures $\mu, \nu$ on $\mathbb{R}^{2d}$, we define \begin{equation}\label{eq_dist_aux} \mathcal{W}_{\rho}(\mu, \nu): = \inf_{(X_1,V_1) \sim \mu, (X_2,V_2) \sim \nu} E\left[ \rho((X_1,V_1),(X_2,V_2))\right]. \end{equation} Note that $\rho$ and $\mathcal{W}_{\rho}$ are semimetrics but not necessarily metrics. A result from \cite{ear2} is recalled below. For a probability measure $\mu$ on $\mathcal{B}(\mathbb{R}^{2d})$, we denote by $\mu p_t$ the law of $(V_t, X_t)$ when $\mathcal{L}(V_0,X_0) = \mu$. \begin{theorem}\label{thm_contraction} There exists a continuous non-decreasing concave function $h$ with $h(0) = 0$ such that for all probability measures $\mu, \nu$ on $\mathbb{R}^{2d}$, and $1 \le p \le 2$, we have \begin{equation} \mathcal{W}_p(\mu p_t, \nu p_t) \le C_* \left( \mathcal{W}_{\rho}(\mu, \nu)\right)^{1/p} \exp(- c_* t ), \qquad \forall t \ge 0, \end{equation} where the following relations hold: \begin{eqnarray*} c_* &=& \frac{\gamma}{384p} \min\{ \lambda_c M \gamma^{-2}, \Lambda_c^{1/2} e^{-\Lambda_c}M \gamma^{-2}, \Lambda_c^{1/2} e^{-\Lambda_c} \},\\ C_* &=& 2^{1/p}e^{2/p + \Lambda_c/p} \frac{1 + \gamma}{\min \{1, \alpha_c\}} \left( \max\left\lbrace 1, 4\frac{\max\{1,R^{p-2}_1\}}{\min\{1,R_1\}}(1 + 2 \alpha_c + 2 \alpha_c^2)(d + A_c) \beta^{-1} \gamma^{-1} c_*^{-1} \right\rbrace \right)^{1/p} , \\ \Lambda_c &=& \frac{12}{5}(1 + 2 \alpha_c + 2 \alpha_c^2)(d + A_c) M \gamma^{-2} \lambda_c^{-1}(1-2\lambda_c)^{-1},\\ \alpha_c &=& (1 + \Lambda_c^{-1})M \gamma^{-2} >0 , \\ \varepsilon_c &=& 4 \gamma^{-1} c_*/(d+A_c) >0,\\ R_1 &=& 4 \cdot (6/5)^{1/2} (1+2 \alpha_c + 2 \alpha_c^2)^{1/2} (d + A_c)^{1/2} \beta^{-1/2} \gamma^{-1}(\lambda_c - 2 \lambda_c^2)^{-1/2}. \end{eqnarray*} The function $h$ is constant on $[R_1, \infty)$, $C^2$ on $(0,R_1)$ with \begin{eqnarray*} h(r) &=& \int_0^{r \wedge R_1}{ \varphi(s) g(s) ds },\\ \varphi(s)&=& \exp \left( -(1 + \eta_c)L_cs^2/8 - \gamma^2 \beta \varepsilon_c \max\{1,(2\alpha_c)^{-1}\} s^2/2 \right) ,\\ g(s) &=& 1- \frac{9}{4}c_*\gamma \beta \int_0^{s} {\Phi(u)\varphi(u)^{-1}du}, \qquad \Phi(s) = \int_0^s{\varphi(u)du} \end{eqnarray*} and $\eta_c $ satisfies $\alpha_c = (1+\eta_c)L_c \beta^{-1} \gamma^{-2}$.
\end{theorem} \begin{proof} From (5.15) of \cite{ear2}, we get $$\|(x_1,v_1) - (x_2,v_2) \|^p \le \frac{(1+\gamma)^p}{\min\{1, \alpha^{p}_c\}} r((x_1,v_1),(x_2,v_2))^{p}.$$ Furthermore, from the proof of Corollary 2.6 of \cite{ear2}, if $r:=r((x_1,v_1),(x_2,v_2)) \le \min\{1,R_1\}$, $$ r^2 \le r^p \le r \le 2e^{2+\Lambda_c} \rho((x_1,v_1),(x_2,v_2))$$ and if $r \ge \min\{1,R_1\} $ then $$r^p \le \max\{1,R^{p-2}_1\} r^2 \le \frac{\max\{1,R^{p-2}_1\}}{\min\{1,R_1\}} 8e^{2 + \Lambda_c}(1 + 2 \alpha_c + 2\alpha^2_c)(d+A_c)\beta^{-1}\gamma^{-1}c_*^{-1} \rho((x_1,v_1),(x_2,v_2)).$$ These bounds and Theorem 2.3 of \cite{ear2} imply that $$\mathcal{W}_p(\mu p_t, \nu p_t) \le C_* \left( \mathcal{W}_{\rho}(\mu, \nu)\right)^{1/p} \exp(- c_* t ).$$ The proof is complete. \end{proof} It should be emphasized that $(\widehat{V}(t,0,(v_0,x_0)), \widehat{X}(t,0,(v_0,x_0)))= (V_{\lambda t}, X_{\lambda t})$, and consequently, $(\widehat{V}(t,0,(v_0,x_0)), \widehat{X}(t,0,(v_0,x_0)))$ contracts at the rate $\exp(- c_* \lambda t)$. \subsection{Proof of Theorem \ref{thm_algo_aver}}\label{proof_thm_algo_aver} Here, we summarize our approach. For a given step size $\lambda >0$, we divide the time axis into intervals of length $T = \lfloor 1/\lambda\rfloor$. For each time step $k \in [nT,(n+1)T], n \in \mathbb{N}$, we compare the SGHMC to the version with exact gradients relying on the Doob inequality, and then compare the latter to the auxiliary continuous-time diffusion $(\widehat{V}(k,0,(v_0,x_0)), \widehat{X}(k,0,(v_0,x_0)))$ with the scaled Brownian motion. At this stage we rely on the contraction result from \cite{ear2} and uniform boundedness of the Langevin diffusion and its discrete-time versions. Since the auxiliary dynamics evolves more slowly than the original Langevin dynamics (more precisely, at the same speed as the SGHMC), our upper bounds do not accumulate errors and are independent of the number of iterations. \begin{proof} For each $k \in \mathbb{N}$, we define $$ \mathcal{H}_k: = \sigma(U_{\mathbf{z},i}, 1 \le i \le k) \vee \sigma(\xi_j, j \in \mathbb{N}).$$ Let $\tilde{v}, \tilde{x}$ be $\mathbb{R}^d$-valued random variables satisfying Assumption \ref{ass_exp_moment}. For $0 \le i \le j$, we recursively define $\tilde{V}^{\lambda}(i,i,(\tilde{v},\tilde{x})) := \tilde{v}$, $\tilde{X}^{\lambda}(i,i,(\tilde{v},\tilde{x})) := \tilde{x}$ and \begin{eqnarray} \tilde{V}^{\lambda}(j+1,i,(\tilde{v},\tilde{x})) &=& \tilde{V}^{\lambda}(j,i,(\tilde{v},\tilde{x})) - \lambda[\gamma \tilde{V}^{\lambda}(j,i,(\tilde{v},\tilde{x})) + \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}(j,i,(\tilde{v},\tilde{x})))] \nonumber \\ && + \sqrt{2 \gamma \beta^{-1} \lambda} \xi_{j+1}, \label{eq_V_dis_tilde}\\ \tilde{X}^{\lambda}(j+1,i,(\tilde{v},\tilde{x})) &=& \tilde{X}^{\lambda}(j,i,(\tilde{v},\tilde{x})) + \lambda \tilde{V}^{\lambda}(j,i,(\tilde{v},\tilde{x})). \label{eq_X_dis_tilde} \end{eqnarray} Let $T: = \lfloor 1/\lambda\rfloor$. For each $n \in \mathbb{N}$, and for each $nT \le k < (n+1)T$, we set \begin{eqnarray}\label{pro_tilde} \tilde{V}^{\lambda}_k: = \tilde{V}^{\lambda}(k,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT})), \qquad \tilde{X}^{\lambda}_k:= \tilde{X}^{\lambda}(k,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT})).
\end{eqnarray} For each $n \in \mathbb{N}$, it holds by definition that $V^{\lambda}_{nT} = \tilde{V}^{\lambda}_{nT}$ and the triangle inequality implies for $nT \le k < (n+1)T$, \begin{eqnarray*} \|V^{\lambda}_k- \tilde{V}^{\lambda}_k \| \le \lambda \left\| \sum_{i = nT}^{k-1} \left( g(X^{\lambda}_{i},U_{\mathbf{z},i}) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i)\right) \right\| \end{eqnarray*} and \begin{equation} \label{in_X_tri} \left\| X^{\lambda}_k - \tilde{X}^{\lambda}_k \right\| \le \lambda \sum_{i = nT}^{k-1} \left\| V^{\lambda}_i - \tilde{V}^{\lambda}_i \right\|. \end{equation} Denote $g_{k,nT}(x):= E\left[ g(x,U_{\mathbf{z},k})| \mathcal{H}_{nT} \right], x \in \mathbb{R}^d$. By Assumption \ref{as_lip_g}, the estimation continues as follows \begin{eqnarray} \|V^{\lambda}_k- \tilde{V}^{\lambda}_k \| &\le& \lambda \left\| \sum_{i = nT}^{k-1} \left( g(X^{\lambda}_{i},U_{\mathbf{z},i}) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i)\right) \right\| \le \lambda \sum_{i = nT}^{k-1} \left\| g(X^{\lambda}_{i},U_{\mathbf{z},i}) - g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) \right\| \nonumber \\ && + \lambda \left\| \sum_{i = nT}^{k-1} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\| + \lambda \sum_{i = nT}^{k-1} \left\| g_{i,nT}(\tilde{X}^{\lambda}_i) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\| \nonumber \\ &\le& \lambda M \sum_{i = nT}^{k-1} \| X^{\lambda}_{i} - \tilde{X}^{\lambda}_{i} \| + \lambda \max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\| \nonumber \\ && + \lambda \sum_{i = nT}^{(n+1)T-1} \left\| g_{i,nT}(\tilde{X}^{\lambda}_i) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\|. \label{eq_long} \end{eqnarray} Using (\ref{in_X_tri}), one obtains \begin{eqnarray} \sum_{i = nT}^{k-1} \| X^{\lambda}_{i} - \tilde{X}^{\lambda}_{i} \| &\le& \lambda T\| V^{\lambda}_{nT} - \tilde{V}^{\lambda}_{nT} \| +...+ \lambda T\|V^{\lambda}_{k-1} - \tilde{V}^{\lambda}_{k-1}\| \nonumber \\ &\le& \sum_{i = nT}^{k-1} \| V^{\lambda}_i - \tilde{V}^{\lambda}_i \|, \end{eqnarray} noting that $T\lambda \le 1.$ Therefore, the estimation in (\ref{eq_long}) continues as \begin{eqnarray*} \|V^{\lambda}_k- \tilde{V}^{\lambda}_k \| &\le& \lambda M \sum_{i = nT}^{k-1} \| V^{\lambda}_i - \tilde{V}^{\lambda}_i \| + \lambda \max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\| \\ && + \lambda \sum_{i = nT}^{(n+1)T-1} \left\| g_{i,nT}(\tilde{X}^{\lambda}_i) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\|. \end{eqnarray*} Applying the discrete-time version of Gr\"onwall's lemma and taking squares, noting also that $(x + y)^2 \le 2 (x^2 + y^2), x, y \in \mathbb{R}$ yield \begin{eqnarray*} &&\|V^{\lambda}_k- \tilde{V}^{\lambda}_k \|^2 \le 2 \lambda^2 e^{2MT\lambda} \left[ \max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\|^2 + \Xi_n^2 \right], \end{eqnarray*} where \begin{equation}\label{eq_defi_Xi} \Xi_n := \sum_{i = nT}^{(n+1)T-1} \left\| g_{i,nT}(\tilde{X}^{\lambda}_i) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\|.\end{equation} Taking conditional expectation with respect to $\mathcal{H}_{nT}$, the estimation becomes \begin{eqnarray*} E\left[ \left. \|V^{\lambda}_k- \tilde{V}^{\lambda}_k \|^2 \right|\mathcal{H}_{nT} \right] &\le& 2 \lambda^2 e^{2M} E\left[ \left. 
\max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\|^2 \right| \mathcal{H}_{nT} \right] \\ &+& 2\lambda^2 e^{2M} E [\Xi^2_n|\mathcal{H}_{nT}]. \end{eqnarray*} Since the random variables $U_{\mathbf{z},i}$ are independent, the random variables $g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i)$, $nT\leq i<(n+1)T$, are independent conditionally on $\mathcal{H}_{nT}$, noting that $\tilde{X}^{\lambda}_i$ is measurable with respect to $\mathcal{H}_{nT}$. In addition, they have zero mean by the tower property of conditional expectation. By Assumption \ref{as_lip_g}, $$\|g(x,u)\| \le M\|x\| + B$$ and thus, by the independence of $U_{\mathbf{z},i}$, $i > nT$, from $\mathcal{H}_{nT}$, \begin{eqnarray}\label{eq_var_contr}E \left[ \|g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i})\|^2 | \mathcal{H}_{nT} \right] \le 2M^2 E\left[ \| \tilde{X}^{\lambda}_i \|^2\right] + 2B^2. \end{eqnarray} Doob's inequality and (\ref{eq_var_contr}) imply \begin{eqnarray*} E\left[ \left. \max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\|^2 \right| \mathcal{H}_{nT} \right] \le 8M^2 \sum_{i=nT}^{(n+1)T-1} E\left[ \| \tilde{X}^{\lambda}_i \|^2\right] + 8B^2T. \end{eqnarray*} Taking one more expectation and using Lemma \ref{lem_uniform2} give \begin{eqnarray*} E\left[ \max_{nT \le m < (n+1)T} \left\| \sum_{i = nT}^{m} g(\tilde{X}^{\lambda}_i, U_{\mathbf{z},i}) - g_{i,nT}(\tilde{X}^{\lambda}_i) \right\|^2 \right] &\le& 8M^2 \sum_{i=nT}^{(n+1)T-1} E \left[ \| \tilde{X}^{\lambda}_i \|^2\right] + 8B^2T\\ &\le& (8M^2C^a_x + 8B^2)T. \end{eqnarray*} By Lemma \ref{lem_Xi}, we have $E [\Xi^2_n] < 2T^2\delta(M^2 C^a_x + B^2)$, and therefore, \begin{equation}\label{algo_tilde} E^{1/2}\left[ \|V^{\lambda}_k - \tilde{V}^{\lambda}_k \|^2\right] \le c_2 \sqrt{\lambda} + c_3 \sqrt{\delta} \end{equation} where we define $$c_2=4e^M\sqrt{M^2C^a_x + B^2}, \qquad c_3= 2e^M\sqrt{M^2C^a_x + B^2}.$$ Consequently, we have from (\ref{in_X_tri}) \begin{eqnarray} E^{1/2} \left[ \left\| X^{\lambda}_k - \tilde{X}^{\lambda}_k \right\| ^2\right] &\le& \lambda \sum_{i = nT}^{k-1} E^{1/2}\left[ \left\| V^{\lambda}_i - \tilde{V}^{\lambda}_i \right\|^2 \right] \le \lambda T (c_2 \sqrt{\lambda} + c_3 \sqrt{\delta}) \nonumber \\ &\le& c_2 \sqrt{\lambda} + c_3 \sqrt{\delta}. \label{algo_tilde_2} \end{eqnarray} Let $\tilde{V}^{int}$ and $\tilde{X}^{int}$ be the continuous-time interpolations of $\tilde{V}^{\lambda}_k$ and of $\tilde{X}^{\lambda}_k$ on $[nT, (n+1)T)$, respectively, \begin{eqnarray} d\tilde{V}^{int}_t &=& - \lambda \gamma \tilde{V}^{int}_{\lfloor t \rfloor} dt - \lambda \nabla F_{\mathbf{z}}(\tilde{X}^{int}_{\lfloor t \rfloor})\, dt + \sqrt{2\gamma \lambda \beta^{-1}} dB^{\lambda}_t , \label{proc_V_int}\\ d\tilde{X}^{int}_t &=& \lambda \tilde{V}^{int}_{\lfloor t \rfloor } dt, \label{proc_X_int} \end{eqnarray} with the initial conditions $\tilde{V}^{int}_{nT} = \tilde{V}_{nT} = V^{\lambda}_{nT}$ and $\tilde{X}^{int}_{nT} = \tilde{X}_{nT} = X^{\lambda}_{nT}$. For each $n \in \mathbb{N}$ and for $nT \le t < (n+1)T$, define also \begin{equation}\label{proc_hat} \widehat{V}_t = \widehat{V}(t,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT})), \qquad \widehat{X}_t = \widehat{X}(t,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT})), \end{equation} where the dynamics of $\widehat{V}, \widehat{X}$ are given in (\ref{eq_V_hat}), (\ref{eq_X_hat}).
In this way, the processes $(\widehat{V}_t)_{t \ge 0}, (\widehat{X}_t)_{t \ge 0}$ are right-continuous with left limits. From Lemma \ref{lem_tilde_hat}, we obtain for $nT \le t < (n+1)T$ \begin{equation}\label{tilde_hat} E^{1/2}\left[ \|\tilde{V}^{int}_t - \widehat{V}_t \|^2\right] \le c_7 \sqrt{\lambda}, \qquad E^{1/2}\left[ \|\tilde{X}^{int}_t - \widehat{X}_t \|^2\right] \le c_7 \sqrt{\lambda}. \end{equation} Combining (\ref{algo_tilde}), (\ref{algo_tilde_2}) and (\ref{tilde_hat}) gives \begin{equation}\label{algo_hat} E^{1/2}\left[ \| V^{\lambda}_k - \widehat{V}_k \|^2\right] \le (c_2 + c_7) \sqrt{\lambda} + c_3 \sqrt{\delta}, \qquad E^{1/2}\left[ \| X^{\lambda}_k - \widehat{X}_k \|^2 \right] \le (c_2 + c_7) \sqrt{\lambda} + c_3 \sqrt{\delta}. \end{equation} Define $\widehat{A}_t = (\widehat{V}_t, \widehat{X}_t)$ and $\widehat{B}(t,s,(v_s,x_s)) = (\widehat{V}(t,s,(v_s,x_s)), \widehat{X}(t,s,(v_s,x_s)))$ for $s \le t$, where $v_s, x_s$ are $\mathbb{R}^d$-valued random variables. The triangle inequality and Theorem \ref{thm_contraction} imply that for $nT \le t < (n+1)T$, and for $1 \le p \le 2$, \begin{eqnarray} &&\mathfrak{W}_p(\widehat{A}_t, \widehat{B}(t,0,(v_0,x_0))) \nonumber \\ &\le& \sum_{i = 1}^n \mathfrak{W}_p( \widehat{B}(t, iT, (V^{\lambda}_{iT}, X^{\lambda}_{iT})), \widehat{B}(t, (i-1)T, (V^{\lambda}_{(i-1)T}, X^{\lambda}_{(i-1)T}))) \nonumber\\ &=& \sum_{i = 1}^n \mathfrak{W}_p( \widehat{B}(t, iT, (V^{\lambda}_{iT}, X^{\lambda}_{iT})) , \widehat{B}(t, iT, \widehat{B}(iT, (i-1)T, (V^{\lambda}_{(i-1)T}, X^{\lambda}_{(i-1)T}))) )\nonumber\\ &\le& C_* \sum_{i = 1}^n e^{-c_* \lambda (t-iT)} \mathcal{W}^{1/p}_{\rho}( \mathcal{L} (V^{\lambda}_{iT}, X^{\lambda}_{iT}) , \mathcal{L} (\widehat{B}(iT, (i-1)T, (V^{\lambda}_{(i-1)T}, X^{\lambda}_{(i-1)T})) )) \nonumber \\\label{long} \end{eqnarray} noting that the rate of contraction of $(\widehat{V}_t, \widehat{X}_t)$ is $e^{-c_* \lambda t}$. Using Lemma \ref{lemma:rho_to_w}, we obtain \begin{eqnarray*} &&\mathcal{W}_{\rho}(( \mathcal{L} (V^{\lambda}_{iT}, X^{\lambda}_{iT}) , \mathcal{L}(\widehat{V}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})), \widehat{X}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))))) \\ &\le& c_{17} \left( 1 + \varepsilon_c \sqrt{E\mathcal{V}^2(V^{\lambda}_{iT}, X^{\lambda}_{iT})} + \varepsilon_c \sqrt{E\mathcal{V}^2(\widehat{V}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})), \widehat{X}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) )} \right) \\ &\times& \mathfrak{W}_2((V^{\lambda}_{iT}, X^{\lambda}_{iT}),(\widehat{V}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})), \widehat{X}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))))\\ &\le& c_{18} \left( E^{1/2} \|V^{\lambda}_{iT} - \widehat{V}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))\|^2 + E^{1/2}\left[\| X^{\lambda}_{iT} - \widehat{X}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \|^2 \right] \right), \end{eqnarray*} where \begin{eqnarray*} c_{18} &=& c_{17} \left( 1 + \varepsilon_c \sup_{k \in \mathbb{N}} \sqrt{E\mathcal{V}^2(V^{\lambda}_{k}, X^{\lambda}_{k})} \right. \\ &&\left. + \varepsilon_c \sup_{k \in \mathbb{N}} \sqrt{E\mathcal{V}^2(\widehat{V}(kT, (k-1)T, (V^{\lambda}_{(k-1)T},X^{\lambda}_{(k-1)T})), \widehat{X}(kT, (k-1)T, (V^{\lambda}_{(k-1)T},X^{\lambda}_{(k-1)T})))} \right).
\end{eqnarray*} Now, we compute \begin{eqnarray} &&\|V^{\lambda}_{iT} - \widehat{V}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))\| \nonumber\\ && \le \| V^{\lambda}_{iT - 1} - \widehat{V}(iT-1, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \| \nonumber \\ && + \lambda \gamma \left\| V^{\lambda}_{iT - 1} - \widehat{V}(iT-1,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \right\| \nonumber \\ && + \lambda \gamma \left\| \int_{iT-1}^{iT}{\left( \widehat{V}(iT-1,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) - \widehat{V}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))\right) dt} \right\| \nonumber \\ && + \lambda \left\| g(X^{\lambda}_{iT-1}, U_{\mathbf{z},iT-1}) - \int_{(iT-1)}^{iT}{\nabla F_{\mathbf{z}}(\widehat{X}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})))dt} \right\| \nonumber\\ && \qquad \qquad \qquad + \sqrt{\lambda} \|\xi_{iT} - (B^{\lambda}_{iT} - B^{\lambda}_{iT-1} )\|. \label{eq_very_long} \end{eqnarray} In the $L^2$ norm, the first and second terms of (\ref{eq_very_long}) are bounded by $(c_2 + c_7) \sqrt{\lambda} + c_3 \sqrt{\delta}$, see (\ref{algo_hat}), and the fifth term is of order $\sqrt{\lambda}$. We consider the third term in (\ref{eq_very_long}). From the dynamics of $\widehat{V}$, we find that for $iT-1 \le t \le iT$, \begin{eqnarray*} &&\widehat{V}(iT-1,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) - \widehat{V}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \\ && = \lambda \int_{iT-1}^t \left( \gamma \widehat{V}(s,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) + \nabla F_{\mathbf{z}}(\widehat{X}(s,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})))\right) ds \\ && - \sqrt{2 \gamma \lambda \beta^{-1}}\left( B^{\lambda}_t - B^{\lambda}_{iT-1} \right). \end{eqnarray*} H\"older's inequality yields \begin{eqnarray*} &&E\left[ \|\widehat{V}(iT-1,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) - \widehat{V}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \|^2 \right] \\ && \le 3\lambda^2 \gamma^2 \int_{iT-1}^t{ E \left[ \|\widehat{V}(s,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \|^2\right] ds }\\ && + 3\lambda^2 \int_{iT-1}^t{ E\left[ \left\| \nabla F_{\mathbf{z}}(\widehat{X}(s,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))) \right\|^2\right] ds} + 6 \gamma \beta^{-1} \lambda \\ && \le c_{14} \lambda, \end{eqnarray*} where the last inequality uses Lemma \ref{lem_uniform2} and Assumption \ref{as_lip}, and where $c_{14}:= 3 \gamma^2 C^c_v + 6M^2 C^c_x + 6B^2 + 6 \gamma \beta^{-1}$.
For the fourth term of (\ref{eq_very_long}), we have \begin{eqnarray*} && E \left[ \left\| g(X^{\lambda}_{iT-1}, U_{\mathbf{z},iT-1}) - \int_{(iT-1)}^{iT}{\nabla F_{\mathbf{z}}(\widehat{X}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})))dt} \right\|^2\right] \\ && \le 2 E \left[ \| g(X^{\lambda}_{iT-1}, U_{\mathbf{z},iT-1}) - \nabla F_{\mathbf{z}}(X^{\lambda}_{iT-1}) \|^2\right] \\ && + 2E \left[ \left\| \int_{(iT-1)}^{iT}{\nabla F_{\mathbf{z}}(X^{\lambda}_{iT-1}) - \nabla F_{\mathbf{z}}(\widehat{X}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})))dt}\right\|^2\right] \\ && \le 2E\left[ \| g(X^{\lambda}_{iT-1}, U_{\mathbf{z},iT-1}) - \nabla F_{\mathbf{z}}(X^{\lambda}_{iT-1}) \|^2\right] \\ && + 2M^2E \left[ \int_{(iT-1)}^{iT}{\left\|X^{\lambda}_{iT-1} - \widehat{X}(t,(i-1)T,(V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T}))\right\|^2 dt}\right] \\ && \le 2 \delta (M^2 C^a_x + B^2) + 2M^2 (2(c_2 + c_7)^2 \lambda + 2c_3^2 \delta)\\ && \le c_{15}(\lambda + \delta), \end{eqnarray*} where the last inequality uses Assumption \ref{as_variance}, Lemma \ref{lem_uniform2}, and (\ref{algo_hat}), and where $c_{15}:= \max\{ 2 (M^2 C^a_x + B^2) + 4M^2c_3^2, 4M^2(c_2 + c_7)^2 \}$. A similar estimate holds for $$E^{1/2}\left[\| X^{\lambda}_{iT} - \widehat{X}(iT, (i-1)T, (V^{\lambda}_{(i-1)T},X^{\lambda}_{(i-1)T})) \|^2 \right].$$ Letting $c_{16}:= \max\{ (c_2 + c_7), c_3, \sqrt{c_{14}}, \sqrt{c_{15}}\}$, the estimate (\ref{long}) continues as \begin{eqnarray} \mathfrak{W}_p( \widehat{A}_t, \widehat{B}(t,0,(v_0,x_0))) &\le& \sum_{i = 1}^n C_*e^{-c_*(n-i)} \left( c_{18} c_{16} (\sqrt{\lambda} + \sqrt{\delta}) \right)^{1/p} \nonumber\\ &\le& C_* \left( c_{18} c_{16} \right)^{1/p} \frac{e^{-c_*}}{1-e^{-c_*}} (\lambda^{1/(2p)} + \delta^{1/(2p)}). \label{hat_hat} \end{eqnarray} Therefore, from (\ref{algo_tilde}), (\ref{algo_tilde_2}), (\ref{tilde_hat}) and (\ref{hat_hat}), the triangle inequality implies for $nT \le k < (n+1)T$, \begin{eqnarray*} &&\mathfrak{W}_p((V^{\lambda}_k,X^{\lambda}_k) , (\widehat{V}(k,0,(v_0,x_0)), \widehat{X}(k,0,(v_0,x_0)))) \\ &\le& \mathfrak{W}_p((V^{\lambda}_k, X^{\lambda}_k), (\tilde{V}^{\lambda}_k, \tilde{X}^{\lambda}_k )) + \mathfrak{W}_p((\tilde{V}^{int}_{k}, \tilde{X}^{int}_{k}) , (\widehat{V}_{k}, \widehat{X}_{k}) )\\ &+& \mathfrak{W}_p( (\widehat{V}_{k}, \widehat{X}_{k}) , (\widehat{V}(k,0,(v_0,x_0)), \widehat{X}(k,0,(v_0,x_0))) ) \\ &\le& \tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}), \end{eqnarray*} where $\tilde{C} = 2\max\{c_2, c_3, c_7, C_* \left( c_{18} c_{16} \right)^{1/p} \frac{e^{-c_*}}{1-e^{-c_*}}\}$. The proof is complete. \end{proof} \begin{remark}\label{re_sampling_data} It is important to remark from the proof above that the data structure of $\mathbf{Z}$ can be \emph{arbitrary}; only the independence of the random elements $U_{\mathbf{z},k}$, $k \in \mathbb{N}$, is used. \end{remark} \begin{lemma}\label{lem_Xi} The quantity $\Xi_n$ defined in (\ref{eq_defi_Xi}) has second moments and $$\sup_{n\in \mathbb{N}} E[\Xi_n^2] < \infty.$$ \end{lemma} \begin{proof} Note that for each $nT \le i \le (n+1)T - 1$, the random variable $\tilde{X}^{\lambda}_i$ is $\mathcal{H}_{nT}$-measurable.
Using Assumption \ref{as_variance}, the Cauchy–Schwarz inequality implies \begin{eqnarray*} E[\Xi_n^2] &\le& T \sum_{i = nT}^{(n+1)T - 1} E \left[ \left\| g_{i,nT}(\tilde{X}^{\lambda}_i) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\|^2 \right] \\ &=& T \sum_{i = nT}^{(n+1)T - 1} E\left[ \left\| E\left[ g(\tilde{X}^{\lambda}_i,U_{\mathbf{z},i})| \mathcal{H}_{nT} \right] - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i) \right\|^2 \right] \\ &\le& T \sum_{i = nT}^{(n+1)T - 1} E\left[ E\left[ \left\| g(\tilde{X}^{\lambda}_i,U_{\mathbf{z},i}) - \nabla F_{\mathbf{z}}(\tilde{X}^{\lambda}_i)\right\|^2 | \mathcal{H}_{nT} \right] \right] \\ &\le& 2T\delta \sum_{i = nT}^{(n+1)T - 1} (M^2 E\left[ \|\tilde{X}^{\lambda}_i\|^2\right] + B^2)\\ &\le& 2T^2\delta(M^2 C^a_x + B^2), \end{eqnarray*} where the last inequality uses Lemma \ref{lem_uniform2}. \end{proof} This lemma provides variance control for the algorithm. Each term in $\Xi_n$ contributes a variance of order $\delta$, so the Cauchy–Schwarz step above yields a second moment of order $T^2\delta$ for $\Xi_n$. However, unlike \cite{raginsky} and \cite{gao}, our technique does not accumulate variance errors over time, as shown in (\ref{algo_tilde}). Recently, in \cite{six}, the authors imposed no condition on the variance of the estimated gradient but instead employed the conditional $L$-mixing property of the data stream; the variance is then controlled by the decay of the mixing coefficients, see their Lemma 8.6. \begin{lemma}\label{lem_tilde_hat} For every $nT \le t < (n+1)T$, it holds that $$E^{1/2}\left[ \|\tilde{V}^{int}_t - \widehat{V}_t\|^2\right] \le c_7 \sqrt{\lambda}, \qquad E^{1/2}\left[ \| \tilde{X}^{int}_t - \widehat{X}_t \|^2\right] \le c_7 \sqrt{\lambda}.$$ \end{lemma} \begin{proof} Noting that $\tilde{V}^{int}_{nT} = \widehat{V}_{nT} = V^{\lambda}_{nT}$, we use the triangle inequality and Assumption \ref{as_lip} to estimate \begin{eqnarray} \|\tilde{V}^{int}_t - \widehat{V}_t\| &\le& \lambda \gamma \int_{nT}^t{ \left\| \tilde{V}^{int}_{\lfloor s \rfloor} - \widehat{V}_s\right\| ds} + \lambda \int_{nT}^t{ \left\| \nabla F_{\mathbf{z}}(\tilde{X}^{int}_{\lfloor s \rfloor}) -\nabla F_{\mathbf{z}}(\widehat{X}_s) \right\| ds } \nonumber \\ &\le& \lambda \gamma \int_{nT}^t{ \left\|\tilde{V}^{int}_s - \widehat{V}_s\right\| ds} + \lambda M \int_{nT}^t{ \left\| \tilde{X}^{int}_s -\widehat{X}_s \right\| ds } \nonumber\\ && + \lambda \gamma \int_{nT}^t{ \left\| \tilde{V}^{int}_{\lfloor s \rfloor} - \tilde{V}^{int}_s\right\| ds} + \lambda M \int_{nT}^t{ \left\| \tilde{X}^{int}_{\lfloor s \rfloor} -\tilde{X}^{int}_s \right\| ds}. \label{eq_1} \end{eqnarray} For notational convenience, we define for every $nT \le t < (n+1)T$ \begin{eqnarray*} I_t := \|\tilde{V}^{int}_t - \widehat{V}_t\|, \qquad J_t:= \left\| \tilde{X}^{int}_t -\widehat{X}_t \right\|. \end{eqnarray*} Then (\ref{eq_1}) becomes \begin{equation}\label{eq_2} I_t \le \lambda \gamma \int_{nT}^t{I_sds} + \lambda M \int_{nT}^t{J_sds} + \lambda \gamma \int_{nT}^t{ \left\| \tilde{V}^{int}_{\lfloor s \rfloor} - \tilde{V}^{int}_s\right\| ds} + \lambda M \int_{nT}^t{ \left\| \tilde{X}^{int}_{\lfloor s \rfloor} -\tilde{X}^{int}_s \right\| ds}.
\end{equation} Furthermore, \begin{eqnarray} J_t &\le& \lambda \int_{nT}^t{\|\tilde{V}^{int}_s - \widehat{V}_s\| ds} + \lambda \int_{nT}^t{\|\tilde{V}^{int}_{\lfloor s \rfloor } -\tilde{V}^{int}_s\| ds} \nonumber\\ &\le& \lambda \int_{nT}^t{ I_sds } + \lambda \int_{nT}^t{\|\tilde{V}^{int}_{\lfloor s \rfloor } -\tilde{V}^{int}_s\| ds}.\label{eq_3} \end{eqnarray} We estimate \begin{eqnarray*} \left\| \tilde{V}^{int}_{\lfloor t \rfloor} - \tilde{V}^{int}_t\right\| &\le& \lambda \gamma \int_{\lfloor t \rfloor}^t {\|\tilde{V}^{int}_{\lfloor s \rfloor}\| ds} + \lambda \int_{\lfloor t \rfloor}^t{\|\nabla F_{\mathbf{z}}(\tilde{X}^{int}_{\lfloor s \rfloor})\|ds} + \sqrt{2\gamma \lambda \beta^{-1}} \|B^{\lambda}_t - B^{\lambda}_{\lfloor t \rfloor} \|. \end{eqnarray*} Noting that $0 \le t - \lfloor t \rfloor \le 1 $, the Cauchy-Schwarz inequality and Lemma \ref{lem_quad} imply \begin{eqnarray*} \left\| \tilde{V}^{int}_{\lfloor t \rfloor} - \tilde{V}^{int}_t\right\|^2 &\le& 3\lambda^2 \gamma^2 \int_{\lfloor t \rfloor}^t {\|\tilde{V}^{int}_{\lfloor s \rfloor}\|^2 ds} + 6\lambda^2 M^2 \int_{\lfloor t \rfloor}^t{\|\tilde{X}^{int}_{\lfloor s \rfloor}\|^2ds} \\ &+& 6 \lambda^2 B^2 + 6\gamma \lambda \beta^{-1} \|B^{\lambda}_t - B^{\lambda}_{\lfloor t \rfloor} \|^2. \end{eqnarray*} Taking expectations on both sides and noting that $(\tilde{V}^{int}_{k}, \tilde{X}^{int}_{k})$ has the same distribution as $(\tilde{V}^{\lambda}_{k}, \tilde{X}^{\lambda}_{k}), k \in \mathbb{N}$, Lemma \ref{lem_uniform2} leads to \begin{eqnarray} E\left[ \left\| \tilde{V}^{int}_{\lfloor t \rfloor} - \tilde{V}^{int}_t\right\|^2\right] &\le& 3\lambda^2 \gamma^2 C^a_v + 6\lambda^2 M^2 C^a_x + 6 \lambda^2B^2 + 6\gamma \beta^{-1} \lambda \nonumber \\ &\le& c_8 \lambda, \label{eq_4} \end{eqnarray} for $c_8: = 3 \gamma^2 C^a_v + 6 M^2 C^a_x + 6 B^2 + 6\gamma \beta^{-1} $. Similarly, \begin{eqnarray} E\left[ \left\| \tilde{X}^{int}_{\lfloor t \rfloor} -\tilde{X}^{int}_t \right\|^2\right] = \lambda^2 \int_{\lfloor t \rfloor}^t{E\left[ \|\tilde{V}^{int}_{\lfloor s \rfloor }\|^2\right] ds} \le \lambda^2 C^a_v. \label{eq_5} \end{eqnarray} Taking squares and expectations in (\ref{eq_2}), (\ref{eq_3}) and applying (\ref{eq_4}), (\ref{eq_5}), we obtain for $nT \le t < (n+1)T $ \begin{eqnarray*} E\left[ I^2_t\right] &\le& 4\lambda \gamma^2 \int_{nT}^t{E\left[ I^2_s\right] ds} + 4\lambda M^2 \int_{nT}^t{E\left[ J^2_s\right] ds} + c_9 \lambda,\\ E\left[ J^2_t\right] &\le& 2\lambda \int_{nT}^t{ E\left[ I^2_s\right] ds } + c_9\lambda, \end{eqnarray*} where $c_9 := \max\{4 \gamma^2 c_8 + 4M^2 C^a_v, 2c_8\}$. Summing the two inequalities yields $$E[I^2_t + J^2_t] \le c_{10} \lambda \int_{nT}^t{E[I^2_s + J^2_s]ds} + 2c_9 \lambda, $$ where $c_{10}:= \max\{ 4 \gamma^2 + 2, 4M^2 \}$, and then Gronwall's lemma, noting that $t \mapsto E[I^2_t + J^2_t]$ is continuous, shows $$E[I^2_t + J^2_t] \le 2c_9 \lambda e^{c_{10}}.$$ The proof is complete by setting $c_7 = \sqrt{2c_9 e^{c_{10}}}$, which is of order $\sqrt{d}$. \end{proof} \subsection{Proof of Theorem \ref{thm_main}}\label{proof_thm_main} Denote $\mu_{\mathbf{z},k}:= \mathcal{L}((V^{\lambda}_k,X^{\lambda}_k)|\mathbf{Z} = \mathbf{z})$. Let $(\widehat{X},\widehat{V})$ and $(\widehat{X}^*_{\mathbf{z}},\widehat{V}^*_{\mathbf{z}})$ be random variables such that $\mathcal{L}((\widehat{X},\widehat{V})|\mathbf{Z} = \mathbf{z}) = \mu_{\mathbf{z},k}$ and $\mathcal{L}(\widehat{X}^*_{\mathbf{z}},\widehat{V}^*_{\mathbf{z}}) = \pi_{\mathbf{z}}$.
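The excess risk bounded below can also be approximated numerically by averaging $F$ over independent runs of the algorithm. The fragment below is again a purely illustrative sketch: it assumes the \texttt{sghmc} helper from the earlier sketch and a user-supplied factory \texttt{make\_grad\_est} producing a gradient estimator for a fresh data sample; none of these names come from the paper.
\begin{verbatim}
def excess_risk(F, make_grad_est, x_star, dim, n_runs=200, **kw):
    # Monte Carlo estimate of E[F(X_k)] - F(x*) over independent runs;
    # kw must supply lam, gamma, beta and n_steps for sghmc.
    rng = np.random.default_rng(0)
    vals = []
    for _ in range(n_runs):
        grad_est = make_grad_est(rng)   # fresh dataset Z and noise U
        x, _ = sghmc(grad_est, np.zeros(dim), np.zeros(dim),
                     rng=rng, **kw)
        vals.append(F(x))
    return np.mean(vals) - F(x_star)
\end{verbatim}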
We decompose the population risk by \begin{eqnarray} E\left[ F(\widehat{X})\right] - F^* &=& \left( E\left[ F(\widehat{X})\right] - E\left[ F(\widehat{X}^*_{\mathbf{z}})\right] \right) + \left( E\left[ F(\widehat{X}^*_{\mathbf{z}})\right] - E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}})\right] \right) \nonumber \\ &+& \left(E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}})\right] - F^* \right). \label{eq_decom_risk} \end{eqnarray} \subsubsection{The first term $\mathcal{T}_1$} The first term on the right-hand side of (\ref{eq_decom_risk}) can be rewritten as $$E\left[ F(\widehat{X})\right] - E\left[ F(\widehat{X}^*_{\mathbf{z}})\right] = \int_{\mathcal{Z}^n} \mu^{\otimes n}(d\mathbf{z}) \left( \int_{\mathbb{R}^{2d}} F_{\mathbf{z} }(x)\mu_{\mathbf{z},k}(dx,dv) - \int_{\mathbb{R}^{2d}} F_{\mathbf{z} }(x)\pi_{\mathbf{z}}(dx,dv) \right),$$ where $\mu^{\otimes n}$ is the product of the laws of the independent random variables $Z_1, \dots, Z_n$. By Assumptions \ref{as_f_bound} and \ref{as_lip}, the function $F_{\mathbf{z}}$ satisfies $\|\nabla F_{\mathbf{z}}(x)\| \le M\|x\| + B.$ Using Lemma \ref{lem_linear_bound}, we have $$\left| \int_{\mathbb{R}^{2d}} F_{\mathbf{z} }(x)\mu_{\mathbf{z},k}(dx,dv) - \int_{\mathbb{R}^{2d}} F_{\mathbf{z} }(x)\pi_{\mathbf{z}}(dx,dv) \right| \le (M \sigma + B) \mathcal{W}_p(\mu_{\mathbf{z},k}, \pi_{\mathbf{z}}),$$ where $p>1, q \in \mathbb{N}, 1/p + 1/(2q) = 1,$ $$\sigma = \max \left\lbrace \left( \int_{\mathbb{R}^{2d}}{\|x\|^{2q} \mu_{\mathbf{z},k}(dx,dv)}\right) ^{1/(2q)}, \left( \int_{\mathbb{R}^{2d}}{\|x\|^{2q} \pi_{\mathbf{z}}(dx,dv)} \right) ^{1/(2q)} \right\rbrace < \infty $$ by Lemma \ref{lemma:moment_bound}. On the other hand, Theorems \ref{thm_algo_aver} and \ref{thm_contraction} imply \begin{eqnarray} && \mathcal{W}_p(\mu_{\mathbf{z},k}, \pi_{\mathbf{z}}) \nonumber\\ && \le \mathcal{W}_p(\mathcal{L}((V^{\lambda}_k,X^{\lambda}_k)|\mathbf{Z} = \mathbf{z}), \mathcal{L}((\widehat{V}(k,0,v_0), \widehat{X}(k,0,x_0))|\mathbf{Z} = \mathbf{z})) \nonumber \\ && + \mathcal{W}_p(\mathcal{L}((\widehat{V}(k,0,v_0), \widehat{X}(k,0,x_0))|\mathbf{Z} = \mathbf{z}), \pi_{\mathbf{z}} ) \nonumber \\ && \le \tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}) + C_* \left( \mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}})\right)^{1/p} \exp(- c_* k \lambda )\label{sampling_err}. \end{eqnarray} Therefore, an upper bound for $\mathcal{T}_1$ is given by \begin{equation*} \mathcal{T}_1 \le (M \sigma + B) \left( \tilde{C} (\lambda^{1/(2p)} + \delta^{1/(2p)}) + C_* \left( \mathcal{W}_{\rho}(\mu_0, \pi_{\mathbf{z}})\right)^{1/p} \exp(- c_* k \lambda ) \right).
\end{equation*} \subsubsection{The second term $\mathcal{T}_2$} Since the $x$-marginal of $\pi_{\mathbf{z}}(dx,dv)$ is $\pi_{\mathbf{z}}(dx)$, the Gibbs measure of (\ref{langevin}), we compute $$\int_{\mathbb{R}^{2d}}{F_{\mathbf{z}}(x) \pi_{\mathbf{z}}(dx,dv)} = \int_{\mathbb{R}^d}{F_{\mathbf{z}}(x)\pi_{\mathbf{z}}(dx)}.$$ Therefore, the argument in \cite{raginsky} can be adopted: $$E\left[ F(\widehat{X}^*_{\mathbf{Z}})\right] - E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}})\right] \le \frac{4\beta c_{LS}}{n}\left( \frac{M^2}{m}(b+d/\beta) + B^2 \right).$$ The constant $c_{LS}$ comes from the logarithmic Sobolev inequality for $\pi_{\mathbf{z}}$ and $$ c_{LS} \le \frac{2m^2 + 8M^2}{m^2M\beta} + \frac{1}{\lambda_*} \left( \frac{6M(d+\beta)}{m} + 2 \right),$$ where $\lambda_*$ is the uniform spectral gap for the overdamped Langevin dynamics $$\lambda_* = \inf_{\mathbf{z} \in \mathcal{Z}^n} \inf \left\lbrace \frac{\int_{\mathbb{R}^d}\|\nabla g\|^2d\pi_{\mathbf{z}}}{\int_{\mathbb{R}^d}g^2d\pi_{\mathbf{z}}}: g \in C^1(\mathbb{R}^d) \cap L^2(\pi_{\mathbf{z}}), g \ne 0, \int_{\mathbb{R}^d}{gd\pi_{\mathbf{z}}} = 0 \right\rbrace.$$ \begin{remark}\label{re_extend_T2} One can also find an upper bound for $\mathcal{T}_2$ when the data $\mathbf{z}$ is a realization of some non-Markovian process. For example, if we assume that $f$ is Lipschitz in the second variable $z$ and $\mathbf{Z}$ satisfies a certain mixing property discussed in \cite{chau2016fixed}, then the term $\mathcal{T}_2$ is bounded by $1/\sqrt{n}$ times a constant, see Theorem 2.5 therein. \end{remark} \subsubsection{The third term $\mathcal{T}_3$} For the third term, we follow \cite{raginsky}. Let $x^*$ be any minimizer of $F(x)$. We compute \begin{eqnarray} E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}})\right] - F^* &=& E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}}) - \min_{x \in \mathbb{R}^d} F_{\mathbf{Z}}(x) \right] + E\left[ \min_{x \in \mathbb{R}^d} F_{\mathbf{Z}}(x) - F_{\mathbf{Z}}(x^*) \right] \nonumber\\ &\le& E\left[ F_{\mathbf{Z}}(\widehat{X}^*_{\mathbf{Z}}) - \min_{x \in \mathbb{R}^d} F_{\mathbf{Z}}(x) \right] \nonumber\\ &\le& \frac{d}{2\beta} \log\left( \frac{eM}{m}\left( \frac{b\beta}{d} + 1 \right) \right), \end{eqnarray} where the last inequality comes from Proposition 3.4 of \cite{raginsky}. The condition $\beta \ge 2m$ is not used here, see the explanation in Lemma 16 of \cite{gao}. \section{Technical lemmas} \begin{lemma}\label{lem_quad} Under Assumptions \ref{as_f_bound}, \ref{as_lip}, for any $x \in \mathbb{R}^d$ and $z \in \mathcal{U}$, $$ \|\nabla f(x,z)\| \le M \|x \| + B,$$ and $$ \frac{m}{3}\|x\|^2 - \frac{b}{2} \log 3 \le f(x,z) \le \frac{M}{2} \|x \|^2 + B\|x\| + A_0. $$ \end{lemma} \begin{proof} See Lemma 2 of \cite{raginsky}. \end{proof} The next lemma generalizes the Wasserstein continuity of functions of quadratic growth given in \cite{polyanskiy2016wasserstein}. \begin{lemma}\label{lem_linear_bound}Let $\mu, \nu$ be two probability measures on $\mathbb{R}^{2d}$ with finite second moments and let $G: \mathbb{R}^{2d} \to \mathbb{R}$ be a $C^1$ function with $$\| \nabla G(w) \| \le c_1\|w\| + c_2 $$ for some $c_1 > 0, c_2 \ge 0 $.
Then for $p > 1$, $q > 1$ such that $1/p + 1/q = 1$, we have $$\left| \int_{\mathbb{R}^{2d}}{G d\mu} - \int_{\mathbb{R}^{2d}}{G d\nu} \right| \le (c_1 \sigma + c_2) \mathcal{W}_p(\mu,\nu),$$ where $$\sigma = \max\left\lbrace \left( \int_{\mathbb{R}^{2d}}{\|v\|^q\nu(dv)} \right)^{1/q} , \left( \int_{\mathbb{R}^{2d}}{\|u\|^q\mu(du)} \right)^{1/q} \right\rbrace .$$ \end{lemma} \begin{proof} Using the Cauchy--Schwarz inequality, we compute \begin{eqnarray*} |G(u) - G(v)| &=& \left| \int_{0}^1{\left\langle \nabla G(tv + (1-t)u), u-v \right\rangle dt} \right| \\ &\le& \left| \int_{0}^1{ (c_1 t\|v\| + c_1(1-t)\|u\| + c_2)\| u-v\| dt } \right| \\ &=& (c_1\|v\|/2 + c_1\|u\|/2 +c_2) \| u-v\|. \end{eqnarray*} Then for any $\xi \in \Pi(\mu,\nu)$, H\"older's inequality gives \begin{eqnarray*} \left| \int_{\mathbb{R}^{2d}}{G(u)\mu(du)} - \int_{\mathbb{R}^{2d}}{G(v)\nu(dv)}\right| &\le& \int_{\mathbb{R}^{2d}}{ (c_1\|v\|/2 + c_1\|u\|/2 +c_2) \| u-v\| \xi(du,dv) }\\ &\le& \frac{c_1}{2}\left( \int_{\mathbb{R}^{2d}}{\|v\|^q\nu(dv)} \right)^{1/q} \left( \int_{\mathbb{R}^{2d}}{\|u-v\|^p} \xi(du,dv) \right)^{1/p} \\ &+& \frac{c_1}{2}\left( \int_{\mathbb{R}^{2d}}{\|u\|^q\mu(du)} \right)^{1/q} \left( \int_{\mathbb{R}^{2d}}{\|u-v\|^p} \xi(du,dv) \right)^{1/p}\\ &+& c_2\left( \int_{\mathbb{R}^{2d}}{\|u-v\|^p} \xi(du,dv) \right)^{1/p}. \end{eqnarray*} Since this inequality holds true for any $\xi \in \Pi(\mu,\nu)$, the proof is complete. \end{proof} \begin{lemma}\label{lem_uniform2} The continuous-time processes (\ref{eq_V}), (\ref{eq_X}) are uniformly bounded in $L^2$; more precisely, \begin{eqnarray*} \sup_{t \ge 0} E_{\mathbf{z}}\left[ \|X_t\|^2\right] &\le& C^c_x: = \frac{8 }{(1- 2 \lambda_c) \beta \gamma^2} \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{5(d + A_c)}{\lambda_c} \right) < \infty, \\ \sup_{t \ge 0} E_{\mathbf{z}}\left[ \|V_t\|^2\right] &\le& C^c_v:= \frac{4}{(1- 2 \lambda_c) \beta } \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{5(d + A_c)}{\lambda_c} \right) < \infty. \end{eqnarray*} For $0 < \lambda \le \min\left\lbrace \frac{\gamma (d + A_c)}{K_2 \beta}, \frac{\gamma \lambda_c}{2K_1} \right\rbrace $, where \begin{equation*} K_1 := \max\left\lbrace \frac{32M^2(\frac{1}{2}+\gamma + \delta)}{(1-2 \lambda_c) \beta \gamma^2}, \frac{8(\frac{M}{2} + \frac{\gamma^2}{4} - \frac{\gamma^2 \lambda_c}{4} + \gamma)}{\beta(1-2\lambda_c)} \right\rbrace \end{equation*} and \begin{equation*} K_2:= 2B^2\left( \frac{1}{2} + \gamma + \delta \right), \end{equation*} the SGHMC iterates (\ref{eq_V_dis_appr}), (\ref{eq_X_dis_appr}) satisfy \begin{eqnarray*} \sup_{k \in \mathbb{N} } E_{\mathbf{z}}\left[ \|X^{\lambda}_k\|^2\right] &\le& C^a_x:= \frac{8 }{(1- 2 \lambda_c) \beta \gamma^2} \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{8(d + A_c)}{\lambda_c} \right) < \infty, \\ \sup_{k \in \mathbb{N} } E_{\mathbf{z}}\left[ \|V^{\lambda}_k\|^2\right] &\le& C^a_v:= \frac{4}{(1- 2 \lambda_c) \beta } \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{8(d + A_c)}{\lambda_c} \right) < \infty. \end{eqnarray*} Furthermore, the processes defined in (\ref{pro_tilde}), (\ref{proc_hat}) are also uniformly bounded in $L^2$, with the upper bounds $C^a_v, C^a_x$ and $C^c_v, C^c_x$, respectively. \end{lemma} \begin{proof} The uniform boundedness in $L^2$ of the processes in (\ref{eq_V}), (\ref{eq_X}), (\ref{eq_V_dis_appr}), (\ref{eq_X_dis_appr}) is given in Lemma 8 of \cite{gao}.
From (A.4) of \cite{gao}, it holds that \begin{equation}\label{ineq_lower_V} \mathcal{V}(v,x) \ge \max\left\lbrace \frac{1}{8}(1-2\lambda_c)\beta \gamma^2\|x\|^2, \frac{\beta}{4}(1-2\lambda_c)\|v\|^2 \right\rbrace . \end{equation} Using the notations in their Lemma 8, we denote $$L_t = E_{\mathbf{z}}\left[ \mathcal{V}(V_t,X_t) \right], \qquad L_2(k) = E_{\mathbf{z}}\left[ \mathcal{V}(V^{\lambda}_k,X^{\lambda}_k)/\beta \right],$$ then the following relations hold \begin{eqnarray} L_t &\le& L_s e^{-\gamma \lambda_c (t-s)} + \frac{d + A_c}{\lambda_c}(1- e^{-\gamma \lambda_c(t-s)}), \qquad \text{ for } s \le t, \label{L_c}\\ L_2(k) &\le& L_2(j) + \frac{4(d/\beta + A_c/\beta)}{\lambda_c} \qquad \text{ for } j\le k.\label{L_d} \end{eqnarray} Taking $j=0$ in (\ref{L_d}) gives \begin{equation}\label{ineq_bound_V_algo} E_{\mathbf{z}}\left[ \mathcal{V}(V^{\lambda}_k,X^{\lambda}_k) \right] \le E_{\mathbf{z}}\left[ \mathcal{V}(V^{\lambda}_0,X^{\lambda}_0) \right] + \frac{4(d + A_c)}{\lambda_c}. \end{equation} Therefore, by (\ref{L_c}) we obtain for $nT \le t < (n+1)T, n \in \mathbb{N}$ $$ E_{\mathbf{z}}\left[ \mathcal{V}(\widehat{V}(t,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT})),\widehat{X}(t,nT,(V^{\lambda}_{nT},X^{\lambda}_{nT}))) \right] \le E_{\mathbf{z}}\left[ \mathcal{V}(V^{\lambda}_{nT},X^{\lambda}_{nT}) \right] + \frac{d + A_c}{\lambda_c}.$$ Then the processes in (\ref{proc_hat}) are uniformly bounded in $L^2$ by (\ref{ineq_lower_V}) and (\ref{ineq_bound_V_algo}): $$ \sup_{t \ge 0} E\left[\|\widehat{V}_t\|^2 \right] \le \frac{4}{(1- 2 \lambda_c) \beta } \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{5(d + A_c)}{\lambda_c} \right) = C^c_v$$ and $$ \sup_{t \ge 0} E\left[\|\widehat{X}_t\|^2 \right] \le \frac{8 }{(1- 2 \lambda_c) \beta \gamma^2} \left( \int_{\mathbb{R}^{2d}}{\mathcal{V}(x,v)d\mu_0(x,v)} + \frac{5(d + A_c)}{\lambda_c} \right) = C^c_x.$$ Similarly, from (\ref{L_d}) and (\ref{ineq_bound_V_algo}), we obtain for $nT \le k < (n+1)T, n\in \mathbb{N}$, $$ E_{\mathbf{z}}\left[ \mathcal{V}(\tilde{V}^{\lambda}_k,\tilde{X}^{\lambda}_k) \right] \le E_{\mathbf{z}}\left[ \mathcal{V}(V^{\lambda}_{0},X^{\lambda}_{0}) \right] + \frac{8(d + A_c)}{\lambda_c},$$ and the upper bounds for $\sup_{k \in \mathbb{N}}E[\|\tilde{V}^{\lambda}_k\|^2]$ and $\sup_{k \in \mathbb{N}}E[\|\tilde{X}^{\lambda}_k\|^2]$ are $C^a_v$ and $C^a_x$, respectively. \end{proof} \begin{lemma}\label{lemma:rho_to_w} Let $\mu, \nu$ be any two probability measures on $\mathbb{R}^{2d}$. It holds that $$\mathcal{W}_{\rho}(\mu,\nu) \le c_{17} \left( 1 + \varepsilon_c \left( \int{\mathcal{V}^2d\mu}\right) ^{1/2} + \varepsilon_c \left( \int{\mathcal{V}^2d\nu}\right) ^{1/2} \right) \mathcal{W}_2(\mu,\nu),$$ where $c_{17}:=3\max\{ 1+\alpha_c, \gamma^{-1}\}$. \end{lemma} \begin{proof} From (2.11) of \cite{ear2}, we have that $h(x) \le x$ for $x \ge 0$, and from (\ref{eq:r}), $r((x_1,v_1),(x_2,v_2)) \le (c_{17}/3)\|(x_1,v_1)-(x_2,v_2)\|$.
By definition (\ref{eq_dist_aux}), we estimate \begin{eqnarray*} \mathcal{W}_{\rho}(\mu,\nu) &=& \inf_{\xi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^{2d}}{\rho((x_1,v_1),(x_2,v_2)) \xi(d(x_1,v_1)d(x_2,v_2)) }\\ &\le& \inf_{\xi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^{2d}}{r((x_1,v_1),(x_2,v_2))\left(1+ \varepsilon_c \mathcal{V}(x_1,v_1) + \varepsilon_c \mathcal{V}(x_2,v_2) \right) \xi(d(x_1,v_1)d(x_2,v_2)) }\\ &\le& (c_{17}/3) \inf_{\xi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^{2d}}{\|(x_1,v_1) - (x_2,v_2)\| \left(1+ \varepsilon_c \mathcal{V}(x_1,v_1) + \varepsilon_c \mathcal{V}(x_2,v_2) \right) \xi(d(x_1,v_1)d(x_2,v_2)) }\\ &\le& c_{17} \left( 1 + \varepsilon_c \left( \int{\mathcal{V}^2d\mu}\right) ^{1/2} + \varepsilon_c \left( \int{\mathcal{V}^2d\nu}\right) ^{1/2} \right) \mathcal{W}_2(\mu,\nu). \end{eqnarray*} \end{proof} \begin{lemma}\label{lemma:moment_bound} Let $q \in \mathbb{N}$, $q \ge 1$. It holds that $$C^{2q}_V:= \sup_{k \in \mathbb{N}} E[\|V^{\lambda}_k\|^{2q}] < \infty, \qquad C^{2q}_X:= \sup_{k \in \mathbb{N}} E[\|X^{\lambda}_k\|^{2q}] < \infty.$$ \end{lemma} \begin{proof} We will use the arguments in the proof of Lemma 12 of \cite{gao} to obtain the contraction for $\mathcal{V}(X^{\lambda}_k,V^{\lambda}_k)$ and the arguments of Lemma 3.9 of \cite{five} to obtain higher-moment estimates. First, we have \begin{eqnarray} F_{\mathbf{z}}(X^{\lambda}_{k+1}) - F_{\mathbf{z}}(X^{\lambda}_{k}) - \left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k}), \lambda V^{\lambda}_k \right\rangle &=& \int_0^1{\left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k} + \tau \lambda V^{\lambda}_{k} ) - \nabla F_{\mathbf{z}}(X^{\lambda}_{k}), \lambda V^{\lambda}_{k} \right\rangle } d\tau \nonumber \\ &\le& \int_0^1{ \left\|\nabla F_{\mathbf{z}}(X^{\lambda}_{k} + \tau \lambda V^{\lambda}_{k} ) - \nabla F_{\mathbf{z}}(X^{\lambda}_{k}) \right\| \left\| \lambda V^{\lambda}_{k} \right\| d\tau } \nonumber\\ &\le& \frac{1}{2}M \lambda^2 \|V^{\lambda}_k\|^2. \label{eq:F} \end{eqnarray} Denoting $\Delta^1_k = V^{\lambda}_{k} - \lambda[\gamma V^{\lambda}_{k} + g(X^{\lambda}_{k},U_{\mathbf{z},k})] $, we compute \begin{eqnarray} && \|V^{\lambda}_{k+1}\|^2 \nonumber\\ &=& \|\Delta^1_k\|^2 + 2 \gamma \beta^{-1} \lambda \| \xi_{k+1} \|^2 + 2 \sqrt{2 \gamma \beta^{-1} \lambda}\left\langle \Delta^1_k, \xi_{k+1}\right\rangle \nonumber \\ &\le& \|V^{\lambda}_{k} - \lambda[\gamma V^{\lambda}_{k} + \nabla F_{\mathbf{z}} (X^{\lambda}_k) ]\|^2 + \lambda^2 \| g(X^{\lambda}_{k},U_{\mathbf{z},k}) - \nabla F_{\mathbf{z}} (X^{\lambda}_k) \|^2 + 2 \gamma \beta^{-1} \lambda \| \xi_{k+1} \|^2 + 2 \sqrt{2 \gamma \beta^{-1} \lambda} \left\langle \Delta^1_k, \xi_{k+1}\right\rangle \nonumber\\ &\le& (1-\lambda \gamma)^2 \|V^{\lambda}_{k}\|^2 - 2\lambda(1-\lambda \gamma)\left\langle \nabla F_{\mathbf{z}} (X^{\lambda}_k), V^{\lambda}_k \right\rangle + \lambda^2 \|\nabla F_{\mathbf{z}} (X^{\lambda}_k)\|^2 + \lambda^2 \| g(X^{\lambda}_{k},U_{\mathbf{z},k}) - \nabla F_{\mathbf{z}} (X^{\lambda}_k) \|^2 \nonumber \\ && \qquad + 2 \gamma \beta^{-1} \lambda \| \xi_{k+1} \|^2 + 2 \sqrt{2 \gamma \beta^{-1} \lambda} \left\langle \Delta^1_k, \xi_{k+1}\right\rangle \nonumber\\ &\le& (1-\lambda \gamma)^2 \|V^{\lambda}_{k}\|^2 - 2\lambda(1-\lambda \gamma)\left\langle \nabla F_{\mathbf{z}} (X^{\lambda}_k), V^{\lambda}_k \right\rangle + 3\lambda^2 (M \|X^{\lambda}_k\| +B)^2 + 2 \gamma \beta^{-1} \lambda \| \xi_{k+1} \|^2 + 2 \sqrt{2 \gamma \beta^{-1} \lambda} \left\langle \Delta^1_k, \xi_{k+1}\right\rangle .
\nonumber \label{eq:V} \end{eqnarray} Similarly, we have \begin{equation}\label{eq:X} \|X^{\lambda}_{k+1}\|^2 = \|X^{\lambda}_{k}\|^2 + 2 \lambda\left\langle X^{\lambda}_{k}, V^{\lambda}_{k} \right\rangle + \lambda^2 \|V^{\lambda}_{k}\|^2. \end{equation} Denoting $\Delta^2_k = X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k} - \lambda \gamma^{-1} g(X^{\lambda}_{k},U_{\mathbf{z},k})$, we compute that \begin{eqnarray} &&\|X^{\lambda}_{k+1} + \gamma^{-1} V^{\lambda}_{k+1} \|^2 \nonumber\\ &=& \| X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k} - \lambda \gamma^{-1} g(X^{\lambda}_{k},U_{\mathbf{z},k}) + \sqrt{2 \gamma^{-1} \beta^{-1} \lambda} \xi_{k+1} \|^2 \nonumber\\ &=& \| X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k} - \lambda \gamma^{-1} g(X^{\lambda}_{k},U_{\mathbf{z},k}) \|^2 + 2 \gamma^{-1} \beta^{-1} \lambda \|\xi_{k+1} \|^2 + 2 \sqrt{2 \gamma^{-1} \beta^{-1} \lambda}\left\langle \Delta^2_k, \xi_{k+1}\right\rangle \nonumber\\ &\le& \| X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k} - \lambda \gamma^{-1} \nabla F_{\mathbf{z}}(X^{\lambda}_{k}) \|^2 + \lambda^2 \gamma^{-2} \| g(X^{\lambda}_{k},U_{\mathbf{z},k}) - \nabla F_{\mathbf{z}}(X^{\lambda}_{k}) \|^2 \nonumber \\ && \qquad + 2 \gamma^{-1} \beta^{-1} \lambda \|\xi_{k+1} \|^2 + 2 \sqrt{2 \gamma^{-1} \beta^{-1} \lambda} \left\langle \Delta^2_k, \xi_{k+1}\right\rangle \nonumber\\ &\le& \| X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k}\|^2 - 2 \lambda \gamma^{-1} \left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k}),X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k}\right\rangle + 3\lambda^2 \gamma^{-2}(M \|X^{\lambda}_{k} \| + B)^2\nonumber\\ && \qquad + 2 \gamma^{-1} \beta^{-1} \lambda \|\xi_{k+1} \|^2 + 2 \sqrt{2 \gamma^{-1} \beta^{-1} \lambda} \left\langle \Delta^2_k, \xi_{k+1}\right\rangle . \label{eq:XV} \end{eqnarray} Let us denote $\mathcal{V}_{k} = \mathcal{V}(X^{\lambda}_k,V^{\lambda}_k)$.
From (\ref{eq:F}), (\ref{eq:V}), (\ref{eq:X}) and (\ref{eq:XV}) we compute that \begin{eqnarray} && \frac{\mathcal{V}_{k+1} - \mathcal{V}_{k} }{\beta } \label{eq:VV}\\ &\le& \lambda \left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k}), V^{\lambda}_k \right\rangle + \frac{1}{2}M \lambda^2 \|V^{\lambda}_k\|^2 \nonumber \\ &-& \frac{1}{2} \lambda \gamma \left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k}),X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k}\right\rangle + \frac{3}{4} \lambda^2 (M \|X^{\lambda}_k\| +B)^2 \nonumber\\ && \qquad + \frac{1}{2} \gamma \beta^{-1} \lambda \|\xi_{k+1} \|^2 + \frac{1}{2} \gamma^2 \sqrt{2 \gamma^{-1} \beta^{-1} \lambda} \left\langle \Delta^2_k, \xi_{k+1}\right\rangle \nonumber\\ &+& \frac{1}{4}(-2\lambda \gamma + \lambda^2 \gamma^2) \|V^{\lambda}_{k}\|^2 - \frac{1}{2}\lambda(1-\lambda \gamma)\left\langle \nabla F_{\mathbf{z}} (X^{\lambda}_k), V^{\lambda}_k \right\rangle + \frac{3}{4} \lambda^2 (M \|X^{\lambda}_k\| +B)^2 \nonumber \\ && \qquad + \frac{1}{2} \gamma \beta^{-1} \lambda \| \xi_{k+1} \|^2 + \frac{1}{2} \sqrt{2 \gamma \beta^{-1} \lambda} \left\langle \Delta^1_k, \xi_{k+1}\right\rangle \nonumber\\ &-& \frac{1}{2} \lambda \gamma^2 \lambda_c \left\langle X^{\lambda}_{k}, V^{\lambda}_{k} \right\rangle - \frac{1}{4} \lambda^2 \gamma^2 \lambda_c \|V^{\lambda}_{k}\|^2 \nonumber \\ &=& - \frac{1}{2} \lambda \gamma \left\langle \nabla F_{\mathbf{z}}(X^{\lambda}_{k}),X^{\lambda}_{k}\right\rangle - \frac{1}{2} \lambda \gamma \|V^{\lambda}_k\|^2 - \frac{1}{2} \lambda \gamma^2 \lambda_c \left\langle X^{\lambda}_{k}, V^{\lambda}_{k} \right\rangle + \lambda^2 \mathcal{E}_k \nonumber \\ && \qquad + \gamma \beta^{-1} \lambda \|\xi_{k+1} \|^2 + \Sigma_k, \nonumber \end{eqnarray} where \begin{eqnarray*} \mathcal{E}_k&:=&\left( \frac{1}{2}M + \frac{1}{4} \gamma^2 - \frac{1}{4} \gamma^2 \lambda_c\right) \|V^{\lambda}_k\|^2 + \frac{3}{2} (M \|X^{\lambda}_k\| +B)^2 + \frac{1}{2}\gamma\left\langle \nabla F_{\mathbf{z}} (X^{\lambda}_k), V^{\lambda}_k \right\rangle,\\ \Sigma_k &:=& \frac{1}{2} \gamma^2 \sqrt{2 \gamma^{-1} \beta^{-1} \lambda} \left\langle \Delta^2_k, \xi_{k+1}\right\rangle + \frac{1}{2} \sqrt{2 \gamma \beta^{-1} \lambda} \left\langle \Delta^1_k, \xi_{k+1}\right\rangle . \end{eqnarray*} Using the inequality (\ref{eq:drift}), we obtain \begin{eqnarray} \frac{\mathcal{V}_{k+1} - \mathcal{V}_{k} }{\beta} &\le& - \lambda \gamma \lambda_c F_{\mathbf{z}}(X^{\lambda}_k) - \frac{1}{4}\lambda \gamma^3 \lambda_c \|X^{\lambda}_k \|^2 + \lambda \gamma A_c/\beta - \frac{1}{2} \lambda \gamma \|V^{\lambda}_k\|^2 - \frac{1}{2} \lambda \gamma^2 \lambda_c \left\langle X^{\lambda}_{k}, V^{\lambda}_{k} \right\rangle + \lambda^2 \mathcal{E}_k \nonumber \\ && \qquad + \gamma \beta^{-1} \lambda \|\xi_{k+1} \|^2 + \Sigma_k. \label{eq:1} \end{eqnarray} The quantity $\mathcal{E}_k$ is bounded as follows \begin{eqnarray*} \mathcal{E}_k &\le& \left( \frac{1}{2}M + \frac{1}{4} \gamma^2 - \frac{1}{4} \gamma^2 \lambda_c + \gamma \right) \|V^{\lambda}_k\|^2 + M^2(3+2\gamma)\|X^{\lambda}_k\|^2 + B^2 \left( 3+ 2\gamma \right) . \end{eqnarray*} As in \cite{gao}, we deduce that \begin{eqnarray} \mathcal{V}_k/\beta &\ge& \max\left\lbrace \frac{1}{8}(1-2\lambda_c)\gamma^2\|X^{\lambda}_k\|^2, \frac{1}{4}(1-2\lambda_c)\|V^{\lambda}_k\|^2 \right\rbrace \label{eq:VXV} \\ &\ge& \frac{1}{16}(1-2\lambda_c)\gamma^2\|X^{\lambda}_k\|^2 + \frac{1}{8}(1-2\lambda_c)\|V^{\lambda}_k\|^2.
\nonumber \end{eqnarray} It follows that \begin{equation}\label{eq:E} \mathcal{E}_k \le K_1\mathcal{V}_k/\beta + K_2 \end{equation} where $$K_1 = \max\left\lbrace \frac{M^2(3+2\gamma)}{\frac{1}{16}(1-2\lambda_c)\gamma^2}, \frac{(M/2 + \gamma^2/4 - \gamma^2 \lambda_c /4 + \gamma)}{\frac{1}{8}(1-2\lambda_c)} \right\rbrace, \qquad K_2 = B^2(3 + 2\gamma).$$ Similarly, we bound $\Sigma_k$, using (\ref{eq:VXV}) and the definitions of $\Delta^1_k, \Delta^2_k$, \begin{eqnarray*} \Sigma_k^2 &\le& 2 \gamma^{3} \beta^{-1} \lambda \|\Delta^2_k\|^2 \|\xi_{k+1}\|^2 + 2 \gamma \beta^{-1} \lambda \|\Delta^1_k\|^2 \|\xi_{k+1}\|^2\\ &\le& 2 \lambda \gamma \beta^{-1} \|\xi_{k+1}\|^2 \left( \gamma^{2} \|X^{\lambda}_{k} + \gamma^{-1} V^{\lambda}_{k} - \lambda \gamma^{-1} g(X^{\lambda}_{k},U_{\mathbf{z},k})\|^2 + \|V^{\lambda}_{k} - \lambda[\gamma V^{\lambda}_{k} + g(X^{\lambda}_{k},U_{\mathbf{z},k})]\|^2 \right) \\ &\le& 2 \lambda \gamma \beta^{-1} \|\xi_{k+1}\|^2 \left( 3 \gamma^2 \|X^{\lambda}_{k}\|^2 + 3\|V^{\lambda}_{k}\|^2 +3(M\|X^{\lambda}_{k}\| + B)^2 + 2(1-\lambda \gamma)^2 \|V^{\lambda}_{k}\|^2 + 2(M\|X^{\lambda}_{k}\| + B)^2 \right)\\ &\le& 2 \lambda \gamma \beta^{-1} \|\xi_{k+1}\|^2 \left( (3 \gamma^2 + 10M^2) \|X^{\lambda}_{k}\|^2 + (3 + 2(1-\lambda \gamma)^2)\|V^{\lambda}_{k}\|^2 + 10B^2 \right), \end{eqnarray*} and thus \begin{equation}\label{eq:bound=sig} \Sigma_k^2 \le \left( P_1 \mathcal{V}_k/\beta + P_2\right) \lambda \|\xi_{k+1}\|^2 \end{equation} where $$P_1 = 2\max \left\lbrace \frac{2 \gamma \beta^{-1} (3 \gamma^2 + 10M^2)}{\frac{1}{16}(1-2 \lambda_c) \gamma^2}, \frac{2 \gamma \beta^{-1}(3 + 2(1-\lambda \gamma)^2)}{\frac{1}{8}(1-2 \lambda_c) } \right\rbrace , \qquad P_2 = 20 \gamma \beta^{-1} B^2.$$ Noting that $\lambda_c \le 1/4,$ we have \begin{eqnarray*} \mathcal{V}_k/\beta &=& F_{\mathbf{z}}(X^{\lambda}_k) + \frac{1}{4}\gamma^2(1-\lambda_c)\|X^{\lambda}_k\|^2 + \frac{1}{2}\gamma \left\langle X^{\lambda}_k, V^{\lambda}_k \right\rangle + \frac{1}{2} \|V^{\lambda}_k\|^2 \\ &\le& F_{\mathbf{z}}(X^{\lambda}_k) + \frac{1}{4}\gamma^2\|X^{\lambda}_k\|^2 + \frac{1}{2}\gamma \left\langle X^{\lambda}_k, V^{\lambda}_k \right\rangle + \frac{1}{2\lambda_c} \|V^{\lambda}_k\|^2. \end{eqnarray*} From (\ref{eq:1}), (\ref{eq:X}) we obtain \begin{eqnarray*} \frac{\mathcal{V}_{k+1} - \mathcal{V}_{k} }{\beta} &\le& - \lambda \gamma \lambda_c \left( F_{\mathbf{z}}(X^{\lambda}_k) + \frac{1}{4} \gamma^2 \|X^{\lambda}_k \|^2 - A_c/(\beta \lambda_c) + \frac{1}{2 \lambda_c} \|V^{\lambda}_k\|^2 + \frac{1}{2} \gamma \left\langle X^{\lambda}_{k}, V^{\lambda}_{k} \right\rangle \right) \nonumber \\ && \qquad + \lambda^2 \mathcal{E}_k + \gamma \beta^{-1} \lambda \|\xi_{k+1} \|^2 + \Sigma_k \\ &\le& \lambda \gamma \left(A_c/\beta - \lambda_c \mathcal{V}_k/\beta \right) + (K_1\mathcal{V}_{k}/\beta + K_2) \lambda^2 + \gamma \beta^{-1} \lambda \|\xi_{k+1} \|^2 + \Sigma_k. \end{eqnarray*} Therefore, for $0< \lambda < \frac{\gamma \lambda_c}{2K_1}$, $$\mathcal{V}_{k+1} \le \phi \mathcal{V}_{k} + \tilde{K}_{k+1},$$ where \begin{equation} \label{eq:phi_K} \phi:= 1 - \lambda \gamma \lambda_c/2, \qquad \tilde{K}_{k+1}:= \lambda \gamma A_c + \lambda^2\beta K_2 + \lambda \gamma \|\xi_{k+1}\|^2 + \beta \Sigma_{k}. \end{equation} Define $E_k[\cdot] := E[\cdot|(X^{\lambda}_k,V^{\lambda}_k), \mathbf{Z} = \mathbf{z}]$.
We then compute as follows, \begin{eqnarray} E_k[\mathcal{V}^{2q}_{k+1}] &\le& E_k\left[ \left(| \phi \mathcal{V}_{k}|^2 + 2\phi \mathcal{V}_{k}\tilde{K}_{k+1} + |\tilde{K}_{k+1} |^2 \right)^q \right] \nonumber\\ && \qquad \le | \phi \mathcal{V}_{k} |^{2q} + 2q | \phi \mathcal{V}_{k} |^{2(q-1)} E_k\left[ \phi \mathcal{V}_{k} \tilde{K}_{k+1} \right] + \sum_{j=2}^{2q} \binom{2q}{j} E_k\left[ | \phi \mathcal{V}_{k}|^{2q-j} | \tilde{K}_{k+1}|^j \right] \nonumber\\ \label{eq:2p0} \end{eqnarray} where the last inequality is due to Lemma A.3 of \cite{five}. Denoting $c_{19}:= \gamma A_c + \beta K_2 + \gamma d,$ we continue \begin{eqnarray} E_k[\mathcal{V}^{2q}_{k+1}] &\le& | \phi \mathcal{V}_{k} |^{2q} + 2\lambda c_{19} q | \phi \mathcal{V}_{k} |^{2q-1} + \sum_{\ell=0}^{2q-2} \binom{2q}{\ell + 2} E_k\left[ | \phi \mathcal{V}_{k} |^{2q-2 -\ell} |\tilde{K}_{k+1} |^{\ell} |\tilde{K}_{k+1} |^{2} \right] \nonumber\\ &\le& | \phi \mathcal{V}_{k} |^{2q} + 2\lambda c_{19} q | \phi \mathcal{V}_{k} |^{2q-1} + \binom{2q}{2} \sum_{\ell=0}^{2q-2} \binom{2q-2}{\ell} E_k\left[| \phi \mathcal{V}_{k} |^{2q-2 -\ell} |\tilde{K}_{k+1} |^{\ell} |\tilde{K}_{k+1} |^{2} \right] \nonumber\\ &\le& | \phi \mathcal{V}_{k} |^{2q} + 2\lambda c_{19} q | \phi \mathcal{V}_{k} |^{2q-1} +q(2q-1)E_k\left[(| \phi \mathcal{V}_{k} | + |\tilde{K}_{k+1}|)^{2q-2} |\tilde{K}_{k+1} |^{2} \right] \nonumber\\ &\le& | \phi \mathcal{V}_{k} |^{2q} + 2 \lambda c_{19} q | \phi \mathcal{V}_{k} |^{2q-1} + q(2q-1) 2^{2q-3} | \phi \mathcal{V}_{k} |^{2q-2} E_k[|\tilde{K}_{k+1} |^{2}] +q(2q-1) 2^{2q-3} E_k[|\tilde{K}_{k+1} |^{2q}]. \nonumber \\ \label{eq:2p} \end{eqnarray} Clearly we have \begin{eqnarray*} E_k[\tilde{K}_{k+1}^2] &\le& 3 \lambda (\gamma A_c + \beta K_2)^2 + 3 \lambda \gamma^2 E\|\xi_{k+1}\|^4 +3 \lambda \beta d P_1|\mathcal{V}_k| + 3 \lambda \beta^2 d P_2 ,\\ E_k[\tilde{K}_{k+1}^{2q}] &\le& 2^{2q-1} \lambda E \left(\gamma A_c + \beta K_2 + \gamma \|\xi_{k+1}\|^2 + \beta \sqrt{P_2}\|\xi_{k+1}\|\right)^{2q} + 2^{2q-1}\lambda \beta^q P^q_1 |\mathcal{V}_k|^q E\|\xi_{k+1}\|^{2q}. \end{eqnarray*} Define $$\tilde{M}_1:= \max \left\lbrace \frac{ (\gamma A_c + \beta K_2)^2 + \gamma^2 E\|\xi_{k+1}\|^4+ \beta^2 d P_2}{ \beta d P_1}, \frac{\left( E \left(\gamma A_c + \beta K_2 + \gamma \|\xi_{k+1}\|^2 + \beta \sqrt{P_2}\|\xi_{k+1}\|\right)^{2q}\right)^{1/q} }{\beta P_1 E^{1/q}\|\xi_{k+1}\|^{2q} } \right\rbrace.$$ On $\left\lbrace \mathcal{V}_k \ge \tilde{M}_1\right\rbrace $ we have \begin{eqnarray*} E_k[\tilde{K}_{k+1}^2] &\le& 6\lambda \beta d P_1|\mathcal{V}_k|,\\ E_k[\tilde{K}_{k+1}^{2q}] &\le& 2^{2q} \lambda \beta^q P^q_1 |\mathcal{V}_k|^q E\|\xi_{k+1}\|^{2q}. \end{eqnarray*} And thus \begin{eqnarray} E_k[\mathcal{V}^{2q}_{k+1}] &\le& \phi |\mathcal{V}_{k} |^{2q} + 2 \lambda c_{19} q | \mathcal{V}_{k} |^{2q-1} + 6\lambda q(2q-1) 2^{2q-3} \beta dP_1 | \mathcal{V}_{k} |^{2q-1} + \lambda q(2q-1) 2^{4q-3} \beta^q P^q_1 E\|\xi_{k+1}\|^{2q} | \mathcal{V}_{k} |^{q} \nonumber \\ &=& (1 - \lambda \gamma \lambda_c/4) \mathcal{V}^{2q}_{k} \nonumber \\ &-& \lambda \gamma \lambda_c/12 \mathcal{V}^{2q}_{k} + 2 \lambda c_{19} q | \mathcal{V}_{k} |^{2q-1} \nonumber\\ &-& \lambda \gamma \lambda_c/12 \mathcal{V}^{2q}_{k} + 6\lambda q(2q-1) 2^{2q-3} \beta dP_1 | \mathcal{V}_{k} |^{2q-1} \nonumber \\ &-& \lambda \gamma \lambda_c/12 \mathcal{V}^{2q}_{k} + \lambda q(2q-1) 2^{4q-3} \beta^q P^q_1 E\|\xi_{k+1}\|^{2q} | \mathcal{V}_{k} |^{q}.
\nonumber \label{eq:2p2} \end{eqnarray} If we choose $$\tilde{M}:= \max \left\lbrace \tilde{M}_1, \frac{24 c_{19}q}{\gamma \lambda_c}, \frac{72q(2q-1) 2^{2q-3} \beta dP_1}{\gamma \lambda_c}, \left(\frac{ 12q(2q-1)2^{4q-3}\beta^q P^q_1E\|\xi_{k+1}\|^{2q}}{\gamma \lambda_c}\right)^{1/q} \right\rbrace $$ then on $\{ \mathcal{V}_k \ge \tilde{M}\}$, the second, third and fourth terms on the right-hand side of (\ref{eq:2p2}) are nonpositive, and then \begin{eqnarray*} E_k[\mathcal{V}^{2q}_{k+1}] &\le& (1 - \lambda \gamma \lambda_c/4) \mathcal{V}^{2q}_{k}. \end{eqnarray*} On $\{ \mathcal{V}_{k} < \tilde{M} \}$, we have \begin{eqnarray*} E_k[\mathcal{V}^{2q}_{k+1}] &\le& (1 - \lambda \gamma \lambda_c/4)\mathcal{V}^{2q}_{k} + \lambda \tilde{N}, \end{eqnarray*} where $\tilde{N}=2 c_{19} q \tilde{M}^{2q-1} + 6 q(2q-1) 2^{2q-3} \beta dP_1 \tilde{M}^{2q-1} + q(2q-1) 2^{4q-3} \beta^q P^q_1 E\|\xi_{k+1}\|^{2q} \tilde{M}^{q} $. For sufficiently small $\lambda$, we get from these bounds $$ E[\mathcal{V}^{2q}_k] \le (1 - \lambda \gamma \lambda_c/4)^k\mathcal{V}^{2q}_0 + \frac{4\tilde{N}}{\gamma \lambda_c}.$$ The proof is complete by using (\ref{eq:VXV}). \end{proof} \subsection{Explicit dependence of constants on important parameters} Similar to \cite{gao}, we choose $\mu_0$ in such a way that $$\int_{\mathbb{R}^{2d}}\mathcal{V}(x,v)d\mu_0(x,v) = \mathcal{O}(\beta), \qquad \int_{\mathbb{R}^{2d}}e^{\mathcal{V}(x,v)}d\mu_0(x,v) = \mathcal{O}(e^{\beta}).$$ Then we get $C^c_x = C^c_v = C^a_x = C^a_v = \mathcal{O}((\beta + d)/\beta).$ It follows that $$c_2 = c_3 = c_7 = c_{16} = \mathcal{O}(\sqrt{(\beta + d)/\beta}).$$ One checks that $$A_c = \mathcal{O}(\beta), \qquad \alpha_c = \mathcal{O}(1), \qquad \Lambda_c = \mathcal{O}(\beta + d), \qquad R_1 = \mathcal{O}(\sqrt{1 + d/\beta}),$$ and \begin{eqnarray*} c_* &=& \mathcal{O}(\sqrt{\beta+d}e^{-\mathcal{O}(\beta+d)}), \\ C_* &=&\mathcal{O}\left( e^{\Lambda_c/p} \left( R_1^{p-3} \frac{d+\beta}{\beta c_*} \right)^{1/p} \right) = \mathcal{O}\left( \frac{(d+\beta)^{1/2 - 1/(2p)}}{\beta^{1/2 - 1/(2p)}} \frac{e^{2\Lambda_c/p}}{\Lambda_c^{1/(2p)}} \right) \\ &=& \mathcal{O}\left( \frac{(d+\beta)^{1/2 - 1/(2p)}}{\beta^{1/2 - 1/(2p)}} \left( \frac{e^{\Lambda_c}}{\Lambda_c^{1/2}} \right)^{2/p} \Lambda^{1/(2p)}_c \right) = \mathcal{O}\left( \frac{(d+\beta)^{1/2}}{\beta^{1/2 - 1/(2p)} c^{2/p}_*} \right). \end{eqnarray*} The constants $c_*$ and $C_*$ correspond to $\mu_*$ and $C$, respectively, in \cite{gao}. In addition, we check \begin{eqnarray*} c_{19} = \mathcal{O}(d+\beta), \qquad \tilde{M} = \tilde{M}_1 = \mathcal{O}((d + \beta)^2/d),\\ \tilde{N} = \mathcal{O}\left( \frac{(d+\beta)^{4q-1}}{d^{2q-1}} \right), \qquad c_{18} = \mathcal{O}((d+\beta)^{3/2}/d^{1/2}) \end{eqnarray*} and hence $$\tilde{C} = \mathcal{O}\left( \frac{(d+\beta)^{1/2 + 2/p}}{\beta^{1/2} d^{1/(2p)} } \frac{e^{-c_*}}{c^{2/p}_*(1-e^{-c_*})} \right).$$ From Lemma 16 of \cite{gao}, we get $$\mathcal{W}_p (\mu_0,\pi_{\mathbf{z}}) = \mathcal{O}\left(\sqrt{\frac{\beta+d}{\beta}}\right). $$ Furthermore, it is observed that $$\sigma = \mathcal{O}\left( \frac{(d+\beta)^{1-1/(4q)}}{d^{1/2-1/(4q)}} \right) .$$ Therefore, for a fixed $k$, the term $\mathcal{B}_1$ satisfies \begin{eqnarray*} \mathcal{B}_1 &=& \mathcal{O}\left( \frac{(d+\beta)^{3/2 + 2/p - 1/(4q)}}{\beta^{1/2} d^{1/2+1/(4q)} } \frac{e^{-c_*}}{c^{2/p}_*(1-e^{-c_*})} \right)(\lambda^{1/(2p)} + \delta^{1/(2p)}) \\ &&+ \mathcal{O}\left( \frac{(d+\beta)^{3/2 + 1/(2p) - 1/(4q)}}{d^{1/2 - 1/(4q)}\beta^{1/2} c^{2/p}_*} \right) e^{-c_*k\lambda}.
\end{eqnarray*} Since $c_*$ is exponentially small in $(\beta+d)$, our bound for $\mathcal{B}_1$ is worse than that of $\mathcal{J}_1(\varepsilon) + \overline{\mathcal{J}}_0(\varepsilon)$ given in \cite{gao}. \section*{Acknowledgments} Both authors were supported by the NKFIH (National Research, Development and Innovation Office, Hungary) grant KH 126505 and the ``Lend\"ulet'' grant LP 2015-6 of the Hungarian Academy of Sciences. The authors thank Minh-Ngoc Tran for helpful discussions.
\section{Introduction} \label{sec:introduction} \input{sec_introduction} \section{Spatial Consistency}\label{sec:spatial_cons} \input{sec_spatial_consistency} \section{Measurements}\label{sec:measurements} \input{sec_measurements} \section{Numerical Results} \label{sec:numerical_results} \input{sec_numerical_results} \section{Conclusion} \label{sec:conclusion} In this paper, we evaluate the spatial consistency feature based on \ac{SIMO} measurements with angular information of the \acp{MPC}. By studying the behavior of \(d_{\mathrm{CMD}}\) over distance, we categorize each track into one of the ``\ac{LoS} radial'', ``\ac{LoS} tangential'', ``Far away'' and ``Uncorrelated'' classes. In general, the similarity decreases with distance; however, the rate of decrease varies from class to class due to the individual geometry. Moreover, low similarity is observed in the \ac{NLoS} scenarios, making it difficult to cluster users in such scenarios. These findings could be beneficial for future \ac{3GPP} channel-model proposals. This work is a first step intended to motivate a deeper analysis of the measurement data with respect to spatial consistency. As a next step, we will analyze the dependence of the spatial consistency on the angular spread and the K-factor to better understand in which radio environments massive \ac{MIMO} schemes that utilize similarity in covariance matrices can be applied. This will also include a direct comparison with the \ac{3GPP} spatial consistency feature. \section*{Acknowledgment}\label{sec:acknowledgment} A part of this work has been performed in the framework of the Horizon 2020 project ONE5G (ICT-760809), funded by the European Union. The authors would like to acknowledge the contributions of their colleagues in the project, although the views expressed in this contribution are those of the authors and do not necessarily represent the project. Moreover, the authors thank their Fraunhofer \ac{HHI} colleagues Fabian Undi, Leszek Raschkowski and Stephan Jaeckel for conducting the measurements; Stephan Jaeckel also performed the data post-processing. We also thank Boonsarn Pitakdumrongkija\footnote{Boonsarn was with NEC at the time of the measurements but has since left NEC.}, Xiao Peng and Masayuki Ariyoshi from NEC Japan for enabling these measurements in the first place. \bibliographystyle{IEEEtran} \subsection{Track Classification}\label{sec:track_class} After a thorough examination of each track with its serving \ac{BS} and corresponding surroundings shown in \cref{fig:meas_map}, we sort each measurement into one of the following classes: \begin{itemize} \item \textbf{\acs{LoS} radial}: The track is approximately radial/perpendicular to the \ac{BS} and \ac{LoS} is available throughout the whole track. \item \textbf{\acs{LoS} tangential}: The track is approximately tangential \ac{w.r.t.} the position of the \ac{BS} with a \ac{LoS} condition over the complete track. \item \textbf{Far away}: The track is located far away (at least \SI{100}{\meter}) from the \ac{BS} relative to the track length of \(\approx \SI{40}{\meter}\). Only available in the open street scenario. \item \textbf{Uncorrelated}: Tracks with low correlation, with either \ac{NLoS} condition or obstructed \ac{LoS} condition\footnote{Obstructed \ac{LoS} means objects with a size similar to the wavelength between transmitter and receiver, e.g. tree branches.}. \item \textbf{Other}: The curve shape of the spatial consistency does not follow that of the other classes.
\end{itemize} Table~\ref{tab:track_classes} maps the measurement tracks from \cref{fig:meas_map} to the classes given above. Note that due to the highly dynamic transmission environment, e.g. transitions from \ac{LoS} to \ac{NLoS} or vice versa, and obstacles that are not shown in the measurement map, e.g. parked cars, the above-mentioned classes cannot cover all measurements. Therefore, we leave the evaluation of the measurements in the ``Other'' class for future work. We can see from the table that, except for the ``Far away'' class, each class contains tracks from both the ``Campus'' and the ``Open Street'' scenario. In order to study the behavior of the \ac{CMD} similarity \ac{w.r.t.} distance, we set the starting \ac{LTTS} of each measurement as user 1, while treating each of the remaining \acp{LTTS} as user 2 moving away from user 1. Therefore, the \(d_{\mathrm{CMD}}\) between two users can be calculated according to \cref{eq:cmd} for each track. \cref{fig:LoS_radial} shows the \ac{CMD} similarity of the class ``\ac{LoS} radial'' over distance. For visualization purposes, the area between the minimum and maximum \(d_{\mathrm{CMD}}\) over all tracks in the class at each \ac{LTTS} is shown as a gray cover plot; otherwise, the many lines within the same figure would be hard to distinguish. In addition, two representative \(d_{\mathrm{CMD}}\) curves are given to show the dynamic range of the measurement tracks. We can see in \cref{fig:LoS_radial} that in this class the average \(d_{\mathrm{CMD}}\) drops steadily over distance but overall remains high, e.g. mostly above 0.5 up to \SI{20}{\meter}. This high correlation can be explained by the \ac{LoS} condition and the fact that most of the power is received via the \ac{LoS} path. Similarly, \cref{fig:LoS_tangential} shows results for the ``\ac{LoS} tangential'' class. Here \(d_{\mathrm{CMD}}\) decreases almost proportionally with the distance, down to 0.1. In comparison to the ``\ac{LoS} radial'' class, the similarity in the ``\ac{LoS} tangential'' class decreases faster. This can be explained by the change in angles: in the ``\ac{LoS} radial'' class, where the tracks are perpendicular to the \ac{BS}, moving along the tracks only changes the elevation angle of the \acp{MPC}, whereas in the ``\ac{LoS} tangential'' class, where the tracks are tangential to the \ac{BS}, moving along the tracks changes both the azimuth and elevation angles of the \acp{MPC}. The additional change in the azimuth angle of the \acp{MPC} causes the steeper decrease in \(d_{\mathrm{CMD}}\). Next, the ``Far away'' class is shown in \cref{fig:Far_Away}. Interestingly, the \ac{CMD} similarity fluctuates over distance and no significant decrease is observed for tracks in this class. This is due to the fact that the distance between users has a smaller impact on the channel for users located far away from the \ac{BS} than for those closer to it. To further elaborate, the change in the channel coefficients depends on the ratio of the relative distance between users to the total distance from the \ac{BS} to the track; the smaller the ratio, the smaller the change in \(d_{\mathrm{CMD}}\). Finally, \cref{fig:Uncorrelated} shows the ``Uncorrelated'' class. In this case, low correlation is observed in the measurements, since \(d_{\mathrm{CMD}}\) drops below 0.4 after the first few \acp{LTTS}. A cross-check with \cref{fig:meas_map} shows that most of these measurements are taken on tracks without direct \ac{LoS} to the serving \ac{BS}.
This indicates that the angles of the received \acp{MPC} at the \ac{BS} change rapidly even when moving the transmitter by \SI{1}{\meter}. One explanation for this could be a fast change of scattering objects, which results in discontinuous phase and amplitude jumps of the \acp{MPC}. This is also referred to as the ``birth'' and ``death'', or lifetime, of scatterers. In our measurements, this relates to scatterers observed by the \ac{BS}, which was a \ac{UCA} deployed on a pole and therefore able to receive \acp{MPC} from all azimuth directions. \subsection{Result Analysis}\label{sec:result_analysis} These four classes show that the spatial consistency depends less on the \acp{LSP} of a given scenario and more on the individual geometry, as both the ``Campus'' and ``Open Street'' scenarios appear in the same classes. In general, the similarity of the covariance matrices decreases over distance. However, depending on the class, the similarity can vary strongly locally; e.g. the similarity remains high over \SI{40}{\meter} in the ``Far away'' class, whereas in the ``Uncorrelated'' class hardly any similarity is observed. Furthermore, the similarity of the covariance matrices decreases within a very short distance in many of the \ac{NLoS} scenarios. This indicates that clustering of users based on covariance matrix similarity can be difficult to achieve in \ac{NLoS} scenarios. The effects described above are not captured by the current \ac{3GPP} proposal of the spatial consistency feature, where only a single dependence on the ``correlation distance'' and the distance between users is captured, see \cite{KDJT18}. Therefore, an extension of the spatial consistency feature in \cite{3GPP17-38901} is required. \begin{table} \renewcommand{\tabcolsep}{1pt} \centering \begin{tabular}{C{1.3cm}|C{0.3cm}|C{4.8cm}} \hline Track Class & \acs{BS} IDs & Track IDs \\ \hline \hline \multirow{4}{\linewidth}{\acs{LoS} radial} & 1 & 2 (\SI{6}{\meter}) \\ \cline{2-3} & 2 & 4 (\SI{6}{\meter}) \\ \cline{2-3} & 5 & 13, 25 \\ \cline{2-3} & 6 & 24 (\SI{6}{\meter})\\ \hline \multirow{3}{\linewidth}{\acs{LoS} tangential} & 1 & 4 (\SI{3}{\meter})\\ \cline{2-3} & 2 & 3 \\ \cline{2-3} & 3 & 4 (\SI{6}{\meter})\\ \cline{2-3} & 5 & 23, 24 \\ \cline{2-3} & 6 & 14, 15 (\SI{3}{\meter}), 22\\ \cline{2-3} & 7 & 17 (\SI{6}{\meter}), 18 (\SI{3}{\meter}), 19 (\SI{3}{\meter}), 20 (\SI{3}{\meter}) \\ \hline \multirow{3}{\linewidth}{Far away} & 5 & 15 (\SI{3}{\meter}), 16, 18 (\SI{3}{\meter}), 19, 20 (\SI{3}{\meter}), 21, 22 \\ \cline{2-3} & 6 & 17 (\SI{6}{\meter}), 18 (\SI{6}{\meter}), 19 (\SI{6}{\meter}) \\ \cline{2-3} & 7 & 13 (\SI{6}{\meter}), 15 (\SI{3}{\meter}), 23 (\SI{6}{\meter}), 25 (\SI{3}{\meter}) \\ \hline \multirow{4}{\linewidth}{Uncorrelated} & 1 & 5 (\SI{6}{\meter}), 6, 7, 9, 10, 11, 12 \\ \cline{2-3} & 2 & 1 (\SI{6}{\meter}), 2 (\SI{6}{\meter}), 5, 6, 7, 8, 10 (\SI{6}{\meter}) \\ \cline{2-3} & 3 & 2, 5 (\SI{6}{\meter}), 9, 10 (\SI{3}{\meter}), 11, 12 \\ \cline{2-3} & 5 & 26 (\SI{3}{\meter}) \\ \cline{2-3} & 6 & 21 (\SI{6}{\meter}) \\ \hline \end{tabular} \caption{Mapping of measurement tracks to track classes.} \label{tab:track_classes} \vspace{-0.35cm} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{Paper_LoS_radial} \caption{\ac{CMD} similarity over distance for the tracks in class ``\acs{LoS} radial'' according to \cref{tab:track_classes}.
} \label{fig:LoS_radial} \vspace{-0.35cm} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Paper_LoS_tangential} \caption{\ac{CMD} similarity over distance for the tracks in class ``\acs{LoS} tangential'' according to \cref{tab:track_classes}. } \label{fig:LoS_tangential} \vspace{-0.35cm} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Paper_Far_Away} \caption{\ac{CMD} similarity over distance for the tracks in class ``Far away'' according to \cref{tab:track_classes}. } \label{fig:Far_Away} \vspace{-0.35cm} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Paper_Uncorrelated} \caption{\ac{CMD} similarity over distance for the tracks in class ``Uncorrelated'' according to \cref{tab:track_classes}. } \label{fig:Uncorrelated} \vspace{-0.35cm} \end{figure} \subsection{Covariance Matrix}\label{sec:cov_mtrx} The covariance matrix is obtained by averaging the channel coefficients over the time duration $\tau$ and the number of \acl{OFDM} subcarriers $N$. Channel coefficients between the transmitter and the receiver at time $t$ on the $n$-th subcarrier are denoted as \(\mathbf{H}_{t,n} \in \mathbb{C}^{n_r \times n_t}\), where \(n_r\) and \(n_t\) are the numbers of antennas at the receiver and the transmitter, respectively. The covariance matrix at the receiver side $\mathbf{R}\in\mathbb{C}^{n_r\times n_r}$ is often defined as \begin{equation} \mathbf{R}^{(\mathrm{Lit})} = \mathbb{E}\left[\mathbf{H}_{t,n}\mathbf{H}^\mathrm{H}_{t,n}\right], \label{eq:cov_literature} \end{equation} without further explanation of how exactly the covariance matrix can be obtained \cite{ANAC13,MHA+17}, because in simulations the covariance matrix is generated directly and Gaussian noise is added for individual channel realizations. In \cref{eq:cov_literature}, $\mathbb{E}[\cdot]$ denotes the expectation operator. However, in our case the covariance matrix has to be obtained from discrete measurements and is calculated as \begin{equation} \mathbf{R}^{(\mathrm{Meas})}=\frac{1}{\tau N}\sum_{t=1}^{\tau}\sum_{n=1}^N\mathbf{H}_{t,n}\mathbf{H}^\mathrm{H}_{t,n}. \label{eq:cov_measurements} \end{equation} It can be seen from \cref{eq:cov_measurements} that the covariance matrix depends on the selection of the averaging time \(\tau\) and the averaging bandwidth represented by \(N\). Based on the covariance matrix in \cref{eq:cov_measurements}, we next give details on the \ac{CMD} used to study the spatial correlation between two users. \subsection{Correlation Matrix Distance}\label{sec:cmd} The \ac{CMD} was proposed in \cite{GVL12} as a novel measure to track the changes in the spatial structure of non-stationary \ac{MIMO} channels. Results in \cite{KDJT18} have shown a strong correlation between the physical distance and the \ac{CMD}. Given the covariance matrices of two users ($\mathbf{R}_1$,$\mathbf{R}_2$), the similarity measure based on the \ac{CMD}, according to \cite{GVL12}, can be obtained by \begin{equation} \label{eq:cmd} d_\mathrm{CMD}\left(\mathbf{R}_1,\mathbf{R}_2\right) =\frac{\mathrm{Tr}(\mathbf{R}_1^\mathrm{H}\mathbf{R}_2)}{\|\mathbf{R}_1\|_\mathrm{F}\cdot\|\mathbf{R}_2\|_\mathrm{F}} \end{equation} where \(\mathrm{Tr}(\cdot)\) denotes the trace operator. The \ac{CMD}-based similarity measure is a normalized metric, upper-bounded by $1$ in the case of $\mathbf{R}_1$ and $\mathbf{R}_2$ being collinear and lower-bounded by $0$ in the case of $\mathbf{R}_1$ and $\mathbf{R}_2$ being orthogonal.
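For concreteness, both \cref{eq:cov_measurements} and \cref{eq:cmd} translate directly into a few lines of code. The following Python fragment is an illustrative sketch only; the array layout is an assumption and not the processing chain used for the measurements.
\begin{verbatim}
import numpy as np

def covariance_from_measurements(H):
    # H: complex array of shape (tau, N, n_r, n_t), snapshots H_{t,n}
    # R = 1/(tau*N) * sum_{t,n} H_{t,n} H_{t,n}^H
    tau, N, n_r, _ = H.shape
    R = np.zeros((n_r, n_r), dtype=complex)
    for t in range(tau):
        for n in range(N):
            R += H[t, n] @ H[t, n].conj().T
    return R / (tau * N)

def cmd_similarity(R1, R2):
    # d_CMD = Tr(R1^H R2) / (||R1||_F * ||R2||_F); close to 1 for
    # collinear covariance matrices, close to 0 for orthogonal ones.
    num = np.trace(R1.conj().T @ R2).real
    return num / (np.linalg.norm(R1) * np.linalg.norm(R2))
\end{verbatim}
Applied to covariance matrices estimated at two measurement positions, \texttt{cmd\_similarity(R1, R2)} reproduces the quantity plotted over distance in \cref{fig:LoS_radial}--\cref{fig:Uncorrelated}.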
\begin{abstract} A number of current approaches to quantum and neuromorphic computing use superconductors as the basis of their platform or as a measurement component, and will need to operate at cryogenic temperatures. Semiconductor systems are typically proposed as a top-level control in these architectures, with low-temperature passive components and intermediary superconducting electronics acting as the direct interface to the lowest-temperature stages. The architectures, therefore, require a low-power superconductor--semiconductor interface, which is not currently available. Here we report a superconducting switch that is capable of translating low-voltage superconducting inputs directly into semiconductor-compatible (above 1,000 mV) outputs at kelvin-scale temperatures (1\,K or 4\,K). To illustrate the capabilities in interfacing superconductors and semiconductors, we use it to drive a light-emitting diode (LED) in a photonic integrated circuit, generating photons at 1\,K from a low-voltage input and detecting them with an on-chip superconducting single-photon detector. We also characterize our device's timing response (less than 300 ps turn-on, 15 ns turn-off), output impedance (greater than 1\,M\Ohm), and energy requirements (0.18\,\fjum, 3.24\,mV/nW). \end{abstract} At present, a number of quantum and neuromorphic computing architectures plan to operate at cryogenic temperatures, using superconductors as the basis of their platform~\cite{Zhang2018,King2018} or as a measurement component~\cite{Wang2018,Shainline2017,Slichter2017,Bonneau2016}. In these architectures, semiconductor systems are often proposed as a top-level control with low-temperature passive components and intermediary superconducting electronics acting as the direct interface to the lowest-temperature stages~\cite{McDermott2018}--this stratification is required because semiconductor-based amplification of small superconducting signals consumes too much power for extensive use at kelvin-scale temperatures~\cite{Patra2018,Ortlepp2013,Homulle2017}. As a result, the architectures require a low-power superconductor--semiconductor interface to, for example, leverage complementary metal--oxide--semiconductor (CMOS) coprocessors for classical control of superconducting qubits~\cite{Reilly2015}, or as a means to drive optoelectronics from superconducting detectors. However, the ability to interface superconductors with semiconductors is a missing component in these advanced computing ecosystems. The primary issue with interfacing superconductor electronics with semiconductor electronics is one of bandgap and impedance mismatch. The average superconductor has a bandgap almost a thousand times smaller than that of a semiconductor (e.g. 2.8 meV for Nb versus 1,100 meV for Si). Similarly, the impedances of these systems differ greatly: a typical transistor element has an effective input impedance in the 10$^4$--10$^9$\,\Ohm range, whereas a typical superconducting logic element will have an output impedance in the 0--10$^1$\,\Ohm range. Due to these mismatches, it is extremely difficult to drive the high-impedance inputs of a semiconductor element to \about1,000~mV using \about1~mV superconductor outputs. At present, there are only two known ways to generate 1,000 mV directly from a superconducting output: connect many few-millivolt devices (such as Josephson junctions) in series\cite{Benz2015}, or allow a superconducting nanowire to latch\cite{McCaughan2014}.
The most successful previous attempts at creating a superconductor-to-semiconductor interface consist of a superconducting preamplifier stage combined with a semiconductor amplifier stage\cite{Ortlepp2013,Feng2003,VanDuzer1990}. This approach is effective at translating signal levels, but is power-constrained. In particular, using semiconductor transistors in an amplifier configuration necessarily draws significant static power (\about1~mW each), which limits scalability on a cryogenic stage. In related work, a CMOS-latch input was used after the preamplifier to limit static power\cite{Wei2011}, but this introduced the need for per-channel threshold calibration. Alternatively, it has been shown that a $>$1~V output can be created from a nanowire device such as the nanocryotron\cite{McCaughan2014}, but using the nanocryotron as a means for semiconductor-logic interfacing has drawbacks: creation of the high-impedance state is a relatively slow hotspot-growth process along the length of the nanowire (0.25\,nm/ps in NbN\cite{Berggren2018}); it is hysteretic and not able to self-reset without external circuitry; and output-input feedback is a concern, as the input and output terminals are galvanically connected\cite{Zhao2017}. In this Article, we report a monolithic switch device that can translate low-voltage superconducting inputs directly into semiconductor-compatible ($>$1,000 mV) outputs. The switch combines a low-impedance resistor input (1--50\,\Ohm) with a high-impedance ($>$1~M\Ohm) superconducting nanowire-meander switch element. The input element and switching element are isolated galvanically but coupled thermally by a thin dielectric spacer (25~nm \sioo). When input current is applied to the resistor, the state of the entire nanowire-meander is switched from superconducting to normal. The input induces an extremely large impedance change in the output: from 0\,\Ohm to $>$1\,M\Ohm (\reffig{overview}). The power cost of inducing this change is surprisingly small when compared to existing methods, and crucially it can be operated in a non-hysteretic (that is, self-resetting) regime. As a demonstration of a superconductor-semiconductor interface, we have used the switch to drive an LED in a photonic integrated circuit, generating photons at 1~K from a low-voltage input and detecting them with an on-chip superconducting single-photon detector. \begin{figure} \centering \includegraphics[width=6.5in]{Fig_1.pdf} \caption{High-impedance superconducting switch overview. (a) Scanning electron micrograph of one device; (inset) closeup of the nanowire meander. (b) Schematic illustration of the device, showing the three primary layers (resistor, dielectric, and nanowire) as well as contact pad geometry. (c) Resistance data versus input power for several devices and circuit schematic for resistance measurement. Maximum resistance is proportional to device area, with devices 1--4 having areas 44, 68, 92, and 116~\textmu{}m$^2$. (d) I-V curve of one device for three different input powers.} \label{overview} \end{figure} \section*{High-impedance superconducting switch} Our device consists of a 3-layer stack (\reffig{overview}b). On the top of the stack is a resistor made from a thin film of normal metal with a small resistance (1--50\,\Ohm). On the bottom of the stack is a meandered nanowire patterned from a superconducting thin film. The nanowire layer acts as a high-impedance, phonon-sensitive switch, while the resistor layer is used to convert electrical energy into Cooper-pair-breaking phonons.
As in related low-impedance thermal devices\cite{Lee2003,Zhao2018}, between these two layers is a dielectric thermal spacer that serves two purposes: thermally coupling the resistor layer to the nanowire layer, and electrically disconnecting the input (resistor) from the output (nanowire switch). The device has four terminals total, with two of the terminals connected to the resistor and two of the terminals connected to the nanowire (\reffig{overview}). Fabrication details are available in the Methods section. The device begins in the ``off'' state where there is no electrical input to the resistor and the nanowire has a small current-bias. To transition to the ``on'' state, a voltage or current is applied to the input terminals of the resistor and thermal phonons are generated. The thin dielectric carries phonons generated from the resistor to the nanowire. Phonons with energy $>$2$\Delta$ break Cooper pairs within the nanowire, destroying the superconducting state of the nanowire. Once the superconducting state has been completely destroyed in the entire nanowire, the device is in the ``on'' state. Analogously, this process can be described in terms of an effective temperature: the dielectric layer is thin enough that the phonon systems between the nanowire and the resistor are tightly coupled, meaning the phonon temperature in the nanowire is closely tied to the temperature of the resistor. When enough electrical power is delivered to the resistor, the nanowire is driven above its critical temperature and becomes normal, jumping from 0\,\Ohm to $>$1\,M\Ohm. Once the device has switched, current is then driven into the high-impedance output load and a large voltage can be generated. When characterizing a switch, of primary importance are its on- and off-state resistances. We measured the steady-state behavior of the switch by applying power to the resistor inputs of several devices and measuring the nanowire resistance with an AC resistance bridge. As can be seen in \reffig{overview}, each device remained at zero resistance until a critical surface power \Pc was reached. When more than \Pc was applied to the resistor, the resistance of the underlying nanowire increased rapidly, ultimately saturating at the normal-state resistance of the device. One potential concern was that phonons from the resistor could escape in-plane (e.g. out through the thick gold leads), resulting in wasted power. This type of edge power loss would scale with the length of the edge, and so we measured \Pc for devices of several sizes. However, by dividing each device's \Pc by its active area $A$, we found that the devices had critical surface power densities $\Dc = \Pc/A$ of 21.0\,$\pm$\,0.6\,\nwum. This means power loss through edge effects (e.g. along the substrate plane or into the gold contacts) did not play a role at the scale of the devices measured here. Additionally, through thermal modeling, we found that worst-case thermal crosstalk between adjacent devices would be negligible if the devices were separated by a few micrometers (further details on lateral heat transport and crosstalk are available in the Supplementary Information). As shown in \reffig{overview}c, the resistance of each device continues to increase beyond $\Pc$ -- this was likely due to non-uniform dissipation within the resistor element creating local temperature variation.
Note that performing this measurement with a low-power measurement technique (such as an AC resistance bridge) was critical in order to limit Joule heating from current passing through the nanowire element. In this experiment, we applied a maximum of 10 nA (\about100 pW) in order to guarantee that the nanowire was not heated by the measurement process. The 4.5-nm-thick tungsten silicide (WSi) film used for the nanowires had a \Tc of 3.4~K, and all measurements were taken at a base temperature of 0.86~K. \section*{Driving a cryogenic LED} As a means of demonstrating the superconductor-semiconductor interface, we used the switch to dynamically enable a cryogenic LED in a photonic integrated circuit (PIC) using only a low-level input voltage. Shown in \reffig{led}, the output from the switch was wirebonded to a PIC that had an LED which was waveguide-coupled to a superconducting nanowire single-photon detector (SNSPD). The switch translated the 50~mV input signal (\reffig{led}b) into 1.12~V at the output, enabling and disabling the LED in a free-running mode. Photons produced by the LED were coupled via waveguide into the detector, producing clicks on the detector output (\reffig{led}c). The switch was driven with 94.0~\uW of input power (55.9~\nwum, well above \Dc), generating an on-state resistance of approximately 400~k\Ohm. We note these particular LEDs had a low overall efficiency (\about$10^{-6}$ as characterized in \refcite{Buckley2017}), and so a large LED input power was necessary to generate a handful of photons per period. The large LED power requirement necessitated wide nanowires (1~\micron wide for this experiment) to carry the requisite current, and so the device area and input power scaled proportionally. Steady-state behavior of the circuit is shown in \reffig{led}d, which characterizes the detector response with the switch input power above and below the \Dc threshold. We additionally verified that the counts measured on the detector were in fact photons generated by the LED--not false counts due to sample heating or other spurious effects--by reducing the LED bias below threshold and observing no clicks on the detector output, and also by quadrupling the switching input power and observing no heating-induced change in count rate. \begin{figure}[H] \centering \includegraphics[width=6.5in]{Fig_2.pdf} \caption{Driving a photonic integrated circuit at 1~K. (a) Schematic and circuit setup for powering a cryogenic LED with the switch and reading out the generated light using a waveguide-coupled single-photon detector. (b,c) Switch input and detector output versus time. When $v_{in}$ is high, photons generated by the LED are transmitted via waveguide to a superconducting nanowire single-photon detector, producing detection pulses. (inset) Zoom-in of the detector output pulses. (d) Detector count rates for the experiment with the LED on (red) and off (blue).} \label{led} \end{figure} \section*{Transient and sub-threshold response} We also characterized the transient properties of the device when driving high-impedance loads by placing an 8.7~k\Ohm on-chip resistor at the output of the device. In this experiment, we applied voltage pulses to the resistor input with a pulse generator and measured the device output, while applying a current bias to the nanowire either below the retrapping current (\reffig{step}, red data), or near the critical current (\reffig{step}, cyan data).
As seen in the circuit diagram of \reffig{step}, an output voltage could only be generated when the switch reached a significant resistance ($\gg$1k\Ohm), allowing us to probe the impedance transition of the device. The results from this experiment, shown in \reffig{step}, showed that the device could turn on from its low-impedance state to its high-impedance state in under 300~ps, characterized by a power-delay product on the order of 100~aJ per square micrometer of device area. \begin{figure}[H] \centering \includegraphics[]{Fig_3.pdf} \caption{Driving an 8.7 k\Ohm load using the switch. (a) Output produced by a 10-ns-wide square pulse to the heater with input surface power density $D = 50$~\nwum, highlighting the latching and non-latching regimes. Trace data were taken with 1~GHz bandwidth-limited amplifiers and an oscilloscope. (b) Turn-on delay \tauon versus applied input power when the nanowire is biased below the retrapping current (blue) and near \Ic (red). The solid lines are fits generated by the ballistic phonon transport modeling, and the dotted lines are constant-energy per unit area curves corresponding to 0.18~\fjum (blue) and 0.10~\fjum (red).} \label{step} \end{figure} Crucially, the impulse response also demonstrates that this device can self-reset. As highlighted in \reffig{step}a, when the nanowire is biased below the retrapping current, it becomes non-hysteretic and returns to the zero-voltage (superconducting) state after the input is turned off. The logic follows simply: below the retrapping current, the self-heating caused by the nanowire bias current does not generate enough power to keep the wire above \Tc. Thus, when the additional heating from the resistor is removed, the nanowire is forced back into the superconducting state--it does not get stuck in the on-state or ``latch.'' This non-latching property is in contrast to previously developed thin-film nanowire devices, and is critical to guarantee device reset in unclocked systems where the input bias to the devices is not periodically turned off. We note that even in this regime, there is still a thermal recovery time constant for the device to transition from the normal (on) to superconducting (off) state. We found that the fall time was on the order of 10\,ns, which is consistent with the thermal recovery time constants previously reported for WSi. Additionally, it should be noted that although the nanowire fabricated here has a very high kinetic inductance, the L/R time constant of the switch--which could potentially limit the rise time of the current output--is not a limiting factor. When the device transitions from low- to high-impedance, the expected \Lk/\Rs is equal to 0.39 ps. To better understand the response of the device below the critical surface power density \Dc, we measured the nanowire critical current as a function of power applied to the resistor. The result of this characterization is shown in \reffig{ic}. For each datapoint, we applied a fixed amount of electrical power to the input resistor and measured the critical current of the nanowire several hundred times, taking the median value as $I_c(P)$. We then extracted an effective temperature for the nanowire by numerically inverting the Ginzburg-Landau relation $I_c(t) = I_{c0} (1-t^2)^{3/2} (1+t^2)^{1/2}$ where $I_{c0}$ was the critical current at zero applied power and $t$ was the normalized temperature of the device $T/T_c$ (for this material, the $T_c$ was measured to be 3.4~K).
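The inversion itself is a one-dimensional root-finding problem. The following sketch is our own illustration of the procedure (assuming SciPy is available; the only material value taken from the text is $T_c=3.4$~K):

\begin{verbatim}
from scipy.optimize import brentq

def effective_temperature(Ic, Ic0, Tc=3.4):
    # Invert Ic(t) = Ic0 (1 - t^2)^(3/2) (1 + t^2)^(1/2) for t = T/Tc.
    # Ic(t) decreases monotonically on [0, 1], so a bracketed root
    # finder suffices (requires 0 < Ic <= Ic0).
    f = lambda t: Ic0 * (1 - t**2)**1.5 * (1 + t**2)**0.5 - Ic
    return brentq(f, 0.0, 1.0) * Tc
\end{verbatim}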
Note that the non-uniform heating causes the critical current to reach zero at 8\,\nwum--well before the jump in resistivity shown in \reffig{overview}c at \Dc (21\,\nwum)--because localized heat can suppress \Ic but the resistivity measurement accounts for the state of every part of the nanowire. The data in \reffig{ic} show that there is a non-linear relationship between the nanowire temperature and the applied heating power. We note that these nanowires are likely constricted due to current crowding\cite{Clem2011a} at the bends, and as a result may obfuscate changes in \Ic at low powers in \reffig{ic}. \begin{figure} \centering \includegraphics[]{Fig_4.pdf} \caption{Critical current and inferred temperature versus input power density. As the input power density is increased, the effective temperature of the meander rises, and its critical current is reduced. The small discontinuity is an experimental artifact from measuring very small critical current values.} \label{ic} \end{figure} \section*{Thermal transport modeling} A complete model of the dynamics of the device requires an investigation of the nonequilibrium dynamics of the electron and phonon systems of the heater, dielectric spacer, and nanowires. While a full description is beyond the scope of this paper, we find that an approximation using ballistic phonon transport is sufficient for describing the main experimental results. Given that the bulk mean-free-path of phonons in \(\text{SiO}_2\) is on the order of \SI{1}{\micro\meter} at \SI{2.5}{\kelvin}, we assume that phonons escaping from the heater travel through the dielectric without scattering and either interact with the nanowire or continue unimpeded to the substrate\cite{Allmaras2018}. Within this model and under the simplifying assumption of equilibrated electron and phonon systems in the nanowire at temperature \(T_{WSi}\), the energy balance equation of the nanowire is given by \begin{equation} \left(C_{e}\left(T_{WSi}\right) + C_{ph}\left(T_{WSi}\right)\right)d\:\fillfactor \frac{\partial T_{WSi}}{\partial t} = \fillfactor \chi_{abs} P_{h} - \Sigma \left({T_{WSi}}^4 - {T_{sub}}^4\right) \end{equation} where \(C_e\left(T_{WSi}\right)\) is the BCS electron heat capacity, \(C_{ph}\left(T_{WSi}\right)\) is the lattice heat capacity, \(d\) is the nanowire thickness, \fillfactor is the nanowire fill factor, \(\chi_{abs}\) is the fraction of energy incident on the nanowire which is absorbed, \(\Sigma\) describes the magnitude of phonon energy flux from the nanowire to the substrate per unit area, and \(P_{h}\) is the power dissipated by the heater per unit area. For a WSi device with \(d =\) \SI{4.5}{\nano\meter}, \fillfactor = 0.5, sheet resistance \(\rho_{sq} = \) \SI{590}{\ohm/sq}, \(T_c = \) \SI{3.4}{\kelvin}, and using values from the literature \cite{Sidorova2018,Marsili2016} for the diffusion coefficient \(D = \) \SI{0.74}{\centi\meter^2/\second} and specific heat ratio \(C_e\left(T_c\right)/C_{ph}\left(T_c\right) \sim 1\), the parameters \(\chi_{abs}\) and \(\Sigma\) are chosen to fit the turn-on delay versus dissipated power results of~\reffig{step}b. The calculated curves show the turn-on delay for temperature thresholds of \SI{2.5}{\kelvin} and \SI{3}{\kelvin}, where temperature threshold refers to the minimum temperature required at a given bias current to switch the device. The two remaining free parameters were fitted to be \(\chi_{abs} = 0.02\), and \(\Sigma = \) \SI{0.7}{\watt/\meter^2 \kelvin^4}.
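To make the model concrete, a minimal forward-Euler integration of this energy balance is sketched below. It is our own illustration, not the fitting code used for \reffig{step}b: the fitted values of $\chi_{abs}$ and $\Sigma$ quoted above are reused, but the heat-capacity model (a $T$-linear electron term plus a $T^3$ lattice term, equal at $T_c$ as stated in the text) and its prefactor are simplifying assumptions that would need to be replaced with material values.

\begin{verbatim}
# Lumped-element sketch of the nanowire energy balance (assumptions above).
d        = 4.5e-9     # nanowire thickness (m)
fill     = 0.5        # fill factor
chi_abs  = 0.02       # absorbed phonon fraction (fitted value from the text)
Sigma    = 0.7        # W m^-2 K^-4 (fitted value from the text)
Tc, Tsub = 3.4, 0.86  # K
c0       = 1.0        # J m^-3 K^-1 heat-capacity scale -- placeholder

def heat_capacity(T):
    # C_e ~ T and C_ph ~ T^3, equal at Tc; crude approximation.
    return c0 * (T / Tc) + c0 * (T / Tc)**3

def turn_on_delay(P_h, T_on, dt=1e-12, t_max=1e-7):
    # Integrate until the wire reaches the threshold temperature T_on.
    T, t = Tsub, 0.0
    while T < T_on and t < t_max:
        dTdt = (fill * chi_abs * P_h - Sigma * (T**4 - Tsub**4)) \
               / (heat_capacity(T) * d * fill)
        T += dTdt * dt
        t += dt
    return t
\end{verbatim}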
This value of \(\Sigma\) corresponds to nonbolometric phonon bottle-necking at the \(\text{WSi/SiO}_2\) interface with a conversion time from the non-escaping to the escaping group of phonons of over \SI{1}{\nano\second}, which is consistent with experiment \cite{Sidorova2018}. For comparison, the estimation of the parameter \(\chi_{abs}\) based on the solution of ballistic phonon transport in the nanowire is provided in the Supplementary Information. While the basic principle of operation is applicable to materials with higher critical temperatures, the details of the heat transfer between heater and superconductor would change. The heat capacity of materials increases with temperature, so more energy will be required to heat the superconductor to its current-dependent critical temperature. With a larger superconducting gap \(\Delta\), higher-energy phonons are required to break Cooper pairs. At the same time, higher temperature operation means that phonons from the heater will have higher energies, and shorter mean-free-paths in the dielectric, which will lead to additional scattering and absorption. It is currently unclear if this will make heat transfer less efficient due to the additional scattering, or more efficient by keeping energy trapped in the local area of the nanowire. A useful figure of merit for these devices is \VP, the output voltage generated per unit input power while the switch is on. Due to proportionalities between the device resistance and area, and also between nanowire width and \Ic, this figure of merit is area- and shape-independent--it depends only on the materials used and the nanowire configuration. We calculated \VP by first noting that the power required to heat the full area $A$ of a given device was $\Dc A$. For a device with nanowires of width $w$, thickness $t$, and fill-factor $f$, the amount of switching resistance generated in that area is $R_s f A/w^2$, where $R_s$ is the nanowire normal-state sheet resistance. For the WSi material used here, the bias current density $J$ was 1.6\e{9}~A/m$^2$ (non-latching) or 7.2\e{9}~A/m$^2$ (latching), and $R_s$ was 590~\Ohm/sq. The resulting voltage generated per unit power is then $R_s f J/(w^2 \Dc)$ (thus, independent of device area), and so for the devices characterized in \reffig{overview}, \VP was 0.72~mV/nW (non-latching) or 3.24~mV/nW (latching). One potential area of concern when using this device as a switch is the power usage from current-biasing the device. Typically, a current bias capable of driving high voltages requires a large amount of static energy dissipation: when using a resistor- or MOSFET-based current source as a current bias, achieving a maximum voltage of \Vmax will generally require $\Ibias\Vmax$ of static power. However, for these devices a better approach is to use inductive biasing to generate the high-impedance current bias. Superconducting thin films such as the one used here lend themselves particularly well to the generation of large inductances in compact areas, due to their large kinetic inductance. For instance, to generate a 200~mV swing on a CMOS input capacitance of 5~fF requires a bias current of 50~\uA being carried by a 160~nH inductor -- a trivial amount of inductance to generate with a superconducting nanowire. More importantly, the inductor can be charged only when needed, using a low-voltage superconducting element such as a Josephson junction.
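The quoted example follows from a lossless energy-transfer estimate: a bias current $I$ stored in an inductance $L$ can swing a capacitance $C$ to at most $V_{\max} \approx I\sqrt{L/C}$. The lines below are our own back-of-the-envelope check of the numbers, not a calculation from the paper:

\begin{verbatim}
import math

def peak_swing(I, L, C):
    # Lossless LC transfer: (1/2) L I^2 -> (1/2) C V^2, so V = I sqrt(L/C).
    return I * math.sqrt(L / C)

def inductance_for_swing(V, I, C):
    # Inductance needed for a target voltage swing V at bias current I.
    return C * (V / I)**2

print(peak_swing(50e-6, 160e-9, 5e-15))         # ~0.28 V >= 200 mV target
print(inductance_for_swing(0.2, 50e-6, 5e-15))  # 80 nH minimum
\end{verbatim}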
\section*{Conclusions} Our superconducting thermal switch has a number of favourable features as a communications device between superconducting and semiconducting elements: it provides switch impedances of more than 1~M\Ohm, input--output isolation, a short turn-on time, and low-power operation with zero passive power required. Even with the reset-time limitations presented by thermal recovery, this switch is particularly applicable for driving optoelectronics, such as LEDs or modulators, on a cryogenic stage. In applications such as quantum photonic feed-forward experiments or low-power neuromorphic hardware\cite{Shainline2018e}, clicks from efficient superconducting detectors need to be converted to optoelectronic-compatible signals, and a nanosecond-scale thermal recovery is acceptable as long as the initial response is fast. We note that for these types of applications, it may be best to configure the device geometry to minimize propagation delay -- the propagation delay for powering an LED to 1~V will be much smaller if 1~mA is driven across a switch of 1~k\Ohm, rather than driving 1~\uA across 1~M\Ohm. It may also be helpful to drive the device with another superconducting three-terminal device\cite{McCaughan2016a}, as this can provide a purely non-resistive superconducting input and be fabricated in the same step as the nanowire meander. Looking forward, there are a number of practical methods to enhance the operation of this device, depending on what tradeoffs are acceptable in a given application. The simplest would be to use multiple layers of nanowire: the on-resistance could, for example, be effectively doubled by adding an additional nanowire meander underneath the first (at some minor turn-on energy cost). If power usage is a concern, the on-state power requirements could be decreased by placing the device on a membrane. This would greatly increase the thermal resistance, reducing overall energy cost at the cost of increasing the thermal turn-off time. For higher operational frequencies, a different nanowire material could be used (for example, NbN for a \about1~ns thermal reset time~\cite{Kerman2009}). Finally, this device does not fundamentally need to be a thermal device: any method of inducing a phase change in a superconducting film -- for instance, using an electric-field induced superconductor-to-insulator transition~\cite{Ueno2008} -- could be operated equivalently. \section*{Methods} \subsection*{Fabrication details} Fabrication began with a clean thermal oxide wafer (150~nm \sioo on Si). WSi was sputtered uniformly over the entire wafer to a thickness of 4.5~nm, and afterwards--but before breaking vacuum--a thin capping layer (1-2~nm) of amorphous Si was also sputtered on top. (WSi was chosen primarily for its high practical fabrication yield in our lab--other highly-resistive thin-film superconductors should work equivalently, although with potential power/thermal tradeoffs discussed in the thermal modeling section.) Next, contact pads for the superconducting layer were patterned using a liftoff process and deposited by evaporating 5~nm~Ti / 100~nm~Au / 5~nm~Ti. We then patterned and etched the WSi layer to form the nanowires. Afterwards, we sputtered the whole wafer with 25~nm of \sioo. \sioo was chosen because of its compatibility with WSi -- in past experiments we had found that \sioo deposition did not negatively impact the superconducting parameters of the WSi layer (\Tc, \Ic, etc.).
Using a liftoff process, we then fabricated the resistor layer by evaporating 15~nm of PdAu. Lastly, the low-resistance contact pads were deposited using another liftoff process, evaporating 5~nm Ti followed by 100~nm of Au. \begin{addendum} \item[Acknowledgements] The authors would like to thank Florent Lecocq for helpful discussions, and Adriana Lita for insight into the fabrication development. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. Part of this research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. J.P.A. was supported by a NASA Space Technology Research Fellowship. Support for this work was provided in part by the DARPA Defense Sciences Offices, through the DETECT program. \item[Author contributions] A.N.M., V.V., S.M.B., and J.M.S. conceived and designed the experiments. A.N.M. performed the experiments. J.P.A. and A.G.K. analyzed and modeled the thermal properties of the device. A.N.M. and V.V. fabricated the devices. A.N.M., A.T., and S.W.N. analyzed the data. \item[Competing Interests] The authors declare a competing interest: U.S. Patent US10236433B1. \item[Data availability] The data that support the findings of this study are available within the paper. Additional data are available from the corresponding authors upon reasonable request. \end{addendum} \newpage \bibliographystyle{naturemag}
\section{Introduction} We consider the classical matching problem in an arbitrary unweighted bipartite graph $G(A \cup B, E)$ with $|A|=|B|=n$ and $E\subseteq A\times B$. A \emph{matching} $M\subseteq E$ is a set of vertex-disjoint edges. We refer to a largest cardinality matching $M$ in $G$ as a \emph{maximum matching}. A maximum matching is \emph{perfect} if $|M|=n$. Now suppose the graph is weighted and every edge $(a,b)\in E$ has a \emph{weight} specified by ${\sf c}(a,b)$. The weight of any subset of edges $E'\subseteq E$ is given by $\sum_{(a,b)\in E'}{\sf c}(a,b)$. A \emph{minimum-weight maximum matching} is a maximum matching with the smallest weight. In this paper, we present an algorithm to compute a maximum matching faster by carefully assigning weights of $0$ and $1$ to the edges of $G$. \subparagraph*{Maximum matching in graphs:} In an arbitrary bipartite graph with $n$ vertices and $m$ edges, Ford and Fulkerson's algorithm~\cite{ford_fulkerson} iteratively computes augmenting paths, each in $\mathcal{O}(m)$ time, leading to a maximum cardinality matching in $\mathcal{O}(mn)$ time. Hopcroft and Karp's algorithm (HK-Algorithm)~\cite{hk_sicomp73} reduces the number of phases from $n$ to $\mathcal{O}(\sqrt{n})$ by computing a maximal set of vertex-disjoint shortest augmenting paths in each phase. A single phase can be implemented in $\mathcal{O}(m)$ time, leading to an overall execution time of $\mathcal{O}(m\sqrt{n})$. In weighted bipartite graphs with $n$ vertices and $m$ edges, the well-known Hungarian method computes a minimum-weight maximum matching in $\mathcal{O}(mn)$ time~\cite{hungarian_56}. Gabow and Tarjan designed a weight-scaling algorithm (GT-Algorithm) to compute a minimum-weight perfect matching in $\mathcal{O}(m\sqrt{n}\log (nC))$ time, provided all edge weights are integers bounded by $C$~\cite{gt_sjc89}. Their method, like the Hopcroft-Karp algorithm, computes in each phase a maximal set of vertex-disjoint augmenting paths that are shortest with respect to an appropriately defined augmenting path cost. For the maximum matching problem in arbitrary graphs (not necessarily bipartite), a weighted approach has been applied to achieve a simple $\mathcal{O}(m\sqrt{n})$ time algorithm~\cite{gabow2017weighted}. Recently, Lahn and Raghvendra~\cite{lr_soda19} gave $\tilde{\mathcal{O}}(n^{6/5})$ and $\tilde{\mathcal{O}}(n^{7/5})$ time algorithms for finding a minimum-weight perfect bipartite matching in planar and $K_h$-minor\footnote{They assume $h=O(1)$.} free graphs respectively, overcoming the $\Omega(m\sqrt{n})$ barrier; see also Asathulla~\textit{et al}.~\cite{soda-18}. Both these algorithms are based on the existence of an $r$-clustering which, for a parameter $r >0$, is a partitioning of $G$ into edge-disjoint clusters $\{\mathcal{R}_1,\ldots,\mathcal{R}_k\}$ such that $k = \tilde{\mathcal{O}}(n/\sqrt{r})$, every cluster $\mathcal{R}_j$ has $\mathcal{O}(r)$ vertices, and each cluster has $\tilde{\mathcal{O}}(\sqrt{r})$ \emph{boundary} vertices. A boundary vertex has edges from two or more clusters incident on it. Furthermore, the total number of boundary vertices, counted with multiplicity, is $\tilde{\mathcal{O}}(n/\sqrt{r})$. The algorithm of Lahn and Raghvendra extends to any graph that admits an $r$-clustering. There are also algebraic approaches for the design of fast algorithms for bipartite matching; see for instance~\cite{madry2013navigating,fast_matrix_matching}.
\subparagraph*{Matching in geometric settings:} In geometric settings, $A$ and $B$ are points in a fixed $d$-dimensional space and $G$ is a complete bipartite graph on $A$ and $B$. For a fixed integer $p\ge 1$, the weight of an edge between $a \in A$ and $b\in B$ is $\|a-b\|^p$, where $\|a-b\|$ denotes the Euclidean distance between $a$ and $b$. The weight of a matching $M$ is given by $\bigl(\sum_{(a,b)\in M} \|a-b\|^p\bigr)^{1/p}$. For any fixed $p \ge 1$, we wish to compute a perfect matching with the minimum weight. When $p=1$, the problem is the well-studied \emph{Euclidean bipartite matching} problem. A minimum-weight perfect matching for $p = \infty$ will minimize the largest-weight edge in the matching and is referred to as a \emph{bottleneck matching}. The Euclidean bipartite matching in a plane can be computed in $\tilde{\mathcal{O}}(n^{3/2+\delta})$ time~\cite{s_socg13} for an arbitrarily small $\delta>0$; see also Sharathkumar and Agarwal~\cite{sa_soda12}. Efrat~\textit{et al}.\ present an algorithm to compute a bottleneck matching in the plane in $\tilde{\mathcal{O}}(n^{3/2})$ time~\cite{efrat_algo}. Both these algorithms use geometric data structures in a non-trivial fashion to speed up classical graph algorithms. When $p=1$, for any $0<\varepsilon\le 1$, there is an $\varepsilon$-approximation algorithm for the Euclidean bipartite matching problem that runs in $\tilde{\mathcal{O}}(n/\varepsilon^d)$ time~\cite{sa_stoc12}. However, for $p >1$, all known $\varepsilon$-approximation algorithms take $\Omega(n^{3/2}/\varepsilon^d)$ time. We note that it is possible to find a $\Theta(1)$-approximate bottleneck matching in $2$-dimensional space by reducing the problem to finding maximum flow in a planar graph and then finding the flow using an $\tilde{\mathcal{O}}(n)$ time max-flow algorithm~\cite{multiple_planar_maxflow}. There are numerous other results; see also~\cite{argawal_transportation_17,sa_stoc_14,argawal_phillips_rms}. Designing exact and approximation algorithms that break the $\Omega(n^{3/2})$ barrier remains an important research challenge in computational geometry. \subparagraph*{Our results:} We present a weighted approach to compute a maximum cardinality matching in an arbitrary bipartite graph. Our main result is a new matching algorithm that takes as input a weighted bipartite graph $G(A\cup B, E)$ with every edge having a weight of $0$ or $1$. Let $w \leq n$ be an upper bound on the weight of any matching in $G$. Consider the subgraph induced by all the edges of $G$ with a weight of $0$. Let $\{K_1,K_2,\ldots, K_l\}$ be the connected components in this subgraph and let, for any $1\le i \le l$, $V_i$ and $E_i$ be the vertices and edges of $K_i$. We refer to each connected component $K_i$ as a \emph{piece}. Suppose $|V_i|=\mathcal{O}(r)$ and $|E_i|=\mathcal{O}(mr/n)$. Given $G$, we present an algorithm to compute a maximum matching in $G$ in $\BigOT( m(\sqrt{w}+ \sqrt{r}+\frac{wr}{n}))$ time. Consider any graph in which the removal of a sub-linear number of ``separator'' vertices partitions the graph into connected components with $\mathcal{O}(r)$ vertices and $\mathcal{O}(mr/n)$ edges. We can apply our algorithm to any such graph by simply setting the weight of every edge incident on any separator vertex to $1$ and the weights of all other edges to $0$. When all the edge weights are $1$ or all edge weights are $0$, our algorithm is identical to the HK-Algorithm and runs in $\mathcal{O}(m\sqrt{n})$ time.
However, if we can carefully assign weights of $0$ and $1$ on the edges such that both $w$ and $r$ are sub-linear in $n$ and, for some constant $\gamma < 3/2$, $wr=\mathcal{O}(n^{\gamma})$, then we can compute a maximum matching in $G$ in $o(m\sqrt{n})$ time. Using our algorithm, we obtain the following result for bottleneck matching: \begin{itemize} \item Given two point sets $A, B \subset \mathbb{R}^2$ and an $\varepsilon$ with $0< \varepsilon \le 1$, we reduce the problem of computing an $\varepsilon$-approximate bottleneck matching to computing a maximum cardinality matching in a subgraph $\mathcal{G}$ of the complete bipartite graph on $A$ and $B$. We can, in $\mathcal{O}(n)$ time, assign $0/1$ weights to the $\mathcal{O}(n^2)$ edges of $\mathcal{G}$ so that any matching has a weight of $\mathcal{O}(n^{2/3})$. Despite possibly $\Theta(n^2)$ edges in $\mathcal{G}$, we present an efficient implementation of our graph algorithm with $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^4)$ execution time that computes an $\varepsilon$-approximate bottleneck matching for $d=2$; all previously known algorithms take $\Omega(n^{3/2})$ time. Our algorithm, for any fixed $d \ge 2$ dimensional space, computes an $\varepsilon$-approximate bottleneck matching in $\frac{1}{\varepsilon^{\mathcal{O}(d)}}n^{1+\frac{d-1}{2d-1}}\mathrm{poly}\log n$ time. (See Section~\ref{sec:bottleneck}). \end{itemize} The algorithm of Lahn and Raghvendra~\cite{lr_soda19} for $K_h$-minor free graphs requires the clusters to have a small number of boundary vertices, which is used to create a compact representation of the residual network. This compact representation becomes prohibitively large as the number of boundary vertices increases. For instance, their algorithm has an execution time of $\Omega(m\sqrt{n})$ for the case where $G$ has a balanced vertex separator of size $\Theta(n^{2/3})$. Our algorithm, on the other hand, extends to any graph with a sub-linear vertex separator. Given any graph $G(A \cup B,E)$ that has an easily computable balanced vertex separator for every subgraph $G'(V',E')$ of size $|V'|^{\delta}$, for $\delta\in [1/2,1)$, there is a $0/1$ weight assignment on the edges of the graph so that the weight of any matching is $\mathcal{O}(n^{\frac{2\delta}{1+\delta}})$ and $r= \mathcal{O}(n^{\frac{1}{1+\delta}})$. This assignment can be obtained by simply recursively sub-dividing the graph using balanced separators until each piece has $\mathcal{O}(r)$ vertices and $\mathcal{O}(mr/n)$ edges. All edges incident on the separator vertices are then assigned a weight of $1$ and all other edges are assigned a weight of $0$. As a result, we obtain an algorithm that computes the maximum cardinality matching in $\tilde{\mathcal{O}}(mn^{\frac{\delta}{1+\delta}})$ time. \subparagraph*{Our approach:} Initially, we compute, in $\mathcal{O}(m\sqrt{r})$ time, a maximum matching within all pieces. Similar to the GT-Algorithm, the rest of our algorithm is based on a primal-dual method and executes in phases. Each phase consists of two stages. The first stage conducts a Hungarian search and finds at least one augmenting path containing only zero slack (with respect to the dual constraints) edges. Let the admissible graph be the subgraph induced by the set of all zero slack edges. Unlike in the GT-Algorithm, the second stage of our algorithm computes augmenting paths in the admissible graph that are not necessarily vertex-disjoint. In the second stage, the algorithm iteratively initiates a DFS from every free vertex.
When a DFS finds an augmenting path $P$, the algorithm will augment the matching immediately and terminate this DFS. Let all pieces of the graph that contain the edges of $P$ be \emph{affected}. Unlike the GT-Algorithm, which deletes all edges visited by the DFS, our algorithm deletes only those edges that were visited by the DFS and did not belong to an affected piece. Consequently, we allow visited edges from an affected piece to be reused in another augmenting path. As a result, our algorithm computes several more augmenting paths per phase than the GT-Algorithm, leading to a reduction in the number of phases from $\mathcal{O}(\sqrt{n})$ to $\mathcal{O}(\sqrt{w})$. Note, however, that the edges of an affected piece may now be visited multiple times by different DFS searches within the same phase. This increases the cumulative time taken by all the DFS searches in the second stage. Nevertheless, we are able to bound the total number of affected pieces across all phases of the algorithm by $\mathcal{O}(w\log w)$. Since each piece has $\mathcal{O}(mr/n)$ edges, the total time spent revisiting these edges is bounded by $\mathcal{O}(mrw\log (w)/n)$. The total execution time can therefore be bounded by $\BigOT( m(\sqrt{w}+ \sqrt{r}+\frac{wr}{n}))$. \section{Preliminaries} We are given a bipartite graph $G(A\cup B, E)$, where any edge $(a,b) \in E$ has a weight ${\sf c}(a, b)$ of $0$ or $1$. Given a matching $M$, a vertex is \emph{free} if it is not matched in $M$. An \emph{alternating path} (resp. cycle) is a simple path (resp. cycle) that alternates between edges in $M$ and not in $M$. An \emph{augmenting path} is an alternating path that begins and ends at a free vertex. A matching $M$ and an assignment of dual weights $y(\cdot)$ on the vertices of $G$ is \emph{feasible} if for any $(a,b) \in A\times B$: \begin{align} y(b) - y(a) \le {\sf c}(a,b) \qquad \textnormal{if } (a, b)\not\in M,\label{eq:feas1}\\ y(a) - y(b) = {\sf c}(a,b) \qquad \textnormal{if } (a, b) \in M. \label{eq:feas2} \end{align} To assist in describing our algorithm, we first define a residual network and an augmented residual network with respect to a feasible matching $M, y(\cdot)$. A \emph{residual network} $G_M$ with respect to a feasible matching $M$ is a directed graph where every edge $(a, b)$ is directed from $b$ to $a$ if $(a, b) \not\in M$ and from $a$ to $b$ if $(a,b) \in M$. The weight $s(a,b)$ of any edge is given by the slack of this edge with respect to feasibility conditions~\eqref{eq:feas1} and~\eqref{eq:feas2}, i.e., $s(a,b)= {\sf c}(a,b)+y(a)-y(b)$ if $(a,b) \not\in M$, and $s(a,b)=0$ otherwise. An \emph{augmented residual network} is obtained by adding to the residual network an additional vertex $s$ and additional directed edges from $s$ to every vertex in $B_F$, each having a weight of $0$. We denote the augmented residual network as $G_M'$. \section{Our algorithm} \label{sec:graphmatch} Throughout this section, we will use $M$ to denote the current matching maintained by the algorithm and $A_F$ and $B_F$ to denote the vertices of $A$ and $B$ that are free with respect to $M$. Initially $M=\emptyset$, $A_F=A$, and $B_F=B$. Our algorithm consists of two steps. The first step, which we refer to as the \emph{preprocessing step}, executes the Hopcroft-Karp algorithm to compute a maximum matching within every piece; a sketch of this step appears below. Any maximum matching $M_{\textsc{Opt}}$ has at most $w$ edges with a weight of $1$ and the remaining edges have a weight of $0$. Therefore, $|M_{\textsc{Opt}}|-|M| \le w$.
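For concreteness, the preprocessing step can be rendered in a few lines. The sketch below is ours, assuming NetworkX conventions (nodes carry the standard \texttt{bipartite} attribute and edges carry a 0/1 \texttt{weight} attribute); it illustrates the structure of the step rather than the implementation analyzed here.

\begin{verbatim}
import networkx as nx

def preprocess(G):
    # Restrict to weight-0 edges; each connected component is a piece.
    zero = G.edge_subgraph(
        (u, v) for u, v, w in G.edges(data='weight') if w == 0)
    M = {}
    for piece in nx.connected_components(zero):
        K = zero.subgraph(piece)
        top = {v for v in K if K.nodes[v]['bipartite'] == 0}
        # Hopcroft-Karp within the piece: O(|E_i| sqrt(|V_i|)) time.
        M.update(nx.bipartite.hopcroft_karp_matching(K, top_nodes=top))
    return M  # maps each matched vertex to its partner (both directions)
\end{verbatim}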
The time taken by the preprocessing step for $K_i$ is $\mathcal{O}(|E_i|\sqrt{|V_i|}) = \mathcal{O}(|E_i|\sqrt{r})$. Since the pieces are vertex-disjoint, the total time taken across all pieces is $\mathcal{O}(m\sqrt{r})$. After this step, no augmenting path with respect to $M$ is completely contained within a single piece. We set the dual weight $y(v)$ of every vertex $v \in A\cup B$ to $0$. The matching $M$ along with the dual weights $y(\cdot)$ satisfies~\eqref{eq:feas1} and~\eqref{eq:feas2} and is therefore feasible. The second step of the algorithm is executed in \emph{phases}. We describe phase $k$ of the algorithm. This phase consists of two stages. \subparagraph*{First stage:} In the first stage, we construct the augmented residual network $G_M'$ and execute Dijkstra's algorithm with $s$ as the source. Let $\ell_v$ for any vertex $v$ denote the shortest path distance from $s$ to $v$ in $G_M'$. If a vertex $v$ is not reachable from $s$, we set $\ell_v$ to $\infty$. Let \begin{equation} \label{eq:closestfreevertex}\ell = \min_{v \in A_F}\ell_v. \end{equation} If $M$ is a perfect matching or $\ell=\infty$, then the algorithm returns $M$ as a maximum matching. Otherwise, we update the dual weight of any vertex $v \in A\cup B$ as follows. If $\ell_v \ge \ell$, we leave its dual weight unchanged. Otherwise, if $\ell_v < \ell$, we set $y(v) \leftarrow y(v) + \ell - \ell_v$. After updating the dual weights, we construct the \emph{admissible graph}, which consists of a subset of edges in the residual network $G_M$ that have zero slack. After the first stage, the matching $M$ and the updated dual weights are feasible. Furthermore, there is at least one augmenting path in the admissible graph. This completes the first stage of the phase. \subparagraph*{Second stage:} In the second stage, we initialize $G'$ to be the admissible graph and execute DFS to identify augmenting paths. For any augmenting path $P$ found during the DFS, we refer to the pieces that contain its edges as the \emph{affected pieces} of $P$. Similar to the HK-Algorithm, the second stage of this phase initiates a DFS from every free vertex $b \in B_F$ in $G'$. If the DFS does not lead to an augmenting path, we delete all edges that were visited by the DFS. On the other hand, if the DFS finds an augmenting path $P$, then the matching is augmented along $P$, all edges that were visited by the DFS and do not lie in an affected piece of $P$ are deleted, and the DFS initiated at $b$ terminates. Now, we describe in detail the DFS initiated for a free vertex $b \in B_F$. Initially, $P=\langle b=v_1\rangle$. Every edge of $G'$ is marked unvisited. At any point during the execution of the DFS, the algorithm maintains a simple path $P = \langle b=v_1, v_2,\ldots, v_k\rangle$. The DFS search continues from the last vertex of this path as follows: \begin{itemize} \item If there are no unvisited edges going out of $v_k$ in $G'$, \begin{itemize} \item If $P=\langle v_1\rangle$, remove all edges that were marked as visited from $G'$ and terminate the execution of the DFS initiated at $b$. \item Otherwise, delete $v_k$ from $P$ and continue the DFS search from $v_{k-1}$. \end{itemize} \item If there is an unvisited edge going out of $v_k$, let $(v_k,v)$ be this edge. Mark $(v_k,v)$ as visited. If $v$ is on the path $P$, continue the DFS from $v_k$. If $v$ is not on the path $P$, add $(v_k,v)$ to $P$, set $v_{k+1}$ to $v$, and, \begin{itemize} \item If $v \in A_F$, then $P$ is an augmenting path from $b$ to $v$.
Execute the \textsc{Augment}\ procedure, which augments $M$ along $P$. Delete from $G'$ every visited edge that does not belong to any affected piece of $P$ and terminate the execution of the DFS initiated at $b$. \item Otherwise, $v \in (A\cup B)\setminus A_F$. Continue the DFS from $v_{k+1}$. \end{itemize} \end{itemize} The \textsc{Augment}\ procedure receives a feasible matching $M$, a set of dual weights $y(\cdot)$, and an augmenting path $P$ as input. For any $(b,a) \in P\setminus M$, where $a \in A$ and $b \in B$, set $y(b) \leftarrow y(b)-2{\sf c}(a,b)$. Then augment $M$ along $P$ by setting $M\leftarrow M\oplus P$. By doing so, every edge of $M$ after augmentation satisfies the feasibility condition~\eqref{eq:feas2}. This completes the description of our algorithm. The algorithm maintains the following invariants during its execution: \begin{enumerate} \item[(I1)] The matching $M$ and the set of dual weights $y(\cdot)$ are feasible. Let $y_{\max}=\max_{v \in B} y(v)$. The dual weight of every vertex $v \in B_F$ is $y_{\max}$ and the dual weight of every vertex $v \in A_F$ is $0$. \item[(I2)] For every phase that is fully executed prior to obtaining a maximum matching, at least one augmenting path is found and the dual weight of every free vertex of $B_F$ increases by at least $1$. \end{enumerate} \subparagraph*{Comparison with the GT-Algorithm:} In the GT-Algorithm, the admissible graph does not have any alternating cycles. Also, every augmenting path edge can be shown not to participate in any future augmenting paths that are computed in the current phase. By using these facts, one can show that the edges visited unsuccessfully by a DFS will not lead to an augmenting path in the current phase. In our case, however, admissible cycles can exist. Also, some edges on the augmenting path that have zero weight remain admissible after augmentation and may participate in another augmenting path in the current phase. We show, however, that any admissible cycle must be completely inside a piece and cannot span multiple pieces (Lemma~\ref{lem:nocycle}). Using this fact, we show that edges visited unsuccessfully by the DFS that do not lie in an affected piece will not participate in any more augmenting paths (Lemma~\ref{lem:final} and Lemma~\ref{lem:one-path}) in the current phase. Therefore, we can safely delete them. \subparagraph*{Correctness:} From Invariant (I2), each phase of our algorithm will increase the cardinality of $M$ by at least $1$, and so our algorithm terminates with a maximum matching. \subparagraph*{Efficiency:} We use the following notation to bound the efficiency of our algorithm. Let $\{P_1,\ldots,P_t\}$ be the $t$ augmenting paths computed in the second step of the algorithm. Let $\mathbb{K}_i$ be the set of affected pieces with respect to the augmenting path $P_i$. Let $M_0$ be the matching at the end of the first step of the algorithm. Let, for $1\le i\le t$, $M_i = M_{i-1}\oplus P_i$, i.e., $M_i$ is the matching after the $i$th augmentation in the second step of the algorithm. The first stage is an execution of Dijkstra's algorithm, which takes $\mathcal{O}(m+n\log n)$ time. Suppose there are $\lambda$ phases; then the cumulative time taken across all phases for the first stage is $\mathcal{O}(\lambda m+\lambda n\log n)$. In the second stage, each edge visited by a DFS is discarded for the remainder of the phase, provided it is not in an affected piece.
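Before bounding the cost of these revisits, we give a schematic rendering of the second stage. This sketch is ours and omits bookkeeping: it assumes an adjacency-set representation \texttt{adm} of the admissible graph, a map \texttt{piece\_of} from a weight-0 edge to its piece (returning \texttt{None} for weight-1 edges), and a helper \texttt{augment} that flips the matched status of the path edges, performs the dual updates of the \textsc{Augment} procedure, and updates the residual directions and free-vertex sets.

\begin{verbatim}
def second_stage(adm, B_free, A_free, piece_of, augment):
    for b in list(B_free):
        visited, path, on_path, found = set(), [b], {b}, None
        while path:
            u = path[-1]
            v = next((x for x in adm[u] if (u, x) not in visited), None)
            if v is None:                  # dead end: backtrack
                on_path.discard(path.pop())
                continue
            visited.add((u, v))
            if v in on_path:               # admissible cycle: skip this edge
                continue
            path.append(v); on_path.add(v)
            if v in A_free:                # augmenting path found
                found = list(path)
                break
        if found:
            augment(found)
            affected = {piece_of(e) for e in zip(found, found[1:])} - {None}
            # retain edges of affected pieces; they may be reused this phase
            visited = {e for e in visited if piece_of(e) not in affected}
        for u, v in visited:               # delete for the rest of the phase
            adm[u].discard(v)
\end{verbatim}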
Since each affected piece has $\mathcal{O}(mr/n)$ edges, the total time taken by all the DFS searches across all the $\lambda$ phases is bounded by $\mathcal{O}((m+n \log n)\lambda + (mr/n)\sum_{i=1}^t |\mathbb{K}_i|)$. In Lemma~\ref{lem:numberofphases}, we bound $\lambda$ by $\sqrt{w}$ and $\sum_{i=1}^t |\mathbb{K}_i|$ by $\mathcal{O}(w\log w)$. Therefore, the total time taken by the algorithm, including the time taken by the preprocessing step, is $\mathcal{O}(m\sqrt{r}+m\sqrt{w}+n\sqrt{w}\log n+\frac{mrw\log w}{n}) = \BigOT( m(\sqrt{w}+ \sqrt{r}+\frac{wr}{n}))$. \begin{lemma} \label{lem:netcostslack1} \label{lem:netcostslack} For any feasible matching $M, y(\cdot)$ maintained by the algorithm, let $y_{\max}$ be the dual weight of every vertex of $B_F$. For any augmenting path $P$ with respect to $M$ from a free vertex $u \in B_F$ to a free vertex $v \in A_F$, \begin{equation*} {\sf c}(P)= y_{\max}+\sum_{(a,b) \in P}s(a,b). \end{equation*} \end{lemma} \begin{proof} The weight of $P$ is \[{\sf c}(P) = \sum_{(a,b) \in P} {\sf c}(a,b) = \sum_{(a,b) \in P\setminus M} (y(b)-y(a) + s(a,b)) + \sum_{(a,b) \in P\cap M} (y(a) - y(b)).\] Since every vertex on $P$ except for $u$ and $v$ participates in one edge of $P\cap M$ and one edge of $P \setminus M$, we can write the above equation as \begin{equation} \label{eq:distance} {\sf c}(P) = y(u) - y(v) + \sum_{(a,b)\in P \setminus M} s(a,b) = y(u) - y(v) + \sum_{(a,b)\in P} s(a,b). \end{equation} The last equality follows from the fact that edges of $P\cap M$ satisfy~\eqref{eq:feas2} and have a slack of zero. From (I1), we get that $y(u)=y_{\max}$ and $y(v)=0$, which gives \begin{equation*} {\sf c}(P)= y_{\max}+\sum_{(a,b) \in P}s(a,b). \end{equation*} \end{proof} \begin{lemma} \label{lem:nocycle} For any feasible matching $M, y(\cdot)$ maintained by the algorithm, and for any alternating cycle $C$ with respect to $M$, if ${\sf c}(C)>0$, then \begin{equation*} \sum_{(a,b) \in C}s(a,b) > 0, \end{equation*} i.e., $C$ is not a cycle in the admissible graph. \end{lemma} \begin{proof} The claim follows from~\eqref{eq:distance} and the fact that the first vertex $u$ and the last vertex $v$ in a cycle are the same. \end{proof} \begin{lemma} \label{lem:numberofphases} The total number of phases is $\mathcal{O}(\sqrt{w})$ and the total number of affected pieces is $\mathcal{O}(w\log w)$, i.e., $\sum_{i=1}^t |\mathbb{K}_i| = \mathcal{O}(w\log w)$. \end{lemma} \begin{proof} Let $M_{\textsc{Opt}}$ be a maximum matching, which has weight at most $w$. Consider any phase $k$ of the algorithm. By (I2), the dual weight $y_{\max}$ of every free vertex in $B_F$ is at least $k$. The symmetric difference of $M$ and $M_{\textsc{Opt}}$ will contain $j=|M_{\textsc{Opt}}|-|M|$ vertex-disjoint augmenting paths. Let $\{\mathcal{P}_1,\ldots, \mathcal{P}_j\}$ be these augmenting paths. These paths contain edges of $M_\textsc{Opt}$ and $M$, both of which have weight at most $w$. Therefore, the sum of the weights of these paths is \begin{equation*} \sum_{i=1}^j {\sf c}(\mathcal{P}_i) \le 2w. \end{equation*} Let $y_{\max}$ be the dual weight of every vertex $b$ of $B$ that is free with respect to $M$, i.e., $b \in B_F$. From (I2), $y_{\max} \ge k$. From Lemma~\ref{lem:netcostslack} and the fact that the slack on every edge is non-negative, we immediately get \begin{equation} \label{eq:phaseslength} 2w \ge \sum_{i=1}^j{\sf c}(\mathcal{P}_i) \ge j y_{\max}\ge jk.
\end{equation} When $\sqrt{w} \leq k <\sqrt{w}+1$, it follows from the above equation that $j = |M_{\textsc{Opt}}|-|M| \le 2\sqrt{w}$. From (I2), we will compute at least one augmenting path in each phase, and so the remaining $j$ unmatched vertices are matched in at most $2\sqrt{w}$ phases. This bounds the total number of phases by $3\sqrt{w}$. Recollect that $\{P_1,\ldots, P_t\}$ are the augmenting paths computed by the algorithm. The matching $M_0$ has $|M_{\textsc{Opt}}|-t$ edges. Let $y_{\max}^l$ correspond to the dual weight of the free vertices of $B_F$ when the augmenting path $P_l$ is found by the algorithm. From Lemma~\ref{lem:netcostslack}, and the fact that $P_l$ is an augmenting path consisting of zero slack edges, we have $y_{\max}^l = {\sf c}(P_l)$. Before augmenting along $P_l$, there are $|M_{\textsc{Opt}}|-t+l-1$ edges in $M_{l-1}$ and $j=|M_{\textsc{Opt}}|-|M_{l-1}| = t-l+1$. Plugging this into~\eqref{eq:phaseslength}, we get ${\sf c}(P_l)= y_{\max}^l \le \frac{2w}{t-l+1}$. Summing over all $1\le l\le t$, we get \begin{equation} \label{eq:lengthbound} \sum_{l=1}^t {\sf c}(P_l) \le w \sum_{l=1}^t \frac{2}{t-l+1} = \mathcal{O}(w \log t) = \mathcal{O}(w \log w). \end{equation} For any augmenting path $P_l$, the number of affected pieces is upper bounded by the number of non-zero weight edges on $P_l$, i.e., $|\mathbb{K}_l| \le {\sf c}(P_l)$. Therefore, \[\sum_{l=1}^t |\mathbb{K}_l| \le \sum_{l=1}^t {\sf c}(P_l) = \mathcal{O}(w \log w).\] \end{proof} \section{Proof of invariants} We now prove (I1) and (I2). Consider any phase $k$ in the algorithm. Assume inductively that at the end of phase $k-1$, (I1) and (I2) hold. We will show that (I1) and (I2) also hold at the end of phase $k$. We establish a lemma that will help us prove (I1) and (I2). \begin{lemma} \label{lem:matchedge} For any edge $(a,b) \in M$, let $\ell_a$ and $\ell_b$ be the distances returned by Dijkstra's algorithm during the first stage of phase $k$. Then, $\ell_a=\ell_b$. \end{lemma} \begin{proof} The only edge directed towards $b$ is the edge from its match $a$. Therefore, any path from $s$ to $b$ in the augmented residual network, including the shortest path, must pass through $a$. Since the slack on any edge of $M$ is $0$, $\ell_b=\ell_a+s(a,b)= \ell_a$. \end{proof} \begin{lemma} \label{lem:feas} Any matching $M$ and dual weights $y(\cdot)$ maintained during the execution of the algorithm are feasible. \end{lemma} \begin{proof} We begin by showing that the dual weight modifications in the first stage of phase $k$ do not violate the dual feasibility conditions~\eqref{eq:feas1} and~\eqref{eq:feas2}. Let $\tilde{y}(\cdot)$ denote the dual weights after the execution of the first stage of the algorithm. Consider any edge $(u,v)$ directed from $u$ to $v$. There are the following possibilities: If both $\ell_u$ and $\ell_v$ are greater than or equal to $\ell$, then $y(u)$ and $y(v)$ remain unchanged and the edge remains feasible. If both $\ell_u$ and $\ell_v$ are less than $\ell$ and $(u,v) \in M$, then, from Lemma~\ref{lem:matchedge}, $\ell_u=\ell_v$. We have $\tilde{y}(u) = y(u)+\ell-\ell_u$, $\tilde{y}(v)=y(v)+\ell-\ell_v$, and $\tilde{y}(u)-\tilde{y}(v)= y(u)-y(v) +\ell_v - \ell_u = {\sf c}(u,v)$, implying that $(u,v)$ satisfies~\eqref{eq:feas2}. If $\ell_u$ and $\ell_v$ are less than $\ell$ and $(u,v)\not\in M$, then $u \in B$ and $v \in A$. By definition, $y(u) - y(v) + s(u,v)= {\sf c}(u,v)$. By the properties of shortest paths, for any edge $(u,v)$, $\ell_v -\ell_u \le s(u,v)$.
The dual weight of $u$ is updated to $y(u)+\ell-\ell_u$ and the dual weight of $v$ is updated to $y(v)+\ell-\ell_v$. The difference in the updated dual weights is $\tilde{y}(u)-\tilde{y}(v) = (y(u) + \ell - \ell_u) - (y(v) + \ell - \ell_v) = y(u) - y(v) + \ell_v-\ell_u \le y(u) - y(v)+s(u,v) = {\sf c}(u,v)$. Therefore, $(u,v)$ satisfies~\eqref{eq:feas1}. If $\ell_u < \ell$ and $\ell_v \ge \ell$, then, from Lemma~\ref{lem:matchedge}, $(u,v)\not\in M$, and so $u \in B$ and $v \in A$. From the shortest path property, for any edge $(u,v)$, $\ell_v -\ell_u \le s(u,v)$. Therefore,
\begin{equation*}
\tilde{y}(u) - \tilde{y}(v) =y(u)-y(v)+\ell-\ell_u \le y(u)-y(v)+\ell_v -\ell_u \le y(u)-y(v)+s(u,v) = {\sf c}(u,v),
\end{equation*}
implying $(u,v)$ satisfies~\eqref{eq:feas1}. If $\ell_u \ge \ell$ and $\ell_v < \ell$, then, from Lemma~\ref{lem:matchedge}, $(u,v)\not\in M$, and so $u \in B$ and $v \in A$. Since $\ell_v < \ell$, we have,
\begin{equation*}
\tilde{y}(u) - \tilde{y}(v) =y(u)-y(v)-\ell+\ell_v < y(u)-y(v) \le {\sf c}(u,v),
\end{equation*}
implying $(u,v)$ satisfies~\eqref{eq:feas1}.
In the second stage of the algorithm, when an augmenting path $P$ is found, the dual weights of some vertices of $B$ on $P$ decrease and the directions of edges of $P$ change. We argue these operations do not violate feasibility. Let $\tilde{y}(\cdot)$ be the dual weights after these operations. Consider any edge $(a,b) \in A \times B$. If $b$ is not on $P$, then the feasibility of $(a,b)$ is unchanged. If $b$ is on $P$ and $a$ is not on $P$, then $\tilde{y}(b) \leq y(b)$, and $\tilde{y}(b) - \tilde{y}(a) \leq y(b) - y(a) \leq {\sf c}(a,b)$, implying \eqref{eq:feas1} holds. The remaining case is when both $a$ and $b$ are on $P$. Consider if $(a,b) \in M$ after augmentation. Prior to augmentation, $(a,b)$ was an admissible edge not in $M$, and we have $y(b) - y(a) = {\sf c}(a,b)$ and $\tilde{y}(b) = y(b) - 2{\sf c}(a,b)$. So, $\tilde{y}(a) - \tilde{y}(b) = y(a) - (y(b) - 2{\sf c}(a,b)) = y(a) - y(b) + 2{\sf c}(a,b) = {\sf c}(a,b)$, implying \eqref{eq:feas2} holds. Finally, consider if $(a,b) \notin M$ after augmentation. Then, prior to augmentation, $(a,b)$ was in $M$, and $y(a) - y(b) = {\sf c}(a,b)$. So, $\tilde{y}(b) - \tilde{y}(a) \leq y(b) - y(a) = -{\sf c}(a,b) \leq {\sf c}(a,b)$, implying \eqref{eq:feas1} holds. We conclude the second stage maintains feasibility.
\end{proof}
Next, we show that the dual weights of all vertices of $A_F$ are zero and the dual weights of all vertices of $B_F$ are equal to $y_{\max}$. At the start of the second step, all dual weights are $0$. During the first stage, the dual weight of any vertex $v$ will increase by $\ell - \ell_v$ only if $\ell_v < \ell$. By~\eqref{eq:closestfreevertex}, for every free vertex $a \in A_F$, $\ell_a \ge \ell$, and so the dual weight of every free vertex of $A$ remains unchanged at $0$. Similarly, for any free vertex $b \in B_F$, $\ell_b = 0$, and the dual weight increases by $\ell$, which is the largest possible increase. This implies that every free vertex in $B_F$ will have the same dual weight of $y_{\max}$. In the second stage, matched vertices of $B$ undergo a decrease in their dual weights, which does not affect vertices in $B_F$. Therefore, the vertices of $B_F$ will still have a dual weight of $y_{\max}$ after stage two. This completes the proof of (I1).
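The case analysis above and the argument for (I1) both instantiate the same first-stage update rule; written compactly (our summary of the case analysis, in the notation already in place), the first stage replaces the dual weight of every vertex $v$ by
\begin{equation*}
\tilde{y}(v) = y(v) + \max\{0,\, \ell - \ell_v\},
\end{equation*}
so only vertices with $\ell_v < \ell$ have their dual weights increased. Before we prove (I2), we will first establish a property of the admissible graph after the dual weight modifications in the first stage of the algorithm.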
\begin{lemma}
\label{lem:firststage}
After the first stage of each phase, there is an augmenting path consisting of admissible edges.
\end{lemma}
\begin{proof}
Let $a \in A_F$ be a free vertex whose shortest path distance from $s$ in the augmented residual network is $\ell$, i.e., $\ell_a=\ell$. Let $P$ be the shortest path from $s$ to $a$ and let $P_a$ be the path $P$ with $s$ removed from it. Note that $P_a$ is an augmenting path. We will show that after the dual updates in the first stage, every edge of $P_a$ is admissible. Consider any edge $(u,v) \in P_a \cap M$, where $u \in A$ and $v \in B$. From Lemma~\ref{lem:matchedge}, $\ell_u=\ell_v$. Then the updated dual weights are $\tilde{y}(u) = y(u)+\ell-\ell_u$ and $\tilde{y}(v) = y(v)+\ell-\ell_v$. Therefore, $\tilde{y}(u)-\tilde{y}(v)=y(u)-y(v) -\ell_u+\ell_v= {\sf c}(u,v)$, and $(u,v)$ is admissible. Otherwise, consider any edge $(u,v) \in P_a \setminus M$, where $u \in B$ and $v\in A$. From the optimal substructure property of shortest paths, for any edge $(u,v) \in P_a$ directed from $u$ to $v$, $\ell_v-\ell_u=s(u,v)$. Therefore, the difference of the new dual weights is
\begin{equation*}
\tilde{y}(u)-\tilde{y}(v)= y(u)+\ell-\ell_u -y(v) -\ell+\ell_v = y(u)-y(v)-\ell_u+\ell_v = y(u)-y(v)+s(u,v)={\sf c}(u,v),
\end{equation*}
implying that $(u,v)$ is admissible.
\end{proof}
\subparagraph*{Proof of (I2):}
From Lemma~\ref{lem:firststage}, there is an augmenting path of admissible edges at the end of the first stage of any phase. Since we execute a DFS from every free vertex $b \in B_F$ in the second stage, we are guaranteed to find an augmenting path. Next, we show in Corollary~\ref{cor:one-path} that there is no augmenting path of admissible edges at the end of stage two of phase $k$, i.e., all augmenting paths in the residual network have a slack of at least $1$. This will immediately imply that the first stage of phase $k+1$ will have to increase the dual weight of every free vertex by at least $1$, completing the proof of (I2).
Edges that are deleted during a phase do not participate in any augmenting path for the rest of the phase. We show this in two steps. First, we show that at the time of deletion of an edge $(u,v)$, there is no path in the admissible graph that starts from the edge $(u,v)$ and ends at a free vertex $a \in A_F$ (Lemma~\ref{lem:one-path}). In Lemma~\ref{lem:final}, we show that any such edge $(u,v)$ will not participate in any admissible alternating path to a free vertex of $A_F$ for the rest of the phase. We use DFS$(b,k)$ to denote the DFS initiated from $b$ in phase $k$. Let $P^b_u$ denote the path maintained by DFS$(b,k)$ when the vertex $u$ was added to the path.
\begin{lemma}
\label{lem:final}
Consider some point during the second stage of phase $k$ where there is an edge $(u,v)$ that does not participate in any admissible alternating path to a vertex of $A_F$. Then, for the remainder of phase $k$, $(u,v)$ does not participate in any admissible alternating path to a vertex of $A_F$.
\end{lemma}
\begin{proof}
Assume for the sake of contradiction that at some later time during phase $k$, $(u,v)$ becomes part of an admissible path $P_{y,z}$ from a vertex $y$ to a vertex $z \in A_F$. Consider the first time this occurs for $(u,v)$. During the second stage, the dual weights of some vertices of $B$ may decrease just prior to augmentation; however, this does not create any new admissible edges.
Therefore, $P_{y,z}$ must have become an admissible path due to augmentation along a path $P_{a,b}$ from some $b \in B_F$ to some $a \in A_F$. Specifically, $P_{y,z}$ must intersect $P_{a,b}$ at some vertex $x$. Therefore, prior to augmenting along $P_{a,b}$, there was an admissible path from $y$ to $a$ via $x$. This contradicts the assumption that $(u,v)$ did not participate in any admissible path to a vertex of $A_F$ prior to this time.
\end{proof}
\begin{lemma}
\label{lem:dfsprop}
Consider the execution of DFS$(b,k)$ and the path $P_u^b$. Suppose DFS$(b,k)$ marks an edge $(u,v)$ as visited. Let $P_{v}$ be an admissible alternating path from $v$ to any free vertex $a \in A_F$ in $G'$. Suppose $P_{v}$ and $P_u^b$ are vertex-disjoint. Then, DFS$(b,k)$ will find an augmenting path that includes the edge $(u,v)$.
\end{lemma}
\begin{proof}
The paths $P_v$ and $P^b_u$ are vertex-disjoint, and so $v$ is not on the path $P^b_u$. Therefore, DFS($b,k$) will add $(u,v)$ to the path, and we get the path $P=P^b_v$. We will show, through a contradiction, that all edges of $P_v$ are unvisited by DFS($b,k$), and so the DFS procedure, when continued from $v$, will discover an augmenting path. Consider, for the sake of contradiction, among all the edges of $P_v$, the edge $(u',v')$ that was marked visited first. We claim the following:
\begin{itemize}
\item[(i)] {\it $(u',v')$ is visited before $(u,v)$}: This follows from the assumption that when $(u,v)$ was marked as visited, $(u',v')$ was already marked as visited by the DFS.
\item[(ii)] {\it $(u,v)$ is not a descendant of $(u',v')$ in the DFS}: If $(u',v')$ was an ancestor of $(u,v)$ in the DFS, then $P^b_u$ contains $(u',v')$. By definition, $P_v$ also contains $(u',v')$, which contradicts the assumption that $P^b_u$ and $P_v$ are disjoint paths.
\item[(iii)] {\it When $(u',v')$ is marked visited, it will be added to the path by the DFS}: The only reason why $(u',v')$ is visited but not added is if $v'$ is already on the path $P^b_{u'}$. In that case, $P_v$ and $P^b_{u'}$ will share an edge that was visited before $(u',v')$, contradicting the assumption that $(u',v')$ was the earliest edge of $P_v$ to be marked visited.
\end{itemize}
From (iii), when $(u',v')$ was visited, it was added to the path $P^b_{v'}$. Since $(u',v')$ was the edge on $P_v$ that was marked visited first by DFS($b,k$), all edges on the subpath from $v'$ to $a$ are unvisited. Therefore, the DFS($b,k$), when continued from $v'$, will not visit $(u,v)$ (from (ii)), will find an augmenting path, and terminate. From (i), $(u,v)$ will not be marked visited by DFS($b,k$), leading to a contradiction.
\end{proof}
\begin{lemma}
\label{lem:one-path}
Consider a DFS initiated from some free vertex $b\in B_F$ in phase $k$. Let $M$ be the matching at the start of this DFS and $M'$ be the matching when the DFS terminates. Suppose the edge $(u,v)$ was deleted during DFS$(b,k)$. Then there is no admissible path starting with $(u,v)$ and ending at a free vertex $a \in A_F$ in $G_{M'}$.
\end{lemma}
\begin{proof}
At the start of phase $k$, $G'$ is initialized to the admissible graph. Inductively, we assume that all the edges discarded in phase $k$ prior to the execution of DFS($b,k$) do not participate in any augmenting path of admissible edges with respect to $M$. Therefore, any augmenting path of admissible edges in $G_M$ remains an augmenting path in $G'$. There are two possible outcomes for DFS($b,k$).
Either (i) the DFS terminates without finding an augmenting path, or (ii) the DFS terminates with an augmenting path $\tilde{P}$ and $M'=M\oplus \tilde{P}$. In case (i), $M=M'$ and any edge $(u,v)$ visited by DFS($b,k$) is marked for deletion. For the sake of contradiction, let $(u,v)$ participate in an admissible path $P$ to a free vertex $a' \in A_F$. Since $u$ is reachable from $b$ and $a'$ is reachable from $u$ in $G_{M}$, $a'$ is reachable from $b$. This contradicts the fact that DFS($b,k$) did not find an augmenting path. Therefore, no edge $(u,v)$ marked for deletion participates in an augmenting path with respect to $M$. In case (ii), $M'=M\oplus \tilde{P}$. DFS($b,k$) marks two kinds of edges for deletion.
\begin{itemize}
\item[(a)] Any edge $(u,v)$ on the augmenting path $\tilde{P}$ such that ${\sf c}(u,v)=1$ is deleted, and,
\item[(b)] Any edge $(u,v)$ that is marked visited by DFS($b,k$), does not lie on $\tilde{P}$, and does not belong to any affected piece is deleted.
\end{itemize}
In (a), there are two possibilities: (1) $(u,v) \in \tilde{P} \cap M$ or (2) $(u,v)\in \tilde{P}\setminus M$. If $(u,v) \in M$ (case (a)(1)), then, after augmentation along $\tilde{P}$, $s(u,v)$ increases from $0$ to at least $2$, and $(u,v)$ is no longer admissible. Therefore, $(u,v)$ does not participate in any admissible alternating path to a free vertex in $A_F$ with respect to $G_{M'}$. If $(u,v) \not\in M$ (case (a)(2)), then the \textsc{Augment}\ procedure reduces the dual weight of $u \in B$ by $2$. So, every edge going out of $u$ will have a slack of at least $2$. Therefore, $(u,v)$ cannot participate in any admissible path $P$ to a free vertex in $A_F$. This completes case (a).
For (b), we will show that $(u,v)$, even prior to augmentation along $\tilde{P}$, did not participate in any path of admissible edges from $v$ to any free vertex of $A_F$. For the sake of contradiction, let there be an admissible path $P_v$ from $v$ to $a' \in A_F$. We claim that $P_v$ and $P^b_u$ are not vertex-disjoint. Otherwise, from Lemma~\ref{lem:dfsprop}, the path $\tilde{P}$ found by DFS($b,k$) includes $(u,v)$. However, by our assumption for case (b), $(u,v)$ does not lie on $\tilde{P}$. Therefore, we safely assume that $P_v$ intersects $P^b_u$. There are two cases:
\begin{itemize}
\item {\it ${\sf c}(u,v)=1$}: We will construct a cycle of admissible edges containing the edge $(u,v)$. Since ${\sf c}(u,v)=1$, our construction will contradict Lemma~\ref{lem:nocycle}. Let $x$ be the first vertex common to both $P_v$ and $P^b_u$ as we walk from $v$ to $a'$ on $P_v$. To create the cycle, we traverse from $x$ to $u$ along the path $P^b_u$, followed by the edge $(u,v)$, followed by the path from $v$ to $x$ along $P_v$. All edges of this cycle are admissible, including the edge $(u,v)$.
\item {\it ${\sf c}(u,v)=0$}: In this case, $(u,v)$ belongs to some piece $K_i$ that is not an affected piece. Among all edges visited by DFS($b,k$), consider the edge $(u',v')$ of $K_i$, the same piece as $(u,v)$, such that $v'$ has a path to the vertex $a'\in A_F$ with the fewest edges. Let $P_{v'}$ be this path. We claim that $P_{v'}$ and $P^b_{u'}$ are not vertex-disjoint. Otherwise, from Lemma~\ref{lem:dfsprop}, the path $\tilde{P}$ found by DFS($b,k$) includes $(u',v')$, and $K_i$ would have been an affected piece. Therefore, we can safely assume that $P_{v'}$ intersects with $P^b_{u'}$.
Let $z$ be the first intersection point with $P^b_{u'}$ as we walk from $v'$ to $a'$, and let $z'$ be the vertex that follows $z$ in $P^b_{u'}$. There are two possibilities:
\begin{itemize}
\item {\it The edge $(z,z')\in K_i$:} In this case, $(z,z')$ is also marked visited by DFS($b,k$), and $z'$ has a path to $a'$ with fewer edges than $v'$. This contradicts our assumption about $(u',v')$.
\item {\it The edge $(z,z') \not\in K_i$:} In this case, consider the cycle obtained by walking from $z$ to $u'$ along the path $P^b_{u'}$ followed by the edge $(u',v')$ and the path from $v'$ to $z$ along $P_{v'}$. Since $(u',v') \in K_i$ and $(z,z') \not\in K_i$, the admissible cycle contains at least one edge of weight $1$. This contradicts Lemma~\ref{lem:nocycle}.
\end{itemize}
This concludes case (b), which shows that $(u,v)$ did not participate in any augmenting paths with respect to $M$. From Lemma~\ref{lem:final}, it follows that $(u,v)$ does not participate in any augmenting path with respect to $G_{M'}$ as well.
\end{proof}
\begin{corollary}
\label{cor:one-path}
At the end of any phase, there is no augmenting path of admissible edges.
\end{corollary}
\section{Minimum bottleneck matching}
\label{sec:bottleneck}
We are given two sets $A$ and $B$ of $n$ $d$-dimensional points. Consider a weighted and complete bipartite graph on points of $A$ and $B$. The weight of any edge $(a,b) \in A\times B$ is given by its Euclidean distance and denoted by $\|a-b\|$. For any matching $M$ of $A$ and $B$, let its largest weight edge be its \emph{bottleneck edge}. In the \emph{minimum bottleneck matching} problem, we wish to compute a matching $M_\textsc{Opt}$ of $A$ and $B$ with the smallest weight bottleneck edge. We refer to this weight as the \emph{bottleneck distance} of $A$ and $B$ and denote it by $\beta^*$. An \emph{$\varepsilon$-approximate bottleneck matching} of $A$ and $B$ is any matching $M$ with a bottleneck edge weight of at most $(1+\varepsilon)\beta^*$.
We present an algorithm that takes as input $A,B$, and a value $\delta$ such that $\beta^*\le \delta \le (1+\varepsilon/3)\beta^*$, and produces an $\varepsilon$-approximate bottleneck matching. For simplicity in presentation, we describe our algorithm for the $2$-dimensional case when all points of $A$ and $B$ are in a bounding square $S$. The algorithm easily extends to any arbitrary fixed dimension $d$. For the $2$-dimensional case, given a value $\delta$, our algorithm executes in $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^3)$ time. Although the value of $\delta$ is not known to the algorithm, we can first find a value $\alpha$ that is guaranteed to be an $n$-approximation of the bottleneck distance~\cite[Lemma 2.2]{av_scg04} and then select $\mathcal{O}(\log n/\varepsilon)$ values from the interval $[\alpha/n, \alpha]$ of the form $(1+\varepsilon/3)^i\alpha/n$, for $0\le i \le \mathcal{O}(\log n /\varepsilon)$. We will then execute our algorithm for each of these $\mathcal{O}(\log n/\varepsilon)$ selected values of $\delta$. Our algorithm returns a maximum matching whose edges are of length at most $(1+\varepsilon/3)\delta$ in $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^3)$ time. At least one of the $\delta$ values chosen will satisfy $\beta^* \le \delta \le (1+\varepsilon/3)\beta^*$. The matching returned by the algorithm for this value of $\delta$ will be perfect ($|M|=n$) and have a bottleneck edge of weight at most $(1+\varepsilon/3)^2\beta^* \le (1+\varepsilon)\beta^*$, as desired.
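As an illustration of this guessing scheme, here is a minimal sketch (ours, not part of the algorithm's formal description; \texttt{alpha} is assumed to be the $n$-approximation of $\beta^*$ from~\cite[Lemma 2.2]{av_scg04}):
\begin{verbatim}
def delta_guesses(alpha, n, eps):
    # Candidate values (1 + eps/3)^i * alpha / n for
    # 0 <= i <= O(log n / eps), covering [alpha/n, alpha].
    guesses = []
    d = alpha / n
    while d <= alpha * (1 + eps / 3):
        guesses.append(d)
        d *= 1 + eps / 3
    return guesses
\end{verbatim}
Since $\beta^*$ lies in $[\alpha/n,\alpha]$ and consecutive guesses differ by a factor of $(1+\varepsilon/3)$, at least one returned value $\delta$ satisfies $\beta^* \le \delta \le (1+\varepsilon/3)\beta^*$.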
Among all executions of our algorithm that return a perfect matching, we return a perfect matching with the smallest bottleneck edge weight. Therefore, the total time taken to compute the $\varepsilon$-approximate bottleneck matching is $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^4)$.
Given the value of $\delta$, the algorithm will construct a graph as follows: Let $\mathbb{G}$ be a grid on the bounding square $S$. The side-length of every square in this grid is $\varepsilon\delta/(6\sqrt{2})$. For any cell $\xi$ in the grid $\mathbb{G}$, let $N(\xi)$ denote the subset of all cells $\xi'$ of $\mathbb{G}$ such that the minimum distance between $\xi$ and $\xi'$ is at most $\delta$. By a simple packing argument, it can be shown that $|N(\xi)| = \mathcal{O}(1/\varepsilon^2)$. For any point $v \in A\cup B$, let $\xi_v$ be the cell of grid $\mathbb{G}$ that contains $v$. We say that a cell $\xi$ is \emph{active} if $(A\cup B)\cap \xi \neq \emptyset$. Let $A_\xi$ and $B_\xi$ denote the points of $A$ and $B$ in the cell $\xi$. We construct a bipartite graph $\mathcal{G}(A\cup B, \mathcal{E})$ on the points in $A\cup B$ as follows: For any pair of points $(a,b) \in A\times B$, we add an edge in the graph if $\xi_b \in N(\xi_a)$. Note that every edge $(a,b)$ with $\|a-b\| \le \delta$ will be included in $\mathcal{G}$. Since $\delta$ is at least the bottleneck distance, $\mathcal{G}$ will have a perfect matching. The maximum distance between any cell $\xi$ and a cell in $N(\xi)$ is at most $(1+\varepsilon/3)\delta$. Therefore, no edge in $\mathcal{G}$ will have a length greater than $(1+\varepsilon/3)\delta$. This implies that any perfect matching in $\mathcal{G}$ will also be an $\varepsilon$-approximate bottleneck matching. We use our algorithm for maximum matching to compute this perfect matching in $\mathcal{G}$. Note that $\mathcal{G}$ can have $\Omega(n^2)$ edges. For the sake of efficiency, our algorithm executes on a compact representation of $\mathcal{G}$ that is described later.
Next, we assign weights of $0$ and $1$ to the edges of $\mathcal{G}$ so that any maximum matching in $\mathcal{G}$ has a small weight $w$. For a parameter\footnote{Assume $r$ to be a perfect square.} $r > 1$, we will carefully select another grid $\mathbb{G}'$ on the bounding square $S$, each cell of which has a side-length of $\sqrt{r}(\varepsilon\delta/(6\sqrt{2}))$ and encloses $\sqrt{r}\times \sqrt{r}$ cells of $\mathbb{G}$. For any cell $\xi$ of the grid $\mathbb{G}$, let $\Box_{\xi}$ be the cell in $\mathbb{G}'$ that contains $\xi$. Any cell $\xi$ of $\mathbb{G}$ is a \emph{boundary cell} with respect to $\mathbb{G}'$ if there is a cell $\xi' \in N(\xi)$ such that $\Box_{\xi'}\neq \Box_{\xi}$. Equivalently, if the minimum distance from $\xi$ to the boundary of $\Box_{\xi}$ is at most $\delta$, then $\xi$ is a boundary cell. For any boundary cell $\xi$ of $\mathbb{G}$ with respect to grid $\mathbb{G}'$, we refer to all points of $A_\xi$ and $B_\xi$ that lie in $\xi$ as boundary points. All other points of $A$ and $B$ are referred to as internal points. We carefully construct this grid $\mathbb{G}'$ such that the total number of boundary points is $\mathcal{O}(n/(\varepsilon\sqrt{r}))$ as follows: First, we will generate the vertical lines for $\mathbb{G}'$, and then we will generate the horizontal lines using a similar construction. Define the vertical line $y_{ij}$ to be the line $x=i(\varepsilon\delta)/(6\sqrt{2})+j\sqrt{r}(\varepsilon\delta/(6\sqrt{2}))$.
For any fixed integer $i$ in $[1, \sqrt{r}]$, consider the set of vertical lines $\mathbb{Y}_i=\{y_{ij}\mid y_{ij} \textnormal{ intersects the bounding square } S\}$. We label all cells $\xi$ of $\mathbb{G}$ as boundary cells with respect to $\mathbb{Y}_i$ if the distance from $\xi$ to some vertical line in $\mathbb{Y}_i$ is at most $\delta$. We designate the points inside the boundary cells as boundary vertices with respect to $\mathbb{Y}_i$. For any given $i$, let $A_i$ and $B_i$ be the boundary vertices of $A$ and $B$ with respect to the lines in $\mathbb{Y}_i$. We select an integer $\kappa = \arg\min_{1\le i\le \sqrt{r}} |A_i\cup B_i|$ and use $\mathbb{Y}_{\kappa}$ as the vertical lines for our grid $\mathbb{G}'$. We use a symmetric construction for the horizontal lines.
\begin{lemma}
\label{lem:boundarysize}
Let $A_i$ and $B_i$ be the boundary points with respect to the vertical lines $\mathbb{Y}_i$. Let $\kappa = \arg\min_{1\le i\le \sqrt{r}} |A_i\cup B_i|$. Then, $|A_\kappa\cup B_\kappa| = \mathcal{O}(n/(\varepsilon\sqrt{r}))$.
\end{lemma}
\begin{proof}
For any fixed cell $\xi$ in $\mathbb{G}$, of the $\sqrt{r}$ values of $i$, there are $\mathcal{O}(1/\varepsilon)$ values for which $\mathbb{Y}_i$ has a vertical line at a distance at most $\delta$ from $\xi$. Therefore, each cell $\xi$ will be a boundary cell in only $\mathcal{O}(1/\varepsilon)$ shifts out of $\sqrt{r}$ shifts. So, $A_\xi$ and $B_\xi$ will be counted in $A_i \cup B_i$ for $\mathcal{O}(1/\varepsilon)$ different values of $i$. Therefore, if we take the average over choices of $i$, we get
\begin{equation*}
\min_{1\le i\le \sqrt{r}}|A_i\cup B_i| \le \frac{1}{\sqrt{r}}\sum_{i=1}^{\sqrt{r}} |A_i\cup B_i| \le \mathcal{O}(n/(\varepsilon\sqrt{r})).
\end{equation*}
\end{proof}
Using a similar construction, we guarantee that the number of boundary points with respect to the horizontal lines of $\mathbb{G}'$ is also $\mathcal{O}(n/(\varepsilon\sqrt{r}))$.
\begin{Corollary}
\label{cor:smallweight}
The grid $\mathbb{G}'$ that we construct has $\mathcal{O}(n/(\varepsilon\sqrt{r}))$ boundary points.
\end{Corollary}
For any two cells $\xi$ and $\xi' \in N(\xi)$ of the grid $\mathbb{G}$, suppose $\Box_{\xi}\neq \Box_{\xi'}$. Then the weights of all edges of $A_{\xi}\times B_{\xi'}$ and of $B_\xi\times A_{\xi'}$ are set to $1$. All other edges have a weight of $0$. We do not make an explicit weight assignment as it is expensive to do so. Instead, we can always derive the weight of an edge when we access it. Only boundary points will have edges of weight $1$ incident on them. From Corollary~\ref{cor:smallweight}, it follows that any maximum matching will have a weight of $w = \mathcal{O}(n/(\varepsilon\sqrt{r}))$. The edges of every piece in $\mathcal{G}$ have endpoints that are completely inside a cell of $\mathbb{G}'$. Note, however, that there is no straightforward bound on the number of points and edges of $\mathcal{G}$ inside each piece. Moreover, the number of edges in $\mathcal{G}$ can be $\Theta(n^2)$. Consider any feasible matching $M, y(\cdot)$ in $\mathcal{G}$. Let $\mathcal{G}_M$ be the residual network. In order to obtain a running time of $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^3)$, we use the grid $\mathbb{G}$ to construct a compact residual network $\mathcal{CG}_M$ for any feasible matching $M,y(\cdot)$ and use this compact graph to implement our algorithm. The following lemma assists us in constructing the compact residual network.
\begin{lemma}
\label{lem:differ1}
Consider any feasible matching $M, y(\cdot)$ maintained by our algorithm on $\mathcal{G}$ and any active cell $\xi$ in the grid $\mathbb{G}$. The dual weights of any two points $a,a' \in A_{\xi}$ can differ by at most $2$. Similarly, the dual weights of any two points $b,b' \in B_{\xi}$ can differ by at most $2$.
\end{lemma}
\begin{proof}
We present our proof for two points $b,b' \in B_{\xi}$. A similar argument extends to $a,a'\in A_{\xi}$. For the sake of contradiction, let $y(b) \ge y(b')+3$. The point $b'$ must be matched since $y(b') < y(b) \le y_{\max}$. Let $m(b') \in A$ be the match of $b'$ in $M$. From~\eqref{eq:feas2}, $y(m(b'))-y(b')={\sf c}(b',m(b'))$. Since both $b$ and $b'$ are in $\xi$, we have ${\sf c}(b,m(b'))={\sf c}(b',m(b'))$. So, $y(b)-y(m(b'))\ge (y(b') +3)-y(m(b')) = 3-{\sf c}(b,m(b'))$. This violates~\eqref{eq:feas1}, leading to a contradiction.
\end{proof}
For any feasible matching and any cell $\xi$ of $\mathbb{G}$, we divide the points of $A_{\xi}$ and $B_{\xi}$ based on their dual weight into at most three clusters. Let $A_{\xi}^1, A_{\xi}^2$ and $A_{\xi}^3$ be the three clusters of points in $A_{\xi}$ and let $B_{\xi}^1, B_{\xi}^2$ and $B_{\xi}^3$ be the three clusters of points in $B_{\xi}$. We assume that points with the largest dual weights are in $A_{\xi}^1$ (resp. $B_{\xi}^1$), the points with the second largest dual weights are in $A_{\xi}^2$ (resp. $B_{\xi}^2$), and the points with the smallest dual weights are in $A_{\xi}^3$ (resp. $B_{\xi}^3$).
\subparagraph*{Compact residual network:}
Given a feasible matching $M$, we construct a compact residual network $\mathcal{CG}_M$ to assist in the fast implementation of our algorithm. The vertex set $\mathcal{A}\cup \mathcal{B}$ of the compact residual network is constructed as follows. First, we describe the vertex set $\mathcal{A}$. For every active cell $\xi$ in $\mathbb{G}$, we add a vertex $a_{\xi}^1$ (resp. $a_{\xi}^2, a_{\xi}^3$) to represent the set $A_{\xi}^1$ (resp. $A_{\xi}^2, A_{\xi}^3$) provided $A_{\xi}^1\neq \emptyset$ (resp. $A_{\xi}^2\neq \emptyset, A_{\xi}^3\neq \emptyset$). We designate $a_{\xi}^1$ (resp. $a_{\xi}^2, a_{\xi}^3$) as a \emph{free} vertex if $A_{\xi}^1\cap A_F \neq \emptyset$ (resp. $A_{\xi}^2\cap A_F\neq \emptyset, A_{\xi}^3\cap A_F\neq \emptyset$). Similarly, we construct a vertex set $\mathcal{B}$ by adding a vertex $b_{\xi}^1$ (resp. $b_{\xi}^2, b_{\xi}^3$) to represent the set $B_{\xi}^1$ (resp. $B_{\xi}^2, B_{\xi}^3$) provided $B_{\xi}^1\neq \emptyset$ (resp. $B_{\xi}^2\neq \emptyset, B_{\xi}^3\neq \emptyset$). We designate $b_{\xi}^1$ (resp. $b_{\xi}^2, b_{\xi}^3$) as a \emph{free} vertex if $B_{\xi}^1\cap B_F \neq \emptyset$ (resp. $B_{\xi}^2\cap B_F\neq \emptyset, B_{\xi}^3\cap B_F\neq \emptyset$). Each active cell $\xi$ of the grid $\mathbb{G}$ therefore contributes at most six vertices. Each point in $\mathcal{A}\cup \mathcal{B}$ will inherit the dual weights of the points in its cluster; for any vertex $a_\xi^1 \in \mathcal{A}$ (resp. $a_\xi^2 \in \mathcal{A}, a_\xi^3 \in \mathcal{A}$), let $y(a_{\xi}^1)$ (resp. $y(a_{\xi}^2),y(a_{\xi}^3)$) be the dual weight of all points in $A_{\xi}^1$ (resp. $A_{\xi}^2, A_{\xi}^3$). We define $y(b_{\xi}^1)$, $y(b_{\xi}^2)$, and $y(b_{\xi}^3)$ as the dual weights of points in $B_{\xi}^1, B_{\xi}^2$, and $B_{\xi}^3$, respectively. Since there are at most $2n$ active cells, $|\mathcal{A}\cup \mathcal{B}|=\mathcal{O}(n)$.
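As a concrete illustration, the following minimal sketch (ours; the accessor callbacks \texttt{cell\_of}, \texttt{dual}, and \texttt{is\_free} are hypothetical names) builds these clusters from a set of points:
\begin{verbatim}
from collections import defaultdict

def compact_vertices(points, cell_of, dual, is_free):
    # Bucket the points of each active cell by their dual weight;
    # by Lemma differ1 there are at most three distinct values per cell.
    by_cell = defaultdict(lambda: defaultdict(list))
    for p in points:
        by_cell[cell_of(p)][dual(p)].append(p)
    compact = {}
    for cell, buckets in by_cell.items():
        # Cluster 1 gets the largest dual weight, cluster 2 the second
        # largest, and cluster 3 the smallest, as in the text.
        clusters = []
        for y in sorted(buckets, reverse=True):
            pts = buckets[y]
            clusters.append((y, len(pts), any(is_free(p) for p in pts)))
        assert len(clusters) <= 3
        compact[cell] = clusters
    return compact
\end{verbatim}
Next, we create the edge set for the compact residual network $\mathcal{CG}$.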
For any active cell $\xi$ in the grid $\mathbb{G}$ and for any cell $\xi'\in N(\xi)$,
\begin{itemize}
\item We add a directed edge from $a_{\xi}^i$ to $b_{\xi'}^j$, for $i,j \in \{1,2,3\}$, if there is an edge $(a,b) \in (A_{\xi}^i\times B_{\xi'}^j)\cap M$. We define the weight of $(a_{\xi}^i,b_{\xi'}^j)$ to be ${\sf c}(a,b)$. We also define the slack $s(a_{\xi}^i,b_{\xi'}^j)$ to be ${\sf c}(a_{\xi}^i,b_{\xi'}^j) -y(a_{\xi}^i)+y(b_{\xi'}^j)$, which is equal to ${\sf c}(a,b)-y(a)+y(b) = s(a,b) = 0$.
\item We add a directed edge from $b_{\xi}^i$ to $a_{\xi'}^j$, for $i,j \in \{1,2,3\}$, if $(B_{\xi}^i \times A_{\xi'}^j)\setminus M \neq \emptyset$. Note that the weight and slack of every directed edge in $B_{\xi}^i \times A_{\xi'}^j$ are identical. We define the weight of $(b_{\xi}^i,a_{\xi'}^j)$ to be ${\sf c}(a,b)$ for any $(a,b) \in A_{\xi'}^j \times B_{\xi}^i$. We also define the slack $s(b_{\xi}^i,a_{\xi'}^j) = {\sf c}(b_{\xi}^i,a_{\xi'}^j) -y(b_{\xi}^i)+y(a_{\xi'}^j)$, which is equal to the slack $s(a,b)$.
\end{itemize}
For each vertex in $\mathcal{A}\cup \mathcal{B}$, we added a constant number of edges to every cell $\xi' \in N(\xi)$. Since $N(\xi) = \mathcal{O}(1/\varepsilon^2)$, the total number of edges in $\mathcal{E}$ is $\mathcal{O}(n/\varepsilon^2)$. For a cell $\Box$ in $\mathbb{G}'$, let $\mathcal{A}_{\Box}$ be the points of $\mathcal{A}$ generated by cells of $\mathbb{G}$ that are contained inside the cell $\Box$; define $\mathcal{B}_{\Box}$ analogously. A piece $K_\Box$ has $\mathcal{A}_{\Box} \cup \mathcal{B}_{\Box}$ as the vertex set and $\mathcal{E}_{\Box}=((\mathcal{A}_{\Box}\times\mathcal{B}_{\Box})\cup (\mathcal{B}_{\Box}\times \mathcal{A}_{\Box}))\cap \mathcal{E}$ as the edge set. Note that the number of vertices in any piece $K_{\Box}$ is $\mathcal{O}(r)$ and the number of edges in $K_{\Box}$ is $\mathcal{O}(r/\varepsilon^2)$. Every edge $(u,v)$ of any piece $K_{\Box}$ has a weight ${\sf c}(u,v)=0$, and every edge $(u,v)$ with a weight of zero belongs to some piece of $\mathcal{CG}$. The following lemma shows that the compact graph $\mathcal{CG}$ preserves all minimum slack paths in $\mathcal{G}_M$.
\begin{lemma}
\label{lem:compactprop}
For any directed path $\mathcal{P}$ in the compact residual network $\mathcal{CG}$, there is a directed path $P$ in the residual network such that $\sum_{(u,v)\in P}s(u,v) = \sum_{(u,v)\in \mathcal{P}} s(u,v)$. For any directed path $P$ in $\mathcal{G}_M$, there is a directed path $\mathcal{P}$ in the compact residual network such that $\sum_{(u,v) \in P} s(u,v) \ge \sum_{(u,v)\in\mathcal{P}}s(u,v).$
\end{lemma}
\subparagraph*{Preprocessing step:}
At the start, $M= \emptyset$ and all dual weights are $0$. Consider any cell $\Box$ of the grid $\mathbb{G}'$ and any cell $\xi$ of $\mathbb{G}$ that is contained inside $\Box$. Suppose we have a point $a_{\xi}^1$. We assign a demand $d_{a_\xi^1}=|A_{\xi}^1|=|A_{\xi}|$ to $a_{\xi}^1$. Similarly, if we have a point $b_{\xi}^1$, we assign a supply $s_{b_\xi^1}=|B_{\xi}^1|=|B_{\xi}|$ to it. The preprocessing step reduces to finding a maximum matching of supplies to demands. This is an instance of the unweighted transportation problem, which can be solved using the algorithm of~\cite{sidford} in $\tilde{\mathcal{O}}(|\mathcal{E}_{\Box}|\sqrt{|\mathcal{A}_{\Box}\cup \mathcal{B}_{\Box}|}) = \tilde{\mathcal{O}}(|\mathcal{E}_{\Box}|\sqrt{r})$ time. Every edge of $\mathcal{E}$ participates in at most one piece.
Therefore, the total time taken for preprocessing across all pieces is $\tilde{\mathcal{O}}(|\mathcal{E}|\sqrt{r})=\tilde{\mathcal{O}}(n\sqrt{r}/\varepsilon^2)$. We can trivially convert the matching of supplies to demands to a matching in $\mathcal{G}$.
\subparagraph*{Efficient implementation of the second step:}
Recollect that the second step of the algorithm consists of phases. Each phase has two stages. In the first stage, we execute Dijkstra's algorithm in $\mathcal{O}(n\log n/\varepsilon^2)$ time by using the compact residual network $\mathcal{CG}$. After adjusting the dual weights of nodes in the compact graph, in the second stage, we iteratively compute augmenting paths of admissible edges by conducting a DFS from each vertex. Our implementation of DFS has the following differences from the one described in Section~\ref{sec:graphmatch}.
\begin{itemize}
\item Recollect that each free vertex $v \in \mathcal{B}$ may represent a cluster that has $t>0$ free vertices. We will execute DFS from $v$ exactly $t$ times, once for each free vertex of $B$ in the cluster.
\item During the execution of any DFS, unlike the algorithm described in Section~\ref{sec:graphmatch}, the DFS will mark an edge as visited only when it backtracks from the edge. Due to this change, all edges on the path maintained by the DFS remain unvisited. Therefore, unlike the algorithm from Section~\ref{sec:graphmatch}, this algorithm will not discard weight $1$ edges of an augmenting path after augmentation. From Lemma~\ref{lem:numberofphases}, the total number of these edges is $\mathcal{O}(w\log w)$.
\end{itemize}
\subparagraph*{Efficiency:}
The first stage is an execution of Dijkstra's algorithm which takes $\mathcal{O}(|\mathcal{E}|+|\mathcal{V}|\log |\mathcal{V}|)=\mathcal{O}(n\log n/\varepsilon^2)$ time. Suppose there are $\lambda$ phases; then the cumulative time taken across all phases for the first stage is $\tilde{\mathcal{O}}(\lambda n/\varepsilon^2)$. In the second stage of the algorithm, in each phase, every edge is discarded once it is visited by a DFS, unless it is in an affected piece or it is an edge of weight $1$ on an augmenting path. Since each affected piece has $\mathcal{O}(r/\varepsilon^2)$ edges, and since there are $\mathcal{O}(w \log w)$ edges of weight $1$ on the computed augmenting paths, the total time taken by all the DFS searches across all the $\lambda$ phases is bounded by $\tilde{\mathcal{O}}(n\lambda/\varepsilon^2 + (r/\varepsilon^2)\sum_{i=1}^t |\mathbb{K}_i| +w\log w)$. In Lemma~\ref{lem:numberofphases}, we bound $\lambda$ by $\sqrt{w}$ and $\sum_{i=1}^t |\mathbb{K}_i|$ by $\mathcal{O}(w\log w)$. Therefore, the total time taken by the algorithm, including the time taken by the preprocessing step, is $\tilde{\mathcal{O}}((n/\varepsilon^2)(\sqrt{r}+\sqrt{w}+\frac{wr}{n}))$. Setting $r=n^{2/3}$, we get $w=\mathcal{O}(n/(\varepsilon\sqrt{r})) = \mathcal{O}(n^{2/3}/\varepsilon)$, and the total running time of our algorithm is $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^3)$. To obtain the bottleneck matching, we execute this algorithm on $\mathcal{O}(\log n/\varepsilon)$ guesses; therefore, the total time taken to compute an $\varepsilon$-approximate bottleneck matching is $\tilde{\mathcal{O}}(n^{4/3}/\varepsilon^4)$. For $d > 2$, we choose $r=n^{\frac{d}{2d-1}}$ and $w=\mathcal{O}(n/(d\varepsilon r^{1/d}))$. With these values, the execution time of our algorithm is $\frac{1}{\varepsilon^{\mathcal{O}(d)}}n^{1+\frac{d-1}{2d-1}}\mathrm{poly}\log n$.
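For concreteness, the two-dimensional parameter choice unwinds as follows (our arithmetic, suppressing polylogarithmic factors): with $r = n^{2/3}$ and $w = \mathcal{O}(n^{2/3}/\varepsilon)$,
\begin{equation*}
\sqrt{r} = n^{1/3}, \qquad \sqrt{w} = \mathcal{O}\big(n^{1/3}/\sqrt{\varepsilon}\big), \qquad \frac{wr}{n} = \mathcal{O}\big(n^{1/3}/\varepsilon\big),
\end{equation*}
and the last term dominates for $\varepsilon \le 1$, so $(n/\varepsilon^2)(\sqrt{r}+\sqrt{w}+\frac{wr}{n}) = \mathcal{O}(n^{4/3}/\varepsilon^3)$, as claimed.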
\section{Introduction}
In this article, all dimer quivers are nondegenerate and embed in a two-torus. A dimer algebra $A$ is noetherian if and only if its center $Z$ is noetherian, if and only if $A$ is a noncommutative crepant resolution of its 3-dimensional toric Gorenstein center \cite{B4, Br, D}. We show that the center $Z$ of a nonnoetherian dimer algebra is also $3$-dimensional, and may be viewed as the coordinate ring for a toric Gorenstein singularity that has precisely one `smeared-out' point of positive geometric dimension. This is made precise using the notion of a depiction, which is a finitely generated overring that is as close as possible to $Z$, in a suitable geometric sense (Definition \ref{depiction def}). Denote by $\operatorname{nil}Z$ the nilradical of $Z$, and by $\hat{Z} := Z/\operatorname{nil}Z$ the reduced ring of $Z$. Our main theorem is the following.
\begin{Theorem} \label{big theorem2} (Theorems \ref{generically noetherian}, \ref{hopefully...}.)
Let $A$ be a nonnoetherian dimer algebra with center $Z$, and let $\Lambda := A/\left\langle p - q \ | \ \text{$p,q$ a non-cancellative pair} \right\rangle$ be its homotopy algebra with center $R$.
\begin{enumerate}
\item The nonnoetherian rings $Z$, $\hat{Z}$, and $R$ each have Krull dimension 3, and the integral domains $\hat{Z}$ and $R$ are depicted by the cycle algebras of $A$ and $\Lambda$.
\item The reduced scheme of $\operatorname{Spec}Z$ and the scheme $\operatorname{Spec}R$ are birational to a noetherian affine scheme, and each contain precisely one closed point of positive geometric dimension.
\end{enumerate}
\end{Theorem}
Dimer models were introduced in string theory in 2005 in the context of brane tilings \cite{HK, FHVWK}. The dimer algebra description of the combinatorial data of a brane tiling arose from the notion of a superpotential algebra (or quiver with potential), which was introduced a few years earlier in \cite{BD}. Stable (i.e., `superconformal') brane tilings quickly made their way to the mathematics side, but the more difficult study of unstable brane tilings was largely left open, in regard to both their mathematical and physical properties. There were two main difficulties: (i) in contrast to the stable case, the `mesonic chiral ring' (closely related to what we call the cycle algebra\footnote{The mesonic chiral ring is the ring of gauge invariant operators; these are elements of the dimer algebra that are invariant under isomorphic representations, and are thus cycles in the quiver.}) did not coincide with the center of the dimer algebra, and (ii) although the mesonic chiral ring still appeared to be a nice ring, the center certainly was not. The center is supposed to be the coordinate ring for an affine patch on the extra six compact dimensions of spacetime, the so-called (classical) vacuum geometry. But examples quickly showed that the center of an unstable brane tiling could be infinitely generated. To say that the vacuum geometry was a nonnoetherian scheme -- something believed to have no visual representation or concrete geometric interpretation -- was not quite satisfactory from a physics perspective. However, unstable brane tilings are physically allowable theories. To make matters worse, almost all brane tilings are unstable, and it is only in the case of a certain uniform symmetry (an `isoradial embedding') that they become stable. Moreover, in the context of 11-dimensional M-theory, stable and unstable brane tilings are equally `good'.
The question thus remained: \begin{center} \textit{What does the vacuum geometry of an unstable brane tiling look like?} \end{center} The aim of this article is to provide an answer. In short, the vacuum geometry of an unstable brane tiling looks just like the vacuum geometry of a stable brane tiling, namely a $3$-dimensional complex cone, except that there is precisely one curve or surface passing through the apex of the cone that is identified as a single `smeared-out' point. \section{Preliminary definitions} Throughout, $k$ is an uncountable algebraically closed field. Given a quiver $Q$, we denote by $kQ$ the path algebra of $Q$, and by $Q_{\ell}$ the paths of length $\ell$. The vertex idempotent at vertex $i \in Q_0$ is denoted $e_i$, and the head and tail maps are denoted $\operatorname{h},\operatorname{t}: Q_1 \to Q_0$. By monomial, we mean a non-constant monomial. \subsection{Dimer algebras, homotopy algebras, and cyclic contractions} \begin{Definition} \label{dimer def} \rm{ \ $\bullet$ Let $Q$ be a finite quiver whose underlying graph $\overbar{Q}$ embeds into a real two-torus $T^2$, such that each connected component of $T^2 \setminus \overbar{Q}$ is simply connected and bounded by an oriented cycle of length at least $2$, called a \textit{unit cycle}.\footnote{In forthcoming work, we consider the nonnoetherian central geometry of homotopy algebras on higher genus surfaces. Dimer quivers on other surfaces arise in contexts such as Belyi maps, cluster categories, and bipartite field theories; see e.g., \cite{BGH, BKM, FGU}.} The \textit{dimer algebra} of $Q$ is the quiver algebra $A := kQ/I$ with relations $$I := \left\langle p - q \ | \ \exists \ a \in Q_1 \text{ such that } pa \text{ and } qa \text{ are unit cycles} \right\rangle \subset kQ,$$ where $p$ and $q$ are paths. Since $I$ is generated by certain differences of paths, we may refer to a path modulo $I$ as a \textit{path} in the dimer algebra $A$. $\bullet$ Two paths $p,q \in A$ form a \textit{non-cancellative pair} if $p \not = q$, and there is a path $r \in kQ/I$ such that $$rp = rq \not = 0 \ \ \text{ or } \ \ pr = qr \not = 0.$$ $A$ and $Q$ are called \textit{non-cancellative} if there is a non-cancellative pair; otherwise they are called \textit{cancellative}. $\bullet$ The \textit{homotopy algebra} of $Q$ is the quotient $$\Lambda := A/\left\langle p - q \ | \ p,q \text{ is a non-cancellative pair} \right\rangle.$$ A dimer algebra $A$ coincides with its homotopy algebra if and only if $A$ is cancellative, if and only if $A$ is noetherian, if and only if $\Lambda$ is noetherian \cite[Theorem 1.1]{B4}. $\bullet$ Let $A$ be a dimer algebra with quiver $Q$. \begin{itemize} \item[--] A \textit{perfect matching} $D \subset Q_1$ is a set of arrows such that each unit cycle contains precisely one arrow in $D$. \item[--] A \textit{simple matching} $D \subset Q_1$ is a perfect matching such that $Q \setminus D$ supports a simple $A$-module of dimension $1^{Q_0}$ (that is, $Q \setminus D$ contains a cycle that passes through each vertex of $Q$). Denote by $\mathcal{S}$ the set of simple matchings of $A$. \item[--] $A$ is said to be \textit{nondegenerate} if each arrow of $Q$ belongs to a perfect matching.\footnote{For our purposes, it suffices to assume that each cycle contains an arrow that belongs to a perfect matching; see \cite{B1}.} \end{itemize} Each perfect matching $D$ defines a map $$n_D: Q_{\geq 0} \to \mathbb{Z}_{\geq 0}$$ that sends path $p$ to the number of arrow subpaths of $p$ that are contained in $D$. 
$n_D$ is additive on concatenated paths, and if $p,p' \in Q_{\geq 0}$ are paths satisfying $p + I = p' + I$, then $n_D(p) = n_D(p')$. In particular, $n_D$ induces a well-defined map on the paths of $A$. }\end{Definition} Now consider dimer algebras $A = kQ/I$ and $A' = kQ'/I'$, and suppose $Q'$ is obtained from $Q$ by contracting a set of arrows $Q_1^* \subset Q_1$ to vertices. This contraction defines a $k$-linear map of path algebras $$\psi: kQ \to kQ'.$$ If $\psi(I) \subseteq I'$, then $\psi$ induces a $k$-linear map of dimer algebras, called a \textit{contraction}, $$\psi: A \to A'.$$ Denote by $$B := k\left[x_D : D \in \mathcal{S}' \right]$$ the polynomial ring generated by the simple matchings $\mathcal{S}'$ of $A'$. To each path $p \in A'$, associate the monomial $$\bar{\tau}(p) := \prod_{D \in \mathcal{S}'} x_D^{n_D(p)} \in B.$$ For each $i,j \in Q'_0$, this association may be extended to a $k$-linear map $\bar{\tau}: e_jA'e_i \to B$, which is an algebra homomorphism if $i = j$, and injective if $A'$ is cancellative \cite[Proposition 4.29]{B2}. Given $p \in e_jAe_i$ and $q \in e_{\ell}A'e_k$, we will write $$\overbar{p} := \bar{\tau}_{\psi}(p) := \bar{\tau}(\psi(p)) \ \ \ \text{ and } \ \ \ \overbar{q} := \bar{\tau}(q).$$ $\psi$ is called a \textit{cyclic contraction} if $A'$ is cancellative and \begin{equation*} \label{cycle algebra} S := k \left[ \cup_{i \in Q_0} \bar{\tau}_{\psi}(e_iAe_i) \right] = k \left[ \cup_{i \in Q'_0} \bar{\tau}(e_iA'e_i) \right] =: S'. \end{equation*} In this case, we call $S$ the \textit{cycle algebra} of $A$. The cycle algebra is independent of the choice of cyclic contraction $\psi$ \cite[Theorem 3.14]{B3}, and is isomorphic to the center of $A'$ \cite[Theorem 1.1.3]{B2}. Moreover, every nondegenerate dimer algebra admits a cyclic contraction \cite[Theorem 1.1]{B1}. In addition to the cycle algebra, the \textit{homotopy center} of $A$, $$R := k\left[ \cap_{i \in Q_0} \bar{\tau}_{\psi}(e_iAe_i) \right],$$ also plays an important role. This algebra is isomorphic to the center of the homotopy algebra $\Lambda$ of $Q$ \cite[Theorem 1.1.3]{B2}. \begin{Notation} \rm{ Let $\pi: \mathbb{R}^2 \rightarrow T^2$ be a covering map such that for some $i \in Q_0$, $$\pi\left(\mathbb{Z}^2 \right) = i.$$ Denote by $Q^+ := \pi^{-1}(Q) \subset \mathbb{R}^2$ the covering quiver of $Q$. For each path $p$ in $Q$, denote by $p^+$ the unique path in $Q^+$ with tail in the unit square $[0,1) \times [0,1) \subset \mathbb{R}^2$ satisfying $\pi(p^+) = p$. For $u \in \mathbb{Z}^2$, denote by $\mathcal{C}^u$ the set of cycles $p$ in $A$ such that $$\operatorname{h}(p^+) = \operatorname{t}(p^+) + u \in Q_0^+.$$ }\end{Notation} \begin{Notation} \rm{ We denote by $\sigma_i \in A$ the unique unit cycle (modulo $I$) at $i \in Q_0$, and by $\sigma$ the monomial $$\sigma := \overbar{\sigma}_i = \prod_{D \in \mathcal{S}'}x_D.$$ The sum $\sum_{i \in Q_0} \sigma_i$ is a central element of $A$. }\end{Notation} \begin{Lemma} \label{cyclelemma} \ \begin{enumerate} \item If $p \in \mathcal{C}^{(0,0)}$ is nontrivial, then $\overbar{p} = \sigma^n$ for some $n \geq 1$. \item Let $u \in \mathbb{Z}^2$ and $p,q \in \mathcal{C}^u$. Then $\overbar{p} = \overbar{q} \sigma^n$ for some $n \in \mathbb{Z}$. \end{enumerate} \end{Lemma} \begin{proof} (1) is \cite[Lemma 5.2.1]{B2}, and (2) is \cite[Lemma 5.2]{B6}. 
\end{proof} \subsection{Nonnoetherian geometry: depictions and geometric dimension} Let $S$ be an integral domain and a finitely generated $k$-algebra, and let $R$ be a (possibly nonnoetherian) subalgebra of $S$. Denote by $\operatorname{Max}S$, $\operatorname{Spec}S$, and $\operatorname{dim}S$ the maximal spectrum (or variety), prime spectrum (or affine scheme), and Krull dimension of $S$ respectively; similarly for $R$. For a subset $I \subset S$, set $\mathcal{Z}_S(I) := \left\{ \mathfrak{n} \in \operatorname{Max}S \ | \ \mathfrak{n} \supseteq I \right\}$. \begin{Definition} \label{depiction def} \rm{\cite[Definition 3.1]{B2} \begin{itemize} \item We say $S$ is a \textit{depiction} of $R$ if the morphism $$\iota_{S/R}: \operatorname{Spec}S \rightarrow \operatorname{Spec}R, \ \ \ \ \mathfrak{q} \mapsto \mathfrak{q} \cap R,$$ is surjective, and \begin{equation} \label{condition} \left\{ \mathfrak{n} \in \operatorname{Max}S \ | \ R_{\mathfrak{n}\cap R} = S_{\mathfrak{n}} \right\} = \left\{ \mathfrak{n} \in \operatorname{Max}S \ | \ R_{\mathfrak{n} \cap R} \text{ is noetherian} \right\} \not = \emptyset. \end{equation} \item The \textit{geometric height} of $\mathfrak{p} \in \operatorname{Spec}R$ is the minimum $$\operatorname{ght}(\mathfrak{p}) := \operatorname{min} \left\{ \operatorname{ht}_S(\mathfrak{q}) \ | \ \mathfrak{q} \in \iota^{-1}_{S/R}(\mathfrak{p}), \ S \text{ a depiction of } R \right\}.$$ The \textit{geometric dimension} of $\mathfrak{p}$ is $$\operatorname{gdim} \mathfrak{p} := \operatorname{dim}R - \operatorname{ght}(\mathfrak{p}).$$ \end{itemize} } \end{Definition} We will denote the subsets (\ref{condition}) of the algebraic variety $\operatorname{Max}S$ by \begin{equation*} \label{U U*} U_{S/R} := \left\{ \mathfrak{n} \in \operatorname{Max}S \ | \ R_{\mathfrak{n}\cap R} = S_{\mathfrak{n}} \right\}, \ \ \ \ U^*_{S/R} := \left\{ \mathfrak{n} \in \operatorname{Max}S \ | \ R_{\mathfrak{n} \cap R} \text{ is noetherian} \right\}. \end{equation*} The subset $U_{S/R}$ is open in $\operatorname{Max}S$ \cite[Proposition 2.4.2]{B5}. \begin{Example} \rm{ Let $S = k[x,y]$, and consider its nonnoetherian subalgebra $$R = k[x,xy,xy^2, \ldots] = k + xS.$$ $R$ is then depicted by $S$, and the closed point $xS \in \operatorname{Max}R$ has geometric dimension 1 \cite[Proposition 2.8 and Example 2.20]{B2}. Furthermore, $U_{S/R}$ is the complement of the line $$\mathcal{Z}(x) = \left\{ x = 0 \right\} \subset \operatorname{Max}S.$$ In particular, $\operatorname{Max}R$ may be viewed as 2-dimensional affine space $\mathbb{A}_k^2 = \operatorname{Max}S$ with the line $\mathcal{Z}(x)$ identified as a single `smeared-out' point. From this perspective, $xS$ is a positive dimensional point of $\operatorname{Max}R$. }\end{Example} In the next section, we will show that the reduced center and homotopy center of a nonnoetherian dimer algebra are both depicted by its cycle algebra, and both contain precisely one point of positive geometric dimension. \section{Proof of main theorem} Throughout, $A$ is a nonnoetherian dimer algebra with center $Z$ and reduced center $\hat{Z} := Z/\operatorname{nil}Z$. By assumption $A$ is nondegenerate, and thus there is a cyclic contraction $\psi: A \to A'$ to a noetherian dimer algebra $A'$ \cite[Theorem 1.1]{B1}. 
The center $Z'$ of $A'$ is isomorphic to the cycle algebra $S$ \cite[Theorem 1.1.3]{B2}, and the reduced center $\hat{Z}$ of $A$ is isomorphic to a subalgebra of $R$ \cite[Theorem 4.1]{B6}.\footnote{It is often the case that $\hat{Z}$ is isomorphic to $R$; an example where $\hat{Z} \not \cong R$ is given in \cite[Example 4.3]{B6}.} We may therefore write
\begin{equation} \label{ZKA}
\hat{Z} \subseteq R.
\end{equation}
The following structural results will be useful.
\begin{Lemma} \label{cyclelemma2}
Let $g \in B$ be a monomial.
\begin{enumerate}
\item If $g \in R$, $h \in S$ are monomials and $g \not = \sigma^n$ for each $n \geq 0$, then $gh \in R$.
\item If $g \in R$ and $\sigma \nmid g$, then $g \in \hat{Z}$.
\item If $g \in R$, then there is some $m \geq 1$ such that $g^m \in \hat{Z}$.
\item If $g \in S$, then there is some $m \geq 0$ such that for each $n \geq 1$, $g^n \sigma^m \in \hat{Z}$.
\item If $g \sigma \in S$, then $g \in S$.
\item There is a monomial $h \in S \setminus R$ such that $\sigma \nmid h$, and $h^n \not \in R$ for each $n \geq 1$.
\end{enumerate}
\end{Lemma}
\begin{proof}
(1) is \cite[Lemma 6.1]{B6}; (2)--(4) are \cite[Lemma 5.3]{B6}; (5) is \cite[Lemma 4.18]{B2} for $u \not = (0,0)$ and Lemma \ref{cyclelemma}.1 for $u = (0,0)$; and (6) is \cite[Proposition 3.13]{B4}.
\end{proof}
\begin{Lemma} \label{S noetherian}
The cycle algebra $S$ is a finite type integral domain.
\end{Lemma}
\begin{proof}
The cycle algebra $S$ is generated by the $\bar{\tau}_{\psi}$-images of cycles in $Q$ with no nontrivial cyclic proper subpaths. Since $Q$ is finite, there is only a finite number of such cycles. Therefore $S$ is a finitely generated $k$-algebra. Moreover, $S$ is an integral domain since it is a subalgebra of the polynomial ring $B$.
\end{proof}
It is well-known that the Krull dimension of the center of any cancellative dimer algebra (on a torus) is $3$ (e.g.\ \cite{Br}). The isomorphism $S \cong Z'$ therefore implies that the Krull dimension of the cycle algebra $S$ is $3$. In the following we give a new and independent proof of this result.
\begin{Lemma} \label{Jess}
The cycle algebra $S$ has Krull dimension $3$.
\end{Lemma}
\begin{proof}
Fix $j \in Q'_0$ and cycles in $e_jA'e_j$,
\begin{equation} \label{uv}
s_1 \in \mathcal{C}'^{(1,0)}, \ \ \ \ t_1 \in \mathcal{C}'^{(-1,0)}, \ \ \ \ s_2 \in \mathcal{C}'^{(0,1)}, \ \ \ \ t_2 \in \mathcal{C}'^{(0,-1)}.
\end{equation}
Consider the algebra
$$T := k[\sigma, \overbar{s}_1,\overbar{s}_2,\overbar{t}_1,\overbar{t}_2 ] \subseteq S' \stackrel{\textsc{(i)}}{=} S,$$
where (\textsc{i}) holds since the contraction $\psi$ is cyclic. Since $A'$ is cancellative, if
$$p \in \mathcal{C}'^u \ \ \ \text{ and } \ \ \ q \in \mathcal{C}'^v$$
are cycles in $A'$ satisfying $\overbar{p} = \overbar{q}$, then $u = v$ \cite[Lemma 3.9]{B4}. Thus there are no relations among the monomials $\overbar{s}_1,\overbar{s}_2,\overbar{t}_1,\overbar{t}_2$, by our choice of cycles (\ref{uv}). However, by Lemma \ref{cyclelemma}.1, there are integers $n_1,n_2 \geq 1$ such that
\begin{equation} \label{siti}
\overbar{s}_1 \overbar{t}_1 = \sigma^{n_1} \ \ \ \text{ and } \ \ \ \overbar{s}_2 \overbar{t}_2 = \sigma^{n_2}.
\end{equation}
(i) We claim that $\operatorname{dim}T = 3$. Since $T$ is a finite type integral domain, the variety $\operatorname{Max}T$ is equidimensional.
It thus suffices to show that the chain of ideals of $T$,
\begin{equation} \label{hope}
0 \subset (\sigma, \overbar{s}_1,\overbar{s}_2) \subseteq (\sigma, \overbar{s}_1, \overbar{s}_2, \overbar{t}_1) \subseteq (\sigma, \overbar{s}_1,\overbar{s}_2,\overbar{t}_1,\overbar{t}_2 ),
\end{equation}
is a maximal chain of distinct primes. The inclusions in (\ref{hope}) are strict since the relations among the monomial generators are generated by the two relations (\ref{siti}). Moreover, (\ref{hope}) is a maximal chain of primes of $T$: Suppose $\overbar{s}_i$ is in a prime $\mathfrak{p}$ of $T$. Then $\sigma$ is in $\mathfrak{p}$, by (\ref{siti}). Whence $\overbar{s}_{i+1}$ or $\overbar{t}_{i+1}$ is also in $\mathfrak{p}$, again by (\ref{siti}). Thus $(\sigma, \overbar{s}_1,\overbar{s}_2)$ is a minimal prime of $T$.
(ii) We now claim that $\operatorname{dim}S = \operatorname{dim}T$. By Lemma \ref{cyclelemma}.2, we have
$$S[\sigma^{-1}] = T[\sigma^{-1}].$$
Furthermore, $S$ and $T$ are finite type integral domains, by Lemma \ref{S noetherian}. Thus $\operatorname{Max}S$ and $\operatorname{Max}T$ are irreducible algebraic varieties that are isomorphic on their open dense sets $\{\sigma \not = 0\}$. Therefore $\operatorname{dim}S = \operatorname{dim}T$.
\end{proof}
\begin{Corollary}
The Krull dimension of the center of a noetherian dimer algebra is $3$.
\end{Corollary}
\begin{proof}
Follows from Lemma \ref{Jess} and the isomorphism $Z' \cong S$.
\end{proof}
\begin{Lemma} \label{max ideal}
The morphisms
\begin{equation} \label{max surjective}
\begin{array}{rcl}
\kappa_{S/\hat{Z}}: \operatorname{Max}S \to \operatorname{Max}\hat{Z}, & \ & \mathfrak{n} \mapsto \mathfrak{n} \cap \hat{Z},\\
\kappa_{S/R}: \operatorname{Max}S \to \operatorname{Max}R, & & \mathfrak{n} \mapsto \mathfrak{n} \cap R,
\end{array}
\end{equation}
and
$$\begin{array}{rcl}
\iota_{S/\hat{Z}}: \operatorname{Spec}S \to \operatorname{Spec}\hat{Z}, & \ & \mathfrak{q} \mapsto \mathfrak{q} \cap \hat{Z},\\
\iota_{S/R}: \operatorname{Spec}S \to \operatorname{Spec}R, & & \mathfrak{q} \mapsto \mathfrak{q} \cap R,
\end{array}$$
are well-defined and surjective.
\end{Lemma}
\begin{proof}
(i) We first claim that $\kappa_{S/\hat{Z}}$ and $\kappa_{S/R}$ are well-defined maps. Indeed, let $\mathfrak{n} \in \operatorname{Max}S$. By Lemma \ref{S noetherian}, $S$ is of finite type, and by assumption $k$ is algebraically closed. Therefore the intersections $\mathfrak{n} \cap \hat{Z}$ and $\mathfrak{n} \cap R$ are maximal ideals of $\hat{Z}$ and $R$ respectively (e.g., \cite[Lemma 2.1]{B5}).
(ii) We claim that $\kappa_{S/\hat{Z}}$ and $\kappa_{S/R}$ are surjective. Let $\mathfrak{m} \in \operatorname{Max}\hat{Z}$. Then $S\mathfrak{m}$ is a proper ideal of $S$ since $S$ is a subalgebra of the polynomial ring $B$. Thus, since $S$ is noetherian, there is a maximal ideal $\mathfrak{n} \in \operatorname{Max}S$ containing $S\mathfrak{m}$. Whence,
$$\mathfrak{m} \subseteq S\mathfrak{m} \cap \hat{Z} \subseteq \mathfrak{n} \cap \hat{Z}.$$
But $\mathfrak{n} \cap \hat{Z}$ is a maximal ideal of $\hat{Z}$ by Claim (i). Therefore $\mathfrak{m} = \mathfrak{n} \cap \hat{Z}$, and so $\kappa_{S/\hat{Z}}$ is surjective. Similarly, $\kappa_{S/R}$ is surjective.
(iii) We claim that $\iota_{S/\hat{Z}}$ and $\iota_{S/R}$ are well-defined maps. Let $\mathfrak{q} \in \operatorname{Spec}S$. The composition $R \hookrightarrow S \to S/\mathfrak{q}$ has kernel $\mathfrak{q} \cap R$. But $\mathfrak{q}$ is prime, and so $S/\mathfrak{q}$ is an integral domain. Whence $R/(\mathfrak{q} \cap R)$ is isomorphic to a subalgebra of $S/\mathfrak{q}$, and is therefore an integral domain.
Thus $\mathfrak{q} \cap R$ is a prime ideal of $R$. The same argument shows that $\mathfrak{q} \cap \hat{Z}$ is a prime ideal of $\hat{Z}$.
(iv) Finally, we claim that $\iota_{S/\hat{Z}}$ and $\iota_{S/R}$ are surjective. By \cite[Lemma 3.6]{B5}, if $Q$ is a finitely generated algebra over an uncountable field $k$, and $P \subseteq Q$ is a subalgebra, then $\iota_{Q/P}: \operatorname{Spec}Q \to \operatorname{Spec}P$ is surjective if and only if $\kappa_{Q/P}:\operatorname{Max}Q \to \operatorname{Max}P$ is surjective. Therefore, $\iota_{S/\hat{Z}}$ and $\iota_{S/R}$ are surjective by Claim (ii).
\end{proof}
\begin{Lemma} \label{sigma in m}
If $\mathfrak{p} \in \operatorname{Spec}\hat{Z}$ contains a monomial, then $\mathfrak{p}$ contains $\sigma$.
\end{Lemma}
\begin{proof}
Suppose $\mathfrak{p}$ contains a monomial $g$. Then there is a nontrivial cycle $p$ such that $\overbar{p} = g$. Let $q^+$ be a path from $\operatorname{h}(p^+)$ to $\operatorname{t}(p^+)$. Then the concatenated path $(pq)^+$ is a cycle in $Q^+$. Thus, there is some $n \geq 1$ such that $\overbar{p}\overbar{q} = \sigma^n$, by Lemma \ref{cyclelemma}.1. By Lemma \ref{max ideal}, there is a prime ideal $\mathfrak{q} \in \operatorname{Spec}S$ such that $\mathfrak{q} \cap \hat{Z} = \mathfrak{p}$. Furthermore, $\overbar{p} \overbar{q} = \sigma^n$ is in $\mathfrak{q}$ since $\overbar{p} \in \mathfrak{p}$ and $\overbar{q} \in S$. Whence $\sigma$ is also in $\mathfrak{q}$ since $\mathfrak{q}$ is prime. But $\sigma \in \hat{Z}$. Therefore $\sigma \in \mathfrak{q} \cap \hat{Z} = \mathfrak{p}$.
\end{proof}
Denote the origin of $\operatorname{Max}S$ by
$$\mathfrak{n}_0 := \left( \overbar{s} \in S \ | \ s \text{ a nontrivial cycle} \right)S \in \operatorname{Max}S.$$
Consider the maximal ideals of $\hat{Z}$ and $R$ respectively,
$$\mathfrak{z}_0 := \mathfrak{n}_0 \cap \hat{Z} \ \ \text{ and } \ \ \mathfrak{m}_0 := \mathfrak{n}_0 \cap R.$$
\begin{Proposition} \label{local nonnoetherian}
The localizations $\hat{Z}_{\mathfrak{z}_0}$ and $R_{\mathfrak{m}_0}$ are nonnoetherian.
\end{Proposition}
\begin{proof}
There is a monomial $g \in S \setminus R$ such that $g^n \not \in R$ for each $n \geq 1$, by Lemma \ref{cyclelemma2}.6.
(i) Fix $n \geq 1$. We claim that $g^n$ is not in the localization $R_{\mathfrak{m}_0}$. Assume otherwise; then there are $b_1 \in R$, $b_2 \in \mathfrak{m}_0$, and $\beta \in k^{\times}$ such that
$$g^n = \frac{b_1}{\beta + b_2},$$
since $\mathfrak{m}_0$ is a maximal ideal of $R$. Whence,
$$b_2 g^n = b_1 - \beta g^n.$$
Consequently, $-\beta g^n$ is a monomial summand of $b_2g^n$, since $b_1$ is in $R$ and $\beta g^n$ is not. But this is not possible since $b_2$ is in $\mathfrak{m}_0$ and $B$ is a polynomial ring, a contradiction.
(ii) We now claim that $R_{\mathfrak{m}_0}$ is nonnoetherian. By Lemma \ref{cyclelemma2}.4, there is an $m \geq 1$ such that for each $n \geq 1$,
$$h_n := g^n\sigma^m \in R.$$
Consider the chain of ideals of $R_{\mathfrak{m}_0}$,
$$0 \subset (h_1) \subseteq (h_1, h_2) \subseteq (h_1, h_2, h_3) \subseteq \ldots.$$
Assume to the contrary that the chain stabilizes. Then there is an $N \geq 1$ such that
$$h_N = \sum_{n=1}^{N-1} c_nh_n,$$
with $c_n \in R_{\mathfrak{m}_0}$. Thus, since $R$ is an integral domain,
$$g^N = \sum_{n=1}^{N-1}c_ng^n.$$
Furthermore, since $R$ is a subalgebra of the polynomial ring $B$ and $g$ is a monomial, there is some $1 \leq n \leq N-1$ such that
$$c_n = g^{N-n} + b,$$
with $b \in R_{\mathfrak{m}_0}$. But then $g^{N-n} = c_n - b \in R_{\mathfrak{m}_0}$, contrary to Claim (i).
(iii) Similarly, $\hat{Z}_{\mathfrak{z}_0}$ is nonnoetherian.
\end{proof} \begin{Lemma} \label{contains a non-constant monomial} Suppose that each monomial in $\hat{Z}$ is divisible (in $B$) by $\sigma$. If $\mathfrak{p} \in \operatorname{Spec}\hat{Z}$ contains a monomial, then $\mathfrak{p} = \mathfrak{z}_0$. \end{Lemma} \begin{proof} Suppose $\mathfrak{p} \in \operatorname{Spec}\hat{Z}$ contains a monomial. Then $\sigma$ is in $\mathfrak{p}$ by Lemma \ref{sigma in m}. Furthermore, there is some $\mathfrak{q} \in \operatorname{Spec}S$ such that $\mathfrak{q} \cap \hat{Z} = \mathfrak{p}$, by Lemma \ref{max ideal}. Suppose $g$ is a monomial in $\hat{Z}$. By assumption, there is a monomial $h$ in $B$ such that $g = \sigma h$. By Lemma \ref{cyclelemma2}.5, $h$ is also in $S$. Whence $g = \sigma h \in \mathfrak{q}$ since $\sigma \in \mathfrak{p} \subseteq \mathfrak{q}$. But $g \in \hat{Z}$. Therefore $g \in \mathfrak{q} \cap \hat{Z} = \mathfrak{p}$. Since $g$ was arbitrary, $\mathfrak{p}$ contains all monomials in $\hat{Z}$. \end{proof} \begin{Remark} \rm{ In Lemma \ref{contains a non-constant monomial}, we assumed that $\sigma$ divides all monomials in $\hat{Z}$. An example of a dimer algebra with this property is given in Figure \ref{sigma divides all}, where the center is the nonnoetherian ring $Z \cong \hat{Z} = R = k + \sigma S$, and the cycle algebra is the quadric cone, $S = k[xz, xw, yz, yw]$. The contraction $\psi: A \to A'$ is cyclic. }\end{Remark} \begin{figure} $$\begin{array}{ccc} \xy 0;/r.4pc/: (-12,6)*+{\text{\scriptsize{$2$}}}="1";(0,6)*+{\text{\scriptsize{$1$}}}="2";(12,6)*+{\text{\scriptsize{$2$}}}="3"; (-12,-6)*+{\text{\scriptsize{$1$}}}="4";(0,-6)*+{\text{\scriptsize{$2$}}}="5";(12,-6)*+{\text{\scriptsize{$1$}}}="6"; (-12,0)*{\cdot}="7";(0,0)*{\cdot}="8";(12,0)*{\cdot}="9"; (-6,6)*{\cdot}="10";(6,6)*{\cdot}="11";(-6,-6)*{\cdot}="12";(6,-6)*{\cdot}="13"; {\ar@[green]_1"2";"10"};{\ar_y"10";"1"};{\ar^{}_w"7";"4"};{\ar@[green]_1"4";"12"};{\ar_x"12";"5"};{\ar@[green]^1"5";"8"};{\ar@[green]^1"2";"11"};{\ar^x"11";"3"};{\ar^w"9";"6"}; {\ar@[green]^1"6";"13"};{\ar^y"13";"5"}; {\ar@[green]^1"3";"9"};{\ar@[green]_1"1";"7"};{\ar^z"8";"2"}; \endxy & \ \ \ \stackrel{\psi}{\longrightarrow} \ \ \ & \xy 0;/r.4pc/: (-12,6)*+{\text{\scriptsize{$2$}}}="1";(0,6)*+{\text{\scriptsize{$1$}}}="2";(12,6)*+{\text{\scriptsize{$2$}}}="3"; (-12,-6)*+{\text{\scriptsize{$1$}}}="4";(0,-6)*+{\text{\scriptsize{$2$}}}="5";(12,-6)*+{\text{\scriptsize{$1$}}}="6"; {\ar_{y}"2";"1"};{\ar_{w}"1";"4"};{\ar_{x}"4";"5"};{\ar^{z}"5";"2"};{\ar^{x}"2";"3"};{\ar^{w}"3";"6"};{\ar^{y}"6";"5"}; \endxy\\ Q & & Q' \end{array}$$ \caption{An example where $\sigma$ divides all monomials in $R$. The quivers are drawn on a torus, the contracted arrows are drawn in green, and the arrows are labeled by their $\bar{\tau}_{\psi}$ and $\bar{\tau}$-images.} \label{sigma divides all} \end{figure} \begin{Lemma} \label{a cycle q} Suppose that there is a monomial in $\hat{Z}$ which is not divisible (in $B$) by $\sigma$. Let $\mathfrak{m} \in \operatorname{Max}\hat{Z} \setminus \left\{ \mathfrak{z}_0 \right\}$. Then there is a monomial $g \in \hat{Z} \setminus \mathfrak{m}$ such that $\sigma \nmid g$. \end{Lemma} \begin{proof} Let $\mathfrak{m} \in \operatorname{Max}\hat{Z} \setminus \left\{ \mathfrak{z}_0 \right\}$. (i) We first claim that there is a monomial in $\hat{Z} \setminus \mathfrak{m}$. Assume otherwise. Then $$\mathfrak{n}_0 \cap \hat{Z} \subseteq \mathfrak{m}.$$ But $\mathfrak{z}_0 := \mathfrak{n}_0 \cap \hat{Z}$ is a maximal ideal by Lemma \ref{max ideal}. 
Thus $\mathfrak{z}_0 = \mathfrak{m}$, contrary to assumption. (ii) We now claim that there is a monomial in $\hat{Z} \setminus \mathfrak{m}$ which is not divisible by $\sigma$. Indeed, assume to the contrary that every monomial in $\hat{Z}$, which is not divisible by $\sigma$, is in $\mathfrak{m}$. By assumption, there is a monomial in $\hat{Z}$ that is not divisible by $\sigma$. Thus there is at least one monomial in $\mathfrak{m}$. Therefore $\sigma$ is in $\mathfrak{m}$, by Lemma \ref{sigma in m}. There is an $\mathfrak{n} \in \operatorname{Max}S$ such that $\mathfrak{n} \cap \hat{Z} = \mathfrak{m}$, by Lemma \ref{max ideal}. Furthermore, $\sigma \in \mathfrak{n}$ since $\sigma \in \mathfrak{m}$. Suppose $g \in \hat{Z}$ is a monomial for which $\sigma \mid g$; say $g = \sigma h$ for some monomial $h \in B$. Then $h \in S$ by Lemma \ref{cyclelemma2}.5. Whence, $g = \sigma h \in \mathfrak{n}$. Thus $$g \in \mathfrak{n} \cap \hat{Z} = \mathfrak{m}.$$ It follows that every monomial in $\hat{Z}$, which \textit{is} divisible by $\sigma$, is also in $\mathfrak{m}$. Therefore every monomial in $\hat{Z}$ is in $\mathfrak{m}$. But this contradicts our choice of $\mathfrak{m}$ by Claim (i). \end{proof} Recall the subsets (\ref{U U*}) of $\operatorname{Max}S$ and the morphisms (\ref{max surjective}). \begin{Proposition} \label{n in maxs} Let $\mathfrak{n} \in \operatorname{Max}S$. Then \begin{equation*} \label{n in maxs1} \mathfrak{n} \cap \hat{Z} \not = \mathfrak{z}_0 \ \ \text{ if and only if } \ \ \hat{Z}_{\mathfrak{n} \cap \hat{Z}} = S_{\mathfrak{n}}, \end{equation*} and \begin{equation*} \label{n in maxs2} \mathfrak{n} \cap R \not = \mathfrak{m}_0 \ \ \text{ if and only if } \ \ R_{\mathfrak{n} \cap R} = S_{\mathfrak{n}}. \end{equation*} Consequently, $$\kappa_{S/\hat{Z}}(U_{S/\hat{Z}}) = \operatorname{Max}\hat{Z} \setminus \{ \mathfrak{z}_0 \} \ \ \ \text{ and } \ \ \ \kappa_{S/R}(U_{S/R}) = \operatorname{Max}R \setminus \{ \mathfrak{m}_0 \}.$$ \end{Proposition} \begin{proof} Let $\mathfrak{n} \in \operatorname{Max}S$. (i) Set $\mathfrak{m} := \mathfrak{n} \cap \hat{Z}$, and suppose $\mathfrak{m} \not = \mathfrak{z}_0$. We claim that $\hat{Z}_{\mathfrak{m}} = S_{\mathfrak{n}}$. Consider $g \in S \setminus \hat{Z}$. It suffices to show that $g$ is in the localization $\hat{Z}_{\mathfrak{m}}$. By \cite[Proposition 5.14]{B2}, $S$ is generated by $\sigma$ and a set of monomials in $B$ not divisible by $\sigma$. Furthermore, $\sigma$ is in $\hat{Z}$. Thus it suffices to suppose that $g$ is a monomial which is not divisible by $\sigma$. Let $u \in \mathbb{Z}^2$ and $p \in \mathcal{C}^u$ be such that $\overbar{p} = g$. If $u = 0$, then $g = \overbar{p} = \sigma^n$ for some $n \geq 1$, by Lemma \ref{cyclelemma}.1. Whence $g \in \hat{Z}$, contrary to our choice of $g$. Thus $u \not = 0$. (i.a) First suppose $\sigma$ does not divide all monomials in $\hat{Z}$. Fix $i \in Q_0$. By Lemma \ref{a cycle q}, there is a nontrivial cycle $q \in e_iAe_i$ such that $$\overbar{q} \in \hat{Z} \setminus \mathfrak{m} \ \ \text{ and } \ \ \sigma \nmid \overbar{q}.$$ Let $v \in \mathbb{Z}^2$ be such that $q \in \mathcal{C}^v$. Then $v \not = 0$ since $\sigma \nmid \overbar{q}$, by Lemma \ref{cyclelemma}.1. We claim that $u \not = v$. Assume to the contrary that $u = v$. Then by Lemma \ref{cyclelemma}.2, $\overbar{p} = \overbar{q}$ since $\sigma \nmid \overbar{p}$ and $\sigma \nmid \overbar{q}$. But $\overbar{q}$ is in $\hat{Z}$, whereas $\overbar{p}$ is not, a contradiction. Therefore $u \not = v$. 
Since $u \not = v$ are nonzero, there are lifts of $p$ and $q$ in the cover $Q^+$ that intersect at some vertex $j^+ \in Q_0^+$. We may thus factor $p$ and $q$ into paths, $$p = p_2e_jp_1 \ \ \text{ and } \ \ q = q_2e_jq_1.$$ Consider the cycle $$r := q_2p_1p_2q_1 \in e_iAe_i.$$ Let $\ell \geq 0$ be such that $\sigma^{\ell} \mid \overbar{r}$ and $\sigma^{\ell +1} \nmid \overbar{r}$. Since $\sigma \nmid \overbar{p}$ and $\sigma \nmid \overbar{q}$, there is a maximal cyclic subpath $s^+$ of $r^+$ (modulo $I$), such that $\overbar{s} = \sigma^{\ell}$. (The subpath $s$ may be trivial.) Let $t$ be the cycle obtained from $r$ by replacing $s$ with the vertex $e_{\operatorname{t}(s)}$ (modulo $I$). Then $\sigma \nmid \overbar{t}$. Since $i \in Q_0$ was arbitrary, we have $\overbar{t} \in R$. Thus $\overbar{t} \in \hat{Z}$ since $\sigma \nmid \overbar{t}$, by Lemma \ref{cyclelemma2}.2. Therefore, since $\overbar{q} \in \hat{Z} \setminus \mathfrak{m}$, $$g = \overbar{p} = \overbar{r} \overbar{q}^{-1} = \sigma^{\ell} \overbar{t} \overbar{q}^{-1} \in \hat{Z}_{\mathfrak{m}}.$$ Thus \begin{equation} \label{S subseteq} S \subseteq \hat{Z}_{\mathfrak{m}}. \end{equation} Denote by $\tilde{\mathfrak{m}} := \mathfrak{m} \hat{Z}_{\mathfrak{m}}$ the maximal ideal of $\hat{Z}_{\mathfrak{m}}$. Then, since $\hat{Z} \subset S$, $$\hat{Z}_{\mathfrak{m}} = \hat{Z}_{\tilde{\mathfrak{m}} \cap \hat{Z}} \subseteq S_{\tilde{\mathfrak{m}} \cap S} \stackrel{\textsc{(i)}}{\subseteq} (\hat{Z}_{\mathfrak{m}})_{\tilde{\mathfrak{m}} \cap \hat{Z}_{\mathfrak{m}}} = (\hat{Z}_{\mathfrak{m}})_{\mathfrak{m}\hat{Z}_{\mathfrak{m}}} = \hat{Z}_{\mathfrak{m}},$$ where (\textsc{i}) holds by (\ref{S subseteq}). Therefore $$S_{\mathfrak{n}} = \hat{Z}_{\mathfrak{m}}.$$ (i.b) Now suppose $\sigma$ divides all monomials in $\hat{Z}$. Then $\mathfrak{m}$ does not contain any monomials since $\mathfrak{m} \not = \mathfrak{z}_0$, by Lemma \ref{contains a non-constant monomial}. In particular, $\sigma \not \in \mathfrak{m}$. By Lemma \ref{cyclelemma2}.4, there is an $n \geq 0$ such that $g \sigma^n \in \hat{Z}$. Thus $$g = (g \sigma^n) \sigma^{-n} \in \hat{Z}_{\mathfrak{m}}.$$ Therefore $$S \subseteq \hat{Z}_{\mathfrak{m}}.$$ It follows that $$S_{\mathfrak{n}} = \hat{Z}_{\mathfrak{m}}.$$ Therefore, in either case (a) or (b), Claim (i) holds. (ii) Now suppose $\mathfrak{n} \cap R \not = \mathfrak{m}_0$. We claim that $R_{\mathfrak{n} \cap R} = S_{\mathfrak{n}}$. Since $\mathfrak{n} \cap R \not = \mathfrak{m}_0$, there is a monomial $g \in R \setminus \mathfrak{n}$. Thus there is some $n \geq 1$ such that $g^n \in \hat{Z}$, by Lemma \ref{cyclelemma2}.3. Furthermore, $g^n \not \in \mathfrak{n}$ since $\mathfrak{n}$ is a prime ideal. Consequently, $$g^n \in \hat{Z} \setminus (\mathfrak{n} \cap \hat{Z}).$$ Whence \begin{equation} \label{today} \mathfrak{n} \cap \hat{Z} \not = \mathfrak{z}_0. \end{equation} Therefore $$S_{\mathfrak{n}} \stackrel{\textsc{(i)}}{=} \hat{Z}_{\mathfrak{n} \cap \hat{Z}} \stackrel{\textsc{(ii)}}{\subseteq} R_{\mathfrak{n} \cap R} \subseteq S_{\mathfrak{n}},$$ where (\textsc{i}) holds by (\ref{today}) and Claim (i), and (\textsc{ii}) holds by (\ref{ZKA}). It follows that $R_{\mathfrak{n} \cap R} = S_{\mathfrak{n}}$. 
(iii) Finally, we claim that $$\hat{Z}_{\mathfrak{z}_0} \not = S_{\mathfrak{n}_0} \ \ \text{ and } \ \ R_{\mathfrak{m}_0} \not = S_{\mathfrak{n}_0}.$$ These inequalities hold since the local algebras $\hat{Z}_{\mathfrak{z}_0}$ and $R_{\mathfrak{m}_0}$ are nonnoetherian by Proposition \ref{local nonnoetherian}, whereas $S_{\mathfrak{n}}$ is noetherian by Lemma \ref{S noetherian}. \end{proof} \begin{Lemma} \label{n cap Z = n' cap Z} Let $\mathfrak{q}$ and $\mathfrak{q}'$ be prime ideals of $S$. Then $$\mathfrak{q} \cap \hat{Z} = \mathfrak{q}' \cap \hat{Z} \ \ \text{ if and only if } \ \ \mathfrak{q} \cap R = \mathfrak{q}' \cap R.$$ \end{Lemma} \begin{proof} (i) Suppose $\mathfrak{q} \cap \hat{Z} = \mathfrak{q}' \cap \hat{Z}$, and let $s \in \mathfrak{q} \cap R$. Then $s \in R$. Whence there is some $n \geq 1$ such that $s^n \in \hat{Z}$, by Lemma \ref{cyclelemma2}.3. Thus $$s^n \in \mathfrak{q} \cap \hat{Z} = \mathfrak{q}' \cap \hat{Z}.$$ Therefore $s^n \in \mathfrak{q}'$. Thus $s \in \mathfrak{q}'$ since $\mathfrak{q}'$ is prime. Consequently, $s \in \mathfrak{q}' \cap R$. Therefore $\mathfrak{q} \cap R \subseteq \mathfrak{q}' \cap R$. Similarly, $\mathfrak{q} \cap R \supseteq \mathfrak{q}' \cap R$. (ii) Now suppose $\mathfrak{q} \cap R = \mathfrak{q}' \cap R$, and let $s \in \mathfrak{q} \cap \hat{Z}$. Then $s \in \hat{Z} \subseteq R$. Thus $$s \in \mathfrak{q} \cap R = \mathfrak{q}' \cap R.$$ Whence $s \in \mathfrak{q}' \cap \hat{Z}$. Therefore $\mathfrak{q} \cap \hat{Z} \subseteq \mathfrak{q}' \cap \hat{Z}$. Similarly, $\mathfrak{q} \cap \hat{Z} \supseteq \mathfrak{q}' \cap \hat{Z}$. \end{proof} \begin{Proposition} \label{coincide prop} The subsets $U_{S/\hat{Z}}$ and $U_{S/R}$ of $\operatorname{Max}S$ are equal. \end{Proposition} \begin{proof} (i) We first claim that $$U_{S/\hat{Z}} \subseteq U_{S/R}.$$ Indeed, suppose $\mathfrak{n} \in U_{S/\hat{Z}}$. Then since $\hat{Z} \subseteq R \subseteq S$, we have $$S_{\mathfrak{n}} = \hat{Z}_{\mathfrak{n} \cap \hat{Z}} \subseteq R_{\mathfrak{n} \cap R} \subseteq S_{\mathfrak{n}}.$$ Thus $$R_{\mathfrak{n} \cap R} = S_{\mathfrak{n}}.$$ Therefore $\mathfrak{n} \in U_{S/R}$, proving our claim. (ii) We now claim that $$U_{S/R} \subseteq U_{S/\hat{Z}}.$$ Let $\mathfrak{n} \in U_{S/R}$. Then $R_{\mathfrak{n} \cap R} = S_{\mathfrak{n}}$. Thus by Proposition \ref{n in maxs}, $$\mathfrak{n} \cap R \not = \mathfrak{n}_0 \cap R.$$ Therefore by Lemma \ref{n cap Z = n' cap Z}, $$\mathfrak{n} \cap \hat{Z} \not = \mathfrak{n}_0 \cap \hat{Z}.$$ But then again by Proposition \ref{n in maxs}, $$\hat{Z}_{\mathfrak{n} \cap \hat{Z}} = S_{\mathfrak{n}}.$$ Whence $\mathfrak{n} \in U_{S/\hat{Z}}$, proving our claim. \end{proof} We denote the complement of a set $W \subseteq \operatorname{Max}S$ by $W^c$. \begin{Theorem} \label{isolated sing} The following subsets of $\operatorname{Max}S$ are open, dense, and coincide: \begin{equation} \label{coincide} \begin{array}{c} U^*_{S/\hat{Z}} = U_{S/\hat{Z}} = U^*_{S/R} = U_{S/R}\\ = \kappa_{S/\hat{Z}}^{-1}(\operatorname{Max}\hat{Z} \setminus \left\{ \mathfrak{z}_0 \right\} ) = \kappa_{S/R}^{-1}\left(\operatorname{Max}R \setminus \left\{ \mathfrak{m}_0 \right\} \right)\\ = \mathcal{Z}_S(\mathfrak{z}_0S)^c = \mathcal{Z}_S(\mathfrak{m}_0S)^c. \end{array} \end{equation} In particular, $\hat{Z}$ and $R$ are locally noetherian at all points of $\operatorname{Max}\hat{Z}$ and $\operatorname{Max}R$ except at $\mathfrak{z}_0$ and $\mathfrak{m}_0$. \end{Theorem} \begin{proof} For brevity, set $\mathcal{Z}(I) := \mathcal{Z}_S(I)$. 
(i) We first show the equalities of the top two lines of (\ref{coincide}). By Proposition \ref{coincide prop}, $U_{S/R} = U_{S/\hat{Z}}$. By Lemma \ref{S noetherian}, $S$ is noetherian. Thus for each $\mathfrak{n} \in \operatorname{Max}S$, the localization $S_{\mathfrak{n}}$ is noetherian. Therefore, by Proposition \ref{n in maxs}, $$U^*_{S/\hat{Z}} = U_{S/\hat{Z}} \ \ \text{ and } \ \ U^*_{S/R} = U_{S/R}.$$ Moreover, again by Proposition \ref{n in maxs}, $$U_{S/\hat{Z}} = \kappa_{S/\hat{Z}}^{-1}( \operatorname{Max}\hat{Z} \setminus \left\{ \mathfrak{z}_0 \right\}) \ \ \text{ and } \ \ U_{S/R} = \kappa_{S/R}^{-1}\left( \operatorname{Max}R \setminus \left\{ \mathfrak{m}_0 \right\} \right).$$ (ii) We now claim that the complement of $U_{S/R} \subset \operatorname{Max}S$ is the zero locus $\mathcal{Z}(\mathfrak{m}_0S)$. Suppose $\mathfrak{n} \in \mathcal{Z}(\mathfrak{m}_0S)$; then $\mathfrak{m}_0S \subseteq \mathfrak{n}$. Whence, \begin{equation*} \label{m0 = m0 cap R} \mathfrak{m}_0 \subseteq \mathfrak{m}_0S \cap R \subseteq \mathfrak{n} \cap R. \end{equation*} Thus $\mathfrak{n} \cap R = \mathfrak{m}_0$ since $\mathfrak{m}_0$ is a maximal ideal of $R$. But then $\mathfrak{n} \not \in U_{S/R}$ by Claim (i). Therefore $U^c_{S/R} \supseteq \mathcal{Z}(\mathfrak{m}_0S)$. Conversely, suppose $\mathfrak{n} \in U_{S/R}^c$. Then $\kappa_{S/R}(\mathfrak{n}) \not \in \kappa_{S/R}(U_{S/R})$, by the definition of $U_{S/R}$. Thus $\mathfrak{n} \cap R = \kappa_{S/R}(\mathfrak{n}) = \mathfrak{m}_0$, by Claim (i). Whence, $$\mathfrak{m}_0S = (\mathfrak{n} \cap R)S \subseteq \mathfrak{n},$$ and so $\mathfrak{n} \in \mathcal{Z}(\mathfrak{m}_0S)$. Therefore $U^c_{S/R} \subseteq \mathcal{Z}(\mathfrak{m}_0S)$. (iii) Finally, we claim that the subsets (\ref{coincide}) are open dense. Indeed, there is a maximal ideal of $R$ distinct from $\mathfrak{m}_0$, and $\kappa_{S/R}$ is surjective by Lemma \ref{max ideal}. Thus the set $\kappa_{S/R}^{-1}\left( \operatorname{Max}R \setminus \left\{ \mathfrak{m}_0 \right\} \right)$ is nonempty. Moreover, this set equals the open set $\mathcal{Z}(\mathfrak{m}_0S)^c$, by Claim (ii). But $S$ is an integral domain. Therefore $\mathcal{Z}(\mathfrak{m}_0S)^c$ is dense since it is nonempty and open. \end{proof} \begin{Theorem} \label{generically noetherian} The center $Z$, reduced center $\hat{Z}$, and homotopy center $R$ of $A$ each have Krull dimension 3, $$\operatorname{dim}Z = \operatorname{dim}\hat{Z} = \operatorname{dim}R = \operatorname{dim}S = 3.$$ Furthermore, the fraction fields of $\hat{Z}$, $R$, and $S$ coincide, \begin{equation} \label{function fields} \operatorname{Frac}\hat{Z} = \operatorname{Frac}R = \operatorname{Frac}S. \end{equation} \end{Theorem} \begin{proof} Recall that $S$ is of finite type by Lemma \ref{S noetherian}, and $\hat{Z} \subseteq R \subseteq S$ are integral domains since they are subalgebras of the polynomial ring $B$. The sets $U_{S/\hat{Z}}$ and $U_{S/R}$ are nonempty, by Theorem \ref{isolated sing}. Thus $\hat{Z}$, $R$, and $S$ have equal fraction fields \cite[Lemma 2.4]{B5}; and equal Krull dimensions \cite[Theorem 2.5.4]{B5}. In particular, $\operatorname{dim} \hat{Z} = \operatorname{dim}R = \operatorname{dim}S = 3$, by Lemma \ref{Jess}. Finally, each prime $\mathfrak{p} \in \operatorname{Spec}Z$ contains the nilradical $\operatorname{nil}Z$, and thus $\operatorname{dim}Z = \operatorname{dim}\hat{Z}$. 
\end{proof} Recall that the reduction $X_{\operatorname{red}}$ of a scheme $X$, that is, its reduced induced scheme structure, is the closed subspace of $X$ associated to the sheaf of ideals $\mathcal{I}$, where for each open set $U \subset X$, $$\mathcal{I}(U) := \left\{ f \in \mathcal{O}_X(U) \ | \ f(\mathfrak{p}) = 0 \ \text{ for all } \ \mathfrak{p} \in U \right\}.$$ $X_{\operatorname{red}}$ is the unique reduced scheme whose underlying topological space equals that of $X$. If $R := \mathcal{O}_X(X)$, then $\mathcal{O}_{X_{\operatorname{red}}}(X_{\operatorname{red}}) = R/\operatorname{nil}R$. \begin{Theorem} \label{hopefully...} \ Let $A$ be a nonnoetherian dimer algebra, and let $\psi: A \to A'$ be a cyclic contraction. \begin{enumerate} \item The reduced center $\hat{Z}$ and homotopy center $R$ of $A$ are both depicted by the center $Z' \cong S$ of $A'$. \item The affine scheme $\operatorname{Spec}R$ and the reduced scheme of $\operatorname{Spec}Z$ are birational to the noetherian scheme $\operatorname{Spec}S$, and each contain precisely one closed point of positive geometric dimension, namely $\mathfrak{m}_0$ and $\mathfrak{z}_0$. \end{enumerate} \end{Theorem} \begin{proof} (1) We first claim that $\hat{Z}$ and $R$ are depicted by $S$. By Theorem \ref{isolated sing}, $$U^*_{S/\hat{Z}} = U_{S/\hat{Z}} \not = \emptyset \ \ \text{ and } \ \ U^*_{S/R} = U_{S/R} \not = \emptyset.$$ Furthermore, by Lemma \ref{max ideal}, the morphisms $\iota_{S/\hat{Z}}$ and $\iota_{S/R}$ are surjective.\footnote{The fact that $S$ is a depiction of $R$ also follows from \cite[Theorem D.1]{B5}, since the algebra homomorphism $\tau_{\psi}: \Lambda \to M_{|Q_0|}(B)$ is an impression \cite[Theorem 5.9.1]{B2}.} (2.i) As schemes, $\operatorname{Spec}S$ is isomorphic to $\operatorname{Spec}\hat{Z}$ and $\operatorname{Spec}R$ on the open dense subset $U_{S/\hat{Z}} = U_{S/R}$, by Theorem \ref{isolated sing}. Thus all three schemes are birationally equivalent. Furthermore, $\operatorname{Spec}\hat{Z}$ and $\operatorname{Spec}R$ each contain precisely one closed point where $\hat{Z}$ and $R$ are locally nonnoetherian, namely $\mathfrak{z}_0$ and $\mathfrak{m}_0$, again by Theorem \ref{isolated sing}. (2.ii) We claim that the closed points $\mathfrak{z}_0 \in \operatorname{Spec}\hat{Z}$ and $\mathfrak{m}_0 \in \operatorname{Spec}R$ have positive geometric dimension. Indeed, since $A$ is nonnoetherian, there is a cycle $p$ such that $\sigma \nmid \overbar{p}$ and $\overbar{p}^n \in S \setminus R$ for each $n \geq 1$, by Lemma \ref{cyclelemma2}.6. If $\overbar{p}$ is in $\mathfrak{m}_0S$, then there are monomials $g \in R$, $h \in S$, such that $gh = \overbar{p}$. Furthermore, $g \not = \sigma^n$ for all $n \geq 1$ since $\sigma \nmid \overbar{p}$. But then $\overbar{p} = gh$ is in $R$ by Lemma \ref{cyclelemma2}.1, a contradiction. Therefore $$\overbar{p} \not \in \mathfrak{m}_0S.$$ Consequently, for each $c \in k$, there is a maximal ideal $\mathfrak{n}_c \in \operatorname{Max}S$ such that $$(\overbar{p} -c, \mathfrak{m}_0)S \subseteq \mathfrak{n}_c.$$ Thus, $$\mathfrak{m}_0 \subseteq (\overbar{p} -c, \mathfrak{m}_0)S \cap R \subseteq \mathfrak{n}_c \cap R.$$ Whence $\mathfrak{n}_c \cap R = \mathfrak{m}_0$ since $\mathfrak{m}_0$ is maximal. Therefore by Theorem \ref{isolated sing}, $$\mathfrak{n}_c \in U^c_{S/R}.$$ Set $$\mathfrak{q} := \bigcap_{c \in k} \mathfrak{n}_c.$$ The intersection of radical ideals is radical, and so $\mathfrak{q}$ is a radical ideal.
Thus, since $S$ is noetherian, the Lasker-Noether theorem implies that there are minimal primes $\mathfrak{q}_1, \ldots, \mathfrak{q}_{\ell} \in \operatorname{Spec}S$ over $\mathfrak{q}$ such that $$\mathfrak{q} = \mathfrak{q}_1 \cap \cdots \cap \mathfrak{q}_{\ell}.$$ The maximal ideals $\mathfrak{n}_c$ are pairwise distinct, since $\mathfrak{n}_c = \mathfrak{n}_{c'}$ would place the unit $c' - c = (\overbar{p} - c) - (\overbar{p} - c')$ in $\mathfrak{n}_c$. Each $\mathfrak{n}_c$ is a prime containing $\mathfrak{q}$, and hence contains some $\mathfrak{q}_i$; if every $\mathfrak{q}_i$ were maximal, there could be at most $\ell$ distinct ideals $\mathfrak{n}_c$, contrary to $k$ being infinite. Thus, since $\ell < \infty$, at least one $\mathfrak{q}_i$ is a non-maximal prime, say $\mathfrak{q}_1$. Then $$\mathfrak{m}_0 = \bigcap_{c \in k} (\mathfrak{n}_c \cap R) = \bigcap_{c \in k} \mathfrak{n}_c \cap R = \mathfrak{q} \cap R \subseteq \mathfrak{q}_1 \cap R.$$ Whence $\mathfrak{q}_1 \cap R = \mathfrak{m}_0$ since $\mathfrak{m}_0$ is maximal. Since $\mathfrak{q}_1$ is a non-maximal prime ideal of $S$, $$\operatorname{ht}(\mathfrak{q}_1) < \operatorname{dim}S.$$ Furthermore, $S$ is a depiction of $R$ by Claim (1). Thus $$\operatorname{ght}(\mathfrak{m}_0) \leq \operatorname{ht}(\mathfrak{q}_1) < \operatorname{dim}S \stackrel{\textsc{(i)}}{=} \operatorname{dim}R,$$ where (\textsc{i}) holds by Theorem \ref{generically noetherian}. Therefore $$\operatorname{gdim} \mathfrak{m}_0 = \operatorname{dim}R - \operatorname{ght}(\mathfrak{m}_0) \geq 1,$$ proving our claim. \end{proof} \begin{Remark} \rm{ Although $\hat{Z}$ and $R$ determine the same variety using depictions, their associated affine schemes $$(\operatorname{Spec}\hat{Z}, \mathcal{O}_{\hat{Z}}) \ \ \ \text{ and } \ \ \ \left( \operatorname{Spec}R, \mathcal{O}_{R} \right)$$ will not be isomorphic if their rings of global sections, $\hat{Z}$ and $R$, are not isomorphic. }\end{Remark} \ \\ \textbf{Acknowledgments.} The author was supported by the Austrian Science Fund (FWF) grant P 30549-N26. Part of this article is based on work supported by the Heilbronn Institute for Mathematical Research. \bibliographystyle{hep} \def\cprime{$'$}
\section{Introduction} Neuroscience has found an unusual ally in the form of computer science, which has strengthened and widened its scope. The wide availability and easy-to-use nature of video equipment has enabled neuroscientists to record large-scale behavioral data of animals and analyze it from a neuroscience perspective. Traditionally, neuroscientists would record videos of the animals they wanted to study and then manually annotate the video data themselves. This approach is reasonable if the video data being annotated is not large, but it becomes very inconvenient, tiresome, error-prone and slow as the amount of data increases. This is mainly because annotations made by human annotators are not perfectly reproducible. Two annotations of the same sample done by two different persons will likely differ. Even annotations done for the same sample at different times by the same person might not be exactly the same. All of these factors have contributed to the demand for a general-purpose automated annotation approach for video data. For behavioral phenotyping and neuroscience applications, researchers are usually interested in gesture and locomotion tracking. Fortunately, computer science has answers to this problem in the form of machine learning and computer vision based tracking methods. The research in this area is still not mature, but it is receiving a lot of attention lately. The primary motivations for automated annotation are reproducibility and the ability to annotate huge amounts of data in a practical amount of time. \par The field is not mature, and there is no consensus yet on which approach to follow, but most researchers follow a loose set of rules. Some researchers approach this problem by treating video as a sequence of still images and then applying computer vision algorithms to every frame in succession, without considering their temporal relationship. Others include temporal information to some extent, while still others approach the problem with the assistance of additional hardware. The general framework is similar. Animals (mice/rats/insects) are kept in a controlled environment, either restrained or freely behaving, where the lighting and illumination can be manipulated. In order to acquire the video data, single or multiple video cameras are installed. These might be simple video cameras or depth cameras. There might also be additional accessories such as physical markers or body-mounted sensors. \label{sec:introduction} \section{Problem Statement} Behavioral phenotyping depends upon annotated activity data of rodents. We can identify the activity type of a mouse when we see how it moves, behaves and acts over an extended period of time. One of many proposed approaches is to track the limb movements of the rodents and convert them into quantifiable patterns. Limb tracking can be achieved by recording the animal from a frontal, lateral, top or bottom view. Typical tracking examples from the frontal and lateral views are shown in Fig. \ref{FrontalSample} and Fig. \ref{LateralSample}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{FrontalSample.png} \caption{Frontal view of a mouse with its moving limbs marked} \label{FrontalSample} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{LateralSample.png} \caption{Lateral view of a mouse with its moving limbs marked} \label{LateralSample} \end{figure} The cases shown in Fig. \ref{FrontalSample} and Fig. \ref{LateralSample} are typical examples of activity tracking in rodents and small animals.
They present the following challenges: \begin{enumerate} \item The spatial resolution of most consumer-grade video cameras that have sufficient temporal resolution is not enough for effective tracking. \item The limbs might move quickly at one point in time and be stationary at another, rendering the development of a uniform motion model impossible. \item The limbs might overlap with each other or with other body parts, thereby presenting occlusions. \end{enumerate} \section{Motion tracking principles in videos} Videos are sequences of images/frames which, if displayed at a high enough frequency, are perceived as continuous content by the human eye. Although video content appears continuous, it is still comprised of discrete images to which all image processing techniques can be applied. Moreover, the contents of two consecutive frames are usually closely related. The fact that video frames are closely related in the spatial and temporal domains makes object and motion tracking in videos possible. Motion/object tracking in video started with detecting objects in individual frames, which in turn can be used for object tracking in video sequences. It involves monitoring an object's shape and motion path in every frame. This is achieved by solving the temporal correspondence problem: matching regions in successive frames of a video sequence. \par Motion detection is very significant when it comes to object tracking in video sequences. On one hand, motion adds another dimension to the already complex problem of object detection in the form of the object's temporal change requirements; on the other hand, it also provides additional information for detection and tracking. Numerous researchers are actively working on this problem with different approaches. Most of these methods involve one or more techniques for motion detection. They can be broadly classified into three major categories: background subtraction and temporal differencing based approaches, statistical approaches, and optical flow based approaches. \subsection{Background Subtraction and Temporal Differencing} Commonly used for motion segmentation in static scenes, background subtraction attempts to detect and track motion by subtracting the current image pixel-by-pixel from a reference/background image. The pixels which yield a difference above a threshold are considered foreground. The creation of the background image is known as background modeling. Once the foreground pixels are classified, some morphological post-processing is done to enhance the detected motion regions. Different techniques for background modeling, subtraction and post-processing result in different variants of the background subtraction method \citep{BkgPaper1,BkgPaper2,BkgPaper3,BkgPaper4,BkgPaper5}. \par In temporal differencing, motion is detected by taking the pixel-by-pixel difference of consecutive frames (two or three). It differs from background subtraction in that the background or reference image is not stationary. It is mainly used in scenarios involving a moving camera \citep{BkgPaper6,BkgPaper7,BkgPaper8,BkgPaper9,BkgPaper10}. Both operations are illustrated in the sketch below.
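As a concrete illustration, the following Python/OpenCV sketch combines a median-based background model with two-frame temporal differencing and morphological post-processing. It is a minimal sketch under assumed conditions (a static camera, a video file named \texttt{arena.avi}, a 50-frame background window, and illustrative threshold values), not an implementation taken from any of the cited works.
\begin{verbatim}
import cv2
import numpy as np

cap = cv2.VideoCapture("arena.avi")   # hypothetical recording

# Background model: per-pixel median of the first 50 frames
# (assumes the animal is absent or moving during this window).
frames = []
for _ in range(50):
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
background = np.median(frames, axis=0).astype(np.uint8)

prev = frames[-1]
kernel = np.ones((3, 3), np.uint8)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background subtraction: pixels differing from the reference
    # image by more than a threshold are labeled foreground.
    fg_static = cv2.absdiff(gray, background) > 30

    # Temporal differencing: pixel-by-pixel difference of
    # consecutive frames, usable when the reference is not static.
    fg_moving = cv2.absdiff(gray, prev) > 30
    prev = gray

    # Morphological opening cleans up the combined motion mask.
    mask = (fg_static & fg_moving).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cap.release()
\end{verbatim}
The two masks are intersected here only to keep the example compact; in practice each cue is typically used on its own, or fused with the statistical models discussed next.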
\subsection{Statistical approaches} Statistical methods are inspired by background subtraction methods in terms of keeping and updating statistics of the foreground and background pixels. Foreground and background pixels are differentiated by comparing pixel statistics with those of the background model. This approach is stable in the presence of noise, illumination changes and shadows \cite{StatPaper1,StatPaper2,StatPaper3,StatPaper4,StatPaper5,StatPaper6,StatPaper7,StatPaper8,StatPaper9,StatPaper10}. \subsection{Optical Flow} Optical flow is the distribution of apparent velocities of movement of brightness patterns in an image. It can arise from the relative motion of objects and the observer. Consequently, it can give spatial and temporal information about various objects in the video \cite{FlowPaper1,FlowPaper2}. Optical flow methods exploit the flow fields of moving objects for motion detection. In this approach, the apparent velocity and direction of every pixel is computed \cite{FlowPaper3,FlowPaper4,FlowPaper5,FlowPaper6,FlowPaper7}. Optical flow based methods can detect motion in video sequences whether the observer is stationary or moving; however, most optical flow methods are computationally complex and cannot be used in real time without specialized hardware. A dense-flow example is sketched below.
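The sketch below computes dense optical flow between consecutive frames with OpenCV's Farneback method and thresholds the flow magnitude to obtain a motion mask; the parameter values are common defaults and are assumptions rather than settings from the cited works.
\begin{verbatim}
import cv2
import numpy as np

cap = cv2.VideoCapture("arena.avi")   # hypothetical recording
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Per-pixel apparent displacement (dx, dy) between frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Apparent velocity and direction of every pixel; thresholding
    # the magnitude yields a motion mask.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_mask = mag > 1.0
    prev = gray
cap.release()
\end{verbatim}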
\section{Major trends} Motion tracking for neuroscience applications can be treated as a special case of general motion tracking, which means that all of the above techniques can be applied to it in one way or another. Although the general idea is the same, the environment for this type of motion tracking can be different. A typical setup includes a closed environment (either a room or a box), video cameras, the animal and control systems. The animal can either be restrained or freely behaving. There might be just a single camera or multiple cameras which record the motion from different angles. For this survey, we will go through all those cases which involve motion tracking (especially limb tracking, head tracking and gesture tracking) of laboratory animals for behavioral phenotyping or medical assessment purposes. Based on their intended use and nature, we have divided the approaches found in the literature into the following categories. \begin{enumerate} \item Commercially available solutions \item Hardware based methods \item Video tracking aided by hardware \begin{enumerate} \item Semi-automated \item Completely automated \end{enumerate} \item Video tracking methods mostly dependent on software based tracking \begin{enumerate} \item Semi-automated (aided by users or markers) \item Completely automated \end{enumerate} \end{enumerate} \section{Commercially available solutions} \par We cover commercially available solutions and approaches in this section. These solutions include all those hardware and software based methods which are available on demand from specific companies. Noldus corporation (\url{http://www.noldus.com/}) and CleverSys Inc. (\url{http://cleversysinc.com/CleverSysInc/}) are two of the prominent names involved in the development of behavioral research technologies. The Orthotic Group (\url{http://www.theorthoticgroup.com/}) is involved primarily in gait analysis of humans, but their approaches are exportable to gait analysis in rodents and small animals too. Mouse Specifics Inc. (\url{https://mousespecifics.com/}) deals primarily with behavioral research technologies for rodents. A few of the commercially available solutions are summarized below based on the white papers from their parent companies. \par Dorman et al. did a comparison of two hardware-assisted gait analysis systems, DigiGait and TreadScan \cite{PaperDigiGaitAndTreadScan}. The $DigiGait^{TM}$ imaging system uses a high-speed, 147 frames-per-second video camera mounted inside a stainless steel treadmill chassis below a transparent treadmill belt to capture ventral images of the subject. The treadmill is lit from the inside of the chassis by two fluorescent lights and from overhead by one fluorescent light. The $TreadScan^{TM}$ imaging system uses a high-speed, 100 frames-per-second video camera adjacent to a translucent treadmill belt to capture video reflected from a mirror mounted under the belt at $45^{\circ}$. Images are automatically digitized by the $DigiGait^{TM}$ and $TreadScan^{TM}$ systems. $DigiGait^{TM}$ videos are manually cropped and imported, then automatically analyzed. The software identifies the portions of the paw that are in contact with the treadmill belt in the stance phase of stride, as well as tracks the foot through the swing phase of stride. Measures are calculated for 41 postural and kinematic metrics of gait. The authors found that the $DigiGait^{TM}$ system consistently measured significantly longer stride measures than $TreadScan^{TM}$. Both systems' measures of variability were equal. Reproducibility was inconsistent on both systems. Only the $TreadScan^{TM}$ detected normalization of gait measures, and the time spent on analysis was dependent on operator experience. $DigiGait^{TM}$ and $TreadScan^{TM}$ have been particularly well received in neuro-physiological research \cite{PaperDigiGait1,PaperDigiGait2,PaperDigiGait3,PaperDigiGait4,PaperDigiGait5,PaperDigiGait6,PaperDigiGait7} and \cite{PaperTreadScan1,PaperTreadScan2,PaperTreadScan3,PaperTreadScan4,PaperTreadScan5}. \par CleverSys Inc. introduced a commercial solution for gait analysis in rodents, called GaitScan \cite{PaperGaitScan}. The GaitScan system records video of the rodent running either on a transparent-belt treadmill or on a clear free-walk runway. The video of the ventral (underside) view of the animal is obtained using a high-speed digital camera; it essentially captures the footprints of the animal as it walks or runs. GaitScan software can work with videos taken from any treadmill or runway device that allows the capture of footprints, on any video capturing hardware system with a high-speed camera. The accompanying software lets the user track multiple gait parameters, which can later be used for behavioral phenotyping. This solution has also been used in multiple studies \cite{PaperGaitScan1,PaperGaitScan2,PaperGaitScan3,PaperGaitScan4,PaperGaitScan5}. \par TrackSys Ltd. introduced two systems for rodent motor analysis. One system is called 'ErasmusLadder'. The mouse traverses a horizontal ladder between two goal boxes. Each rung of the ladder contains a touch-sensitive sensor. These sensors allow the system to measure numerous parameters related to motor performance and learning, such as step time and length, missteps, back steps and jumps \cite{PaperELadder}. It has been used in multiple studies \cite{PaperELadder1,PaperELadder2,PaperELadder3,PaperELadder4,PaperELadder5}. Its tracking performance has not been reported by its manufacturer. The other system is called 'CatWalk' \cite{PaperCatWalk}. It is comprised of a plexiglass walkway which reflects light internally. When the animal's paws touch the glass, the light escapes as their paw print and is captured by a high-speed camera mounted beneath the walkway. It can be used to quantify a number of gait parameters such as pressure, stride length, swing and stance duration.
Multiple researchers have used 'CatWalk' in gait analysis \cite{PaperCatWalk1,PaperCatWalk2,PaperCatWalk3,PaperCatWalk4,PaperCatWalk5}. \section{Hardware based methods} \par \label{DrosophilaHardware} Kain et al. \cite{PaperKain} proposed an explicitly hardware based leg tracking method for automated behavior classification in Drosophila flies. The fly is made to walk on a spherical treadmill. Dyes which are sensitive to specific wavelengths of light are applied to its legs, and the leg movement is then recorded by two mounted cameras. This way, 15 gait features are recorded and tracked in real time. This approach has appeal for real-time deployment, but it cannot be generalized to arbitrary limb tracking applications because it needs a specific hardware setup. Moreover, being heavily dependent on photo-sensitive dyes decreases its robustness. \par Snigdha et al. \cite{PaperRoy} proposed 3D tracking of mice whiskers using optical motion capture hardware. The 3D tracking system used (Hawk Digital Real Time System, Motion Analysis Corp., Santa Rosa, CA, USA) is composed of two cameras working in conjunction and the Cortex analysis software (Motion Analysis, CA, USA). The whiskers are marked with retro-reflective markers, and their X, Y, and Z coordinates are digitized and stored along with video recordings of the marker movements. The markers are fashioned from a retro-reflective tape backed with adhesive (Motion Analysis Corp., Santa Rosa, CA, USA) and fastened onto the whiskers using the tape's adhesive. Markers were affixed to the whisker at a distance of about 1 cm from the base. Reliable 3D tracking requires that a marker be visible at all times by both cameras. This condition can be satisfied in head-fixed mice, where the orientation of the mouse to the cameras remains fixed. The system was connected to a dual-processor Windows based computer for data collection. The proposed tracking framework is easy to install and computationally cheap, but like other hardware-assisted frameworks it needs specialized hardware and thus is not very scalable or portable. Also, for reliable tracking, the retro-reflective markers should be visible to the cameras at all times, which makes the framework less robust. \par Scott Tashman et al. proposed a biplane radiography method, assisted by static CT scans, for 3D tracking of skeletons in small animals \cite{PaperScott}. The high-speed biplane radiography system consists of two 150 kVp X-ray generators optically coupled to synchronized high-speed video cameras. For static radiostereometric analysis (RSA) [RefRSA], they implanted a minimum of three radiopaque bone markers per bone to enable accurate registration between the two views. The acquired radiographs are first corrected for geometric distortion. They calculated gray-scale weighted centroids for each marker with sub-pixel resolution. They tested this system on dogs and reported an error of 0.02 mm when the inter-marker distance calculated by their system was compared to the true inter-marker distance of 30 mm. For dynamic gait tracking, this system is reported to be very accurate, but it requires specialized hardware. Moreover, since the marker implantation is invasive, it can alter the behavior of the animals being studied. \par Harvey et al. proposed an optoelectronics based whisker tracking method for head-restrained rats. In the proposed method, the rat's head is fixed to a metal bar protruding from the top of the restraining device \cite{PaperHarvey}. Its paw rests on a microswitch which records lever presses.
A turntable driven by a stepping motor rotates a single sphere/cube into the rat's ``whisking space''. The whiskers are marked to increase the chances of detection. The movements of a single whisker are detected by a laser emitter and an array of CCD detectors. Once the data is recorded, a single whisker is identified manually, which serves as a reference point. As the article is more focused on whisking responses of the rodents to external stimuli, the authors have not reported the whisker detection and tracking accuracy. R. Bermejo et al. reported a similar approach for tracking individual whiskers \cite{PaperBemejo}. They restrained the rats and then used a combination of CCDs and laser emitters. The rats were placed in such a way that their whiskers blocked the path of the laser, casting a shadow over the CCDs and thus registering the presence of a whisker, which can be tracked by following the voltage shifts on the CCD array. They also did not report tracking accuracy. \par Kyme et al. \cite{PaperKyme} proposed a marker-assisted, hardware based method for head motion tracking of freely behaving and tube-bound rats. They glued a marker with a specific black and white pattern to the rat's head. Motion tracking was performed using the MicronTracker Sx60 (ClaronTech. Inc., Toronto, Canada), a binocular tracking system that computes a best-fit pose of printed markers in the field of measurement \cite{PaperKymeMarker}. The authors have reported accurate tracking for more than 95\% of the time in the case of tube-bound rats, and similar performance for freely behaving rats if the tracking algorithm is assisted 10\% of the time. These figures seem impressive, but the approach has one major drawback: it can only be used in a very specific setting. It requires a specialized setup, and it needs external markers glued to the test subject's head, which might affect its behavior. Moreover, the same authors have used the MicronTracker based approach for synchronizing head movements of a rat with positron emission tomography scans of its brain, and have reported that the marker-assisted tracking method was able to synchronize the head movements with scan intervals with an error of less than 10 ms \cite{PaperKymeSynch}. \par Pasquet et al. proposed a wireless inertial sensor based approach for tracking and quantifying head movements in rats \cite{PaperPasquet}. The inertial measurement unit (IMU) contains a digital 9-axis inertial sensor (MPU-9150, Invensense) that samples linear acceleration, angular velocity and magnetic field strength in three dimensions, a low-power programmable microcontroller (PIC16, Microchip) running a custom firmware, and a Bluetooth radio whose signal is transmitted through a tuned chip antenna. This system was configured with LabVIEW for data acquisition, and the analysis was done in R. The sensors record any head movement by registering the relative change in acceleration with respect to gravity. Since the sensors record data in 9 axes, the system can be used to detect events in the rats' behavior based on head movements. The authors have reported a detection accuracy of 96.3\% and a mean correlation coefficient of 0.78 $\pm$ 0.14 when the recorded data is compared across different rats ($n = 19$). The reported performance figures are very good in terms of event detection and consistency, but the system can only be used to track head movements. Also, the system requires specialized hardware, which limits its portability. A sketch of this kind of threshold-based event detection is given below.
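The following sketch is not the authors' analysis code; it only illustrates, under assumed values for the sampling rate, threshold, and minimum duration, how head-movement events can be segmented from 3-axis angular velocity samples by thresholding the angular speed.
\begin{verbatim}
import numpy as np

def detect_events(gyro, fs=300.0, thresh=50.0, min_dur=0.05):
    """gyro: (N, 3) angular velocity in deg/s. Returns (start, end)
    sample pairs where the angular speed exceeds `thresh` for at
    least `min_dur` seconds."""
    speed = np.linalg.norm(gyro, axis=1)   # angular speed per sample
    active = speed > thresh                # candidate event samples

    # Rising and falling edges of the boolean activity trace.
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(active)]

    # Keep only events longer than the minimum duration.
    keep = (ends - starts) / fs >= min_dur
    return list(zip(starts[keep], ends[keep]))
\end{verbatim}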
\section{Video tracking aided by hardware} \subsection{Semi-automated} \par Knutsen et al. proposed the use of overhead IR LEDs along with normal video cameras for head and whisker tracking of unrestrained, behaving mice \cite{PaperKnutsen}. The overhead IR LEDs are used to flash IR light onto the mouse head, which is reflected back from its eyes. The reflected flash is recorded by an IR camera. In the first few frames of every movie, a user identifies a region of interest (ROI) for the eyes which encircles a luminous spot (the reflection from the eye). This luminous spot is tracked in subsequent frames by looking for pixels with high luminosity in the shifted ROI. Once the eyes are located in every frame, they are used to track the head and whiskers in intensity videos. First, a mask averaged over all frames containing no mice is subtracted from the frame. Then user-initiated points are used to form the whisker shaft by spline interpolation. For the next frame, sets of candidate points are initiated, and the shaft from the current frame is convolved with the candidate shafts from the next frame to locate the set of points most likely to be a whisker. Although the pipeline involves no temporal context, it is quite efficient in whisker tracking, with a high Pearson correlation between ground truth and tracked whisker shafts. The downsides of this approach are the need for high-speed videos and additional IR hardware. \par Gravel et al. proposed an X-ray and area-scan camera assisted tracking method for gait parameters of rats walking on a treadmill \cite{PaperGravel}. The system consists of a Coroskop C arm X-ray system from Siemens, equipped with an image intensifier OPTILUX 27HD. The X-ray system is used to detect fluoroscopic markers placed on the hind limbs of the rat. A high-speed area scan camera from Dalsa (DS-41-300K0262), equipped with a C-mount zoom lens (FUJINON-TV, H6X12.R, 1:1.2/12.5–75) mounted on the image intensifier, is used for video acquisition, and a computer is used to overlay the detected markers on the video. The treadmill with the overlying box is placed on a free-moving table and positioned near the X-ray image intensifier. The X-ray side-view videos of locomotion are captured while the animal walks freely at different speeds imposed by the treadmill. The acquired video and marker data is processed in four steps: correction for image distortion, image denoising and contrast enhancement, frame-to-frame morphological marker identification, and statistical gait analysis. The data analysis process can be run in automated mode for image correction and enhancement; however, the morphological marker identification is user-assisted. The kinematic gait patterns are computed using a bootstrap method \cite{PaperBootstrap}. Using the bootstrap method and multiple Monte Carlo runs, the authors have reported consistent gait prediction and tracking with a confidence of 95\%. They compared the performance of the proposed system with manual marker annotation by first having a user manually process 1 hour and 30 minutes of data and then having the system, assisted by the same user, process only 12 minutes of data. They reported only an 8\% deviation in gait cycle duration, thereby claiming a 7-fold decrease in processing time with an acceptable loss in accuracy. Although the reported results are impressive, the system is still not scalable or portable, because it relies on dedicated hardware as well as continuous user assistance. \subsection{Completely automated} \par Akihiro Nakamura et al.
\cite{PaperAkihiro} proposed a depth sensor based approach for paw tracking of mice on a transparent floor. The proposed system captures the subject's shape from beneath using a low-cost infrared depth sensor (Microsoft Kinect) and an opaque infrared-pass filter. The system is composed of an open-field apparatus, a Kinect sensor, and a personal computer. The open field is a square of $400 mm \times 400 mm$ and the height of the surrounding wall is 320 mm. The Kinect device is fixed 430 mm below the floor so that the entire open-field area can be captured by the device. For the experiments in the opaque condition, the floor of the open field was covered with tiled infrared-pass filters (FUJIFILM IR-80 (Fuji Film, Tokyo, Japan)), which are commonly used in commercial cameras. The depth maps, consisting of $320 \times 240$ depth pixels, are captured at 30 frames per second. The tracking algorithm has four steps: preprocessing, feature-point extraction, footprint detection, and labeling. During preprocessing, the subject's depth information is extracted by applying background subtraction to the raw depth map. The noise produced by the preprocessing steps is removed by morphological operations. The AGEX algorithm \citep{PaperAGEX} is used for feature extraction after preprocessing. The center of mass of the AGEX point clouds is used for paw detection and labeling: all those pixels whose Euclidean distance from the center of mass is lower than a threshold are considered to be member pixels of the paws. This framework offers the benefit of low computational cost, but it is not robust, and it can be used only for paw tracking in a specific setting. A sketch of these detection steps is given below.
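The sketch below mirrors those four steps on a single depth map, with the AGEX feature extraction replaced by plain connected-component analysis for brevity; the function name and all numeric values are assumptions, not the authors' settings.
\begin{verbatim}
import cv2
import numpy as np

def detect_paws(depth, background, fg_thresh=10, min_area=20,
                radius=8.0):
    """depth, background: uint16 depth maps (mm) captured from
    below. Returns (center_of_mass, member_pixel_mask) per paw."""
    d = depth.astype(np.int32)
    b = background.astype(np.int32)

    # Step 1: background subtraction isolates the subject's pixels.
    subject = (np.abs(d - b) > fg_thresh).astype(np.uint8)

    # Step 2: morphological opening removes sensor speckle noise.
    kernel = np.ones((3, 3), np.uint8)
    subject = cv2.morphologyEx(subject, cv2.MORPH_OPEN, kernel)

    # Step 3: candidate extremities as connected components.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        subject, connectivity=8)

    # Step 4: pixels within a Euclidean distance of each component's
    # center of mass are taken as member pixels of that paw.
    paws = []
    ys, xs = np.indices(subject.shape)
    for i in range(1, n):          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        cx, cy = centroids[i]
        members = (xs - cx) ** 2 + (ys - cy) ** 2 < radius ** 2
        paws.append(((cx, cy), members & (subject > 0)))
    return paws
\end{verbatim}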
\par César S. Mendes et al. \cite{PaperMendes} proposed an integrated hardware and software system called 'MouseWalker' that provides a comprehensive and quantitative description of kinematic features in freely walking rodents. The MouseWalker apparatus primarily comprises four components: the fTIR floor and walkway wall, the supporting posts, the $45^{\circ}$ mirror, and the background light. A white LED light strip for black-and-white cameras, or a colored LED light strip for color cameras, is glued to a 3/8-inch U-channel aluminum base LED mount. This LED/aluminum bar is clamped to the long edges of a 9.4-mm (3/8-inch) thick piece of acrylic glass measuring 8 by 80 cm. A strip of black cardboard is glued and sewn over the LED/acrylic glass contact areas. To build the acrylic glass walkway, all four sides were glued together with epoxy glue and cable ties and placed over the fTIR floor. Videos are acquired using a Gazelle 2.2-MP camera (Point Grey, Richmond, Canada) mounted on a tripod and connected to a Makro-Planar T 2/50 lens (Carl Zeiss, Jena, Germany) at maximum aperture (f/2.0) to increase light sensitivity and minimize depth of field. The 'MouseWalker' program is developed and compiled in MATLAB (The Mathworks, MA, USA) \cite{MouseWalker}. The body and footprints of the mouse are distinguished from the background and from each other based on their color or pixel intensity. The RGB colors of the mouse body and footprints are user defined. The tail is identified as a consecutive part of the body below a thickness threshold. Three equidistant points along the tail are used to characterize tail curvature. The head is defined by the relative position of the nose. The center and direction of this head part are also recorded, along with the center of the body without the tail and its orientation. A body ``back'' point is defined as the point halfway between the body center and the start of the tail. For the footprints of the animal, the number of pixels within a footprint, as well as the sum of the brightness of these pixels, are stored by the software. The 'MouseWalker' can be used to track speed, step frequency, swing period and step length, stance time, body linearity index, footprint clustering, and leg combination indexes: no swing, single-leg swing, diagonal-leg swing, lateral-leg swing, front or hind swing, three-leg swing, or all-legs swing (unitless). Like other hardware-assisted methods, this method also suffers from a lack of portability and scalability. \par Wang et al. proposed a pipeline for tracking motion and identifying micro-behaviors of small animals based on Microsoft Kinect sensors and IR cameras \cite{PaperWang}. This is achieved by employing Microsoft Kinect cameras along with normal video cameras to record the movement of freely behaving rodents from three different perspectives. The IR depth images from the Microsoft Kinect are used to extract the shape of the rodents by background subtraction. After shape extraction, five pixel-based features are extracted from the resultant blobs, which are used for tracking and behavior classification by Support Vector Machines. Although the pipeline is not used exclusively for motion tracking, the idea of using depth cameras is potentially a good candidate for motion tracking as well. \par Monteiro et al. \cite{PaperMoteiro} took a similar approach to Wang et al. \cite{PaperWang} by using Microsoft Kinect depth cameras for video capturing. Instead of using background subtraction, they introduced a rough temporal context by tracking morphological features across multiple frames. In their approach, morphological features are extracted frame by frame; features from multiple adjacent frames are then concatenated to introduce a rough temporal context, and finally a decision tree is trained on this dataset for automatic behavior classification. The authors have reported a classification accuracy of 66.9\% when the classifier is trained to classify four behaviors on depth-map videos of 25 minutes duration. When only three behaviors are considered, the accuracy jumps to 76.3\%. Although the introduced temporal context is rough and the features are primitive, the classification performance achieved firmly establishes the usefulness of machine learning in behavioral classification. Like \cite{PaperWang}, this approach is not used solely for motion tracking, but the rough temporal context it introduces, together with depth cameras, can be beneficial in motion-tracking-only approaches; the windowed-feature scheme is sketched below.
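The following sketch shows the windowing idea under stated assumptions: hypothetical per-frame morphological features and labels stand in for the real data, and scikit-learn's decision tree replaces whatever implementation the authors used.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def add_temporal_context(features, window=5):
    """features: (n_frames, n_feats) per-frame descriptors. Each
    output row stacks the features of `window` adjacent frames."""
    n, _ = features.shape
    rows = [features[i:i + window].ravel()
            for i in range(n - window + 1)]
    return np.asarray(rows)

# Hypothetical per-frame morphological features (area, elongation,
# ...) extracted from depth maps, with per-frame behavior labels.
frame_feats = np.random.rand(1000, 6)
labels = np.random.randint(0, 4, size=1000)

X = add_temporal_context(frame_feats, window=5)
y = labels[4:]    # label each window by its last frame

clf = DecisionTreeClassifier(max_depth=8).fit(X, y)
\end{verbatim}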
\par Voigts et al. proposed an unsupervised whisker tracking pipeline aided by the use of IR sensors for selective video capture \cite{PaperVoigts}. They captured high-speed (1000 frames per second) video data by selectively recording those frames which contained mice. This was achieved by sensing the mouse with an IR sensor, which triggers the video camera to start recording; once the mouse leaves the arena, the IR sensor triggers the video camera to stop capturing. This selectively acquired video data is used for whisker tracking. First, a background mask is calculated by averaging 100 frames containing no mice, and this mask is subtracted from every single frame. Vector fields whose flows converge on whisker-like structures are then generated from each frame. These fields are integrated to generate spatially continuous traces of whiskers, which are grouped into whisker splines. This approach is completely unsupervised and includes a rough temporal context, but it is very greedy in terms of computational resources, so it cannot be employed in real time. \par Petrou et al. \cite{PaperPetrou} proposed a marker-assisted pipeline for tracking the legs of female crickets. The crickets are filmed with three cameras, two mounted above and one mounted below the crickets, which are made to walk on a transparent glass floor. Leg joints are marked with fluorescent dyes for better visualization. The tracking procedure is initiated by a user by selecting the marker positions in the initial frames. The initial tracking is carried forward to subsequent frames by constrained optimization, using the Euclidean distance between the joints of the current frame and the next frame. This pipeline does a decent job in terms of tracking performance, as the average deviation between human-annotated ground truth (500 digitized frames) and automatic tracking is 0.5 mm, where the spatial resolution of the camera is 6 pixels/mm. This approach, however, requires a special setup and cannot be exported to other environments. \par Xu et al. \cite{PaperXu} proposed another marker-assisted tracking pipeline for small animals. In the proposed pipeline, the limbs and joints are first shaved and marked with dyes, and then recorded with consumer-grade cameras (200 frames per second). Tracking is then done in steps which include marker position estimation, position prediction, and occlusion handling. The marker position is estimated by correlation in one of two ways. In the first, the normalized cross correlation between a grayscale region of interest and user-generated sample markers is computed, and the pixels with the highest correlation are considered the marker pixels. In the second, the normalized covariance between the marker model and the color ROI is used to estimate the pixels with the highest normalized covariance values, which are considered marker pixels. Once the marker positions are estimated in the current frame, they are projected to the next frame by polynomial fitting and Kalman filters. For occlusion handling, they assume that a marker position or the image background cannot change abruptly, so if there is a sudden change, it must be an occlusion. The approach is simple and scalable enough to be exported to any environment; at the same time, due to its dependency on markers, it is not robust. The correlation-plus-Kalman scheme is sketched below.
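As an illustration of the first correlation method combined with Kalman prediction and the abrupt-change occlusion test, consider the following sketch; the matcher, the constant-velocity state model, and the jump threshold are assumptions rather than the authors' configuration.
\begin{verbatim}
import cv2
import numpy as np

def locate_marker(gray_roi, template):
    """Best normalized-cross-correlation match of a user-supplied
    marker template inside a grayscale region of interest."""
    scores = cv2.matchTemplate(gray_roi, template,
                               cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc   # (x, y) of the best match

# Constant-velocity Kalman filter: state (x, y, vx, vy),
# measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(gray_roi, template, max_jump=20.0):
    """Predict, measure, and reject abrupt jumps as occlusions
    (a marker position cannot change abruptly)."""
    predicted = kf.predict()[:2].ravel()
    x, y = locate_marker(gray_roi, template)
    if np.hypot(x - predicted[0], y - predicted[1]) > max_jump:
        return predicted   # occlusion: fall back on the prediction
    kf.correct(np.array([[x], [y]], np.float32))
    return np.array([x, y], np.float32)
\end{verbatim}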
The authors have reported a tracking accuracy of 90\% when the user was allowed to make corrections in 3-5\% of the frames. The proposed approach is simple in terms of implementation, accurate in terms of spatial and temporal resolution, and easy to operate. However, it needs constant user assistance and does not have any self-correction capability. \par Hwang et al. \cite{PaperHwang} followed an approach similar to the one proposed by John et al. \cite{PaperJohn}, but without the use of markers. They used a combination of six color charge-coupled device (CCD) cameras (BASLER Co. Sca640-70fc) for video recording of the insects. To capture the diverse motions of the target animal, they used two downward cameras and four lateral cameras as well as a transparent acrylic box. The initial skeleton of the insect was calculated manually, so the method is not completely automated. After the initial skeleton, they estimated the roots and extremities of the legs, followed by middle joint estimation. Any errors in the estimation were corrected by Forward And Backward Reaching Inverse Kinematics (FABRIK) \cite{PaperFABRIK}. The authors have not reported any quantitative results which might help us compare it with other similar approaches; however, they have included graphics of their estimation results in the paper. This paper does not directly deal with motion estimation in rodents; however, given the unique approach to using cameras and pose estimation, it is a worthwhile addition to the research in the field. \section{Video tracking methods mostly dependent on software based tracking} In this section, we will focus on all those research works which try to solve the locomotion and gesture tracking problem by processing raw and un-aided video streams. In this scenario, there is no specialized hardware installed apart from one or multiple standard video cameras. There are no physical markers on the mice/animals' bodies either which could help track their motion. These works approach the problem from a purely computer vision point of view. \subsection{Semi-automated} \par Gyory et al. \cite{PaperGyory} proposed a semi-automated pipeline for tracking rats' whiskers. In the proposed pipeline, videos are acquired with high speed cameras (500 frames per second) and are first pre-processed to adjust their brightness. The brightness-adjusted image is eroded to get rid of small camera artifacts. Then a static background subtraction is applied, which leaves only the rat body in the field of view. As whiskers are represented by arcs with varying curvature, a polar-rectangular transform is applied and then a horizontal circular shift is introduced so that the whiskers are aligned as straight lines on a horizontal plane. Once the curved whiskers are represented by straight lines, the Hough transform is used to locate them. The approach is too fragile and insufficiently robust to be considered for any future improvements. The reported computational cost is high (processing speed of 2 fps). Also, it works only on high speed videos (>500 fps). It is highly sensitive to artifacts, and it cannot handle occlusion, dynamic noise or broken whisker representations. \par Hamers et al. proposed a specific setup based on an inner-reflecting plexiglass walkway \cite{PaperHamers}. The animals traverse a walkway (plexiglass walls, spaced 8 cm apart) with a glass floor ($109 \times 15 \times 0.6$ cm) located in a darkened room. The walkway is illuminated by a fluorescent tube from the long edge of the glass floor.
For most of the way, the light travels internally within the glass walkway, but when some pressure is applied, for example by the motion of a mouse, the light escapes and is visible from outside. The escaped light, which is scattered from the paws of the mouse, is recorded by a video camera aimed at a $45^\circ$ mirror beneath the glass walkway. The video frames are then thresholded to detect bright paw prints. The paws are labeled (left, right, front, hind). The system can extrapolate a tag (label of the footprint) to the bright areas in the next frame, which minimizes the need for user intervention, but in some cases user intervention becomes necessary. The authors have not reported paw detection/tracking performance. \subsection{Completely automated} \par Da Silva et al. conducted a study on the reproducibility of automated tracking of behaving rodents in controlled environments \cite{PaperDaSilva}, observing rats in a circular box of 1 m diameter with 30 cm walls. The monitoring camera was mounted in such a way that it captured the rodents from a top view while they were behaving. They used a simple thresholding algorithm to determine the pixels belonging to the rodent. Although the method is rudimentary compared to the state of the art, the authors have reported a Pearson correlation of $r = 0.873$ when they repeated the same experiment at different ages of the animals, thus validating its reproducibility. However, this setup can only be used to track the whole body of rodents; it cannot identify micro-movements such as limb motion. \par Leroy et al. proposed the combination of a transparent Plexiglas floor and background-modeling-based motion tracking \cite{PaperLeroy}. The rodents were made to walk on a transparent Plexiglas floor illuminated by fluorescent light and were recorded from below. A background image was taken when there was no mouse on the floor. This background image was subtracted from every video frame to produce a continuously updated mouse silhouette. The mouse tail was excluded by an erosion followed by a dilation of the mouse silhouette. Then the center of mass of the mouse was calculated, which was tracked through time to determine whether the mouse was running or walking. Since the paws are colored, color segmentation is used to isolate the paws from the body. The authors have reported a maximum tracking error of $4 \pm 1.9$ and a minimum tracking error of $2 \pm 1.6$ when 203 manually annotated footprints are compared to their automatic counterparts. \par Dankert et al. proposed a machine-vision-based automated behavioral classification approach for Drosophila \cite{PaperDankert}. The approach does not cover locomotion in rodents; it covers micro-movements in flies. Videos of a pair of male and female flies are recorded for 30 minutes in a controlled environment. Wing beat and leg motion data are manually annotated for lunging, chasing, courtship and aggression. The data analysis consists of five stages. In the first stage, the foreground image $F_I$ is computed by dividing the original image $I$ by $(\mu_I + 3\sigma_I)$. In the second stage, the fly body is localized by fitting a Gaussian mixture model (GMM) \cite{PaperGMM} with three Gaussians (background, other body parts and body) to the histogram of $F_I$ values using the Expectation Maximization (EM) algorithm \cite{PaperGMM}. All pixels with brightness values greater than a threshold are assigned to the body, and are fit with an ellipse.
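\par As a minimal sketch of this GMM-EM step (assuming scikit-learn, whose GaussianMixture class fits by EM; picking the brightest component as the body is our illustrative reading of the procedure, not necessarily the published implementation):
\begin{verbatim}
# Sketch: fit three Gaussians (background, other parts, body) to the
# foreground-image pixel values and keep pixels of the brightest one.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_body(foreground_img):
    vals = foreground_img.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(vals)
    body = int(np.argmax(gmm.means_.ravel()))  # brightest Gaussian
    labels = gmm.predict(vals)
    return (labels == body).reshape(foreground_img.shape)
\end{verbatim}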
In the third stage, the full fly is detected by segmenting the complete fly, with body parts and wings, from the background \cite{PaperOtsu}. In the fourth stage, head and abdomen are resolved by dividing the fly along the minor axis of the body ellipsoid and comparing the brightness-value distributions of the two halves. In the fifth stage, 25 measurements are computed, characterizing body size, wing pose, and the position and velocity of the fly pair. A k-nearest neighbor classifier is trained for action detection. The authors have reported a false positive rate of 0.01 for lunging when 20 minutes worth of data was used for training the classifier. Although this article does not directly deal with rodents, the detection and tracking algorithms used for legs and wings could be used for leg motion detection in rodents too. \par Nathan et al. \cite{PaperNathan} proposed a whisker tracking method for mice based on background subtraction, whisker modeling and statistical approaches. The heads of the mice were fixed, so they were not behaving freely. They used a high speed camera with a frame rate of 500 frames per second. In order to track whiskers, an average background image was modeled from all the video frames and then subtracted from every single frame. Afterwards, pixel-level segmentation was done to initiate candidate sites by looking for line-like artifacts. Once the candidate boxes are initiated, they are modeled by two ellipsoids with perpendicular axes. The ellipsoid with the higher eccentricity is the best possible candidate site for whiskers. These whiskers are then traced in every single frame of the video sequence using expectation maximization. The approach has some strong points: it requires no manual initiation, it is highly accurate, and because of its superb spatial resolution and pixel-level tracking, even micro-movements of whiskers can be tracked. But all these strengths come at a cost; the approach is computationally very expensive, which means it cannot be deployed in real time. There is another downside to pixel-level and frame-level processing: the temporal context is lost in the process. \par Kim et al. \cite{PaperKim} proposed a method similar to the one proposed by Clack et al. \citep{PaperNathan} to track whisker movements in freely behaving mice. They use Otsu's algorithm to separate foreground and background and then find the head of the mouse by locating a triangular shaped object in the foreground. Once the head and snout are detected, the Hough transform is used to find line-like shapes (whiskers) on each side of the snout. Midpoints of the detected lines are used to form ellipsoidal regions which help track whiskers in every single frame. This pipeline was proposed to track whisking in mice after surgical procedures. There is no ground truth available, so the approach cannot be evaluated quantitatively for tracking. Besides, the pipeline is not feasible for real time deployment due to high computational costs. \par Palmer et al. proposed a paw-tracking algorithm for mice grabbing food, which can be used for gesture tracking as well \citep{PaperPalmer9}. They developed the algorithm by treating it as a pose estimation problem. They model each digit as a combination of three phalanges (bones). Each bone is modeled by an ellipsoid. For four digits, there are twelve ellipsoids in total. The palm is modeled by an additional ellipse. The forearm is also modeled as an ellipsoid, while the nose is modeled as an elliptic paraboloid.
The paw is modeled using 16 parameters for the digits (four degrees of freedom per digit), four constant vectors representing the metacarpal bones and 6 parameters for the position and rotation of the palm of the paw. Furthermore, the forearm is assumed to be fixated at the wrist and can rotate along all three axes in space. This amounts to a total of 22 parameters. In each frame, these ellipsoids are projected in such a way that they best represent the edges. The best projection of the ellipsoids is found by optimization and is considered the paw. They have not reported any quantitative results. This approach is very useful if the gesture tracking problem is treated as pose estimation with a temporal context. \par In \cite{PaperPalmer}, Palmer et al. extended their work from \cite{PaperPalmer9}. The basic idea is the same: the paw is modeled as being composed of different parts, four digits (fingers), each digit having three phalanges (bones). Each phalanx is modeled by an ellipsoid, so there are twelve ellipsoids for the phalanges plus an additional one for the palm. In this paper, the movement of the 13 ellipsoids is modeled by vectors with 19 degrees of freedom, unlike the 22 from \cite{PaperPalmer9}. The solution hypotheses are searched not simultaneously, but in stages, to reduce the number of calculations. This is done by creating different numbers of hypotheses for every joint of every digit and then finding the optimal hypotheses. \par Giovannucci et al. \cite{PaperOurs} proposed an approach based on optical flow and cascade learners for tracking head and limb movements in head-fixed mice walking/running on a spherical/cylindrical treadmill. Unlike other approaches, only one camera installed with a lateral field of view was used for limb tracking, and one camera installed in front of the mouse was used for whisker tracking. They calculated dense optical flow fields in a frame-to-frame manner for whisker tracking. The estimated optical flow fields were used to train dictionary learning algorithms for motion detection in whiskers. They annotated 4217 frames for limb detection and 1053 frames for tail detection and then used them to train Haar cascade classifiers for both cases. They have reported a high correlation of $0.78 \pm 0.15$ for whiskers and $0.85 \pm 0.01$ for the hind limb. The hardware solution proposed in the paper is low cost and easy to implement. The tracking approach is also computationally undemanding and can be run in real time. They did not, however, deal with the micro-patterns in motion dynamics, which can be best captured by including a temporal context in the tracking approach. \par Heidi et al. proposed Automated Gait Analysis Through Hues and Areas (AGATHA) \cite{PaperAGATHA}. AGATHA first isolates the sagittal view of the animal by subtracting a background image where the animal is not present and transforming the frame into an HSV (Hue, Saturation, Value) image. The hue values are used to convert the HSV image into a binary silhouette. Next, AGATHA locates the row of pixels representing the interface between the rat and the floor. AGATHA may not accurately locate the rat-floor interface if the animal moves with a gait pattern containing a completely aerial phase. Second, AGATHA excludes the majority of nose and tail contacts with the floor by comparing the contact point to the animal's center of area in the sagittal view. Foot contact with the ground is visualized over time by stacking the rat/floor interface across multiple frames.
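\par A minimal sketch of these first steps follows; the hue threshold and the choice of the lowest occupied row as the interface are illustrative assumptions, not AGATHA's published parameter values:
\begin{verbatim}
# Sketch: background subtraction, HSV conversion, hue-based binary
# silhouette, and the lowest occupied row as the animal-floor interface.
import cv2
import numpy as np

def floor_interface_row(frame_bgr, background_bgr, hue_max=30):
    diff = cv2.absdiff(frame_bgr, background_bgr)
    hsv = cv2.cvtColor(diff, cv2.COLOR_BGR2HSV)
    silhouette = (hsv[:, :, 0] < hue_max) & (diff.sum(axis=2) > 30)
    rows = np.where(silhouette.any(axis=1))[0]
    return int(rows.max()) if rows.size else -1  # -1: aerial phase

# Stacking this interface row over consecutive frames yields the
# footfall visualization used for the gait analysis described next.
\end{verbatim}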
The paw contacts stacked over multiple frames are then used for gait analysis. Multiple gait parameters, such as limb velocity and stride frequency, can be calculated. When results from AGATHA were compared to manual annotation on a 1000 fps video, they deviated only by a small amount. For example, the limb velocity calculated by AGATHA was 1.5\% off from the velocity calculated manually. Similarly, AGATHA registered a difference of 0.2 cm in stride length from the manual annotation. In general, the approach is simple and scalable but limited in scope. \section{Conclusion} \par Gesture detection and tracking approaches are still in a developing phase. There is no single approach powerful enough to track the micro-movements of limbs, whiskers or snout of rodents, which are necessary for gesture identification and behavioral phenotyping. In general, approaches which use specialized hardware are more successful than approaches which depend solely on a standard video camera. For example, the use of X-ray imaging to detect surgically implanted markers has proven very successful for tracking limb and joint movements with high precision. Moreover, the use of specific markers attached to either the limbs or whiskers of the rodents also increases the overall tracking accuracy of an approach. However, there is a downside: the rodents might not behave naturally. Therefore, more and more research is being conducted on scalable, portable and non-invasive tracking methods which only need standard video cameras. \par We have summarized some important aspects of selected approaches in table \ref{table1}. The following points need to be kept in mind to properly interpret the table. \paragraph{Code availability:} This indicates whether the code is available or not and, if it is available, whether it is free or paid. \paragraph{Performance:} If the performance is given in terms of standard deviation (SD), it signifies the consistency of the proposed approach, either against itself or against an annotated dataset (which is pointed out). For example, if the table says that the proposed system can make a 90\% accurate estimation of limb velocity with an SD of 3\%, it means that the system performance fluctuates somewhere between 87\% and 93\%. If absolute accuracy is given, it means each and every detected instance is compared to manually annotated samples. If only a \% SD or just an SD is given, it means that the system can consistently reproduce the same result with the specified amount of standard deviation, regardless of its performance against the ground truth. \paragraph{Need specialized setup \& Invasiveness:} This indicates whether the method needs any specialized hardware other than the housing setup or video cameras. If the housing setup itself is arranged in a specific way but does not contain any specialized materials, we say that the hardware setup required is not specialized. By invasive, we mean that a surgery has to be conducted to implant the markers. If markers are used but no surgery is needed to implant them, we call the approach semi-invasive. If no markers are needed, we call it non-invasive. \begin{landscape} \begin{longtable} {|p{0.5cm}|l|p{4cm}|p{9cm}|p{2cm}|p{2cm}|} \caption{Comparison of different approaches. Legend: invasive: approaches which require surgery to put markers for tracking; semi-invasive: approaches which do not need surgery for marker insertion; non-invasive: no marker needed. Real time means that the system can process frames at the same rate at which they are acquired.
If it needs specialized equipment apart from standard video cameras and housing setup, this is pointed out in the last column}\\ \label{table1}\\ \hline & Type & Code availability & Performance & Real time or offline & Need specialized setup \& Invasiveness \\ \hline \cite{PaperDigiGaitAndTreadScan}& Commercial & Paid & Comparison with ground truth not provided. One paper reports the reproducibility: 2.65\% max SD & Yes & Yes \\ \hline \cite{PaperGaitScan} & Commercial & Paid & Comparison with ground truth not provided. One paper reports the reproducibility: 1.57\% max SD & Yes & Yes \\ \hline \cite{PaperKain} & Research & data and code for demo available at http://lab.debivort.org/leg-tracking-and-automated-behavioral-classification-in-Drosophila/ & tracking performance not reported, behavioral classification of 12 traits reported to be at most 71\% & Tracking real time, classification offline & yes \\ \hline \cite{PaperScott} & Research & not available & tracking: SD of only 0.034\% when compared with ground truth, max SD of 1.71 degrees in estimating joint angle & real time legs and joints tracking & yes, invasive \\ \hline \cite{PaperHarvey} & Research & not available & tracking performance not reported explicitly & real time whisker tracking & yes, semi-invasive \\ \hline \cite{PaperBemejo} & Research & available on request & whisker tracking performance not reported explicitly & real time single whisker tracking & yes, semi-invasive \\ \hline \cite{PaperKyme} & Research & not available & head motion tracked correctly with a maximum false positive rate of 13\% & real time head and snout tracking & yes, semi-invasive \\ \hline \cite{PaperKymeMarker} & Research & not available & head motion tracked continuously with a reported SD of only 0.5 mm & real time head and snout tracking & yes, semi-invasive \\ \hline \cite{PaperPasquet} & Research & not available & head motion tracked with an accuracy of 96.3\% and the tracking can be reproduced over multiple studies with a correlation coefficient of 0.78 & real time head tracking & yes, semi-invasive \\ \hline \cite{PaperKnutsen} & Research & code and demo data available at https://goo.gl/vYaYPy & they reported a correlation between whisking amplitude and velocity as a measure of reliability, R = 0.89 & Offline head and whisker tracking & no, invasive \\ \hline \cite{PaperGravel} & Research & not available & Tracking and gait prediction with confidence of 95\%, deviation between human annotator and computer at 8\% & Offline & yes, semi-invasive \\ \hline \cite{PaperAkihiro} & Research & not available & Paw tracked with an accuracy of 88.5\% on a transparent floor and 83.2\% on an opaque floor & Offline & yes, semi-invasive \\ \hline \cite{PaperMendes} & Research & code available at \url{https://goo.gl/58DQij} & tail and paws tracked with an accuracy >90\% & Real time & yes, semi-invasive \\ \hline \cite{PaperWang} & Research & not available & 5 class behavioral classification problem, accuracy in bright conditions is 95.34\% and in dark conditions is 89.4\% & offline & yes, non-invasive \\ \hline \cite{PaperMoteiro} & Research & not available & 4 behavioral class accuracy: 66.9\%, 3 behavioral class accuracy: 76.3\% & offline & yes, non-invasive \\ \hline \cite{PaperVoigts} & Research & code available at \url{https://goo.gl/eY2Yza} & whisker detection rate: 76.9\%, peak spatial error in whisker detection: 10 pixels & offline & yes, non-invasive \\ \hline \cite{PaperPetrou} & Research & not available & Peak deviation between human annotator and automated
annotation: 0.5 mm with a camera resolution of 6 pixels/mm & offline & yes, non-invasive \\ \hline \cite{PaperJohn} & Research & not available & Tracking accuracy >90\% after the algorithm was assisted by human users in 3-5\% of the frames & offline & yes, semi-invasive \\ \hline \cite{PaperGyory} & Research & code available at \url{https://goo.gl/Gny89o} & A maximum deviation of 17.7\% between human and automated whisker annotation & offline & yes, non-invasive \\ \hline \cite{PaperLeroy} & Research & not available & Maximum paw detection error: 5.9\%, minimum error: 0.4\% & offline & no, non-invasive \\ \hline \cite{PaperDankert} & Research & Source code at \url{https://goo.gl/zesyez}, demo data at \url{https://goo.gl/dn2L3y} & Behavioral classification: 1\% false positive rate & offline & no, semi-invasive \\ \hline \cite{PaperNathan} & Research & Source code available at \url{https://goo.gl/JCv3AV} & Whisker tracing accuracy: max error of 0.45 pixels & offline & no, non-invasive \\ \hline \cite{PaperOurs} & Research & not available & Correlation with annotated data: for whiskers r = 0.78, for limbs r = 0.85 & real time & no, non-invasive \\ \hline \cite{PaperAGATHA} & Research & code available at \url{https://goo.gl/V54mpL} & Velocity calculated by AGATHA was off from the manually calculated velocity by 1.5\% & real time & no, non-invasive \\ \hline \end{longtable} \bigskip\centering \end{landscape} \subsection{Future Research} \par Based on the literature survey we conducted, we have the following recommendations for future research: \begin{enumerate} \item Methods would benefit from an effective use of different camera configurations to obtain spatial data at high resolution in 3D space. \item One of the most relevant shortcomings of the field is the lack of public databases for validating new algorithms. Different approaches are tested on the (usually private) data of the lab developing the solution. Building a standardized gesture tracking dataset which can be used as a benchmark would benefit the community, in a similar way to how large object recognition databases (PASCAL, ImageNet or MS COCO) enabled significant progress in the computer vision literature. \item Currently there exist large amounts of unlabeled data (thousands of video hours). The use of unsupervised learning algorithms that could benefit the parameter learning of supervised methods is one of the most challenging future research lines. \item In addition, the use of semi-supervised and weakly-supervised learning algorithms could benefit the community. The challenge in this particular case is to minimize the user intervention (supervision) while maximizing the improvements in accuracy. \item Finally, deep learning methods have been shown to outperform previous approaches on many computer vision tasks. Their application to this field seems a promising research line. \end{enumerate} \section{Bibliography}
\section{\label{sec:level1}Introduction} There is an irrefutable amount of evidence suggesting the presence of a mysterious, non-luminous, collisionless and non-baryonic form of matter in the present universe \cite{Tanabashi:2018oca}. The hypothesis of the existence of this form of matter, more popularly known as dark matter (DM) due to its non-luminous nature, is strongly backed by early galaxy cluster observations~\cite{Zwicky:1933gu}, observations of galaxy rotation curves~\cite{Rubin:1970zza}, the more recent observation of the bullet cluster~\cite{Clowe:2006eq} and the latest cosmological data provided by the Planck satellite~\cite{Aghanim:2018eyx}. The latest data from the Planck satellite suggest that around $27\%$ of the present universe's energy density is in the form of dark matter. In terms of the density parameter $\Omega$ and $h = \text{(Hubble Parameter)}/(100 \;\text{km} \, \text{s}^{-1} \text{Mpc}^{-1})$, the present dark matter abundance is conventionally reported as \begin{equation} \Omega_{\text{DM}} h^2 = 0.120\pm 0.001 \label{dm_relic} \end{equation} at 68\% CL~\cite{Aghanim:2018eyx}. While astrophysics- and cosmology-based experiments have been providing such evidence for the presence of dark matter at regular intervals over the last several decades, hardly anything is known about its particle nature. The requirements which a particle dark matter candidate has to satisfy, as pointed out in detail by the authors of \cite{Taoso:2007qk}, rule out all the standard model (SM) particles from being DM candidates. While the neutrinos in the SM come very close to satisfying these requirements, they have a tiny abundance in the present universe. Apart from that, they have a large free streaming length (FSL) due to their relativistic nature and give rise to hot dark matter (HDM), which is ruled out by observations. This has led to a plethora of beyond standard model (BSM) scenarios proposed by the particle physics community to account for dark matter in the universe. Most of these BSM scenarios are based on a popular formalism known as the weakly interacting massive particle (WIMP) paradigm. In this formalism, a particle dark matter candidate having a mass around the electroweak scale and electroweak-type couplings to SM particles can give rise to the correct relic abundance in the present epoch, a remarkable coincidence often referred to as the \textit{WIMP Miracle}~\cite{Kolb:1990vq}. Since the mass is around the electroweak corner and the couplings to SM particles are sizeable, such DM candidates are produced thermally in the early universe, followed by their departure from chemical equilibrium leading to freeze-out. Such DM candidates typically become non-relativistic shortly before the epoch of freeze-out and much before the epoch of matter-radiation equality, and are therefore also categorised as cold dark matter (CDM). The CDM candidates in the WIMP paradigm have very good direct detection prospects due to their sizeable interaction strength with SM particles and hence can be observed at ongoing and future direct search experiments \cite{panda17, Tan:2016zwf, Aprile:2017iyp, Akerib:2016vxi, Akerib:2015cja, Aprile:2015uzo, Aalbers:2016jon, Liu:2017drf}. However, no such detection has been made yet, casting doubt over the viability of such DM paradigms. This has also motivated the particle physics community to look for alternatives to the WIMP paradigm.
Although such null results could indicate a very constrained region of WIMP parameter space, they have also motivated the particle physics community to look beyond the thermal WIMP paradigm, where the interaction scale of the DM particle can be much lower than the weak interaction scale, i.e.\,\,DM may be more feebly interacting than in the thermal WIMP paradigm. This falls under the category of non-thermal DM \cite{Hall:2009bx}. In this scenario, the initial number density of DM in the early universe is negligible, and it is assumed that the interaction strength of DM with the other particles in the thermal bath is so feeble that it never reaches thermal equilibrium at any epoch in the early universe. In this setup, DM is mainly produced from the out-of-equilibrium decays of some heavy particles in the plasma. It can also be produced from scatterings of bath particles; however, if the same couplings are involved in both the decay and the scattering processes, then the former gives the dominant contribution to the DM relic density \cite{Hall:2009bx}. The production mechanism for non-thermal DM is known as freeze-in, and the candidates of non-thermal DM produced via freeze-in are often classified into a group called freeze-in (feebly interacting) massive particles (FIMP). For a recent review of this DM paradigm, please see \cite{Bernal:2017kxu}. Interestingly, such non-thermal DM candidates can have a wide range of allowed masses, well beyond the typical WIMP regime. The possibility of a light DM candidate has interesting implications for astrophysical structure formation in the universe. Although a light DM candidate like the SM neutrinos, which constitute HDM, is already ruled out, and CDM is one of the most well studied scenarios (especially within the context of the WIMP paradigm), there also exists an intermediate possibility where DM remains mildly relativistic at the epoch of matter-radiation equality. Consequently, the free streaming length of such candidates falls in between the large FSL of HDM and the small FSL of CDM. Such DM candidates, intermediate between HDM and CDM, are typically referred to as warm dark matter (WDM). WDM candidates have typical masses in the keV range, in contrast to the typical sub-eV mass of HDM and the GeV-TeV scale masses of CDM. For a recent review on a keV scale singlet fermion as WDM, please have a look at \cite{Adhikari:2016bei}. Although such WDM candidates may not be as motivating as WIMP or typical CDM candidates from the direct search point of view, there are strong motivations from the astrophysics point of view. Typical WDM scenarios can provide a solution to several small scale structure problems faced by the CDM paradigm. The missing satellite problem and the too-big-to-fail problem fall in the list of such small scale structure problems, a recent review of which can be found in \cite{Bullock:2017xww}. The above mentioned classification into HDM, CDM and WDM is primarily based on the FSL, typically equal to the distance over which the DM particles can propagate freely without interacting. Typically, the free streaming length $\lambda_{\text{FS}} = 0.1$ Mpc, about the size of a dwarf galaxy, acts as a boundary line between HDM ($\lambda_{\text{FS}} > 0.1$ Mpc) and WDM ($\lambda_{\text{FS}} < 0.1$ Mpc). For CDM, on the other hand, the FSL is considerably smaller than this value. Therefore, CDM structures keep forming down to scales as small as the solar system, which gives rise to disagreement with observations at small scales \cite{Bullock:2017xww}.
HDM, on the other hand, erases all small scale structure due to its large free streaming length, disfavouring the bottom-up approach of structure formation. WDM can therefore act as a balance between the already ruled out HDM possibility and the CDM paradigm with its issues at small scales. More details about the calculation of the FSL can be found in \cite{Boyarsky:2008xj, Merle:2013wta}. We show that our non-thermal DM candidate can be a keV scale fermion which can give rise to a sub-dominant WDM component. Such a mixed CDM-WDM hybrid scenario was also considered in some recent works \cite{Borah:2017hgt, DuttaBanik:2016jzv}. However, our model is not restricted to such combinations, as we show that the non-thermal DM candidate can have masses in the keV-GeV range as well. Apart from the mysterious $27\%$ of the universe in the form of unknown DM, the visible sector making up $5\%$ of the universe also creates a puzzle. This is due to the asymmetric nature of the visible sector. The visible or baryonic part of the universe has an abundance of baryons over anti-baryons. This is often quoted as the baryon-to-photon ratio $(n_{B}-n_{\bar{B}})/n_{\gamma} \approx 10^{-10}$, which is rather large in view of the large number density of photons. If the universe is assumed to start in a symmetric manner at the big bang epoch, which is a generic assumption, there has to be a dynamical mechanism that can lead to a baryon asymmetric universe at the present epoch. The requirements such a dynamical mechanism needs to satisfy were put forward by Sakharov more than fifty years ago and are known as the Sakharov conditions~\cite{Sakharov:1967dj}: baryon number (B) violation, C and CP violation, and departure from thermal equilibrium. Unfortunately, all these requirements cannot be fulfilled in the required amount within the framework of the SM, again leading to several BSM scenarios. Out-of-equilibrium decay of a heavy particle leading to the generation of a baryon asymmetry has been a very well known mechanism for baryogenesis \cite{Weinberg:1979bt, Kolb:1979qa}. One interesting way to implement such a mechanism is leptogenesis \cite{Fukugita:1986hr}, where a net leptonic asymmetry is generated first and then gets converted into a baryon asymmetry through $B+L$ violating EW sphaleron transitions. The interesting feature of this scenario is that the required lepton asymmetry can be generated within the framework of the seesaw mechanism \cite{Minkowski:1977sc, Mohapatra:1979ia, Yanagida:1979as, GellMann:1980vs, Glashow:1979nm, Schechter:1980gr} that explains the origin of tiny neutrino masses \cite{Tanabashi:2018oca}, another observed phenomenon which the SM fails to address. Although the explanations for dark matter, the baryon asymmetry of the universe and the origin of neutrino mass can arise independently in different BSM frameworks, it is interesting, economical and predictive to consider a common framework for their origin. In fact, a connection between DM and baryons appears to be a natural possibility to understand their same order of magnitude abundances, $\Omega_{\rm DM} \approx 5 \Omega_{B}$. Discarding the possibility of a numerical coincidence, one is left with the task of constructing theories that can relate the origin of these two observed phenomena in a unified manner. There have been several proposals already, which mainly fall into two broad categories.
In the first one, the usual mechanism for baryogenesis is extended to the dark sector, which is also asymmetric \cite{Nussinov:1985xr, Davoudiasl:2012uw, Petraki:2013wwa, Zurek:2013wia}. The second one is to produce such asymmetries through annihilations \cite{Yoshimura:1978ex, Barr:1979wb, Baldes:2014gca}, where one or more particles involved in the annihilations eventually go out of thermal equilibrium in order to generate a net asymmetry. The so-called WIMPy baryogenesis \cite{Cui:2011ab, Bernal:2012gv, Bernal:2013bga} belongs to this category, where a dark matter particle freezes out to generate its own relic abundance and an asymmetry in the baryon sector is produced from DM annihilations. The idea extended to leptogenesis is called WIMPy leptogenesis \cite{Kumar:2013uca, Racker:2014uga, Dasgupta:2016odo, Borah:2018uci}. Motivated by all these, we propose a scenario where the DM sector is a hybrid of one thermal and one non-thermal component, while the thermal DM annihilations play a dominant role in creating a leptonic asymmetry which eventually gets converted into a baryon asymmetry via sphaleron transitions. The non-thermal DM can also be at the keV scale, giving rise to the possibility of WDM, which can have interesting consequences for astrophysical structure formation as well as DM indirect detection experiments. The neutrino mass arises at one loop level, where the dark sector particles take part in the loop mediation. This paper is organised as follows. In section \ref{sec1} we discuss our model, followed by the origin of neutrino mass in section \ref{sec2}. In section \ref{sec3} we describe the co-genesis of WIMP, FIMP and lepton asymmetry, followed by the relevant constraints from direct detection and lepton flavour violation in section \ref{sec4}. We then discuss our results in section \ref{sec5} and finally conclude in section \ref{sec6}. \section{The Model} \label{sec1} We consider a minimal extension of the SM by two different types of singlet fermions and three different types of scalar fields, shown in tables \ref{tab1a} and \ref{tab2a} respectively. To achieve the desired interactions of these new fields among themselves as well as with the SM particles, we impose additional discrete symmetries $\mathbb{Z}_2 \times \mathbb{Z}^{\prime}_2$. While one such $\mathbb{Z}_2$ symmetry is enough to accommodate DM, radiative neutrino mass as well as the generation of a lepton asymmetry from DM annihilation, in a way similar to what is achieved in a version of the scotogenic model \cite{Borah:2018uci}, the other discrete symmetry $\mathbb{Z}^{\prime}_2$ is required in order to have the desired couplings of the FIMP DM. In particular, the symmetry $\mathbb{Z}^{\prime}_2$, under which $\chi, \phi$ and another singlet scalar $\phi^{\prime}$ are odd, prevents the tree level interaction between the FIMP DM and the SM leptons through $\bar{L} \tilde{H} \chi$, which is needed to avoid the decay of $\chi$ to light SM particles. If $\phi^{\prime}$ acquires a non-zero vacuum expectation value (vev), it can lead to a one loop mixing between the neutrinos and the non-thermal DM. The complete particle content and charge assignments are listed in tables \ref{tab1a} and \ref{tab2a}.
\begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Particles & $SU(3)_c \times SU(2)_L \times U(1)_{Y} \times \mathbb{Z}_2 \times \mathbb{Z}^{\prime}_2$ \\ \hline $Q_L$ & $(3,2, \frac{1}{3}, +, +)$ \\ $u_R$ & $(3^*,1,\frac{4}{3}, +, +)$ \\ $d_R$ & $(3^*,1,-\frac{2}{3}, +, +)$ \\ $\ell_L$ & $(1,2,-1, +, +)$ \\ $\ell_R$ & $(1,1,-2, +, +)$ \\ $\chi$ & $(1,1,0, +, -)$ \\ $N$ & $(1,1,0, -, +)$ \\ \hline \end{tabular} \end{center} \caption{Fermion content of the model} \label{tab1a} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Particles & $SU(3)_c \times SU(2)_L \times U(1)_{Y} \times \mathbb{Z}_2 \times \mathbb{Z}^{\prime}_2$ \\ \hline $H$ & $(1,2, 1, +, +)$ \\ $\eta$ & $(1,2, 1, -, +)$ \\ $\phi$ & $(1,1,0, -, -)$ \\ $\phi^{\prime}$ & $(1,1,0, +, -)$ \\ \hline \end{tabular} \end{center} \caption{Scalar content of the model} \label{tab2a} \end{table} The relevant part of the Yukawa Lagrangian is \begin{align} {\cal L} & \supset \frac{1}{2}(M_N)_{ij} N_iN_j + \frac{1}{2} m_{\chi} \chi \chi+y_{ij} \, \bar{L}_i \tilde{\eta} N_j +y^{\prime}_i \phi \chi N_i+ \text{h.c.} \label{yukawa2} \end{align} The scalar potential is \begin{equation} V = V_{H \eta} + V_{H \phi} + V_{\eta \phi} \end{equation} where \begin{align} V_{H \eta} &= \mu_H^2|H|^2 +\mu_{\eta}^2|\eta|^2+\frac{\lambda_1}{2}|H|^4+\frac{\lambda_2}{2}|\eta|^4+\lambda_3|H|^2|\eta|^2 \nonumber \\ & +\lambda_4|H^\dag \eta|^2 + \{\frac{\lambda_5}{2}(H^\dag \eta)^2 + \text{h.c.}\}, \label{scalar1a} \end{align} \begin{align} V_{H \phi} & = \mu_{\phi}^2\phi^2+\frac{\lambda_6}{2} \phi^4+\lambda_7|H|^2 \phi^2 + \mu_{\phi^{\prime}}^2 (\phi^{\prime})^2+\frac{\lambda^{\prime}_6}{2} (\phi^{\prime})^4+\lambda^{\prime}_7|H|^2 (\phi^{\prime})^2, \label{scalar2a} \end{align} \begin{align} V_{\eta \phi} & = \lambda_8 |\eta|^2 \phi^2 + \lambda_9 \eta^{\dagger} H \phi \phi^{\prime} + \lambda^{\prime}_8 |\eta|^2 (\phi^{\prime})^2 + \lambda^{\prime}_9 \phi^2 (\phi^{\prime})^2. \label{scalar3a} \end{align} Since we require the SM Higgs and the singlet scalar $\phi^{\prime}$ to acquire non-zero vevs as $$ \langle H \rangle=\left( 0, \;\; \frac{ v}{\sqrt 2} \right)^T, \;\; \langle \phi^{\prime} \rangle = \frac{u}{\sqrt 2}, $$ we minimise the above scalar potential with respect to these two fields and find the following minimisation conditions: \begin{align} -\mu^2_H = \frac{\lambda_1}{2} v^2 + \frac{\lambda^{\prime}_7}{2} u^2 \nonumber \\ -\mu^2_{\phi^{\prime}} = \frac{\lambda^{\prime}_6}{2} u^2 + \frac{\lambda^{\prime}_7}{2} v^2 \end{align} The corresponding mass squared matrix is \begin{equation} M^2_{H \phi^{\prime}} = \left(\begin{array}{cc} \ \lambda_1 v^2 & \lambda^{\prime}_7 \frac{vu}{2} \\ \ \lambda^{\prime}_7 \frac{vu}{2} & \lambda^{\prime}_6 u^2 \end{array}\right) \end{equation} This gives rise to a mixing between the SM-like Higgs and a singlet scalar given by \begin{equation} \tan{2\theta_1} \approx 2\sin{\theta_1} \approx 2\theta_1 = \frac{\lambda^{\prime}_7 vu}{\lambda^{\prime}_6 u^2-\lambda_1 v^2} \approx \frac{\lambda^{\prime}_7 v}{\lambda^{\prime}_6 u}, \end{equation} where in the last step we have assumed the hierarchy $u \gg v$. The mass eigenstates corresponding to the charged and pseudo-scalar components of $\eta$ are \begin{eqnarray} m_{\eta^\pm}^2 &=& \mu_{\eta}^2 + \frac{1}{2}\lambda_3 v^2 , \nonumber\\ m_{\eta_I}^2 &=& \mu_{\eta}^2 + \frac{1}{2}(\lambda_3+\lambda_4-\lambda_5)v^2=m^2_{\eta^\pm}+ \frac{1}{2}\left(\lambda_4-\lambda_5\right)v^2.
\label{mass_relation} \end{eqnarray} The neutral scalar components of $\eta$ and $\phi$ mix with each other, resulting in the following mass squared matrix: \begin{equation} M^2_{\eta \phi} = \left(\begin{array}{cc} \ \mu_{\eta}^2 + \frac{1}{2}(\lambda_3+\lambda_4+\lambda_5)v^2 +\frac{\lambda^{\prime}_8}{2}u^2 & \lambda_9 \frac{vu}{4} \\ \ \lambda_9 \frac{vu}{4} & \mu^2_{\phi}+\frac{\lambda_7}{2}v^2 + \frac{\lambda^{\prime}_9}{2}u^2 \end{array}\right), \end{equation} which can be diagonalized by a $2\times 2 $ unitary matrix with the mixing angle given by \begin{equation} \tan{2\theta_2} \approx 2 \sin{\theta_2} \approx 2 \theta_2 = \frac{\lambda_9 vu}{2( \mu^2_{\phi}+\frac{\lambda_7}{2}v^2 + \frac{\lambda^{\prime}_9}{2}u^2-\mu_{\eta}^2 - \frac{1}{2}(\lambda_3+\lambda_4+\lambda_5)v^2 -\frac{\lambda^{\prime}_8}{2}u^2)}. \end{equation} Considering $ \lvert m_{\eta} - m_{\phi} \rvert < m_{\chi}$, we can prevent the three body decays $\eta \rightarrow \phi \chi \nu$ or $\phi \rightarrow \eta \chi \nu$. Even if we were to allow such three body decays, they would be phase space suppressed compared to the two body decays which contribute to the production of the non-thermal DM $\chi$, which we will discuss shortly. We also consider the mixing between $\eta$ and $\phi$ to be non-zero, so that the thermal DM is an admixture of singlet and doublet scalars\footnote{We will use the notation DM to denote it in our analysis.}. This has crucial implications for the DM phenomenology as well as for leptogenesis, as we discuss below. Assuming $m_{\chi} \sim$ keV-GeV, we consider its production mechanisms, which also have the potential to produce a lepton asymmetry. The relevant diagrams for producing the lepton asymmetry and the non-thermal DM are shown in FIG. \ref{fig:coasym} and FIG. \ref{fig:fimp}, respectively.
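As a simple numerical cross-check of the mixing formulas above, one can build the $2\times 2$ mass squared matrix for a benchmark point and compare the exact mixing angle with the small-angle expression; all benchmark values below are illustrative assumptions, not fit results:
\begin{verbatim}
# Sketch: diagonalise the eta-phi mass-squared matrix numerically and
# compare with the small-mixing-angle formula quoted in the text.
import numpy as np

v, u = 246.0, 5000.0                    # vevs in GeV (u >> v assumed)
mu_eta2, mu_phi2 = 800.0**2, 780.0**2   # bare mass-squared terms
l3, l4, l5, l7 = 0.10, 0.05, 0.01, 0.10
l8p, l9, l9p = 0.10, 1.0e-3, 0.12       # primed couplings and lambda_9

M11 = mu_eta2 + 0.5*(l3 + l4 + l5)*v**2 + 0.5*l8p*u**2
M22 = mu_phi2 + 0.5*l7*v**2 + 0.5*l9p*u**2
M12 = l9*v*u/4.0
M2 = np.array([[M11, M12], [M12, M22]])

theta_exact = 0.5*np.arctan2(2.0*M12, M22 - M11)
theta_small = M12/(M22 - M11)           # tan(2*theta_2)/2 from the text
masses = np.sqrt(np.linalg.eigvalsh(M2))  # physical mass eigenvalues
print(theta_exact, theta_small, masses)
\end{verbatim}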
\begin{figure} \centering \begin{tabular}{lr} \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\eta^-$}; \vertex [below = 2.cm of i] (j){$N_i$}; \vertex [below right= 1.414cm of i] (v1); \vertex [right = 1.cm of v1] (v2); \vertex [right = 3.cm of i] (m){$X$}; \vertex [below = 2.cm of m] (o){$L_\alpha$}; \diagram*[small]{(i) -- [charged scalar] (v1),(v1) -- [fermion,edge label = $l_\alpha$] (v2),(v2) -- [ scalar] (m),(v2) -- [fermion] (o),(j) -- [anti fermion] (v1)}; \end{feynman} \end{tikzpicture} & \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\eta^-$}; \vertex [right = 3.cm of i] (j){$X$}; \vertex [below = 1.5cm of i] (k){$N_i$}; \vertex [below = 1.5cm of j] (l){$L_\alpha$}; \vertex [right = 1.5cm of i] (v1); \vertex [below = 1.5cm of v1] (v2); \diagram*[small]{(i) -- [charged scalar] (v1),(v1) -- [charged scalar,edge label = $\eta$] (v2),(v1) -- [scalar] (j),(v2) -- [ fermion] (l),(k) -- [anti fermion] (v2)}; \end{feynman} \end{tikzpicture} \\ \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\eta^-$}; \vertex [right = 4.cm of i] (j){$X$}; \vertex [below = 2.cm of i] (k){$N_i$}; \vertex [below = 2cm of j] (l){$L_\alpha$}; \vertex [right = 1.cm of k] (k1); \vertex [left = 1.cm of l] (l1); \vertex [right = 2.cm of i] (v1); \vertex [below = 2.cm of v1] (v2); \vertex [below = 1.cm of v1] (v3); \vertex [left = 0.5cm of v3] (v4); \vertex [below = 1.5cm of v4] (v5); \diagram*[small]{(i) -- [charged scalar] (v1),(v1) -- [charged scalar,edge label = $\eta$] (v3),(k) -- [anti fermion] (k1),(k1) -- [anti fermion,edge label=$l_\beta$] (v3),(k1) -- [charged scalar,edge label'=$\eta$] (l1),(v3) -- [ majorana,edge label=$N_j$] (l1),(v1) -- [scalar] (j),(l1) -- [fermion] (l),(v4) -- [red,scalar](v5)}; \end{feynman} \end{tikzpicture} & \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\eta^-$}; \vertex [below = 1.6cm of i] (j){$N_i$}; \vertex [right = 1.4cm of i] (i1); \vertex [below = 1.6cm of i1] (j1); \vertex [right = 1.cm of i] (l1); \vertex [below = 1.6cm of l1] (l3); \vertex [right = 1.cm of l1] (l2); \vertex [below = 0.8cm of l2] (v1); \vertex [right = 4.2cm of i] (k){$X$}; \vertex [right = 1.2cm of v1] (v2); \vertex [below = 1.6cm of k] (m){$L_\alpha$}; \diagram*[small]{(i) -- [charged scalar] (l1),(l1) -- [majorana,edge label=$N_j$](v1),(v1)--[scalar](l3),(l3)--[anti fermion,edge label = $l_\beta$](l1), (j)--[fermion](l3),(v1) -- [fermion,edge label = $l_\alpha$](v2),(v2)--[scalar](k),(v2)--[fermion](m),(i1)--[red,scalar](j1)}; \end{feynman} \end{tikzpicture} \end{tabular} \caption{Feynman diagrams contributing to $\langle\sigma v\rangle_{\eta N_i \rightarrow X L}$ and the interference term $\epsilon$. 
Here $X \equiv h,\gamma,W^{\pm},Z$.} \label{fig:coasym} \end{figure} \begin{figure} \centering \begin{tabular}{lcr} \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$N$}; \vertex [right = 1.13cm of i] (v); \vertex [above = 0.8cm of v] (v1); \vertex [right = 0.8cm of v1](j){$\phi$}; \vertex [below = 1.6cm of j](k){$\chi$}; \diagram*[small]{(i)--[fermion](v),(v)--[scalar](j),(v)--[fermion](k)}; \end{feynman} \end{tikzpicture} & \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\nu,l^{\pm}$}; \vertex [below = 1.6cm of i] (j){$\eta^0,\eta^\mp$}; \vertex [right = 0.8cm of i] (v1); \vertex [below = 0.8cm of v1] (va); \vertex [right = 0.8cm of va] (vb); \vertex [above = 0.8cm of vb] (v2); \vertex [right = 0.5cm of v2] (k){$\phi$}; \vertex [below = 1.6cm of k] (l){$\chi$}; \diagram*[small]{(i)--[fermion](va),(j)--[anti charged scalar](va),(va)--[fermion,edge label=$N$](vb),(vb)--[fermion](l),(vb)--[scalar](k)}; \end{feynman} \end{tikzpicture} & \begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\eta^0,\eta^\mp$}; \vertex [right = 1.5cm of i] (v1); \vertex [above right = 0.8cm of v1] (a){$\nu,l^\mp$}; \vertex [right = 1.2cm of v1] (v2); \vertex [above right = 0.8cm of v2] (b){$\phi$}; \vertex [right = 0.8cm of v2] (c){$\chi$}; \diagram*[small]{(i)--[charged scalar](v1),(v1)--[anti fermion](a),(v1)--[fermion,edge label'=$N_i$](v2),(v2)--[fermion](c),(v2)--[scalar](b)}; \end{feynman} \end{tikzpicture} \end{tabular} \caption{Feynman diagrams corresponding to the production of non-thermal DM $\chi$.} \label{fig:fimp} \end{figure} We implement the model in \texttt{SARAH 4} \cite{Staub:2013tta} and extract the thermally averaged annihilation rates from \texttt{micrOMEGAs 4.3} \cite{Barducci:2016pcb}, to be used while solving the relevant Boltzmann equations discussed below.
\begin{figure} \centering \begin{tabular}{lr} \begin{tikzpicture} \begin{feynman} \vertex (i1); \vertex [right=2cm of i1] (a); \vertex [right=2cm of a] (b); \vertex [right=2cm of b] (c); \vertex [right=2cm of c] (d); \vertex [above=1.7cm of b] (tb1); \vertex [above=1.cm of tb1] (z2); \vertex [left=0.75cm of z2] (wh) {$\left\langle H\right\rangle$}; \vertex [right=0.75cm of z2] (wp) {$\langle H \rangle$}; \diagram* { i1 -- [fermion, edge label'=$\nu_{L}$] (a) -- [fermion, edge label'=$N$] (b),(b) -- [anti fermion, edge label'=$N$] (c) -- [anti fermion, edge label'=$\nu_{L}$] (d), a -- [charged scalar, quarter left, edge label=$\eta$] (tb1), tb1 -- [anti charged scalar, insertion=0.9] (wh), tb1 -- [anti charged scalar, insertion=0.9] (wp), tb1 -- [anti charged scalar, quarter left, edge label=$\eta$] (c), }; \end{feynman} \end{tikzpicture} & \begin{tikzpicture} \begin{feynman} \vertex (i1); \vertex [right=2cm of i1] (a); \vertex [right=2cm of a] (b); \vertex [right=2cm of b] (c); \vertex [right=2cm of c] (d); \vertex [above=1.7cm of b] (tb1); \vertex [above=1.cm of tb1] (z2); \vertex [left=0.75cm of z2] (wh) {$\left\langle H\right\rangle$}; \vertex [right=0.75cm of z2] (wp) {$\langle \phi^\prime \rangle$}; \diagram* { i1 -- [fermion, edge label'=$\nu_{L}$] (a) -- [fermion, edge label'=$N$] (b),b -- [anti fermion, edge label'=$N$] (c) -- [anti fermion, edge label'=$\chi$] (d), a -- [charged scalar, quarter left, edge label=$\eta$] (tb1), tb1 -- [anti charged scalar, insertion=0.9] (wh), tb1 -- [anti charged scalar, insertion=0.9] (wp), tb1 -- [anti charged scalar, quarter left, edge label=$\phi$] (c), }; \end{feynman} \end{tikzpicture} \end{tabular} \caption{Radiative light neutrino mass and mixing of non-thermal DM with light neutrinos} \label{numassmixing} \end{figure} \section{Neutrino Mass} \label{sec2} As can be noticed from the particle content of the model and the Yukawa Lagrangian mentioned above, a light neutrino mass does not arise at tree level as long as one of the discrete symmetries, namely $\mathbb{Z}_2$, remains unbroken. At one loop level, however, one can have light neutrino masses originating from the diagram shown in the left panel of FIG. \ref{numassmixing}. This is the same way light neutrino masses are generated in the scotogenic model proposed by Ma \cite{Ma:2006km}. The one-loop expression for the neutrino mass is \begin{equation} (m_{\nu})_{ij}=\sum_{k} \frac{y_{ik} y_{kj} M_{k}}{32 \pi^{2}} \left[\frac{m_{\eta R}^{2}}{m_{\eta R}^{2}-M_{k}^{2}} \log{\left(\frac{m_{\eta R}^{2}}{M_{k}^{2}}\right)}-\frac{m_{\eta I}^{2}}{m_{\eta I}^{2}-M_{k}^{2}} \log\left(\frac{m_{\eta I}^{2}}{M_{k}^{2}}\right)\right] \label{n_mass} \end{equation} where $M_{k}$ is the right handed neutrino mass. The above Eq.~\eqref{n_mass} can equivalently be written as \begin{equation} (m_{\nu})_{ij} \equiv (y^{T}\Lambda y)_{ij} \end{equation} where $\Lambda_k$ is defined as \begin{equation} \Lambda_{k} = \frac{M_{k}}{32 \pi^{2}} \left[\frac{m_{\eta R}^{2}}{m_{\eta R}^{2}-M_{k}^{2}} \log{\left(\frac{m_{\eta R}^{2}}{M_{k}^{2}}\right)}-\frac{m_{\eta I}^{2}}{m_{\eta I}^{2}-M_{k}^{2}} \log\left(\frac{m_{\eta I}^{2}}{M_{k}^{2}}\right)\right]. \end{equation} In order to incorporate the constraints from neutrino oscillation data on the three mixing angles and the two mass squared differences, it is often useful to express these Yukawa couplings in terms of light neutrino parameters.
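Anticipating the Casas-Ibarra form introduced just below, a short numerical sketch of this inversion is given here; all benchmark masses and angles are illustrative assumptions, and we set $U_{\rm PMNS}=\mathbb{1}$ and $m_1=0$ purely for brevity:
\begin{verbatim}
# Sketch: evaluate the loop factor Lambda_k defined above and build
# Yukawas via y = Lambda^{-1/2} R m_nu^{1/2} U^dagger, then cross-check
# that y^T diag(Lambda) y reproduces the light neutrino masses.
import numpy as np

def Lambda_k(Mk, m_etaR, m_etaI):
    f = lambda m2: m2/(m2 - Mk**2)*np.log(m2/Mk**2)
    return Mk/(32.0*np.pi**2)*(f(m_etaR**2) - f(m_etaI**2))

M = np.array([1.0e4, 5.0e4, 1.0e5])          # RH neutrino masses (GeV)
Lam = np.array([Lambda_k(Mk, 800.0, 790.0) for Mk in M])

# Light neutrino masses in GeV (normal hierarchy, m_1 = 0, central
# mass-squared splittings); PMNS matrix set to identity for brevity.
m_nu = np.sqrt(np.array([0.0, 7.4e-5, 2.5e-3]))*1.0e-9
U = np.eye(3, dtype=complex)
th = 0.3 + 0.1j                              # one complex angle in R_12
R = np.array([[np.cos(th),  np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]], dtype=complex)

y = np.diag(1.0/np.sqrt(Lam)) @ R @ np.diag(np.sqrt(m_nu)) @ U.conj().T
m_rec = y.T @ np.diag(Lam) @ y               # should equal diag(m_nu)
print(np.allclose(m_rec, np.diag(m_nu)))     # True
\end{verbatim}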
This is possible through the Casas-Ibarra (CI) parametrisation \cite{Casas:2001sr}, extended to the radiative seesaw model in \cite{Toma:2013zsa}, which allows us to write the Yukawa couplings as \begin{equation} y=\sqrt{\Lambda}^{-1} R \sqrt{m^{\rm diag}_{\nu}} U_{\rm PMNS}^{\dagger}. \end{equation} Here $m^{\rm diag}_{\nu} = {\rm diag}(m_1, m_2, m_3)$ is the diagonal light neutrino mass matrix and $R$ is in general a complex orthogonal matrix with $RR^{T}=\mathbb{1}$, which we take to be fully general. This $3\times3$ orthogonal matrix $R$ can be parametrised by three complex angles of the type $\theta_{\alpha \beta} = \theta^R_{\alpha \beta} + i\theta^I_{\alpha \beta}$, with $\theta^R_{\alpha \beta} \in [0, 2\pi]$ and $\theta^I_{\alpha \beta} \in \mathbb{R}$ \cite{Ibarra:2003up} \footnote{For some more discussion on the different possible structures of this matrix and the implications for a particular leptogenesis scenario in this model, we refer to the recent work \cite{Mahanta:2019gfe}.}. In general, the orthogonal matrix $R$ for $n$ flavours can be written as a product of $^nC_2$ rotation matrices of the type \begin{align} R_{\alpha \beta} &= \begin{pmatrix} \cos{(\theta^R_{\alpha \beta} + i\theta^I_{\alpha \beta})} & \cdots & \sin{(\theta^R_{\alpha \beta} + i\theta^I_{\alpha \beta})} \\ \vdots & \ddots & \vdots \\ - \sin{(\theta^R_{\alpha \beta} + i\theta^I_{\alpha \beta})} & \cdots & \cos{(\theta^R_{\alpha \beta} + i\theta^I_{\alpha \beta})} \end{pmatrix}, \end{align} where the rotation is in the $\alpha$-$\beta$ plane and the dots stand for zeros. For example, taking $\alpha=1, \beta=2$ we have \begin{align} R_{12} &= \begin{pmatrix} \cos{(\theta^R_{12} + i\theta^I_{12})} & \sin{(\theta^R_{12} + i\theta^I_{12})} & 0 \\ -\sin{(\theta^R_{12} + i\theta^I_{12})} & \cos{(\theta^R_{12} + i\theta^I_{12})} & 0 \\ 0 & 0 & 1\end{pmatrix}. \end{align} We see that the CP phases in $U$ do not contribute to $\epsilon_{N_i\eta}$ given in eq.~\eqref{eq:asymB}, but the complex angles in the orthogonal matrix $R$ can lead to a non-vanishing value of $\epsilon_{N_i\eta}$. This is similar to leptogenesis from pure decay in this model \cite{Hugle:2018qbw}, where, in the absence of flavour effects, the orthogonal matrix $R$ plays a crucial role. The matrix denoted by $U_{\rm PMNS}$ is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) leptonic mixing matrix \begin{equation} U_{\text{PMNS}} = U^{\dagger}_l U_L. \label{pmns0} \end{equation} If the charged lepton mass matrix is diagonal or, equivalently, $U_L = \mathbb{1}$, then the PMNS mixing matrix is identical to the diagonalising matrix of the neutrino mass matrix. The PMNS mixing matrix can be parametrised as \begin{equation} U_{\text{PMNS}}=\left(\begin{array}{ccc} c_{12}c_{13}& s_{12}c_{13}& s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}& c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}& c_{23}c_{13} \end{array}\right) U_{\text{Maj}} \label{matrixPMNS} \end{equation} where $c_{ij} = \cos{\theta_{ij}}$, $s_{ij} = \sin{\theta_{ij}}$ and $\delta$ is the leptonic Dirac CP phase. The diagonal matrix $U_{\text{Maj}}=\text{diag}(1, e^{i\alpha}, e^{i\beta})$ contains the Majorana CP phases $\alpha, \beta$, which remain undetermined at neutrino oscillation experiments. We summarise the $3\sigma$ global fit values in table \ref{tabglobalfit} from the recent analysis \cite{Esteban:2018azc}, which we use in our subsequent analysis. The diagram in the right panel of FIG.
\ref{numassmixing} gives rise to a radiative mixing of the non-thermal DM with the light neutrinos. However, due to the non-thermal nature of this DM candidate, the relevant Yukawa couplings $y^{\prime}_i$ in Eq.~(\ref{yukawa2}) are very small, as we discuss in the upcoming sections. For such tiny couplings, the mixing between the non-thermal DM and the light neutrinos will be too small to have any observable consequences like monochromatic lines in the X-ray or gamma-ray spectrum. We leave the exploration of detection prospects for such non-thermal DM candidates to future studies. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|} \hline Parameters & Normal Hierarchy (NH) & Inverted Hierarchy (IH) \\ \hline $ \frac{\Delta m_{21}^2}{10^{-5} \text{eV}^2}$ & $6.79-8.01$ & $6.79-8.01 $ \\ $ \frac{|\Delta m_{31}^2|}{10^{-3} \text{eV}^2}$ & $2.427-2.625$ & $2.412-2.611 $ \\ $ \sin^2\theta_{12} $ & $0.275-0.350 $ & $0.275-0.350 $ \\ $ \sin^2\theta_{23} $ & $0.418-0.627$ & $0.423-0.629 $ \\ $\sin^2\theta_{13} $ & $0.02045-0.02439$ & $0.02068-0.02463 $ \\ $ \delta (^\circ) $ & $125-392$ & $196-360$ \\ \hline \end{tabular} \caption{Global fit $3\sigma$ values of neutrino oscillation parameters \cite{Esteban:2018azc}.} \label{tabglobalfit} \end{table} \section{Analysis of co-genesis} \label{sec3} In order to carry out the full analysis, we need to solve the coupled differential equations for the thermal DM (which is thermally produced and denoted simply by DM hereafter), the non-thermal DM (denoted by $\chi$), the lepton asymmetry, and the source of $\chi$ (and partial source of the lepton asymmetry), namely the lightest heavy right handed neutrino $N$, which provides the non-thermal origin of $\chi$. For this scenario, the coupled Boltzmann equations for the $\mathbb{Z}_2$-odd particles take the following form: \begin{align} \frac{dY_{N_k}}{dz} &= -\frac{s}{zH(z)}\left[(Y_{N_k}-Y^{\rm eq}_{N_k})\langle \Gamma_{N_k \rightarrow L_\alpha \eta}\rangle + (Y_{N_k}-Y^{\rm eq}_{N_k})\langle \Gamma_{N_k \rightarrow \phi \chi}\rangle \right. \nonumber \\ &+ \left. (Y_{N_k}Y_{\eta}-Y^{\rm eq}_{N_k}Y^{\rm eq}_{\eta})\langle \sigma v \rangle_{\eta N_k\rightarrow L{\rm SM}} + \sum_{l=1}^3\left[(Y_{N_k}Y_{N_l}-Y^{\rm eq}_{N_k}Y^{\rm eq}_{N_l}) s\langle \sigma v \rangle_{N_l N_k\rightarrow {\rm SM SM}} \right. \right. \nonumber \\ &+ \left. \left. s\langle\sigma v\rangle_{\rm N_kN_l\rightarrow \chi \chi} Y_{N_k}Y_{N_l} - (Y_{N_k}Y_{N_l} - Y^2_{\eta}\gamma^{N_k}_{\eta}\gamma^{N_l}_{\eta})s\langle \sigma v\rangle_{\rm N N \rightarrow \eta \eta}\right]\right], \nonumber \\ \frac{dY_{\eta}}{dz} &= \frac{s}{zH(z)}\left[ (Y_{N_k}-Y^{\rm eq}_{N_k})\langle \Gamma_{N_k \rightarrow L_\alpha \eta}\rangle - (Y_{\eta}-Y^{\rm eq}_{\eta})\langle \Gamma_{\eta \rightarrow L_\alpha \phi \chi}\rangle - 2(Y^2_{\eta}-(Y^{\rm eq}_{\eta})^2)\langle \sigma v\rangle_{\eta \eta \rightarrow {\rm SM SM}}\right. \nonumber \\ &- \left. \sum^3_{m=1}(Y_{N_m}Y_{\eta}-Y^{\rm eq}_{N_m}Y^{\rm eq}_{\eta})\langle \sigma v \rangle_{\eta N_m\rightarrow L{\rm SM}} - (Y_{\phi}Y_{\eta}-Y^{\rm eq}_{\phi}Y^{\rm eq}_{\eta})\langle \sigma v \rangle_{\eta \phi\rightarrow {\rm SM}{\rm SM}} \right. \nonumber \\ &+ \left. (Y_{N_k}Y_{N_l} -Y^2_{\eta}\gamma^{N_k}_{\eta}\gamma^{N_l}_{\eta})s\langle \sigma v\rangle_{\rm N N \rightarrow \eta \eta} \right],
\nonumber \\
\frac{dY_{\phi}}{dz} &= \frac{s}{zH(z)}\left[(Y_{N_k}-Y^{\rm eq}_{N_k})\langle \Gamma_{N_k \rightarrow \phi \chi}\rangle + (Y_{\eta}-Y^{\rm eq}_{\eta})\langle \Gamma_{\eta \rightarrow L_\alpha \phi \chi}\rangle -2(Y^2_{\phi}-(Y^{\rm eq}_{\phi})^2)\langle \sigma v\rangle_{\phi \phi \rightarrow {\rm SM SM}} \right. \nonumber \\ &- \left.(Y_{\phi}Y_{\eta}-Y^{\rm eq}_{\phi}Y^{\rm eq}_{\eta})\langle \sigma v \rangle_{\eta \phi\rightarrow {\rm SM}{\rm SM}} \right].\label{eq:BE1}
\end{align}
where
\begin{align}
Y^{\rm eq}_i &= n^{\rm eq}_i/s,~~s=g_*\frac{2\pi^2}{45}T^3, ~~H=\sqrt{\frac{4\pi^3}{45}g_*}\frac{T^2}{M_{\rm Pl}}, \nonumber \\
\langle \Gamma \rangle &= \Gamma \frac{K_1(z)}{K_2(z)},~~~~\gamma^i_j=\frac{n^{eq}_i}{n^{eq}_j},~~z=\frac{M_N}{T} \nonumber \\
\langle \sigma_{ij\rightarrow kl} v \rangle &= \frac{x_f}{8M^2_iM^2_jK_2((M_i/M_{N})x_f)K_2((M_j/M_{N})x_f)} \nonumber \\ &\times\int_{s_{\rm int}}^\infty \sigma_{ij\rightarrow kl}(s-2(M^2_i + M^2_j))\sqrt{s}K_1(\sqrt{s}x_f/M_{N})\, ds
\end{align}
Here $Y=n/s$ denotes the ratio of number density to entropy density, $s_{\rm int} = \text{Max}\{(M_i+M_j)^2,(M_k+M_l)^2\}$, $M_{\rm Pl}$ is the Planck mass, $H$ is the Hubble rate of expansion, $n^{\rm eq}_i$ is the equilibrium number density of the $i^{th}$ species and $K_i$ is the modified Bessel function of order $i$.
\begin{figure}
\centering
\begin{tabular}{lr}
\begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\phi_{R,I}$}; \vertex [below = 1.6cm of i] (j){$\phi_{R,I}$}; \vertex [below = 0.8cm of i] (k); \vertex [right = 0.8cm of k] (v1); \vertex [left = 0.5cm of v1] (v1a){$\langle h\rangle$}; \vertex [right = 0.8cm of v1] (v2); \vertex [right = 0.5cm of v2] (v2a){$\langle h\rangle$}; \vertex [right = 2.4cm of i] (m){$W^+,Z$}; \vertex [below = 1.6cm of m] (o){$W^-,Z$}; \diagram*[small]{(i) -- [scalar](v1),(j) -- [scalar](v1),(v1) -- [scalar,edge label=$h$](v2),(v2) -- [boson](m),(v2) -- [boson](o),(v1) -- [scalar](v1a),(v2) -- [scalar](v2a)}; \end{feynman} \end{tikzpicture}
&
\begin{tikzpicture}[/tikzfeynman/small] \begin{feynman} \vertex (i){$\phi_{R,I}$}; \vertex [below = 1.6cm of i] (j){$\phi_{R,I}$}; \vertex [below = 0.8cm of i] (k); \vertex [right = 0.8cm of k] (v1); \vertex [right = 1.6cm of i] (m){$h$}; \vertex [below = 1.6cm of m] (o){$h$}; \diagram*[small]{(i) -- [scalar](v1),(j) -- [scalar](v1),(v1) -- [scalar](m),(v1) -- [scalar](o)}; \end{feynman} \end{tikzpicture}
\end{tabular}
\caption{Diagrams for the major annihilation channels responsible for the relic abundance of cold dark matter.}
\label{fig:DMann}
\end{figure}
The main annihilation processes leading to a sufficient relic density for the cold dark matter candidate we consider are presented in FIG. \ref{fig:DMann}. Now, in our model we have the possibility of generating the leptonic asymmetry through the co-annihilation channel of the right handed neutrinos $N_i$ and $\eta$ \cite{Borah:2018uci}. This is shown in FIG. \ref{fig:coasym}, where the doublet $\eta$ co-annihilates with $N_i$ into $\nu,X$ or $e^-,X$, where $X \equiv h,\gamma,W^{\pm},Z$. The $CP$ violation arises from the interference of the tree-level diagram with the one-loop vertex correction. In addition, there can be a contribution to the lepton asymmetry from the decay of the lightest right handed neutrino $N(\equiv N_1)$ as well, similar to vanilla leptogenesis.
The vanilla leptogenesis scenario with hierarchical $N_i$ masses~\cite{Buchmuller:2004nz} is viable for the mass of $N_1$ satisfying $M_{1}^{\rm min}\gtrsim 10^9$ GeV~\cite{Davidson:2002qv, Buchmuller:2002rq}.\footnote{Including flavor and thermal effects could, in principle, lower this bound to about $10^6$ GeV~\cite{Moffat:2018wke}. The addition of a real scalar singlet can further reduce it to 500 GeV \cite{Alanne:2018brf}.} A similar lower bound can be derived in the scotogenic model with only two $\mathbb{Z}_2$ odd SM-singlet fermions in the strong washout regime. However, with three such SM-singlet fermions, the bound can be lowered to about 10 TeV~\cite{Hugle:2018qbw, Borah:2018rca}, even without resorting to a resonant enhancement of the CP-asymmetry~\cite{Pilaftsis:2003gt, Dev:2017wwc}. We do not show the details of vanilla leptogenesis here, but take that contribution into account in the final lepton asymmetry. Instead, we highlight the other source, namely WIMP DM annihilation, as it connects the origin of the baryon asymmetry to the dark matter sector. For the details of vanilla leptogenesis, we refer to the above references, where it was studied in the context of the type I seesaw as well as the minimal scotogenic model. It should be noted that, typically, the lowest scale of lepton number violation is the most effective in creating the lepton asymmetry. This scale, in our case, is the scale of WIMP DM freeze-out, which lies below the right handed neutrino masses. However, if the right handed neutrino masses are not very heavy compared to the WIMP DM mass, then both sources can give sizeable contributions to the origin of the lepton asymmetry. We will show more details of our hybrid source of leptogenesis in a companion paper. The Boltzmann equations responsible for the DM densities are given by Eq.~(\ref{eq:BE1}), while that for the leptonic asymmetry reads as follows:
\begin{align}
\frac{dY_{\Delta L}}{dz} &= \frac{s}{zH(z)}\left[\sum_i\left( \epsilon_{N_i}(Y_{N_i} - Y^{\rm eq}_{N_i})\langle \Gamma_{N_i \rightarrow L_\alpha \eta}\rangle - Y_{\Delta L}r_{N_i}\langle\Gamma_{N_i \rightarrow L_\alpha \eta}\rangle \right.\right. \nonumber \\ &+ \left. \left. \epsilon_{N_i \eta} \langle \sigma v\rangle_{\eta N_i\rightarrow L {\rm SM}}\left(Y_{\eta}Y_{N_i} - Y^{\rm eq}_{\eta}Y^{\rm eq}_{N_i}\right) - \frac{1}{2}Y_{\Delta L}Y^{\rm eq}_lr_{N_i}r_\eta \langle \sigma v\rangle_{\eta N_i \rightarrow {\rm SM} \overline{L}} \right)\right.\nonumber \\ &- \left. Y_{\Delta L}Y^{\rm eq}_lr^2_\eta\langle \sigma v\rangle_{\eta \eta \rightarrow LL} - Y_{\Delta L}Y^{\rm eq}_\eta\langle \sigma v \rangle^{wo}_{\eta L \rightarrow \eta \overline{L}} \right], \label{eq:asym} \\
H &= \sqrt{\frac{4\pi^3 g_*}{45}}\frac{M^2_{\chi}}{M_{\rm PL}}, \quad s = g_* \frac{2\pi^2}{45}\left(\frac{M_{\chi}}{z}\right)^3, \nonumber \\
r_j &= \frac{Y^{\rm eq}_j}{Y^{\rm eq}_l}, \quad \quad \langle \Gamma_{j\rightarrow X} \rangle = \frac{K_1(M_j/T)}{K_2(M_j/T)}\Gamma_{j\rightarrow X}. \nonumber
\end{align}
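For orientation, the thermal averages entering these equations are easy to evaluate numerically; the following minimal sketch (assuming Maxwell--Boltzmann statistics, with illustrative placeholder numbers rather than our benchmark inputs) computes $\langle \Gamma \rangle = \Gamma\, K_1(z)/K_2(z)$ and the equilibrium yield $Y^{\rm eq}=n^{\rm eq}/s$:
\begin{verbatim}
# Minimal sketch (Maxwell-Boltzmann statistics assumed, illustrative numbers)
# of the thermal averages entering the Boltzmann equations.
import numpy as np
from scipy.special import kn

def thermal_width(Gamma, M, T):
    """<Gamma> = Gamma * K1(M/T)/K2(M/T)."""
    z = M / T
    return Gamma * kn(1, z) / kn(2, z)

def Y_eq(M, T, g=2, g_star=106.75):
    """Equilibrium yield n_eq/s for a massive species."""
    n_eq = g * M**2 * T * kn(2, M / T) / (2 * np.pi**2)
    s = g_star * 2 * np.pi**2 / 45 * T**3
    return n_eq / s

M_N, Gamma_N = 1.02e4, 1e-10   # GeV; placeholder values
for z in (1.0, 10.0, 25.0):
    T = M_N / z
    print(z, thermal_width(Gamma_N, M_N, T), Y_eq(M_N, T))
\end{verbatim}
The $K_1/K_2$ factor encodes the time-dilation suppression of the decay rate at high temperature and tends to one in the nonrelativistic limit $z\gg 1$.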
The CP asymmetry arising from the interference between the tree-level and the one-loop diagrams in FIG. \ref{fig:coasym} can be estimated as
\begin{align}
\epsilon_{N_i \eta} &= \frac{1}{4\pi(yy^\dagger)_{ii}} \sum_{j}\Im[(yy^\dagger)^2_{ij}]\widetilde{\epsilon}_{ij}, \label{eq:asymB} \\
\widetilde{\epsilon}_{ij} &= \frac{\sqrt{r_j}}{6 r_i \left(-r_i^{3/2}+r_i (r_j-2)+\sqrt{r_i} r_j+1\right)^2(\sqrt{r_i}-3)} \left(r_i^{7/2} (3 r_j+1) +\sqrt{r_i} (3 r_j+5)+1 \right.\nonumber \\ &- \left.3 r_i^{5/2} \left(r_j \left(D+(r_j-3) r_j+4\right) - 3 D -2\right)-3 r_i^{3/2} \left(2 \left(D+3\right) + r_j \left(r_j \left(D+r_j+1\right)-D-4\right)\right) \right. \nonumber \\ &- \left. r_i^4+r_i^3 \left(3 D+3 r_j^2+11\right) - 3 r_i^2 \left(r_j \left(D+2 (r_j-1) r_j+2\right) - D +6\right)+r_i \left(1-3 r_j \left(D+r_j-4\right)\right) \right) \nonumber \\ &+ \frac{\sqrt{r_j}}{4 r_i}\left(\sqrt{r_i}-1+\frac{\sqrt{r_j}}{(1+\sqrt{r_i})^2}(\sqrt{r_i}-1+r_j)\left(\log\left(\frac{1+\sqrt{r_i}r_j}{r_i(1+\sqrt{r_i})}\right)-\log\left(\frac{1+r_i+r^{3/2}_i+\sqrt{r_i}r_j}{r_i(1+\sqrt{r_i})} \right)\right.\right. \nonumber \\ &+ \left.\left. \log\left(1+\frac{1+\sqrt{r_i}}{\sqrt{r_i}(\sqrt{r_i}-1+r_i+r_j)}\right)\right)\right)\label{eq:coanneps} \\
D &= \sqrt{(r_i-r_j) \left(r_i+4 \sqrt{r_i}-r_j+4\right)} \qquad r_l = \frac{M^2_{N_l}}{m^2_\eta}. \nonumber
\end{align}
It should be noted that in the above expression one always has $1\leq r_j\leq r_i$, where $j$ labels the $N_j$ inside the loop while $i$ labels the $N_i$ appearing as one of the initial state particles, as shown in Fig. \ref{fig:coasym}. This simply ensures that the particles in the loop can go ``on shell'', as required to generate the CP asymmetry. In the above Boltzmann equation we see that, along with the processes which produce the asymmetry, i.e., $\langle \sigma v \rangle_{N_i \eta \rightarrow X L}$ and $\langle \Gamma_{N_i \rightarrow L \eta} \rangle$, we have washout terms coming from the corresponding inverse processes as well as from the scatterings $\langle \sigma v \rangle_{\eta \eta \rightarrow L L}$ and $\langle \sigma v \rangle_{\eta L \rightarrow \eta \overline{L}}$. Now, according to Cui {\it et al.} \cite{Cui:2011ab}, if the asymmetry is to be generated through dark matter annihilation, then the {\it wash-out} processes should {\it freeze out} before the WIMP {\it freeze-out}. In order to ensure this, one has to keep the following ratio below unity:
\begin{align}
\frac{\Gamma_{\rm wash-out}(x)}{\Gamma_{\rm WIMP}(x)}\sim \frac{\langle \sigma_{\rm wash-out} v \rangle \prod_{i} Y^{\rm eq}_i(x)}{4\langle \sigma_{\rm ann} v\rangle Y^{\rm eq}_X(x) Y_\gamma}.
\end{align}
In our case $\langle \sigma_{\rm ann} v\rangle$ is similar to that of the standard inert doublet model WIMP annihilation channel ($\eta \eta \rightarrow W^+ W^-$), which is naturally stronger than $\langle \sigma_{\rm wash-out} v \rangle$, corresponding here to $N_i \eta \rightarrow X L$. Further details of the asymmetry generated through such $t$-channel annihilations of dark matter are given in \cite{Borah:2018uci}.
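As a rough numerical illustration of this criterion (with all cross sections and masses chosen arbitrarily, not taken from our benchmark points, and with $Y_\gamma = n_\gamma/s$ evaluated with Maxwell--Boltzmann-style equilibrium yields for the massive species), the ratio above can be monitored as a function of $x$:
\begin{verbatim}
# Rough sketch of the washout criterion Gamma_washout/Gamma_WIMP < 1.
# All inputs are illustrative placeholders, not benchmark values.
import numpy as np
from scipy.special import kn, zeta

g_star = 106.75

def Y_eq(M, T, g=2):
    n_eq = g * M**2 * T * kn(2, M / T) / (2 * np.pi**2)   # Maxwell-Boltzmann
    s = g_star * 2 * np.pi**2 / 45 * T**3
    return n_eq / s

def washout_ratio(x, sv_wo, sv_ann, M_N, M_eta):
    """Ratio quoted in the text, with Y_gamma = n_gamma/s."""
    T = M_eta / x
    Y_gamma = (2 * zeta(3) / np.pi**2) / (g_star * 2 * np.pi**2 / 45)
    num = sv_wo * Y_eq(M_N, T) * Y_eq(M_eta, T)
    den = 4 * sv_ann * Y_eq(M_eta, T) * Y_gamma
    return num / den

for x in (10, 20, 25):
    print(x, washout_ratio(x, sv_wo=1e-12, sv_ann=1e-9, M_N=1.02e4, M_eta=3.17e3))
\end{verbatim}
The Boltzmann suppression of $Y^{\rm eq}_{N}$ drives the ratio quickly below unity as $x$ grows, which is the behaviour the criterion requires.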
Non-thermal DM can be produced in a way similar to the FIMP scenario mentioned above. In such a case, the initial abundance is assumed to be zero or negligible, and the interaction rate with the standard model particles or the thermal bath is so feeble that thermal equilibrium is never attained. The non-thermal DM can then be produced by out-of-equilibrium decays of, or scatterings among, particles in the thermal bath, with the former typically dominating if the same type of coupling is involved in both processes. Further details of this mechanism for keV scale sterile neutrinos can be found in \cite{Merle:2015oja, Shakya:2015xnx, Konig:2016dzg} as well as in the review on keV sterile neutrino DM \cite{Adhikari:2016bei} \footnote{A thermally produced keV scale sterile neutrino typically overcloses the universe \cite{Nemevsek:2012cd, Bezrukov:2009th, Borah:2017hgt}. This requires a late-time entropy dilution mechanism, due to the late decay of heavier right handed neutrinos \cite{Scherrer:1984fd}, or some kind of non-standard cosmological phase \cite{Biswas:2018iny}.}. For a general review of the FIMP DM paradigm, please see \cite{Bernal:2017kxu}, as mentioned earlier. Using the FIMP prescription described in the above-mentioned works, we can write down the corresponding Boltzmann equation for $\chi$, the FIMP candidate, as
\begin{align}
\frac{dY_\chi}{dz} &= \frac{1}{zH}\left[\sum_{i}\left(Y_{N_i}\langle \Gamma_{\rm N_i\rightarrow \phi \chi} \rangle + \sum_{j}Y_{N_i}Y_{N_j}s\langle \sigma v \rangle_{\rm N_iN_j\rightarrow \chi \chi} \right) + Y_{\eta}\langle \Gamma_{\eta \rightarrow L_\alpha \phi \chi}\rangle \right. \nonumber \\ &+ \left. Y_lY_{\eta}s\langle \sigma v \rangle_{\eta l \rightarrow \phi \chi}\right].
\end{align}
Here the first contribution on the right hand side is from the decay process $N\rightarrow \phi\chi$, while the second one is from the annihilation $N N \rightarrow \chi \chi$. The fact that $\chi$ was never produced in equilibrium requires the Yukawa coupling governing the interaction among $N, \phi, \chi$ to be very small, as we mention below. Since the same Yukawa coupling appears twice in the annihilation process $N N \rightarrow \chi \chi$, the two body decay will dominate the production. Another dominant contribution can come from the $s$-channel annihilation process $(\nu,l^{\pm}),(\eta^0,\eta^\pm)\rightarrow \phi \chi$, which appears in the last term on the right hand side of the above equation. The dominant production processes of $\chi$ in our work are shown in FIG. \ref{fig:fimp}.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{1keVCDM50WDM50}
\caption{Co-genesis of equal abundance of WIMP (3 TeV mass) and FIMP (1 keV mass) DM candidates and baryon asymmetry.}
\label{cogen1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{1MeVCDM50WDM50} \\
\caption{Co-genesis of equal abundance of WIMP (3 TeV mass) and FIMP (1 MeV mass) DM candidates and baryon asymmetry.}
\label{cogen2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{1GeVCDM50WDM50}
\caption{Co-genesis of equal abundance of WIMP (3 TeV mass) and FIMP (1 GeV mass) DM candidates and baryon asymmetry.}
\label{cogen3}
\end{figure}
\section{Direct Detection and Lepton Flavour Violation}
\label{sec4}
Although the detection prospects of the FIMP candidate are very limited, the WIMP can have very good direct detection signatures that can be probed at direct detection experiments like LUX \cite{Akerib:2016vxi}, PandaX-II \cite{panda17, Tan:2016zwf} and Xenon1T \cite{Aprile:2017iyp, Aprile:2018dbl}. Since the WIMP is a scalar, we can have Higgs mediated spin independent elastic scattering of DM off nucleons. This direct detection cross section can be estimated as \cite{Barbieri:2006dq}
\begin{equation}
\sigma_{\text{SI}} = \frac{\lambda^2_L f^2}{4\pi}\frac{\mu^2 m^2_n}{m^4_h m^2_{\rm DM}}
\label{sigma_dd}
\end{equation}
where $\mu = m_n m_{\rm DM}/(m_n+m_{\rm DM})$ is the DM-nucleon reduced mass and $\lambda_L$ is the quartic coupling involved in the DM-Higgs interaction.
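A direct numerical evaluation of Eq. \eqref{sigma_dd} is straightforward; the sketch below (purely illustrative inputs, with $\lambda_L$ and $f$ treated as external parameters whose composition and values are discussed next) also converts the result from GeV$^{-2}$ to cm$^2$:
\begin{verbatim}
# Numerical evaluation of Eq. (sigma_dd); inputs here are illustrative only.
import math

GEV2_TO_CM2 = 0.389379e-27          # 1 GeV^-2 in cm^2

def sigma_SI(lam_L, f=0.32, m_n=0.939, m_h=125.0, m_DM=3000.0):
    """Spin-independent DM-nucleon cross section in cm^2 (inputs in GeV)."""
    mu = m_n * m_DM / (m_n + m_DM)  # DM-nucleon reduced mass
    sig = lam_L**2 * f**2 / (4*math.pi) * mu**2 * m_n**2 / (m_h**4 * m_DM**2)
    return sig * GEV2_TO_CM2

print(sigma_SI(lam_L=0.1))          # ~1e-47 cm^2 for these inputs
\end{verbatim}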
For the WIMP, which is an admixture of the scalar doublet and the scalar singlet given by $\eta_1 = \cos{\theta_2}\, \eta_R + \sin{\theta_2}\, \phi$, the Higgs-DM coupling is $\lambda_L = \cos{\theta_2} (\lambda_3+\lambda_4+\lambda_5) + \sin{\theta_2} \lambda_7/2$. A recent estimate of the Higgs-nucleon coupling $f$ gives $f = 0.32$ \cite{Giedt:2009mr}, although the full range of allowed values is $f=0.26-0.63$ \cite{Mambrini:2011ik}. Since the DM has a doublet component, there arises the possibility of the tree level $Z$ boson mediated process $\eta_R n \rightarrow \eta_I n$, $n$ being a nucleon. This process, if allowed, can give rise to a direct detection rate so large that it is ruled out by experimental data. However, due to the inelastic nature of the process, one can forbid such scattering if $\delta = m_{\eta_I} - m_{\eta_R} > 100$ keV, the typical kinetic energy of a DM particle. Another interesting observational prospect of our model is the area of charged lepton flavour violation. In the SM there are no such processes at tree level; at the radiative level they can occur, but they are suppressed by the smallness of neutrino masses to rates far beyond the current and near-future experimental sensitivities. Therefore, any experimental observation of such processes is definitely a sign of BSM physics, like the one we are studying here. In the present model, this becomes inevitable due to the couplings of the new $\mathbb{Z}_2$ odd particles to the SM lepton doublets. The same fields that take part in the one-loop generation of light neutrino mass shown in FIG. \ref{numassmixing} can also mediate charged lepton flavour violating processes like $\mu \rightarrow e \gamma$, $\mu \rightarrow 3e$, etc. For example, the neutral scalars in the internal lines of the loops in FIG. \ref{numassmixing} will be replaced by their charged counterparts (which emit a photon), whereas the external fermion legs can be replaced by $\mu, e$ respectively, giving the one-loop contribution to $\mu \rightarrow e \gamma$. Since the couplings and masses involved in this process are the same as the ones that generate light neutrino masses and play a role in the DM relic abundance, we can no longer choose them arbitrarily. Lepton flavour violation in the scotogenic model was studied by several authors including \cite{Vicente:2014wga, Toma:2013zsa}. Here we use the \texttt{SPheno 3.1} interface to check the constraints from cLFV data. We particularly focus on three such cLFV processes, namely $\mu \rightarrow e \gamma$, $\mu \rightarrow 3e$ and $\mu \rightarrow e$ (Ti) conversion, which not only are strongly constrained by present experiments but also have tantalising future prospects \cite{Toma:2013zsa}. The present bounds are: ${\rm BR}(\mu \rightarrow e \gamma) < 4.2 \times 10^{-13}$ \cite{TheMEG:2016wtm}, ${\rm BR}(\mu \rightarrow 3e) < 1.0 \times 10^{-12}$ \cite{Bellgardt:1987du}, ${\rm CR} (\mu, \rm Ti \rightarrow e, \rm Ti) < 4.3 \times 10^{-12}$ \cite{Dohmen:1993mp}. It may be noted that the sensitivities to the first two processes will be improved by around one order of magnitude compared to the present upper limits on the branching ratios. On the other hand, the $\mu$ to $e$ conversion (Ti) sensitivity will be increased by six orders of magnitude \cite{Toma:2013zsa}, making it a highly promising test of different new physics scenarios around the TeV corner.
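For bookkeeping, the bounds just quoted can be collected in a small helper used to normalise model predictions, as done on the $y$ axes of FIG. \ref{fig:scatt}; the snippet below is purely illustrative:
\begin{verbatim}
# Present cLFV upper limits quoted above; values below 1 after
# normalisation are experimentally allowed.
limits = {
    "BR(mu -> e gamma)": 4.2e-13,   # MEG
    "BR(mu -> 3e)":      1.0e-12,   # SINDRUM
    "CR(mu Ti -> e Ti)": 4.3e-12,   # SINDRUM II
}

def normalised(prediction, channel):
    """prediction / experimental upper limit."""
    return prediction / limits[channel]
\end{verbatim}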
\begin{figure}[!h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.50\textwidth]{mutoegamma.pdf} & \includegraphics[width=0.50\textwidth]{muto3e.pdf} \\
\includegraphics[width=0.50\textwidth]{mutoeTi.pdf}
\end{tabular}
\caption{Scatter plots for the three LFV processes, obtained for the range of parameters mentioned in table \ref{tab:BPb}. The $y$ axes correspond to the ratio of the predicted decay rate to the experimental upper limit. The black dots correspond to our benchmark point (BP), as given in table \ref{tab:BPa}, and satisfy all relevant bounds from WIMP-FIMP as well as the correct lepton asymmetry. The horizontal dashed lines correspond to the experimental upper bound.}
\label{fig:scatt}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.50\textwidth]{DD.pdf}
\caption{Scatter plot of the ratio of the spin independent direct detection rate for the WIMP DM to the experimental bound, for the range of parameters mentioned in table \ref{tab:BPb}. The black dot corresponds to our benchmark point (BP), with parameters given in table \ref{tab:BPa}, and satisfies all relevant bounds from WIMP-FIMP as well as the correct lepton asymmetry. The horizontal dashed line corresponds to the experimental upper bound.}
\label{fig:dd}
\end{figure}
\section{Results and Discussion}
\label{sec5}
Since we have a large parameter space, we first choose the benchmark points in a way that gives rise to the desired phenomenology. Also, we choose three different masses of FIMP DM, namely 1 keV, 1 MeV and 1 GeV, and choose the parameters in such a way that all three cases correspond to a $50\%$ contribution of FIMP to the total DM abundance. The WIMP DM mass is kept fixed at 3 TeV in our analysis. We find that this relative contribution of FIMP, along with the required lepton (and hence baryon) asymmetry, can be generated by varying the Yukawa coupling $y'_i$ as shown in table \ref{tab:BP}. In fact, $y'_1$ alone decides the abundance of the FIMP for a particular mass, as this is the only parameter through which the FIMP couples to other particles in the model. As can be seen from this table, one requires a very small $y'_1$ for the FIMP, as expected from the non-thermal scenario discussed earlier. One also requires a relatively large quartic coupling $\lambda_7$, which decides the Higgs portal interactions of $\phi$, present as a small component in the WIMP DM eigenstate. The corresponding co-genesis results are shown in FIG. \ref{cogen1}, \ref{cogen2}, \ref{cogen3} for the three different FIMP masses, respectively. As we can see from these plots, the WIMP as well as the lightest right handed neutrino are initially in equilibrium, followed by WIMP freeze-out and right handed neutrino decay\footnote{The assumption that the right handed neutrino is initially in thermal equilibrium is justified in the scotogenic model, as shown in \cite{Borah:2018rca}.}. Since the mass hierarchy is not very large, both the WIMP freeze-out and the depletion of the right handed neutrino abundance (due to its decay) happen around the same epoch. Since WIMP freeze-out and right handed neutrino decay are related to the generation of the lepton asymmetry and of the FIMP, respectively, the yields of $\Delta L$ and of the FIMP are set by the corresponding epochs. It can be seen from these plots that the required asymmetry along with the WIMP-FIMP relative abundance can be achieved simultaneously, leading to a successful co-genesis.
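The qualitative rise and saturation of the FIMP yield seen in these figures can be reproduced by integrating just the decay-dominated freeze-in term, $dY_\chi/dz \simeq \langle\Gamma\rangle\, Y^{\rm eq}_N/(zH)$, with $N$ kept at its equilibrium abundance; the toy sketch below uses hypothetical numbers purely for illustration:
\begin{verbatim}
# Toy freeze-in integration (hypothetical numbers, illustration only):
# dY_chi/dz = <Gamma> * Y_N^eq / (z H), with Y_N kept at equilibrium.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

M_N, Gamma = 1.02e4, 1e-15          # GeV
M_pl, g_star = 1.22e19, 106.75

def rhs(z, Y):
    T = M_N / z
    H = np.sqrt(4*np.pi**3*g_star/45) * T**2 / M_pl
    s = g_star * 2*np.pi**2/45 * T**3
    Y_N_eq = 2 * M_N**2 * T * kn(2, z) / (2*np.pi**2) / s
    avg_Gamma = Gamma * kn(1, z) / kn(2, z)
    return [avg_Gamma * Y_N_eq / (z * H)]

sol = solve_ivp(rhs, (0.1, 50.0), [0.0], rtol=1e-8, atol=1e-25)
print("Y_chi(z=50) ~", sol.y[0, -1])
\end{verbatim}
The yield grows while $N$ is abundant and then saturates once the $N$ abundance is Boltzmann suppressed, mirroring the plateau of the FIMP curves in the figures.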
In order to get the required leptonic asymmetry we need the Yukawa couplings $y_{ij}$ to be of $\mathcal{O}(1)$, which is possible only if we take $\lambda_5$ to be very small, in order to remain in agreement with the light neutrino masses discussed above. In doing so we would be compromising the mass difference between $\eta_R$ and $\eta_I$. Decreasing this mass difference opens up the inelastic channel $\eta_R\,(n,p)\rightarrow \eta_I\,(n,p)$, which is ruled out, as mentioned earlier. This is where the singlet scalar $\phi$ comes to the rescue, as it relaxes the tension among the neutrino data, dark matter direct detection and the generation of the correct lepton asymmetry. This was also noted in a recent work \cite{Borah:2018uci}. The mixing between the doublet and singlet scalars through $\lambda_9$ helps in evading the direct detection bound, as it enters the effective $Z$-coupling of the scalar WIMP. All the cases shown in FIG. \ref{cogen1}, \ref{cogen2}, \ref{cogen3} produce, by the epoch of the electroweak phase transition ($T \sim 150-200$ GeV), the final leptonic asymmetry required for the observed baryon asymmetry ($\Omega_B h^2 =0.0226$) via the sphaleron conversion factor $C_s = \frac{8N_f + 4N_H}{22N_f+13N_H}$, where $N_f=3$ and $N_H=2$ are the numbers of fermion generations and Higgs doublets, respectively. It should be noted that FIMP DM with mass in the keV scale can face constraints from structure formation data. As noted in \cite{Boyarsky:2008xj}, Lyman-$\alpha$ bounds restrict the keV fermion mass to be above 8 keV if it is non-resonantly produced (similar to our model) and contributes $100\%$ to the total DM abundance. However, for less than a $60\%$ contribution to the total DM, such strict mass bounds do not apply. Therefore, our benchmark value of 1 keV FIMP mass in one of the cases mentioned above remains safe from such bounds. In table \ref{tab:BPa} we show the other parameters of the model for a chosen benchmark point (BP) giving a $50\%-50\%$ WIMP-FIMP proportion along with successful leptogenesis. We will compare our subsequent results against this BP, which satisfies all our criteria. We will see that this BP remains sensitive to LFV as well as direct detection experiments.
\begin{table}[!ht]
\begin{tabular}{|c|c|}
\hline
FIMP mass & $y'_i$ \\
\hline
1 keV & $5.4128\times 10^{-8}$ \\
\hline
1 MeV & $1.711678\times 10^{-9}$ \\
\hline
1 GeV & $5.4128\times 10^{-11}$ \\
\hline
\end{tabular}
\caption{Three different cases for FIMP mass and $y'_i$.}
\label{tab:BP}
\end{table}
\begin{table}[!ht]
\begin{tabular}{|c|c|}
\hline
Parameters & Values \\
\hline
$\lambda_1$ & $0.17$ \\
\hline
$\lambda_5$ & $1.5\times 10^{-7}$ \\
\hline
$\lambda_3=\lambda_4=\lambda_9=\lambda^\prime_8$ & $0.1$ \\
\hline
$\lambda_7$ & $2.289$ \\
\hline
$\lambda^\prime_7=\lambda_8=\lambda^\prime_9$ & $0.0$ \\
\hline
$\mu_\eta$ & $5.1$ TeV \\
\hline
$\mu_\phi$ & $3$ TeV \\
\hline
$m_\eta$ & $3.167$ TeV \\
\hline
$m_\phi$ & $5.298$ TeV \\
\hline
$M_N$ & $10.2$ TeV \\
\hline
\end{tabular}
\caption{The benchmark point satisfying the correct DM-leptogenesis requirements, corresponding to the $50\%-50\%$ relative proportion of WIMP-FIMP mentioned in table \ref{tab:BP}.}
\label{tab:BPa}
\end{table}
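For reference, the sphaleron conversion factor quoted above is a simple ratio; with the usual identification $Y_B = C_s\, Y_{B-L}$ it evaluates as follows:
\begin{verbatim}
# Sphaleron conversion factor C_s = (8 N_f + 4 N_H)/(22 N_f + 13 N_H).
N_f, N_H = 3, 2
C_s = (8*N_f + 4*N_H) / (22*N_f + 13*N_H)
print(C_s)   # 32/92 = 8/23 ~ 0.348; Y_B = C_s * Y_{B-L}
\end{verbatim}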
In FIG. \ref{fig:scatt} we show the scatter plots for the LFV branching ratios, obtained by varying the key parameters affecting them, as listed in table \ref{tab:BPb}. In all these plots we have not imposed any constraint from the relic abundance, but the neutrino mass constraints are taken care of by the Casas-Ibarra parametrisation, which in turn fixes the Yukawa couplings. The benchmark point that satisfies all relevant bounds from WIMP-FIMP as well as the correct lepton asymmetry is also indicated as the BP. For the same range of parameters we also show the WIMP direct detection rates in FIG. \ref{fig:dd}, where the BP is also indicated. It is clear that our BP lies close to the current experimental upper bounds on $\mu \rightarrow e \gamma$ as well as on direct detection rates, keeping the detection prospects optimistic.
\begin{table}[!ht]
\begin{tabular}{|c|c|}
\hline
Parameter & Range \\
\hline
$\mu_\phi$ & 1 TeV - 10 TeV \\
\hline
$\mu_\eta$ & 1.7 TeV - 17 TeV \\
\hline
$M_N$ & 2.21 TeV - 22.1 TeV \\
\hline
$\lambda_9$ & $10^{-3}$ - 1 \\
\hline
$\lambda_7$ & $10^{-1}$ - $4 \pi$ \\
\hline
\end{tabular}
\caption{Ranges of the parameters varied in order to obtain the scatter plots in FIG. \ref{fig:scatt}, \ref{fig:dd}.}
\label{tab:BPb}
\end{table}
\section{Conclusion}
\label{sec6}
We have studied the possibility of two-component dark matter, with one thermal and one non-thermal component, with the additional feature of creating the baryon asymmetry of the universe, in a minimal extension of the standard model that also accommodates light neutrino masses radiatively, with dark matter particles running inside the loop. The model is a simple extension of the minimal scotogenic model (which consists of the SM particles plus three right handed neutrinos and one additional scalar doublet), enlarged in order to achieve the additional features not present in the minimal model. The WIMP dark matter component is produced thermally in equilibrium, followed by freeze-out, while the non-thermal (or FIMP) component is produced from the out-of-equilibrium decay and scattering of particles in the thermal bath. The WIMP annihilations also produce a non-zero lepton asymmetry, in a way similar to WIMPy leptogenesis scenarios. The WIMP is an admixture of a scalar doublet's neutral component and a scalar singlet, so as to satisfy the criteria of neutrino mass, dark matter relic, direct detection and leptogenesis simultaneously. Interestingly, the particles which assist in the production of the FIMP also partially contribute to the origin of the lepton asymmetry, resulting in a hybrid setup. We outline such a hybrid co-genesis of multi-component DM and lepton asymmetry in this work for some benchmark scenarios, leaving a more detailed analysis for an upcoming work. We also find that our benchmark point, satisfying the required abundance of WIMP-FIMP and the baryon asymmetry, remains sensitive to dark matter direct detection as well as to charged lepton flavour violation like $\mu \rightarrow e \gamma$.
\section{Introduction}
It is well known (mainly from lattice simulations \cite{HotQCD}) that, at temperatures above a certain critical temperature $T_c \approx 150$ MeV, thermal fluctuations break up the chiral condensate $\langle \bar{q} q \rangle$, causing the complete restoration of the $SU(L)_L\otimes SU(L)_R$ chiral symmetry of QCD with $L$ light quarks ($L=2$ and $L=3$ being the physically relevant cases): this leads to a phase transition called ``chiral transition''. Concerning, instead, the $U(1)$ axial symmetry, the nonzero contribution to the anomaly provided by the instanton gas at high temperatures \cite{GPY1981} should imply that it is always broken, also for $T>T_c$. (However, the real magnitude of its breaking and its possible {\it effective} restoration at some temperature above $T_c$ are still important open questions in hadronic physics.) In this work, extending a previous study at zero temperature ($T=0$) \cite{LM2018}, we perform a systematic study of the modifications to the QCD vacuum energy density $\epsilon_{vac}$ in the finite-temperature case, above the chiral transition at $T_c$, caused by a nonzero value of the parameter $\theta$, using two different effective Lagrangian models which implement the $U(1)$ axial anomaly of the fundamental theory and which are both well defined also above $T_c$. In particular, we derive (and critically compare) the expressions for the topological susceptibility $\chi$ and for the second cumulant $c_4$ starting from the $\theta$ dependence of $\epsilon_{vac}(\theta)$ in the two models. Indeed, these two quantities are known to be, respectively, the second and the fourth derivative with respect to $\theta$ of the vacuum energy density, evaluated at $\theta=0$: $\epsilon_{vac}(\theta)=\,const. + \frac{1}{2}\chi\theta^2 + \frac{1}{24}c_4\theta^4 + \ldots$. The first effective Lagrangian model that we shall consider was originally proposed in Ref. \cite{ELSM1} to study the chiral dynamics at $T=0$, and later used as an effective model to study the chiral-symmetry restoration at nonzero temperature \cite{PW1984,ELSMfiniteT_1,ELSMfiniteT_2}. According to 't Hooft (see Refs. \cite{ELSM2,ELSM3} and references therein), it reproduces, in terms of an effective theory, the $U(1)$ axial breaking caused by instantons in the fundamental theory.\footnote{We recall here, however, the criticism by Christos \cite{Christos1984} (see also Refs. \cite{WDV1,WDV2}), according to which the determinantal interaction term in this effective model [see Eq. \eqref{Finite temperature ELsm: potential L>2} below] does not correctly reproduce the $U(1)$ axial anomaly of the fundamental theory.} For brevity, following the notation already introduced in Ref. \cite{LM2018}, we shall refer to it as the ``extended linear sigma ($EL_\sigma$) model''. This model is described by the following Lagrangian:
\begin{equation}\label{'t Hooft effective Lagrangian}
\mathscr{L}_{(EL_\sigma)}(U,U^{\dagger}) = \frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] - V(U,U^{\dagger}) ,
\end{equation}
where
\begin{equation}\label{Finite temperature ELsm: potential L>2}
\begin{aligned}
V(U,U^\dagger) &= \frac{1}{4}\lambda_\pi^2 \mathrm{Tr} \left[(UU^\dagger - \rho_\pi \mathbf{I})^2\right] + \frac{1}{4}\lambda_\pi^{'2} \left[\mathrm{Tr}(UU^\dagger)\right]^2 \\
&- \frac{B_m}{2\sqrt{2}}\mathrm{Tr} \left[\mathcal{M}U + \mathcal{M}^\dagger U^\dagger\right] - \kappa \left[\det U + \det U^\dagger\right] .
\end{aligned}
\end{equation}
In this model, the mesonic effective fields are represented by an $L\times L$ complex matrix $U_{ij}$ which can be written in terms of the quark fields as $U_{ij}\sim \Overline[2]{q}_{jR}q_{iL}$, up to a multiplicative constant; moreover, $\mathcal{M}$ is a complex quark-mass matrix, given by
\begin{equation}\label{mass matrix with theta term}
\mathcal{M}=Me^{i\frac{\theta}{L}} ,
\end{equation}
where $M = \diag (m_1, \ldots, m_L)$ is the physical (real and diagonal) quark-mass matrix. (In this paper, therefore, we have decided to move all the dependence on $\theta$ into the mass term, for later convenience.) Concerning the potential $V(U,U^\dagger)$ defined in Eq. \eqref{Finite temperature ELsm: potential L>2}, we recall that the parameter $\rho_\pi$ is responsible for the fate of the $SU(L)_L \otimes SU(L)_R$ chiral symmetry, which, as is well known, depends on the temperature $T$. We shall include the effects of the temperature in the model allowing the various parameters in Eq. \eqref{Finite temperature ELsm: potential L>2} to vary with the temperature: in particular, the parameter $\rho_\pi$ will be positive, and, correspondingly, the ``vacuum expectation value'' (vev), i.e., the thermal average, of $U$ will be different from zero in the chiral limit $M=0$, until the temperature reaches the chiral phase-transition temperature $T_c$ [$\rho_\pi(T<T_c)>0$], above which it will be negative [$\rho_\pi(T>T_c)<0$], and, correspondingly, the vev of $U$ will vanish in the chiral limit $M=0$.\footnote{We notice here that we have identified the temperature $T_{\rho_\pi}$ at which the parameter $\rho_\pi$ is equal to zero with the chiral phase-transition temperature $T_c$: this is always correct except in the case $L=2$, where we have $T_{\rho_\pi} < T_c$ (see Secs. 2.2 and 3.2 for a more detailed discussion). In any case, in this paper we shall consider exclusively the region of temperatures $T > T_c$.} The second effective Lagrangian model that we shall consider is a generalization of the model proposed by Witten, Di Vecchia, Veneziano, \emph{et al.} \cite{WDV1,WDV2,WDV3} (that, following the notation introduced in Ref. \cite{LM2018}, will be denoted for brevity as the ``WDV model''), and (in a sense which will be made clear below) it approximately ``interpolates'' between the WDV model at $T=0$ and the $EL_\sigma$ model for $T>T_c$: for this reason (again following Ref. \cite{LM2018}) we shall call it the ``interpolating model'' (IM). In this model (which was originally proposed in Ref. \cite{EM1994} and elaborated on in Refs. \cite{MM2003,EM2011,MM2013}), the $U(1)$ axial anomaly is implemented, as in the WDV model, by properly introducing the topological charge density $Q(x)=\frac{g^{2}}{64\pi^{2}}\varepsilon^{\mu\nu\rho\sigma} F_{\mu\nu}^{a}(x)F_{\rho\sigma}^{a}(x)$ as an auxiliary field, so that it satisfies the correct transformation property under the chiral group.\footnote{However, we must recall here that also the particular way of implementing the $U(1)$ axial anomaly in the WDV model, by means of a logarithmic interaction term [as in Eqs. \eqref{Interpolating model Lagrangian with Q} and \eqref{potential of the interpolating model after having integrated out Q} below], was criticized by 't Hooft in Ref. \cite{ELSM2}. Unfortunately, no real progress has been made up to now in solving the controversy (recalled also in the first footnote) between Ref. \cite{ELSM2} and Ref.
\cite{Christos1984}, and we are still living with it.} Moreover, it also assumes that there is another $U(1)$-axial-breaking condensate (in addition to the usual quark-antiquark chiral condensate $\langle \bar{q}q \rangle$), having the form $C_{U(1)} = \langle {\cal O}_{U(1)} \rangle$, where, for a theory with $L$ light quark flavors, ${\cal O}_{U(1)}$ is a $2L$-quark local operator that has the chiral transformation properties of \cite{tHooft1976,KM1970,Kunihiro2009} ${\cal O}_{U(1)} \sim \displaystyle{{\det_{st}}(\bar{q}_{sR}q_{tL}) + {\det_{st}}(\bar{q}_{sL}q_{tR}) }$, where $s,t = 1, \ldots, L$ are flavor indices.\footnote{The color indices (not explicitly indicated) are arranged in such a way that (i) ${\cal O}_{U(1)}$ is a color singlet, and (ii) $C_{U(1)} = \langle {\cal O}_{U(1)} \rangle$ is a \emph{genuine} $2L$-quark condensate, i.e., it has no \emph{disconnected} part proportional to some power of the quark-antiquark chiral condensate $\langle \bar{q} q \rangle$; the explicit form of the condensate for the cases $L=2$ and $L=3$ is discussed in detail in the Appendix A of Ref. \cite{EM2011}.} The effective Lagrangian of the interpolating model is written in terms of the topological charge density $Q$, the mesonic field $U_{ij} \sim \bar{q}_{jR} q_{iL}$ (up to a multiplicative constant), and the new field variable $X \sim {\det} \left( \bar{q}_{sR} q_{tL} \right)$ (up to a multiplicative constant), associated with the $U(1)$ axial condensate: \begin{equation}\label{Interpolating model Lagrangian with Q} \begin{split} \mathscr{L}_{(IM)}(U,&U^\dagger,X,X^\dagger,Q) =\frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] + \frac{1}{2}\partial_\mu X \partial^\mu X^{\dagger} -V_0(U,U^\dagger,X,X^\dagger) \\ &+ \frac{i}{2}Q \left[ \omega_1 \mathrm{Tr} (\log U -\log U^\dagger) + (1-\omega_1) (\log X -\log X^\dagger) \right] + \frac{1}{2A}Q^2 , \end{split} \end{equation} where \begin{equation}\label{potential of the interpolating model} \begin{split} V_0(U,U^\dagger,X,X^\dagger) &= \frac{1}{4}\lambda_\pi^2\mathrm{Tr} [(UU^{\dagger}-\rho_\pi \mathbf{I})^2] + \frac{1}{4}\lambda_\pi^{'2}\left[\mathrm{Tr}(UU^\dagger)\right]^2 + \frac{1}{4}\lambda_X^2 [XX^\dagger - \rho_X]^2 \\ &- \frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[\mathcal{M} U + \mathcal{M}^\dagger U^{\dagger}\right] - \frac{\kappa_1}{2\sqrt{2}}[X^\dagger \det U + X\det U^\dagger ] . \end{split} \end{equation} Once again, we have decided (for later convenience) to put all the $\theta$ dependence in the complex mass matrix $\mathcal{M}=Me^{i\frac{\theta}{L}}$.\\ As in the case of the WDV model, the auxiliary field $Q$ in \eqref{Interpolating model Lagrangian with Q} can be integrated out using its equation of motion: \begin{equation} Q = -\frac{i}{2} A \left[\omega_1 \mathrm{Tr} (\log U - \log U^\dagger )+ (1-\omega_1)(\log X - \log X^\dagger) \right] . \end{equation} After the substitution, we obtain \begin{equation}\label{Interpolating model Lagrangian without Q} \mathscr{L}_{(IM)}(U,U^\dagger,X,X^\dagger)=\frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] + \frac{1}{2}\partial_\mu X \partial^\mu X^{\dagger} -V(U,U^\dagger , X,X^\dagger) , \end{equation} where \begin{equation}\label{potential of the interpolating model after having integrated out Q} \begin{split} V(U,&U^\dagger,X,X^\dagger)=V_0(U,U^\dagger,X,X^\dagger) \\ &-\frac{1}{8} A \left[\omega_1 \mathrm{Tr} (\log U - \log U^\dagger)+ (1-\omega_1)(\log X - \log X^\dagger )\right]^2 . \end{split} \end{equation} All the parameters which appear in Eqs. 
\eqref{potential of the interpolating model} and \eqref{potential of the interpolating model after having integrated out Q} have to be considered as temperature dependent. In particular, the parameter $\rho_X$ plays for the $U(1)$ axial symmetry the same role that the parameter $\rho_\pi$ plays for the $SU(L)_L \otimes SU(L)_R$ chiral symmetry: $\rho_X$ determines the vev of the field $X$ and it is thus responsible for the way in which the $U(1)$ axial symmetry is realised. In order to reproduce the scenario we are interested in, that is, the scenario in which the $U(1)$ axial symmetry is {\it not} restored for $T>T_c$, while the $SU(L)_L \otimes SU(L)_R$ chiral symmetry is restored as soon as the temperature reaches $T_c$, we must assume that, differently from $\rho_\pi$, the parameter $\rho_X$ remains positive across $T_c$, i.e., $\rho_\pi(T<T_c)>0$, $\rho_X(T<T_c)>0$, and $\rho_\pi(T>T_c)<0$, $\rho_X(T>T_c)>0$. Concerning the parameter $\omega_1(T)$, in order to avoid a singular behavior of the anomalous term in the potential \eqref{potential of the interpolating model after having integrated out Q} above the chiral-transition temperature $T_c$, where the vev of the mesonic field $U$ vanishes (in the chiral limit $M=0$), we must assume that \cite{EM1994,MM2013} $\omega_1 (T\geq T_c)=0$.\\ (This way, indeed, the term including $\log U$ in the potential vanishes, eliminating the problem of the divergence, at least as long as the vev of the field $X$ is different from zero or, in other words, as long as the $U(1)$ axial symmetry remains broken also above $T_c$.) As it was already observed in Refs. \cite{EM2011,LM2018}, the Lagrangian of the WDV model is obtained from that of the interpolating model by first fixing $\omega_1=1$ and then taking the formal limits $\lambda_X \to +\infty$ and also $\rho_X \to 0$ (so that $X \to 0$):
\begin{equation}\label{IM to WDV}
\mathscr{L}_{(IM)}\vert_{\omega_1=1} \mathop{\longrightarrow}_{\lambda_X \to +\infty,~\rho_X \to 0} \mathscr{L}_{(WDV)} .
\end{equation}
For this reason, $\omega_1=1$ seems to be the most natural choice for $T=0$ (and, indeed, it was found in Ref. \cite{LM2018} that the expressions for $\chi$ and $c_4$, obtained using the interpolating model with $\omega_1=1$, coincide with those of the WDV model, \emph{regardless} of the values of the other parameters $\kappa_1$ and $\rho_X$).\\ On the other hand, as we have seen above, the parameter $\omega_1$ must necessarily be taken to be equal to zero above the critical temperature $T_c$, where the WDV model is no longer valid (because of the singular behavior of the anomalous term in the potential); conversely, as was already observed in Ref. \cite{MM2013}, in this regime the interaction term $\frac{\kappa_1}{2\sqrt{2}}[X^\dagger \det U + X\det U^\dagger]$ of the interpolating model becomes very similar to the ``instantonic'' interaction term $\kappa [\det U + \det U^\dagger]$ of the $EL_\sigma$ model.
More precisely, we here observe that, by first fixing $\omega_1=0$ and then taking the formal limits $\lambda_X \to +\infty$ and $A \to \infty$ (so that, writing $X= \alpha e^{i\beta}$, one has $\alpha \to \sqrt{\rho_X}$ and $\beta \to 0$, i.e., $X \to \sqrt{\rho_X}$), the Lagrangian of the interpolating model reduces to the Lagrangian of the $EL_\sigma$ model with $\kappa=\frac{\kappa_1\sqrt{\rho_X}}{2\sqrt{2}}$ (i.e., with $\kappa$ proportional to the $U(1)$ axial condensate):
\begin{equation}\label{IM to ELsm}
\mathscr{L}_{(IM)}\vert_{\omega_1=0} \mathop{\longrightarrow}_{\lambda_X \to +\infty,~A \to +\infty} \mathscr{L}_{(EL_\sigma)}\vert_{\kappa=\frac{\kappa_1\sqrt{\rho_X}}{2\sqrt{2}}} .
\end{equation}
The paper is organized as follows: in Secs. 2 and 3 we shall present the results for the extended linear sigma model and the interpolating model, respectively. These results will be obtained at the first nontrivial order in an expansion in the quark masses (since this will greatly simplify the search for the minimum of the potential). On the other hand, no assumption will be made on the parameter $\theta$, which will be treated as a completely free parameter. Moreover, for each of the two models considered, we shall present separately the results for the cases $L\geq 3$ and $L=2$, due to the fact that (for some technical reasons which will be explained in the following: see also Ref. \cite{MM2013}) the case $L=2$ requires a more specific analysis. Finally, in the last section we shall draw our conclusions, summarizing (and critically commenting on) the results obtained in this work and discussing also some possible future developments.
\section{Results for the extended linear sigma model}
\subsection{The case $L\geq 3$}
Following the notation of Ref. \cite{MM2013}, we shall write the parameter $\rho_\pi$, for $T>T_c$, as follows:
\begin{equation}\label{rho_pi for T>Tc}
\rho_\pi \equiv -\frac{1}{2}B_\pi^2<0 ,
\end{equation}
and, moreover, we shall use for the matrix field $U$ the following simple linear parametrization:
\begin{equation}\label{linear parametrization}
U_{ij} = a_{ij} + ib_{ij} ,
\end{equation}
where $a_{ij}$ and $b_{ij}$ are real field variables whose vevs $\bar{a}_{ij}$ and $\bar{b}_{ij}$ vanish in the chiral limit ($\Overline[2]{U}=0$ for $M=0$, when $T>T_c$). We shall also write the complex mass matrix \eqref{mass matrix with theta term} in a similar way, i.e., separating its real and imaginary parts:
\begin{equation}\label{Mass matrix form for T>Tc}
\mathcal{M}_{ij} = M_{ij}\, e^{i\frac{\theta}{L}} \equiv m_{ij} + i n_{ij} .
\end{equation}
With this choice of the parametrizations for the parameter $\rho_\pi$, the fields $U$, and the mass matrix $\mathcal{M}$, the potential \eqref{Finite temperature ELsm: potential L>2} becomes
\begin{equation}\label{Finite temperature ELsm: explicit potential L>2}
\begin{aligned}
V &= \frac{L}{16}\lambda_\pi^2 B_\pi^4 + \frac{1}{4}\lambda_\pi^2 B_\pi^2(a_{ij}^2+b_{ij}^2) -\frac{B_m}{\sqrt{2}}(m_{ij}a_{ji} - n_{ij}b_{ji}) \\
&+ \frac{1}{4}\lambda_\pi^2 \mathrm{Tr} \left[(UU^\dagger)^2\right] + \frac{1}{4}\lambda_\pi^{'2} \left[\mathrm{Tr}(UU^\dagger)\right]^2 - \kappa \left[\det U + \det U^\dagger\right] .
\end{aligned}
\end{equation}
In order to find the value $\Overline[2]{U}$ for which the potential $V$ is minimum (that is, in our mean-field approach, the vev of $U$), we have to solve the following system of stationary-point equations:
\begin{equation}\label{Finite temperature ELsm: minimization system L>2}
\left\{
\begin{aligned}
\left.\frac{\partial V}{\partial a_{ij}}\right|_S &= \frac{1}{2}\lambda_\pi^2 B_\pi^2 \,\bar{a}_{ij} - \frac{B_m}{\sqrt{2}}m_{ji} + \ldots = 0 ,\\
\left.\frac{\partial V}{\partial b_{ij}}\right|_S &= \frac{1}{2}\lambda_\pi^2 B_\pi^2 \,\bar{b}_{ij} + \frac{B_m}{\sqrt{2}}\,n_{ji} + \ldots = 0 ,
\end{aligned}
\right.
\end{equation}
where the neglected terms are of quadratic or higher order in the fields. We can easily solve this system, at the leading order in the quark masses, obtaining: $\Overline[2]{U}_{ij} = \bar{a}_{ij} + i \bar{b}_{ij} \simeq \frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\, (m_{ji}-in_{ji})$, that is
\begin{equation}\label{Vev of U L>2}
\Overline[2]{U} \simeq \frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2} \mathcal{M}^\dagger = \frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2} M e^{-i\frac{\theta}{L}} .
\end{equation}
A simple analysis of the second derivatives of the potential $V$ with respect to the fields, calculated at this point, confirms that it is indeed a minimum of the potential. So, we find that (at the first nontrivial order in the quark masses) the vev of the mesonic field $U$ is proportional to the mass matrix. We notice here that, by virtue of the result \eqref{Vev of U L>2}, the quantities $\Overline[2]{U}\Overline[2]{U}^\dagger$ and $\mathcal{M}\Overline[2]{U}$ turn out to be independent of $\theta$. Therefore, all the terms of the potential \eqref{Finite temperature ELsm: potential L>2} carry no dependence on $\theta$ except for the ``instantonic'' one. That is, explicitly,
\begin{equation}\label{Finite temperature ELsm: theta dependence of potential L>2}
\begin{aligned}
V_{min}(\theta) &= V(\Overline[2]{U}(\theta)) = const. -\kappa \left(\det \Overline[2]{U}(\theta) + \det \Overline[2]{U}^\dagger(\theta)\right) + \ldots \\
&= const. - 2\kappa\left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M \cos\theta + \ldots ,
\end{aligned}
\end{equation}
where the omitted terms are either constant with respect to $\theta$ or of higher order in the quark masses. Finally, from \eqref{Finite temperature ELsm: theta dependence of potential L>2} we can straightforwardly derive the topological susceptibility and the second cumulant, which turn out to be
\begin{equation}\label{Finite temperature ELsm: chi and c4 L>2}
\begin{aligned}
\chi = \left.\frac{\partial^2 V_{min}(\theta)}{\partial \theta^2}\right|_{\theta=0} &\simeq 2\kappa\left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M ,\\
c_4 = \left.\frac{\partial^4 V_{min}(\theta)}{\partial \theta^4}\right|_{\theta=0} &\simeq -2\kappa\left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M .
\end{aligned}
\end{equation}
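These last two relations can be checked symbolically: the whole $\theta$ dependence at this order is of the form $-K\cos\theta$, with $K$ the coefficient appearing in Eq. \eqref{Finite temperature ELsm: theta dependence of potential L>2}, so that $c_4=-\chi$ follows automatically. A minimal sketch with sympy:
\begin{verbatim}
# Symbolic check of Eq. (chi and c4 L>2): for V_min = const - K cos(theta),
# chi = V''(0) = K and c4 = V''''(0) = -K, hence c4 = -chi at this order.
import sympy as sp

theta, K = sp.symbols('theta K', positive=True)
V = -K * sp.cos(theta)                     # theta-dependent part of V_min
chi = sp.diff(V, theta, 2).subs(theta, 0)  # -> K
c4  = sp.diff(V, theta, 4).subs(theta, 0)  # -> -K
assert chi == K and c4 == -K
\end{verbatim}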
\subsection{The special case $L=2$}
As already said in the Introduction, the case $L=2$ requires a more specific analysis. In fact, in this case, the determinant of the matrix field $U$ is quadratic in the fields and so it must be considered explicitly in the stationary-point equations at the leading order in the quark masses. In this particular case, it is more convenient to choose for the parametrization of the field $U$ a variant of the linear parametrization \eqref{linear parametrization}, which is explicitly written in terms of the fields describing the mesonic excitations $\sigma$, $\eta$, $\vec{\delta}$ and $\vec{\pi}$, i.e.,
\begin{equation}\label{linear parametrization for L=2}
U=\frac{1}{\sqrt{2}}\left[(\sigma + i \eta )\mathbf{I}+ (\vec{\delta}+i\vec{\pi})\cdot \vec{\tau}\right] ,
\end{equation}
where $\tau_a$ ($a=1,2,3$) are the Pauli matrices (with the usual normalization $\mathrm{Tr} [\tau_a\tau_b] = 2\delta_{ab}$), while the multiplicative factor $\frac{1}{\sqrt{2}}$ guarantees the correct normalization of the kinetic term in the effective Lagrangian. We expect that all the vevs of the fields $\sigma$, $\eta$, $\vec{\delta}$ and $\vec{\pi}$ are (at the leading order) proportional to the quark masses, so that they vanish in the chiral limit $M\to 0$. Using the parametrization \eqref{linear parametrization for L=2}, we find the following expression for the potential \eqref{Finite temperature ELsm: potential L>2} (having defined $\Lambda_\pi^2 \equiv \lambda_\pi^2 + 2 \lambda_\pi^{'2}$):
\begin{equation}\label{Finite temperature ELsm: explicit potential L=2}
\begin{aligned}
V &= \frac{1}{8}\lambda_\pi^2 B_\pi^4 + \frac{1}{8}\Lambda_\pi^2(\sigma^2 + \eta^2 + \vec{\delta}^2 + \vec{\pi}^2)^2 + \frac{1}{2}\lambda_\pi^2(\sigma^2\vec{\delta}^2 {+} 2\sigma\eta\vec{\delta}\cdot\vec{\pi} {+} \eta^2\vec{\pi}^2) \\
&+ \frac{1}{2}\lambda_\pi^2\left[\vec{\pi}^2 \vec{\delta}^2 {-} (\vec{\delta}\cdot\vec{\pi})^2\right] + \frac{1}{4}\lambda_\pi^2 B_\pi^2(\sigma^2 + \eta^2 + \vec{\delta}^2 + \vec{\pi}^2) \\
&- \frac{B_m}{2}\left[(m_u {+} m_d)\left(\sigma\cos\frac{\theta}{2} {-} \eta\sin\frac{\theta}{2}\right) {+} (m_u{-}m_d)\left(\delta_3\cos\frac{\theta}{2} {-} \pi_3\sin\frac{\theta}{2}\right)\right] \\
&- \kappa(\sigma^2 - \eta^2 - \vec{\delta}^2 + \vec{\pi}^2) .
\end{aligned}
\end{equation}
We now look for the minimum of the potential, solving the following system of stationary-point equations:
\begin{equation}\label{Finite temperature ELsm: minimization system L=2}
\left\{
\begin{aligned}
& \left.\frac{\partial V}{\partial \sigma}\right|_S = \frac{1}{2}\Lambda_\pi^2\left(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\vec{\delta}}^2 + \bar{\vec{\pi}}^2\right)\bar{\sigma} + \lambda_\pi^2\left(\bar{\sigma}\bar{\vec{\delta}}^2 + \bar{\eta}\bar{\vec{\delta}}\cdot\bar{\vec{\pi}}\right) \\
& \hspace{2 cm} + \frac{1}{2}(\lambda_\pi^2 B_\pi^2 - 4\kappa)\bar{\sigma} -\frac{B_m}{2}(m_u + m_d)\cos\frac{\theta}{2} = 0 ,\\
\\
& \left.\frac{\partial V}{\partial \eta}\right|_S = \frac{1}{2}\Lambda_\pi^2\left(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\vec{\delta}}^2 + \bar{\vec{\pi}}^2\right)\bar{\eta} + \lambda_\pi^2\left(\bar{\sigma}\bar{\vec{\delta}}\cdot\bar{\vec{\pi}} + \bar{\eta}\bar{\vec{\pi}}^2\right) \\
& \hspace{2 cm} + \frac{1}{2}(\lambda_\pi^2 B_\pi^2 + 4\kappa)\bar{\eta} +\frac{B_m}{2}(m_u + m_d)\sin\frac{\theta}{2} = 0 ,\\
\\
& \left.\frac{\partial V}{\partial \delta_a}\right|_S = \frac{1}{2}\Lambda_\pi^2\left(\bar{\sigma}^2 {+} \bar{\eta}^2 {+} \bar{\vec{\delta}}^2 {+} \bar{\vec{\pi}}^2\right)\bar{\delta}_a {+} \lambda_\pi^2\left(\bar{\sigma}^2\bar{\delta}_a {+} \bar{\sigma}\bar{\eta}\bar{\pi}_a\right) {+} \lambda_\pi^2\left[\bar{\vec{\pi}}^2\bar{\delta}_a {-} (\bar{\vec{\pi}}\cdot\bar{\vec{\delta}})\bar{\pi}_a\right] \\
& \hspace{2 cm} +\frac{1}{2}(\lambda_\pi^2 B_\pi^2 + 4\kappa)\bar{\delta}_a-\frac{B_m}{2}(m_u - m_d)\cos\frac{\theta}{2}\delta_{a3} = 0 ,\\
\\
& \left.\frac{\partial V}{\partial \pi_a}\right|_S = \frac{1}{2}\Lambda_\pi^2\left(\bar{\sigma}^2 {+} \bar{\eta}^2 {+} \bar{\vec{\delta}}^2 {+} \bar{\vec{\pi}}^2\right)\bar{\pi}_a {+} \lambda_\pi^2\left(\bar{\sigma}\bar{\eta}\bar{\delta}_a{+}\bar{\eta}^2\bar{\pi}_a\right) {+} \lambda_\pi^2\left[\bar{\vec{\delta}}^2\bar{\pi}_a {-} (\bar{\vec{\pi}}\cdot\bar{\vec{\delta}})\bar{\delta}_a\right] \\
& \hspace{2 cm} + \frac{1}{2}(\lambda_\pi^2 B_\pi^2 - 4\kappa)\bar{\pi}_a+\frac{B_m}{2}(m_u - m_d)\sin\frac{\theta}{2}\delta_{a3} = 0 .
\end{aligned}
\right.
\end{equation}
Solving these equations at the first nontrivial order in the quark masses, one immediately finds that $\bar{\delta}_1 = \bar{\delta}_2 = \bar{\pi}_1 = \bar{\pi}_2 =0$ (i.e., the matrix field $\Overline[2]{U}$ turns out to be diagonal, as expected, since the mass matrix $\mathcal{M} = M e^{i\frac{\theta}{2}}$ is diagonal), and moreover
\begin{equation}\label{Finite temperature ELsm: solution of the fields L=2}
\begin{aligned}
\bar{\sigma} &\simeq \frac{B_m (m_u+m_d)}{\lambda_\pi^2 B_\pi^2 - 4 \kappa}\cos\frac{\theta}{2} ,\qquad \bar{\eta} \simeq -\frac{B_m (m_u+m_d)}{\lambda_\pi^2 B_\pi^2 + 4 \kappa}\sin\frac{\theta}{2} ,\\
\bar{\delta}_3 &\simeq \frac{B_m (m_u-m_d)}{\lambda_\pi^2 B_\pi^2 + 4 \kappa}\cos\frac{\theta}{2} ,\qquad \bar{\pi}_3 \simeq -\frac{B_m (m_u-m_d)}{\lambda_\pi^2 B_\pi^2 - 4 \kappa}\sin\frac{\theta}{2} .
\end{aligned}
\end{equation}
Studying the matrix of the second derivatives of the potential with respect to the fields, one immediately sees that this stationary point indeed corresponds to a minimum of the potential, provided that the condition $\lambda_\pi^2 B_\pi^2 > 4\kappa$ is satisfied. Remembering Eq.
\eqref{rho_pi for T>Tc}, this condition can be written as ${\cal G}_\pi \equiv 4\kappa + 2\lambda_\pi^2 \rho_\pi < 0$ and the ``critical transition temperature'' $T_c$ is just defined by the condition ${\cal G}_\pi(T=T_c)=0$: assuming that $\kappa>0$, this implies that in this case (differently from the case $L \ge 3$) $T_c>T_{\rho_\pi}$, where $T_{\rho_\pi}$ is defined to be the temperature at which $\rho_\pi$ vanishes [with $\rho_\pi(T<T_{\rho_\pi})>0$ and $\rho_\pi(T>T_{\rho_\pi})<0$; see also Ref. \cite{MM2013} for a more detailed discussion on this question]. Substituting the solution \eqref{Finite temperature ELsm: solution of the fields L=2} into Eq. \eqref{Finite temperature ELsm: explicit potential L=2} (and neglecting, for consistency, all the terms which are more than quadratic in the quark masses or which are simply constant with respect to $\theta$), we find the following $\theta$ dependence for the minimum value of the potential:
\begin{equation}\label{Finite temperature ELsm: theta dependence of potential L=2}
\begin{aligned}
V_{min}(\theta) &= \frac{1}{8}\lambda_\pi^2 B_\pi^4 {+} \frac{1}{4}\lambda_\pi^2 B_\pi^2(\bar{\sigma}^2 {+} \bar{\eta}^2 {+} \bar{\delta}_3^2 {+} \bar{\pi}_3^2) {-}\frac{B_m}{2}\left[(m_u{+}m_d)\left(\bar{\sigma}\cos\frac{\theta}{2}{-} \bar{\eta}\sin\frac{\theta}{2}\right) \right. \\
&+ \left.(m_u {-} m_d)\left(\bar{\delta}_3\cos\frac{\theta}{2}{-} \bar{\pi}_3\sin\frac{\theta}{2}\right)\right] {-}\kappa(\bar{\sigma}^2 {-} \bar{\eta}^2 {-} \bar{\delta}_3^2 {+} \bar{\pi}_3^2) + O(m^3) \\
&= \, const. -\frac{4\kappa B_m^2 m_u m_d}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2}\cos\theta + O(m^3) .
\end{aligned}
\end{equation}
From Eq. \eqref{Finite temperature ELsm: theta dependence of potential L=2}, we can derive the following expressions of the topological susceptibility and of the second cumulant:
\begin{equation}\label{Finite temperature ELsm: chi and c4 L=2}
\begin{aligned}
\chi = \left.\frac{\partial^2 V_{min}(\theta)}{\partial \theta^2}\right|_{\theta=0} &\simeq \frac{4\kappa B_m^2}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2} m_u m_d ,\\
c_4 = \left.\frac{\partial^4 V_{min}(\theta)}{\partial \theta^4}\right|_{\theta=0} &\simeq -\frac{4\kappa B_m^2}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2} m_u m_d .
\end{aligned}
\end{equation}
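The algebra leading to the $\cos\theta$ coefficient above can also be verified symbolically. The sketch below (with the shorthand $p \equiv \lambda_\pi^2 B_\pi^2$, keeping only the terms quadratic in the vevs, as appropriate at this order) checks that the potential evaluated on the solution \eqref{Finite temperature ELsm: solution of the fields L=2} differs from $-\frac{4\kappa B_m^2 m_u m_d}{p^2-16\kappa^2}\cos\theta$ only by a $\theta$-independent constant, so its $\theta$-derivative should reduce to zero:
\begin{verbatim}
# Symbolic verification (sympy) of the theta dependence in
# Eq. (theta dependence of potential L=2); here p = lambda_pi^2 B_pi^2.
import sympy as sp

th, p, k, Bm, mu, md = sp.symbols('theta p kappa B_m m_u m_d', positive=True)
c, s = sp.cos(th/2), sp.sin(th/2)
sig = Bm*(mu + md)*c/(p - 4*k);  eta = -Bm*(mu + md)*s/(p + 4*k)
d3  = Bm*(mu - md)*c/(p + 4*k);  pi3 = -Bm*(mu - md)*s/(p - 4*k)
# quadratic part of the potential, evaluated on the solution
V = (p/4)*(sig**2 + eta**2 + d3**2 + pi3**2) \
    - (Bm/2)*((mu + md)*(sig*c - eta*s) + (mu - md)*(d3*c - pi3*s)) \
    - k*(sig**2 - eta**2 - d3**2 + pi3**2)
target = -4*k*Bm**2*mu*md*sp.cos(th)/(p**2 - 16*k**2)
assert sp.simplify(sp.diff(V - target, th)) == 0   # differ only by a constant
\end{verbatim}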
\section{Results for the interpolating model with the inclusion of a $U(1)$ axial condensate}
\subsection{The case $L\geq 3$}
Following, as usual, the notation of Ref. \cite{MM2013}, we shall write the parameters $\rho_\pi$ and $\rho_X$ for $T>T_c$ as follows:
\begin{equation}\label{rho_pi and rho_X for T>Tc}
\rho_\pi \equiv -\frac{1}{2}B_\pi^2<0 ,~~~ \rho_X \equiv \frac{1}{2}F_X^2>0 .
\end{equation}
Moreover, we shall continue to write the complex mass matrix in the form \eqref{Mass matrix form for T>Tc} and, concerning the fields, we shall use for $U$ the usual linear parametrization \eqref{linear parametrization}, while we shall use for $X$ the following nonlinear parametrization (in the form of a {\it polar decomposition}):
\begin{equation}\label{Parametrization of the field X}
X = \alpha e^{i\beta} .
\end{equation}
With this choice of the parametrizations for the parameters $\rho_\pi$ and $\rho_X$, the fields $U$ and $X$, and the mass matrix $\mathcal{M}$, the following expression for the potential of the interpolating model at $T>T_c$ is found:
\begin{equation}\label{Finite temperature Interpolating: explicit potential L>2}
\begin{aligned}
V &= \frac{L}{16}\lambda_\pi^2 B_\pi^4 + \frac{1}{4}\lambda_\pi^2 B_\pi^2(a_{ij}^2+b_{ij}^2) + \frac{1}{4}\lambda_X^2\left(\alpha^2 - \frac{1}{2}F_X^2\right)^2 + \frac{1}{2}A\beta^2 \\
&-\frac{B_m}{\sqrt{2}}(m_{ij}a_{ji} - n_{ij}b_{ji})+ \frac{1}{4}\lambda_\pi^2 \mathrm{Tr} \left[(UU^\dagger)^2\right] + \frac{1}{4}\lambda_\pi^{'2} \left[\mathrm{Tr}(UU^\dagger)\right]^2 \\
&-\frac{\kappa_1\alpha}{2\sqrt{2}} \left[\cos\beta(\det U + \det U^\dagger)-i\sin\beta(\det U - \det U^\dagger)\right] .
\end{aligned}
\end{equation}
The minimum of the potential is found by solving the following system of stationary-point equations:
\begin{equation}\label{Finite temperature Interpolating: minimization system L>2}
\left\{
\begin{aligned}
\left.\frac{\partial V}{\partial a_{ij}}\right|_S &= \frac{1}{2}\lambda_\pi^2 B_\pi^2 \,\bar{a}_{ij} - \frac{B_m}{\sqrt{2}}m_{ji} + \ldots = 0 ,\\
\\
\left.\frac{\partial V}{\partial b_{ij}}\right|_S &= \frac{1}{2}\lambda_\pi^2 B_\pi^2 \,\bar{b}_{ij} + \frac{B_m}{\sqrt{2}}\,n_{ji} + \ldots = 0 ,\\
\\
\left.\frac{\partial V}{\partial \alpha}\right|_S &= \lambda_X^2 \left(\bar{\alpha}^2 - \frac{F_X^2}{2}\right)\bar{\alpha} \\
&- \frac{\kappa_1}{2\sqrt{2}} \left[\cos\bar{\beta}(\det \Overline[2]{U} + \det \Overline[2]{U}^\dagger)-i\sin\bar{\beta}(\det \Overline[2]{U} - \det \Overline[2]{U}^\dagger)\right] = 0 ,\\
\\
\left.\frac{\partial V}{\partial \beta}\right|_S &= A \bar{\beta} + \frac{\kappa_1\bar{\alpha}}{2\sqrt{2}} \left[\sin\bar{\beta}(\det \Overline[2]{U} + \det \Overline[2]{U}^\dagger)+i\cos\bar{\beta}(\det \Overline[2]{U} - \det \Overline[2]{U}^\dagger)\right] = 0 .
\end{aligned}
\right.
\end{equation}
We notice that the first two equations \eqref{Finite temperature Interpolating: minimization system L>2} coincide with the equations \eqref{Finite temperature ELsm: minimization system L>2}, so that the solution for $\bar{a}_{ij}$ and $\bar{b}_{ij}$ (i.e., for $\Overline[2]{U}$) will be, at the leading order in the quark masses, exactly the same as that found in the $EL_\sigma$ model [see Eq. \eqref{Vev of U L>2}]. Moreover, with that expression for $\Overline[2]{U}$, we can see that $\det \Overline[2]{U} + \det \Overline[2]{U}^\dagger \sim \det M \cos\theta$ and $\det \Overline[2]{U} - \det \Overline[2]{U}^\dagger \sim \det M \sin\theta$, and from the second pair of equations \eqref{Finite temperature Interpolating: minimization system L>2} we can conclude that $\bar{\alpha} \sim \frac{F_X}{\sqrt{2}} + O(\det M \cos\theta)$ and $\bar{\beta} \sim O(\det M \sin\theta)$. More precisely, we find that\footnote{Studying the matrix of the second derivatives, one easily sees that the solution \eqref{Finite temperature Interpolating: alpha beta L>2} for $\bar{\alpha}$ and $\bar{\beta}$ (which, in the chiral limit, reduces to $\bar{\alpha} = \frac{F_X}{\sqrt{2}}$ and $\bar{\beta}=0$) indeed corresponds to the minimum of the potential (see also Ref.
\cite{MM2013} for more details).} \begin{equation}\label{Finite temperature Interpolating: alpha beta L>2} \begin{aligned} \bar{\alpha} &\simeq \frac{F_X}{\sqrt{2}} + \frac{\kappa_1}{\sqrt{2}\lambda_X^2 F_X^2} \left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M \cos\theta ,\\ \bar{\beta} &\simeq -\frac{1}{A}\frac{\kappa_1F_X}{2} \left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M \sin\theta . \end{aligned} \end{equation} We can now substitute the solutions \eqref{Vev of U L>2} and \eqref{Finite temperature Interpolating: alpha beta L>2} into the expression \eqref{Finite temperature Interpolating: explicit potential L>2}, in order to find the $\theta$ dependence of the minimum value of the potential. As in the case of the $EL_\sigma$ model, the mass term and the terms dependent only on the quantity $\Overline[2]{U}\Overline[2]{U}^\dagger$ turn out to be independent of $\theta$, while by virtue of the result \eqref{Finite temperature Interpolating: alpha beta L>2} the quantity $\Overline[2]{X}\Overline[2]{X}^\dagger$ turns out to be (at the first nontrivial order in the quark masses) $\Overline[2]{X}\Overline[2]{X}^\dagger \simeq \frac{F_X^2}{2}+ \frac{\kappa_1}{\lambda_X^2 F_X} \left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M \cos\theta$. Putting everything together, we see that the $\theta$ dependence of the minimum value of the potential is given, at the lowest order in the quark masses, by the following expression: \begin{equation}\label{Finite temperature Interpolating: theta dependence of the potential L>2} \begin{aligned} V_{min}(\theta) &= \, const. - \frac{\kappa_1}{2\sqrt{2}}\left(\Overline[2]{X}^\dagger\det \Overline[2]{U} + \Overline[2]{X}\det \Overline[2]{U}^\dagger\right) + \ldots \\ &= \, const. -\frac{\kappa_1F_X}{2} \left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2 B_\pi^2}\right)^L \det M\cos\theta + \ldots . \end{aligned} \end{equation} From Eq. \eqref{Finite temperature Interpolating: theta dependence of the potential L>2} we can directly derive the following expressions for the topological susceptibility $\chi$ and the second cumulant $c_4$: \begin{equation}\label{Finite temperature Interpolating: chi and c4 L>2} \begin{aligned} \chi = \left.\frac{\partial^2 V_{min}(\theta)}{\partial \theta^2}\right|_{\theta=0} &\simeq \frac{\kappa_1F_X}{2}\left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2B_\pi^2}\right)^L \det M ,\\ c_4 = \left.\frac{\partial^4 V_{min}(\theta)}{\partial \theta^4}\right|_{\theta=0} &\simeq -\,\frac{\kappa_1F_X}{2}\left(\frac{2B_m}{\sqrt{2}\lambda_\pi^2B_\pi^2}\right)^L \det M . \end{aligned} \end{equation} Comparing these last results with those that we have found in the $EL_\sigma$ model (for the case $L \ge 3$), we see that they coincide with each other (at least, at the leading order in the quark masses) provided that the parameter $\kappa$ in Eqs. \eqref{Finite temperature ELsm: theta dependence of potential L>2} and \eqref{Finite temperature ELsm: chi and c4 L>2} is identified with $\kappa_1 F_X/4$ (and is thus proportional to the $U(1)$ axial condensate). \subsection{The special case $L=2$} Since the quark-mass matrix ${\cal M} = M e^{i\frac{\theta}{2}}$ is diagonal, and remembering what we have found for the nondiagonal elements of the matrix field $\Overline[2]{U}$ in the case of the $EL_\sigma$ model in Sec. 2.2, we can reasonably assume $\Overline[2]{U}$ to be diagonal from the beginning (i.e., $\bar{\delta}_1 = \bar{\delta}_2 = \bar{\pi}_1 = \bar{\pi}_2 = 0$).
In other words, we take $\Overline[2]{U}$ and $\Overline[2]{X}$ in the form \begin{equation}\label{Diagonal form of U L=2} \Overline[2]{U}=\frac{1}{\sqrt{2}}\big[(\bar{\sigma}+i\bar{\eta})\mathbf{I} + (\bar{\delta}_3 + i \bar{\pi}_3)\,\tau_3\big] ,\quad \Overline[2]{X} = \bar{\alpha} e^{i\bar{\beta}} . \end{equation} Concerning the various terms of the potential in this case, the only term that needs to be recast in a more explicit form is the interaction term between $U$ and $X$, which turns out to be: $\Overline[2]{X}^\dagger \det \Overline[2]{U} {+} \Overline[2]{X} \det \Overline[2]{U}^\dagger = \bar{\alpha}\left[(\bar{\sigma}^2 {-} \bar{\eta}^2 {-} \bar{\delta}_3^2 {+} \bar{\pi}_3^2)\cos\bar{\beta} {+} 2(\bar{\eta}\bar{\sigma} {-} \bar{\delta}_3\bar{\pi}_3)\sin\bar{\beta}\right]$. Putting together all these results, we find the following expression for the potential: \begin{equation}\label{Finite temperature Interpolating: explicit potential L=2} \begin{aligned} \bar{V} &= \frac{1}{8}\lambda_\pi^2 B_\pi^4 + \frac{1}{8}\Lambda_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2)^2 + \frac{1}{2}\lambda_\pi^2(\bar{\sigma}^2\bar{\delta}_3^2 + 2\bar{\sigma}\bar{\eta}\bar{\delta}_3\bar{\pi}_3 + \bar{\eta}^2\bar{\pi}_3^2) \\ &+ \frac{1}{4}\lambda_\pi^2 B_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2) + \frac{1}{4}\lambda_X^2\left(\bar{\alpha}^2-\frac{F_X^2}{2}\right)^2 +\frac{1}{2}A\bar{\beta}^2 \\ &- \frac{B_m}{2}\left[(m_u {+} m_d)\left(\bar{\sigma}\cos\frac{\theta}{2} {-} \bar{\eta}\sin\frac{\theta}{2}\right) {+} (m_u{-}m_d)\left(\bar{\delta}_3\cos\frac{\theta}{2} {-} \bar{\pi}_3\sin\frac{\theta}{2}\right)\right] \\ &- \frac{\kappa_1\bar{\alpha}}{2\sqrt{2}}\left[(\bar{\sigma}^2 - \bar{\eta}^2 - \bar{\delta}_3^2 + \bar{\pi}_3^2)\cos\bar{\beta} + 2(\bar{\eta}\bar{\sigma}-\bar{\delta}_3\bar{\pi}_3)\sin\bar{\beta}\right] .
\end{aligned} \end{equation} As usual, in order to find the minimum of the potential, we have to solve the following system of stationary-point equations: \begin{equation}\label{Finite temperature Interpolating: minimization system L=2} \left\{ \begin{aligned} \left.\frac{\partial V}{\partial \sigma}\right|_S &= \frac{1}{2}\Lambda_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2)\bar{\sigma} + \lambda_\pi^2(\bar{\sigma}\bar{\delta}_3^2 + \bar{\eta}\bar{\delta}_3\bar{\pi}_3) + \frac{1}{2}\lambda_\pi^2 B_\pi^2\bar{\sigma} \\ &-\frac{B_m}{2}(m_u + m_d)\cos\frac{\theta}{2}-\frac{\kappa_1\bar{\alpha}}{\sqrt{2}}(\bar{\sigma}\cos\bar{\beta}+ \bar{\eta}\sin\bar{\beta})=0 ,\\ \\ \left.\frac{\partial V}{\partial \eta}\right|_S &= \frac{1}{2}\Lambda_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2)\bar{\eta} + \lambda_\pi^2(\bar{\sigma}\bar{\delta}_3\bar{\pi}_3 + \bar{\eta}\bar{\pi}_3^2) + \frac{1}{2}\lambda_\pi^2 B_\pi^2\bar{\eta} \\ &+\frac{B_m}{2}(m_u + m_d)\sin\frac{\theta}{2}-\frac{\kappa_1\bar{\alpha}}{\sqrt{2}}(-\bar{\eta}\cos\bar{\beta}+\bar{\sigma}\sin\bar{\beta})=0 ,\\ \\ \left.\frac{\partial V}{\partial \delta_3}\right|_S &= \frac{1}{2}\Lambda_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2)\bar{\delta}_3 + \lambda_\pi^2(\bar{\sigma}^2\bar{\delta}_3 + \bar{\sigma}\bar{\eta}\bar{\pi}_3) + \frac{1}{2}\lambda_\pi^2 B_\pi^2\bar{\delta}_3 \\ &-\frac{B_m}{2}(m_u - m_d)\cos\frac{\theta}{2}-\frac{\kappa_1\bar{\alpha}}{\sqrt{2}}(-\bar{\delta}_3\cos\bar{\beta}-\bar{\pi}_3\sin\bar{\beta})=0 ,\\ \\ \left.\frac{\partial V}{\partial \pi_3}\right|_S &= \frac{1}{2}\Lambda_\pi^2(\bar{\sigma}^2 + \bar{\eta}^2 + \bar{\delta}_3^2 + \bar{\pi}_3^2)\bar{\pi}_3 + \lambda_\pi^2(\bar{\sigma}\bar{\eta}\bar{\delta}_3+\bar{\eta}^2\bar{\pi}_3) + \frac{1}{2}\lambda_\pi^2 B_\pi^2\bar{\pi}_3 \\ &+ \frac{B_m}{2}(m_u - m_d)\sin\frac{\theta}{2}-\frac{\kappa_1\bar{\alpha}}{\sqrt{2}}(\bar{\pi}_3\cos\bar{\beta}-\bar{\delta}_3\sin\bar{\beta})=0 ,\\ \\ \left.\frac{\partial V}{\partial \alpha}\right|_S &=\lambda_X^2\left(\bar{\alpha}^2-\frac{F_X^2}{2}\right)\bar{\alpha} \\ &- \frac{\kappa_1}{2\sqrt{2}}\left[(\bar{\sigma}^2 {-} \bar{\eta}^2 {-} \bar{\delta}_3^2 {+} \bar{\pi}_3^2)\cos\bar{\beta} + 2(\bar{\eta}\bar{\sigma}{-}\bar{\delta}_3\bar{\pi}_3)\sin\bar{\beta}\right]=0 ,\\ \\ \left.\frac{\partial V}{\partial \beta}\right|_S &= - \frac{\kappa_1\bar{\alpha}}{2\sqrt{2}}\left[-(\bar{\sigma}^2 {-} \bar{\eta}^2 {-} \bar{\delta}_3^2 {+} \bar{\pi}_3^2)\sin\bar{\beta} + 2(\bar{\eta}\bar{\sigma}{-}\bar{\delta}_3\bar{\pi}_3)\cos\bar{\beta}\right]+A\bar{\beta}=0 . \end{aligned} \right. 
\end{equation} Solving these equations at the first nontrivial order in the quark masses, one finds that \begin{equation}\label{Finite temperature Interpolating: solution of the fields L=2} \begin{aligned} \bar{\sigma} &\simeq \frac{B_m (m_u+m_d)}{\lambda_\pi^2 B_\pi^2 - \kappa_1 F_X}\cos\frac{\theta}{2} ,\qquad \bar{\eta} \simeq -\frac{B_m (m_u+m_d)}{\lambda_\pi^2 B_\pi^2 + \kappa_1 F_X}\sin\frac{\theta}{2} ,\\ \bar{\delta}_3 &\simeq \frac{B_m (m_u-m_d)}{\lambda_\pi^2 B_\pi^2 + \kappa_1 F_X}\cos\frac{\theta}{2} ,\qquad \bar{\pi}_3 \simeq -\frac{B_m (m_u-m_d)}{\lambda_\pi^2 B_\pi^2 - \kappa_1 F_X}\sin\frac{\theta}{2} ,\\ \bar{\alpha} &\simeq \frac{F_X}{\sqrt{2}} + \frac{\sqrt{2}\kappa_1^2\lambda_\pi^2 B_\pi^2}{\lambda_X^2 F_X (\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2)^2}B_m^2(m_u^2+m_d^2) \\ &+ \frac{\sqrt{2}\kappa_1^2(\lambda_\pi^4 B_\pi^4+\kappa_1^2 F_X^2)}{\lambda_X^2 F_X^2 (\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2)^2}B_m^2 m_u m_d \cos\theta ,\\ \bar{\beta} &\simeq -\frac{\kappa_1 F_X}{A}\frac{B_m^2 m_u m_d}{\lambda_\pi^4 B_\pi^4-\kappa_1^2 F_X^2}\sin\theta . \end{aligned} \end{equation} Studying the matrix of the second derivatives of the potential with respect to the fields, one immediately verifies that this solution indeed corresponds to a minimum of the potential, provided that the condition $\lambda_\pi^2 B_\pi^2 > \kappa_1 F_X$, i.e., remembering Eq. \eqref{rho_pi for T>Tc}, ${\cal G}_\pi \equiv \kappa_1 F_X + 2\lambda_\pi^2 \rho_\pi < 0$, is satisfied. As in the case of the $EL_\sigma$ model for $L=2$ (discussed in Sec. 2.2), the critical transition temperature $T_c$ is just defined by the condition ${\cal G}_\pi(T=T_c)=0$ and, assuming that $\kappa_1 F_X > 0$, this implies that (differently from the case $L \ge 3$) $T_c>T_{\rho_\pi}$ (see also Ref. \cite{MM2013} for a more detailed discussion on this question). Substituting this solution into Eq. \eqref{Finite temperature Interpolating: explicit potential L=2}, and neglecting (for consistency) all the terms which are more than quadratic in the quark masses, we find the following $\theta$ dependence for the minimum value of the potential: \begin{equation}\label{Finite temperature Interpolating: theta dependence of potential L=2} \begin{aligned} V_{min}(\theta) &= \frac{\lambda_\pi^2}{8}B_\pi^4 {+} \frac{\lambda_\pi^2}{4}B_\pi^2(\bar{\sigma}^2 {+} \bar{\eta}^2 {+} \bar{\delta}_3^2 {+} \bar{\pi}_3^2) {-}\frac{B_m}{2}\left[(m_u{+}m_d)\left(\bar{\sigma}\cos\frac{\theta}{2}{-} \bar{\eta}\sin\frac{\theta}{2}\right) \right. \\ &+ \left.(m_u {-} m_d)\left(\bar{\delta}_3\cos\frac{\theta}{2}{-} \bar{\pi}_3\sin\frac{\theta}{2}\right)\right] {-}\frac{\kappa_1 F_X}{4}(\bar{\sigma}^2 {-} \bar{\eta}^2 {-} \bar{\delta}_3^2 {+} \bar{\pi}_3^2) + O(m^3) \\ &= \, const. -\frac{\kappa_1 F_X B_m^2 m_u m_d}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2}\cos\theta + O(m^3) . \end{aligned} \end{equation} Also in this case, we notice that this potential and the expressions \eqref{Finite temperature Interpolating: solution of the fields L=2} for $\bar{\sigma}$, $\bar{\eta}$, $\bar{\delta}_3$, and $\bar{\pi}_3$ coincide exactly (at least, at the leading order in the quark masses) with the corresponding expressions \eqref{Finite temperature ELsm: solution of the fields L=2} and \eqref{Finite temperature ELsm: theta dependence of potential L=2} that we have found in the $EL_{\sigma}$ model, provided that the constant $\kappa$ is identified with $\kappa_1 F_X/4$ (and is thus proportional to the $U(1)$ axial condensate).
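As a quick check of this identification, setting $\kappa = \kappa_1 F_X/4$ in the coefficient of $\cos\theta$ in Eq. \eqref{Finite temperature ELsm: theta dependence of potential L=2} gives
\begin{equation}
\left.\frac{4\kappa B_m^2}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2}\right|_{\kappa=\kappa_1 F_X/4} = \frac{\kappa_1 F_X B_m^2}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2} ~,
\end{equation}
which is precisely the coefficient appearing in Eq. \eqref{Finite temperature Interpolating: theta dependence of potential L=2}.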
The same consideration also applies, of course, to the results for the topological susceptibility and for the second cumulant: \begin{equation}\label{Finite temperature Interpolating: chi and c4 L=2} \begin{aligned} \chi = \left.\frac{\partial^2 V_{min}(\theta)}{\partial \theta^2}\right|_{\theta=0} &\simeq \frac{\kappa_1 F_X B_m^2}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2} m_u m_d ,\\ c_4 = \left.\frac{\partial^4 V_{min}(\theta)}{\partial \theta^4}\right|_{\theta=0} &\simeq -\frac{\kappa_1 F_X B_m^2}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2} m_u m_d . \end{aligned} \end{equation} \section{Conclusions: summary and analysis of the results} In this concluding section we summarize and critically comment on the results that we have found, also indicating some possible future directions. Two basic remarks must be made about our results. First (as already observed at the end of Secs. 3.1 and 3.2), the results that we have found (both in the case $L \ge 3$ and in the case $L=2$) for the vacuum energy density $\epsilon_{vac}(\theta) = V_{min}(\theta)$ (and, as a consequence, for the topological susceptibility $\chi$ and the second cumulant $c_4$) in the $EL_\sigma$ model and in the interpolating model are exactly the same, provided that the parameter $\kappa$ in Eqs. \eqref{Finite temperature ELsm: theta dependence of potential L>2}--\eqref{Finite temperature ELsm: chi and c4 L>2} and \eqref{Finite temperature ELsm: theta dependence of potential L=2}--\eqref{Finite temperature ELsm: chi and c4 L=2} is identified with $\kappa_1 F_X/4$ (and is, therefore, proportional to the $U(1)$ axial condensate). In fact, we have found that \begin{equation}\label{theta dependence of epsilon} \epsilon_{vac}(\theta) \simeq const. - K \cos\theta , \end{equation} and, therefore, \begin{equation} \chi = \left.\frac{\partial^2 \epsilon_{vac}(\theta)}{\partial \theta^2}\right|_{\theta=0} \simeq K ,\qquad c_4 = \left.\frac{\partial^4 \epsilon_{vac}(\theta)}{\partial \theta^4}\right|_{\theta=0} \simeq -K , \end{equation} where, for $L \ge 3$, \begin{equation}\label{K for L>2} K_{(L \ge 3)} = 2\kappa \left( \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} \right)^L \det M = \frac{\kappa_1 F_X}{2} \left( \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} \right)^L \det M , \end{equation} and, for $L=2$, \begin{equation}\label{K for L=2} K_{(L=2)} = \frac{4 \kappa B_m^2}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2} m_u m_d = \frac{\kappa_1 F_X B_m^2}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2} m_u m_d . \end{equation} This result is, of course, in agreement with what we have already observed in the Introduction [see, in particular, Eq. \eqref{IM to ELsm}], but we want to emphasize that it is even \emph{stronger} than the correspondence \eqref{IM to ELsm}, since it is valid \emph{regardless} of the parameters $\lambda_X$ and $A$ of the interpolating model (which do \emph{not} appear in the above-written expressions for $\epsilon_{vac}(\theta)$, $\chi$, and $c_4$). Taking into account also the results that were found in Ref. \cite{LM2018}, we now clearly see that the so-called ``interpolating model'' indeed approximately ``interpolates'' between the WDV model at $T=0$ (for $\omega_1=1$ it reproduces the same expressions for $\chi$ and $c_4$ as the WDV model) and the $EL_\sigma$ model at $T > T_c$ (where $\omega_1=0$).
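As a simple check, the above values of $\chi$ and $c_4$ follow at once from the Taylor expansion of Eq. \eqref{theta dependence of epsilon} around $\theta=0$,
\begin{equation}
\epsilon_{vac}(\theta) \simeq const. - K\left(1 - \frac{\theta^2}{2} + \frac{\theta^4}{24} + O(\theta^6)\right) ~,
\end{equation}
whose quadratic and quartic terms immediately give $\left.\partial^2_\theta \epsilon_{vac}\right|_{\theta=0} = K$ and $\left.\partial^4_\theta \epsilon_{vac}\right|_{\theta=0} = -K$.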
We also observe that the result \eqref{K for L=2}, for the special case $L=2$, can be rewritten in the following more interesting and enlightening way: \begin{equation}\label{K for L=2 bis} K_{(L=2)} \simeq \frac{M_\eta^2 - M_\sigma^2}{4 M_\eta^2 M_\sigma^2} (B_m m_u) (B_m m_d) , \end{equation} in terms of the masses of the scalar and pseudoscalar mesonic excitations, which, at the leading order in the quark masses, are given by \cite{MM2013} \begin{equation} \begin{aligned} M_\sigma^2 = M_\pi^2 &\simeq \frac{1}{2} (\lambda_\pi^2 B_\pi^2 - 4\kappa) = \frac{1}{2} (\lambda_\pi^2 B_\pi^2 - \kappa_1 F_X) ,\\ M_\eta^2 = M_\delta^2 &\simeq \frac{1}{2} (\lambda_\pi^2 B_\pi^2 + 4\kappa) = \frac{1}{2} (\lambda_\pi^2 B_\pi^2 + \kappa_1 F_X) . \end{aligned} \end{equation} The second important remark that we want to make about our results is that both the $\theta$ dependence of $\epsilon_{vac}(\theta)$ in Eq. \eqref{theta dependence of epsilon} and the quark-mass dependence of the coefficient $K$ (proportional to $\det M$) are in agreement with the corresponding results found using the so-called ``dilute instanton-gas approximation'' (DIGA) \cite{GPY1981}. Of course, we cannot make more quantitative statements about the comparison of our value of $K$ with the corresponding value $K_{inst}$ in DIGA, or about its dependence on the temperature $T$.\\ In this respect, recent lattice investigations have shown contrasting results. Some studies have found considerable agreement with the DIGA prediction even in the region right above $T_c$ \cite{lattice_1,lattice_2} or in the region above $1.5 T_c$ \cite{lattice_3}, while other studies \cite{lattice_4,lattice_5} have found appreciable deviations from the DIGA prediction for temperatures $T$ up to two or three times $T_c$. The situation is thus controversial and calls for further and more accurate studies (in this respect, see also Ref. \cite{DDSX2017}). Concerning, instead, the limits of validity of our analytical results \eqref{theta dependence of epsilon}--\eqref{K for L=2}, we recall that they were obtained at the first nontrivial order in an expansion in the quark masses. Therefore, both the coincidence between the results in the two models and the agreement with the $\theta$ dependence predicted by DIGA are valid in this approximation, and it would be interesting to investigate how strongly these results are modified going beyond the leading order in the quark masses. (It is reasonable to suspect that this approximation makes sense for $T - T_c \gg m_f$, but \emph{not} for $T$ close to $T_c$, i.e., for $T - T_c \lesssim m_f$.)\\ A complete and detailed study of the $\theta$ dependence of $\epsilon_{vac}(\theta)$ both for the $EL_\sigma$ model and the interpolating model, not limited to the leading order in the quark masses, is beyond the scope of the present paper and is left for future work.\\ A first step in this direction has, however, already been taken in the Appendix of the present paper, where an ``exact'' expression for the topological susceptibility $\chi$ for $T>T_c$ has been derived, both for the interpolating model (considering the effective Lagrangian in the form \eqref{Interpolating model Lagrangian with Q}--\eqref{potential of the interpolating model}, where the field variable $Q(x)$ has not yet been integrated out) and, making use of the correspondence \eqref{IM to ELsm}, also for the $EL_\sigma$ model. The expressions of $\chi$ for the two models are reported in Eqs. \eqref{chi_IM} and \eqref{chi_ELsm} respectively.
We see that they are slightly different, but if one expands at the leading order in the quark masses, one easily verifies that they both tend to the same limit (with the identification $\kappa = \kappa_1 F_X/4$), given by Eqs. \eqref{Finite temperature ELsm: chi and c4 L>2} and \eqref{Finite temperature Interpolating: chi and c4 L>2} for $L \ge 3$, and by Eqs. \eqref{Finite temperature ELsm: chi and c4 L=2} and \eqref{Finite temperature Interpolating: chi and c4 L=2} for $L=2$.\\ A necessary condition for this approximation to be valid is, of course, that [see Eqs. \eqref{chi_IM}--\eqref{det Lambda and det S}] $\det {\cal S} \ll \frac{A}{\bar{\alpha}^2} \det \Lambda$, which, by virtue of Eq. \eqref{chi_IM}, implies $\chi \ll A$.\\ (In the opposite extreme case, if we formally let $A \to 0$, keeping all the rest fixed, we would obtain that $\chi \simeq A \to 0$.)\\ While at $T=0$ this condition is reasonably satisfied, since in that case one identifies $A$ with the \emph{pure-gauge} topological susceptibility and [see Ref. \cite{LM2018} and references therein] $\chi(T=0) \simeq (75~{\rm MeV})^4$, $A(T=0) \simeq (180~{\rm MeV})^4$, its validity at finite temperature, above $T_c$, is, instead, questionable. (For example, it is not even clear if, in our phenomenological Lagrangian for the interpolating model at finite temperature, the parameter $A(T)$ can be simply identified with the \emph{pure-gauge} topological susceptibility.)\\ We hope that future work (both analytical and numerical) will shed light on these questions. \section*{Acknowledgments} The author is extremely grateful to Francesco Luciano for his help during the initial stage of the work. \newpage \renewcommand{\thesection}{} \renewcommand{\thesubsection}{A.\arabic{subsection} } \pagebreak[3] \setcounter{section}{1} \setcounter{equation}{0} \setcounter{subsection}{0} \setcounter{footnote}{0} \begin{flushleft} {\Large\bf \thesection Appendix A: ``Exact'' expression for the topological susceptibility above $T_c$} \end{flushleft} \renewcommand{\thesection}{A} \noindent In the interpolating model, it is possible to derive the two-point function of $Q(x)$ (i.e., the topological susceptibility $\chi$) at $\theta=0$ in another (and even more direct) way, considering the effective Lagrangian in the form \eqref{Interpolating model Lagrangian with Q}--\eqref{potential of the interpolating model}, where the field variable $Q(x)$ has not yet been integrated out (and $\theta$ is fixed to be equal to zero, by putting $\mathcal{M} = M$). Clearly, \begin{equation}\label{chi(k)} \chi(k)\equiv-i\int d^{4}x\;e^{ikx}\langle{TQ(x)Q(0)}\rangle= ({\cal K}^{-1}(k))_{Q,Q}, \end{equation} where ${\cal K}^{-1}(k)$ is the inverse of the matrix ${\cal K}(k)$ associated with the quadratic part of the Lagrangian \eqref{Interpolating model Lagrangian with Q}--\eqref{potential of the interpolating model} in momentum space, for the ensemble of pseudoscalar fields $(Q,~S_X,~b_{11},~b_{12},\ldots)$ [see Eqs.
\eqref{linear parametrization} and \eqref{Parametrization of the field X}, with $\alpha \equiv \bar{\alpha} + h_X$ and $\beta \equiv S_X/\bar{\alpha}$; the contribution of the scalar fields $(h_X,~a_{11},~a_{12},\ldots)$ is block diagonal and, therefore, can be trivially factorized out]: \begin{equation}\label{matrix K(k)} \mathcal{K}(k)= \begin{pmatrix} \frac{1}{A} & -\frac{1}{\bar{\alpha}} & 0 & \ldots \\ -\frac{1}{\bar{\alpha}} & {\cal R}(k)_{X,X} & {\cal R}(k)_{X,11} & \ldots \\ 0 & {\cal R}(k)_{11,X} & {\cal R}(k)_{11,11} & \ldots \\ \vdots & \vdots & \vdots & \ddots \\ \end{pmatrix} , \end{equation} where \begin{equation}\label{matrix R(k)} {\cal R}(k)= k^2 {\bf I} - {\cal S}, \quad {\cal S} = \begin{pmatrix} m^2_0 & \mathcal{O}(m^{L-1}) & \ldots \\ \mathcal{O}(m^{L-1}) & \Lambda_{11,11} & \ldots \\ \vdots & \vdots & \ddots \\ \end{pmatrix} , \end{equation} and (assuming that $L \ge 3$) \begin{equation}\label{m_0^2 and Lambda} m_0^2 \equiv \frac{\kappa_1}{\sqrt{2} \bar{\alpha}} \det \Overline[2]{U} ,\quad \Lambda_{ij,lm} = \frac{1}{2} \lambda_\pi^2 B_\pi^2 \delta_{il} \delta_{jm} + \ldots , \end{equation} with \begin{equation}\label{U_bar and alpha_bar} \Overline[2]{U} = \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} M + \ldots ,\quad \bar{\alpha} = \frac{F_X}{\sqrt{2}} + \mathcal{O}(\det M) . \end{equation} Performing the computation explicitly, one finds that \begin{equation} \chi(k) = ({\cal K}^{-1}(k))_{Q,Q} = \frac{\det {\cal R}(k)}{\det \mathcal{K}(k)} = A \frac{\det \mathcal{R}(k)}{\det \tilde{\cal R}(k)} , \end{equation} having defined \begin{equation}\label{matrix Rtilde(k)} \tilde{\cal R}(k)= k^2 {\bf I} - \tilde{\cal S}, \quad \tilde{\cal S} = \begin{pmatrix} m_0^2 + \frac{A}{\bar{\alpha}^2} & \mathcal{O}(m^{L-1}) & \ldots \\ \mathcal{O}(m^{L-1}) & \Lambda_{11,11} & \ldots \\ \vdots & \vdots & \ddots \\ \end{pmatrix} , \end{equation} so that $\det \tilde{\cal R}(k) = \det {\cal R}(k) - \frac{A}{\bar{\alpha}^2} \det (k^2 {\bf I} - \Lambda)$. In particular, putting $k=0$, the following expression for the topological susceptibility is found: \begin{equation}\label{chi_IM} \chi \equiv \chi(k=0) = A \frac{\det {\cal S}}{\det \tilde{\cal S}} = A \frac{\det {\cal S}}{\det {\cal S} + \frac{A}{\bar{\alpha}^2} \det \Lambda} . \end{equation} This expression has been obtained for the interpolating model, but, as explained in the Introduction [see, in particular, Eq.
\eqref{IM to ELsm}], if we take the formal limits $\lambda_X \to \infty$ and $A \to \infty$ (having already fixed $\omega_1=0$, since we are at $T>T_c$), we also obtain the expression for the topological susceptibility in the $EL_\sigma$ model: \begin{equation}\label{chi_ELsm} \chi_{(EL_\sigma)} = \frac{\bar{\alpha}^2 \det {\cal S}}{\det \Lambda} = \frac{\det \Overline[2]{\cal S}}{\det \Lambda} , \end{equation} where now $\bar{\alpha} = \frac{F_X}{\sqrt{2}}$ and \begin{equation} \Overline[2]{\cal S} = \begin{pmatrix} \Overline[2]{m}_0^2 & \bar{\alpha}\mathcal{O}(m^{L-1}) & \ldots \\ \bar{\alpha}\mathcal{O}(m^{L-1}) & \Lambda_{11,11} & \ldots \\ \vdots & \vdots & \ddots \\ \end{pmatrix} , \end{equation} with \begin{equation} \Overline[2]{m}_0^2 \equiv \bar{\alpha}^2 m_0^2 = \frac{\kappa_1 \bar{\alpha}}{\sqrt{2}} \det \Overline[2]{U} = 2 \kappa \det \Overline[2]{U} , \end{equation} having identified, as usual, $\kappa \equiv \frac{\kappa_1 \bar{\alpha}}{2 \sqrt{2}} = \frac{\kappa_1 F_X}{4}$.\\ Even with this identification, the two expressions \eqref{chi_IM} and \eqref{chi_ELsm} are slightly different, but if one expands at the leading order in the quark masses, using the fact that [see Eqs. \eqref{m_0^2 and Lambda} and \eqref{U_bar and alpha_bar}]: \begin{equation}\label{det Lambda and det S} \det \Lambda = \left( \frac{1}{2} \lambda_\pi^2 B_\pi^2 \right)^{L^2} + \ldots, \quad \det {\cal S} = \frac{\kappa_1}{F_X} \left( \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} \right)^L \left( \frac{1}{2} \lambda_\pi^2 B_\pi^2 \right)^{L^2} \det M + \ldots, \end{equation} one easily verifies that they both tend to the same limit, given by Eqs. \eqref{Finite temperature ELsm: chi and c4 L>2} and \eqref{Finite temperature Interpolating: chi and c4 L>2}:\footnote{This approximate expression [but not the ``exact'' expressions \eqref{chi_IM} and \eqref{chi_ELsm}] was derived (for the interpolating model) also in Ref. \cite{EM1994}.} \begin{equation} \chi \simeq \frac{\kappa_1 F_X}{2} \left( \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} \right)^L \det M = 2\kappa \left( \frac{2 B_m}{\sqrt{2} \lambda_\pi^2 B_\pi^2} \right)^L \det M . \end{equation} Similar results also hold in the special case $L=2$. In particular, the expressions \eqref{chi_IM} and \eqref{chi_ELsm} (for the topological susceptibility in the interpolating model and in the $EL_\sigma$ model, respectively) are valid also in this case, provided that one uses for the matrices ${\cal S}$, $\Overline[2]{\cal S}$, and $\Lambda$ the following expressions [referring to the ensemble of pseudoscalar fields $(S_X,~\eta,~\pi_3)$: see Ref.
\cite{MM2013} for further details]: \begin{equation}\label{matrix S L=2} {\cal S}_{(L=2)} = \left( \begin{matrix} m_0^2 & -\frac{\kappa_1}{\sqrt{2}}\bar{\sigma} & \frac{\kappa_1}{\sqrt{2}}\bar{\delta}_3 \\ -\frac{\kappa_1}{\sqrt{2}}\bar{\sigma} & \Lambda_{11} & \Lambda_{12} \\ \frac{\kappa_1}{\sqrt{2}}\bar{\delta}_3 & \Lambda_{21} & \Lambda_{22} \end{matrix} \right) ,\qquad \Overline[2]{\cal S}_{(L=2)} = \left( \begin{matrix} \Overline[2]{m}_0^2 & -2\kappa\bar{\sigma} & 2\kappa\bar{\delta}_3 \\ -2\kappa\bar{\sigma} & \Lambda_{11} & \Lambda_{12} \\ 2\kappa\bar{\delta}_3 & \Lambda_{21} & \Lambda_{22} \end{matrix} \right) , \end{equation} and \begin{equation}\label{matrix Lambda L=2} \Lambda_{(L=2)} = \left( \begin{matrix} \frac{1}{2}(\lambda_{\pi}^{2}B_{\pi}^{2}+\kappa_1\sqrt{2}\bar{\alpha}) + \Delta & \lambda_{\pi}^{2}\bar{\delta}_3\bar{\sigma} \\ \lambda_{\pi}^{2}\bar{\delta}_3\bar{\sigma} & \frac{1}{2}(\lambda_{\pi}^{2}B_{\pi}^{2}-\kappa_1\sqrt{2}\bar{\alpha}) + \Delta \end{matrix} \right) , \end{equation} where $\Delta \equiv \frac{1}{2}\Lambda_{\pi}^{2}(\bar{\sigma}^2+\bar{\delta}_3^2)$ (having defined $\Lambda_{\pi}^{2} \equiv \lambda_{\pi}^{2} + 2\lambda_\pi^{'2}$) and \begin{equation} m_0^2 \equiv \frac{\kappa_1}{2\sqrt{2}\bar{\alpha}}(\bar{\sigma}^2-\bar{\delta}_3^2) ,\qquad \Overline[2]{m}_0^2 \equiv \kappa(\bar{\sigma}^2-\bar{\delta}_3^2) , \end{equation} with \begin{equation} \bar{\sigma} = \frac{B_m (m_u+m_d)}{\lambda_{\pi}^{2}B_{\pi}^{2} - \kappa_1 F_X} + \ldots ,\quad \bar{\delta}_3 = \frac{B_m (m_u-m_d)}{\lambda_{\pi}^{2}B_{\pi}^{2} + \kappa_1 F_X} + \ldots ,\quad \bar{\alpha} = \frac{F_X}{\sqrt{2}} + \mathcal{O}(m^2) . \end{equation} Using these expressions, one finds that, at the leading order in the quark masses, \begin{equation} \det \Lambda = \frac{1}{4} (\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2) + \mathcal{O}(m^2) ,\qquad \det {\cal S} = \frac{\kappa_1 B_m^2}{2 F_X} m_u m_d + \mathcal{O}(m^3) , \end{equation} so that, also in the special case $L=2$, one easily verifies that the two expressions \eqref{chi_IM} and \eqref{chi_ELsm} tend to the same limit, given by Eqs. \eqref{Finite temperature ELsm: chi and c4 L=2} and \eqref{Finite temperature Interpolating: chi and c4 L=2}: \begin{equation} \chi \simeq \frac{\kappa_1 F_X B_m^2}{\lambda_\pi^4 B_\pi^4 - \kappa_1^2 F_X^2} m_u m_d = \frac{4 \kappa B_m^2}{\lambda_\pi^4 B_\pi^4 - 16\kappa^2} m_u m_d . \end{equation}
\section{Introduction} \setcounter{footnote}{0} Within the framework of critical ten-dimensional superstring theories, it has proven very difficult to obtain controlled scenarios leading to a low-energy effective theory with a positive cosmological constant (CC) or dark energy, see \cite{Danielsson:2018ztv,Palti:2019pca} for recent reviews. On rather general grounds, the presence of de Sitter vacua within the limit of classical two-derivative supergravity is excluded \cite{deWit:1986mwo,Maldacena:2000mw}, thus one is led to include quantum corrections in order to evade the no-go theorem.\footnote{The no-go theorem might also be circumvented at the classical level by including orientifold sources (see \cite{Cordova:2018dbb} for a recent proposal), although this scenario is also subject to stringent constraints \cite{Andriot:2018ept,Cribiori:2019clo,Andriot:2019wrs}.} One such quantum effect is fermionic condensation, which is known to occur in supersymmetric Yang-Mills theories \cite{Novikov:1983ee}. The vacuum expectation value (VEV) of all single fermions should vanish in order to preserve the symmetries of a maximally-symmetric vacuum of the theory; however, non-vanishing quadratic or quartic fermion VEV's (condensates) may still be generated by non-perturbative effects such as instantons. Fermionic condensates thus have the potential to generate a positive contribution to the CC. Within the framework of the ten-dimensional superstrings, most studies have focused on gaugino condensation in the heterotic theory \cite{Dine:1985rz, Derendinger:1985kk, LopesCardoso:2003sp, Derendinger:2005ed, Manousselis:2005xa, Chatzistavrakidis:2012qb, Gemmer:2013ica, Minasian:2017eur}, which however does not seem to allow a positive CC \cite{Quigley:2015jia}. Recent results \cite{Soueres:2017lmy,Terrisse:2018qjm} indicate that the situation is more encouraging within the framework of the IIA superstring, which appears to allow the generation of a positive CC by fermionic condensates, at least in principle. In the functional integration over metrics approach to quantum gravity \cite{Gibbons:1978ac}, the gravitino condensates arise from saddle points of the 4d action corresponding to gravitational instantons. These are noncompact asymptotically locally Euclidean (ALE) spaces with self-dual Riemann curvature and thus vanishing Einstein action \cite{Eguchi:1980jx}. Although other saddle points may exist in the presence of matter fields \cite{Witten:1981nf}, they would have positive Euclidean action.\footnote{This follows from the positive action conjecture \cite{Gibbons:1978ac}, which in its turn can be seen to follow from the positive energy theorem \cite{Schon:1979uj,Schon:1979rg,Schon:1981vd,Witten:1981mf}.} Thus ALE spaces are expected to capture the dominant instanton contributions in the path integral approach to quantum gravity. Going beyond the two-derivative approximation of the 4d effective action, the gravitational instantons give a positive contribution to the action at the four-derivative order. Among the ALE spaces, the one with the minimal four-derivative action is the Eguchi-Hanson (EH) gravitational instanton \cite{Eguchi:1978gw}. In the EH background there are two positive-chirality spin-3/2 zero modes of the Dirac operator, and no spin-1/2 zero modes, thus giving rise to a nonvanishing gravitino bilinear condensate in four dimensions at one loop in the 4d gravitational coupling \cite{Hawking:1979zs,Konishi:1988mb}.
The quartic gravitino VEV's receive contributions from ALE instantons with higher Hirzebruch signature. Since ALE spaces do not support spin-1/2 zero modes, no dilatino condensates are generated. The aim of the present paper is to study gravitino condensation in the IIA theory compactified on Calabi-Yau (CY) threefolds. To that end we construct a 4d consistent truncation capturing the bosonic part of the universal sector of CY IIA compactification (i.e.~the gravity multiplet, one vectormultiplet, and one hypermultiplet) in the presence of background flux and gravitino condensates generated by ALE instantons. In the limit of vanishing flux and condensates, our construction reduces to the universal bosonic sector of the effective action of IIA CY compactifications (at the two-derivative order), thus proving that the latter is also a consistent truncation. In the presence of nonvanishing flux and fermion condensates, the result should be thought of as a subsector of the 4d effective action, in the limit where the masses induced by the flux and/or the condensate are sufficiently smaller than the Kaluza-Klein (KK) scale. The condensates are controlled by the ratio of the characteristic length of the CY to the string length, and can be fine-tuned to be dominant in a region of large volume and small string coupling. The consistent truncation admits de Sitter solutions supported by the condensates, subject to certain validity conditions that we discuss. The plan of the remainder of the paper is as follows. Section \ref{sec:review} includes a brief review of the reduction of IIA on CY at the two-derivative level, in the absence of flux and condensates. Higher-order derivative corrections in the 4d action are discussed in section \ref{sec:dercorrections}. The consistent truncation to the universal sector is contained in section \ref{sec:ct}: in section \ref{sec:consistent} we construct the truncation in the presence of background flux. A further extension to include gravitino condensates is constructed in section \ref{sec:fermcond}. Maximally-symmetric vacua thereof are discussed in section \ref{sec:vacua}. Section \ref{sec:discussion} discusses the conditions of validity of our results, and some open directions. Appendix \ref{app:spin} discusses our spinor conventions. A brief review of ALE spaces is given in appendix \ref{app:ale}. The general form of the gravitino condensates, generated by ALE gravitational instantons in the context of 4d $\mathcal{N}=1$ supergravity, is reviewed in appendix \ref{app:grcond}. \section{Review of IIA reduction on CY}\label{sec:review} To establish notation and conventions, let us briefly review the reduction of IIA on CY at the two-derivative level, in the absence of flux and condensates. As is well known, the KK reduction of (massless) IIA supergravity around the fluxless $\mathbb{R}^{1,3}\times Y$ vacuum results in a 4d $\mathcal{N}=2$ supergravity, whose bosonic sector consists of one gravity multiplet (containing the metric and one vector), $h^{1,1}$ vector multiplets (each of which consists of one vector and two real scalars) and $h^{2,1}+1$ hypermultiplets (each of which contains four real scalars), where $h^{p,q}$ are the Hodge numbers of the CY threefold $Y$.
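(To give a standard illustration of this counting, quoted here purely as an example and not as a compactification that we single out in what follows: for the quintic CY threefold one has $h^{1,1}=1$ and $h^{2,1}=101$, so that the massless bosonic spectrum comprises the gravity multiplet, a single vector multiplet, and $102$ hypermultiplets, i.e.~$101$ complex-structure hypermultiplets plus the universal one.)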
The $2h^{1,1}$ real scalars $(v^A,\chi^A)$ in the vector multiplets come from the NS-NS $B$ field and deformations of the metric of the form, \eq{\label{4} B=\beta(x)+\sum_{A=1}^{h^{1,1}}\chi^A(x)e^A(y) ~;~~~ i\delta g_{a\bar{b}}= \sum_{A=1}^{h^{1,1}} v^A(x)e^A_{a\bar{b}}(y) ~,} where $\beta$ is a two-form in $\mathbb{R}^{1,3}$; $\{e^A_{a\bar{b}}(y),~A=1,\dots,h^{1,1}\}$ is a basis of harmonic (1,1)-forms on the CY, and $x$, $y$ are coordinates of $\mathbb{R}^{1,3}$, $Y$ respectively; we have introduced holomorphic, antiholomorphic internal indices from the beginning of the latin alphabet: $a=1,\dots,3$, $\bar{b}=1,\dots,3$, respectively. Since every CY has a K\"{a}hler form (which can be expressed as a linear combination of the basis (1,1)-forms), there is always at least one vector multiplet (which may be called ``universal'', in that it exists for any CY compactification) whose scalars consist of the volume modulus $v$ and one scalar $\chi$. The $2(h^{2,1}+1)$ complex scalars of the hypermultiplets, and the $h^{1,1}+1$ vectors of the gravity and the vectormultiplets arise as follows: from the one- and three-form RR potentials $C_1$, $C_3$ and the complex-structure deformations of the metric,\footnote{The right-hand side of the first equation of \eqref{7} can be seen to be automatically symmetric in its two free indices.} \eq{\spl{\label{7} \delta g_{\bar{a}\bar{b}}&=\sum_{\alpha=1}^{h^{2,1}} \zeta^{\alpha}(x) \Omega^{*cd}{}_{\bar{a}} \Phi^{\alpha}_{cd\bar{b}}(y)~;~~~C_1=\alpha(x)~; \\ C_3&=-\frac12\Big(\xi(x)\text{Im}\Omega+\xi'(x)\text{Re}\Omega\Big) +\sum_{A=1}^{h^{1,1}}\gamma^A(x)\wedge e^A(y) +\Big(\sum_{\alpha=1}^{h^{2,1}} \xi^\alpha(x) \Phi^\alpha(y)+\mathrm{c.c.}\Big) ~,}} where $\Omega(y)$ is the holomorphic threeform of the CY and $\{\Phi^{\alpha}_{ab\bar{c}}(y),~\alpha=1,\dots,h^{2,1}\}$ is a basis of harmonic (2,1)-forms on the CY, we obtain the complex scalars $(\zeta^\alpha, \xi^\alpha)$ and the vectors $(\alpha, \gamma^A)$. Moreover the real scalars $(\xi,\xi')$ together with the dilaton $\phi$ and the axion $b$ combine into one universal hypermultiplet. Recall that if $h$ is the 4d component of the NSNS three-form, \eq{ h=\text{d} \beta~,} the axion $b$ is given schematically by $\text{d} b \sim\star_4 h$ (the precise relation is eq.~\eqref{prhd} below). In summary, the universal bosonic sector of the 4d $\mathcal{N}=2$ supergravity arising from IIA compactification on $Y$ contains the metric and the vector of the gravity multiplet $(g_{\mu\nu},\alpha)$, the vector and the scalars of one vectormultiplet $(\gamma,v,\chi)$, and the scalars of the universal hypermultiplet $(\xi,\xi',\phi,b)$. \subsection{Derivative corrections}\label{sec:dercorrections} Four-derivative corrections to the 4d effective action resulting from compactification of the IIA superstring on CY threefolds have been known since \cite{Antoniadis:1997eg}. More recently they have been computed in \cite{Grimm:2017okk} (see also \cite{Weissenbacher:2019mef}) from compactification of certain known terms of the ten-dimensional IIA tree-level and one-loop superstring effective action at order $\alpha^{\prime3}$. The authors of that reference take into account the graviton and $B$-field eight-derivative terms given in \cite{Gross:1986mw, Liu:2013dna}, but neglect e.g. the dilaton derivative couplings and RR couplings of the form $R^2(\partial F)^2$ and $\partial^4F^4$ calculated in \cite{Policastro:2006vt}.
Furthermore \cite{Grimm:2017okk} neglects loop corrections from massive KK fields.\footnote{Presumably the KK loop corrections are subleading and vanish in the large-volume limit (see however \cite{Haack:2015pbv} for an exception to this statement). At any rate these corrections are dependent on the specific CY and at the moment can only be computed on a case-by-case basis, e.g.~around the orbifold limit where the CY reduces to $T^6/\Gamma$ with $\Gamma$ a discrete group. Winding modes are heavier than KK modes in a regime where \eqref{eqlim} holds.} In a low-energy expansion, the 4d effective action takes the schematic form \cite{Katmadas:2013mma}, \eq{\label{scm} 2\kappa^2S=\int\text{d} x^4\sqrt{g}\left( R +\beta_1\alpha' R^2+\beta_2\alpha^{\prime2} R^3+\beta_3\alpha^{\prime3} R^4 \right) ~,} where $\kappa{}$ is the four-dimensional gravitational constant, and a Weyl transformation must be performed to bring the action to the 4d Einstein frame.\footnote{\label{f4}As emphasized in \cite{Grimm:2017okk}, in computing the 4d effective action the compactification must be performed around the solution to the $\alpha'$-corrected equations of motion. This procedure can thus generate $\alpha'$-corrections also from the compactification of the ten-dimensional Einstein term.} Moreover each coefficient in the series can be further expanded in the string coupling to separate the tree-level from the one-loop contributions. Although all the higher-derivative terms in \eqref{scm} descend from the eight-derivative ten-dimensional $\alpha^{\prime3}$-corrections, they correspond to different orders of the 4d low-energy expansion. Indeed if $l_s=2\pi\sqrt{\alpha'}$, $l_{4d}$ and $l_Y$ are the string length, the four-dimensional low-energy wavelength and the characteristic length of $Y$ respectively, we have, \eq{\label{eqlim} {l_s^2} \ll l^2_{Y} \ll l^2_{4d} ~.} Moreover the term with coefficient $\beta_n$ in \eqref{scm} is of order, \eq{\label{6} \left(\frac{l_s}{l_{4d}} \right)^{2n} \left(\frac{l_s}{l_{Y}}\right)^{6-2n}~;~n=1,2,3 ~,} relative to the Einstein term, so that the $n=1$ term dominates the $n=2,3$ terms in \eqref{scm}. The ten-dimensional IIA supergravity (two-derivative) action admits solutions without flux of the form $\mathbb{R}^{1,3}\times Y$, where $Y$ is of $SU(3)$ holonomy (which for our purposes we take to be a compact CY). A sigma model argument \cite{Nemeschansky:1986yx} shows that this background can be promoted to a solution to all orders in $\alpha'$, provided the metric of $Y$ is appropriately corrected at each order in such a way that it remains K\"{a}hler.\footnote{It should be possible to generalize the sigma-model argument of \cite{Nemeschansky:1986yx} to the case of backgrounds of the form $M_4\times Y$, where $M_4$ is an ALE space, along the lines of \cite{Bianchi:1994gi}.} Indeed \cite{Grimm:2017okk} confirms this to order $\alpha^{\prime3}$ and derives the explicit corrections to the dilaton and the metric, which is deformed away from Ricci-flatness at this order. Their derivation remains valid for backgrounds of the form $M_4\times Y$, where $M_4$ is any Ricci-flat four-dimensional space. Within the framework of the effective 4d theory, nonperturbative gravitational instanton corrections arise from vacua of the form $M_4\times Y$, where $M_4$ is an ALE space. These instanton contributions are weighted by a factor $\exp(- S_0)$, where $S_0$ is the 4d effective action evaluated on the solution $M_4\times Y$. 
Subject to the limitations discussed above, and taking into account the Ricci-flatness of the metric of $M_4$, the IIA 4d effective action of \cite{Grimm:2017okk} reduces to, \eq{\label{15} 2\kappa{}^2S_0=\beta_1\alpha' \int_{M_4}\text{d} x^4\sqrt{g} R_{\kappa\lambda\mu\nu}R^{\kappa\lambda\mu\nu} ~,} where in the conventions of \cite{Grimm:2017okk},\footnote{\label{f1} The ten-dimensional gravitational constant of \cite{Grimm:2017okk} $2\kappa_{10}^2=(2\pi)^7\alpha^{\prime4}$, cf.~(2.4) therein, is related to the four-dimensional one via $\kappa{}^2 ={\kappa_{10}^2}/{l_s^6}$. Note in particular that eqs.~(4.9) and (4.19) of that reference are given in units where $l_s=2\pi\sqrt{\alpha'}=1$: to reinstate engineering dimensions one must multiply with the appropriate powers of $l_s$. The 4d Einstein term in \eqref{scm} has been canonically normalized via a Weyl transformation of the 4d metric. This affects the relative coefficient between two- and four-derivative terms in the action: note in particular that the right-hand side of \eqref{15} is invariant under Weyl transformations. We thank Kilian Mayer for clarifying to us the conventions of \cite{Grimm:2017okk}.} \eq{\label{8bc} \kappa{}^2 =\pi\alpha'~;~~~M_{\text{P}}=2\sqrt{\pi}~\!l_s^{-1} ~,} with $M_{\text{P}}=\kappa^{-1}$ the (reduced) 4d Planck mass and $\beta_1$ given by, \eq{\label{16} l_s^6\beta_1=2^9\pi^4\alpha^{\prime2}\int_{Y}c_2\wedge J ~,} where $c_2$ is the second Chern class of $Y$. For a generic K\"{a}hler manifold we have, \eq{ c_2\wedge J=\frac{1}{32\pi^2}\left( {R}_{mnkl}^2-\mathfrak{R}_{mn}^2+\frac14 \mathfrak{R}^2 \right)\text{vol}_6 ~,} where we have adopted real notation and defined $\mathfrak{R}_{mn}:={R}_{mnkl}J^{kl}$, $\mathfrak{R}:=\mathfrak{R}_{mn}J^{mn}$. The contractions are taken with respect to the metric compatible with the K\"{a}hler form $J$ and the connection of the Riemann tensor. The information about $Y$ enters the 4d effective action through the calculation of $\beta_1$. Since $\beta_1$ multiplies a term which is already a higher-order correction, it suffices to evaluate it in the CY limit (for which $\mathfrak{R}_{mn}$ vanishes). We thus obtain, \eq{\label{b1} \beta_1=\frac{1}{\pi^2l^2_s}\int_{Y}\text{d}^6x \sqrt{g} ~\!{R}_{mnkl}^2>0 ~. } Therefore the leading instanton contribution comes from the ALE space which minimizes the integral in \eqref{15}. This is the EH space \cite{Konishi:1989em}, cf.~\eqref{hirz}, so that, \eq{\label{s0} S_0=\frac{24}{\pi l_s^2}\int_{Y}\text{d}^6x \sqrt{g} ~\!{R}_{mnkl}^2>0 ~.} Note that $S_0$ does not depend on the dilaton: this is related to the fact that, starting from an action of the form $\int\text{d}^4 x\sqrt{g}(e^{-2\phi}R+\beta_1 \alpha' R_{\mu\nu\rho\sigma}^2)$, the dilaton exponential can be absorbed by a Weyl transformation of the form $g_{\mu\nu}\rightarrow e^{2\phi}g_{\mu\nu}$, cf.~footnote \ref{f1}. Therefore we have, \eq{\label{s01} S_0=c\left(\frac{l_Y}{l_{s}} \right)^{2} ~,} with $c$ a positive number of order one. \section{Consistent truncation}\label{sec:ct} In \cite{Terrisse:2018qjm} we presented a universal consistent truncation on Nearly-K\"{a}hler and CY manifolds in the presence of dilatino condensates. As it turns out, this consistent truncation captures only part of the universal scalar sector of the $\mathcal{N}=2$ low-energy effective supergravity obtained from IIA theory compactified on CY threefolds. 
Therefore we must extend the ansatz of \cite{Terrisse:2018qjm} to include the ``missing'' fields and also to take into account the gravitino condensates. \subsection{Action and equations of motion} In \cite{Soueres:2017lmy} the quartic dilatino terms of all (massive) IIA supergravities \cite{Giani:1984wc,Campbell:1984zc,Huq:1983im,Romans:1985tz,Howe:1998qt} were determined in the ten-dimensional superspace formalism of \cite{Tsimpis:2005vu}, and were found to agree with \cite{Giani:1984wc}. As follows from the result of \cite{Tsimpis:2005vu}, the quartic fermion terms are common to all IIA supergravities (massive or otherwise). In the following we will complete Romans supergravity (whose quartic fermion terms were not computed in \cite{Romans:1985tz}) by adding the quartic gravitino terms given in \cite{Giani:1984wc}. Furthermore we will set the dilatino to zero. Of course this would be inconsistent in general, since the dilatino couples linearly to gravitino terms. Here this does not lead to an inconsistency in the equations of motion, since we are ultimately interested in a maximally-symmetric vacuum, in which linear and cubic fermion VEV's vanish. In the conventions of \cite{Soueres:2017lmy,Terrisse:2018qjm}, upon setting the dilatino to zero, the action of Romans supergravity reads, \eq{\spl{\label{action3} S=S_b&+\frac{1}{2\kappa_{10}^2} \int\text{d}^{10}x\sqrt{{g}} \Big\{ 2(\tilde{\Psi}_M\Gamma^{MNP}\nabla_N\Psi_P) +\frac{1}{2}e^{5\phi/4}m(\tilde{\Psi}_M\Gamma^{MN}\Psi_N) \\ &-\frac{1}{2\cdot 2!}e^{3\phi/4} F_{M_1M_2}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1 M_2}\Gamma_{N]}\Gamma_{11}\Psi^N) \\ &-\frac{1}{2\cdot 3!}e^{-\phi/2} H_{M_1\dots M_3}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1\dots M_3}\Gamma_{N]}\Gamma_{11}\Psi^N) \\ &+\frac{1}{2\cdot 4!}e^{\phi/4} G_{M_1\dots M_4}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1\dots M_4}\Gamma_{N]}\Psi^N) +L_{\Psi^4}\Big\} ~,}} where $\Psi_M$ is the gravitino; $S_b$ denotes the bosonic sector, \eq{\spl{\label{ba}S_b= \frac{1}{2\kappa_{10}^2}\int\text{d}^{10}x\sqrt{{g}}\Big( &-{R}+\frac12 (\partial\phi)^2+\frac{1}{2\cdot 2!}e^{3\phi/2}F^2\\ &+\frac{1}{2\cdot 3!}e^{-\phi}H^2+\frac{1}{2\cdot 4!}e^{\phi/2}G^2 +\frac{1}{2}m^2e^{5\phi/2}\Big) +\mathrm{CS} ~, }} and CS is the Chern-Simons term. There are 24 quartic gravitino terms as given in \cite{Giani:1984wc}, denoted $L_{\Psi^4}$ in \eqref{action3}. Of these only four can have a nonvanishing VEV in an ALE space: they are discussed in more detail in section \ref{sec:fermcond}. We emphasize that the action \eqref{action3} should be regarded as a book-keeping device whose variation with respect to the bosonic fields gives the correct bosonic equations of motion in the presence of gravitino condensates. Furthermore, the fermionic equations of motion are trivially satisfied in the maximally-symmetric vacuum. The (bosonic) equations of motion (EOM) following from (\ref{action3}) are as follows: Dilaton EOM, \eq{\spl{\label{beomf1} 0&=-{\nabla}^2\phi+\frac{3}{8}e^{3\phi/2}F^2-\frac{1}{12}e^{-\phi}H^2+\frac{1}{96}e^{\phi/2}G^2 +\frac{5}{4}m^2e^{5\phi/2}\\ &+\frac{5}{8}e^{5\phi/4}m(\tilde{\Psi}_M\Gamma^{MN}\Psi_N) \\ &-\frac{3}{16}e^{3\phi/4} F_{M_1M_2}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1 M_2}\Gamma_{N]}\Gamma_{11}\Psi^N)\\ &+\frac{1}{24}e^{-\phi/2} H_{M_1\dots M_3}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1\dots M_3}\Gamma_{N]}\Gamma_{11}\Psi^N) \\ &+\frac{1}{192}e^{\phi/4} G_{M_1\dots M_4}(\tilde{\Psi}^M\Gamma_{[M}\Gamma^{M_1\dots M_4}\Gamma_{N]}\Psi^N) ~.
}} Einstein EOM, \eq{\spl{\label{beomf2} {R}_{MN}&=\frac{1}{2}\partial_M\phi\partial_N\phi+\frac{1}{16}m^2e^{5\phi/2}{g}_{MN} +\frac{1}{4}e^{3\phi/2}\Big( 2F^2_{MN} -\frac{1}{8} {g}_{MN} F^2 \Big)\\ &+\frac{1}{12}e^{-\phi}\Big( 3H^2_{MN} -\frac{1}{4} {g}_{MN} H^2 \Big) +\frac{1}{48}e^{\phi/2}\Big( 4G^2_{MN} -\frac{3}{8} {g}_{MN} G^2 \Big)\\ &+ \frac{1}{24}e^{\phi/4}G_{(M|}{}^{M_1M_2 M_3} (\tilde{\Psi}_P\Gamma^{[P}\Gamma_{|N)M_1M_2 M_3}\Gamma^{Q]}\Psi_Q) \\ &-\frac{1}{96}e^{\phi/4}G_{M_1\dots M_4}\Big\{ (\tilde{\Psi}_P\Gamma_{(M}\Gamma^{M_1\dots M_4}\Gamma^{P}\Psi_{N)})-(\tilde{\Psi}_P\Gamma^P\Gamma^{M_1\dots M_4}\Gamma_{(M}\Psi_{N)})\\ &+\frac12 g_{MN}(\tilde{\Psi}^P\Gamma_{[P}\Gamma^{M_1\dots M_4}\Gamma_{Q]}\Psi^Q) \Big\} -\frac18 g_{MN}L_{\Psi^4}+\frac{\delta L_{\Psi^4}}{\delta g^{MN}} ~,}} where we have set: $\Phi^2_{MN}:=\Phi_{MM_2\dots M_p}\Phi_N{}^{M_2\dots M_p}$, for any $p$-form $\Phi$. In the Einstein equation above we have not included the gravitino couplings to the two- and three-forms: these vanish in the ALE background, as we will see in the following. Moreover, we have refrained from spelling out explicitly the quartic gravitino terms, as they are numerous and not particularly enlightening. We will calculate them explicitly later on in the case of the ALE space in section \ref{sec:fermcond}. Form EOM's,\footnote{\label{f2}We are using ``superspace conventions'' as in \cite{Lust:2004ig} so that, \eq{ \Phi_{(p)}=\frac{1}{p!}\Phi_{m_1\dots m_p}\text{d} x^{m_p}{\scriptscriptstyle \wedge}\dots{\scriptscriptstyle \wedge}\text{d} x^{m_1}~;~~~ \text{d}\Big( \Phi_{(p)} {\scriptscriptstyle \wedge}\Psi_{(q)}\Big)=\Phi_{(p)} {\scriptscriptstyle \wedge}\text{d}\Psi_{(q)} +(-1)^q\text{d}\Phi_{(p)} {\scriptscriptstyle \wedge}\Psi_{(q)}~.\nonumber } In $D$ dimensions the Hodge star is defined as follows, \eq{ \star (\text{d} x^{a_1}\wedge\dots\wedge \text{d} x^{a_p})=\frac{1}{(D-p)!}\varepsilon^{a_1\dots a_p}{}_{b_1\dots b_{10-p}} \text{d} x^{b_1}\wedge\dots\wedge \text{d} x^{b_{10-p}} ~.\nonumber} } \eq{\spl{\label{beomf3} 0&=\text{d} {\star}\big[ e^{3\phi/2}F -\frac{1}{2}e^{3\phi/4} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(2)}\Gamma_{N]}\Gamma_{11}\Psi^N) \big]+ H{\scriptscriptstyle \wedge} {\star} \big[e^{\phi/2}G {+ \frac{1}{2}e^{\phi/4} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(4)}\Gamma_{N]}\Psi^N)} \big]\\ 0 &= \text{d}{\star} \big[ e^{-\phi}H -\frac{1}{2}e^{-\phi/2} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(3)}\Gamma_{N]}\Gamma_{11}\Psi^N) \big] +e^{\phi/2}F{\scriptscriptstyle \wedge} {\star} \big[e^{\phi/2}G { + \frac{1}{2}e^{\phi/4} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(4)}\Gamma_{N]}\Psi^N)} \big]\\ & -\frac{1}{2}G{\scriptscriptstyle \wedge} G + m {\star}\big[ e^{3\phi/2}F {-\frac{1}{2}e^{3\phi/4} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(2)}\Gamma_{N]}\Gamma_{11}\Psi^N)} \big]\\ 0&=\text{d} {\star} \big[ e^{\phi/2}G +\frac{1}{2}e^{\phi/4} (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(4)}\Gamma_{N]}\Psi^N) \big] -H{\scriptscriptstyle \wedge} G ~, }} where $\Gamma^{(p)}:=\frac{1}{p!}\Gamma_{M_1\dots M_p}\text{d} x^{M_p}\wedge\dots\wedge\text{d} x^{M_1}$. 
In addition the forms obey the Bianchi identities, \eq{\label{bi} \text{d} F= mH~;~~~\text{d} H=0~;~~~\text{d} G=H\wedge F ~.} \subsection{Consistent truncation without condensates}\label{sec:consistent} The truncation of \cite{Terrisse:2018qjm} contains the four real scalars $(A,\chi,\phi,\xi)$, with $A$ related to the volume modulus $v$ of section \ref{sec:review}: it does not capture all the scalars of the universal sector of $\mathcal{N}=2$ supergravity, since it does not include the vectors and it truncates the two scalars $\xi'$, $b$ of section \ref{sec:review}. We must therefore expand the ansatz of \cite{Terrisse:2018qjm} to include the ``missing'' fields, at the same time taking the limit to the massless IIA theory, $m\rightarrow 0$. Explicitly we set, \eq{\label{foranscy} F=\text{d}\alpha~;~~~ H=\text{d}\chi {\scriptscriptstyle \wedge} J+\text{d}\beta~;~~~ G=\varphi\text{vol}_4+\frac12 c_0J{\scriptscriptstyle \wedge} J+ J{\scriptscriptstyle \wedge} (\text{d}\gamma - \alpha\wedge \text{d}\chi)-\frac{1}{2}\text{d}\xi{\scriptscriptstyle \wedge}\text{Im}\Omega -\frac{1}{2}\text{d}\xi'{\scriptscriptstyle \wedge}\text{Re}\Omega ~,} where $c_0$ is a real constant and $\varphi(x)$ is a 4d scalar. We have chosen to express $H$ in terms of the 4d potential $\beta$ instead of the axion. Taking into account that for a CY we have $\text{d} J=\text{d}\Omega=0$, this ansatz can be seen to automatically satisfy the Bianchi identities (\ref{bi}) in the massless limit. Our ansatz for the ten-dimensional metric reads, \eq{\label{tdma}\text{d} s^2_{(10)} =e^{2A(x)}\left(e^{2B(x)} g_{\mu\nu}\text{d} x^{\mu}\text{d} x^{\nu}+g_{mn}\text{d} y^m\text{d} y^n \right)~, } where the scalars $A$, $B$ only depend on the four-dimensional coordinates $x^\mu$. This gives, \eq{\spl{ F^2_{\mu\nu} &= e^{-2A-2B} \text{d}\alpha^2_{\mu\nu} ~;~~~ F^2 = e^{-4A-4B} \text{d}\alpha^2 \\ H^2_{mn}&= 2e^{-4A-2B}(\partial\chi)^2g_{mn} ~;~~~ H^2_{\mu\nu} = 6e^{-4A}\partial_{\mu}\chi\partial_{\nu}\chi+e^{-4A-4B}h^2_{\mu\nu}\\ H^2 &= 18e^{-6A-2B}(\partial\chi)^2+e^{-6A-6B}h^2\\ G^2_{mn} &= 3e^{-6A-2B}\Big[(\partial\xi)^2+(\partial\xi')^2\Big]g_{mn}+12e^{-6A}c_0^2g_{mn}+3e^{-6A-4B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2 g_{mn} \\ G^2_{\mu\nu} &=-6e^{-6A-6B}\varphi^2g_{\mu\nu} + 6e^{-6A}( \partial_{\mu}\xi\partial_{\nu}\xi+\partial_{\mu}\xi'\partial_{\nu}\xi')+18e^{-6A-2B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2_{\mu\nu}\\ G^2 &= -24 e^{-8A-8B}\varphi^2 +24e^{-8A-2B}\Big[(\partial\xi)^2+(\partial\xi')^2\Big]+72c_0^2e^{-8A}+36e^{-8A-4B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2 ~,}} where the contractions on the left-hand sides above are computed with respect to the ten-dimensional metric; the contractions on the right-hand sides are taken with respect to the unwarped metric. 
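As an illustration of how the warp factors in these contractions arise, consider the full contraction of $F$: since $F=\text{d}\alpha$ is a purely external two-form, and the inverse of the warped external metric carries a factor $e^{-2A-2B}$ relative to the unwarped one, each of the two contractions contributes one such factor,
\eq{ F^2 = e^{-4A-4B}\, g^{\mu\mu'} g^{\nu\nu'}\, \text{d}\alpha_{\mu\nu}\, \text{d}\alpha_{\mu'\nu'} = e^{-4A-4B}\, \text{d}\alpha^2 ~,}
in agreement with the expressions above; the remaining contractions follow in the same way, with each contracted internal index contributing a factor $e^{-2A}$ instead.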
It is also useful to note the following expressions, \eq{\spl{\label{hodsr} \star_{10} F &= \frac{1}{6}e^{6A} \star_4\text{d}\alpha{\scriptscriptstyle \wedge} J^3 \\ \star_{10} H &= \tfrac12 e^{4A+2B} \star_{4}\!\text{d}\chi{\scriptscriptstyle \wedge} J^2 + \tfrac16 e^{4A-2B} \star_{4}\!h{\scriptscriptstyle \wedge} J^3 \\ \star_{10} G &= -\tfrac16 \varphi e^{2A-4B} J^3 +c_0 e^{2A+4B} \text{vol}_4{\scriptscriptstyle \wedge} J+\tfrac12 e^{2A}\star_{4}(\text{d}\gamma - \alpha\wedge \text{d}\chi){\scriptscriptstyle \wedge} J^2\\ &~~~~\!+\tfrac{1}{2} e^{2A+2B} \star_{4}\!\text{d}\xi{\scriptscriptstyle \wedge} \text{Re}\Omega -\tfrac{1}{2} e^{2A+2B} \star_{4}\!\text{d}\xi'{\scriptscriptstyle \wedge} \text{Im}\Omega ~,}} where the four-dimensional Hodge-star is taken with respect to the unwarped metric. Plugging the above ansatz into the ten-dimensional EOM \eqref{beomf1}-\eqref{beomf3} we obtain the following: the internal $(m,n)$-components of the Einstein equations read, \eq{\spl{\label{et1} 0&=e^{-8A-2B}\nabla^{\mu}\left( e^{8A+2B}\partial_{\mu}A \right) -\frac{1}{32} e^{3\phi/2-2A-2B} \text{d}\alpha^2 +\frac18e^{-\phi-4A}(\partial\chi)^2 -\frac{1}{48}e^{-\phi-4A-4B}h^2\\ &-\frac{1}{32}e^{\phi/2-6A-2B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2 +\frac{1}{16}e^{\phi/2-6A}\Big[(\partial\xi)^2 +(\partial\xi')^2\Big]\\ &+\frac{3}{16} e^{\phi/2-6A-6B}\varphi^2 +\frac{7}{16}e^{\phi/2-6A+2B}c_0^2 ~.}} The external $(\mu,\nu)$-components read, \eq{\spl{\label{et2} R^{(4)}_{\mu\nu}&= g_{\mu\nu}\left(\nabla^{2}A+\nabla^{2} B+ 8(\partial A)^2+2(\partial B)^2+10\partial A\cdot \partial B\right) \\ &-8\partial_{\mu}A\partial_{\nu}A-2\partial_{\mu}B\partial_{\nu}B -16\partial_{(\mu}A\partial_{\nu)}B+8\nabla_{\mu}\partial_{\nu}A+2\nabla_{\mu}\partial_{\nu}B\\ &+\frac32 e^{-\phi-4A} \partial_{\mu}\chi\partial_{\nu}\chi +\frac12 e^{3\phi/2 -2A-2B} \text{d}\alpha^2_{\mu\nu} +\frac14 e^{\phi-4A-4B} h^2_{\mu\nu} +\frac12 \partial_{\mu}\phi\partial_{\nu}\phi\\ &+\frac{1}{2} e^{\phi/2-6A}(\partial_{\mu}\xi\partial_{\nu}\xi+\partial_{\mu}\xi'\partial_{\nu}\xi') +\frac{3}{2} e^{\phi/2-6A-2B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2_{\mu\nu} \\ &+\frac{1}{16} g_{\mu\nu}\Big( - \frac{1}{2} e^{3\phi/2-2A-2B} \text{d}\alpha^2 -\frac{1}{3}e^{\phi-4A-4B}h^2 -3e^{\phi/2-6A}\Big[(\partial\xi)^2 +(\partial\xi')^2 \Big]\\ &-6e^{-\phi-4A}(\partial\chi)^2 -5e^{\phi/2-6A-6B}\varphi^2 -9c_0^2e^{\phi/2-6A+2B} -\frac{9}{2}e^{\phi/2-6A-2B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2 \Big) ~,}} while the mixed $(\mu,m)$-components are automatically satisfied. 
The dilaton equation reads, \eq{\spl{\label{et3} 0&=e^{-10A-4B}\nabla^{\mu}\left( e^{8A+2B}\partial_{\mu}\phi \right) -\frac{1}{4}e^{\phi/2-8A-2B}\Big[(\partial\xi)^2 +(\partial\xi')^2 \Big] -\frac{3}{8}e^{3\phi/2-4A-4B} \text{d}\alpha^2 \\ &+\frac32e^{-\phi-6A-2B}(\partial\chi)^2 +\frac{1}{12}e^{-\phi-6A-6B}h^2\\ &+\frac{1}{4} e^{\phi/2-8A-8B}\varphi^2-\frac{3}{4}c_0^2 e^{\phi/2-8A}-\frac{3}{8}e^{\phi/2-8A-4B}(\text{d}\gamma - \alpha\wedge \text{d}\chi)^2 ~.}} The $F$-form equation of motion reduces to the condition, \eq{ \text{d}(e^{3\phi/2+6A} \star_4 \text{d}\alpha) = \varphi e^{\phi/2+2A-4B}\text{d}\beta - 3e^{\phi/2+2A}\text{d}\chi\wedge\star_4(\text{d}\gamma - \alpha\wedge \text{d}\chi) ~.} The $H$-form equation reduces to the following two equations, \eq{\spl{\label{hfeom} \text{d}\left( e^{-\phi+4A+2B}\star_4\text{d}\chi \right) &= c_0\varphi \text{vol}_4 + (\text{d}\gamma - \alpha\wedge \text{d}\chi)\wedge(\text{d}\gamma - \alpha\wedge \text{d}\chi)-e^{\phi/2+2A} \text{d}\alpha\wedge\star_4(\text{d}\gamma - \alpha\wedge \text{d}\chi) ~,}} and, \eq{\label{seqh} \text{d}\left( e^{-\phi+4A-2B}\star_4 \text{d}\beta\right) = 3c_0(\text{d}\gamma - \alpha\wedge \text{d}\chi)- \text{d}\xi\wedge\text{d}\xi' + e^{\phi/2+2A-4B} \varphi \text{d}\alpha ~.} The $G$-form equation of motion reduces to, \eq{\spl{ \label{gfeom1} \text{d}\left(e^{\phi/2+2A+2B}\star_4\text{d}\xi\right) &= h{\scriptscriptstyle \wedge}\text{d}\xi'\\ \text{d}\left(e^{\phi/2+2A+2B}\star_4\text{d}\xi'\right) &= -h{\scriptscriptstyle \wedge}\text{d}\xi\\ \text{d}\left(e^{\phi/2+2A} \star_4(\text{d}\gamma - \alpha\wedge \text{d}\chi)\right) &= 2\text{d}\chi{\scriptscriptstyle \wedge}\text{d}\gamma+c_0\text{d}\beta ~,}} together with the constraint, \eq{\label{gfeom2} 0=\text{d}\left( \varphi e^{\phi/2+2A-4B}+3c_0\chi \right) ~.} This can be readily integrated to give, \eq{ \varphi= e^{-\phi/2-2A+4B} (c_1-3c_0 \chi)~.} Since $\chi$ only appears in the equations of motion through its derivatives or through $\varphi$, we may absorb $c_1$ by redefining $\chi$. This corresponds to a gauge transformation of the ten-dimensional $B$-field. We will thus set $c_1$ to zero in the following. {\it The Lagrangian} As we can see from (\ref{tdma}), the scalar $B(x)$ can be redefined away by absorbing it in the 4d metric. This freedom can be exploited in order to obtain a 4d consistent truncation directly in the Einstein frame. The appropriate choice is, \eq{\label{baeq} B=-4A~.} With this choice one can check that the ten-dimensional equations given in (\ref{et1})-(\ref{gfeom2}) all follow from the 4d action, \eq{\spl{ S_4=&\int\text{d}^4 x\sqrt{g} \Big( R - 24 (\partial A)^2 -\tfrac{1}{2} (\partial \phi)^2 -\tfrac{3}{2} e^{-4A - \phi}(\partial \chi)^2 - \tfrac{1}{2} e^{-6A + \phi/2} \left[(\partial \xi)^2+(\partial \xi')^2\right]\\ &-\tfrac{1}{4} e^{3\phi/2 + 6A} \text{d}\alpha^2 -\tfrac{3}{4} e^{\phi/2 + 2A} (\text{d}\gamma - \alpha{\scriptscriptstyle \wedge}\text{d}\chi)^2 -\tfrac{1}{12} e^{-\phi + 12A} \text{d}\beta^2 -\tfrac{9}{2} e^{-\phi/2 - 18A} c_0^2 \chi^2 -\tfrac{3}{2} e^{\phi/2 - 14A} c_0^2 \Big)\\ &+\int 3c_0 \text{d}\gamma{\scriptscriptstyle \wedge}\beta +3c_0 \chi \ \alpha{\scriptscriptstyle \wedge}\text{d}\beta +3 \chi\ \text{d}\gamma{\scriptscriptstyle \wedge}\text{d}\gamma -\beta{\scriptscriptstyle \wedge}\text{d}\xi{\scriptscriptstyle \wedge}\text{d}\xi' ~.
}} Furthermore equation \eqref{seqh} can be solved in order to express $\text{d}\beta$ in terms of a scalar $b$ (the ``axion''), \eq{\label{prhd} \text{d} \beta =e^{\phi-12A}\star_4\left[\text{d} b+\tfrac12(\xi\text{d}\xi'-\xi'\text{d}\xi) + 3c_0(\gamma - \chi \alpha) \right] ~,} where we chose the gauge most symmetric in $\xi$, $\xi'$. The Lagrangian becomes, in terms of the axion, \eq{\spl{\label{36} S_4=&\int\text{d}^4 x\sqrt{g} \Big( R - 24 (\partial A)^2 -\tfrac{1}{2} (\partial \phi)^2 -\tfrac{3}{2} e^{-4A - \phi}(\partial \chi)^2 - \tfrac{1}{2} e^{-6A + \phi/2} \left[(\partial \xi)^2+(\partial \xi')^2\right]\\ &-\tfrac{1}{4} e^{3\phi/2 + 6A} \text{d}\alpha^2 -\tfrac{3}{4} e^{\phi/2 + 2A} (\text{d}\gamma - \alpha{\scriptscriptstyle \wedge}\text{d}\chi)^2 -\tfrac{1}{2} e^{\phi-12A} \left(\text{d} b + \omega \right)^2\\ &-\tfrac{9}{2} e^{-\phi/2 - 18A} c_0^2 \chi^2 -\tfrac{3}{2} e^{\phi/2 - 14A} c_0^2 \Big) +\int 3\chi\ \text{d}\gamma{\scriptscriptstyle \wedge}\text{d}\gamma ~, }} where we have set, \eq{ \omega:=\tfrac12(\xi\text{d}\xi'-\xi'\text{d}\xi) + 3c_0(\gamma - \chi \alpha) ~.} {\it Including background three-form flux} We can include background three-form flux by modifying the form ansatz \eqref{foranscy} as follows, \eq{\spl{\label{foranscyb} F&=\text{d}\alpha~;~~~ H=\text{d}\chi {\scriptscriptstyle \wedge} J+\text{d}\beta+\frac12\text{Re}\big(b_0\Omega^*\big)\\ G&=\varphi\text{vol}_4+\frac12 c_0J{\scriptscriptstyle \wedge} J+ J{\scriptscriptstyle \wedge} (\text{d}\gamma - \alpha\wedge \text{d}\chi)-\frac{1}{2}D\xi{\scriptscriptstyle \wedge}\text{Im}\Omega -\frac{1}{2}D\xi'{\scriptscriptstyle \wedge}\text{Re}\Omega ~,}} where we have introduced a background charge $b_0\in\mathbb{C}$. The covariant derivatives are given by, \eq{ D\xi:=\text{d}\xi+b_1\alpha~;~~~D\xi':=\text{d}\xi'+b_2\alpha ~,} where we set $b_0=ib_1+b_2$. We see that the inclusion of a background charge for the three-form has the effect of gauging the isometries of the RR axions. The modified form ansatz \eqref{foranscyb} is such that it automatically satisfies the Bianchi identities. Moreover the constraint \eqref{gfeom2} becomes, \eq{\label{gfeom2mod} 0=\text{d}\left( \varphi e^{\phi/2+18A}+3c_0\chi-\Xi \right) ~,} where we have set $\Xi:=b_2\xi-b_1\xi'$. As a consequence \eqref{prhd} gets modified, \eq{\label{prhd2} \text{d} \beta =e^{\phi-12A}\star_4\left[\text{d} b+\tfrac12(\xi\text{d}\xi'-\xi'\text{d}\xi) + 3c_0(\gamma - \chi \alpha)+\Xi\alpha \right] ~.} The action reads, \eq{\spl{\label{42} S_4=&\int\text{d}^4 x\sqrt{g} \Big( R - 24 (\partial A)^2 -\tfrac{1}{2} (\partial \phi)^2 -\tfrac{3}{2} e^{-4A - \phi}(\partial \chi)^2 - \tfrac{1}{2} e^{-6A + \phi/2} \left[(D \xi)^2+(D \xi')^2\right]\\ &-\tfrac{1}{4} e^{3\phi/2 + 6A} \text{d}\alpha^2 -\tfrac{3}{4} e^{\phi/2 + 2A} (\text{d}\gamma - \alpha{\scriptscriptstyle \wedge}\text{d}\chi)^2 -\tfrac{1}{2} e^{\phi-12A} \left(\text{d} b + \tilde{\omega} \right)^2\\ &-\tfrac{1}{2} e^{-\phi/2 - 18A} \big(3c_0 \chi -\Xi\big)^2 -\tfrac{1}{2} e^{-\phi - 12A} |b_0|^2 -\tfrac{3}{2} e^{\phi/2 - 14A} c_0^2 \Big) +\int 3\chi\ \text{d}\gamma{\scriptscriptstyle \wedge}\text{d}\gamma ~, }} where we have set, \eq{\label{43} \tilde{\omega}:=\tfrac12(\xi\text{d}\xi'-\xi'\text{d}\xi) + 3c_0(\gamma - \chi \alpha)+\Xi\alpha ~.} \subsection{Consistent truncation with condensates}\label{sec:fermcond} In Euclidean signature the supersymmetric IIA action is constructed via the procedure of holomorphic complexification, see e.g.~\cite{Bergshoeff:2007cg}. 
This amounts to first expressing the Lorentzian action in terms of $\tilde{\Psi}_M$ instead of $\bar{\Psi}_M$ (which makes no difference in Lorentzian signature) and then Wick-rotating, see appendix \ref{app:spin} for our spinor and gamma-matrix conventions. In this way one obtains a (complexified) Euclidean action which is formally identical to the Lorentzian one, with the difference that now the two chiralities ${\Psi}_M^{\pm}$ should be thought of as independent complex spinors (there are no Majorana-Weyl spinors in ten Euclidean dimensions). Although the gravitino ${\Psi}_M$ is complex in Euclidean signature, its complex conjugate does not appear in the action, hence the term ``holomorphic complexification''. Since we are interested in the case where only the 4d gravitino condenses, we expand the 10d gravitino as follows, \eq{\label{grdc1} \Psi_m=0~;~~~\Psi_{\mu+}=\psi_{\mu+}\otimes\eta-\psi_{\mu-}\otimes\eta^c~; ~~~\Psi_{\mu-}=\psi_{\mu+}'\otimes\eta^c-\psi_{\mu-}'\otimes\eta ~,} so that, \eq{ \tilde{\Psi}_{\mu+}=\tilde{\psi}_{\mu+}\otimes\tilde{\eta}+\tilde{\psi}_{\mu-}\otimes\tilde{\eta^c}~; ~~~ \tilde{\Psi}_{\mu-}=\tilde{\psi}_{\mu+}'\otimes\tilde{\eta^c}+\tilde{\psi}_{\mu-}'\otimes\tilde{\eta} ~.} In Lorentzian signature the positive- and negative-chirality 4d vector-spinors above are related through complex conjugation: $\bar{\theta}_+^\mu=\tilde{\theta}_-^\mu$, $\bar{\theta}_-^\mu=-\tilde{\theta}_+^\mu$, so that $\Psi_M$ is Majorana in 10d: $\bar{\Psi}_M=\tilde{\Psi}_M$. Upon Wick-rotating to Euclidean signature this is no longer true, and the two chiralities transform in independent representations. As already mentioned, in the present paper we focus on the contribution of ALE gravitational instantons to the fermion condensate. In this case there are no negative-chirality zeromodes and we can set, \eq{\label{posz} \psi^{\mu}_-=\psi^{\prime\mu}_-=0~. } For any two 4d positive-chirality vector-spinors, $\theta_+^\mu$, $\chi_+^\mu$, the only nonvanishing bilinears read, \eq{ \big(\theta^{[\mu_1}_+\gamma^{\mu_2\mu_3}\chi^{\mu_4]}_+\big)=\frac{i^s}{12} \varepsilon^{ \mu_1\mu_2\mu_3\mu_4 } \big(\theta^{\lambda}_+\gamma_{\lambda\rho}\chi^{\rho}_+\big)~;~~~ \big(\theta^{\lambda}_+\chi_{\lambda+}\big) ~, } where we used the Fierz identity \eqref{a4} and the Hodge duality relations \eqref{hm}; $s=1,2$ for Lorentzian, Euclidean signature respectively. Ultimately we will be interested in gamma-traceless vector-spinors, \eq{\label{zmgt} \gamma_\mu\theta_+^\mu=\gamma_\mu\chi_+^\mu=0~, } since all ALE zeromodes can be put in this gauge \cite{Hawking:1979zs}. In this case we obtain the additional relation, \eq{ \big(\theta^{\lambda}_+\gamma_{\lambda\rho}\chi^{\rho}_+\big)= -\big(\theta^{\lambda}_+\chi_{\lambda+}\big) ~,} which follows upon writing $\gamma_{\lambda\rho}=\gamma_{\lambda}\gamma_{\rho}-g_{\lambda\rho}$ and using \eqref{zmgt}. Assuming, as is the case for ALE spaces, that only positive-chirality zeromodes exist in four dimensions, cf.~\eqref{posz}, the only nonvanishing bilinear condensates that appear in the equations of motion are proportional to, \eq{\label{spbln} \mathcal{A} :=\left(\tilde{\psi}_{\mu+}\gamma^{\mu\nu}\psi'_{\nu+}\right) =-\left(\tilde{\psi}^{\mu}_+\psi'_{\mu+}\right) ~,} where in the second equality we have assumed that $\psi^\mu_{+}$, $\psi^{\prime \mu}_{+}$ are gamma-traceless, cf.~\eqref{zmgt}.
Furthermore we note the following useful results, \eq{\spl{\label{grbilfg} \left(\tilde{\Psi}_\rho\Gamma_{(\mu}\Gamma^{M_1\dots M_4}\Gamma^{\rho}\Psi_{\nu)}\right)G_{M_1\dots M_4}&=24(3c_0e^{-4A}+\varphi e^{-4A-4B})\mathcal{A}g_{\mu\nu}\\ \left(\tilde{\Psi}_\rho\Gamma_{\sigma}\Gamma_{(\mu}{}^{M_2M_3 M_4}\Gamma^{\rho}\Psi^{\sigma}\right)G_{\nu)M_2M_3 M_4}&=24 \varphi e^{-4A-4B}\mathcal{A}g_{\mu\nu}\\ \left(\tilde{\Psi}_\rho\Gamma_{\sigma}\Gamma_{(m}{}^{M_2M_3 M_4}\Gamma^{\rho}\Psi^{\sigma}\right)G_{n)M_2M_3 M_4}&=48 c_0\mathcal{A}e^{-4A-2B}g_{mn} ~,}} where on the left-hand sides above we used the warped metric for the contractions, while on the right-hand sides we used the unwarped metric. In the 4d theory, these bilinears receive contributions from the EH instanton at one loop in the gravitational coupling. In the presence of gravitino condensates the equations of motion \eqref{et1}-\eqref{gfeom2} are modified as follows: the internal $(m,n)$-components of the Einstein equations read, \eq{\spl{\label{et12} 0&=e^{-8A-2B}\nabla^{\mu}\left( e^{8A+2B}\partial_{\mu}A \right) +\dots+\frac{1}{4}\left( \varphi e^{\phi/4-4A-4B}-c_0e^{\phi/4-4A} \right)\mathcal{A}-\frac18 e^{2A+2B}L_{\Psi^4} ~,}} where the ellipses stand for terms that are identical to the case without fermion condensates. The external $(\mu,\nu)$-components read, \eq{\spl{\label{et22} R^{(4)}_{\mu\nu}&= \dots -\frac{1}{2} g_{\mu\nu} e^{\phi/4-4A-4B} \varphi \mathcal{A} ~,}} while the mixed $(\mu,m)$-components are automatically satisfied. The dilaton equation reads, \eq{\spl{\label{et32} 0&=e^{-10A-4B}\nabla^{\mu}\left( e^{8A+2B}\partial_{\mu}\phi \right)+\dots +\tfrac{1}{4} (3c_0e^{\phi/4+2A}+\varphi e^{\phi/4+2A-4B})\mathcal{A} ~.}} The $F$-form and $H$-form equations are modified as follows, \eq{\label{ffeom2} \text{d}(e^{3\phi/2+6A} \star_4 \text{d}\alpha) = \dots {+ e^{\phi/4+4A-2B} \mathcal{A}\ \text{d}\beta} ~,} and, \eq{\label{seqh2} \text{d}\left( e^{-\phi+4A-2B}\star_4 \text{d}\beta\right) = \dots { + e^{\phi/4+4A-2B} \mathcal{A}\ \text{d}\alpha} ~,} respectively. The $G$-form equation of motion remains unchanged except for the constraint, \eq{\label{gfeom22} 0=\text{d}\left( \varphi e^{\phi/2+2A-4B} + 3c_0 \chi-\Xi + e^{\phi/4+4A-2B} \mathcal{A} \right) ~.} In deriving the above we have taken into account that, \eq{ (\tilde{\Psi}^M\Gamma_{[M}\Gamma^{(4)}\Gamma_{N]}\Psi^N) = 2\mathcal{A} e^{2A+2B}\left( \text{vol}_4-\frac12 e^{-4B}J{\scriptscriptstyle \wedge} J \right) ~.} At this stage it is important to notice that the new $\mathcal{A}$ terms in the flux equations (\ref{ffeom2}) and (\ref{seqh2}) exactly compensate the modification of $\varphi$ in (\ref{gfeom22}), so that the form equations are ultimately unchanged in the presence of fermion condensates.
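To make this compensation explicit, note that \eqref{gfeom22} can be integrated (absorbing the integration constant into $\chi$, as before) to give, \eq{ \varphi\, e^{\phi/2+2A-4B} + e^{\phi/4+4A-2B}\mathcal{A} = \Xi - 3c_0\chi ~,} so that the combination multiplying $\text{d}\beta$ in \eqref{ffeom2}, and likewise the combination multiplying $\text{d}\alpha$ in \eqref{seqh2}, is the same function of $\chi$, $\xi$, $\xi'$ as in the absence of condensates.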
Of the 24 quartic gravitino terms that appear in the action of \cite{Giani:1984wc} only the following are nonvanishing, \eq{\spl{\label{grpernici} \left(\tilde{\Psi}_\mu\Gamma_{11}\Psi_{\nu}\right) \left(\tilde{\Psi}^\mu\Gamma_{11}\Psi^{\nu}\right)&= 4\left(\tilde{\psi}^{[\mu}_{+} \psi^{\prime \nu]}_{+}\right)^2e^{-4A-4B}\\ \left(\tilde{\Psi}^{\mu_1}\Gamma_{11}\Gamma_{\mu_1\dots\mu_4}\Psi^{\mu_2}\right) \left(\tilde{\Psi}^{\mu_3}\Gamma_{11}\Psi^{\mu_4}\right)&= -\frac16\left(\tilde{\Psi}^{\mu_1}\Gamma_{\mu_1\dots\mu_4mn}\Psi^{\mu_2}\right) \left(\tilde{\Psi}^{\mu_3}\Gamma^{mn}\Psi^{\mu_4}\right)\\ &= -\left(8\tilde{\psi}_{[\nu+} \psi'_{\rho]+}+4\tilde{\psi}^{\mu}_+ \gamma_{\rho\nu}\psi'_{\mu+}\right) \left(\tilde{\psi}^{\rho}_+ \psi_+^{\prime \nu}\right)e^{-4A-4B}\\ \left(\tilde{\Psi}^{[M_1}\Gamma^{M_2M_3}\Psi^{M_4]}\right)^2 &= 4\left(\tilde{\psi}_+^{[\mu_1}\gamma^{\mu_2\mu_3}\psi_+^{\prime \mu_4]}\right)^2e^{-4A-4B} -\frac23 \left(\tilde{\psi}^{[\mu}_{+} \psi^{\prime \nu]}_{+}\right)^2e^{-4A-4B}~,}} where for the contractions on the left-, right-hand sides above we have used the warped, unwarped metric respectively. We thus obtain, cf. \eqref{action3}, \eq{\spl{\label{qred1} L_{\Psi^4} &= \frac{1}{4} (\tilde{\Psi}_M \Gamma_{11} \Psi_N)^2 +\frac{1}{8} \tilde{\Psi}^{M_1}\Gamma_{11} \Gamma_{M_1 \cdots M_4} \Psi^{M_2} \ \tilde{\Psi}^{M_3}\Gamma_{11} \Psi^{M_4}\\ &+\frac{1}{16} \tilde{\Psi}^{M_1} \Gamma_{M_1\cdots M_6} \Psi^{M_2} \ \tilde{\Psi}^{M_3} \Gamma^{M_4M_5} \Psi^{M_6} +\frac{3}{4} (\tilde{\Psi}_{[M_1} \Gamma_{M_2M_3} \Psi_{M_4]})^2\\ &= e^{-4A-4B} \mathcal{B} ~,}} where we have defined, \eq{\label{calbdef} \mathcal{B}:= -\frac{3}{2} (\tilde{\psi}_{[\mu}\psi'_{\nu]})^2 + (\tilde{\psi}^\mu \gamma_{\rho\nu} \psi'_\mu) (\tilde{\psi}^\rho \psi'^\nu) +3 (\tilde{\psi}_{[\mu_1} \gamma_{\mu_2\mu_3}\psi'_{\mu_4]})^2 ~,} which does not depend on the warp factor. In the 4d theory, at one-loop order in the gravitational coupling, the quartic gravitino term receives contributions from the ALE instanton with $\tau=2$ (four spin-3/2 zeromodes). {\it The Lagrangian} Imposing \eqref{baeq} as before, and solving once again for $\varphi$, \eq{\label{wnth}\varphi = e^{-\phi/2-18A} \left(\Xi -3c_0\chi -e^{\phi/4+12A}\mathcal{A} \right) ~,} where $\Xi$ was defined below \eqref{gfeom2mod}, it can now be seen that the ten-dimensional equations in the presence of gravitino condensates all follow from the 4d action, \eq{\spl{ S_4=\int\text{d}^4 x\sqrt{g}& \Big( R - 24 (\partial A)^2 -\tfrac{1}{2} (\partial \phi)^2 -\tfrac{3}{2} e^{-4A - \phi}(\partial \chi)^2 - \tfrac{1}{2} e^{-6A + \phi/2} \left[(D \xi)^2+(D \xi')^2\right]\\ &-\tfrac{1}{4} e^{3\phi/2 + 6A} \text{d}\alpha^2 -\tfrac{3}{4} e^{\phi/2 + 2A} (\text{d}\gamma - \alpha{\scriptscriptstyle \wedge}\text{d}\chi)^2 -\tfrac{1}{12} e^{-\phi + 12A} \text{d}\beta^2-V \Big)\\ +\int &3c_0 \text{d}(\gamma-\alpha\chi){\scriptscriptstyle \wedge}\beta +3 \chi\ \text{d}\gamma{\scriptscriptstyle \wedge}\text{d}\gamma +\Xi\beta{\scriptscriptstyle \wedge}\text{d}\alpha-\beta{\scriptscriptstyle \wedge} D\xi{\scriptscriptstyle \wedge} D\xi' ~, }} where the potential of the theory is given by, \boxedeq{\spl{\label{58} V(\chi,\xi,\xi',\phi,A) = &\tfrac{3}{2} c_0^2e^{\phi/2 - 14A} + \tfrac{1}{2}|b_0|^2e^{-\phi-12A} -3c_0 {\mathcal{A}}e^{\phi/4 - 4A} +e^{6A}\mathcal{B} \\ +&\tfrac12\Big( {\mathcal{A}}e^{3A}+(3c_0\chi -\Xi)e^{-\phi/4 - 9A} \Big)^2 ~. 
}} Note that in integrating the 4d Einstein equation \eqref{et22}, care must be taken to first substitute in the right-hand side the value of $\varphi$ from \eqref{wnth}, and take into account the variation of the condensates $\mathcal{A}$, $\mathcal{B}$ with respect to the metric. The modifications due to the condensate in \eqref{seqh2} and \eqref{wnth} are such that the relation (\ref{prhd2}) between $\beta$ and the axion is unchanged. In terms of the axion, the action reads, \boxedeq{\spl{\label{ctr2} S_4=\int&\text{d}^4 x\sqrt{g} \Big( R - 24 (\partial A)^2 -\tfrac{1}{2} (\partial \phi)^2 -\tfrac{3}{2} e^{-4A - \phi}(\partial \chi)^2 - \tfrac{1}{2} e^{-6A + \phi/2} \left[(D \xi)^2+(D \xi')^2\right]\\ &-\tfrac{1}{4} e^{3\phi/2 + 6A} \text{d}\alpha^2 -\tfrac{3}{4} e^{\phi/2 + 2A} (\text{d}\gamma - \alpha{\scriptscriptstyle \wedge}\text{d}\chi)^2 -\tfrac{1}{2} e^{\phi - 12A} (\text{d} b + \tilde{\omega})^2 -V \Big) +\int 3 \chi\ \text{d}\gamma{\scriptscriptstyle \wedge}\text{d}\gamma ~, }} where $\tilde{\omega}$ was defined in \eqref{43}. Note that $\chi$, $\xi$, $\xi'$ enter the potential only through the linear combination $3c_0\chi -\Xi$, so two of these scalars remain flat directions even in the presence of the flux and the condensate, just as the axion $b$. \subsection{Vacua}\label{sec:vacua} Maximally-symmetric solutions of the effective 4d theory \eqref{ctr2} can be obtained by setting the vectors to zero, \eq{\alpha=\gamma=0~,} and minimizing the potential of the theory, \eq{\label{mincond} \overrightarrow{\nabla} V(\chi_0,\xi_0,\xi'_0,\phi_0,A_0)=0~, } where $(\chi,\xi,\xi',\phi,A)=(\chi_0,\xi_0,\xi'_0,\phi_0,A_0)$ is the location of the minimum in field space. Then the Einstein equations determine the scalar curvature of the 4d spacetime to be,\footnote{Note that \eqref{mcD} is different from the standard relation $R=2V_0$. This is because the condensates $\mathcal{A}$, $\mathcal{B}$ have non-trivial variations with respect to the metric.} \eq{\spl{\label{mcD} R&= {3} c_0^2e^{\phi_0/2 - 14A_0} + |b_0|^2e^{-\phi_0-12A_0} -3c_0 {\mathcal{A}}e^{\phi_0/4 - 4A_0}\\ &+( 3c_0\chi_0-\Xi_0)^2e^{-\phi_0/2 - 18A_0}+(3c_0\chi_0-\Xi_0) {\mathcal{A}}e^{-\phi_0/4 - 6A_0} ~,}} where $\Xi_0=b_2\xi_0-b_1\xi'_0$, and we assume that a Wick rotation has been performed back to Minkowski signature. Condition \eqref{mincond} admits two classes of solutions. {\it Case 1}: $c_0=0$ In this case imposing \eqref{mincond} sets $b_0=0$, and the potential only depends on the warp factor $A$. A minimum is obtained at a finite value of $A$ provided, \eq{ {\mathcal{B}}= -\tfrac12 {\mathcal{A}}^2 ~,} which requires the quartic condensate to be negative. From \eqref{mcD} it then follows that $R=0$, and we obtain a Minkowski 4d vacuum. In fact the potential vanishes identically. {\it Case 2}: $c_0\neq0$ In this case \eqref{mincond} can be solved for finite values of $\phi$ and $A$. The value of $\chi$ at the minimum is given by, \eq{\chi_0=- \frac{1}{3c_0} ~\!\left( \mathcal{A}g_s^{1/4}e^{ 12A_0} -\Xi_0 \right) ~,} where we have set $g_s:=e^{\phi_0}$. Minimization of $V$ with respect to $\xi$, $\xi'$ does not give additional constraints, so that $\xi_0$, $\xi'_0$ remain undetermined. The values of $\phi_0$ and $A_0$ at the minimum can also be adjusted arbitrarily, and determine $|b_0|$ and $c_0$ in terms of the condensates, \eq{\spl{\label{63a} |b_0|^2 &= \frac{3}{400}~\!
g_se^{18A_0}\left( 40 \mathcal{B} - 21 \mathcal{A}^2 \mp 3\mathcal{A} \sqrt{ 49\mathcal{A}^2+80\mathcal{B} } \right)\\ c_0 &= \frac{1}{20}~\!g_s^{-1/4}e^{10A_0}\left( 7\mathcal{A}\pm\sqrt{ 49\mathcal{A}^2+80\mathcal{B} } \right) ~,}} where the signs in $b_0$ and $c_0$ are correlated. Henceforth we will set $e^{A_0}=1$, since the warp factor at the minimum can be absorbed in $l_Y$. Consistency of \eqref{63a} requires the quartic condensate to obey the constraint, \eq{\label{65r} \mathcal{B}> 0 ~,} and correlates the sign of $\mathcal{A}$ with the two branches of the solution: the upper/lower sign in \eqref{63a} corresponds to $\mathcal{A}$ negative/positive, respectively.\footnote{If $\mathcal{B}> 3\mathcal{A}^2/2$, we may also take the upper/lower sign in \eqref{63a} for $\mathcal{A}$ positive/negative, respectively. Equation \eqref{65r} is the weakest condition on the quartic condensate that is sufficient for consistency of the solution.} From \eqref{mcD} it then follows that, \eq{\label{70ad} R_{\text{dS}}= 3 g_s^{-1} |b_0|^2 \propto l_s^{-2}e^{-2c~\!(l_Y/l_s)^2} ~, } up to a proportionality constant of order one. We thus obtain a de Sitter 4d vacuum, provided \eqref{65r} holds. In the equation above we have taken into account that the quadratic and quartic condensates are expected to be of the general form, cf.~the discussion around \eqref{68}, \eq{\label{abv} {\mathcal{A}}\propto l_s^{-1}e^{-c~\!(l_Y/l_s)^2}~;~~~ {\mathcal{B}}\propto l_s^{-2}e^{-2c~\!(l_Y/l_s)^2} ~, } up to proportionality constants of order one. We have verified numerically, as a function of $\mathcal{A}^2/\mathcal{B}$, that all three eigenvalues of the Hessian of the potential are positive at the solution; that is, the solution is a local minimum of the potential \eqref{58} (see the numerical sketch at the end of this subsection). {\it Flux quantization} The four-form flux is constrained to obey,\footnote{The Page form corresponding to $G$ is given by $\hat{G}:=G-H\wedge \alpha$, which is closed. The difference between $G$ and $\hat{G}$ vanishes when integrated over four-cycles of $Y$.} \eq{ \frac{1}{l_s^3}\int_{\mathcal{C}_A}G\in\mathbb{Z} ~,} where $\{ \mathcal{C}_A~;~A=1,\dots,h^{2,2}\}$ is a basis of integral four-cycles of the CY, $\mathcal{C}_A\in H_4(Y,\mathbb{Z})$. From \eqref{foranscyb}, \eqref{63a}, \eqref{abv} we then obtain, \eq{\label{67b} n_A \propto g_s^{-1/4}\Big(\frac{l_Y}{l_s}\Big)^4 e^{-c~\!(l_Y/l_s)^2}\text{vol}(\mathcal{C}_A) ~,} up to a proportionality constant of order one; $\text{vol}(\mathcal{C}_A)$ is the volume of the four-cycle $\mathcal{C}_A$ in units of $l_Y$, and $n_A\in\mathbb{Z}$. Since the string coupling can be tuned to obey $g_s\ll 1$ independently of the $l_Y/l_s$ ratio, \eqref{67b} can be solved for $\text{vol}(\mathcal{C}_A)$ of order one, provided we take $n_A$ sufficiently close to each other. Given a set of flux quanta $n_A$, this equation fixes the K\"{a}hler moduli in units of $l_Y$; the overall CY volume is set by $l_Y$, which remains unconstrained. Note that even if we allow for large flux quanta in order to solve the flux quantization constraint, it can be seen that higher-order flux corrections are subdominant in the $g_s\ll1$ limit. Indeed the parameter that controls the size of these corrections is $|g_sG|$, which scales as $g_s^{3/4}$. Similarly, the three-form flux is constrained to obey, \eq{ \frac{1}{l_s^2}\int_{\mathcal{C}_\alpha}H\in\mathbb{Z} ~,} where $\{ \mathcal{C}_\alpha~;~\alpha=1,\dots,h^{2,1}\}$ is a basis of integral three-cycles of the CY, $\mathcal{C}_\alpha\in H_3(Y,\mathbb{Z})$. From \eqref{foranscyb} we can see that this equation constrains the periods of $\Omega$, and hence the complex-structure moduli of $Y$.
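As an illustration of the minimization described above, the following sketch evaluates the potential \eqref{58} at the candidate de Sitter point of {\it Case 2} and checks numerically that the gradient vanishes and that the Hessian is positive definite. It is a toy check only: we set $g_s=e^{A_0}=1$, pick arbitrary illustrative values of the condensates with $\mathcal{A}<0$ and $\mathcal{B}>0$ (the upper branch of \eqref{63a}), and use the combination $u:=3c_0\chi-\Xi$ as a single coordinate, since $\chi$, $\xi$, $\xi'$ enter $V$ only through it; the code and all numerical choices are ours and are not part of the paper's derivation.
\begin{verbatim}
import numpy as np

Ac, Bc = -1.0, 1.0                   # illustrative condensates: A < 0, B > 0
S = np.sqrt(49*Ac**2 + 80*Bc)
c0   = (7*Ac + S) / 20.0             # upper branch of (63a), with g_s = e^{A_0} = 1
b0sq = (3.0/400) * (40*Bc - 21*Ac**2 - 3*Ac*S)

def V(x):
    # the potential (58) in the coordinates (u, phi, A), with u := 3 c_0 chi - Xi
    u, phi, A = x
    W = Ac*np.exp(3*A) + u*np.exp(-phi/4 - 9*A)
    return (1.5*c0**2*np.exp(phi/2 - 14*A) + 0.5*b0sq*np.exp(-phi - 12*A)
            - 3*c0*Ac*np.exp(phi/4 - 4*A) + Bc*np.exp(6*A) + 0.5*W**2)

x0 = np.array([-Ac, 0.0, 0.0])       # candidate minimum: u0 = -Ac, phi0 = A0 = 0
h, I = 1e-4, np.eye(3)
grad = np.array([(V(x0 + h*e) - V(x0 - h*e))/(2*h) for e in I])
hess = np.array([[(V(x0 + h*ei + h*ej) - V(x0 + h*ei - h*ej)
                   - V(x0 - h*ei + h*ej) + V(x0 - h*ei - h*ej))/(4*h*h)
                  for ej in I] for ei in I])
print(grad)                          # ~ (0, 0, 0): a critical point
print(np.linalg.eigvalsh(hess))      # all positive: a local minimum
print(3*b0sq)                        # R_dS of (70ad) at g_s = 1
\end{verbatim}
For the sample values above one finds a vanishing gradient and three positive Hessian eigenvalues, consistent with the statement that the solution is a local minimum; other values of $\mathcal{A}^2/\mathcal{B}$ can be explored by changing the first line.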
\section{Discussion}\label{sec:discussion} We considered the effect of gravitino condensates from ALE instantons, in the context of a 4d consistent truncation of IIA on CY in the presence of background flux. The 4d theory admits de Sitter solutions, which are local minima of the potential \eqref{58}, provided the quartic condensate has a positive sign, cf.~\eqref{65r}. We do not know whether or not this is the case, as this would require knowledge of the explicit form of the zero modes of the Dirac operator in the $\tau=2$ ALE background. Clearly it would be crucial to construct these zero modes (which, to our knowledge, have never been explicitly computed), generalizing the calculations of \cite{Hawking:1979zs,Konishi:1988mb,Bianchi:1994gi} to the second gravitational instanton in the ALE series. The validity of the de Sitter solutions presented here requires the higher-order string-loop corrections in the 4d action to be subdominant with respect to the ALE instanton contributions to the gravitino condensates. Since the latter do not depend on the string coupling, cf.~\eqref{abv}, there is no obstruction to tuning $g_s$ to be sufficiently small, $g_s\ll 1$, in order for the string-loop corrections to be negligible with respect to the instanton contributions. The $l_Y/l_s$ ratio can be tuned so that the condensates are of the order of the Einstein term in the 4d action, thus dominating 4d higher-order derivative corrections. This requires, \eq{\label{75tyu} l_{4d}^{-2}\sim R_{\text{dS}} \propto l_s^{-2}e^{-2c~\!(l_Y/l_s)^2} ~,} where we have taken \eqref{70ad} into account. Current cosmological data give, \eq{ \frac{ R_{\text{dS}} }{M_{\text{P}}^2 } \sim\Big( \frac{l_s}{l_{4d} } \Big)^2\sim10^{-122} ~. } From \eqref{75tyu} we then obtain $l_Y/l_s\sim 10$ for $c$ of order one, cf.~\eqref{s01}. In addition to the higher-order derivative corrections, the 4d effective action receives corrections at the two-derivative level, of the form $(l_s/l_Y)^{2n}$ with $n\geq1$. These come from a certain subset of the 10d tree-level $\alpha'$ corrections (string loops are subleading), which include the $R^2(\partial F)^2$ corrections of \cite{Policastro:2006vt}. Given the $l_Y/l_s$ ratio derived above, these corrections will be of the order of one percent or less. As is well known, the vacua computed within the framework of consistent truncations, such as the one constructed in the present paper, are susceptible to destabilization by modes that are truncated out of the spectrum. This is an issue that needs to be addressed before one can be confident of the validity of the vacua presented here. The stability issue is particularly important given the fact that, in the presence of a non-vanishing gravitino condensate, supersymmetry will generally be broken. Ultimately, the scope of the path integral over metrics approach to quantum gravity is limited, since the 4d gravity theory is non-renormalizable. Rather it should be thought of as an effective low-energy limit of string theory. A natural approach to gravitino condensation from the string/M-theory standpoint would be to try to construct brane-instanton analogues of the four-dimensional gravitational instantons. The fermion condensates might then be computed along the lines of \cite{Becker:1995kb,Harvey:1999as,Tsimpis:2007sx}.
Another interesting direction would be to try to embed the consistent truncation of the present paper within the framework of $\mathcal{N}=2$ 4d (gauged) supergravity. On general grounds \cite{Cvetic:2000dm}, we expect the existence of a consistent truncation of a higher-dimensional supersymmetric theory to the bosonic sector of a supersymmetric lower-dimensional theory to guarantee the existence of a consistent truncation to the full lower-dimensional theory. The condensate would then presumably be associated with certain gaugings of the 4d theory. \section*{Acknowledgment} We would like to thank Thomas Grimm and Kilian Mayer for useful correspondence.
\section{Construction of $\mathbf{M}^R$} \label{App:rank} We give a brief outline of the construction of $\mathbf{M}^R$ and demonstrate that its maximal eigenvalue coincides with $R_0^r$ given in \cite{Ball_etal}, Section 3.1.3. We begin by computing the transition probabilities for a household epidemic in state $(a,b)$, that is, with $a$ susceptibles and $b$ infectives. By considering the total amount of infection, $I_b$, generated by the $b$ infectives and the fact that the infectious periods, $T$, are independent and identically distributed, it was shown in \cite{Pellis_etal}, Appendix A, that $X_{(a,b)} | I_b \sim {\rm Bin} (a, 1- \exp(-\lambda_L I_b))$ with \begin{eqnarray} \label{eq:rank:3} \pz (X_{(a,b)} = c ) &=& \ez[ \pz (X_{(a,b)} = c | I_b)] \nonumber \\ &=& \binom{a}{c} \ez \left[ \{1 -\exp(-\lambda_L I_b) \}^c \exp(-\lambda_L I_b)^{a-c} \right] \nonumber \\ &=& \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \ez [ \exp(- \lambda_L (a+j-c) I_b)] \nonumber \\ &=& \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \phi_T (\lambda_L (a+j-c))^b, \end{eqnarray} where $\phi_T (\theta) = \ez [ \exp(-\theta T)]$ is the Laplace transform of the infectious period distribution. If a household epidemic transitions from state $(a,b)$ to state $(a-c,c)$, then $c$ infections have taken place and the mean number of these attributable to any given infective is simply $c/b$. For the rank generational representation of the epidemic, we can again subsume all states $(0,b)$ into $(0,1)^\ast$. We note that, in contrast to Section \ref{S:example:house}, epidemic states $(a,1)$ $(1 \leq a \leq h-2)$ can arise from the epidemic process whilst states $(h-b,b)$ $(b>1)$ will not occur. For all $\{(a,b); b >0, a+b \leq h\}$, we have that $M_{(a,b),(h-1,1)}^R = \mu_G= \lambda_G \ez[T]$ (the mean number of global infectious contacts made by an individual) and for $(d,c) \neq (h-1,1)$, \begin{eqnarray} \label{eq:rank:4} M_{(a,b),(d,c)}^R = \left\{ \begin{array}{ll} \frac{c}{b} \binom{a}{c} \sum_{j=0}^c \binom{c}{j} (-1)^j \phi_T (\lambda_L (a+j-c))^b & \mbox{if } d = a-c >0 \\ \frac{a}{b} \sum_{j=0}^a \binom{a}{j} (-1)^j \phi_T (\lambda_L j)^b & \mbox{if } (d,c)=(0,1)^\ast \\ 0 & \mbox{otherwise}. \end{array} \right. \end{eqnarray} We can proceed along identical lines to \eqref{eq:house:E:6} in decomposing $\mathbf{M}^R$ into \begin{eqnarray} \label{eq:rank:4a} \mathbf{M}^R = \mathbf{G} + \mathbf{U}^R, \end{eqnarray} where $\mathbf{G}$ is the $K \times K$ matrix ($K$ denotes the total number of infectious states) with $G_{k1} = \mu_G$ $(1 \leq k \leq K)$ and $G_{kj}=0$ otherwise. For $i=0,1, \ldots, h-1$, let $\mu_i^R$ denote the mean number of individuals in the $i^{th}$ generation of the rank construction of the epidemic; then we have that $\mu_i^R = \sum_{j=1}^K \big[ (\mathbf{U}^R)^i \big]_{1j}$, the sum of the first row of $(\mathbf{U}^R)^i$. We can follow essentially identical arguments to those given in Section \ref{S:example:house} to show that $R_0^r$ is the maximal eigenvalue of $\mathbf{M}^R$. To illustrate this approach, we consider households of size $h=3$. The possible infectious units are $(2,1), (1,1)$ and $(0,1)^\ast$ with mean reproductive matrix \begin{eqnarray} \label{eq:rank:5} \mathbf{M}^R = \begin{pmatrix} \mu_G & 2 \{ \phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} & 2 \{ 1- 2 \phi_T (\lambda_L) + \phi_T (2 \lambda_L) \} \\ \mu_G & 0& 1 - \phi_T (\lambda_L) \\ \mu_G & 0 & 0 \end{pmatrix}.
\end{eqnarray} The eigenvalues of $\mathbf{M}^R$ are solutions of the cubic equation \begin{eqnarray} \label{eq:rank:6} s^3 - \mu_G s^2 - 2 \left\{1 - \phi_T ( \lambda_L) \right\} \mu_G s - 2\mu_G \{1 - \phi_T (\lambda_L)\}\{ \phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} &=&0 \nonumber \\ s^3 - \mu_G \mu_0^R s^2 - \mu_G \mu_1^R s - \mu_G \mu_2^R &=&0, \end{eqnarray} where $\mu_0^R =1$, $\mu_1^R= 2 \{ 1 - \phi_T (\lambda_L) \}$ and $\mu_2^R = 2 \{\phi_T (\lambda_L) - \phi_T (2 \lambda_L) \} \{1 - \phi_T (\lambda_L) \}$ are the mean numbers of infectives in rank generations 0, 1 and 2 of the household epidemic model, respectively. Given that \eqref{eq:rank:6} is equivalent to \cite{Pellis_etal}, (3.3), it follows that the maximal eigenvalue of $\mathbf{M}^R$ is $R_0^r$. \subsection{Comments on $\mathbf{M}$} We make a couple of observations concerning the construction of $\mathbf{M}$. In the $SIR$ household epidemic model, we can reduce the number of states to $h(h+1)/2 - (h-1)$ by noting that in households with 0 susceptibles, no local infections can occur and thus infectives can only make global infections acting independently. Therefore we can subsume the states $(0,1), (0,2), \ldots, (0,h)$ into a single state, $(0,1)^\ast$ say, with a local infection in households with 1 susceptible resulting in the household moving to state $(0,1)^\ast$; see, for example, \cite{Neal16}. For $SIR$ epidemics, there will typically be a natural ordering of infectious unit types such that only transitions of an infectious unit from type $i$ to type $j$ ($i < j$) are possible. For example, with household epidemics we can order the types such that type $(a,b)$ is said to be less than type $(c,d)$ if $a >c$, or if $a=c$ and $b>d$. In such cases $\mathbf{P}$ is an upper triangular matrix and if the main diagonal of $\mathbf{P}$ is $\mathbf{0}$ then there exists $n_0 \leq K$ such that for all $n > n_0$, $\mathbf{P}^n = \mathbf{0}$. Then \begin{eqnarray} \label{eq:M:3} \mathbf{M} = (\mathbf{I} - \mathbf{P})^{-1} \Phi =\left(\sum_{n=0}^{n_0} \mathbf{P}^n \right) \Phi, \end{eqnarray} and we can compute $\mathbf{M}$ without requiring matrix inversion. \section{Conclusions} \label{S:conc} In this paper we have given a simple definition and derivation of $R_0$ for structured populations by considering the population to consist of different types of infectious units. This multi-type approach to constructing $R_0$, via the mean offspring matrix of the different types of infectious units, follows a widely established mechanism introduced in \cite{Diekmann_etal}. Moreover, we have demonstrated that for $SIR$ household epidemic models $R_0$ coincides with $R_0^g$, the generational basic reproduction number defined in \cite{Ball_etal}. In \cite{Ball_etal}, the rank generational basic reproduction number, $R_0^r$, is also considered and is taken to be the default choice of $R_0$ in that paper. For the household $SIR$ epidemic model it is straightforward to define and construct a rank generational mean reproduction matrix $\mathbf{M}^R$ for a general infectious period distribution, $T$. The approach is to represent the evolution of the epidemic as a discrete time process generation-by-generation. This approach ignores the time dynamics of the epidemic but leaves the final size unaffected and dates back to \cite{Ludwig75}. The initial infective in the household (infected by a global infection) forms generation 0.
Then for $i=1,2,\ldots,h-1$, the infectious contacts by the infectives in generation $i-1$ are considered and any susceptible individual contacted by an infective in generation $i-1$ will become an infective in generation $i$. We define an infective to be a type $(a,b)$ individual if the generation of the household epidemic in which they are an infective has $b$ infectives and $a$ susceptibles. In the construction of $\mathbf{M}^R$, we again look at the mean number of infections attributable to a given infective, with details provided in Appendix \ref{App:rank}. Let $\mu_i^R$ denote the mean number of infectives in generation $i$ of the household epidemic; then it is shown in \cite{Ball_etal}, Section 3.1.3, that for all $k \geq 1$, $\sum_{i=0}^k \mu_i^R \geq \sum_{i=0}^k \mu_i$, which in turn implies $R_0^r \geq R_0^g$. The construction of $\mathbf{M}^R$ is straightforward using \cite{Pellis_etal}, Appendix A, and we provide a brief outline in Appendix \ref{App:rank} of how similar arguments to those used in Section \ref{S:example:house} can be used to show that $R_0^r$ is the maximal eigenvalue of $\mathbf{M}^R$. The rank generational construction is natural for the $SIR$ epidemic model and allows us to move beyond $T \sim {\rm Exp} (\gamma)$, but does not readily apply to $SIS$ epidemic models. Extensions of the $SIS$ epidemic model are possible by using the method of stages, see \cite{Barbour76}, where $T$ can be expressed as a sum or mixture of exponential distributions, and by extending the number of infectious units to allow for individuals in different stages of the infectious period. In principle $\mathbf{P}$ and $\Phi$ can be constructed as above, but the number of possible infectious units grows rapidly, making the calculations more cumbersome. \subsection{Construction of $\mathbf{M}$} Consider an infective belonging to an infectious unit of state $i$ ($i=1,2,\ldots,K$). Suppose that there are $n_i$ events which can occur to an infectious unit in state $i$. Let $q_{il}$ $(i=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the probability that a type $l$ event occurs in the infectious unit. Let $a_{il}$ $(i=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the state of the infectious unit following the type $l$ event, with $a_{il}= 0$ if the infective recovers and so is no longer infectious. Let $Y_{ilj}$ $(i,j=1,2,\ldots,K;l=1,2,\ldots,n_i)$ denote the total number of type $j$ infectious units generated by an infective who undergoes a type $l$ event, with $\mu_{ilj} = E[Y_{ilj}]$ and $\mathbf{Y}_{il} = (Y_{il1}, Y_{il2}, \ldots, Y_{ilK})$. For $i,j=1,2,\ldots,K$, let \begin{eqnarray} \label{eq:M:prep:1} p_{ij} = \sum_{l=1}^{n_i} 1_{\{a_{il} =j \}} q_{il}, \end{eqnarray} the probability that an infective belonging to a state $i$ infectious unit moves to a state $j$ infectious unit. Note that typically $\sum_{j=1}^K p_{ij} <1$, as there is the possibility of the infective recovering from the disease; we let $p_{i0} = 1- \sum_{j=1}^K p_{ij}$, the probability that a type $i$ infective recovers from the disease. For $i,j=1,2,\ldots,K$, let \begin{eqnarray} \label{eq:M:prep:2} \phi_{ij} = \sum_{l=1}^{n_i} q_{il} \mu_{ilj}, \end{eqnarray} the mean number of state $j$ infectious units generated by an event directly involving an infective in a state $i$ infectious unit.
It follows using the theorem of total probability and the linearity of expectation that \begin{eqnarray} \label{eq:M:prep:3} m_{ij} &=& \sum_{l=1}^{n_i} q_{il} \ez[\mbox{State $j$ infectious units generated} | \mbox{Type $l$ event}] \nonumber \\ &=& \sum_{l=1}^{n_i} q_{il} \left\{ \mu_{ilj} + m_{a_{il} j} \right\} \nonumber \\ &=& \sum_{l=1}^{n_i} q_{il} \mu_{ilj} + \sum_{l=1}^{n_i} q_{il} m_{a_{il} j} \nonumber \\ &=& \phi_{ij} + \sum_{l=1}^{n_i} q_{il} \left\{ \sum_{k=1}^{K} 1_{\{a_{il} = k \}} m_{kj} \right\} \nonumber \\ &=& \phi_{ij} + \sum_{k=1}^{K} \left\{ \sum_{l=1}^{n_i} q_{il} 1_{\{a_{il} = k \}} \right\} m_{kj} \nonumber \\ &=& \phi_{ij} + \sum_{k=1}^{K} p_{ik} m_{kj}, \end{eqnarray} where, for $j=1,2,\ldots,K$, we set $m_{0j} =0$; that is, a recovered individual makes no infections. Letting $\Phi = (\phi_{ij})$ and $\mathbf{P} = (p_{ij})$ be $K \times K$ matrices, we can express \eqref{eq:M:prep:3} in matrix notation as \begin{eqnarray} \label{eq:M:1} \mathbf{M} = \Phi + \mathbf{P} \mathbf{M}. \end{eqnarray} Rearranging \eqref{eq:M:1}, with $\mathbf{I}$ denoting the $K \times K$ identity matrix, we have that \begin{eqnarray} \label{eq:M:2} \mathbf{M} = \left(\sum_{n=0}^\infty \mathbf{P}^n \right) \Phi = (\mathbf{I} - \mathbf{P})^{-1} \Phi. \end{eqnarray} Since individuals recover from the disease, $\mathbf{P}$ is a substochastic matrix with at least some rows summing to less than 1. Thus the Perron-Frobenius theorem gives that $\mathbf{P}^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$. The definition of an event, and hence of $\mathbf{Y}_{il}$, can be made in a variety of ways. In this paper, we typically take an event to coincide with a change in the type of infectious unit to which an infective belongs, and we take account of the (mean) number of global infections an infective makes in a type $i$ infectious unit before transitioning to a new type of infectious unit. In this way $p_{ii} =0$ $(i=1,2,\ldots,K)$. An alternative approach is to define an event to be any global infection, local infection or recovery within their infectious unit. In this case nothing occurs between events and $\mathbf{Y}_{il}$ is the number of infectious units generated by the type $l$ event. In Section \ref{S:example:sex}, we present both constructions for an $SIS$ sexually transmitted disease model. \subsection{Definition of $R_0$} We define $R_0$ as the maximal eigenvalue of the mean reproduction matrix $\mathbf{M}$, where $\mathbf{M}$ is a $K \times K$ matrix with $m_{ij}$ denoting the mean number of state $j$ infectious units generated by an infective who enters the infectious state as a member of a state $i$ infectious unit. This definition of $R_0$ is consistent with earlier work on computing the basic reproduction number in heterogeneous populations with multiple types of infectives, see, for example, \cite{Diekmann_etal}. A key point to note is that $\mathbf{M}$ will capture only the infections made by a specific infective rather than all the infections made by the infectious unit to which they belong. We now link the mean reproduction matrix $\mathbf{M}$ back to the $SIR$ household example. An individual will be classed as a state $(a,b)$ individual if the event which leads to the individual becoming infected results in the infectious unit (household) entering state $(a,b)$. An infective will generate a state $(c,d)$ individual if they are infectious in an infectious unit in state $(c+1,d-1)$ and the infective is responsible for infecting one of the susceptibles in the household.
Note that if $d \geq 3$, the infectious unit can transit from state $(c+1,d-1)$ to state $(c,d)$ without the infective in question having made the infection. \section{Examples} \label{S:Example} In this section we show how $\mathbf{M}$ is constructed for three different models: the household SIR epidemic model (Section \ref{S:example:house}), an SIS sexually transmitted disease (Section \ref{S:example:sex}) and the great circle SIR epidemic model (Section \ref{S:example:gcm}). \subsection{Great Circle Epidemic model} \label{S:example:gcm} The final example we consider in this paper is the great circle $SIR$ epidemic model, see \cite{Ball_Neal03} and references therein. The model assumes that the population consists of $N$ individuals who are equally spaced on the circumference of a circle, with the individuals labeled sequentially from 1 to $N$, so that individuals 1 and $N$ are neighbours. Thus individuals $i \pm 1 \pmod N$ are the neighbours of individual $i$ $(i=1,2,\ldots,N)$. Individuals, whilst infectious, make both local and global infectious contacts as in the household model. An individual makes global infectious contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$ with the individual contacted chosen uniformly at random from the entire population. An individual makes local infectious contacts with a given neighbour at the points of a homogeneous Poisson point process with rate $\lambda_L$. Finally, the infectious periods are independently and exponentially distributed with mean $1/\gamma$. An infectious individual in the great circle model can be characterised by the total number of susceptible neighbours that it has, which can be $2, 1$ or 0. In the initial stages of the epidemic with $N$ large, with high probability, an individual infected globally will initially have 2 susceptible neighbours, whereas an individual infected locally will, with high probability, have 1 susceptible neighbour when they are infected. An infective with $k$ $(k=0, 1,2)$ susceptible neighbours makes a mean $\lambda_G/(k \lambda_L + \gamma)$ global infectious contacts before the next local infection or recovery event, and the probability that this event is the infection of a neighbour is $k \lambda_L /(k \lambda_L + \gamma)$. Moreover, since a locally infected individual initially has 1 susceptible neighbour, every local infection creates an infectious unit of the second type. Therefore if we construct $\Phi$ and $\mathbf{P}$ in terms of descending number of susceptible neighbours we have that \begin{eqnarray} \label{eq:gcm:1} \Phi = \begin{pmatrix} \frac{\lambda_G}{2 \lambda_L + \gamma} & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 \\ \frac{\lambda_G}{ \lambda_L + \gamma} & \frac{ \lambda_L}{ \lambda_L + \gamma} & 0 \\ \frac{\lambda_G}{\gamma} & 0& 0 \end{pmatrix}, \end{eqnarray} and \begin{eqnarray} \label{eq:gcm:2} \mathbf{P} = \begin{pmatrix} 0 & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 \\ 0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\ 0 & 0 & 0 \end{pmatrix}. \end{eqnarray} It is then straightforward to show that \begin{eqnarray} \label{eq:gcm:3} \mathbf{M} = \begin{pmatrix} \frac{\lambda_G}{\gamma} & \frac{2 \lambda_L}{\lambda_L +\gamma} & 0 \\ \frac{\lambda_G}{\gamma} & \frac{\lambda_L}{\lambda_L +\gamma} & 0 \\ \frac{\lambda_G}{\gamma} & 0 & 0 \end{pmatrix}. \end{eqnarray} We observe that no individuals are created with 0 susceptible neighbours and we only need to consider the mean offspring distributions for type 1 and type 2 infectives.
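The construction of $\mathbf{M} = (\mathbf{I} - \mathbf{P})^{-1} \Phi$ and the extraction of its maximal eigenvalue are straightforward to carry out numerically. The following minimal sketch (with arbitrary illustrative rates; the code is ours and not part of the paper) reproduces \eqref{eq:gcm:3} and agrees with the closed-form expression \eqref{eq:gcm:5} derived below:
\begin{verbatim}
import numpy as np

lamG, lamL, gamma = 1.2, 0.8, 1.0          # illustrative rates

# States ordered by descending number of susceptible neighbours: 2, 1, 0.
Phi = np.array([[lamG/(2*lamL+gamma), 2*lamL/(2*lamL+gamma), 0.0],
                [lamG/(lamL+gamma),   lamL/(lamL+gamma),     0.0],
                [lamG/gamma,          0.0,                   0.0]])
P   = np.array([[0.0, 2*lamL/(2*lamL+gamma), 0.0],
                [0.0, 0.0,                   lamL/(lamL+gamma)],
                [0.0, 0.0,                   0.0]])

M  = np.linalg.solve(np.eye(3) - P, Phi)   # M = (I - P)^{-1} Phi
R0 = max(np.linalg.eigvals(M).real)        # maximal eigenvalue of M

muG, pL = lamG/gamma, lamL/(lamL+gamma)
R0_closed = (pL + muG + np.sqrt((pL + muG)**2 + 4*pL*muG))/2
print(R0, R0_closed)                       # the two values agree
\end{verbatim}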
Restricting $\mathbf{M}$ to type 1 and type 2 infectives gives $R_0$ as the solution of the quadratic equation, \begin{eqnarray} \label{eq:gcm:4} \left( \frac{\lambda_G}{\gamma} - s \right) \left( \frac{\lambda_L}{\lambda_L +\gamma} - s \right) - \frac{\lambda_G}{\gamma} \times \frac{2 \lambda_L}{\lambda_L +\gamma} &=&0 \nonumber \\ s^2 - (\mu_G + p_L) s - \mu_G p_L &=& 0, \end{eqnarray} where $\mu_G = \lambda_G/\gamma$ denotes the mean number of global infectious contacts made by an infective and $p_L = \lambda_L /(\lambda_L + \gamma)$ denotes the probability that an infective infects a given susceptible neighbour. This yields \begin{eqnarray} \label{eq:gcm:5} R_0 = \frac{p_L + \mu_G + \sqrt{(p_L + \mu_G)^2 + 4 p_L \mu_G}}{2}. \end{eqnarray} In \cite{BMST}, \cite{Ball_Neal02} and \cite{Ball_Neal03}, the threshold parameter $R_\ast$ is defined for the great circle model as the mean number of global infectious contacts emanating from a local infectious clump, where a local infectious clump is defined to be the epidemic generated by a single infective by only considering local (neighbour) infectious contacts. From \cite{Ball_Neal02}, (3.12), \begin{eqnarray} \label{eq:gcm:6} R_\ast = \mu_G \frac{1 + p_L}{1- p_L}. \end{eqnarray} It is trivial to show that $R_0 =1$ $(R_0 <1; R_0 >1)$ if and only if $R_\ast =1$ $(R_\ast <1; R_\ast >1)$, confirming $R_0$ and $R_\ast$ as equivalent threshold parameters for the epidemic model. In contrast to the household $SIR$ epidemic model (Section \ref{S:example:house}) and the $SIS$ sexually transmitted disease (Section \ref{S:example:sex}), for the great circle model it is trivial to extend the above definition of $R_0$ to a general infectious period distribution $T$. Let $\mu_T = \ez [T]$, the mean of the infectious period, and $\phi_T (\theta) = \ez [ \exp(- \theta T)]$ $(\theta \in \mathbb{R}^+)$, the Laplace transform of the infectious period. Thus $\mu_G$ and $p_L$ become $\lambda_G \mu_T$ and $1 - \phi_T (\lambda_L)$, respectively. Then the probability that a globally infected individual infects 0, 1 or 2 of its initially susceptible neighbours is $\phi_T (2 \lambda_L)$, $2 \{ \phi_T ( \lambda_L) - \phi_T (2 \lambda_L) \}$ and $1 - 2 \phi_T ( \lambda_L) + \phi_T (2 \lambda_L) $, respectively. Similarly the probability that a locally infected individual infects its initially susceptible neighbour is $p_L =1- \phi_T (\lambda_L)$. Since the mean number of global infectious contacts made by an infective is $\mu_G (= \lambda_G \mu_T)$ regardless of whether the individual is infected globally or locally, we can derive directly the mean offspring matrix $\mathbf{M}$ in terms of those infected globally (initially 2 susceptible neighbours) and those infected locally (initially 1 susceptible neighbour) with \begin{eqnarray} \label{eq:gcm:7} \mathbf{M} = \begin{pmatrix} \mu_G & 2 p_L \\ \mu_G & p_L \end{pmatrix}. \end{eqnarray} Therefore, after omitting the final row and column of $\mathbf{M}$ in \eqref{eq:gcm:3}, the expressions \eqref{eq:gcm:3} and \eqref{eq:gcm:7} for $\mathbf{M}$ agree, and hence \eqref{eq:gcm:5} holds for $R_0$ for a general infectious period distribution $T$. \subsection{$SIR$ Household example} An example of an epidemic model which satisfies the above setup is the $SIR$ household epidemic model with exponential infectious periods. We illustrate this assuming that all households are of size $h >1$; the extension to allow for households of different sizes is trivial.
An individual, whilst infectious, makes global contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$ with the individual contacted chosen uniformly at random from the entire population and local contacts at the points of a homogeneous Poisson point process with rate $(h-1) \lambda_L$ with the individual contacted chosen uniformly at random from the remaining $h-1$ individuals in the infective's household. It is assumed that the local and global contacts are independent. Note that an infective makes contact with a given individual in their household at rate $\lambda_L$. Infectives have independent and identically distributed exponential infectious periods with mean $1/\gamma$, corresponding to infectives recovering at rate $\gamma$. In this case there is a single type of individual, although we could extend to a multitype household model, see \cite{Ball_Lyne}. Infectious units correspond to households containing at least one infective, and we classify households by the number of susceptibles and infectives they contain. Therefore the possible infectious states of a household are $\{(a,b); b=1,2,\ldots,h; a=0,1,\ldots, h-b \}$, where $a$ and $b$ denote the number of susceptibles and the number of infectives in the household, respectively. Thus there are $K = h (h+1)/2$ states. A global infection with a previously uninfected household results in the creation of a new infectious unit in state $(h-1,1)$. A local infection in a household in state $(a,b)$ results in the household moving to state $(a-1,b+1)$, whilst a recovery in a household in state $(a,b)$ results in the household moving to state $(a,b-1)$, and no longer being an infectious unit if $b=1$. \subsection{$SIR$ Household epidemic model} \label{S:example:house} We illustrate the computation of $R_0$ in a population with households of size $h$. As noted in Section \ref{S:setup}, we can summarise the epidemic process using $K = h (h+1)/2 - (h-1)$ states by amalgamating states $(0,1), (0,2), \ldots, (0,h)$ into the state $(0,1)^\ast$. We use the labellings $\{(0,1)^\ast, (a,b); a,b=1,2,\ldots,h, (a+b) \leq h\}$ rather than $1,2, \ldots, K$ to define the mean reproduction matrix. We construct $\mathbf{M}$ by first considering the local transitions (infections and recoveries) which occur within a household. Therefore for an individual in state $(a,b)$, the non-zero transitions are \begin{eqnarray} p_{(a,b),(a-1,b+1)} &=& \frac{a b \lambda_L }{b (a \lambda_L+ \gamma)} \hspace{0.5cm} \mbox{if $a>1$} \nonumber \\ p_{(a,b),(0,1)^\ast} &=& \frac{a b\lambda_L}{b (a \lambda_L + \gamma)} \hspace{0.5cm} \mbox{if $a=1$} \label{eq:house:E:0} \\ p_{(a,b),(a,b-1)}&=& \frac{(b-1) \gamma}{b (a \lambda_L + \gamma)}. \nonumber \end{eqnarray} Note that the probability that the next event is the recovery of the individual of interest is $\gamma/\{ b (a \lambda_L + \gamma)\}$, and an individual only leaves state $(0,1)^\ast$ through recovery. Therefore the transition probabilities in \eqref{eq:house:E:0} define the substochastic matrix $\mathbf{P}$. The time that a household spends in state $(a,b)$ is exponentially distributed with rate $b (a \lambda_L + \gamma)$. Therefore, since infectives are making infectious contacts at the points of a homogeneous Poisson point process with rate $\lambda_G$, the mean number of global contacts made by an infective, whilst the household is in state $(a,b)$, is $\lambda_G/\{ b (a \lambda_L + \gamma)\}$, with all global contacts resulting in an $(h-1,1)$ infectious unit.
This gives the non-zero entries of $\Phi$ to be \begin{eqnarray*} \phi_{(a,b),(a-1,b+1)} &=& \frac{a \lambda_L }{b (a \lambda_L + \gamma)} = \frac{p_{(a,b),(a-1,b+1)}}{b} \hspace{0.5cm} \mbox{if $a>1$} \\ \phi_{(a,b),(0,1)^\ast} &=& \frac{\lambda_L}{b (a \lambda_L + \gamma)} = \frac{p_{(a,b),(0,1)^\ast}}{b} \hspace{1.05cm} \mbox{if $a=1$} \\ \phi_{(a,b),(h-1,1)} &=& \frac{\lambda_G}{b (a \lambda_L + \gamma)}~. \end{eqnarray*} Note that the probability that the infective of interest is responsible for a given local infection in a household in state $(a,b)$ is simply $1/b$. In a population of households of size $3$ with the states ordered $(2,1)$, $(1,2)$, $(1,1)$ and $(0,1)^\ast$, we have that \begin{eqnarray} \label{eq:house:E:1} \mathbf{P} &=& \begin{pmatrix} 0 & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 & 0 \\ 0& 0 & \frac{\gamma}{2 (\lambda_L + \gamma)} & \frac{2\lambda_L}{2 (\lambda_L + \gamma)} \\ 0 & 0 &0& \frac{\lambda_L}{\lambda_L + \gamma} \\ 0 & 0 & 0 & 0 \end{pmatrix} \end{eqnarray} and \begin{eqnarray} \label{eq:house:E:2} \Phi &=& \begin{pmatrix} \frac{\lambda_G}{2 \lambda_L + \gamma} & \frac{2 \lambda_L}{2 \lambda_L + \gamma} & 0 & 0 \\ \frac{\lambda_G}{2 (\lambda_L + \gamma)} & 0 & 0 & \frac{\lambda_L}{2 (\lambda_L + \gamma)} \\ \frac{\lambda_G}{\lambda_L + \gamma} & 0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\ \frac{\lambda_G}{\gamma} & 0 & 0 & 0 \end{pmatrix}. \end{eqnarray} It is then straightforward to show that \begin{eqnarray} \label{eq:house:E:3} \mathbf{M} = (\mathbf{I} - \mathbf{P})^{-1} \Phi = \begin{pmatrix} \frac{\lambda_G}{\gamma} & \frac{2 \lambda_L}{2 \lambda_L +\gamma} & 0 & \frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \\ \frac{\lambda_G}{ \gamma} & 0 & 0 & \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2} \\ \frac{\lambda_G}{\gamma} & 0 & 0 & \frac{\lambda_L}{\lambda_L + \gamma} \\ \frac{\lambda_G}{\gamma} & 0 & 0 & 0 \end{pmatrix}. \end{eqnarray} There are a couple of observations to make concerning $\mathbf{M}$. Firstly, regardless of at what stage of the household epidemic an individual is infected, the mean number of global contacts, and hence the mean number of infectious units of type $(h-1,1)$ created by the individual, is $\lambda_G/\gamma$. Secondly, no individuals of type $(1,1)$ are created in the epidemic since a household can only reach this state from $(1,2)$ and through the recovery of the other infective. More generally, an individual does not start as an infectious unit of type $(a,1)$, where $1 \leq a \leq h-2$, although it is helpful to define such infectious units for the progression of the epidemic. It follows from \eqref{eq:house:E:3}, by removing the redundant row and column for state $(1,1)$ individuals, that the basic reproduction number, $R_0$, solves the cubic equation \begin{eqnarray} \label{eq:house:E:4} s^3 - \frac{\lambda_G}{\gamma} s^2 - \frac{\lambda_G}{\gamma} \left\{\frac{2 \lambda_L}{2 \lambda_L +\gamma} +\frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \right\} s - \frac{\lambda_G}{\gamma} \left\{ \frac{2 \lambda_L}{2 \lambda_L +\gamma} \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2} \right\} =0.
\end{eqnarray} We note that in the notation of \cite{Pellis_etal}, $\mu_G = \lambda_G/\gamma$, $\mu_0 =1$, \[ \mu_1 = \frac{2 \lambda_L}{2 \lambda_L +\gamma} +\frac{\lambda_L^2 (\lambda_L + 2 \gamma)}{(2 \lambda_L + \gamma)(\lambda_L + \gamma)^2} \] and \[ \mu_2 = \frac{2 \lambda_L}{2 \lambda_L +\gamma} \frac{\lambda_L (\lambda_L + 2 \gamma)}{2 (\lambda_L + \gamma)^2}, \] where $\mu_i$ $(i=0,1,\ldots)$ denotes the mean number of infectives in generation $i$ of the household epidemic, see also \cite{Ball_etal}, Section 3.1.3. Therefore we can rewrite \eqref{eq:house:E:4} as \begin{eqnarray} \label{eq:house:E:5} s^3 - \sum_{i=0}^2 \mu_G \mu_i s^{2-i} = 0, \end{eqnarray} which is equivalent to \cite{Pellis_etal}, (3.3), and hence we obtain an $R_0$ identical to the $R_0^g$ defined in \cite{Ball_etal}. We proceed by showing that for the Markov household epidemic model the $R_0$ obtained as the maximal eigenvalue of $\mathbf{M}$ corresponds to $R_0^g$ defined in \cite{Ball_etal} for any $h \geq 1$. In order to do this it is helpful to write \begin{eqnarray} \label{eq:house:E:6} \mathbf{M} = \mathbf{G} + \mathbf{U}, \end{eqnarray} where $\mathbf{G}$ is the $K \times K$ matrix with $G_{k1} = \mu_G$ $(1 \leq k \leq K)$ and $G_{kj} =0$ otherwise. Then $\mathbf{G}$ and $\mathbf{U}$ denote the matrices of global and local infections, respectively. For $i=0,1,2,\ldots,h-1$, let $\nu_i = \sum_{j=1}^K u_{1j}^i$, the sum of the first row of $\mathbf{U}^i$, where $u^i_{kj}$ denotes the $(k,j)$ entry of $\mathbf{U}^i$. The key observation is that $\nu_i$ denotes the mean number of individuals in generation $i$ of the household epidemic, with $\mathbf{U}^0 = \mathbf{I}$, the identity matrix (the initial infective in the household is classed as generation 0), and $\mathbf{U}^i = \mathbf{0}$ for $i \geq h$. For $0 \leq a,b \leq h-1$, let $y_{(a,b)}^{(n)}$ denote the mean number of type $(a,b)$ individuals in the $n^{th}$ generation of the epidemic process. Then $y_{(h-1,1)}^{(0)}=1$ (the initial infective) and for all $(a,b) \neq (h-1,1)$, $y_{(a,b)}^{(0)}=0$. Let $\mathbf{y}^{(n)} = (y_{(a,b)}^{(n)})$ denote the mean number of individuals of each type in the $n^{th}$ generation of the epidemic process with the convention that $y_{(h-1,1)}^{(n)}$ is the first entry of $\mathbf{y}^{(n)}$. Then for $n \geq 1$, $\mathbf{y}^{(n)}$ solves \begin{eqnarray} \label{eq:house:E:7} \mathbf{y}^{(n)} = \mathbf{y}^{(n-1)} \mathbf{M}. \end{eqnarray} The proof of \eqref{eq:house:E:7} mimics the proof of \cite{Pellis_etal}, Lemma 2, and it follows by induction that \begin{eqnarray} \label{eq:house:E:8} \mathbf{y}^{(n)} = \mathbf{y}^{(0)} \mathbf{M}^n. \end{eqnarray} Let $x_{n,i}$ $(n=0,1,\ldots;i=0,1,\ldots,h-1)$ be defined as in \cite{Pellis_etal}, Lemma 1, with $x_{n,i}$ denoting the mean number of individuals in the $n^{th}$ generation of the epidemic who belong to the $i^{th}$ generation of the household epidemic. We again employ the convention that the initial infective individual in the household represents generation 0. It is shown in \cite{Pellis_etal}, Lemma 1, (3.5) and (3.6) that \begin{eqnarray} \label{eq:Pellis_etal:1} x_{n,0} = \mu_G \sum_{i=0}^{h-1} x_{n-1,i}, \end{eqnarray} and \begin{eqnarray} \label{eq:Pellis_etal:2} x_{n,i} = \mu_i x_{n-i,0}, \end{eqnarray} where $\mu_i$ is the mean number of infectives in generation $i$ of a household epidemic, $x_{0,0}=1$ and $x_{0,i} =0$ $(i=1,2,\ldots,h-1)$. Let $\mathbf{x}^{(n)} = (x_{n,0}, x_{n,1}, \ldots, x_{n,h-1})$.
\begin{lemma} \label{lem1} For $n=0,1,\ldots$, \begin{eqnarray} \label{eq:house:E:9} y^{(n)}_{(h-1,1)} = x_{n,0}. \end{eqnarray} Let $x_n = \mathbf{x}^{(n)} \mathbf{1} = \sum_{j=0}^{h-1} x_{n,j}$ and $y_n = \mathbf{y}^{(n)} \mathbf{1} = \sum_{(a,b)} y^{(n)}_{(a,b)}$, then for $n=0,1,\ldots$, \begin{eqnarray} \label{eq:house:E:9a} y_n = x_n. \end{eqnarray} \end{lemma} Before proving Lemma \ref{lem1}, we prove Lemma \ref{lem2}, which gives $\mu_i$ in terms of the local reproduction matrix $\mathbf{U}$. \begin{lemma} \label{lem2} For $i=0,1,\ldots,h-1$, \begin{eqnarray} \label{eq:house:E:10} \mu_i = \sum_{(c,d)} u_{(h-1,1),(c,d)}^i = \nu_i. \end{eqnarray} \end{lemma} {\bf Proof.} Let $Z_{(a,b)}^{(i)}$ denote the total number of individuals of type $(a,b)$ in generation $i$ of a household epidemic. Note that $Z_{(a,b)}^{(i)}$ will be either 0 or 1, and $Z_{(a,b)}^{(i)} =1$ if an infection takes place in a household in state $(a+1,b-1)$ with the infector belonging to generation $i-1$. Then by definition \begin{eqnarray} \label{eq:house:E:11} \mu_i = \sum_{(a,b)} \ez [ Z_{(a,b)}^{(i)}]. \end{eqnarray} We note that $Z_{(h-1,1)}^{(0)} =1$ and for $(a,b) \neq (h-1,1)$, $Z_{(a,b)}^{(0)} =0$, giving $\mu_0 =1$. Since $\mathbf{U}^0$ is the identity matrix, we have that $\sum_{(c,d)} u_{(h-1,1),(c,d)}^0 = 1$ also. For $i=1,2,\ldots,h-1$, we have that \begin{eqnarray} \label{eq:house:E:12} \ez [ Z_{(a,b)}^{(i)}] = \ez[\ez [Z_{(a,b)}^{(i)} | \mathbf{Z}^{(i-1)}] ], \end{eqnarray} where $\mathbf{Z}^{(i-1)} = (Z_{(a,b)}^{(i-1)})$. Now \begin{eqnarray} \label{eq:house:E:13} \ez [Z_{(a,b)}^{(i)} | \mathbf{Z}^{(i-1)}] = \sum_{(c,d)} u_{(c,d),(a,b)} Z_{(c,d)}^{(i-1)}, \end{eqnarray} since $u_{(c,d),(a,b)}$ is the mean number of type $(a,b)$ infectives created by a type $(c,d)$ individual via local infection. Therefore taking expectations of both sides of \eqref{eq:house:E:13} yields \begin{eqnarray} \label{eq:house:E:14} \ez [Z_{(a,b)}^{(i)}] = \sum_{(c,d)} u_{(c,d),(a,b)} \ez[Z_{(c,d)}^{(i-1)}]. \end{eqnarray} Therefore letting $z_{(a,b)}^{(i)} = \ez [Z_{(a,b)}^{(i)}]$ and $\mathbf{z}^{(i)} = (z_{(a,b)}^{(i)})$ it follows from \eqref{eq:house:E:14} that \begin{eqnarray} \label{eq:house:E:15} \mathbf{z}^{(i)} = \mathbf{z}^{(i-1)} \mathbf{U} = \mathbf{z}^{(0)} \mathbf{U}^i. \end{eqnarray} Hence, \begin{eqnarray} \label{eq:house:E:16} \mu_i &=& \sum_{(a,b)} z_{(a,b)}^{(i)} \nonumber \\ &=& \sum_{(a,b)} z_{(a,b)}^{(0)} \sum_{(c,d)} u_{(a,b),(c,d)}^i \nonumber \\ &=& \sum_{(c,d)} u_{(h-1,1),(c,d)}^i = \nu_i, \end{eqnarray} as required. \hfill $\square$ {\bf Proof of Lemma \ref{lem1}.} We prove the lemma by induction, noting that for $n=0$, $y^{(0)}_{(h-1,1)} = x_{0,0}=1$.
Before proving the inductive step, we note that it follows from \eqref{eq:house:E:7} that \begin{eqnarray} \label{eq:house:E:17} y_{(h-1,1)}^{(n)} = \frac{\lambda_G}{\gamma} \sum_{(c,d)} y_{(c,d)}^{(n-1)} = \mu_G \sum_{(c,d)} y_{(c,d)}^{(n-1)}\end{eqnarray} and for $(a,b) \neq (h-1,1)$, \begin{eqnarray} \label{eq:house:E:18} y_{(a,b)}^{(n)} &=& \sum_{(c,d)} y_{(c,d)}^{(n-1)} u_{(c,d),(a,b)} \nonumber \\ &=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(c,d) \neq (h-1,1)} y_{(c,d)}^{(n-1)} u_{(c,d),(a,b)} \nonumber \\ &=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(c,d) \neq (h-1,1)} \left\{ \sum_{(e,f)} y_{(e,f)}^{(n-2)} u_{(e,f),(c,d)} \right\} u_{(c,d),(a,b)} \nonumber \\ &=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(e,f)} y_{(e,f)}^{(n-2)} \sum_{(c,d) \neq (h-1,1)} u_{(e,f),(c,d)} u_{(c,d),(a,b)} \nonumber \\ &=& y_{(h-1,1)}^{(n-1)} u_{(h-1,1),(a,b)} + \sum_{(e,f)} y_{(e,f)}^{(n-2)} u_{(e,f),(a,b)}^2. \end{eqnarray} The final line of \eqref{eq:house:E:18} follows from $u_{(e,f),(h-1,1)} =0$ for all $(e,f)$. Then by a simple recursion it follows from \eqref{eq:house:E:18}, after at most $h-1$ steps, that, for $(a,b) \neq (h-1,1)$, \begin{eqnarray} \label{eq:house:E:19} y_{(a,b)}^{(n)} &=& \sum_{j=1}^{h-1} y_{(h-1,1)}^{(n-j)} u_{(h-1,1),(a,b)}^j. \end{eqnarray} Note that \eqref{eq:house:E:19} can easily be extended to include $(a,b) = (h-1,1)$ giving \begin{eqnarray} \label{eq:house:E:20} y_{(a,b)}^{(n)} &=& \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-j)} u_{(h-1,1),(a,b)}^j. \end{eqnarray} For $n \geq 1$, we assume the inductive hypothesis that for $0 \leq k \leq n-1$, $y^{(k)}_{(h-1,1)} = x_{k,0}$. Then, using \eqref{eq:house:E:7} and \eqref{eq:house:E:20}, we have that \begin{eqnarray} \label{eq:house:E:21} y^{(n)}_{(h-1,1)} &=& \sum_{(a,b)} m_{(a,b),(h-1,1)} y^{(n-1)}_{(a,b)} \nonumber \\ &=& \mu_G \sum_{(a,b)} y^{(n-1)}_{(a,b)} \nonumber \\ &=& \mu_G \sum_{(a,b)} \left\{ \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-1-j)} u_{(h-1,1),(a,b)}^j \right\} \nonumber \\ &=& \mu_G \sum_{j=0}^{h-1} y_{(h-1,1)}^{(n-1-j)} \left( \sum_{(a,b)}u_{(h-1,1),(a,b)}^j \right). \end{eqnarray} Using the inductive hypothesis and Lemma \ref{lem2}, we have from \eqref{eq:house:E:21} that \begin{eqnarray} \label{eq:house:E:22} y^{(n)}_{(h-1,1)}&=& \mu_G \sum_{j=0}^{h-1} x_{(n-1-j),0} \mu_j =x_{n,0}, \end{eqnarray} as required for \eqref{eq:house:E:9}. Using a similar line of argument, \begin{eqnarray} \label{eq:house:E:22a} y_n = \mathbf{y}^{(n)} \mathbf{1} &=& \sum_{(a,b)} y^{(n)}_{(a,b)} \nonumber \\ &=& \sum_{(a,b)} \left\{ \sum_{j=0}^{h-1} y^{(n-j)}_{(h-1,1)} u_{(h-1,1),(a,b)}^j \right\} \nonumber \\ &=& \sum_{j=0}^{h-1} y^{(n-j)}_{(h-1,1)} \sum_{(a,b)} \left\{ u_{(h-1,1),(a,b)}^j \right\} \nonumber \\ &=& \sum_{j=0}^{h-1} x_{n-j,0} \mu_j = x_n, \end{eqnarray} as required for \eqref{eq:house:E:9a}. \hfill $\square$ Therefore we have shown that the two representations of the household epidemic given in \cite{Pellis_etal} and in this paper give the same mean number of infectives and the same mean number of new household epidemics in generation $n$ $(n=0,1,\ldots)$. This is a key component in showing that $\mathbf{M}$ and $\mathbf{A}$, the mean reproductive matrix given in \cite{Pellis_etal} by \begin{eqnarray} \label{eq:house:E:23} \mathbf{A} = \begin{pmatrix} \mu_G \mu_0 & 1 & 0 & \cdots & 0 \\ \mu_G \mu_1 & 0 & 1 & & 0 \\ \vdots & && \ddots & 0 \\ \mu_G \mu_{h-2} & 0 & 0 & & 1 \\ \mu_G \mu_{h-1} & 0 & 0& \cdots & 0 \\ \end{pmatrix}, \end{eqnarray} have the same largest eigenvalue.
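Before giving the general argument, the claim is easily checked numerically for $h=3$. The sketch below (Python with NumPy; the rate values are arbitrary choices of ours, made purely for illustration) builds $\mathbf{P}$ and $\Phi$ from \eqref{eq:house:E:1} and \eqref{eq:house:E:2}, forms $\mathbf{M}=(\mathbf{I}-\mathbf{P})^{-1}\Phi$, and confirms that its largest eigenvalue solves the cubic \eqref{eq:house:E:4} and agrees with the largest eigenvalue of $\mathbf{A}$ in \eqref{eq:house:E:23}.

\begin{verbatim}
import numpy as np

lam_L, lam_G, gam = 1.3, 0.7, 1.0  # arbitrary illustrative rates

# States ordered (2,1), (1,2), (1,1), (0,1)*
P = np.array([
    [0, 2*lam_L/(2*lam_L + gam), 0, 0],
    [0, 0, gam/(2*(lam_L + gam)), 2*lam_L/(2*(lam_L + gam))],
    [0, 0, 0, lam_L/(lam_L + gam)],
    [0, 0, 0, 0]])
Phi = np.array([
    [lam_G/(2*lam_L + gam), 2*lam_L/(2*lam_L + gam), 0, 0],
    [lam_G/(2*(lam_L + gam)), 0, 0, lam_L/(2*(lam_L + gam))],
    [lam_G/(lam_L + gam), 0, 0, lam_L/(lam_L + gam)],
    [lam_G/gam, 0, 0, 0]])

M = np.linalg.inv(np.eye(4) - P) @ Phi
rho_M = max(np.linalg.eigvals(M).real)

# Generation means of the household epidemic, and the matrix A
mu_G = lam_G/gam
mu = [1.0,
      2*lam_L/(2*lam_L + gam)
      + lam_L**2*(lam_L + 2*gam)/((2*lam_L + gam)*(lam_L + gam)**2),
      (2*lam_L/(2*lam_L + gam))*lam_L*(lam_L + 2*gam)/(2*(lam_L + gam)**2)]
A = np.array([[mu_G*mu[0], 1, 0],
              [mu_G*mu[1], 0, 1],
              [mu_G*mu[2], 0, 0]])
rho_A = max(np.linalg.eigvals(A).real)

residual = rho_M**3 - mu_G*(mu[0]*rho_M**2 + mu[1]*rho_M + mu[2])
print(rho_M, rho_A, residual)  # rho_M = rho_A = R_0; residual ~ 0
\end{verbatim}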
Let $\rho_A$ and $\rho_M$ denote the largest eigenvalues of $\mathbf{A}$ and $\mathbf{M}$, respectively. Let $\mathbf{z}_L$ and $\mathbf{z}_R$ denote the normalised left and right eigenvectors corresponding to $\rho_A$ with $\mathbf{z}_L \mathbf{z}_R = 1$. In \cite{Pellis_etal}, Lemma 3, it is noted that \begin{eqnarray} \label{eq:house:E:24} \mathbf{A} = \rho_A \mathbf{C}_A + B_A, \end{eqnarray} where $\mathbf{C}_A = \mathbf{z}_R \mathbf{z}_L$ and $\rho_A^{-n} B_A^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$. This implies that if $x_n = \mathbf{x}^{(n)} \mathbf{1}$, the mean number of individuals infected in the $n^{th}$ generation of the epidemic, then \begin{eqnarray} \label{eq:house:E:25} (y_n^{1/n} =) x_n^{1/n} \rightarrow \rho_A \hspace{0.5cm} \mbox{as } n \rightarrow \infty. \end{eqnarray} As observed earlier, the construction of $\mathbf{M}$ results in $\mathbf{0}$ columns corresponding to infectious units which can arise through the removal of an infective. Let $\tilde{\mathbf{M}}$ denote the matrix obtained by removing the $\mathbf{0}$ columns and corresponding rows from $\mathbf{M}$. The eigenvalues of $\mathbf{M}$ will consist of the eigenvalues of $\tilde{\mathbf{M}}$ plus repeated 0 eigenvalues, one for each $\mathbf{0}$ column. Let $\mathbf{w}_L$ and $\mathbf{w}_R$ denote the normalised left and right eigenvectors corresponding to $\rho_{\tilde{M}}$ with $\mathbf{w}_L \mathbf{w}_R = 1$. Then, since $\tilde{\mathbf{M}}$ is a positively regular matrix, by the Perron-Frobenius theorem $\tilde{\mathbf{M}}$ (and hence $\mathbf{M}$) has a unique real and positive largest eigenvalue, $\rho_M$. Moreover, \begin{eqnarray} \label{eq:house:E:26} \tilde{\mathbf{M}} = \rho_M \mathbf{C}_M + B_M, \end{eqnarray} where $\mathbf{C}_M = \mathbf{w}_R \mathbf{w}_L$ and $\rho_M^{-n} B_M^n \rightarrow \mathbf{0}$ as $n \rightarrow \infty$. Then following the arguments in the proof of \cite{Pellis_etal}, Lemma 3, \begin{eqnarray} \label{eq:house:E:27} y_n^{1/n} \rightarrow \rho_M \hspace{0.5cm} \mbox{as } n \rightarrow \infty. \end{eqnarray} Since $x_n = y_n$ $(n=0,1,2,\ldots)$, it follows from \eqref{eq:house:E:25} and \eqref{eq:house:E:27} that $\rho_M = \rho_A$ and therefore that the two constructions of the epidemic process give the same basic reproduction number $R_0$. \section{Introduction} \label{S:intro} The basic reproduction number, $R_0$, is a key summary in infectious disease modelling, being defined as the expected number of individuals infected by a typical individual in a completely susceptible population. $R_0$ is straightforward to define and compute in a homogeneous population consisting of a single type of infective (homogeneous behaviour) and with uniform random mixing of infectives (homogeneous mixing). This yields the celebrated threshold theorem, see, for example, \cite{Whittle55}: there is a non-zero probability of a major epidemic outbreak if and only if $R_0 >1$. The extension of the definition of $R_0$ to non-homogeneous populations is non-trivial. Important work in this direction includes \cite{Diekmann_etal}, which considers heterogeneous populations consisting of multiple types of infectives, and \cite{Pellis_etal}, \cite{Ball_etal}, which consider heterogeneity in mixing through population structure.
Specifically, \cite{Diekmann_etal} defines, for a population consisting of $K$ types of infectives, the $K \times K$ mean reproduction matrix $\mathbf{M}$ (also known as the next-generation matrix), where $M_{ij}$ denotes the mean number of infectives of type $j$ generated by a typical type $i$ infective during its infectious period. Then $R_0$ is defined as the Perron-Frobenius (dominant) eigenvalue of $\mathbf{M}$. By contrast, \cite{Pellis_etal} and \cite{Ball_etal} focus on a household epidemic model with a single type of infective and consider a branching process approximation for the initial stages of the epidemic process, see, for example, \cite{Whittle55}, \cite{Ball_Donnelly} and \cite{BMST}. \cite{Pellis_etal} consider the asymptotic growth rate of the epidemic on a generational basis using an embedded Galton-Watson branching process and define $R_0$ to be \begin{eqnarray} \label{eq:intro:1} R_0 = \lim_{n \rightarrow \infty} \ez [X_n]^{1/n}, \end{eqnarray} where $X_n$ is the number of infectives in the $n^{th}$ generation of the epidemic. Given that the mean reproduction matrix $\mathbf{M}$ represents the mean number of infectives generated by an infective in the next generation, we observe that the computation of $R_0$ in both cases is defined in terms of the generational growth rate of the epidemic. The current work applies the approach of \cite{Diekmann_etal} to Markovian epidemics in structured populations and thus assumes individuals have exponentially distributed infectious periods. Using the method of stages \cite{Barbour76}, it is straightforward to extend the work to more general infectious periods consisting of sums or mixtures of exponential random variables. Alternatively, by considering the epidemic on a generational basis in the spirit of \cite{Pellis_etal}, we can adapt our approach to general infectious periods for $SIR$ or $SEIR$ epidemics. Note that, as demonstrated in a sexually transmitted disease example in Section \ref{S:example:sex}, our approach applies to $SIS$ epidemics as well. The key idea is that in structured populations we can define infectives by the type of infectious unit to which they belong, where for many models the number of types of infectious units is relatively small and easy to classify. By characterising an infective by the type of infectious unit they originate in (belong to at the point of infection) and considering the possible events involving the infectious unit, we can write down a simple recursive equation for the mean reproduction matrix $\mathbf{M}$. Then, as in \cite{Diekmann_etal}, we can simply define $R_0$ to be the Perron-Frobenius eigenvalue of $\mathbf{M}$. Our approach is similar to \cite{LKD15}, who also consider classifying infectives by the type of infectious unit to which they belong in a dynamic $SI$ sexually transmitted disease model which is similar to the $SIS$ model studied in Section \ref{S:example:sex}. The modelling in \cite{LKD15} is presented in a deterministic framework, with \cite{Lashari_Trapman} considering the model from a stochastic perspective. The key difference to \cite{LKD15} is that we work with the embedded discrete Markov process of the transition events rather than the continuous time Markov rate matrices.
The advantages of studying the discretised process are that it is easier to incorporate both local (within-infectious-unit) infections and global (creation of new infectious units) infections, and that the construction of $\mathbf{M}$ generalises more readily beyond exponential infectious periods, see Section \ref{S:example:gcm} and Appendix \ref{App:rank}. Moreover, we present the approach in a general framework which easily incorporates both $SIR$ and $SIS$ epidemic models and allows for population as well as epidemic dynamics. The remainder of the paper is structured as follows. In Section \ref{S:setup} we define the generic epidemic model which we consider along with the derivation of $\mathbf{M}$ and $R_0$. To assist with understanding, we illustrate with an $SIR$ household epidemic model (\cite{BMST}, \cite{Pellis_etal} and \cite{Ball_etal}). In Section \ref{S:Example}, we detail the computation of $\mathbf{M}$ and $R_0$ for the $SIR$ household epidemic model (Section \ref{S:example:house}), an $SIS$ sexually transmitted disease model (Section \ref{S:example:sex}), see, for example, \cite{Kret}, \cite{LKD15} and \cite{Lashari_Trapman}, and the great circle $SIR$ epidemic model (Section \ref{S:example:gcm}), see \cite{BMST}, \cite{Ball_Neal02} and \cite{Ball_Neal03}. In Section \ref{S:example:house} we show that the computed $R_0$ agrees with $R_0^g$ obtained in \cite{Ball_etal}.
\section*{Acknowledgements} TT was supported by a PhD scholarship, grant number ST\_2965 Lancaster U, from the Royal Thai Office of Education Affairs. \section{Model setup} \label{S:setup} In this Section we characterise the key elements of the modelling. In order to keep the results as general as possible, we present a generic description of the model before illustrating with examples to make the more abstract concepts more concrete. We assume that the population consists of $M$ types of individuals and for illustrative purposes we will assume that $M=1$.
We allow individuals to be grouped together in local units and a unit is said to be an infectious unit if it contains at least one infectious individual. The local units might be static (remain fixed through time) or dynamic (varying over time). We assume that there are $K$ states that local infectious units can be in. Note that different local units may permit different local infectious unit states. Finally, we assume that all dynamics within the population and epidemic are Markovian. That is, for any infectious unit there is an exponential waiting time until the next event involving the infectious unit and no changes occur in the infectious unit between events. We assume that there are three types of events which take place within the population with regard to the epidemic process. These are: \begin{enumerate} \item {\bf Global infections}. These are infections where the individual contacted is chosen uniformly at random from a specified type of individual in the population. If the population consists of one single type of individual then the individual is chosen uniformly at random from the whole population. It is assumed that the number of individuals of each type is large, so that in the early stages of the epidemic, with probability tending to 1, a global infectious contact is with a susceptible individual, and thus results in an infection. \item {\bf Local transitions}. These are transitions which affect an infectious unit. These transitions can include infection within an infectious unit, leading to a change of state of the unit, or an infectious individual moving to a different type. \item {\bf Recovery}. An infectious individual recovers from the disease and is no longer able to infect individuals within their current infectious episode. Given that we allow for $SIR$ and $SIS$ epidemic dynamics, a given individual may have at most one, or possibly many, infectious episodes depending upon the disease dynamics. \end{enumerate} \subsection{$SIS$ sexually transmitted disease model} \label{S:example:sex} We begin by describing the $SIS$ sexually transmitted disease model which provided the motivation for this work and then study the construction of $\mathbf{M}$. \subsubsection{Model} We consider a model for a population of sexually active individuals who alternate between being in a relationship and being single. For simplicity of presentation, we assume a homosexual model where each relationship comprises two individuals. The extension to a heterosexual population with equal numbers of males and females is straightforward. We assume $SIS$ disease dynamics with infectious individuals returning to the susceptible state on recovery from the disease. There are two key dynamics underlying the spread of the disease: the formation and dissolution of relationships, and the transmission of the disease. Individuals are termed as either single (not currently in a relationship) or coupled (currently in a relationship). We assume that each single individual seeks to instigate the formation of a relationship at the points of a homogeneous Poisson point process with rate $\alpha/2$, with the individual with whom they seek to form a relationship chosen uniformly at random from the population. (The rate $\alpha/2$ allows for individuals to be both instigators and contacted individuals.)
If a contacted individual is single, they agree to form a relationship with the instigator; otherwise the individual is already in a relationship and remains with their current partner. The lifetimes of relationships are independent and identically distributed according to a non-negative random variable $T$ with mean $1/\delta$. For a Markovian model we take $T \sim {\rm Exp} (\delta)$, corresponding to relationships dissolving at rate $\delta$. When a relationship dissolves the individuals involved return to the single state. Therefore there is a constant flux of individuals going from single to coupled and back again. We assume that the disease is introduced into the population at time $t=0$ with the population in stationarity with regard to relationship status. The proportion, $\sigma$, of the population who are single in stationarity is given in \cite{Lashari_Trapman} with \begin{eqnarray} \label{eq:model:2} \sigma^2 \alpha &=& \delta (1 -\sigma) \nonumber \\ \sigma &=& \frac{- \delta + \sqrt{\delta^2 + 4 \delta \alpha}}{2 \alpha}. \end{eqnarray} Thus $\tilde{\alpha} = \alpha \sigma$ is the rate at which a single individual enters a relationship. We assume that the relationship dynamics are in a steady state when the disease is introduced and that the introduction of the disease does not affect the relationship dynamics. We now turn to the disease dynamics. We assume $SIS$ dynamics, in that individuals alternate between being susceptible and infectious, and on recovery from being infectious an individual immediately reenters the susceptible state. We allow for two types of sexual contacts: those within relationships and {\it casual} contacts which occur outside relationships. The casual contacts, which we term {\it one-night stands}, represent single sexual encounters capturing short term liaisons. We assume that the infectious periods are independent and identically distributed according to a non-negative random variable $Q$, where $Q \sim {\rm Exp} (\gamma)$ for a Markovian model. Whilst in a relationship, we assume that an infectious individual makes infectious contact with their partner at the points of a homogeneous Poisson point process with rate $\beta$. We assume that individuals can also partake in, and transmit the disease via, one-off sexual contacts (one-night stands). We assume that individuals in relationships are less likely to take part in a one-night stand, with probability $\rho$ of having a one-night stand. Therefore we assume that a single individual (individual in a relationship) seeks to make infectious contact via one-night stands at the points of a homogeneous Poisson point process with rate $\omega$ $(\rho \omega)$, where $\omega$ amalgamates the propensity for partaking in a one-night stand with the transmissibility of the disease during a one-night stand. If an individual attempts to have a one-night stand with somebody in a relationship, there is only probability $\rho$ of the one-night stand occurring. Thus $\rho=0$ denotes that individuals in relationships are faithful, whilst $\rho =1$ denotes that there is no difference between those in or out of a relationship with regard to one-night stands. In the early stages of the epidemic, with high probability, a single infective will form a relationship with a susceptible individual, and a one-night stand with an individual in a relationship will be with a member of a totally susceptible relationship.
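As a quick numerical check (a Python sketch, with the rate values being arbitrary choices of ours), the stationary proportion $\sigma$ in \eqref{eq:model:2} can be computed and the balance equation $\sigma^2 \alpha = \delta(1-\sigma)$ verified directly:

\begin{verbatim}
import math

alpha, delta = 2.0, 0.5  # arbitrary formation and dissolution rates
sigma = (-delta + math.sqrt(delta**2 + 4*delta*alpha)) / (2*alpha)
assert abs(sigma**2 * alpha - delta*(1 - sigma)) < 1e-12
print(sigma)  # stationary proportion of the population who are single
\end{verbatim}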
\subsubsection{Construction of $\mathbf{M}$} For this model there are three types of infectious units: a single infective, a couple with one infective and a couple with two infectives, which we term types 1, 2 and 3, respectively. The possible events and their rates of occurrence are presented in Table \ref{tab:sex:1}. \begin{table} \begin{tabular}{l|ccc} Event Type & Single infective & \multicolumn{2}{c}{Infective in a relationship} \\ & & Susceptible Partner & Infectious partner \\ \hline Relationship form & $\alpha \sigma$ & -- & -- \\ Relationship dissolve & -- & $\delta$ & $\delta$ \\ One-night stand single & $\omega \sigma$ & $\rho \omega \sigma$ & $\rho \omega \sigma$ \\ One-night stand relationship & $\rho \omega \sigma$ & $\rho^2 \omega \sigma$ & $\rho^2 \omega \sigma$ \\ Infect partner & -- & $\beta$ & -- \\ Partner recovers & -- & -- & $\gamma$ \\ Recovers & $\gamma$ & $\gamma$ & $\gamma$ \\ \end{tabular} \caption{Events and their rates of occurrence for an infectious individual in each type of infectious unit.} \label{tab:sex:1} \end{table} It is straightforward from Table \ref{tab:sex:1} to construct $\Phi^E$ and $\mathbf{P}^E$ in terms of the next event to occur. For $\Phi^E$, the next event will create at most one infective and we only need to compute the probability of each type of infection. Hence, \begin{eqnarray} \label{eq:sex:1} \Phi^E = \begin{pmatrix} \frac{\omega \sigma}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & \frac{\rho \omega \sigma}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & 0 \\ \frac{\rho \omega \sigma}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\rho^2 \omega \sigma}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\beta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} \\ \frac{\rho\omega \sigma}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\rho^2 \omega \sigma}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & 0 \end{pmatrix}. \end{eqnarray} Similarly, by considering the transition at each event, we have that \begin{eqnarray} \label{eq:sex:2} \mathbf{P}^E = \begin{pmatrix} \frac{\omega \{1- (1-\rho) (1-\sigma)\}}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & \frac{\alpha \sigma}{\alpha \sigma + \omega \{1- (1-\rho) (1-\sigma)\} + \gamma} & 0 \\ \frac{\delta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\rho\omega \{1- (1-\rho) (1-\sigma)\}}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} & \frac{\beta}{\delta + \rho\omega \{1- (1-\rho) (1-\sigma)\} + \beta + \gamma} \\ \frac{\delta}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\gamma}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} & \frac{\rho\omega \{1- (1-\rho) (1-\sigma)\}}{\delta + \rho \omega \{1- (1-\rho) (1-\sigma)\} + 2 \gamma} \end{pmatrix}. \end{eqnarray} One-night stands do not alter the relationship and hence do not constitute transition events.
Given that the number of one-night stands made by an infectious individual in an interval of a given length follows a Poisson distribution with mean proportional to the length of the interval, it is straightforward to show that \begin{eqnarray} \label{eq:sex:3} \Phi = \begin{pmatrix} \frac{\omega \sigma}{\alpha \sigma + \gamma} & \frac{\rho \omega \sigma}{\alpha \sigma + \gamma} & 0 \\ \frac{\rho \omega \sigma}{\delta + \beta + \gamma} & \frac{\rho^2 \omega \sigma}{\delta + \beta + \gamma} & \frac{\beta}{\delta + \beta + \gamma} \\ \frac{\rho\omega \sigma}{\delta + 2 \gamma} & \frac{\rho^2 \omega \sigma}{\delta + 2 \gamma} & 0 \end{pmatrix}, \end{eqnarray} and that the transition matrix is given by \begin{eqnarray} \label{eq:sex:4} \mathbf{P} = \begin{pmatrix} 0 & \frac{\alpha \sigma}{\alpha \sigma + \gamma} & 0 \\ \frac{\delta}{\delta + \beta + \gamma} & 0 & \frac{\beta}{\delta + \beta + \gamma} \\ \frac{\delta}{\delta + 2 \gamma} & \frac{\gamma}{\delta + 2 \gamma} & 0 \end{pmatrix}. \end{eqnarray} Straightforward but tedious algebra gives \begin{eqnarray} \label{eq:sex:5} \mathbf{M} &=& (\mathbf{I} - \mathbf{P})^{-1}\Phi =(\mathbf{I} - \mathbf{P}^E)^{-1}\Phi^E \nonumber \\ &=& \begin{pmatrix} \frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{\sigma \omega \rho (\alpha \sigma \rho + \delta + \gamma)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{\alpha \sigma \beta (\delta + 2 \gamma)}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \\ \frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{\sigma \omega \rho (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{ \beta (\delta + 2 \gamma) (\alpha \sigma + \gamma)}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \\ \frac{\sigma \omega (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{\sigma \omega \rho (\alpha \sigma \rho + \delta + \gamma \rho)}{(\alpha \sigma + \delta + \gamma) \gamma} & \frac{ \beta \{ \alpha \sigma (\delta + \gamma) +\gamma^2\}}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)} \end{pmatrix}. \end{eqnarray} Note that the mean number of one-night stands is the same for all individuals who start their infectious period in a relationship, regardless of the infectious status of their partner. The eigenvalues of $\mathbf{M}$ can be obtained from solving the characteristic equation ${\rm det} (\mathbf{M} - s \mathbf{I}) = 0$, a cubic polynomial in $s$. The resulting algebraic expressions are not very illuminating about $R_0$ and its properties. However, this does allow for simple computation of $R_0$ for specified parameter values. In the special case $\rho =0$, where only single individuals can have one-night stands, we note that individuals can only enter the infectious state as a member of an infectious unit of type 1 or 3. Furthermore, if $\omega =0$, there are no one-night stands and individuals only become infected via an infectious partner within a relationship. In this case the first two columns of $\mathbf{M}$ become $\mathbf{0}$ and \begin{eqnarray} \label{eq:sex:6} R_0 = M_{3,3} =\frac{ \beta \{ \alpha \sigma (\delta + \gamma) +\gamma^2\}}{\gamma (\alpha \sigma + \delta + \gamma)(\beta+ \delta +2 \gamma)}, \end{eqnarray} the mean number of times an individual will successfully infect a partner whilst infectious.
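The computation is easily carried out numerically. The sketch below (Python with NumPy; the parameter values are arbitrary choices of ours) assembles $\Phi$ and $\mathbf{P}$ as given in \eqref{eq:sex:3} and \eqref{eq:sex:4}, computes $R_0$ as the largest eigenvalue of $\mathbf{M}=(\mathbf{I}-\mathbf{P})^{-1}\Phi$, and checks the $\omega=0$ special case against \eqref{eq:sex:6}:

\begin{verbatim}
import numpy as np

def R0(alpha, delta, beta, gamma, omega, rho):
    # stationary proportion single, from (model:2)
    sigma = (-delta + np.sqrt(delta**2 + 4*delta*alpha)) / (2*alpha)
    d1, d2, d3 = alpha*sigma + gamma, delta + beta + gamma, delta + 2*gamma
    Phi = np.array([
        [omega*sigma/d1,     rho*omega*sigma/d1,    0],
        [rho*omega*sigma/d2, rho**2*omega*sigma/d2, beta/d2],
        [rho*omega*sigma/d3, rho**2*omega*sigma/d3, 0]])
    P = np.array([
        [0,        alpha*sigma/d1, 0],
        [delta/d2, 0,              beta/d2],
        [delta/d3, gamma/d3,       0]])
    M = np.linalg.inv(np.eye(3) - P) @ Phi
    return max(np.linalg.eigvals(M).real), sigma

alpha, delta, beta, gamma = 2.0, 0.5, 3.0, 1.0
r0, sigma = R0(alpha, delta, beta, gamma, omega=0.0, rho=0.0)
closed = beta*(alpha*sigma*(delta + gamma) + gamma**2) / (
    gamma*(alpha*sigma + delta + gamma)*(beta + delta + 2*gamma))
print(r0, closed)  # the two values agree
\end{verbatim}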
The expression for $R_0$ given in \eqref{eq:sex:6} is very similar to that given in \cite{LKD15}, (30). The only difference for $\omega=0$ between our model and the $SI$ sexually transmitted disease model presented in \cite{LKD15} and \cite{Lashari_Trapman}, for individuals with a maximum of one sexual partner, is that the model of \cite{LKD15} replaces recovery ($\gamma$) by death ($\mu$), which results in the relationship ending as well as the removal of the infective. The model of \cite{LKD15} and \cite{Lashari_Trapman} incorporates birth of susceptibles at rate $N \mu$ to maintain a population size of $O(N)$.
\section{Introduction} In $2011$, the journal Science \cite{Hilbert} reported that the total computing power of the world was approximately equal to that of one human brain. Since then, the increase in total computing power has been considerable and if we accept that Moore's Law is valid \cite{Moore} (and the implications thereon), it would seem that our humble biological devices are quickly becoming obsolete.\\ The original article used ``the maximum number of nerve impulses executed by one human brain per second'' as a measure of computing power. If we take an average human brain of $8.6 \times 10^{12}$ spiking neurons \cite{Azevedo}, firing at a maximum frequency of $300$ Hertz, we arrive at an estimate of the human brain's computing power of $2.58 \times 10^{15}$ operations per second. This is less than the estimate in Science, which also considered the number of connections between each neuron (although it is difficult to equate connectivity to computational power in any direct way).\footnote{We are not using the standard floating point operations per second (FLOPS) as this assumes too much about how brains are operating.} If we consider the number of discrete states a brain could exist in if neurons operated in a simple binary on/off manner (i.e. each neuron performing in a manner similar to a transistor), we obtain the total number to be approximately $2^{8.6\times 10^{12}}\approx 10^{2,588,857,962,710}$ distinct states - which would seem a fairly impressive memory capacity and massively larger than the $2011$ estimate for the total memory power of all the computers in the world at $2.36 \times 10^{21}$ bits \cite{Hilbert}.\\ However, brains are not simple binary computing devices and operate in a very different manner to standard computers. The fundamental mechanisms by which brains process and store information may give rise to higher or lower numbers of operations than that stated above.\\ Here we ask a hypothetical question: \textit{``Given the various theories of neural coding, what is the theoretical upper bound on the computational capacity of the human brain?"} This is in many ways akin to asking ``What is the lifespan of the universe?'' and concluding this as the theoretical upper bound on ``How long will I live?'' As such, the answer to the first question gives little information as to the answer to the second but is nonetheless a valid and interesting question in its own right. That is, we are not concerned with how many discrete dynamical states the human brain can actually exist in, but with the theoretical upper bound on this.\\ Clearly the number of discrete states cannot be infinite. If it were possible to store an infinite number of bits in a brain of $8.6\times 10^{12}$ neurons, it would also be possible to do the same with half that number, and half again - there would be no need for a big human-sized brain and we would all have much smaller heads.\\ \section{The Human Brain} The human brain is massively complex and it is beyond the scope of this article to fully describe how it operates. It is useful though to have a basic idea of what we are considering, if only to frame the concept we are trying to investigate. \\ From a functional perspective the brain is composed of an enormous number of cells, connected via an extremely complex network. These cells are, in the majority, glial cells, which provide the physical structure of the brain and are involved in removal of waste and other non-information processing functions.
About $10\%$ of brain cells are neurons, which are the fundamental entities that perform the processing and memory.\\ Neurons are themselves very complex entities: long, thin and able to form multiple branches (dendrites). They use electrochemical pulses to transmit information to other neurons. Each neuron connects to on average $1000$ other neurons \cite{Williams} via a synapse - a small gap between neurons across which neurotransmitters diffuse. In turn, the neurotransmitters can either polarize or depolarize the neuron to which they are connected - causing the post-synaptic neuron to pulse or not to pulse.\\ It is possible to model the pulsating behaviour of neurons using systems of ordinary differential equations. The most famous of these is the Hodgkin-Huxley model \cite{Hodgkin}, which accurately simulates the electrochemical pulse moving down the axon of a neuron. This model is highly detailed, having differential equations to describe gating channels as well as the voltage. The equations are somewhat tricky to numerically integrate and the number of underlying parameters is large.\\ A simpler and more accessible neuron model is the Fitzhugh-Nagumo model \cite{Fitzhugh,Nagumo}, which is essentially a reduction of the Hodgkin-Huxley model described above. The governing equations are given as: \begin{eqnarray} \label{eqn:FHN} \dot{u}&=&c(-v+u-u^3/3+I)\nonumber \\ \dot{v}&=&u-bv+a, \end{eqnarray} \noindent where $I$ is some external current applied to the neuron, $a, \, b$ and $c$ are the neuron's parameters and the variables $u$ and $v$ correspond to the fast (spiking) and slow voltages.\\ \begin{figure} \begin{center} \includegraphics*[height=40mm, width=70mm]{FHN1.eps} \caption{\label{fig:FN1} {\bf{Fast Voltage of Fitzhugh Nagumo Oscillator}} The parameter values are: $a=0.7;b=0.8;c=10;I=0.5$.} \end{center} \end{figure} \section{Theories of Neural Coding} There is still some debate as to how brains store and process information. There are various theories of neural coding and it is generally believed that more than one fundamental mechanism is used. As a summary, the following classification would cover most of the available theories: \begin{itemize} \item Population \item Rate \begin{itemize} \item Spike Count \item Time Dependent Firing Rate \end{itemize} \item Spatio-Temporal \begin{itemize} \item Binary \item Receptive Field (this generally applies only to the retina) \item Synchronization \end{itemize} \end{itemize} If we are concerned with determining an upper bound on the computational power, we need only consider the coding mechanism which gives the highest theoretical number of states, in this case, synchronization coding.\\ \section{Synchronization Coding} Synchronization coding is a form of spatio-temporal coding in which information is stored not in the individual firing of neurons but in the similar response to stimulus of groups of neurons. \\ The Fitzhugh-Nagumo Model (Equation \ref{eqn:FHN} above) can be adapted to demonstrate synchronization similar to that observed in neurons via coupling through the fast gating variable $u$. For a population of $n$ neurons the governing equations are: \begin{eqnarray} \label{eqn:FHN2} \dot{u_i}&=&c(-v_i+u_i-u_i^3/3+I)+\sum_{j=1}^n k_{i,j}u_j\nonumber \\ \dot{v_i}&=&u_i-bv_i+a_i, \end{eqnarray} where $k_{i,j}$ represents the coupling strength between neurons $i$ and $j$.
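A minimal simulation sketch (Python with NumPy; the forward-Euler step size, time horizon and initial conditions are our own choices, with the other parameter values matching the figures) integrates a pair of neurons coupled through Equation \ref{eqn:FHN2}:

\begin{verbatim}
import numpy as np

a, b, c, I = 0.7, 0.8, 10.0, 0.5     # parameter values as in the figures
k = 0.01                             # coupling strength
dt, steps = 0.001, 200_000

u = np.array([0.0, 1.0])             # distinct initial fast voltages
v = np.array([0.0, 0.0])
K = np.array([[0.0, k], [k, 0.0]])   # coupling matrix k_{i,j}

for _ in range(steps):
    du = c * (-v + u - u**3 / 3 + I) + K @ u
    dv = u - b * v + a
    u, v = u + dt * du, v + dt * dv

print(abs(u[0] - u[1]))  # shrinks as the pair synchronizes
\end{verbatim}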
For $k>0$ we tend to observe synchronization between neurons $i$ and $j$, and for $k<0$ the neurons tend to desynchronize.\\ Although we present here a very simplified form of coupling, we are, in essence, retaining the underlying neural dynamics of excitation and inhibition observed in biology.\\ Figure \ref{fig:FN2Sync} demonstrates synchronization in two coupled Fitzhugh-Nagumo neurons. Although beginning with different dynamics, they rapidly synchronize. A perturbation to one neuron will in turn affect the other and synchronization will be restored.\\ \begin{figure} \begin{center} \includegraphics*[height=40mm, width=70mm]{FN2Sync.eps} \caption{\label{fig:FN2Sync} {\bf{Synchronization in 2 Coupled Fitzhugh Nagumo Equations:}} The parameter values are: $a=0.7;b=0.8;c=10;I=0.5;$ The equations are coupled through the fast gating variable ($u$) with coupling strength $0.01$. From non-identical initial conditions the neurons quickly synchronize.} \end{center} \end{figure} We can, by selecting suitable coupling strengths, cause larger populations of neurons to form into groups (known as clusters) performing similar actions. A variety of cluster states can be achieved using varying coupling strengths between the neurons. \\ \begin{figure} \begin{center} \includegraphics*[height=40mm, width=70mm]{FN2Desync.eps} \caption{\label{fig:FN2Desync} {\bf{Desynchronization in 2 Coupled Fitzhugh Nagumo Equations:}}(Parameter values as in \ref{fig:FN2Sync} with $k=-0.1$). From identical initial conditions the oscillators quickly desynchronize to give alternating spikes.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics*[height=40mm, width=70mm]{FN3Desync.eps}~~ \includegraphics*[height=40mm, width=70mm]{FN4Desync.eps} \caption{\label{fig:FN3Desync} {\bf{Phase Shifted Synchronization in 3 and 4 Coupled Fitzhugh Nagumo Equations:}}(Parameter values as in \ref{fig:FN2Sync}). From identical initial conditions the oscillators quickly desynchronize - the resulting dynamics gives spikes evenly distributed within each period.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics*[height=40mm, width=70mm]{FN5Cluster.eps} \caption{\label{fig:FN5Desync} {\bf{Phase Shifted Synchronization (Clustering) in 5 Coupled Fitzhugh-Nagumo Equations:}}(Parameter values as in \ref{fig:FN2Sync}). From non identical initial conditions the oscillators form into a cluster of 2 and a cluster of 3. The clusters are evenly distributed around the phase.} \end{center} \end{figure} It is straightforward, using Equation \ref{eqn:FHN2}, to cause, for instance, a population of $5$ neurons similar to Figure \ref{fig:FN5Desync} to exhibit all possible clusterings.\\ The number of possible arrangements of such clusters is given by the number of \textit{set partitions} on those neurons. For instance, a very small brain consisting of $3$ neurons may organize into all $3$ neurons acting in unison, $2$ neurons acting in unison with $1$ acting independently, and so on. The set of all $5$ possible cluster formations can be summarized as: \begin{eqnarray*} & &\{ \{1, 2, 3\} \}\\ \nonumber & &\{ \{1, 2\}, \, \{3\} \}\\ \nonumber & &\{ \{1, 3\}, \, \{2\} \}\\ \nonumber & &\{ \{2, 3\}, \, \{1\} \}\\ \nonumber & &\{ \{1\}, \, \{2\}, \, \{3\}\}. \nonumber \end{eqnarray*} The enumeration of the set partitions for a given number of objects ($n$) is given by the Bell number $B(n)$.
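As a check on this enumeration, a short sketch (Python; the function name is our own) computes $B(n)$ exactly via the recurrence stated next, confirming the $5$ cluster formations for $3$ neurons:

\begin{verbatim}
from math import comb

def bell(n):
    # B_m = sum_{k=0}^{m-1} B_k * C(m-1, k), with B_0 = 1
    B = [1]
    for m in range(1, n + 1):
        B.append(sum(B[k] * comb(m - 1, k) for k in range(m)))
    return B[n]

print([bell(n) for n in range(8)])  # 1, 1, 2, 5, 15, 52, 203, 877
print(bell(3))                      # the 5 cluster formations above
\end{verbatim}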
Bell numbers can be calculated using the recurrence relation: \begin{equation} B_n=\sum_{k=0}^{n-1}B_k \left( \begin{array}{c} n-1 \\ k \end{array}\right). \end{equation} The first $12$ Bell numbers are: 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975, 678570. $B(15)$ is $190899322$ (indexing the sequence from $B(1)=B(2)=1$), which is only bell(bell(5)).\\ As Bell numbers increase very rapidly it is not possible to directly calculate the exact number of set partitions for $n=8.6\times10^{12}$; the computational requirements make this infeasible. We can approximate large Bell numbers using the asymptotic approximation \cite{Lovasz}: \begin{equation} B(n) \sim \frac{1}{\sqrt{n}}\left(\frac{n}{W(n)}\right)^{n+\frac{1}{2}}e^{\left(\frac{n}{W(n)}-n-1\right)}, \end{equation} where $W(n)$ is the Lambert W function.\\ This gives an initial upper bound, based solely on the number of set partitions (and therefore synchronization cluster states), of: \begin{equation} B(8.6\times 10^{12}) \sim 5.927 \times 10^{95,401,985,845,526}, \end{equation} which is considerably larger than the previous estimate in Science and massively larger than the total computing power of the world to date. Score one for evolution!\\ \section{Ordering of Cluster States} So far we have only considered the number of possible cluster states that $8.6\times10^{12}$ neurons could exist in. We are only considering set partitions, which are not ordered. For instance $$\{ \{1, 2\}, \, \{3\}\} = \{\{3\}, \{1,2\} \},$$ but what if the temporal order in which each cluster of neurons fired was also a part of the coding mechanism? We now have to consider the number of {\em{permutations}} of the possible cluster states.\\ For a set of $n$ elements the number of permutations is $P(n)=n!$ - however it would be an oversimplification to just take the factorial of the Bell number calculated above.\\ Consider a brain in which the neurons had formed into $3$ clusters and the order in which the clusters fire is relevant. The possible orderings are: \begin{eqnarray*} {1,2,3}\\ {1,3,2}\\ {2,1,3}\\ {2,3,1}\\ {3,1,2}\\ {3,2,1} \end{eqnarray*} but from a coding perspective many of these would be equivalent. For instance, if we take the first permutation and imagine the neuron clusters repeatedly firing in this order, we would have the firing pattern $1,2,3,1,2,3,1,2,3,\dots$ which would be the same ordering as if we took the fourth or fifth example above - we need to discount any {\em{cyclic permutations}} of orderings which we have already considered. The formula for this is given as \cite{Weisstein} \begin{equation} P(n)=(n-1)! \end{equation}\\ If, as previously explained, we allow for all possible cluster states, a brain of $n$ neurons could form into any number of clusters $n_c \in \{1,\, 2,\, \dots,\, n\}$ where $n_c=n$ would be the completely desynchronized state and $n_c=1$ would be completely synchronized (neither of which would be particularly healthy).\\ We are therefore required to compute \begin{equation} \sum_{k=1}^n(k-1)! \textrm{ for } n=5.927 \times 10^{95,401,985,845,526}. \end{equation} We can approximate the factorial sum using the expansion \begin{equation} \sum_{k=1}^nk! \sim n!\left(1+\frac{1}{n}+\frac{1}{n^2}+\frac{2}{n^3}+\frac{5}{n^4}+\frac{15}{n^5}+\mathcal{O}\left(\frac{1}{n^6}\right)\right), \end{equation} which can be derived from Stirling's formula \cite{abramowitz}.\\ Again it is not possible to directly determine $(n-1)!$ for such a large number but we can approximate the factorial using the asymptotic formula of Ramanujan \cite{Karatsuba}, which gives: \begin{equation} \label{eqn:Ramanujan} n!
\sim \sqrt{\pi}\left(\frac{n}{e}\right)^n \sqrt[6]{8n^3+4n^2+n+\frac{1}{30}}. \end{equation} Clearly for such a large $n$ the $4n^2+n+\frac{1}{30}$ terms in Eqn. \ref{eqn:Ramanujan} are significantly smaller than the $n^3$ term and as such will not be considered. Taking logarithms (base $10$) of Eqn. \ref{eqn:Ramanujan} gives us the approximation: \begin{eqnarray} \log(n!) & \sim &\log(\sqrt{\pi}\left(\frac{n}{e}\right)^n \sqrt[6]{8n^3}) \nonumber\\ & \sim & \log\sqrt{\pi}+n\log\left(\frac{n}{e}\right)+\log\sqrt[6]{8n^3} \nonumber\\ & \sim & n\log\left(\frac{n}{e}\right) \end{eqnarray} if we consider only the highest order terms.\\ This gives us a final estimate of the upper bound on the number of computational states for the human brain to be of the order:\\ {\Large{$$10^{565447570106432 \times 10^{95,401,985,845,526}}$$}}\\ which is considerably larger than the total number $2^n$ of binary states possible \footnote{where $n$ is the number of transistors on the planet today} if every computer, mobile phone, pocket calculator and wi-fi enabled refrigerator ever built were wired together into one giant energy-sucking super-machine. Yet we power a human brain for an hour on the calorific content of one apple. \footnote{The average human brain uses about 20 Watts \cite{Drubach}; the total power of all computing devices is in the order of many petaWatts, i.e. $>10^{15}$ W.} \bibliographystyle{plain}
\section{Batch-Channel Normalization}\label{sec:norm} The previous section discusses elimination singularities and shows WS is able to keep models away from them. To fully address the issue of elimination singularities, we propose Batch-Channel Normalization (BCN). This section presents the definition of BCN, discusses why adding batch statistics to channel normalization is not redundant, and shows how BCN runs in large-batch and micro-batch training settings. \subsection{Definition} Batch-Channel Normalization (BCN) adds batch information and constraints to channel-based normalization methods. Let $\vect{X}\in\mathds{R}^{B\times C\times H\times W}$ be the features to be normalized. Then, the normalization is done as follows. $\forall c$, \begin{equation}\label{eq:bcnbn} \vect{\dot{X}}_{\cdot c\cdot\cdot}=\gamma_c^b \dfrac{\vect{X}_{\cdot c\cdot\cdot} - \hat{\mu}_{ c}}{\hat{\sigma}_{c}}+\beta_{c}^b, \end{equation} where the purpose of $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$ is to make \begin{equation}\label{eq:bcn_purpose} \mathbb{E}\Big\{\dfrac{\vect{X}_{\cdot c\cdot\cdot} - \hat{\mu}_{ c}}{\hat{\sigma}_{c}}\Big\}= 0~\text{and}~\mathbb{E}\Big\{\big(\dfrac{\vect{X}_{\cdot c\cdot\cdot} - \hat{\mu}_{ c}}{\hat{\sigma}_{c}}\big)^2\Big\}= 1. \end{equation} Then, $\vect{\dot{X}}$ is reshaped as $\vect{\dot{X}}\in\mathds{R}^{B\times G\times C/G\times H\times W}$ to have $G$ groups of channels. Next, $\forall g, b$, \begin{equation}\label{eq:bcngn} \vect{\dot{Y}}_{bg\cdot\cdot\cdot} = \gamma_g^c\dfrac{\vect{\dot{X}}_{bg\cdot\cdot\cdot} - \mu_{bg\cdot\cdot\cdot}}{\sigma{_{bg\cdot\cdot\cdot}}}+ \beta_g^c. \end{equation} Finally, $\vect{\dot{Y}}$ is reshaped back to $\vect{Y}\in\mathds{R}^{B\times C\times H\times W}$, which is the output of the Batch-Channel Normalization. \subsection{Large- and Micro-batch Implementations} Note that in Eq.~\ref{eq:bcnbn} and \ref{eq:bcn_purpose}, only two statistics need batch information: $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$, as their values depend on more than one sample. Depending on how we obtain the values of $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$, we have different implementations for large-batch and micro-batch training settings. \subsubsection{Large-batch training} When the batch size is large, estimating $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$ is easy: we just use a Batch Normalization layer to achieve the function of Eq.~\ref{eq:bcnbn} and \ref{eq:bcn_purpose}. As a result, the proposed BCN can be written as \begin{equation} \text{BCN}(\vect{X})=\text{CN}(\text{BN}(\vect{X})). \end{equation} Implementing it is also easy with modern deep learning libraries; a minimal sketch is given below. \subsubsection{Micro-batch training} One of the motivations of channel normalization is to allow deep networks to train on tasks where the batch size is limited by the GPU memory. Therefore, it is important for Batch-Channel Normalization to be able to work in the micro-batch training setting.
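As a minimal sketch of the large-batch composition $\text{CN}(\text{BN}(\vect{X}))$ described above (PyTorch is assumed; the function name and the default group count are our own choices, and GroupNorm's per-channel affine parameters stand in for the per-group $\gamma_g^c$ and $\beta_g^c$):

\begin{verbatim}
import torch.nn as nn

def bcn_large_batch(num_channels: int, num_groups: int = 32) -> nn.Module:
    # BatchNorm2d supplies the per-channel batch statistics;
    # GroupNorm then applies the channel/group normalization.
    return nn.Sequential(
        nn.BatchNorm2d(num_channels),
        nn.GroupNorm(num_groups, num_channels),
    )
\end{verbatim}

Algorithm~\ref{alg:1} below gives the micro-batch variant, in which the Batch Normalization statistics are replaced by running estimates.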
\begin{algorithm}[t] \SetAlgoLined \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \KwInput{$\vect{X}\in\mathds{R}^{B\times C\times H\times W}$, the current estimates of $\hat{\mu}_c$ and $\hat{\sigma}^2_c$, and the update rate $r$.} \KwOutput{Normalized $\vect{Y}$.} Compute $\dot{\mu}_{c}\leftarrow\frac{1}{BHW}\sum_{b,h,w}\vect{X}_{b,c,h,w}$\; Compute $\dot{\sigma}^2_{c}\leftarrow\frac{1}{BHW}\sum_{b,h,w} \big( \vect{X}_{b,c,h,w} - \hat{\mu}_c \big)^2$\; Update $\hat{\mu}_c\leftarrow \hat{\mu}_c + r (\dot{\mu}_c - \hat{\mu}_c)$\; Update $\hat{\sigma}^2_c\leftarrow \hat{\sigma}^2_c + r (\dot{\sigma}^2_c - \hat{\sigma}^2_c)$\; Normalize $\vect{\dot{X}}_{\cdot c\cdot\cdot}=\gamma_c^b \dfrac{\vect{X}_{\cdot c\cdot\cdot} - \hat{\mu}_{ c}}{\hat{\sigma}_{c}}+\beta_{c}^b$\; Reshape $\vect{\dot{X}}$ to $\vect{\dot{X}}\in\mathds{R}^{B\times G\times C/G\times H\times W}$\; Normalize $\vect{\dot{Y}}_{bg\cdot\cdot\cdot} = \gamma_g^c\dfrac{\vect{\dot{X}}_{bg\cdot\cdot\cdot} - \mu_{bg\cdot\cdot\cdot}}{\sigma{_{bg\cdot\cdot\cdot}}}+ \beta_g^c$\; Reshape $\vect{\dot{Y}}$ to $\vect{{Y}}\in\mathds{R}^{B\times C\times H\times W}$\; \caption{Micro-batch BCN}\label{alg:1} \end{algorithm} Algorithm~\ref{alg:1} shows the feed-forward implementation of the micro-batch Batch-Channel Normalization. The basic idea behind this algorithm is to constantly estimate the values of $\hat{\mu}_c$ and $\hat{\sigma}_c$, which are initialized as $0$ and $1$, respectively, and to normalize $\vect{X}$ based on these estimates. It is worth noting that in the algorithm, $\hat{\mu}_c$ and $\hat{\sigma}_c$ are not updated by the gradients computed from the loss function; instead, they are updated towards more accurate estimates of those statistics. Steps 3 and 4 in Algorithm~\ref{alg:1} resemble the update steps in gradient descent; thus, the implementation can also be written in gradient descent form by storing the differences $\Delta\hat{\mu}_c$ and $\Delta\hat{\sigma}_c$ as their gradients. Moreover, we set the update rate $r$ to be the learning rate of the trainable parameters. Algorithm~\ref{alg:1} also raises an interesting question: when researchers studied the micro-batch issue of BN before, why did they not just use the estimates to batch-normalize the features? In fact, \cite{batchrenorm} tries a similar idea, but does not fully solve the micro-batch issue: it needs a bootstrap phase to make the estimates meaningful, and the performances are usually not satisfactory. The underlying difference between micro-batch BCN and \cite{batchrenorm} is that BCN has a channel normalization following the estimate-based normalization. This makes the previously unstable estimate-based normalization stable, and the reduction of Lipschitz constants which speeds up training is also done in the channel-based normalization part, which is impossible to do in estimate-based normalization. In summary, \textit{channel-based normalization makes estimate-based normalization possible, and estimate-based normalization helps channel-based normalization to keep models away from elimination singularities}. \begin{table}[t] \small \setlength{\tabcolsep}{1.2em} \centering \begin{tabular}{l|c|l|c} \toprule Method & Top-1 & Method & Top-1 \\ \midrule LN~\cite{layernorm} & 27.22 & LN+WS & 24.60 \\ IN~\cite{instnorm} & 29.49 & IN+WS & 28.24 \\ GN~\cite{groupnorm} & 24.81 & GN+WS & 23.72 \\ BN~\cite{batchnorm} & 24.30 & BN+WS & 23.76 \\ \bottomrule \end{tabular} \vspace{0.05in} \caption{Top-1 error rates of ResNet-50 on ImageNet.
All models except BN are trained with batch size 1 per GPU. BN models are trained with batch size 64 per GPU.} \label{tab:abl} \end{table} \subsection{Is Batch-Channel Normalization Redundant?} Batch- and channel-based normalizations are similar in many ways. Is BCN thus redundant as it normalizes normalized features? Our answer is \textbf{no}. Channel normalizations need batch knowledge to keep the models away from elimination singularities; at the same time, channel normalization also brings benefits to the batch-based normalization, including: {\noindent\bf Batch knowledge without large batches.} Since BCN runs in both large-batch and micro-batch settings, it provides a way to utilize batch knowledge to normalize activations without relying on large training batch sizes. {\noindent\bf Additional non-linearity.} Batch Normalization is linear in the test mode or when the batch size is large in training. By contrast, channel-based normalization methods, as they normalize each sample individually, are not linear. They will add strong non-linearity and increase the model capacity. {\noindent\bf Test-time normalization.} Unlike BN, which relies on estimated statistics on the training dataset for testing, channel normalization normalizes testing data again, thus allowing the statistics to adapt to different samples. As a result, channel normalization will be more robust to statistical changes and show better generalizability for unseen data. \section{Conclusion} In this paper, we proposed two novel normalization methods, Weight Standardization (WS) and Batch-Channel Normalization (BCN), to bring the success factors of Batch Normalization (BN) into micro-batch training, including 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers and BCN leverages estimated batch statistics of the activations in convolutional layers. We provided theoretical analysis to show that WS reduces the Lipschitz constants of the loss and the gradients, and thus it smooths the loss landscape. By investigating normalization methods from the perspective of elimination singularities, we found that channel-based normalization methods, such as Layer Normalization (LN) and Group Normalization (GN), are unable to keep models far from elimination singularities, owing to their lack of batch knowledge. We showed that WS is able to alleviate this issue and that BCN can further push models away from elimination singularities by incorporating estimated batch statistics into channel-normalized models. Experiments on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation, demonstrate 1) WS and BCN improve micro-batch training significantly, 2) WS+GN with batch size 1 is even able to match or outperform the performances of BN with large batch sizes, and 3) replacing GN by BCN leads to further improvement.
\section{Experimental Results}\label{sec:exp}

\begin{table*}[t] \small \centering \begin{tabular}{l|cc|cc|cc|cc|cc} \toprule Method -- Batch Size & \multicolumn{2}{c|}{BN~\cite{batchnorm} -- 64 / 32} & \multicolumn{2}{c|}{SN~\cite{switchnorm} -- 1} & \multicolumn{2}{c|}{GN~\cite{groupnorm} -- 1} & \multicolumn{2}{c|}{BN+WS -- 64 / 32} & \multicolumn{2}{c}{GN+WS -- 1} \\ \cmidrule{2-11} & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 \\ \midrule ResNet-50~\cite{resnet} & 24.30 & 7.19 & 25.00 & -- & 24.81 & 7.46 & 23.76 & 7.13 & 23.72 & 6.99 \\ ResNet-101~\cite{resnet} & 22.44 & 6.21 & -- & -- & 22.87 & 6.51 & 21.89 & 6.01 & 22.10 & 6.07 \\ \bottomrule \end{tabular} \vspace{0.05in} \caption{Error rates of ResNet-50 and ResNet-101 on ImageNet. ResNet-50 models with BN are trained with a batch size of 64 per GPU, and ResNet-101 models with BN are trained with 32 images per GPU. The other models are trained with 1 image per GPU.} \label{tab:imagenet_ws} \end{table*}

In this section, we present the experimental results of our proposed Weight Standardization and Batch-Channel Normalization, including image classification on CIFAR-10/100~\cite{cifar} and ImageNet~\cite{ILSVRC15}, object detection and instance segmentation on COCO~\cite{coco}, video recognition on the Something-SomethingV1 dataset~\cite{something}, and semantic segmentation on PASCAL VOC~\cite{pascal}.

\subsection{Image Classification on ImageNet}
\subsubsection{Weight Standardization}

ImageNet is a large-scale image classification dataset. It has about 1.28 million training samples and 50K validation images, covering 1000 categories, each with roughly 1300 training images and exactly 50 validation samples. Table~\ref{tab:abl} shows the top-1 error rates of ResNet-50 on ImageNet when it is trained with different normalization methods, including Layer Normalization~\cite{layernorm}, Instance Normalization~\cite{instnorm}, Group Normalization~\cite{groupnorm} and Batch Normalization. From Table~\ref{tab:abl}, we can see that when the batch size is limited to 1, GN+WS achieves performances comparable to BN with a large batch size. We therefore use GN+WS for micro-batch training, as GN shows the best results among all the normalization methods that can be trained with 1 image per GPU.

Table~\ref{tab:imagenet_ws} shows our main experimental results of WS on the ImageNet dataset~\cite{ILSVRC15}. Note that Table~\ref{tab:imagenet_ws} only shows the error rates of ResNet-50 and ResNet-101; this is to compare with previous work that focuses on the micro-batch training problem, \textit{e.g.}, Switchable Normalization~\cite{switchnorm} and Group Normalization~\cite{groupnorm}. We run all the experiments using the official PyTorch implementations of the layers, except for SN~\cite{switchnorm}, whose numbers are taken from its paper. This ensures that all the experimental results are comparable and that our improvements are reproducible.

\begin{table}[t] \small \setlength{\tabcolsep}{0.8em} \centering \begin{tabular}{l|ccc|c} \toprule Backbone & WN & CWN & WS & Top-1 \\ \midrule ResNet-50 + GN & & & & 24.81 \\ ResNet-50 + GN & \cmark & & & 25.09 \\ ResNet-50 + GN & & \cmark & & 24.23 \\ ResNet-50 + GN & & & \cmark & 23.72 \\ \bottomrule \end{tabular} \vspace{0.05in} \caption{Comparing Top-1 error rates between WS, WN and CWN on ImageNet.
The backbone is a ResNet-50 normalized by GN and trained with a batch size of 1 per GPU.} \label{tab:cwn} \end{table}

Table~\ref{tab:cwn} compares WS with other weight-based normalization methods, including WN and CWN. For the comparisons, we train the same GN-normalized ResNet-50 with the different weight-based normalization methods. For WN we use the official PyTorch implementation, and for CWN the official implementation from its GitHub repository. From the results, we observe that these normalization methods have different effects on model performance: compared with WN and CWN, the proposed WS achieves the lowest top-1 error rate.

Table~\ref{tab:ind_abl} shows the individual effects of Eq.~\ref{eq:9} and \ref{eq:10} on training deep neural networks. Consistent with Fig.~\ref{fig:mean_div}, Eq.~\ref{eq:9} is the major component that brings performance improvements. These results are also consistent with our theoretical results from the Lipschitz analysis.

\begin{table}[t] \small \setlength{\tabcolsep}{0.8em} \centering \begin{tabular}{l|cc|c} \toprule Backbone & - mean & / std & Top-1 \\ \midrule ResNet-50 + GN & & & 24.81 \\ ResNet-50 + GN & \cmark & & 23.96 \\ ResNet-50 + GN & & \cmark & 24.60 \\ ResNet-50 + GN & \cmark & \cmark & 23.72 \\ \bottomrule \end{tabular} \vspace{0.05in} \caption{Top-1 error rates of the individual components of WS (``- mean'': Eq.~\ref{eq:9}, and ``/ std'': Eq.~\ref{eq:10}). The backbone is a ResNet-50-GN trained with a batch size of 1 per GPU.} \label{tab:ind_abl} \end{table}

\begin{table}[t] \small \centering \begin{tabular}{l|cc|cc} \toprule Method & \multicolumn{2}{c|}{GN~\cite{groupnorm}} & \multicolumn{2}{c}{GN+WS~\cite{groupnorm}} \\ \cmidrule{2-5} Batch Size = 1 & Top-1 & Top-5 & Top-1 & Top-5 \\ \midrule ResNeXt-50~\cite{resnext} & 24.24 & 7.27 & 22.71 & 6.38\\ ResNeXt-101~\cite{resnext} & 22.86 & 6.51 & 21.80 & 6.03\\ \bottomrule \end{tabular} \vspace{0.05in} \caption{ResNeXt-50 and ResNeXt-101 on ImageNet. All models are trained with a batch size of 1 per GPU.} \label{tab:resnext} \end{table}

In Table~\ref{tab:resnext}, we also provide the experimental results on ResNeXt~\cite{resnext}, comparing ResNeXt+GN with ResNeXt+GN+WS. Note that the original GN paper did not provide results on ResNeXt. Without tuning the hyper-parameters of the Group Normalization layers, we use 32 groups for each of them, the default configuration for ResNet, for which GN was originally proposed. ResNeXt-50 and -101 are of the 32$\times$4d configuration. We train the models for 100 epochs with the batch size set to 1 and the iteration size set to 32. As the table shows, the performance of GN on ResNeXt is unsatisfactory: the models perform close to the original ResNets. In the same setting, WS makes training ResNeXt much easier.

\subsubsection{Batch-Channel Normalization}

Fig.~\ref{fig:imagenet} shows the training dynamics of ResNet-50 with GN, GN+WS and BCN+WS, and Table~\ref{tab:imagenet} shows the top-1 and top-5 error rates of ResNet-50 and ResNet-101 trained with different normalization methods. From the results, we observe that adding batch information to channel-based normalizations strongly improves their accuracy. As a result, GN, whose performance is similar to BN when used with WS, is now able to achieve better results than the BN baselines. We find improvements not only in the final model accuracy, but also in the training speed.
As shown in Fig.~\ref{fig:imagenet}, we see a big drop of the training error rates at each epoch. This demonstrates that the model is now farther from elimination singularities, resulting in easier and faster learning.

\begin{figure} \centering \includegraphics[width=\linewidth]{img/rn50_pbn.pdf} \caption{Training and validation error rates of ResNet-50 on ImageNet. The comparison is between the baseline GN~\cite{groupnorm}, GN+WS, and Batch-Channel Normalization (BCN) with WS. Our BCN and WS not only significantly improve the training speed, but also lower the error rates of the final models by a comfortable margin.} \label{fig:imagenet} \end{figure}

\begin{table}[] \small \centering \begin{tabular}{l|ccc|cc} \toprule Backbone & GN & WS & BCN & Top-1 & Top-5 \\ \midrule ResNet-50 & \cmark & & & 24.81 & 7.46 \\ ResNet-50 & \cmark & \cmark & & 23.72 & 6.99 \\ ResNet-50 & & \cmark & \cmark & \bf23.09 & \bf6.55 \\\midrule ResNet-101 & \cmark & & & 22.87 & 6.51 \\ ResNet-101 & \cmark & \cmark & & 22.10 & 6.07 \\ ResNet-101 & & \cmark & \cmark & \bf 21.29 & \bf 5.60 \\ \bottomrule \end{tabular} \caption{Top-1/5 error rates of ResNet-50 and ResNet-101 on ImageNet. The test size is $224\times224$ with center cropping. All models are trained with a batch size of $32$ or $64$ per GPU without synchronization.} \label{tab:imagenet} \end{table}

\subsubsection{Experiment settings}

Here, we list the hyper-parameters used to obtain the above results. For all models, the learning rate is set to 0.1 initially and is multiplied by $0.1$ after every 30 epochs. We use SGD to train the models, with the weight decay set to 0.0001 and the momentum set to 0.9. For ResNet-50 with BN or BN+WS, the total training batch size is set to 256 on 4 GPUs. Without synchronized BN~\cite{syncbn}, the effective batch size is 64. For the other ResNet-50 models, where the batch size is $1$ per GPU, we set the iteration size to $64$, \textit{i.e.}, the gradients are averaged across every 64 iterations and then one step is taken. This ensures fair comparisons, because the total numbers of parameter updates are then the same even though the batch sizes are different. We train ResNet-50 with the different normalization techniques for 90 epochs. For ResNet-101, we set the batch size to 128 because some of the models use more than 12GB per GPU when the batch size is set to 256. In total, we train all ResNet-101 models for 100 epochs. Similarly, we set the iteration size of models trained with 1 image per GPU to 32 in order to match the total number of parameter updates.

\subsection{Image Classification on CIFAR}

CIFAR has two image datasets, CIFAR-10 (C10) and CIFAR-100 (C100). Both C10 and C100 consist of color images of size $32\times 32$; C10 has 10 categories while C100 has 100 categories. Each dataset has 50,000 images for training and 10,000 images for testing, and the categories are balanced in terms of the number of samples. In all the experiments shown here, the standard data augmentation schemes, \textit{i.e.}, mirroring and shifting, are used for these two datasets. We also standardize each channel of the datasets for data pre-processing.

Table~\ref{tab:cifar1} shows the experimental results that compare our proposed BCN with BN and GN. The results are grouped into 4 parts based on whether the training is large-batch or micro-batch, and whether the dataset is C10 or C100.
On C10, our proposed BCN is better than BN in large-batch training, and is better than GN (with or without WS), which is specifically designed for micro-batch training. Here, micro-batch training assumes a batch size of 1, and RN110 is the 110-layer ResNet~\cite{resnet} with the basic block as its building block. The number of groups for GN is $\min\{32, (\text{the number of channels}) / 4\}$.

Table~\ref{tab:cifar2} shows comparisons with more recent normalization methods, Switchable Normalization (SN)~\cite{switchnorm} and Dynamic Normalization (DN)~\cite{dynamicnorm}, which were evaluated on a ResNet variant for CIFAR: ResNet-18. To provide readers with direct comparisons, we also evaluate BCN on ResNet-18, with the group number set to $32$ for models that use GN. Again, all the results are organized based on whether they are trained in the micro-batch setting. Based on the results shown in Tables~\ref{tab:cifar1} and \ref{tab:cifar2}, it is clear that BCN outperforms the baselines in both the large-batch and micro-batch training settings.

\begin{table}[] \small \centering \begin{tabular}{ll|c|cccc|c} \toprule & Model & Micro & BN & GN & BCN & WS & Error \\\midrule C10 & RN110 & & \cmark & & & & 6.43 \\ C10 & RN110 & & & & \cmark & \cmark & 5.90 \\\midrule C10 & RN110 & \cmark & & \cmark & & & 7.45 \\ C10 & RN110 & \cmark & & \cmark & & \cmark & 6.82 \\ C10 & RN110 & \cmark & & & \cmark & \cmark & 6.31 \\ \midrule C100 & RN110 & & \cmark & & & & 28.86 \\ C100 & RN110 & & & & \cmark & \cmark & 28.36 \\\midrule C100 & RN110 & \cmark & & \cmark & & & 32.86 \\ C100 & RN110 & \cmark & & \cmark & & \cmark & 29.49 \\ C100 & RN110 & \cmark & & & \cmark & \cmark & 28.28 \\ \bottomrule \end{tabular} \caption{Error rates of a 110-layer ResNet~\cite{resnet} on CIFAR-10/100~\cite{cifar} trained with BN~\cite{batchnorm}, GN~\cite{groupnorm}, and our BCN and WS. The results are grouped based on dataset and large/micro-batch training. Micro-batch training assumes $1$ sample per batch while large-batch training uses 128 samples in each batch. The WS column indicates whether WS is used for the weights.} \label{tab:cifar1} \end{table}

\begin{table}[] \small \setlength{\tabcolsep}{1.2em} \centering \begin{tabular}{cc|cc|c} \toprule Dataset & Model & Micro & Method & Error \\\midrule C10 & RN18 & & BN & 5.20 \\ C10 & RN18 & & SN & 5.60 \\ C10 & RN18 & & DN & 5.02 \\ C10 & RN18 & & BCN+WS & 4.96 \\\midrule C10 & RN18 & \cmark & BN & 8.45 \\ C10 & RN18 & \cmark & SN & 7.62 \\ C10 & RN18 & \cmark & DN & 7.55 \\ C10 & RN18 & \cmark & BCN+WS & 5.43 \\ \bottomrule \end{tabular} \caption{Error rates of ResNet-18 on CIFAR-10 trained with SN~\cite{switchnorm}, DN~\cite{dynamicnorm}, and our BCN and WS. The results are grouped based on large/micro-batch training. The performances of BN, SN and DN are from \cite{dynamicnorm}.
Micro-batch training for BN, SN and DN uses 2 images per batch, while BCN uses 1.} \label{tab:cifar2} \end{table}

\subsection{Object Detection and Instance Segmentation}

\begin{table*}[] \small \centering \begin{tabular}{c|ccc|ccc|ccc|ccc|ccc} \toprule Model & GN & WS & BCN & AP$^b$ & AP$^b_{.5}$ & AP$^b_{.75}$ & AP$^b_{l}$ & AP$^b_{m}$ & AP$^b_{s}$ & AP$^m$ & AP$^m_{.5}$ & AP$^m_{.75}$ & AP$^m_{l}$ & AP$^m_{m}$ & AP$^m_{s}$\\\midrule RN50 & \cmark & & & 39.8 & 60.5 & 43.4 & 52.4 & 42.9 & 23.0 & 36.1 & 57.4 & 38.7 & 53.6 & 38.6 & 16.9 \\ RN50 & \cmark & \cmark & & 40.8 & 61.6 & 44.8 & 52.7 & 44.0 & 23.5 & 36.5 & 58.5 & 38.9 & 53.5 & 39.3 & 16.6\\ RN50 & & \cmark & \cmark & \bf 41.4 & \bf 62.2 & \bf 45.2 & \bf 54.7 & \bf 45.0 & \bf 24.2 & \bf 37.3 & \bf 59.4 & \bf 39.8 & \bf 55.0 & \bf 40.1 & \bf 17.9 \\\midrule RN101 & \cmark & & & 41.5 & 62.0 & 45.5 & 54.8 &45.0 &24.1 & 37.0 &59.0 &39.6 & 54.5 &40.0 &17.5\\ RN101 & \cmark & \cmark & & 42.7 & 63.6 & 46.8 & 56.0 & 46.0 & 25.7 & 37.9 & 60.4 & 40.7 & 56.3 & 40.6 & 18.2 \\ RN101 & & \cmark & \cmark & \bf43.6 & \bf64.4 & \bf47.9 & \bf57.4 & \bf47.5 & \bf25.6 & \bf 39.1 &\bf 61.4 &\bf 42.2 &\bf 57.3 &\bf 42.1 &\bf 19.1\\ \bottomrule \end{tabular} \caption{Object detection and instance segmentation results on COCO val2017~\cite{coco} of Mask R-CNN~\cite{maskrcnn} and FPN~\cite{fpn} with ResNet-50 and ResNet-101~\cite{resnet} as backbones. The models are trained with different normalization methods, which are used in their backbones, bounding box heads, and mask heads.} \label{tab:mask} \end{table*}

Unlike image classification on ImageNet, where we can afford large-batch training when the models are not too big, object detection and segmentation on COCO~\cite{coco} usually use 1 or 2 images per GPU for training due to the high input resolution. Given that our method achieves performances on ImageNet comparable to large-batch BN training, we expect it to significantly improve the performances on COCO because of this micro-batch training setting.

We use a PyTorch-based Mask R-CNN framework\footnote{https://github.com/facebookresearch/maskrcnn-benchmark} for all the experiments. We take the models pre-trained on ImageNet, fine-tune them on the COCO train2017 set, and test them on the COCO val2017 set. To maximize the fairness of comparisons, we use the models we pre-trained on ImageNet instead of downloading the pre-trained models available online. We use 4 GPUs to train the models and apply the learning rate schedules of all models following the practice used in the Mask R-CNN framework our work is based on. We use the 1X learning rate schedule for Faster R-CNN and the 2X learning rate schedule for Mask R-CNN. For ResNet-50, we use 2 images per GPU to train the models, and for ResNet-101, we use 1 image per GPU because the models cannot fit in 12GB of GPU memory. We then adapt the learning rates and the training steps accordingly. The configurations we run use FPN~\cite{fpn} and a 4conv1fc bounding box head. All the training procedures strictly follow the original settings.

Table~\ref{tab:mask} reports the Average Precision for bounding boxes (AP$^b$) and instance segmentation (AP$^m$), and Table~\ref{tab:fast} reports the Average Precision (AP) of Faster R-CNN trained with different methods. From the two tables, we observe results similar to those on ImageNet. GN has limited performance improvements when it is used on more complicated architectures such as ResNet-101.
But when we add WS to GN or use BCN, we are able to train the models much better. The improvements become more significant when the network complexity increases. As deep networks nowadays are becoming deeper and wider, a normalization technique such as our WS eases the training considerably, without worries about memory and batch size issues.

\begin{table}[] \small \setlength{\tabcolsep}{0.35em} \centering \begin{tabular}{c|ccc|ccc|ccc} \toprule Model & GN & WS & BCN & AP$^b$ & AP$^b_{.5}$ & AP$^b_{.75}$ & AP$^b_{l}$ & AP$^b_{m}$ & AP$^b_{s}$ \\\midrule RN50 & \cmark & & & 38.0 & 59.1 & 41.2 & 49.5 &40.9 &22.4 \\ RN50 & \cmark & \cmark & & 38.9 & 60.4 & 42.1 & 50.4 &42.4 &23.5 \\ RN50 & & \cmark & \cmark & \bf 39.7 & \bf 60.9 & \bf 43.1 & \bf 51.7 & \bf 43.2 & \bf 24.0 \\\midrule RN101 &\cmark & & & 39.7 & 60.9 & 43.3 & 51.9 & 43.3 &23.1 \\ RN101 & \cmark & \cmark & & 41.3 & 62.8 & 45.1 & 53.9 & 45.2 & 24.7 \\ RN101 & & \cmark & \cmark & \bf41.8 & \bf 63.4 & \bf 45.8 & \bf 54.1 & \bf 45.6 & \bf 25.6 \\ \bottomrule \end{tabular} \caption{Object detection results on COCO using Faster R-CNN~\cite{fasterrcnn} and FPN with different normalization methods.} \label{tab:fast} \end{table}

\subsection{Semantic Segmentation on PASCAL VOC}

\begin{table}[] \small \centering \begin{tabular}{cc|cccc|c} \toprule Dataset & Model & GN & BN & WS & BCN & mIoU \\\midrule VOC Val & RN101 & \cmark & & & & 74.90 \\ VOC Val & RN101 & \cmark & & \cmark & & 77.20 \\\midrule VOC Val & RN101 & & \cmark & & & 76.49 \\ VOC Val & RN101 & & \cmark & \cmark & & 77.15 \\\midrule VOC Val & RN101 & & & \cmark & \cmark & 78.22 \\ \bottomrule \end{tabular} \caption{Comparisons of the semantic segmentation performance of DeepLabV3~\cite{deeplabv3} trained with different normalizations on the PASCAL VOC 2012~\cite{pascal} validation set. The output stride is 16, without multi-scale or flipping at test time.} \label{tab:voc} \end{table}

After evaluating BCN and WS on classification and detection, we test them on dense prediction tasks. We start with semantic segmentation on PASCAL VOC~\cite{pascal}. We choose DeepLabV3~\cite{deeplabv3} as the evaluation model for its good performance and its use of a pre-trained ResNet-101 backbone. Table~\ref{tab:voc} shows our results on PASCAL VOC, which has $21$ categories with the background included. We follow the common practice to prepare the dataset: the training set is augmented with the annotations provided in \cite{pascalextra}, and thus has 10,582 images. We take our ResNet-101 pre-trained on ImageNet and fine-tune it for the task. Here, we list all the implementation details for easy reproduction of our results: the batch size is set to $16$, the image crop size is $513$, and the learning rate follows a polynomial decay with an initial rate of $0.007$. The model is trained for $30K$ iterations, and the multi-grid is $(1,1,1)$ instead of $(1, 2, 4)$. For testing, the output stride is set to $16$, and we do not use multi-scale or horizontal-flipping test augmentation. As shown in Table~\ref{tab:voc}, by only changing the normalization methods from BN and GN to our BCN, the mIoU increases by about $2\%$, which is a significant improvement on the PASCAL VOC dataset. As we strictly follow the hyper-parameters used in the previous work, there could be even more room for improvement if we tuned them to favor BCN or WS; we do not explore this in this paper and leave it to future work.
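For completeness, the polynomial learning-rate decay used above can be written as a short schedule function; the decay power of 0.9 below is an assumption (a common DeepLab-style default), as the text does not specify it.

\begin{verbatim}
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial learning-rate decay; power=0.9 is an assumed
    DeepLab-style default, not stated in the text."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# e.g., base_lr=0.007 and max_iter=30000 as in the VOC setting above
for it in range(30000):
    lr = poly_lr(0.007, it, 30000)
    # ... set the optimizer's learning rate and run one training step
\end{verbatim}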
\subsection{Video Recognition on Something-Something}

\begin{table}[] \small \setlength{\tabcolsep}{0.6em} \centering \begin{tabular}{lc|cccc|cc} \toprule Model & \#Frame & GN & BN & WS & BCN & Top-1 & Top-5 \\ \midrule RN50 & 8 & \cmark & & & & 42.07 & 73.20 \\ RN50 & 8 & \cmark & & \cmark & & 44.26 & 75.51 \\ \midrule RN50 & 8 & & \cmark & & & 44.30 & 74.53 \\ RN50 & 8 & & \cmark & \cmark & & 46.49 & 76.46 \\ \midrule RN50 & 8 & & & \cmark & \cmark & 45.27 & 75.22 \\ \bottomrule \end{tabular} \caption{Comparing video recognition accuracies of TSM~\cite{tsm} on Something-SomethingV1~\cite{something}.} \label{tab:tsm} \end{table}

In this subsection, we show the results of applying our method to video recognition on the Something-SomethingV1 dataset~\cite{something}. Something-SomethingV1 is a video dataset that includes a large number of video clips showing humans performing pre-defined basic actions. The dataset has 86,017 clips for training and 11,522 clips for validation.

We use the state-of-the-art method TSM~\cite{tsm} for video recognition, which uses a ResNet-50 with BN as its backbone network. Our code is based on TRN~\cite{trn} and then adapted to TSM. The reimplementation differs from the original TSM~\cite{tsm}: we use models pre-trained on ImageNet rather than on the Kinetics dataset~\cite{kinetics} as the starting points. We then fine-tune the pre-trained models on Something-SomethingV1 for 45 epochs. The batch size is set to 32 on 4 GPUs, and the learning rate is initially set to 0.0125, then divided by 10 at the 26th and the 36th epochs. The batch normalization layers are not frozen during training. With all these changes, the reimplemented TSM-BN achieves a top-1/5 accuracy of 44.30/74.53, higher than the 43.4/73.2 originally reported in the paper.

Then, we compare the performances when different normalization methods are used in training TSM. Table~\ref{tab:tsm} shows the top-1/5 accuracies of TSM when trained with GN, GN+WS, BN and BN+WS. From the table, we can see that WS increases the top-1 accuracy by about $2\%$ for both GN and BN. The improvements help GN catch up with the performance of BN, and boost BN to even better accuracies, which roughly match the performance of the ensemble TSM with 24 frames reported in the paper. Although BCN improves the performance of GN, it does not surpass BN here; this shows a limitation of BCN.

\section{Introduction}\label{sec:intro}

Deep learning has advanced the state of the art in many vision tasks~\cite{deeplab,resnet}. Many deep networks use Batch Normalization (BN)~\cite{batchnorm} in their architectures because BN, in most cases, is able to accelerate training and help the models converge to better solutions. BN stabilizes training by controlling the first two moments of the distributions of the layer outputs in each mini-batch, and is especially helpful for training very deep networks that have hundreds of layers~\cite{resnetv2,densenet}. Despite its practical success, BN has the shortcoming that it works well only when the batch size is sufficiently large, which prohibits it from being used in micro-batch training. Micro-batch training, \emph{i.e.}, training with a small batch size, \emph{e.g.}, 1 or 2, is inevitable for many computer vision tasks, such as object detection and semantic segmentation, due to limited GPU memory.
This shortcoming has drawn a lot of attention from researchers and urged them to design normalization methods specifically for micro-batch training, such as Group Normalization (GN)~\cite{groupnorm} and Layer Normalization (LN)~\cite{layernorm}, but these methods have difficulty matching the performance of BN in large-batch training (Fig.~\ref{fig:front}).

\begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/front_mask.pdf} \caption{Comparing BN~\cite{batchnorm}, GN~\cite{groupnorm}, our WS used with GN, and WS used with BCN on ImageNet and COCO. On ImageNet, BN and BCN+WS are trained with large batch sizes while GN and GN+WS are trained with 1 image/GPU. On COCO, BN is frozen for micro-batch training, and BCN uses its micro-batch implementation. GN+WS outperforms both BN and GN comfortably, and BCN+WS further improves the performances.} \label{fig:front} \end{figure}

In this paper, our goal is to bring the success factors of BN into micro-batch training, but without relying on large batch sizes during training. This requires a good understanding of the reasons for BN's success, among which we focus on two factors:

\begin{enumerate}[wide] \item {\bf BN's smoothing effects:} \cite{whybnworks} proves that BN makes the landscape of the corresponding optimization problem significantly smoother, and is thus able to stabilize the training process and accelerate the convergence of training deep neural networks. \item {\bf BN avoids elimination singularities:} Elimination singularities refer to the points along the training trajectory where neurons in the networks get eliminated. Eliminable neurons waste computation and decrease the effective model complexity. Getting closer to them harms the training speed and the final performance. By forcing each neuron to have zero mean and unit variance, BN keeps the networks at far distances from elimination singularities caused by non-linear activation functions. \end{enumerate}

We find that these two success factors are not properly addressed by some methods specifically designed for micro-batch training. For example, channel-based normalizations, \textit{e.g.}, Layer Normalization (LN)~\cite{layernorm} and Group Normalization (GN)~\cite{groupnorm}, are \textbf{unable} to guarantee far distances from elimination singularities. This might be the reason for their inferior performance compared with BN in large-batch training.

To bring the above two success factors into micro-batch training, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to improve network training. WS standardizes the weights in convolutional layers, \emph{i.e.}, it makes the weights have zero mean and unit variance. BCN uses estimated means and variances of the activations in convolutional layers by combining batch and channel normalization. WS and BCN are able to run in both large-batch and micro-batch settings, and they accelerate training and improve performance. We study WS and BCN from both theoretical and experimental viewpoints. The highlights of the results are:

\begin{enumerate} \item Theoretically, we prove that WS reduces the Lipschitz constants of the loss and the gradients. Hence, WS smooths the loss landscape and improves training. \item We empirically show that WS and BCN are able to push models away from elimination singularities.
\item Experiments show that on tasks where large batches are available (\textit{e.g.}, ImageNet~\cite{ILSVRC15}), GN~\cite{groupnorm}+WS with batch size 1 is able to match or outperform the performances of BN with large batch sizes (Fig.~\ref{fig:front}). \item For tasks where only micro-batch training is available (\textit{e.g.}, COCO~\cite{coco}), GN+WS significantly improves the performances (Fig.~\ref{fig:front}). \item Replacing GN with BCN further improves the results in both large-batch and micro-batch training settings. \end{enumerate}

To show that our WS and BCN are applicable to many vision tasks, we conduct comprehensive experiments, including image classification on CIFAR-10/100~\cite{cifar} and the ImageNet dataset~\cite{ILSVRC15}, object detection and instance segmentation on the MS COCO dataset~\cite{coco}, video recognition on the Something-SomethingV1 dataset~\cite{something}, and semantic image segmentation on PASCAL VOC~\cite{pascal}. The experimental results show that our WS and BCN are able to accelerate training and improve performances.

\section{Lipschitz Smoothness and Elimination Singularities}\label{sec:pre}

We first describe Lipschitz smoothness and elimination singularities to provide the background for our analyses.

\subsection{Lipschitz Smoothness}

A function $f: A\rightarrow \mathbb{R}^m$, $A\subseteq\mathbb{R}^n$, is $L$-Lipschitz \cite{basiccourse} if \begin{equation} \forall a, b \in A: ~~ ||f(a) - f(b)|| \leq L || a - b ||. \end{equation} A continuously differentiable function $f$ is $\beta$-smooth if its gradient $\nabla f$ is $\beta$-Lipschitz, \textit{i.e.}, \begin{equation} \forall a, b \in A: ~~ ||\nabla f(a) - \nabla f(b) || \leq \beta || a - b ||. \end{equation} Many results show that training smooth functions using gradient descent algorithms is faster than training non-smooth functions~\cite{bubeck2015convex}. Intuitively, gradient-descent-based training algorithms can be unstable due to exploding or vanishing gradients; as a result, they are sensitive to the choice of the learning rate and initialization if the loss landscape is not smooth. Using an algorithm (\textit{e.g.}, WS) that smooths the loss landscape makes the gradients more reliable and predictive; thus, larger steps can be taken and the training is accelerated.

\subsection{Elimination singularities}

Deep neural networks are hard to train partly due to the singularities caused by the non-identifiability of the model~\cite{wei2008dynamics}. These singularities include overlap singularities, linear dependence singularities, elimination singularities, \textit{etc}. They cause degenerate manifolds in the loss landscape; getting closer to these manifolds slows down learning and harms model performance~\cite{orhan2017skip}. In this paper, we focus on elimination singularities, which correspond to the points on the training trajectory where neurons in the model become constantly deactivated.

The original definition of elimination singularities is based on weights~\cite{wei2008dynamics}: if we use $\vect{\omega}_c$ to denote the weights that take the channel $c$ as input, then an elimination singularity is encountered when $\vect{\omega}_c=\bm{0}$. However, this definition is not suitable for real-world deep network training, as most of the $\vect{\omega}_c$ will not be close to $\bm{0}$.
For example, in a ResNet-50~\cite{resnet} well-trained on ImageNet~\cite{ILSVRC15}, $ \frac{1}{L}\sum_{l}\frac{\text{min}_{c\in l}{ |\vect{\omega}_c|_1 }}{\text{avg}_{c\in l}{ |\vect{\omega}_c|_1 }} = 0.55 $, where $L$ is the number of all the layers $l$ in the network. Note that weight decay is already used in training this network to encourage weight sparsity. In other words, defining elimination singularities based on weights is not proper for networks trained in real-world settings.

In this paper, we consider elimination singularities for networks that use ReLU as their activation function. We focus on a basic building element that is widely used in neural networks: a convolutional layer followed by a normalization method (\textit{e.g.}, BN, LN) and ReLU~\cite{relu}, \textit{i.e.}, \begin{equation} \vect{X^{\text{out}}} = \text{ReLU}(\text{Norm}(\text{Conv}(\vect{X^{\text{in}}}))). \end{equation} When ReLU is used, $\vect{\omega}_c=\bm{0}$ is no longer necessary for a neuron to be eliminable. This is because ReLU sets any value below $0$ to $0$; thus, a neuron is constantly deactivated if its maximum value after the normalization layer is below $0$. Its gradients will also be $0$ because of ReLU, making it hard to revive; hence, a singularity is created.

\section{Related Work}\label{sec:related}

Deep neural networks advance the state of the art in many computer vision tasks~\cite{deeplab,densenet,alexnet,fcnn,fewshot,qiao2018deep,qiu2017unrealcv,vggnet,sort,wang2018multi,yang2018knowledge,zhang2018single}. But deep networks are hard to train. To speed up training, proper model initialization strategies are widely used, as well as data normalization based on assumptions about the data distribution~\cite{glorot2010understanding,he2015delving}. On top of data normalization and model initialization, Batch Normalization~\cite{batchnorm} is proposed to ensure certain distributions so that the normalization effects do not fade away during training. By performing normalization along the batch dimension, Batch Normalization achieves state-of-the-art performance in many tasks in addition to accelerating the training process. When the batch size decreases, however, the performance of Batch Normalization drops dramatically, since the batch statistics are no longer representative enough of the dataset statistics. Unlike Batch Normalization, which works on the batch dimension, Layer Normalization~\cite{layernorm} normalizes data on the channel dimension, and Instance Normalization~\cite{instnorm} performs Batch-Normalization-like normalization for each sample individually. Group Normalization~\cite{groupnorm} also normalizes features on the channel dimension, but it finds a better middle point between Layer Normalization and Instance Normalization.

Batch Normalization, Layer Normalization, Group Normalization, and Instance Normalization are all activation-based normalization methods. Besides them, there are also weight-based normalization methods, such as Weight Normalization~\cite{weightnorm} and Centered Weight Normalization~\cite{huang2017centered}. Weight Normalization decouples the length and the direction of the weights, and Centered Weight Normalization additionally centers the weights to have zero mean. Weight Standardization is similar, but removes the learnable weight length; instead, the weights are standardized to have zero mean and unit variance, and then directly sent to the convolution operations. When used with GN, it narrows the performance gap between BN and GN.
In this paper, we study normalization from the perspectives of elimination singularities~\cite{orhan2017skip,wei2008dynamics} and smoothness~\cite{whybnworks}. There are also other perspectives from which to understand normalization methods. For example, from the perspective of training robustness, BN is able to make optimization trajectories more robust to parameter initialization~\cite{im2016empirical}. \cite{whybnworks} shows that normalizations are able to reduce the Lipschitz constants of the loss and the gradients, so training becomes easier and faster. From the angle of model generalization, \cite{morcos2018importance} shows that Batch Normalization relies less on single directions of activations and thus has better generalization properties, and \cite{luo2018towards} studies the regularization effects of Batch Normalization. \cite{kohler2018towards} also explores length-direction decoupling in BN and WN~\cite{weightnorm}. Other work approaches normalization from the angles of gradient explosion~\cite{DBLP:journals/corr/abs-1902-08129} and learning rate tuning~\cite{DBLP:journals/corr/abs-1812-03981}. Our WS is also related to converting constrained optimization to unconstrained optimization~\cite{absil2009optimization,cho2017riemannian}.

Our BCN uses Batch Normalization and Group Normalization at the same time for one layer. Some previous work also uses multiple normalizations, or a combined version of normalizations, for one layer. For example, SN~\cite{switchnorm} computes BN, IN, and LN at the same time and uses AutoML~\cite{pnas} to determine how to combine them. SSN~\cite{shao2019ssn} uses SparseMax to obtain a sparse SN. DN~\cite{dynamicnorm} proposes a more flexible form to represent normalizations and finds better normalizations. Unlike them, our method is based on analysis and theoretical understanding instead of searching for solutions through AutoML, and our normalizations are used together as a composite function rather than by linearly adding up the normalization effects in a flat way.

\section{WS's effects on elimination singularities}

In this section, we provide the background of BN, GN and LN, discuss the negative correlation between the performance and the distance to elimination singularities, and show that LN and GN are unable to keep the networks as far away from elimination singularities as BN does. Next, we show that WS helps avoid elimination singularities.

\subsection{Batch- and channel-based normalizations and their effects on elimination singularities}
\subsubsection{Batch- and channel-based normalizations}

Based on how the activations are normalized, we group the normalization methods into two types: batch-based normalization and channel-based normalization, where batch-based normalization corresponds to BN and channel-based normalization includes LN and GN.

Suppose we are going to normalize a feature map $\vect{X}\in\mathds{R}^{B\times C\times H\times W}$, where $B$ is the batch size, $C$ is the number of channels, and $H$ and $W$ denote the height and the width. For each channel $c$, BN normalizes $\vect{X}$ by \begin{equation}\label{eq:bn} \vect{Y}_{\cdot c \cdot\cdot} = \dfrac{\vect{X}_{\cdot c\cdot\cdot} - \mu_{\cdot c\cdot\cdot}}{\sigma_{\cdot c\cdot\cdot}}, \end{equation} where $\mu_{\cdot c\cdot\cdot}$ and $\sigma_{\cdot c\cdot\cdot}$ denote the mean and the standard deviation of all the features of the channel $c$, $\vect{X}_{\cdot c\cdot\cdot}$. Throughout the paper, we use $\cdot$ in the subscript to denote all the features along that dimension for convenience.
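In tensor terms, Eq.~\ref{eq:bn} reduces over the batch, height, and width dimensions, whereas the channel-based methods introduced next reduce per sample over groups of channels. A minimal PyTorch sketch of both is given below; the $\epsilon$ added for numerical stability is our own addition.

\begin{verbatim}
import torch

def batch_norm(x, eps=1e-5):
    # Batch-based normalization: per-channel statistics over the
    # batch, height, and width; shared by all samples in the batch.
    mu = x.mean(dim=(0, 2, 3), keepdim=True)
    sigma = x.std(dim=(0, 2, 3), unbiased=False, keepdim=True)
    return (x - mu) / (sigma + eps)

def channel_norm(x, G, eps=1e-5):
    # Channel-based counterpart (GN/LN, described next):
    # per-sample, per-group statistics; no batch dimension in the
    # reduction, hence usable with a batch size of 1.
    B, C, H, W = x.shape
    xg = x.view(B, G, C // G, H, W)
    mu = xg.mean(dim=(2, 3, 4), keepdim=True)
    sigma = xg.std(dim=(2, 3, 4), unbiased=False, keepdim=True)
    return ((xg - mu) / (sigma + eps)).view(B, C, H, W)
\end{verbatim}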
Unlike BN, which computes statistics on the batch dimension in addition to the height and the width, channel-based normalization methods (LN and GN) compute statistics on the channel dimension. Specifically, they divide the channels into several groups and normalize each group of channels, \textit{i.e.}, $\vect{X}$ is reshaped as $\vect{\dot{X}}\in\mathds{R}^{B\times G\times C/G\times H\times W}$, and then: \begin{equation}\label{eq:cn} \vect{\dot{Y}}_{bg\cdot\cdot\cdot} = \dfrac{\vect{\dot{X}}_{bg\cdot\cdot\cdot} - \mu_{bg\cdot\cdot\cdot}}{\sigma_{bg\cdot\cdot\cdot}}, \end{equation} for each sample $b$ of the $B$ samples in a batch and each channel group $g$ out of all $G$ groups. After Eq.~\ref{eq:cn}, the output $\vect{\dot{Y}}$ is reshaped back to the shape of $\vect{X}$ and denoted by $\vect{Y}$.

Both batch- and channel-based normalization methods have an optional affine transformation, \textit{i.e.}, \begin{equation}\label{eq:at} \vect{Z}_{\cdot c\cdot\cdot} = \gamma_c\vect{Y}_{\cdot c\cdot\cdot} + \beta_c. \end{equation}

\subsubsection{BN avoids elimination singularities}

Here, we study the effect of BN on elimination singularities. Since all the normalization methods have an optional affine transformation, we focus on the distinct part of BN, which normalizes all channels to zero mean and unit variance, \textit{i.e.}, \begin{equation}\label{eq:bnes} \mathbb{E}_{y\in Y_{\cdot c\cdot\cdot}}\big[y\big]=0,~~ \mathbb{E}_{y\in Y_{\cdot c\cdot\cdot}}\big[y^2\big]=1,~~\forall c. \end{equation} As a result, regardless of the weights and the distribution of the inputs, BN guarantees that the activations of each channel are zero-centered with unit variance. Therefore, each channel can neither be constantly deactivated, because there are always some activations that are $>0$, nor underrepresented due to the channel having a very small activation scale compared with the others.

\begin{figure} \centering \includegraphics[width=\linewidth]{img/dist.pdf} \caption{Model accuracy and distance to singularities. Larger circles correspond to higher performances. Red crosses represent failure cases (accuracy $<70\%$). Circles are farther from singularities/closer to BN if they are closer to the origin.} \label{fig:dist} \end{figure}

\subsubsection{Statistical distance and its effects on performance}

\begingroup \begin{quote} \it BN avoids singularities by normalizing each channel to zero mean and unit variance. What if they are normalized to other means and variances? \end{quote} \endgroup

We ask this question because this is similar to what happens in channel-normalized models. In the context of activation-based normalizations, BN completely resolves the issue of elimination singularities, as each channel is zero-centered with unit variance. By contrast, channel-based normalization methods, as they do \textbf{not} have batch information, are \textbf{unable} to make sure that all neurons have zero mean and unit variance after normalization. In other words, there are likely to be underrepresented channels after training if the model uses channel-based normalizations. Since BN represents the ideal case, which has the furthest distance to elimination singularities, and any dissimilarity with BN will lead to lightly or heavily underrepresented channels and thus bring the models closer to singularities, \emph{we use the distance to BN as the distance to singularities for activation-based normalizations.} Specifically, in this definition, the model is \textit{closer} to singularities when it is \textit{farther} from BN.
Fig.~\ref{fig:dist} shows that this definition is useful: there, we study the relationship between the performance and the distance to singularities (\textit{i.e.}, how far from BN) caused by statistical differences. We conduct the experiments on a 4-layer convolutional network. Each convolutional layer has 32 output channels and is followed by an average pooling layer which down-samples the features by a factor of 2. Finally, a global average pooling layer and a fully-connected layer output the logits for Softmax. The experiments are done on CIFAR-10~\cite{cifar}.

In the experiment, each channel $c$ is normalized to a pre-defined mean $\hat{\mu}_{c}$ and a pre-defined scale $\hat{\sigma}_{c}$ that are drawn from two distributions, respectively: \begin{equation} \hat{\mu}_{c}\sim\mathcal{N}(0, \sigma_{\mu}) ~~\text{and}~~ \hat{\sigma}_{c}=e^{\dot{\sigma}_{c}}~\text{where}~ \dot{\sigma}_{c}\sim\mathcal{N}(0,\sigma_\sigma). \end{equation} \textit{The model is closer to singularities when $\sigma_{\mu}$ or $\sigma_{\sigma}$ increases. BN corresponds to the case where $\sigma_{\mu}=\sigma_{\sigma}=0$}. After getting $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$ for each channel, we compute \begin{equation} \vect{Y}_{\cdot c\cdot\cdot} = \gamma_c\big(\hat{\sigma}_{c}\dfrac{\vect{X}_{\cdot c\cdot\cdot} - \mu_{\cdot c\cdot\cdot}}{\sigma_{\cdot c\cdot\cdot}} + \hat{\mu}_{c}\big) + \beta_c. \end{equation} Note that $\hat{\mu}_{c}$ and $\hat{\sigma}_{c}$ are fixed during training while $\gamma_c$ and $\beta_c$ are trainable parameters of the affine transformation.

\begin{figure} \centering \includegraphics[width=\linewidth]{img/chn.pdf} \caption{ Means and standard deviations of the statistical differences (StatDiff, defined in Eq.~\ref{eq:sd}) of all layers in a ResNet-110 trained on CIFAR-10 with GN, GN+WS, LN, and LN+WS. } \label{fig:stat_diff} \end{figure}

Fig.~\ref{fig:dist} shows the experimental results. When $\sigma_{\mu}$ and $\sigma_{\sigma}$ are closer to the origin, the normalization method is closer to BN. When their values increase, we observe performance decreases; for extreme cases, we also observe training failures. These results indicate that, although the affine transformation can theoretically find solutions that cancel the negative effects of normalizing channels to different statistics, its capability is limited by gradient-based training. The results show that defining the distance to singularities as the distance to BN is useful, and they also raise concerns about the distances of channel-based normalizations to singularities.

\subsubsection{Statistics in Channel-based Normalization}

Following our concerns about channel-based normalizations and their distance to singularities, we study the statistical differences between channels when they are normalized by a channel-based normalization such as GN or LN.

\noindent\textbf{Statistical differences in GN, LN and WS:} We train a ResNet-110~\cite{resnet} on CIFAR-10~\cite{cifar} normalized by GN and LN, each with and without WS. During training, we keep a record of the running mean $\mu^r_c$ and the running standard deviation $\sigma^r_c$ of each channel $c$ after the convolutional layers.
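A minimal sketch of how such per-channel running statistics might be recorded with a forward hook follows; the hook-based bookkeeping and the momentum value are our assumptions, not the exact instrumentation used in these experiments.

\begin{verbatim}
import torch

def track_channel_stats(module, momentum=0.1):
    """Attach a forward hook that keeps running per-channel
    mean/std of the module's output (assumed bookkeeping)."""
    stats = {'mu': None, 'sigma': None}
    def hook(mod, inputs, output):
        with torch.no_grad():
            mu = output.mean(dim=(0, 2, 3))     # per-channel mean
            sigma = output.std(dim=(0, 2, 3))   # per-channel std
            if stats['mu'] is None:
                stats['mu'], stats['sigma'] = mu, sigma
            else:
                stats['mu'] += momentum * (mu - stats['mu'])
                stats['sigma'] += momentum * (sigma - stats['sigma'])
    module.register_forward_hook(hook)
    return stats
\end{verbatim}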
For each group $g$ of the channels that are normalized together, we compute their channel \textbf{statistical difference}, defined as the standard deviation of their means divided by the mean of their standard deviations, \textit{i.e.}, \begin{equation}\label{eq:sd} \text{StatDiff}(g) = \dfrac{ \sqrt{\mathbb{E}_{c\in g}\big[(\mu^r_{c})^2\big] - \big(\mathbb{E}_{c\in g}\big[\mu^r_{c}\big]\big)^2}}{\mathbb{E}_{c\in g}\big[ \sigma^r_{c} \big]}. \end{equation} We plot the average statistical difference of all the groups after every training epoch, as shown in Fig.~\ref{fig:stat_diff}.

By Eq.~\ref{eq:sd}, $\text{StatDiff}(g)\geq 0,~\forall g$. In BN, all the means are the same, as are the variances, thus $\text{StatDiff}(g)=0$. As the value of $\text{StatDiff}(g)$ goes up, the differences between channels within a group become larger. Since these channels are normalized together, as in Eq.~\ref{eq:cn}, large differences inevitably lead to underrepresented channels. Fig.~\ref{fig:dist_exp} plots 3 examples of 2 channels before and after the normalization in Eq.~\ref{eq:cn}. Compared with those examples, it is clear that the models in Fig.~\ref{fig:stat_diff} have many underrepresented channels.

\begin{figure} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{img/near_std.pdf} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{img/mid_std.pdf} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{img/far_std.pdf} \end{subfigure} \caption{Examples of normalizing two channels in a group when they have different means and variances. Transparent bars mean they are $0$ after ReLU. StatDiff is defined in Eq.~\ref{eq:sd}.} \label{fig:dist_exp} \end{figure}

\noindent\textbf{Why GN performs better than LN:} Fig.~\ref{fig:stat_diff} also explains why GN performs better than LN. Comparing GN and LN, the major difference is their number of channel groups: LN has only one group for all the channels in a layer, while GN collects them into several groups. A strong benefit of having more than one group is that it guarantees that each group has at least one neuron that is not suppressed by the others in the same group. Therefore, GN provides a mechanism to prevent the models from getting too close to singularities.

Fig.~\ref{fig:stat_diff} also shows the statistical differences when WS is used. From the results, we can clearly see that WS makes StatDiff much closer to $0$. Consequently, the majority of the channels are not underrepresented with WS: most of them are frequently activated and at similar activation scales. This makes training with WS easier and the results better.

\subsection{WS helps avoiding elimination singularities}

The above discussions show that WS helps keep models away from elimination singularities. Here, we discuss why WS is able to achieve this. Recall that WS adds constraints to the weight $\vect{W}\in\mathds{R}^{\text{O}\times\text{I}}$ of a convolutional layer with O output channels and I inputs, such that $\forall c$, \begin{equation}\label{eq:ws1} \sum_{i=1}^I\vect{W}_{c,i} = 0,~~~\sum_{i=1}^I\vect{W}^2_{c,i} = 1.
\end{equation} With the constraints of WS, $\mu_{c}^{\text{out}}$ and $\sigma_{c}^{\text{out}}$ become \begin{equation}\label{eq:ws2} \mu_{c}^{\text{out}}=\sum_{i=1}^I\vect{W}_{c,i}\mu_i^{\text{in}},~~~(\sigma_{c}^{\text{out}})^2=\sum_{i=1}^I\vect{W}_{c,i}^2(\sigma_i^{\text{in}})^2, \end{equation} when we follow the assumptions in Xavier initialization~\cite{glorot2010understanding}. When the input channels are similar in their statistics, \textit{i.e.}, $\mu_{i}^{\text{in}}\approx\mu_{j}^{\text{in}}$, $\sigma_{i}^{\text{in}}\approx\sigma_{j}^{\text{in}}$, $\forall i,j$, \begin{eqnarray} \mu_{c}^{\text{out}}&\approx&\mu_1^{\text{in}}\sum_{i=1}^I\vect{W}_{c,i}=0, \\ (\sigma_{c}^{\text{out}})^2&\approx&(\sigma_{1}^{\text{in}})^2\sum_{i=1}^I\vect{W}_{c,i}^2=(\sigma_{1}^{\text{in}})^2. \end{eqnarray} In other words, WS can pass the statistical similarities from the input channels to the output channels, all the way from the image space where the RGB channels are properly normalized. This is similar to the objective of Xavier initialization~\cite{glorot2010understanding} or Kaiming initialization~\cite{he2015delving}, except that WS enforces it by reparameterization throughout the entire training process, and is thus able to greatly reduce the statistical differences, as shown in Fig.~\ref{fig:stat_diff}.

To summarize this subsection: we have shown that channel-based normalization methods, as they do not have batch information, are not able to ensure a far distance from elimination singularities. Without the help of batch information, GN alleviates this issue by assigning the channels to more than one group to encourage more activated neurons, and WS adds constraints that pull the channels closer to each other statistically. We note that batch information is not hard to collect in reality. This inspires us to equip channel-based normalization with batch information, and the result is Batch-Channel Normalization.

\section{Weight Standardization}\label{sec:ws}

In this section, we introduce Weight Standardization, which is inspired by BN. It has been demonstrated that BN influences network training in a fundamental way: it makes the landscape of the optimization problem significantly smoother~\cite{whybnworks}. Specifically, \cite{whybnworks} shows that BN reduces the Lipschitz constants of the loss function and makes the gradients more Lipschitz, too, \textit{i.e.}, the loss has a better $\beta$-smoothness~\cite{basiccourse}.

We notice that BN considers the Lipschitz constants with respect to the \emph{activations}, not the \emph{weights} that the optimizer is directly optimizing. Therefore, we argue that we can also standardize the \emph{weights} in the convolutional layers to further smooth the landscape. By doing so, we do not have to worry about transferring the smoothing effects from the activations to the weights; moreover, the smoothing effects on the activations and the weights are additive. Based on these motivations, we propose Weight Standardization.

\subsection{Weight Standardization}

Here, we present the detailed formulation of Weight Standardization (WS) (Fig.~\ref{fig:all}). Consider a standard convolutional layer with its bias term set to 0: \begin{equation} \vect{y} = \hat{\vect{W}}*\vect{x}, \end{equation} where $\hat{\vect{W}}\in\mathbb{R}^{O\times I}$ denotes the weights in the layer and $*$ denotes the convolution operation.
For $\hat{\vect{W}}\in\mathbb{R}^{O\times I}$, $O$ is the number of output channels, and $I$ corresponds to the number of input channels within the kernel region of each output channel. Taking Fig.~\ref{fig:all} as an example, $O=C_{\text{out}}$ and $I=C_{\text{in}}\times\text{Kernel\_Size}$. In Weight Standardization, instead of directly optimizing the loss $\mathcal{L}$ on the original weights $\hat{\vect{W}}$, we reparameterize the weights $\hat{\vect{W}}$ as a function of $\vect{W}$, \textit{i.e.}, $\hat{\vect{W}}=\text{WS}(\vect{W})$, and optimize the loss $\mathcal{L}$ on $\vect{W}$ by SGD: \begin{align} \hat{\vect{W}} &= \Big[ \hat{\vect{W}}_{i,j}~\big|~ \hat{\vect{W}}_{i,j} = \dfrac{\vect{W}_{i,j} - \mu_{\vect{W}_{i,\cdot}}}{\sigma_{\vect{W}_{i,\cdot}}}\Big]\label{eq:6},\\ \vect{y} &= \hat{\vect{W}}*\vect{x}\label{eq:7}, \end{align} where \begin{align}\label{eq:epsilon} \mu_{\vect{W}_{i,\cdot}} = \dfrac{1}{I}\sum_{j=1}^{I}\vect{W}_{i, j},~~\sigma_{\vect{W}_{i,\cdot}}=\sqrt{\dfrac{1}{I}\sum_{j=1}^I\vect{W}_{i,j}^2 - \mu_{\vect{W}_{i,\cdot}}^2 + \epsilon}. \end{align}

Similar to BN, WS controls the first and second moments of the weights of each output channel individually in the convolutional layers. Note that many initialization methods also initialize the weights in similar ways. Different from those methods, WS standardizes the weights in a differentiable way, which aims to normalize the gradients during back-propagation. Note that we do not have any affine transformation on $\hat{\vect{W}}$. This is because we assume that normalization layers such as BN or GN will normalize this convolutional layer again, and adding an affine transformation here would confuse and slow down training. In the following, we first discuss the normalization effects of WS on the gradients.

\subsection{Comparing WS with WN and CWN}

Weight Normalization (WN)~\cite{weightnorm} and Centered Weight Normalization (CWN)~\cite{huang2017centered} also normalize the weights to speed up deep network training. Weight Normalization reparameterizes the weights by separating their direction $\frac{W}{\norm{W}}$ and length $g$: \begin{equation} \hat{\vect{W}} = g\frac{\vect{W}}{\norm{\vect{W}}}. \end{equation} WN is able to train good models on many tasks. But as shown in \cite{gitman2017comparison}, WN has difficulty matching the performance of models trained with BN on large-scale datasets. Later, CWN adds a centering operation to WN, \textit{i.e.}, \begin{equation}\label{eq:cwn} \hat{W} = g\frac{\vect{W}-\overline{\vect{W}}}{\norm{\vect{W} - \overline{\vect{W}}}}. \end{equation} To compare with WN and CWN, we consider the weights of only one output channel and reformulate the corresponding weights output by WS in Eq.~\ref{eq:6} as \begin{equation} \hat{W} = \dfrac{\vect{W}-\overline{\vect{W}}}{\sqrt{\overline{\vect{W}^2} - {\overline{\vect{W}}}^2}}, \end{equation} which removes the learnable length $g$ from Eq.~\ref{eq:cwn} and instead divides the weights by their standard deviation. Experiments in Sec.~\ref{sec:exp} show that WS outperforms WN and CWN on large-scale tasks~\cite{ILSVRC15}.

\section{The smoothing effects of WS}

In this section, we discuss the smoothing effects of WS. Sec.~\ref{sec:wng} shows that WS normalizes the gradients. This normalization effect on the gradients lowers the Lipschitz constants of the loss and the gradients, as will be shown in Sec.~\ref{sec:wsl}, where Sec.~\ref{sec:eowot} discusses the effects on the loss and Sec.~\ref{sec:eowotl} discusses the effects on the gradients.
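Before turning to the analysis, and for concreteness, Eq.~\ref{eq:6}--\ref{eq:epsilon} amount to a drop-in replacement for a convolutional layer. Below is a minimal PyTorch-style sketch; the class name is ours, and the released implementation may differ in details.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose stored weight W is standardized per output
    channel before every convolution (a sketch of WS)."""
    def forward(self, x):
        w = self.weight                           # (O, C_in, kH, kW)
        mu = w.mean(dim=(1, 2, 3), keepdim=True)  # per-output-channel mean
        var = w.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
        w_hat = (w - mu) / torch.sqrt(var + 1e-5)  # eps inside the sqrt
        # No affine transformation on w_hat: the following BN/GN
        # layer normalizes the outputs of this convolution again.
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
\end{verbatim}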
\subsection{WS normalizes gradients}\label{sec:wng}

For convenience, we set $\epsilon=0$ in Eq.~\ref{eq:epsilon}. We first focus on one output channel $c$. Let $\vect{y}_c\in\mathbb{R}^{b}$ be all the outputs of channel $c$ during one pass of feed-forwarding and back-propagation, and $\vect{x}_c\in\mathbb{R}^{b\times I}$ be the corresponding inputs. Then, we can rewrite Eq.~\ref{eq:6} and \ref{eq:7} as \begin{align} \dot{\vect{W}}_{c,\cdot} &= \vect{W}_{c,\cdot} - \dfrac{1}{I}\mathbf{1}\langle\mathbf{1}, \vect{W}_{c,\cdot}\rangle\label{eq:9},\\ \hat{\vect{W}}_{c,\cdot} &= \dot{\vect{W}}_{c,\cdot} / \Big( \sqrt{\dfrac{1}{I}\langle\mathbf{1}, \dot{\vect{W}}^{\circ 2}_{c,\cdot}\rangle}\Big)\label{eq:10},\\ \vect{y}_c &= \vect{x}_c \hat{\vect{W}}_{c,\cdot}\label{eq:11}, \end{align} where $\langle~,~\rangle$ denotes the dot product and $^{\circ 2}$ denotes the Hadamard power. Then, the gradients are \begin{align} &\nabla_{\dot{\vect{W}}_{c,\cdot}}\mathcal{L} = \dfrac{1}{\sigma_{\vect{W}_{c,\cdot}}} \Big( \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} - \dfrac{1}{I} \langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle \hat{\vect{W}}_{c,\cdot} \Big), \label{eq:12}\\ &\nabla_{\vect{W}_{c,\cdot}}\mathcal{L} = \nabla_{\dot{\vect{W}}_{c,\cdot}}\mathcal{L} - \dfrac{1}{I}\mathbf{1}\langle\mathbf{1}, \nabla_{\dot{\vect{W}}_{c,\cdot}}\mathcal{L}\rangle\label{eq:13}. \end{align}

Fig.~\ref{fig:ws_eqn} shows the computation graph. Based on the equations, we observe that, different from the original gradients $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$ back-propagated through Eq.~\ref{eq:11}, the gradients are normalized by Eq.~\ref{eq:12} \& \ref{eq:13}.

In Eq.~\ref{eq:12}, to compute $\nabla_{\dot{\vect{W}}_{c,\cdot}}\mathcal{L}$, a component of $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$ along $\hat{\vect{W}}_{c,\cdot}$ is first subtracted, and the result is then divided by $\sigma_{\vect{W}_{c,\cdot}}$. Note that when BN is used to normalize this convolutional layer, as BN recomputes the scaling of the outputs, the effect of dividing the gradients by $\sigma_{\vect{W}_{c,\cdot}}$ is canceled in both feed-forwarding and back-propagation. As for the subtracted term, its effect depends on the statistics of $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$ and ${\hat{\vect{W}}}_{c,\cdot}$. We will later show that this term reduces the gradient norm regardless of the statistics. As for Eq.~\ref{eq:13}, it zero-centers the gradients from $\dot{\vect{W}}_{c,\cdot}$. When the mean gradient is large, zero-centering significantly affects the gradients passed to $\vect{W}_{c,\cdot}$.

\begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/ws_eqn.pdf} \caption{Computation graph for WS in {\color{feedforwarding}feed-forwarding} and {\color{backpropagation}back-propagation}. $\vect{W}$, $\dot{\vect{W}}$ and $\hat{\vect{W}}$ are defined in Eq.~\ref{eq:9}, \ref{eq:10} and \ref{eq:11}.} \label{fig:ws_eqn} \end{figure}

\begin{figure*} \centering \includegraphics[width=\linewidth]{img/mean_div_grad.pdf} \caption{Training ResNet-50 on ImageNet with GN, Eq.~\ref{eq:9} and \ref{eq:10}. The left and the middle figures show the training dynamics. The right figure shows the reduction percentages of the Lipschitz constant. Note that the y-axis of the right figure is in {\bf log} scale.
} \label{fig:mean_div} \end{figure*} \subsection{WS smooths landscape}\label{sec:wsl} We will show that WS is able to make the loss landscape smoother. Specifically, we show that optimizing $\mathcal{L}$ on $\vect{W}$ has smaller Lipschitz constants on both the loss and the gradients than optimizing $\mathcal{L}$ on $\hat{\vect{W}}$. The Lipschitz constant of a function $f$ is the smallest $L$ such that $|f(x_1) - f(x_2)| \leq L \lVert x_1 - x_2\rVert,~\forall x_1,x_2$. For the loss and the gradients, $f$ will be $\mathcal{L}$ and $\nabla_{\vect{W}}\mathcal{L}$, and $x$ will be $\vect{W}$. Smaller Lipschitz constants on the loss and the gradients mean that the changes of the loss and the gradients during training are more tightly bounded. This gives the optimizer more confidence to take a large step in the gradient direction, as the gradient direction varies less within the range of the step. In other words, the optimizer can take longer steps without worrying about sudden changes of the loss landscape and gradients. Therefore, WS is able to accelerate training. \subsubsection{Effects of WS on the Lipschitz constant of the loss}\label{sec:eowot} Here, we show that both Eq.~\ref{eq:12} and Eq.~\ref{eq:13} are able to reduce the Lipschitz constant of the loss. We first study Eq.~\ref{eq:12}: \begin{align} \begin{split} \big\lVert \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L} \big\rVert^2=\dfrac{1}{\sigma_{{W}_{c,\cdot}}^2}\Big( \big\lVert \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rVert^2 + \\ \dfrac{1}{I^2}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2\big( \langle \hat{\vect{W}}_{c,\cdot}, \hat{\vect{W}}_{c,\cdot} \rangle - 2I \big)\Big). \end{split} \end{align} By Eq.~\ref{eq:10}, we know that $\lVert\hat{\vect{W}}_{c,\cdot}\rVert^2=I$. Then, \begin{align} \begin{split} \big\lVert \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L} \big\rVert^2=\dfrac{1}{\sigma_{{W}_{c,\cdot}}^2}\Big( \big\lVert \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rVert^2 - \\\dfrac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2\Big). \end{split} \end{align} Since we assume that this convolutional layer is followed by a normalization layer such as BN or GN, the effect of $1/\sigma_{{W}_{c,\cdot}}^2$ will be canceled. Therefore, the real effect on the gradient norm is the reduction $\dfrac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2$, which reduces the Lipschitz constant of the loss. Next, we study the effect of Eq.~\ref{eq:13}. By definition, \begin{equation} \big\lVert \nabla_{\vect{W}_{c,\cdot}}\mathcal{L} \big\rVert^2 = \big\lVert \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L} \big\rVert^2 - \dfrac{1}{I} \langle\mathbf{1}, \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2. \end{equation} By Eq.~\ref{eq:12}, we rewrite the second term: \begin{align} \begin{split} \dfrac{1}{I} \langle\mathbf{1}, \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2 = \dfrac{1}{I\cdot\sigma^2_{W_{c,\cdot}}} \Big( \langle\mathbf{1}, \nabla_{\hat{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle \\ - \dfrac{1}{I} \langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle \cdot \langle \mathbf{1},~ \hat{\vect{W}}_{c,\cdot} \rangle\Big)^2.
\end{split} \end{align} Since $\langle \mathbf{1},~ \hat{\vect{W}}_{c,\cdot} \rangle=0$, we have \begin{equation} \big\lVert \nabla_{\vect{W}_{c,\cdot}}\mathcal{L} \big\rVert^2 = \big\lVert \nabla_{\dot{{\vect{W}}}_{c,\cdot}}\mathcal{L} \big\rVert^2 - \dfrac{1}{I\cdot\sigma^2_{W_{c,\cdot}}} \langle\mathbf{1}, \nabla_{\hat{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2. \end{equation} Summarizing the effects of Eq.~\ref{eq:12} and \ref{eq:13} on the Lipschitz constant of the loss: ignoring $1/\sigma_{{W}_{c,\cdot}}^2$, Eq.~\ref{eq:12} reduces it by $\frac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2$, and Eq.~\ref{eq:13} reduces it by $\frac{1}{I} \langle\mathbf{1}, \nabla_{\hat{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2$. Although both Eq.~\ref{eq:12} and \ref{eq:13} reduce the Lipschitz constant, their real effects depend on the statistics of the weights and the gradients. For example, the reduction effect of Eq.~\ref{eq:13} depends on the average gradients on $\hat{\vect{W}}$. As for Eq.~\ref{eq:12}, since $\langle \mathbf{1},~ \hat{\vect{W}}_{c,\cdot} \rangle=0$, its effect might be limited when $\hat{\vect{W}}_{c,\cdot}$ is evenly distributed around $0$. To understand their real effects, we conduct a case study on ResNet-50 trained on ImageNet to see which of Eq.~\ref{eq:9} and \ref{eq:10} has the bigger effect, or whether they contribute similarly to smoothing the landscape. \subsubsection{Effects of WS on the Lipschitz constant of gradients}\label{sec:eowotl} Before studying the Lipschitzness of the gradients, we first present a case study in which we train ResNet-50 models on ImageNet following the conventional training procedure~\cite{resnet}. In total, we train four models: ResNet-50 with GN, ResNet-50 with GN+Eq.~\ref{eq:9}, ResNet-50 with GN+Eq.~\ref{eq:10}, and ResNet-50 with GN+Eq.~\ref{eq:9}\&\ref{eq:10}. The training dynamics are shown in Fig.~\ref{fig:mean_div}, from which we observe that Eq.~\ref{eq:10} slightly improves the training speed and performance of models with or without Eq.~\ref{eq:9}, while the major improvements come from Eq.~\ref{eq:9}. This observation motivates us to study the real effects of Eq.~\ref{eq:9} and~\ref{eq:10} on the Lipschitz constant of the loss. To investigate this, we take a look at the values of $\frac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2$ and $\frac{1}{I} \langle\mathbf{1}, \nabla_{\hat{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2$ during training. To compute the two values above, we gather and save the intermediate gradients $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$, and the weights for the convolution $\hat{\vect{W}}_{c,\cdot}$. In total, we train ResNet-50 with GN, Eq.~\ref{eq:9} and \ref{eq:10} for 90 epochs, and we save the gradients and the weights of the first training iteration of each epoch. The right figure of Fig.~\ref{fig:mean_div} shows the average percentages of $\frac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2$, $\frac{1}{I} \langle\mathbf{1}, \nabla_{\hat{{\vect{W}}}_{c,\cdot}}\mathcal{L}\rangle^2$, and $\sigma^2_{\vect{W}_{c,\cdot}}\lVert \nabla_{\vect{W}_{c,\cdot}}\mathcal{L} \rVert^2$. From the right figure we can see that $\frac{1}{I}\big\langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \big\rangle^2$ is small compared with the other two components ($<0.02$).
In other words, although Eq.~\ref{eq:10} decreases the gradient norm regardless of the statistics of the weights and gradients, its real effect is limited due to the distribution of $\hat{\vect{W}}_{c,\cdot}$ and $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$. Nevertheless, from the left figures we can see that Eq.~\ref{eq:10} still improves the training. Since Eq.~\ref{eq:10} requires very little computation, we will keep it in WS. From the experiments above, we observe that the training speed boost is mainly due to Eq.~\ref{eq:9}. As the effect of Eq.~\ref{eq:10} is limited, in this section we only study the effect of Eq.~\ref{eq:9} on the Hessians of $\mathcal{L}$ with respect to $\vect{W}_{c,\cdot}$ and $\dot{\vect{W}}_{c,\cdot}$. Here, we will show that Eq.~\ref{eq:9} decreases the Frobenius norm of the Hessian matrix of the weights, \textit{i.e.}, $\lVert \nabla^2_{\vect{W}_{c,\cdot}}\mathcal{L} \rVert_{F} \leq \lVert \nabla^2_{\dot{\vect{W}}_{c,\cdot}}\mathcal{L} \rVert_{F}$. With a smaller Frobenius norm, the gradients of $\vect{W}_{c,\cdot}$ are more predictable; thus the loss is smoother and easier to optimize. We use $\vect{H}$ and $\dot{\vect{H}}$ to denote the Hessian matrices of $\mathcal{L}$ with respect to $\vect{W}_{c,\cdot}$ and $\dot{\vect{W}}_{c,\cdot}$, respectively, \textit{i.e.}, \begin{equation} \vect{H}_{i,j} = \dfrac{\partial^2\mathcal{L}}{\partial\vect{W}_{c,i}\partial\vect{W}_{c,j}}, ~~~\dot{\vect{H}}_{i,j} = \dfrac{\partial^2\mathcal{L}}{\partial\vect{\dot{W}}_{c,i}\partial\vect{\dot{W}}_{c,j}}. \end{equation} We first derive the relationship between $\vect{H}_{i,j}$ and $\dot{\vect{H}}_{i,j}$: \begin{equation} \vect{H}_{i,j} = \dot{\vect{H}}_{i,j} - \dfrac{1}{I}\sum_{k=1}^I(\dot{\vect{H}}_{i,k} + \dot{\vect{H}}_{k,j}) + \dfrac{1}{I^2}\sum_{p=1}^I\sum_{q=1}^I\dot{\vect{H}}_{p,q}. \end{equation} Note that \begin{equation} \sum_{i=1}^I\sum_{j=1}^I\vect{H}_{i,j}=0. \end{equation} Therefore, Eq.~\ref{eq:9} not only zero-centers the feedforwarding outputs and the back-propagated gradients, but also the entries of the Hessian matrix. Next, we compute its squared Frobenius norm: \begin{align} \begin{split} \lVert \vect{H} \rVert^2_{F} =& \sum_{i=1}^I\sum_{j=1}^I\vect{H}^2_{i,j} \\ =& \lVert \vect{\dot{H}} \rVert^2_{F} + \dfrac{1}{I^2}\big(\sum_{i=1}^I\sum_{j=1}^I\vect{\dot{H}}_{i,j}\big)^2 \\ &- \dfrac{1}{I}\sum_{i=1}^I\Big(\sum_{j=1}^I \vect{\dot{H}}_{i,j}\Big)^2 - \dfrac{1}{I}\sum_{j=1}^I\Big(\sum_{i=1}^I \vect{\dot{H}}_{i,j}\Big)^2 \\ \leq & \lVert \vect{\dot{H}} \rVert^2_{F} - \dfrac{1}{I^2} \big(\sum_{i=1}^I\sum_{j=1}^I\vect{\dot{H}}_{i,j}\big)^2\label{eq:27}. \end{split} \end{align} As shown in Eq.~\ref{eq:27}, Eq.~\ref{eq:9} reduces the squared Frobenius norm of the Hessian matrix by at least $\big(\sum_{i=1}^I\sum_{j=1}^I\vect{\dot{H}}_{i,j}\big)^2/I^2$, which makes the gradients more predictable than directly optimizing on the weights of the convolutional layer. \subsection{Connections to constrained optimization} WS imposes constraints on the weights $\hat{\vect{W}}_{c,\cdot}$ such that \begin{equation}\label{eq:pgd_constraint} \sum_{i=1}^I \hat{\vect{W}}_{c,i}=0,~~~\sum_{i=1}^I \hat{\vect{W}}^2_{c,i}=I,~~~\forall c. \end{equation} Therefore, an alternative to the proposed WS is to consider the problem as constrained optimization and use Projected Gradient Descent (PGD) to find the solution.
The update rule for PGD can be written as \begin{align}\label{eq:pgd_update} \hat{\vect{W}}^{t+1}_{c,i} = \texttt{Proj}\big(\hat{\vect{W}}^t_{c,i} - \epsilon\cdot\nabla_{\hat{\vect{W}}^t_{c,i}}\mathcal{L}\big), \end{align} where $\texttt{Proj}(\cdot)$ denotes the projection function and $\epsilon$ denotes the learning rate. To satisfy Eq.~\ref{eq:pgd_constraint}, $\texttt{Proj}(\cdot)$ standardizes its input. We can approximate the right-hand side of Eq.~\ref{eq:pgd_update} by minimizing the \textbf{Lagrangian} of the loss function $\mathcal{L}$, which yields \begin{align}\label{eq:pgd_update_lagrangian} \hat{\vect{W}}^{t+1}_{c,i}&\approx \hat{\vect{W}}^t_{c,i} - \epsilon\Big(\nabla_{\hat{\vect{W}}^t_{c,i}}\mathcal{L} - \dfrac{1}{I}\langle \hat{\vect{W}}_{c,\cdot},\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle\hat{\vect{W}}_{c,i}\\\nonumber&-\frac{1}{I}\langle \mathbf{1}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}\rangle\Big). \end{align} Different from Eq.~\ref{eq:pgd_update}, the update rule of WS is \begin{align}\label{eq:ws_update} \begin{split} \hat{\vect{W}}^{t+1}_{c,i} =& \texttt{Proj}\Big( \vect{W}^{t}_{c,i} - \epsilon\cdot\nabla_{{\vect{W}}^t_{c,i}}\mathcal{L}\Big) \\ =& \texttt{Proj}\Big( \vect{W}^{t}_{c,i} - \dfrac{\epsilon}{\sigma_{\vect{W}_{c,\cdot}}} \big( \nabla_{\hat{\vect{W}}_{c,i}}\mathcal{L} - \dfrac{1}{I} \langle \hat{\vect{W}}_{c,\cdot}, \\ &\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle \hat{\vect{W}}_{c,i} - \dfrac{1}{I} \langle \mathbf{1}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle \big) \\ &-\dfrac{\epsilon}{I^2\cdot\sigma_{\vect{W}_{c,\cdot}}} \big\langle \mathbf{1}, \langle \hat{\vect{W}}_{c,\cdot}, \nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L} \rangle \hat{\vect{W}}_{c,\cdot} \big\rangle \Big). \end{split} \end{align} Eq.~\ref{eq:ws_update} is more complex than Eq.~\ref{eq:pgd_update_lagrangian}, but the increased complexity is negligible compared with the cost of training deep networks. For simplicity, Eq.~\ref{eq:ws_update} reuses $\texttt{Proj}$ to denote the standardization process, even though WS uses stochastic gradient descent on $\vect{W}$ rather than projected gradient descent to optimize the weights.
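The gradient formulas in Eq.~\ref{eq:12} and \ref{eq:13}, which also appear inside Eq.~\ref{eq:ws_update}, can be verified numerically. Below is a small sketch of ours that checks them against PyTorch autograd for a single output channel, using a linear loss so that $\nabla_{\hat{\vect{W}}_{c,\cdot}}\mathcal{L}$ equals a fixed vector $g$:

\begin{verbatim}
import torch

I = 64                                           # fan-in of one output channel
W = torch.randn(I, dtype=torch.double, requires_grad=True)
g = torch.randn(I, dtype=torch.double)           # plays the role of dL/dW_hat

W_dot = W - W.mean()                             # Eq. 9
W_hat = W_dot / torch.sqrt((W_dot ** 2).mean())  # Eq. 10, with epsilon = 0
(g * W_hat).sum().backward()                     # linear loss in W_hat

sigma = torch.sqrt((W_dot ** 2).mean()).detach()
w_hat = W_hat.detach()
grad_dot = (g - (w_hat * g).mean() * w_hat) / sigma  # Eq. 12
grad_W = grad_dot - grad_dot.mean()                  # Eq. 13
assert torch.allclose(W.grad, grad_W)            # agrees to double precision
\end{verbatim}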
\section{Introduction} Word embeddings quickly achieved wide adoption in natural language processing (NLP), precipitating the development of efficient, word-level neural models of human language. However, prominent word embeddings such as word2vec~\cite{mikolov2013distributed} and GloVe~\cite{pennington2014glove} encode systematic biases against women and black people~\citep[][i.a.]{bolukbasi2016man,garg2018word}, implicating many NLP systems in scaling up social injustice. We investigate whether sentence encoders, which extend the word embedding approach to sentences, are similarly biased.\footnote{ While encoder training data may contain perspectives from outside the U.S., we focus on biases in U.S.\ contexts. } The previously developed Word Embedding Association Test~\citep[WEAT; ][]{caliskan2017semantics} measures bias in word embeddings by comparing two sets of target-concept words to two sets of attribute words. We propose a simple generalization of WEAT to phrases and sentences: the Sentence Encoder Association Test (SEAT). We apply SEAT to sentences generated by inserting individual words from \citeauthor{caliskan2017semantics}'s tests into simple templates such as ``This is a[n] $<$word$>$.'' To demonstrate the new potential of a sentence-level approach and advance the discourse on bias in NLP, we also introduce tests of two biases that are less amenable to word-level representation: the \emph{angry black woman} stereotype~\cite{collins2004black,madison2009crazy,harrisperry2011sister,hooks2015aint,gillespie2016race} and a \emph{double bind} on women in professional settings~\cite{heilman2004penalties}. The use of sentence-level contexts also facilitates testing the impact of different experimental designs. For example, several of \citeauthor{caliskan2017semantics}'s tests rely on given names associated with European American and African American people or rely on terms referring to women and men as groups (such as ``woman'' and ``man''). We explore the effect of using given names versus group terms by creating alternate versions of several bias tests that swap the two. This is not generally feasible with WEAT, as categories like \emph{African Americans} lack common single-word group terms. We find varying evidence of human-like bias in sentence encoders using SEAT. Sentence-to-vector encoders largely exhibit the angry black woman stereotype and Caliskan biases, and to a lesser degree the double bind biases. Recent sentence encoders such as BERT \citep{devlin2018bert} display limited evidence of the tested biases. However, while SEAT can confirm the existence of bias, negative results do not indicate the model is bias-free. Furthermore, discrepancies in the results suggest that the confirmed biases may not generalize beyond the specific words and sentences in our test data, and in particular that cosine similarity may not be a suitable measure of representational similarity in recent models, indicating a need for alternate bias detection techniques. 
\begin{table}[t] \small \centering \begin{tabularx}{\linewidth}{p{.5\linewidth}p{.40\linewidth}} \toprule \textbf{Target Concepts} & \textbf{Attributes} \\ \midrule \textit{European American names}: Adam, Harry, Nancy, Ellen, Alan, Paul, Katie, \dots & \textit{Pleasant}: love, cheer, miracle, peace, friend, happy, \dots \\ \midrule \textit{African American names}: Jamel, Lavar, Lavon, Tia, Latisha, Malika, \dots & \textit{Unpleasant}: ugly, evil, abuse, murder, assault, rotten, \dots \\ \bottomrule \end{tabularx} \caption{Subsets of target concepts and attributes from Caliskan Test 3. Concept and attribute names are in italics. The test compares the strength of association between the two target concepts and two attributes, where all four are represented as sets of words. } \label{tab:caliskan-examples} \end{table} \begin{table}[t] \small \centering \begin{tabularx}{\linewidth}{p{.5\linewidth}p{.4\linewidth}} \toprule \textbf{Target Concepts} & \textbf{Attributes} \\ \midrule \textit{European American names}: ``This is Katie.'', ``This is Adam.'' ``Adam is there.'', \dots & \textit{Pleasant}: ``There is love.'', ``That is happy.'', ``This is a friend.'', \dots \\ \midrule \textit{African American names}: ``Jamel is here.'', ``That is Tia.'', ``Tia is a person.'', \dots & \textit{Unpleasant}: ``This is evil.'', ``They are evil.'', ``That can kill.'', \dots \\ \bottomrule \end{tabularx} \caption{Subsets of target concepts and attributes from the bleached sentence version of Caliskan Test 3. } \label{tab:caliskan-sent-examples} \end{table} \section{Methods} \paragraph{The Word Embedding Association Test} WEAT imitates the human implicit association test \citep{greenwald1998measuring} for word embeddings, measuring the association between two sets of target concepts and two sets of attributes. Let $X$ and $Y$ be equal-size sets of target concept embeddings and let $A$ and $B$ be sets of attribute embeddings. The test statistic is a difference between sums over the respective target concepts, \begin{align*} s(X, Y, A, B) = \big[ &\textstyle{\sum}_{x \in X} s(x, A, B) - \\ &\textstyle{\sum}_{y \in Y} s(y, A, B) \big], \end{align*} where each addend is the difference between mean cosine similarities of the respective attributes, \begin{align*} s(w, A, B) = \big[ &\mathrm{mean}_{a \in A} \cos(w, a) - \\ &\mathrm{mean}_{b \in B} \cos(w, b) \big] \end{align*} A permutation test on $s(X, Y, A, B)$ is used to compute the significance of the association between $(A, B)$ and $(X, Y)$, \begin{align*} p = \Pr \left[s(X_i, Y_i, A, B) > s(X, Y, A, B)\right], \end{align*} where the probability is computed over the space of partitions $(X_i, Y_i)$ of $X \cup Y$ such that $X_i$ and $Y_i$ are of equal size, and a normalized difference of means of $s(w, A, B)$ is used to measure the magnitude of the association~\citep[the effect size; ][]{caliskan2017semantics}, \begin{align*} d = \frac{ \ensuremath{\mathrm{mean}}_{x \in X} s(x, A, B) - \ensuremath{\mathrm{mean}}_{y \in Y} s(y, A, B) }{ \ensuremath{\mathrm{std\_dev}}_{w \in X \cup Y} s(w, A, B) }. \end{align*} Controlling for significance, a larger effect size reflects a more severe bias. We detail our implementations in the supplement. \paragraph{The Sentence Encoder Association Test} SEAT compares sets of sentences, rather than sets of words, by applying WEAT to the vector representation of a sentence. 
Because SEAT operates on fixed-sized vectors and some encoders produce variable-length vector sequences, we use pooling as needed to aggregate outputs into a fixed-sized vector. We can view WEAT as a special case of SEAT in which the sentence is a single word. In fact, the original WEAT tests have been run on the Universal Sentence Encoder~\cite{cer2018universal}. To extend a word-level test to sentence contexts, we slot each word into each of several semantically bleached sentence templates such as ``This is $<$word$>$.'', ``$<$word$>$ is here.'', ``This will $<$word$>$.'', and ``$<$word$>$ are things.''. These templates make heavy use of deixis and are designed to convey little specific meaning beyond that of the terms inserted into them.\footnote{ See the supplement for further details and examples. } For example, the word version of Caliskan Test 3 is illustrated in Table~\ref{tab:caliskan-examples} and the sentence version is illustrated in Table~\ref{tab:caliskan-sent-examples}. We choose this design to focus on the associations a sentence encoder makes with a given term rather than those it happens to make with the contexts of that term that are prevalent in the training data; a similar design was used in a recent sentiment analysis evaluation corpus stratified by race and gender~\cite{kiritchenko2018examining}. To facilitate future work, we publicly release code for SEAT and all of our experiments.\footnote{ \url{http://github.com/W4ngatang/sent-bias} } \section{Biases Tested} \paragraph{Caliskan Tests} We first test whether the sentence encoders reproduce the same biases that word embedding models exhibited in \citet{caliskan2017semantics}. These biases correspond to past social psychology studies of implicit associations in human subjects.\footnote{ See \citet{greenwald2009understanding} for a review of this work. } We apply both the original word-level versions of these tests as well as our generated sentence-level versions. \paragraph{Angry Black Woman Stereotype} In the \emph{Sapphire} or \emph{angry black woman (ABW)} stereotype, black women are portrayed as loud, angry, and imposing~\cite{collins2004black,madison2009crazy,harrisperry2011sister,hooks2015aint,gillespie2016race}. This stereotype contradicts common associations made with the ostensibly race-neutral (unmarked) category of \textit{women}~\cite{bem1974measurement}, suggesting that that category is implicitly white. \emph{Intersectionality} reveals that experiences considered common to women are not necessarily shared by black women, who are marginalized both among women and among black people~\cite{crenshaw1989demarginalizing}. Recently, intersectionality has been demonstrated in English Wikipedia using distributional semantic word representations~\cite{herbelot2012distributional}, and in the disparate error rates of machine learning technologies like face recognition~\cite{buolamwini2018gender}. To measure sentence encoders' reproduction of the angry black woman stereotype, we create a test whose target concepts are black-identifying and white-identifying female given names from \citet[Table 1]{sweeney2013discrimination} and whose attributes are adjectives used in the discussion of the stereotype in \citet[pp. 87-90]{collins2004black} and their antonyms. We also produce a version of the test with attributes consisting of terms describing black women and white women as groups, as well as sentence versions in which attribute and target concept terms are inserted in sentence templates. 
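In code, the test statistic, effect size, and permutation test described above reduce to a few lines of array arithmetic. The following NumPy sketch is ours (the released \texttt{sent-bias} implementation differs in its details); the rows of \texttt{X}, \texttt{Y}, \texttt{A}, \texttt{B} are word embeddings for WEAT, or pooled sentence representations for SEAT:

\begin{verbatim}
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s_word(w, A, B):
    # s(w, A, B): difference of mean cosine similarity to the attribute sets
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    # d: normalized difference of means of s(w, A, B) over the target sets
    s_all = [s_word(w, A, B) for w in np.vstack([X, Y])]
    return (np.mean([s_word(x, A, B) for x in X])
            - np.mean([s_word(y, A, B) for y in Y])) / np.std(s_all)

def p_value(X, Y, A, B, n_samples=10000, seed=0):
    # Permutation test over equal-size partitions (X_i, Y_i) of the union
    rng = np.random.default_rng(seed)
    stat = sum(s_word(x, A, B) for x in X) - sum(s_word(y, A, B) for y in Y)
    XY, n = np.vstack([X, Y]), len(X)
    hits = 0
    for _ in range(n_samples):
        idx = rng.permutation(len(XY))
        s_i = sum(s_word(w, A, B) for w in XY[idx[:n]])
        s_j = sum(s_word(w, A, B) for w in XY[idx[n:]])
        hits += (s_i - s_j) > stat
    return hits / n_samples
\end{verbatim}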
\paragraph{Double Binds} Women face many \emph{double binds}, contradictory or unsatisfiable expectations of femininity and masculinity~\cite{stone2004fasttrack,harrisperry2011sister,mitchell2012raunch}. If women clearly succeed in a male gender-typed job, they are perceived less likable and more hostile than men in similar positions; if success is ambiguous, they are perceived less competent and achievement-oriented than men. Both outcomes can interfere in performance evaluations~\cite{heilman2004penalties}, contributing to the \emph{glass ceiling} impeding women's career advancement.\footnote{ See \citet{schluter2018glass} for a recent exposition of the glass ceiling in the NLP research community. } We test this double bind in sentence encoders by translating \citeauthor{heilman2004penalties}'s experiment to two SEAT tests. In the first, we represent the two target concepts by names of women and men, respectively, in the single sentence template ``$<$word$>$ is an engineer with superior technical skills.''; the attributes are \emph{likable} and \emph{non-hostile} terms, based on \citeauthor{heilman2004penalties}'s design, in the sentence template ``The engineer is $<$word$>$.'' In the second, we use the shortened target concept sentence template ``$<$word$>$ is an engineer'' and fill the attribute templates from before with \emph{competent} and \emph{achievement-oriented} terms based on \citeauthor{heilman2004penalties}'s design.\footnote{ We consider other formulations in the supplement. } We refer to these tests as semantically \emph{unbleached} because the context contains important information about the bias. We produce two variations of these tests: word-level tests in which target concepts are names in isolation and attributes are adjectives in isolation, as well as corresponding semantically bleached sentence-level tests. These control conditions allow us to probe the extent to which observed associations are attributable to gender independent of context. \section{Experiments and Results} We apply SEAT to seven sentence encoders (listed in Table~\ref{tab:models}) including simple bag-of-words encoders, sentence-to-vector models, and state-of-the-art sequence models.\footnote{ We provide further details and explore variations on these model configurations in the supplement. } For all models, we use publicly available pretrained parameters. 
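Constructing the sentence-level stimuli and the fixed-size representations they require is mechanical. The sketch below is ours and purely illustrative: the template set follows the examples given earlier, and \texttt{encoder} stands in for any model that maps a sentence to a sequence of token vectors, aggregated as in the ``Agg.'' column of Table~\ref{tab:models}:

\begin{verbatim}
import numpy as np

TEMPLATES = ["This is {w}.", "{w} is here.",
             "This will {w}.", "{w} are things."]

def bleach(words):
    # Slot each term into each semantically bleached template.
    return [t.format(w=w) for w in words for t in TEMPLATES]

def embed(sentences, encoder, agg="mean"):
    # encoder: callable mapping a sentence to a (seq_len, dim) array
    # of token states; agg mirrors the aggregation column of the table.
    reps = []
    for s in sentences:
        h = np.asarray(encoder(s))
        if agg == "mean":
            reps.append(h.mean(axis=0))
        elif agg == "max":
            reps.append(h.max(axis=0))
        else:            # "last"; for BERT, take the [CLS] position instead
            reps.append(h[-1])
    return np.vstack(reps)
\end{verbatim}

The matrices returned by \texttt{embed} can then be passed directly to the effect size and permutation test routines sketched above.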
\begin{table}[t] \small \centering \renewcommand{\arraystretch}{1.25} \begin{tabular}{p{.66\linewidth} p{.08\linewidth} p{.08\linewidth}} \toprule \textbf{Model} & \textbf{Agg.} & \textbf{Dim.} \\\midrule CBoW (GloVe), 840 billion token web corpus version~\cite{pennington2014glove} & \texttt{mean} & 300 \\ InferSent, AllNLI~\cite{conneau2017supervised} & \texttt{max} & 4096 \\ GenSen, +STN +Fr +De +NLI +L +STP +Par~\cite{subramanian2018learning} & \texttt{last} & 4096 \\ Universal Sentence Encoder (USE), DAN version \citep{cer2018universal} & N/A & 512 \\ ELMo~\cite{peters2018deep}, sum over layers after mean-pooling over sequence & \texttt{mean} & 1024 \\ GPT~\cite{radford2018improving} & \texttt{last} & 768 \\ BERT, large, cased~\cite{devlin2018bert} & \texttt{[CLS]} & 1024 \\ \bottomrule \end{tabular} \caption{Models tested (disambiguated with notation from cited paper), aggregation functions applied across token representations, and representation dimensions.} \label{tab:models} \end{table} \begin{table*}[t] \small \centering \begin{tabular}{llrrrrrrr} \toprule \textbf{Test} & \textbf{Context} & \textbf{CBoW} & \textbf{InferSent} & \textbf{GenSen} & \textbf{USE}~~~ & \textbf{ELMo} & \textbf{GPT}~~~ & \textbf{BERT}~ \\ \midrule C1: Flowers/Insects & word & $1.50^{**}$ & $1.56^{**}$ & $1.24^{**}$ & $1.38^{**}$ & $-0.03\phantom{^{**}}$ & $0.20\phantom{^{**}}$ & $0.22\phantom{^{**}}$ \\ C1: Flowers/Insects & sent & $1.56^{**}$ & $1.65^{**}$ & $1.22^{**}$ & $1.38^{**}$ & $0.42^{**}$ & $0.81^{**}$ & $0.62^{**}$ \\ C3: EA/AA Names & word & $1.41^{**}$ & $1.33^{**}$ & $1.32^{**}$ & $0.52\phantom{^{**}}$ & $-0.40\phantom{^{**}}$ & $0.60^{*\phantom{*}}$ & $-0.11\phantom{^{**}}$ \\ C3: EA/AA Names & sent & $0.52^{**}$ & $1.07^{**}$ & $0.97^{**}$ & $0.32^{*\phantom{*}}$ & $-0.38\phantom{^{**}}$ & $0.19\phantom{^{**}}$ & $0.05\phantom{^{**}}$ \\ C6: M/F Names, Career & word & $1.81^{*\phantom{*}}$ & $1.78^{*\phantom{*}}$ & $1.84^{*\phantom{*}}$ & $0.02\phantom{^{**}}$ & $-0.45\phantom{^{**}}$ & $0.22\phantom{^{**}}$ & $0.21\phantom{^{**}}$ \\ C6: M/F Names, Career & sent & $1.74^{**}$ & $1.69^{**}$ & $1.63^{**}$ & $0.83^{**}$ & $-0.38\phantom{^{**}}$ & $0.35\phantom{^{**}}$ & $0.08\phantom{^{**}}$ \\ ABW Stereotype & word & $1.10^{*\phantom{*}}$ & $1.18^{*\phantom{*}}$ & $1.57^{**}$ & $-0.39\phantom{^{**}}$ & $0.53\phantom{^{**}}$ & $0.08\phantom{^{**}}$ & $-0.32\phantom{^{**}}$ \\ ABW Stereotype & sent & $0.62^{**}$ & $0.98^{**}$ & $1.05^{**}$ & $-0.19\phantom{^{**}}$ & $0.52^{*\phantom{*}}$ & $-0.07\phantom{^{**}}$ & $-0.17\phantom{^{**}}$ \\ Double Bind: Competent & word & $1.62^{*\phantom{*}}$ & $1.09\phantom{^{**}}$ & $1.49^{*\phantom{*}}$ & $1.51^{*\phantom{*}}$ & $-0.35\phantom{^{**}}$ & $-0.28\phantom{^{**}}$ & $-0.81\phantom{^{**}}$ \\ Double Bind: Competent & sent & $0.79^{**}$ & $0.57^{*\phantom{*}}$ & $0.83^{**}$ & $0.25\phantom{^{**}}$ & $-0.15\phantom{^{**}}$ & $0.10\phantom{^{**}}$ & $0.39\phantom{^{**}}$ \\ Double Bind: Competent & sent (u) & $0.84\phantom{^{**}}$ & $1.42^{*\phantom{*}}$ & $1.03\phantom{^{**}}$ & $0.71\phantom{^{**}}$ & $0.20\phantom{^{**}}$ & $0.71\phantom{^{**}}$ & $1.17^{*\phantom{*}}$ \\ Double Bind: Likable & word & $1.29^{*\phantom{*}}$ & $0.65\phantom{^{**}}$ & $1.31^{*\phantom{*}}$ & $0.16\phantom{^{**}}$ & $-0.60\phantom{^{**}}$ & $0.91\phantom{^{**}}$ & $-0.55\phantom{^{**}}$ \\ Double Bind: Likable & sent & $0.69^{*\phantom{*}}$ & $0.37\phantom{^{**}}$ & $0.25\phantom{^{**}}$ & $0.32\phantom{^{**}}$ & $-0.45\phantom{^{**}}$ & $-0.20\phantom{^{**}}$ & 
$-0.35\phantom{^{**}}$ \\ Double Bind: Likable & sent (u) & $0.51\phantom{^{**}}$ & $1.33^{*\phantom{*}}$ & $0.05\phantom{^{**}}$ & $0.48\phantom{^{**}}$ & $-0.90\phantom{^{**}}$ & $-0.87\phantom{^{**}}$ & $0.99\phantom{^{**}}$ \\ \bottomrule \end{tabular} \caption{SEAT effect sizes for select tests, including word-level (word), bleached sentence-level (sent), and unbleached sentence-level (sent (u)) versions. C$N$: test from \citet[Table 1]{caliskan2017semantics} row $N$; *: significant at \num{0.01}, **: significant at \num{0.01} after multiple testing correction.} \label{tab:effect-sizes} \end{table*} Table~\ref{tab:effect-sizes} shows effect size and significance at 0.01 before and after applying the Holm-Bonferroni multiple testing correction~\cite{holm1979simple} for a subset of tests and models; complete results are provided in the supplement.\footnote{ We use the full set of tests and models when computing the multiple testing correction, including those only presented in the supplement. } Specifically, we select Caliskan Test 1 associating flowers/insects with pleasant/unpleasant, Test 3 associating European/African American names with pleasant/unpleasant, and Test 6 associating male/female names with career/family, as well as the angry black woman stereotype and the competent and likable double bind tests. We observe that tests based on given names more often find a significant association than those based on group terms; we only show the given-name results here. We find varying evidence of bias in sentence encoders according to these tests. Bleached sentence-level tests tend to elicit more significant associations than word-level tests, while the latter tend to have larger effect sizes. We find stronger evidence for the Caliskan and ABW stereotype tests than for the double bind. After the multiple testing correction, we only find evidence of the double bind in bleached, sentence-level \emph{competent} control tests; that is, we find women are associated with incompetence independent of context.\footnote{ However, the double bind results differ across models; we show no significant associations for ELMo or GPT and only one each for USE and BERT. } Some patterns in the results cast doubt on the reasonableness of SEAT as an evaluation. For instance, Caliskan Test 7 (association between \textit{math/art} and \textit{male/female}) and Test 8 (\textit{science/art} and \textit{male/female}) elicit counterintuitive results from several models. These tests have the same sizes of target concept and attribute sets. For CBoW on the word versions of those tests, we see $p$-values of 0.016 and $10^{-2}$, respectively. On the sentence versions, we see $p$-values of $10^{-5}$ for both tests. Observing similar $p$-values agrees with intuition: The \textit{math/art} association should be similar to the \textit{science/art} association because they instantiate a disciplinary dichotomy between \textit{math/science} and \textit{arts/language}~\cite{nosek2002math}. However, for BERT on the sentence version, we see discrepant $p$-values of $10^{-5}$ and 0.14; for GenSen, 0.12 and $10^{-3}$; and for GPT, 0.89 and $10^{-4}$. Caliskan Tests 3, 4, and 5 elicit even more counterintuitive results from ELMo. These tests measure the association between \textit{European American/African American} and \textit{pleasant/unpleasant}. Test 3 has larger attribute sets than Test 4, which has larger target concept sets than Test 5. 
Intuitively, we expect increasing $p$-values across Tests 3, 4, and 5, as well-designed target concepts and attributes of larger sizes should yield higher-power tests. Indeed, for CBoW, we find increasing $p$-values of $10^{-5}$, $10^{-5}$, and $10^{-4}$ on the word versions of the tests and $10^{-5}$, $10^{-5}$, and $10^{-2}$ on the sentence versions, respectively.\footnote{ Our SEAT implementation uses sampling with a precision of $10^{-5}$, so $10^{-5}$ is the smallest $p$-value we can observe. } However, for ELMo, we find \emph{decreasing} $p$-values of \num{0.95}, \num{0.45}, and \num{0.08} on the word versions of the tests and \num{1}, \num{0.97}, and $10^{-4}$ on the sentence versions. We interpret these results as ELMo producing substantially different representations for conceptually similar words. Thus, SEAT's assumption that the sentence representations of each target concept and attribute instantiate a coherent concept appears invalid. \section{Conclusion} At face value, our results suggest recent sentence encoders exhibit less bias than previous models do, at least when ``bias'' is considered from a U.S.\ perspective and measured using the specific tests we have designed. However, we strongly caution against interpreting the number of significant associations or the average significant effect size as an absolute measure of bias. Like WEAT, SEAT only has positive predictive ability: It can detect presence of bias, but not its absence. Considering that these representations are trained without explicit bias control mechanisms on naturally occurring text, we argue against interpreting a lack of evidence of bias as a lack of bias. Moreover, the counterintuitive sensitivity of SEAT on some models and biases suggests that biases revealed by SEAT may not generalize beyond the specific words and sentences in our test data. That is, our results invalidate the assumption that each set of words or sentences in our tests represents a coherent concept/attribute (like \emph{African American} or \emph{pleasant}) to the sentence encoders; hence, we do not assume the encoders will exhibit similar behavior on other potential elements of those concepts/attributes (other words or sentences representing, for example, \emph{African American} or \emph{pleasant}). One possible explanation of the observed sensitivity at the sentence level is that, from the sentence encoders' view, our sentence templates are not as \emph{semantically bleached} as we expect; small variations in their relative frequencies and interactions with the terms inserted into them may be undermining the coherence of the concepts/attributes they implement. Another possible explanation that also accounts for the sensitivity observed in the word-level tests is that cosine similarity is an inadequate measure of text similarity for sentence encoders. If this is the case, the biases revealed by SEAT may not translate to biases in downstream applications. Future work could measure bias at the application level instead, following \citet{bailey2018}'s recommendation based on the tension between descriptive and normative correctness in representations. The angry black woman stereotype represents an \emph{intersectional} bias, a phenomenon not well anticipated by an additive model of racism and sexism~\cite{crenshaw1989demarginalizing}. 
Previous work has modeled biases at the intersection of race and gender in distributional semantic word representations~\cite{herbelot2012distributional}, natural language inference data~\cite{rudinger2017social}, and facial recognition systems~\cite{buolamwini2018gender}, as well as at the intersection of dialect and gender in automatic speech recognition~\cite{tatman2017gender}. We advocate for further consideration of intersectionality in future work in order to avoid reproducing the erasure of multiple minorities who are most vulnerable to bias. We have developed a simple sentence-level extension of an established word embedding bias instrument and used it to measure the degree to which pretrained sentence encoders capture a range of social biases, observing a large number of significant effects as well as idiosyncrasies suggesting limited external validity. This study is preliminary and leaves open to investigation several design choices that may impact the results; future work may consider revisiting choices like the use of semantically bleached sentence inputs, the aggregation applied to models that represent sentences with sequences of hidden states, and the use of cosine similarity between sentence representations. We challenge researchers of fairness and ethics in NLP to critically (re-)examine their methods; looking forward, we hope for a deeper consideration of the social contexts in which NLP systems are applied. \section*{Acknowledgments} We are grateful to Carolyn Ros\'{e}, Jason Phang, S\'{e}bastien Jean, Thibault F\'{e}vry, Katharina Kann, and Benjamin Van Durme for helpful conversations concerning this work and to our reviewers for their thoughtful feedback. CM is funded by IARPA MATERIAL; RR is funded by DARPA AIDA; AW is funded by an NSF fellowship. The U.S.\ Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of IARPA, DARPA, NSF, or the U.S.\ Government.
\section{Introduction} Research on adversarial examples is motivated by a spectrum of questions. These range from the security of models deployed in the presence of real-world adversaries to the need to capture limitations of representations and their (in)ability to generalize~\citep{GilmerMotivating2018}. The broadest accepted definition of an adversarial example is ``an input to a ML model that is intentionally designed by an attacker to fool the model into producing an incorrect output''~\citep{goodfellow2017attacking}. To enable concrete progress, many definitions of adversarial examples have been introduced in the literature since their initial discovery~\citep{szegedy2013intriguing,biggio2013evasion}. In a majority of work, adversarial examples are commonly formalized as adding a perturbation $\delta$ to some test example $x$ to obtain an input $x^*$ that produces an incorrect model outcome.\footnote{Here, an incorrect output either refers to the model returning any class different from the original \textit{source} class of the input, or a specific \textit{target} class chosen by the adversary prior to searching for a perturbation.} We refer to this entire class of malicious inputs as \textit{perturbation-based adversarial examples}. The adversary's capabilities may optionally be constrained by placing a bound on the maximum perturbation $\delta$ added to the original input (e.g., using an $\ell_p$ norm). Achieving robustness to perturbation-based adversarial examples, in particular when they are constrained using $\ell_p$ norms, is often cast as a problem of learning a model that is uniformly continuous: the defender wishes to prove that for all $\delta>0$ and for some $\varepsilon>0$, all pairs of points $(x, x^*)$ with $\|x-x^*\|\leq \varepsilon$ satisfy $\|G(x)-G(x^*)\| \leq \delta$ (where $G$ denotes the classifier's logits). Different papers take different approaches to achieving this result, ranging from robust optimization~\citep{madry2017towards} to training models to have small Lipschitz constants~\citep{cisse2017parseval} to models that are provably robust to small $\ell_p$ perturbations~\citep{kolter2017provable,raghunathan2018certified}. \begin{figure} \vspace{-5mm} \centering \includegraphics[width=1.\linewidth]{ims/fig1_rev_3.pdf} \caption{[Left]: When training a classifier without constraints, we may end up with a decision boundary that is not robust to perturbation-based adversarial examples. [Right]: However, enforcing robustness to norm-bounded perturbations introduces erroneous invariance (dashed regions in epsilon spheres). This excessive invariance of the perturbation-robust model in task-relevant directions may be exploited, as shown by the attack proposed in this paper.} \label{fig:myfig} \end{figure} In this paper we present analytical results that show how optimizing for uniform continuity is not only insufficient to address the lack of generalization identified through adversarial examples, but also potentially harmful. Our intuition, captured in Figure~\ref{fig:myfig}, relies on the inability of $\ell_p$-norms (or any other distance metric that does not perfectly capture semantics) to capture the geometry of ideal decision boundaries. This leads us to present analytical constructions and empirical evidence that robustness to perturbation-based adversaries can increase the vulnerability of models to other types of adversarial examples. Our argument relies on the existence of \textit{invariance-based adversarial examples}~\citep{jacobsen2018excessive}.
Rather than perturbing the input to change the classifier's output, they modify input semantics while keeping the decision of the classifier \textit{identical}. In other words, the vulnerability exploited by invariance-based adversarial examples is a \textit{lack} of sensitivity in directions relevant to the task: the model's consistent prediction does not reflect the change in the input's true label. Our analytical work exposes a complex relationship between perturbation-based and invariance-based adversarial examples. We construct a model that is robust to perturbation-based adversarial examples but not to invariance-based adversarial examples. We then demonstrate how an imperfect model for the adversarial spheres task proposed by~\cite{gilmer2018adversarial} is vulnerable to either perturbation-based or invariance-based attacks---depending on whether the point attacked is on the inner or outer sphere. Hence, at least these two types of adversarial examples are needed to fully account for model failures (more vulnerabilities may be discovered at a later point). To demonstrate the practicality of our argument, we then consider vision models with state-of-the-art robustness to $\ell_p$-norm adversaries. We introduce an algorithmic approach for finding invariance-based adversarial examples. Our attacks are model-agnostic and generate $\ell_0$ and $\ell_\infty$ invariance adversarial examples, succeeding at changing the underlying classification (as determined by a human study) in $55\%$ and $21\%$ of cases, respectively. When $\ell_p$-robust models classify the successful attacks, they achieve under $58\%$ (respectively, $5\%$) agreement with the human label. Perhaps one of the most interesting aspects of our work is to show that different classes of current classifiers' limitations fall under the same umbrella term of adversarial examples. Despite this common terminology, each of these limitations may stem from different shortcomings of learning that have non-trivial relationships. To be clear, developing $\ell_p$-norm perturbation-robust classifiers is a useful benchmark task. However, as our paper demonstrates, perturbations are not the only way classifiers may make mistakes, \emph{even within the $\ell_p$ ball}. Hence, we argue that the community will benefit from working with a series of definitions that precisely taxonomize adversarial examples. \section{Defining Perturbation-based and Invariance-based Adversarial Examples} \label{sec:definitions} In order to make precise statements about adversarial examples, we begin with two definitions. \begin{definition}[Perturbation-based Adversarial Examples] \label{def:advExam} Let $G$ denote the $i$-th layer, the logits, or the argmax of the classifier. A \textbf{perturbation-based adversarial example} (or perturbation adversarial) $x^* \in \mathbb{R}^d$ corresponding to a legitimate test input $x \in \mathbb{R}^d$ fulfills: \begin{enumerate}[(i)] \item Created by adversary: $x^* \in \mathbb{R}^d$ is created by an algorithm $\mathcal{A}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ with $x \mapsto x^*$. \item Perturbation of output: $\|G(x^*) - G(x)\| > \delta$ and $\mathcal{O}(x^*) = \mathcal{O}(x)$, where the perturbation $\delta > 0$ is set by the adversary and $\mathcal{O}: \mathbb{R}^d \rightarrow \{1, \dots ,C\}$ denotes the \textbf{oracle}. \end{enumerate} Furthermore, $x^*$ is \textbf{$\epsilon$-bounded} if $\|x - x^*\| < \epsilon$, where $\|\cdot\|$ is a norm on $\mathbb{R}^d$ and $\epsilon > 0$.
\end{definition} Property (i) allows us to distinguish perturbation adversarial examples from points that are misclassified by the model without adversarial intervention. Furthermore, the above definition also incorporates adversarial perturbations designed for hidden features, as in \citet{sabour2015adversarial}, although usually the decision of the classifier $D$ (the argmax over the logits) is used as the perturbation target. Our definition also identifies $\epsilon$-bounded perturbation-based adversarial examples~\citep{harnessing_adversarial} as a specific case of unbounded perturbation-based adversarial examples. However, our analysis primarily considers the latter, which correspond to the threat model of a stronger adversary. \begin{definition}[Invariance-based Adversarial Examples] \label{def:advExamInv} Let $G$ denote the $i$-th layer, the logits, or the argmax of the classifier. An \textbf{invariance-based adversarial example} (or invariance adversarial) $x^* \in \mathbb{R}^d$ corresponding to a legitimate test input $x \in \mathbb{R}^d$ fulfills: \begin{enumerate}[(i)] \item Created by adversary: $x^* \in \mathbb{R}^d$ is created by an algorithm $\mathcal{A}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ with $x \mapsto x^*$. \item Lies in the pre-image of $G(x)$: $G(x^*) = G(x)$ and $\mathcal{O}(x) \neq \mathcal{O}(x^*)$, where $\mathcal{O}: \mathbb{R}^d \rightarrow \{1, \dots ,C\}$ denotes the \textbf{oracle}. \end{enumerate} \end{definition} As a consequence, $D(x) = D(x^*)$ also holds for invariance-based adversarial examples, where $D$ is the output of the classifier. Intuitively, adversarial perturbations cause the output of the classifier to change, while the oracle would still label the new input $x^*$ in the original source class. Whereas perturbation-based adversarial examples exploit the classifier's \textit{excessive sensitivity in task-irrelevant directions}, invariance-based adversarial examples explore the classifier's pre-image to identify \textit{excessive invariance in task-relevant directions}: the classifier's prediction is unchanged while the oracle's output differs. Briefly put, perturbation-based and invariance-based adversarial examples are complementary failure modes of the learned classifier. \section{Robustness to Perturbation-based Adversarial Examples Can Cause Invariance-based Vulnerabilities} We now investigate the relationship between the two adversarial example definitions from Section~\ref{sec:definitions}. So far, it has been unclear whether solving perturbation-based adversarial examples implies solving invariance-based adversarial examples, and vice versa. In the following, we show that this relationship is intricate and that developing models robust in only one of the two settings would be insufficient. In a general setting, invariance and stability can be uncoupled. To see this, consider a linear classifier with matrix $A$. The perturbation robustness is tightly related to forward stability (the largest singular value of $A$). On the other hand, the invariance view relates to the stability of the inverse (the smallest singular value of $A$) and to the null space of $A$. As the largest and smallest singular values are uncoupled for general matrices $A$, the relationship between both viewpoints is likely non-trivial in practice. \subsection{Building our Intuition with Extreme Uniform Continuity} In the extreme, a classifier achieving perfect uniform continuity would be a constant classifier.
Let $D: \mathbb{R}^d \rightarrow [0,1]^C$ denote a classifier with $D(x) = y^*$ for all $x\in \mathbb{R}^d$. As the classifier maps all inputs to the same output $y^*$, there exists no $x^*$ such that $D(x) \neq D(x^*)$. Thus, the model is trivially perturbation-robust (at the expense of decreased utility). On the other hand, the pre-image of $y^*$ under $D$ is the entire input space, thus $D$ is arbitrarily vulnerable to invariance-based adversarial examples. Because this toy model is a constant function over the input domain, no perturbation of an initially correctly classified input can change its prediction. This trivial model illustrates how one not only needs to control \textit{sensitivity} but also \textit{invariance} alongside \textit{accuracy} to obtain a robust model. Hence, we argue that the often-discussed tradeoff between accuracy and robustness (see~\cite{tsipras2018robustness} for a recent treatment) should in fact take into account at least three notions: accuracy, sensitivity, and invariance. This is depicted in Figure~\ref{fig:myfig}. In the following, we present arguments for why this insight can also extend to almost perfect classifiers. \begin{figure} \vspace{-0.5cm} \hspace{-.45cm} \begin{minipage}{0.152\textwidth} \centering \includegraphics[width=\textwidth]{ims/outer_sphere_attack_legend_2.jpg} \end{minipage} \hspace{-.1cm} \begin{minipage}{0.37\textwidth} \centering \includegraphics[width=\textwidth]{ims/attack_outer_sphere.pdf} \end{minipage} \hspace{-.6cm} \begin{minipage}{0.15\textwidth} \centering \includegraphics[width=\textwidth]{ims/inner_sphere_attack_legend_half_2.jpg} \end{minipage} \hspace{-.1cm} \begin{minipage}{0.37\textwidth} \centering \includegraphics[width=\textwidth]{ims/attack_inner_sphere.pdf} \end{minipage} \hspace{-.7cm} \caption{Robustness experiment on spheres with radii $R_1=1$ and $R_2=1.3$ and a max-margin classifier that does not see $n=10$ dimensions of the $d=500$ dimensional input. [Left]: Attacking points from the outer sphere with perturbation-based attacks, with accuracy dropping when increasing the upper bound on $\ell_2$-norm perturbations. [Right]: Attacking points from the inner sphere with invariance-based attacks, with accuracy dropping when increasing the upper bound on $\ell_2$-norm perturbations. Each attack has a different effect on the manifold. Red arrows indicate the only possible direction of attack for each sphere. Perturbation attacks fail on the inner sphere, while invariance attacks fail on the outer sphere. Hence, both attacks are needed for a full account of model failures. } \label{fig:spheres} \end{figure} \subsection{Comparing Invariance-based and Perturbation-based Robustness} We now show how the analysis of perturbation-based and invariance-based adversarial examples can uncover different model failures. To do so, we consider the synthetic \textit{adversarial spheres problem} of~\cite{gilmer2018adversarial}.
The goal of this synthetic task is to distinguish points from two concentric spheres (class 1: $\|x\|_2 = R_1$ and class 2: $\|x\|_2 = R_2$) with different radii $R_1$ and $R_2$. The dataset was designed such that a robust (max-margin) classifier can be formulated as: \begin{align*} D^*(x) = \sign\left(\|x\|_2 - \frac{R_1+R_2}{2}\right). \end{align*} Our analysis considers a similar, but slightly sub-optimal classifier in order to study model failures in a controlled setting: \begin{align*} D(x) = \sign\big(\|x_{1, \dots, d-n}\|_2 - b\big), \end{align*} which computes the norm of $x$ from its first $d-n$ Cartesian coordinates and outputs $-1$ (resp. $+1$) for the inner (resp. outer) sphere. The bias $b$ is chosen based on a finite training set (see Appendix \ref{app:attacksSpheres}). Even though this sub-optimal classifier reaches nearly 100$\%$ accuracy on finite test data, the model is imperfect in the presence of adversaries that operate on the manifold (i.e., produce adversarial examples that remain on one of the two spheres but are misclassified). Most interestingly, the perturbation-based and invariance-based approaches uncover different failures (see Appendix~\ref{app:attacksSpheres} for details on the attacks): \begin{itemize} \item \textbf{Perturbation-based:} All points $x$ from the outer sphere (i.e., $\|x\|_2=R_2$) can be perturbed to $x^*$, where $\mathcal{O}(x) = D(x) \neq D(x^*)$ while staying on the outer sphere (i.e., $\|x^*\|_2=R_2$). \item \textbf{Invariance-based:} All points $x$ from the inner sphere ($\|x\|_2 = R_1$) can be perturbed to $x^*$, where $D(x) = D(x^*) \neq \mathcal{O}(x^*)$, despite being in fact on the outer sphere after the perturbation has been added (i.e., $\|x^*\|_2 = R_2$). \end{itemize} In Figure~\ref{fig:spheres}, we plot the mean accuracy over points sampled either from the inner or outer sphere, as a function of the norm of the adversarial manipulation added to create perturbation-based and invariance-based adversarial examples. This illustrates how the robustness regime differs significantly between the two variants of adversarial examples. Therefore, by looking only at perturbation-based (respectively invariance-based) adversarial examples, important model failures may be overlooked. This is exacerbated when the data is sampled in an unbalanced fashion from the two spheres: the inner sphere is robust to perturbation adversarial examples while the outer sphere is robust to invariance adversarial examples (for accurate models). \section{Invariance-based Attacks in Practice} We now show that our argument is not limited to the analysis of synthetic tasks, and give practical automated attack algorithms to generate invariance adversarial examples. We elect to study the only dataset for which robustness is considered to be nearly solved under the $\ell_p$ norm threat model: MNIST~\citep{schott2018towards}. We show that MNIST models trained to be robust to perturbation-based adversarial examples are \emph{less} robust to invariance-based adversarial examples. As a result, we show that while \emph{perturbation} adversarial examples may not exist within the $\ell_p$ ball around test examples, \emph{adversarial examples} still do. \paragraph{Why MNIST?} The MNIST dataset is typically a poor choice of dataset for studying adversarial examples, and in particular defenses that are designed to mitigate them~\citep{carlini2019evaluating}.
In large part this is due to the fact that MNIST is significantly different from other vision classification problems (e.g., features are quasi-binary and classes are well separated in most cases). However, the simplicity of MNIST is why studying $\ell_p$-norm adversarial examples was originally proposed as a toy task to benchmark models~\citep{harnessing_adversarial}; the task has proven much more difficult than originally expected. Several years later, it is now argued that training MNIST classifiers whose decision is constant in an $\ell_p$-norm ball around their training data provides robustness to adversarial examples~\citep{schott2018towards,madry2017towards,kolter2017provable,raghunathan2018certified}. Furthermore, if defenses relying on the $\ell_p$-norm threat model are going to perform well on a vision task, MNIST is likely the best dataset to measure that, due to the specificities mentioned above. In fact, MNIST is the only dataset for which robustness to adversarial examples is considered even remotely close to being solved~\citep{schott2018towards}, and researchers working on (provable) robustness to adversarial examples have moved on to other, larger vision datasets such as CIFAR-10~\citep{madry2017towards,wong2018scaling} or ImageNet~\citep{lecuyer2018certified,cohen2019certified}. This section argues that, contrary to popular belief, MNIST is far from being solved. We show why robustness to $\ell_p$-norm perturbation-based adversaries is insufficient, even on MNIST, and why defenses that enforce unreasonably strong uniform continuity can harm the performance of the classifier and make it more vulnerable to other attacks exploiting this excessive invariance. \subsection{A toy worst-case: binarized MNIST classifier} \begin{wrapfigure}{r}{0.3\textwidth} \vspace{-12mm} \includegraphics[width=.7\linewidth]{ims/binarized_mnist_rev1.pdf} \caption{Invariance-based adversarial example (top-left) is labeled differently by a human than the original (bottom-left). However, both become identical after binarization.\vspace{-3em}} \label{fig:mnist-invariance-based-adv-x} \end{wrapfigure} To give an initial constructive example, consider an MNIST classifier which binarizes (by thresholding at, e.g., 0.5) all of its inputs before classifying them with a neural network. As \citet{tramer2017ensemble,schott2018towards} demonstrate, this binarizing classifier is highly $\ell_\infty$-robust, because most perturbations in the pixel space do not actually change the (thresholded) feature representation. However, this binary classifier will have trivial invariance-based adversarial examples. Figure~\ref{fig:mnist-invariance-based-adv-x} shows an example of this attack. Two images which are dramatically different to a human (e.g., a digit of a one and a digit of a four) can become identical after pre-processing the images with a thresholding function at $0.5$ (as examined by, e.g., \citet{schott2018towards}). \subsection{Generating Model-agnostic Invariance-based Adversarial Examples} In the following, we build on existing invariance-based attacks~\citep{jacobsen2018excessive,jensRelu,li2018study} to propose a model-agnostic algorithm for crafting invariance-based adversarial examples. That is, our attack algorithm generates invariance adversarial examples that cause a human to change their classification, but where most models, not known by the attack algorithm, will \emph{not} change their classification.
\subsection{Generating Model-agnostic Invariance-based Adversarial Examples} In the following, we build on existing invariance-based attacks~\citep{jacobsen2018excessive,jensRelu,li2018study} to propose a model-agnostic algorithm for crafting invariance-based adversarial examples. That is, our attack algorithm generates invariance adversarial examples that cause a human to change their classification, but where most models, not known by the attack algorithm, will \emph{not} change their classification. Our algorithm for generating invariance-based adversarial examples is simple, albeit tailored to work specifically on datasets where comparing images in pixel space is meaningful, like MNIST. Begin with a \textit{source} image, correctly classified by both the oracle evaluator (i.e., a human) and the model. Next, try all possible affine transformations of training data points whose label is different from the source image, and find the \textit{target} training example which---once transformed---has the smallest distance to the source image. Finally, construct an invariance-based adversarial example by perturbing the source image to be ``more similar'' to the target image under the $\ell_p$ metric considered. In Appendix~\ref{app:attacksMNIST}, we describe instantiations of this algorithm for the $\ell_0$ and $\ell_\infty$ norms; a simplified sketch of the $\ell_\infty$ variant is given after Figure~\ref{fig:l0attack}. Figure \ref{fig:l0attack} visualizes the sub-steps of the $\ell_0$ attack, which are described in detail in Appendix~\ref{app:attacksMNIST}. The underlying assumption of this attack is that small affine transformations are \emph{less likely} to cause an oracle classifier to change its label of the underlying digit than $\ell_p$ perturbations. In practice, we validate this hypothesis with a human study in Section~\ref{sec:eval}. \begin{figure} \centering \includegraphics[scale=.5]{ims/mnist_l0_attack.png} \\ (a) \hspace{1.8em} (b) \hspace{1.2em} (c) \hspace{1.8em} (d) \hspace{1.2em} (e) \hspace{4.4em} (f-h) \hspace{2.2em} \caption{Process for generating $\ell_0$ invariance adversarial examples. From left to right: (a) the original image of an 8; (b) the nearest training image (labeled as 3), before alignment; (c) the nearest training image (still labeled as 3), after alignment; (d) the $\delta$ perturbation between the original and aligned training example; (e) spectral clustering of the perturbation $\delta$; and (f-h) possible invariance adversarial examples, selected by applying subsets of clusters of $\delta$ to the original image. (f) is a failed attempt at an invariance adversarial example. (g) is successful, but introduces a larger perturbation than necessary (adding pixels to the bottom of the 3). (h) is successful and minimally perturbed.} \label{fig:l0attack} \end{figure}
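A simplified sketch of the $\ell_\infty$ variant is shown below (the exact procedures, including the spectral-clustering step of the $\ell_0$ attack, are described in Appendix~\ref{app:attacksMNIST}; the affine search grid, helper names, and the use of SciPy here are illustrative simplifications, and the inputs are assumed to be NumPy arrays):

\begin{verbatim}
import numpy as np
from scipy.ndimage import shift, rotate

EPS = 0.3  # l_inf budget used in our experiments

def align(target, source):
    """Search a small grid of affine transforms (shifts/rotations) of
    `target` and return the transform closest to `source` in pixel space."""
    best, best_d = target, np.inf
    for dx in range(-3, 4):
        for dy in range(-3, 4):
            for ang in (-10, -5, 0, 5, 10):
                t = shift(rotate(target, ang, reshape=False), (dx, dy))
                d = np.linalg.norm(t - source)
                if d < best_d:
                    best, best_d = t, d
    return best

def linf_invariance_attack(source, train_x, train_y, source_label):
    """Move `source` toward the nearest (aligned) training image of a
    different label, clipped to the l_inf ball of radius EPS."""
    cands = train_x[train_y != source_label]
    dists = [np.linalg.norm(align(c, source) - source) for c in cands]
    target = align(cands[int(np.argmin(dists))], source)
    return np.clip(target, source - EPS, source + EPS)
\end{verbatim}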
\subsection{Evaluation} \label{sec:eval} \paragraph{Attack analysis.} We generate 1000 adversarial examples using each of the two above approaches on examples randomly drawn from the MNIST test set. Our attack is quite slow, with the alignment process taking (amortized) several minutes per example. We performed no optimizations of this process and expect it could be improved. The mean $\ell_0$ distortion required is 25.9 (with a median of 25). The $\ell_\infty$ adversarial examples always use the full budget of $0.3$ and take a similar amount of time to generate; most of the cost is again dominated by finding the nearest training image. \paragraph{Human Study.} We randomly selected 100 examples from the MNIST test set and created 100 invariance-based adversarial examples under the $\ell_0$ norm and $\ell_\infty$ norm, as described above. We then conducted a human study to evaluate whether or not these invariance adversarial examples indeed are successful, i.e., whether humans agree that the label has been changed despite the model's prediction remaining the same. We presented 40 human evaluators with these $100$ images, half of which were natural unmodified MNIST digits, and the remaining half were distributed randomly between $\ell_0$ or $\ell_\infty$ invariance adversarial examples. \begin{figure} \begin{subfigure}{.48\textwidth} \centering \begin{tabular}{l|r} \toprule Attack Type & Success Rate \\ \midrule Clean Images & 0\% \\ $\ell_0$ Attack & 55\% \\ $\ell_\infty$ Attack & 21\% \\ \bottomrule \end{tabular} \caption{Success rate of our invariance adversarial examples causing humans to switch their classification.} \label{tab:humanstudy:eg} \end{subfigure}% \hfill \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.35]{ims/example_attacks_2.png} \caption{Original test images (top) with our $\ell_0$ (middle) and $\ell_\infty$ (bottom) invariance adversarial examples. (left) successful attacks; (right) failed attacks.} \label{fig:examples} \end{subfigure} \caption{Our invariance-based adversarial examples. Humans (acting as the oracle) switch their classification of the image from the original test label to a different label.} \label{fig:whyislatekborked} \end{figure} \paragraph{Results.} For the clean (unmodified) test images, 98 of the 100 examples were labeled correctly by \emph{all} human evaluators. The other 2 images were labeled correctly by over $90\%$ of human evaluators. Our $\ell_0$ attack is highly effective: for 48 of the 100 examples, at least $70\%$ of the human evaluators who saw that digit assigned it the same label, different from the original test label. Humans only agreed with the original test label (with the same $70\%$ threshold) on 34 of the images, while they did not form a consensus on the remaining 18 examples. The (much simpler) $\ell_\infty$ attack is less effective: humans only agreed that the image changed label on 14 of the examples, and agreed the label had not changed in 74 cases. We summarize the results in Figure~\ref{fig:whyislatekborked}(a). In Figure~\ref{fig:whyislatekborked}(b) we show sample invariance adversarial examples. To simplify the analysis in the following section, we split our generated invariance adversarial examples into two sets: the successes and the failures, as determined by whether the plurality decision by humans differed from or agreed with the original test label. We only evaluate the models on the subset of invariance adversarial examples that caused the humans to switch their classification. \paragraph{Model Evaluation.} Now that we have oracle ground-truth labels for each of the images as decided by the humans, we report how often our models agree with the human-assigned label. Table~\ref{tab:modelaccuracy} summarizes the results of this analysis. For the invariance adversarial examples we report model accuracy only on the \emph{successful} attacks, that is, those where the human oracle label changed between the original image and the modified image. Every classifier labeled all successful $\ell_\infty$ adversarial examples \textbf{incorrectly} (with one exception, where the $\ell_2$ PGD-trained classifier \cite{madry2017towards} labeled one of the invariance adversarial examples correctly). Despite this fact, PGD adversarial training and Analysis by Synthesis \cite{schott2018towards} are two of the state-of-the-art $\ell_\infty$ perturbation-robust classifiers. The situation is more complex for the $\ell_0$-invariance adversarial examples. In this setting, the models which achieve \emph{higher} $\ell_0$ perturbation-robustness obtain \emph{lower} accuracy on this new invariance test set. For example, \citet{bafna2018thwarting} develop an $\ell_0$ perturbation-robust classifier that relies on the sparse Fourier transform.
This perturbation-robust classifier is substantially more vulnerable to invariance adversarial examples, reaching only $38\%$ accuracy compared to the baseline classifier's $54\%$ accuracy. \begin{table} \centering \begin{tabular}{l p{1.5cm} p{1.5cm} p{1.85cm} p{1.5cm} p{1.5cm} p{1.5cm}} \toprule & \multicolumn{6}{c}{Fraction of examples where human and model agree} \\ \midrule \textbf{Model:} & \textbf{Baseline} & \textbf{ABS} & \textbf{Binary-ABS} & \textbf{$\ell_\infty$ PGD} & \textbf{$\ell_2$ PGD} & \textbf{$\ell_0$ Sparse} \\ \midrule Clean & 99\% & 99\% & 99\% & 99\% & 99\% & 99\% \\ $\ell_0$ & 54\% & 58\% & 47\% & 56\%$^*$ & 27\%$^*$ & 38\% \\ $\ell_\infty$ & 0\% & 0\% & 0\% & 0\% & 5\%$^*$ & 0\%$^*$ \\ \bottomrule \end{tabular} \caption{Models which are more robust to \emph{perturbation} adversarial examples (such as those trained with adversarial training) agree with humans \textbf{less often} on \emph{invariance-based} adversarial examples. Agreement between human oracle labels and labels by six models on clean (unmodified) examples and our \emph{successful} $\ell_0$- and $\ell_\infty$-generated invariance adversarial examples. Values denoted with an asterisk $^*$ violate the perturbation threat model of the defense and should not be taken to be attacks. When the model is \emph{wrong}, it classified the input as the original label, and not the new oracle label.} \label{tab:modelaccuracy} \end{table} \subsection{Natural Images} While the previous discussion focused on synthetic (Adversarial Spheres) and simple tasks like MNIST, similar phenomena may arise in natural images. In Figure~\ref{fig:imageNetperturbations}, we show two different $\ell_2$ perturbations of the original image (left). The perturbation of the middle image is nearly imperceptible and thus the classifier's decision should be robust to such changes. On the other hand, the image on the right went through a semantic change (from a tennis ball to a strawberry) and thus the classifier should be sensitive to such changes (even though this case is ambiguous due to the two objects in the image). However, in terms of the $\ell_2$ norm, the change in the right image is even smaller than the imperceptible change in the middle. Hence, making the classifier robust within this $\ell_2$ norm-ball will make the classifier vulnerable to invariance-based adversarial examples like the semantic change in the right image. \begin{figure} \centering \includegraphics[scale=.4]{ims/orig.png} \includegraphics[scale=.4]{ims/pertadv.png} \includegraphics[scale=.4]{ims/invadv.png} \\ (a) \hspace{7.5em} (b) \hspace{7.5em} (c) \caption{Visualization showing that the $\ell_2$ norm can fail to measure semantic changes in images. (a) original image from the ImageNet test set, labeled as a \emph{tennis ball}; (b) imperceptible perturbation, $\ell_2=24.3$; (c) semantic perturbation with an $\ell_2$ norm of $23.2$ that removes the tennis ball.} \label{fig:imageNetperturbations} \end{figure} \section{Conclusion} Training models robust to perturbation-based adversarial examples should not be treated as equivalent to learning models robust to \textit{all} adversarial examples. While most of the research has focused on perturbation-based adversarial examples that exploit excessive classifier \textit{sensitivity}, we show that the reverse viewpoint of excessive classifier \textit{invariance} should also be taken into account when evaluating robustness.
Furthermore, other unknown types of adversarial examples may exist: it remains unclear whether or not the union of perturbation and invariance adversarial examples completely captures the full space of evasion attacks. \paragraph{Consequences for $\ell_p$-norm evaluation.} Our invariance-based attacks are able to find adversarial examples within the $\ell_p$ ball on classifiers that were trained to be robust to $\ell_p$-norm perturbation-based adversaries. As a consequence of this analysis, researchers should carefully set the radii of $\ell_p$-balls when measuring robustness to norm-bounded perturbation-based adversarial examples. Furthermore, setting a consistent radius across all of the data may be difficult: we find in our experiments that some class pairs are more easily attacked than others by invariance-based adversaries. Some recent defense proposals, which claim extremely high $\ell_0$ and $\ell_\infty$ norm-bounded robustness, are likely over-fitting to peculiarities of MNIST to deliver higher robustness to perturbation-based adversaries; this may not actually deliver classifiers that match the human oracle more often. Indeed, another by-product of our study is to showcase the importance of human studies when the true label of candidate adversarial inputs becomes ambiguous and cannot be inferred algorithmically. \paragraph{Invariance.} Our work confirms recently reported findings, in that it surfaces the need to mitigate undesired invariance in classifiers. The cross-entropy loss, as well as architectural elements such as ReLU activation functions, have been put forward as possible sources of excessive invariance~\citep{jacobsen2018excessive,jensRelu}. However, more work is needed to develop quantitative metrics for invariance-based robustness. One promising architecture class for controlling invariance-based robustness is invertible networks \citep{dinh2014nice} because, by construction, they cannot build up any invariance until the final layer~\citep{jacobsen2018irevnet, behrmann2018invertible}.
\section{Introduction.} The one-phase Stefan problem (or Lam\'e-Clapeyron-Stefan problem) for a semi-infinite material is a free boundary problem for the heat equation, which requires the determination of the temperature distribution $T$ of the liquid phase (melting problem) or the solid phase (solidification problem), together with the evolution of the free boundary $x = s(t)$. Phase change problems appear frequently in industrial processes and other problems of technological interest \cite{AlSo,AMR,ChRa,DHLV,Ke,Lu}. The Lam\'e-Clapeyron-Stefan problem is non-linear even in its simplest form due to the free boundary conditions. If the thermal coefficients of the material are temperature-dependent, we have a doubly non-linear free boundary problem. Some other models involving a temperature-dependent thermal conductivity can be found in \cite{BoNaSeTaLibro, BoNaSeTa-ThSci,NaTa,BrNa3,BrNa,Ma,ChSu74,OlSu87,Ro15,Ro18,CST}. In this paper, we consider two one-phase fusion problems with a temperature-dependent thermal conductivity $k(T)$ and specific heat $c(T)$. In the first, a Dirichlet condition is assumed at the fixed face $x=0$; in the second, a Robin condition is imposed. The mathematical model of the governing process is described as follows: \begin{align} & \rho c(T) \pder[T]{t}=\frac{\partial}{\partial x} \left(k(T)\pder[T]{x} \right),& 0<x<s(t), \quad t>0, \label{EcCalor}\\ & T(0,t)=T_{_0}, &t>0, \label{CondBorde}\\ & T(s(t),t)=T_{f}, &t>0, \label{TempCambioFase}\\ & k_0\pder[T]{x}(s(t),t)=-\rho l \dot s(t), &t>0, \label{CondStefan}\\ & s(0)=0,\label{FrontInicial} \end{align} where the unknown functions are the temperature $T=T(x,t)$ and the free boundary $x=s(t)$ separating both phases. The parameters $\rho>0$ (density), $l>0$ (latent heat per unit mass), $T_{0}>0$ (temperature imposed at the fixed face $x=0$) and $T_f<T_0$ (phase change temperature at the free boundary $x=s(t)$) are all known constants. The functions $k$ and $c$ are defined as: \begin{align} &k(T)=k_{0}\left(1+\delta\left(\tfrac{T-T_{f}}{T_{0}-T_{f}}\right)^{p}\right)\label{k}\\ &c(T)=c_{0}\left(1+\delta\left(\tfrac{T-T_{f}}{T_{0}-T_{f}}\right)^{p}\right)\label{c}, \end{align} where $\delta$ and $p$ are given non-negative constants, and $k_{0}=k(T_f)$ and $c_{0}=c(T_f)$ are the reference thermal conductivity and specific heat, respectively. The problem (\ref{EcCalor})-(\ref{FrontInicial}) was first considered in \cite{KSR}, where an equivalent ordinary differential problem was obtained. In \cite{BNT}, the existence of an explicit solution of similarity type was established by means of a double fixed point argument, when the thermal coefficients are bounded and Lipschitz functions. We are interested in obtaining a similarity solution to problem (\ref{EcCalor})-(\ref{FrontInicial}); more precisely, one in which the temperature $T=T(x,t)$ can be written as a function of a single variable. Through the following change of variables: \begin{equation} y(\eta)=\tfrac{T(x,t)-T_{f}}{T_{0}-T_{f}}\geq 0 \label{Y} \end{equation} with \begin{equation} \eta=\tfrac{x}{2a\sqrt{t}},\quad 0<x<s(t),\quad t>0, \label{eta} \end{equation} the phase front moves as \begin{equation} s(t)=2a\lambda\sqrt{t} \label{freeboundary} \end{equation} where $a^{2}=\frac{k_{0}}{\rho c_{0}}$ (thermal diffusivity) and $\lambda>0$ is a positive parameter to be determined.
It is easy to see that the Stefan problem (\ref{EcCalor})-(\ref{FrontInicial}) has a similarity solution $(T,s)$ given by: \begin{align} &T(x,t)=\left(T_{0}-T_{f}\right)y\left(\tfrac{x}{2a\sqrt{t}}\right)+T_{f},\quad 0<x<s(t), \quad t>0,\label{T} \\ &s(t)=2a\lambda\sqrt{t},\quad\quad t>0\label{s} \end{align} if and only if the function $y$ and the parameter $\lambda>0$ satisfy the following ordinary differential problem: \begin{align} &2\eta(1+\delta y^p(\eta))y'(\eta)+[(1+\delta y^p(\eta))y'(\eta)]'=0, \quad &0<\eta<\lambda, \label{y}\\ &y(0)=1,\label{cond0}\\ &y(\lambda)=0, \label{condlambda}\\ &y'(\lambda)=-\tfrac{2\lambda}{\text{Ste}} \label{eclambda} \end{align} where $\delta\geq 0$, $p\geq 0$ and $\text{Ste}=\tfrac{c_{0}(T_{0}-T_f)}{l}>0$ is the Stefan number. In \cite{KSR}, the solution to the ordinary differential problem (\ref{y})-(\ref{eclambda}) was approximated by using shifted Chebyshev polynomials. Although that paper provided the exact solution for the particular cases $p=1$ and $p=2$, the aim of our work is to prove existence and uniqueness of the solution for every $\delta\geq 0$ and $p\geq 0$. The particular case with $\delta=0$, i.e.\ with constant thermal coefficients, and $p=1$ was studied in \cite{ChSu74,OlSu87,Ta98,SaTa}. In Section 2, we prove existence and uniqueness of the solution to problem (\ref{EcCalor})-(\ref{FrontInicial}) by analysing the ordinary differential problem (\ref{y})-(\ref{eclambda}). In Section 3, we present a similar problem but with a Robin type condition at the fixed face $x=0$. That is, the temperature condition (\ref{CondBorde}) will be replaced by the following convective condition \begin{equation} k(T(0,t))\pder[T]{x}(0,t)=\frac{h}{\sqrt{t}}\left(T(0,t)-T_{0}\right) \label{convectiva} \end{equation} where $h>0$ is the thermal transfer coefficient and $T_0$ is the bulk temperature. We prove existence and uniqueness of the solution to this problem, in a manner similar to that of the preceding section. Finally, in Section 4, we study the asymptotic behaviour when $h\rightarrow +\infty$; that is, we show that the solution of the problem given in Section 3 converges to the solution of the analogous Stefan problem given in Section 2. \section{Existence and uniqueness of solution to the problem with Dirichlet condition at the fixed face $x=0$} We will study the existence and uniqueness of the solution to problem (\ref{EcCalor})-(\ref{FrontInicial}) through the ordinary differential problem (\ref{y})-(\ref{eclambda}). \begin{lem}\label{ProbAux} Let $p\geq 0$, $\delta\geq 0$, $\lambda>0$, $y\in C^{\infty}[0,\lambda]$ and $y\geq 0$. Then $(y,\lambda)$ is a solution to the ordinary differential problem $(\ref{y})$-$(\ref{eclambda})$ if and only if $\lambda$ is the unique solution to \begin{align}\label{7} f(x)=g,\qquad \qquad x>0, \end{align} and $y$ verifies \begin{align}\label{6} F(y(\eta))=G(\eta),\qquad\qquad 0<\eta<\lambda, \end{align} where \begin{align}\label{fg} g=\tfrac{\mathrm{Ste}}{\sqrt{\pi}}\left( 1+\tfrac{\delta}{p+1}\right), \qquad &f(x)=x \exp(x^2)\erf(x),\\ F(x)=x+\tfrac{\delta}{p+1}x^{p+1}, \qquad &G(x)=\tfrac{\sqrt{\pi}}{\mathrm{Ste}} \;\lambda \exp(\lambda^2)\left( \erf(\lambda)-\erf(x)\right).\label{FG-temp} \end{align} \end{lem} \smallskip \begin{proof} Let $(y,\lambda)$ be a solution to problem (\ref{y})-(\ref{eclambda}). Let us define $v(\eta)=\left(1+\delta y^{p}(\eta) \right) y'(\eta)$.
Taking into account the ordinary differential equation (\ref{y}) and condition (\ref{cond0}), $v$ can be rewritten as $v(\eta)=(1+\delta)y'(0)\exp(-\eta^2)$. Therefore \begin{equation}\label{aux} y'(\eta)+\delta y^p(\eta)y'(\eta)=(1+\delta)y'(0)\exp(-\eta^2). \end{equation} If we integrate (\ref{aux}) from $0$ to $\eta$ and use conditions (\ref{cond0}), (\ref{condlambda}) and (\ref{eclambda}), we obtain \begin{equation}\label{yalternativa} y(\eta)+\tfrac{\delta}{p+1}y^{p+1}(\eta)=1+\tfrac{\delta}{p+1}-\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\lambda \exp(\lambda^2)\erf(\eta). \end{equation} If we take $\eta=\lambda$ in the above equation, by (\ref{condlambda}), we get (\ref{7}). Furthermore, from (\ref{7}) we can rewrite (\ref{yalternativa}) as (\ref{6}). Conversely, if $(y,\lambda)$ is a solution to (\ref{7})-(\ref{6}), we have \begin{equation} y(\eta)=-\tfrac{\delta}{p+1}y^{p+1}(\eta)+\left(1+\tfrac{\delta}{p+1}\right)\left(1-\tfrac{\erf(\eta)}{\erf(\lambda)}\right). \end{equation} An easy computation then shows that $(y,\lambda)$ is a solution to the ordinary differential problem (\ref{y})-(\ref{eclambda}). \end{proof} According to the above result, we proceed to show that there exists a unique solution to problem (\ref{7})-(\ref{6}). \begin{lem}\label{ExyunProbAux} If $p\geq 0$ and $\delta\geq 0$, then there exists a unique solution $(y,\lambda)$ to the problem $(\ref{7})$-$(\ref{6})$ with $\lambda>0$, $y\in C^{\infty}[0,\lambda]$ and $y\geq 0$. \end{lem} \begin{proof} Since $f$ given by (\ref{fg}) is an increasing function with $f(0)=0$ and $f(+\infty)=+\infty$, there exists a unique solution $\lambda>0$ to equation $(\ref{7})$. Now, for this $\lambda>0$, it is easy to see that $F$ given by (\ref{FG-temp}) is an increasing function, so that we can define $F^{-1}:[0,+\infty)\to [0,+\infty)$. As $G$ defined by (\ref{FG-temp}) is a positive function, there exists a unique solution $y\in C^{\infty}[0,\lambda]$ of equation $(\ref{6})$, given by \begin{equation} y(\eta)=F^{-1}\left(G(\eta)\right), \qquad 0 < \eta < \lambda. \end{equation} \end{proof} \begin{rem} \label{2.3} On one hand, $F$ is an increasing function with $F(0)=0$ and $F(1)=1+\frac{\delta}{p+1}$. On the other hand, $G$ is a decreasing function with $G(0)=1+\frac{\delta}{p+1}$ and $G(\lambda)=0$. It then follows that $0\leq y(\eta)\leq 1 $ for $0<\eta<\lambda$. \end{rem} From the above lemmas we are able to claim the following result: \begin{thm} \label{ExistenciaDirichlet} The Stefan problem governed by $($\ref{EcCalor}$)$-$($\ref{FrontInicial}$)$ has a unique similarity type solution given by $($\ref{T}$)$-$($\ref{s}$)$, where $(y,\lambda)$ is the unique solution to the functional problem $($\ref{7}$)$-$($\ref{6}$)$. \end{thm} \begin{rem} By virtue of Remark \ref{2.3} and Theorem \ref{ExistenciaDirichlet} we have that \begin{equation*} T_f<T(x,t)<T_0,\qquad \qquad 0<x<s(t),\quad t>0. \end{equation*} \end{rem}
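The functional formulation (\ref{7})-(\ref{6}) also makes the numerical computation of $(y,\lambda)$ immediate: equation (\ref{7}) is a scalar root-finding problem, and (\ref{6}) is inverted pointwise since $F$ is increasing. A minimal illustrative sketch using standard SciPy root finding (the parameter values $\delta=5$, $p=1$, $\text{Ste}=0.5$ match Figure \ref{Fig:yeta}, but this script is not the code used to produce the figures):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

delta, p, Ste = 5.0, 1.0, 0.5          # illustrative parameters

f = lambda x: x * np.exp(x**2) * erf(x)
g = Ste / np.sqrt(np.pi) * (1.0 + delta / (p + 1))
lam = brentq(lambda x: f(x) - g, 1e-9, 5.0)    # unique root of (7)

F = lambda t: t + delta / (p + 1) * t ** (p + 1)
G = lambda e: (np.sqrt(np.pi) / Ste) * lam * np.exp(lam**2) \
              * (erf(lam) - erf(e))

def y(eta):
    # Pointwise inversion of the increasing F, i.e. y = F^{-1}(G(eta)),
    # valid for 0 < eta < lam.
    return brentq(lambda t: F(t) - G(eta), -1e-12, 1.0 + 1e-9)

print(lam, y(0.25 * lam), y(0.75 * lam))
\end{verbatim}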
\begin{rem} For the particular case $p=1$ and $\delta\geq 0$, the solution to the problem (\ref{7})-(\ref{6}) is given by \begin{align} &y(\eta)=\tfrac{1}{\delta} \left[\sqrt{(1+\delta)^2-\delta(2+\delta)\tfrac{\erf(\eta)}{\erf(\lambda)}}-1 \right], \qquad 0<\eta<\lambda,\label{6-1} \end{align} where $\lambda$ verifies \begin{align} &\lambda \exp(\lambda^2)\erf(\lambda)=\tfrac{\mathrm{Ste}}{\sqrt{\pi}}\left( 1+\tfrac{\delta}{2}\right).\label{7-1} \end{align} \begin{proof} If $p=1$, equation (\ref{6}) becomes \begin{equation} y^{2}(\eta)+\tfrac{2}{\delta}y(\eta)-\left(1+\tfrac{2}{\delta}\right)\left[1-\tfrac{\erf(\eta)}{\erf(\lambda)}\right]=0, \end{equation} which has a unique positive solution, given by expression (\ref{6-1}). \end{proof} \end{rem} \vspace{0.4cm} In view of Lemma \ref{ExyunProbAux} and Remark \ref{2.3}, we can compute the solution $(y,\lambda)$ to the ordinary differential problem $(\ref{y})$-$(\ref{eclambda})$ by using its functional formulation. In Figure \ref{Fig:yeta}, for different values of $p$, we plot the solution $(y,\lambda)$ to the problem $(\ref{7})$-$(\ref{6})$. In order to compare the obtained solutions $y$, we extend them by zero for every $\eta>\lambda$. We assume $\delta=5$ and $\text{Ste}=0.5$. It must be pointed out that this choice of $\text{Ste}$ is due to the fact that, for most phase-change material candidates over a realistic temperature range, the Stefan number will not exceed 1 (see \cite{So}). \begin{figure}[h] \begin{center} \includegraphics[scale=0.22]{yetaVERSIONFINAL.eps} \caption{Plot of function $y$ for different values of \mbox{$p=1,5,10$}, fixing $\delta=5$ and $\text{Ste}=0.5$.} \label{Fig:yeta} \end{center} \end{figure} \medskip Although it can be analytically deduced from equation $(\ref{7})$, we can also observe graphically that as $p$ increases, the value of $\lambda$ decreases. In view of Theorem \ref{ExistenciaDirichlet}, we can also plot the solution $(T,s)$ to the problem $(\ref{EcCalor})$-$(\ref{FrontInicial})$. In Figure \ref{Fig:ColorTemp} we present a colormap for the temperature $T=T(x,t)$, extending it by zero for $x>s(t)$. \begin{figure}[h!!!!] \begin{center} \includegraphics[scale=0.22]{ColorTempVERSIONFINAL.eps} \caption{Colormap for the temperature $T=T(x,t)$, fixing $\delta=1$, $p=1$, $\text{Ste}=0.5$, $T_f=0$, $T_0=10$ and $a=1$.} \label{Fig:ColorTemp} \end{center} \end{figure} \newpage \section{Existence and uniqueness of solution to the problem with Robin condition at the fixed face $x=0$} In this section we consider a Stefan problem with a convective boundary condition at the fixed face instead of a Dirichlet one. This heat input is the physically relevant condition, since it establishes that the incoming flux at the fixed face is proportional to the difference between the temperature at the surface of the material and the imposed ambient temperature. Let us consider the free boundary problem given by (\ref{EcCalor}), (\ref{TempCambioFase})-(\ref{FrontInicial}) and the convective condition (\ref{convectiva}) instead of the temperature condition (\ref{CondBorde}) at the fixed face $x=0$. The temperature-dependent thermal conductivity $k(T)$ and the specific heat $c(T)$ are given by (\ref{k}) and (\ref{c}), respectively. As in the above section, we seek a similarity-type solution.
If we define the change of variables as in (\ref{Y})-(\ref{eta}), the phase front moves as in (\ref{freeboundary}), where $a^{2}=\frac{k_{0}}{\rho c_{0}}$ (thermal diffusivity) and the positive parameter to be determined is now denoted by $\lambda_\gamma$. It follows that $(T_\gamma,s_\gamma)$ is a solution to (\ref{EcCalor}), (\ref{TempCambioFase})-(\ref{FrontInicial}) and (\ref{convectiva}) if and only if the function $y_\gamma$ defined through (\ref{Y}) and the parameter $\lambda_\gamma>0$ given by (\ref{freeboundary}) satisfy (\ref{y}), (\ref{condlambda}), (\ref{eclambda}) and \begin{align} \left(1+\delta y^{p}(0)\right) y'(0)=\gamma \left(y(0)-1\right)\label{ecconvectiva} \end{align} where $\delta\geq 0$, $p\geq 0$, \begin{equation} \gamma=2\text{Bi}, \quad \text{and}\quad \text{Bi}=\frac{ha }{k_{0}}, \end{equation} $\text{Bi}>0$ being the generalized Biot number. With a few slight changes to the results obtained in the previous section, the following assertions can be established: \begin{lem}\label{ProbAuxConv} Let $p\geq 0$, $\delta\geq 0$, $\gamma>0$, $\lambda_\gamma>0$, $y_\gamma\in C^{\infty}[0,\lambda_\gamma]$ and $y_\gamma\geq 0$. Then $(y_\gamma,\lambda_\gamma)$ is a solution to the ordinary differential problem (\ref{y}), (\ref{condlambda}), (\ref{eclambda}) and (\ref{ecconvectiva}) if and only if $\lambda_\gamma$ is the unique solution to the following equation \begin{align}\label{7bis} F(\beta_\gamma(x))=\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\,f(x),\qquad \qquad x>0, \end{align} and $y_\gamma$ verifies \begin{align} F(y_\gamma (\eta))=G_\gamma(\eta),\qquad \qquad 0< \eta<\lambda_\gamma \label{6bis} \end{align} where $f$ and $F$ are given by $(\ref{fg})$ and $(\ref{FG-temp})$, respectively, and \begin{align}\label{beta} \beta_\gamma(x)= 1-\tfrac{2x\exp\left(x^{2}\right)}{\gamma \, \mathrm{Ste}}, \qquad 0\leq x\leq \lambda_0=\beta_\gamma^{-1}(0),\\ G_\gamma(x)=\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\,\lambda_\gamma \exp\left(\lambda_\gamma^{2}\right)\left(\erf(\lambda_\gamma)-\erf(x)\right), \quad 0<x<\lambda_\gamma.\label{Ggamma} \end{align} \end{lem} \begin{proof} Let $(y_\gamma,\lambda_\gamma)$ be a solution to problem (\ref{y}), (\ref{condlambda}), (\ref{eclambda}) and (\ref{ecconvectiva}). Let us define $w(\eta)=\left(1+\delta y_\gamma^{p}(\eta) \right) y_\gamma'(\eta)$. Taking into account the ordinary differential equation (\ref{y}) and condition (\ref{condlambda}), $w$ can be rewritten as $w(\eta)=y_\gamma'(\lambda_\gamma)\exp(\lambda_\gamma^2)\exp(-\eta^2)$. Therefore \begin{equation}\label{aux-conv} y_\gamma'(\eta)+\delta y_\gamma^p(\eta)y_\gamma'(\eta)=y'_\gamma(\lambda_\gamma)\exp(\lambda_\gamma^2)\exp(-\eta^2). \end{equation} If we integrate (\ref{aux-conv}) from $\eta$ to $\lambda_\gamma$ and use conditions (\ref{condlambda}) and (\ref{eclambda}), we obtain that $y_\gamma$ verifies (\ref{6bis}). If we take $\eta=0$ in (\ref{6bis}) we get \begin{equation}\label{eqaux1} y_\gamma(0)+\tfrac{\delta}{p+1}y_\gamma^{p+1}(0)=\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\lambda_\gamma \exp(\lambda_\gamma^2)\erf(\lambda_\gamma).
\end{equation} Furthermore, if we differentiate equation (\ref{6bis}) and compute this derivative at $\eta=0$, we obtain \begin{equation}\label{eqaux2} y'_\gamma(0)+\delta y_\gamma^{p}(0)y'_\gamma(0)=-\tfrac{2\lambda_\gamma \exp(\lambda_\gamma^2)}{\mathrm{Ste}}. \end{equation} From (\ref{ecconvectiva}) and (\ref{eqaux2}) we obtain \begin{equation} \label{ygamma-0} y_\gamma(0)=1-\tfrac{2\lambda_\gamma \exp(\lambda_\gamma^2)} {\gamma\,\mathrm{Ste}}=\beta_\gamma(\lambda_\gamma)\geq 0, \end{equation} and therefore, combining (\ref{eqaux1}) with (\ref{ygamma-0}), equation (\ref{7bis}) holds. Conversely, if $(y_\gamma,\lambda_\gamma)$ is a solution to (\ref{7bis})-(\ref{6bis}), an easy computation shows that $(y_\gamma,\lambda_\gamma)$ verifies (\ref{y}), (\ref{condlambda}), (\ref{eclambda}) and (\ref{ecconvectiva}). \end{proof} \begin{rem} The notations $\lambda_\gamma$ and $ y_\gamma$ are adopted in order to emphasize the dependence of the solution to problem (\ref{y}), (\ref{condlambda}), (\ref{eclambda}) and (\ref{ecconvectiva}) on $\gamma$, although it also depends on $p$ and $\delta$. This will facilitate the subsequent analysis of the asymptotic behaviour of $y_\gamma$ when $\gamma \to\infty$ $\left( h \to \infty\right)$, to be presented in Section \ref{sec_Conv}. \end{rem} \begin{lem}\label{ExyunProbAux-Conv} If $p\geq 0$, $\delta\geq 0$ and $\gamma>0$, then there exists a unique solution $(y_\gamma,\lambda_\gamma)$ to the problem $(\ref{7bis})$-$(\ref{6bis})$ with $\lambda_\gamma>0$, $y_\gamma\in C^{\infty}[0,\lambda_\gamma]$ and $y_\gamma\geq 0$. \end{lem} \begin{proof} On one hand, the function $\tfrac{\sqrt{\pi}}{\mathrm{Ste}}f$, with $f$ given by (\ref{fg}), is increasing, vanishes at $x=0$ and is positive at \mbox{$\lambda_0=\beta_\gamma^{-1}(0)$}. On the other hand, $F(\beta_\gamma)$, with $F$ given by (\ref{FG-temp}) and $\beta_\gamma$ given by (\ref{beta}), is a decreasing function for $0\leq x \leq \lambda_0$. Notice that $F(\beta_\gamma(0))=F(1)=1+\tfrac{\delta}{p+1}>0$ and $F(\beta_\gamma(\lambda_0))=F(0)=0$. Therefore we can conclude that there exists a unique $0<\lambda_\gamma<\lambda_0$ that verifies (\ref{7bis}). Now, for this $\lambda_\gamma>0$, it is easy to see that $F$ is an increasing function, so that we can define $F^{-1}:[0,+\infty)\to [0,+\infty)$. As $G_\gamma$ given by (\ref{Ggamma}) is a positive function, there exists a unique solution $y_\gamma\in C^{\infty}[0,\lambda_\gamma]$ of equation (\ref{6bis}), given by \begin{equation} y_\gamma(\eta)=F^{-1}\left(G_\gamma(\eta)\right), \qquad 0 < \eta <\lambda_\gamma. \end{equation} \end{proof} \begin{rem} \label{2.3-Conv} On one hand, $F$ is an increasing function with $F(0)=0$ and $F(1)=1+\frac{\delta}{p+1}$. On the other hand, $G_\gamma$ is a decreasing function with $G_\gamma(0)=\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\,\lambda_\gamma \exp(\lambda_\gamma^2)\erf(\lambda_\gamma)$ and $G_\gamma(\lambda_\gamma)=0$. Then $y_\gamma$ is a decreasing function and, due to (\ref{7bis}), we obtain $$y_\gamma(0)=F^{-1}(G_\gamma(0))=\beta_\gamma(\lambda_\gamma)=1-\tfrac{2\lambda_\gamma\exp\left(\lambda_\gamma^{2}\right)}{\gamma \, \mathrm{Ste}}<1.$$ It then follows that $0\leq y_\gamma(\eta)\leq 1 $ for $0<\eta<\lambda_\gamma$. \end{rem}
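As in the Dirichlet case, the functional formulation lends itself to direct numerical evaluation; a minimal illustrative sketch (parameters as in Figure \ref{Fig:yetaConv1} with $p=1$; not the code used to generate the figures):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

delta, p, Ste, gamma = 5.0, 1.0, 0.5, 50.0     # illustrative parameters

F = lambda t: t + delta / (p + 1) * t ** (p + 1)
f = lambda x: x * np.exp(x**2) * erf(x)
beta = lambda x: 1.0 - 2.0 * x * np.exp(x**2) / (gamma * Ste)

lam0 = brentq(beta, 1e-9, 5.0)                 # lambda_0 = beta^{-1}(0)
lam_g = brentq(lambda x: F(beta(x)) - np.sqrt(np.pi) / Ste * f(x),
               1e-9, lam0)                     # unique root of (7bis)

G = lambda e: (np.sqrt(np.pi) / Ste) * lam_g * np.exp(lam_g**2) \
              * (erf(lam_g) - erf(e))
y_g = lambda e: brentq(lambda t: F(t) - G(e), -1e-12, 1.0)  # 0 < e < lam_g

print(lam_g, y_g(1e-6), beta(lam_g))   # y_gamma(0+) equals beta(lam_g)
\end{verbatim}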
Finally, from the above lemmas we are able to claim the following result: \begin{thm} \label{ExistenciaConvectiva} The Stefan problem governed by $(\ref{EcCalor}), (\ref{TempCambioFase})$-$(\ref{FrontInicial})$ and $(\ref{convectiva})$ has a unique similarity type solution given by $($\ref{T}$)$-$($\ref{s}$)$, where $(y_\gamma,\lambda_\gamma)$ is the unique solution to the functional problem $(\ref{7bis})$-$(\ref{6bis})$. \end{thm} Taking into account Lemmas \ref{ProbAuxConv} and \ref{ExyunProbAux-Conv}, we compute the solution $(y_\gamma,\lambda_\gamma)$ to the ordinary differential problem $(\ref{y})$, $(\ref{condlambda})$, $(\ref{eclambda})$ and $(\ref{ecconvectiva})$ using its functional formulation $(\ref{7bis})$-$(\ref{6bis})$. Figure \ref{Fig:yetaConv1} shows the function $y_\gamma$ for fixed $\delta=5$, $\gamma=50$ and $\text{Ste}=0.5$, varying $p=1,5,10$. As was done for the problem with a Dirichlet condition at the fixed face, the solution $y_\gamma$ is extended by zero for every $\eta>\lambda_\gamma$. \begin{figure}[h!!!] \begin{center} \includegraphics[scale=0.22]{yetaConvVERSIONFINAL.eps} \caption{Plot of function $y_\gamma$ for different values of \mbox{$p=1,5,10$}, fixing $\delta=5$, $\gamma=50$ and $\text{Ste}=0.5$.} \label{Fig:yetaConv1} \end{center} \end{figure} \begin{figure}[h!!!] \begin{center} \includegraphics[scale=0.22]{ColorConvVERSIONFINAL.eps} \caption{Colormap for the temperature $T_\gamma=T_\gamma(x,t)$, fixing $\delta=1$, $\gamma=50$, $p=1$, $\text{Ste}=0.5$, $T_f=0$, $T_0=10$ and $a=1$.} \label{Fig:ColorConv} \end{center} \end{figure} \medskip Applying Theorem \ref{ExistenciaConvectiva}, we can also plot the solution $(T_\gamma,s_\gamma)$ to the problem $(\ref{EcCalor}), (\ref{TempCambioFase})$-$(\ref{FrontInicial})$ and $(\ref{convectiva})$. In Figure \ref{Fig:ColorConv} we present a colormap for the temperature $T_\gamma=T_\gamma(x,t)$, extending it by zero for $x>s_\gamma(t)$. \newpage \section{Asymptotic behaviour}\label{sec_Conv} We now show that if the coefficient $\gamma$, which characterizes the heat transfer at the fixed face, goes to infinity, then the solution to the problem with the Robin type condition $(\ref{EcCalor}),(\ref{TempCambioFase})$-$(\ref{FrontInicial})$ and $(\ref{convectiva})$ converges to the solution to the problem $(\ref{EcCalor})$-$(\ref{FrontInicial})$, with a Dirichlet condition at the fixed face $x=0$. In order to obtain this convergence, it is necessary to prove the following preliminary result: \begin{lem}\label{ConvergenciaLambda} Let $\gamma>0$, $p\geq 0$ and $\delta>0$. If $\lambda_\gamma$ is the unique solution to equation $(\ref{7bis})$ and $\lambda$ is the unique solution to equation $(\ref{7})$, then the sequence $\lbrace\lambda_\gamma \rbrace$ is increasing and bounded. Moreover, $$\lim\limits_{\gamma\to\infty} \lambda_\gamma=\lambda.$$ \end{lem} \begin{proof} Let $\gamma_1<\gamma_2$; then $\beta_{\gamma_1}(x)<\beta_{\gamma_2}(x)$ and hence $F(\beta_{\gamma_1}(x))<F(\beta_{\gamma_2}(x))$ for every $x>0$, where $F$ is given by (\ref{FG-temp}) and $\beta_\gamma$ is defined by (\ref{beta}). Therefore $\lambda_{\gamma_1}<\lambda_{\gamma_2}$. In addition, since $F(\beta_\gamma(x))<F(1)=\tfrac{\sqrt{\pi}}{\mathrm{Ste}}\,g$ for every $x>0$, we have $\lambda_\gamma<\lambda$ for all $\gamma>0$. Finally, as $F(\beta_\gamma)\to F(1)$ pointwise when $\gamma\to\infty$, we obtain that $\lim\limits_{\gamma\to\infty} \lambda_\gamma=\lambda$. \end{proof}
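Lemma \ref{ConvergenciaLambda} can also be verified numerically with the same ingredients; an illustrative sketch (same illustrative parameters as above):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

delta, p, Ste = 5.0, 1.0, 0.5                  # illustrative parameters
F = lambda t: t + delta / (p + 1) * t ** (p + 1)
f = lambda x: x * np.exp(x**2) * erf(x)
g = Ste / np.sqrt(np.pi) * (1.0 + delta / (p + 1))
lam = brentq(lambda x: f(x) - g, 1e-9, 5.0)    # Dirichlet coefficient

for gamma in (1.0, 25.0, 50.0, 100.0, 1e4):
    beta = lambda x, gam=gamma: 1.0 - 2.0 * x * np.exp(x**2) / (gam * Ste)
    lam0 = brentq(beta, 1e-9, 5.0)
    lam_g = brentq(lambda x: F(beta(x)) - np.sqrt(np.pi) / Ste * f(x),
                   1e-9, lam0)
    print(gamma, lam_g)    # increases monotonically towards lam
print("lambda =", lam)
\end{verbatim}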
\begin{lem}\label{Convergenciaygamma} Let $\gamma>0$, $p\geq 0$ and $\delta>0$. If $(y_{\gamma},\lambda_\gamma)$ is the unique solution to the ordinary differential problem (\ref{y}), (\ref{condlambda}), (\ref{eclambda}), (\ref{ecconvectiva}) and $(y,\lambda)$ is the unique solution to the problem (\ref{y})-(\ref{eclambda}), then for every $\eta\in (0,\lambda)$ the following convergence holds: \begin{equation} \lim\limits_{\gamma \to\infty} y_\gamma(\eta)=y(\eta). \end{equation} \end{lem} \begin{proof} According to Lemmas \ref{ExyunProbAux} and \ref{ExyunProbAux-Conv}, we have that $y_\gamma(\eta)=F^{-1}(G_\gamma(\eta))$ for $0<\eta<\lambda_\gamma$, and $y(\eta)=F^{-1}(G(\eta))$ for $0<\eta<\lambda$, where the functions $F$, $G$ and $G_\gamma$ are given by (\ref{FG-temp}) and (\ref{Ggamma}). Let $\eta \in (0,\lambda)$. Then, due to Lemma \ref{ConvergenciaLambda}, there exists $\gamma_0$ such that $\eta<\lambda_\gamma$ for every $\gamma>\gamma_0$. As $G_\gamma(\eta)\to G(\eta)$ when $\gamma\to\infty$, it follows that $$\lim\limits_{\gamma \to\infty} y_\gamma(\eta)=\lim\limits_{\gamma \to\infty} F^{-1}(G_\gamma (\eta))=F^{-1}\left( \lim\limits_{\gamma \to\infty} G_\gamma (\eta)\right)=F^{-1}(G(\eta))=y(\eta).$$ \end{proof} In order to illustrate the results obtained in Lemmas \ref{ConvergenciaLambda} and \ref{Convergenciaygamma}, in Figure \ref{FiguraConv} we plot $(y_\gamma,\lambda_\gamma)$ assuming $\delta=5$, $p=1$ and varying $\gamma=1, 25, 50, 100$. As $\gamma$ becomes greater, the function $y_\gamma$ converges pointwise to the solution $y$ of the problem $(\ref{y})$-$(\ref{eclambda})$. \begin{figure}[h!!!!] \begin{center} \includegraphics[scale=0.22]{ConvergenciaVERSIONFINAL.eps} \caption{Plot of $y_\gamma$ for $\gamma=1,25,50,100$, together with $y$, fixing $p=1$ and $\delta=5$.} \label{FiguraConv} \end{center} \end{figure} \begin{thm} \label{ConvergenciaTeo} The unique solution $(T_\gamma,s_\gamma)$ to the Stefan problem governed by $(\ref{EcCalor}),(\ref{TempCambioFase})$-$(\ref{FrontInicial})$ and $(\ref{convectiva})$ converges pointwise to the unique solution $(T,s)$ to the Stefan problem $(\ref{EcCalor})$-$(\ref{FrontInicial})$ when $\gamma\to\infty$. \end{thm} \begin{proof} The proof follows straightforwardly from Lemmas \ref{ConvergenciaLambda} and \ref{Convergenciaygamma} and formulas (\ref{T})-(\ref{s}). \end{proof} \section{Conclusions} One-dimensional Stefan problems with temperature-dependent thermal coefficients and a Dirichlet or a Robin type condition at the fixed face $x=0$ for a semi-infinite material were considered. Existence and uniqueness of solution was obtained in both cases. Moreover, it was proved that the solution of the problem with the Robin type condition converges to the solution of the problem with the Dirichlet condition at the fixed face. For a particular case, an explicit solution was also obtained. In addition, computational examples were provided in order to illustrate the previous theoretical results. \section*{Acknowledgement} The present work has been partially sponsored by the Project PIP No 0275 from CONICET-UA, Rosario, Argentina, and ANPCyT PICTO Austral 2016 No 0090.
\section{Introduction} Datasets, deep learning (DL) techniques and high-performance computing facilities are the three most important driving forces in today's new AI era. DL has also outperformed conventional methods on scene text detection, which has received significant attention in the past decade. However, existing solutions all focus on the English (Latin) language, and little effort has been invested in Chinese scene text detection and recognition (hereafter referred to as ``Chinese Photo OCR"). This is not only because of the difficulty of Chinese text recognition, owing to the complexity of the language and its huge number of classes/characters (the number of commonly used Chinese characters is 6,763, while the English language only has 26 letters), but also due to the scarcity of large-scale, well-annotated datasets of Chinese natural scene images, since deep learning based techniques are data-driven and data-hungry. In the literature, early approaches to scene text detection use ``low-level" features to localize texts in natural scene images. Starting from 2012, growing effort has been devoted to the development of ``high-level" deep learning based scene text detection approaches, which have shown significantly better performance than conventional methods. CTPN~\cite{CTPN} and TextBoxes~\cite{Textboxes} are representative methods for horizontal text detection, while EAST~\cite{EAST} and TextBoxes++~\cite{TextBoxesplus} are recent solutions for multi-oriented text detection. However, most of these advances are towards scene text detection, and only in recent years have researchers started investigating DL-based scene text recognition, where CRNN~\cite{CRNN} and Sliding CNN~\cite{SlidingCNN} are representative solutions. Nevertheless, few methods have been designed for Chinese scene text detection and recognition. To advance research in Chinese Photo OCR, we present a diverse and challenging dataset of Chinese natural scene images, which consists of shop signs along the streets in China. This dataset is hereafter referred to as ``ShopSign". It contains 25,770 images collected from more than 20 cities, using 50 different smart phones. These images exhibit a wide variety of scales, orientations, lighting conditions, layouts and geo-spatial locations, as well as class imbalance. Moreover, we characterize the difficulty of ShopSign by specifying 5 categories of ``hard" images, which contain mirror, exposed, obscured, wooden, or deformed texts. The images in ShopSign have been manually annotated in a text-line manner by 10 research assistants. On ShopSign, we train and evaluate well-known deep network architectures (as baseline models) and report their text detection/recognition performance. We show and analyze the deficiencies of existing deep-learning based Photo OCR solutions on ShopSign, and point out that Chinese OCR is a very difficult task that needs significantly more research effort. We hope the release of this large ShopSign dataset will spur new advances in Chinese scene text detection and recognition. The main contributions of this work are summarized as follows: \begin{enumerate} \item We present ShopSign, which is a large-scale, diverse and difficult dataset of Chinese scene images for text detection and recognition. \item We report results of several baseline methods on ShopSign, including EAST, TextBoxes++, and CTPN. Through cross-dataset experiments, we observe improved results on difficult detection/recognition cases with ShopSign.
\end{enumerate} \section{Related Work} In this section, we introduce state-of-the-art algorithms in scene text detection and recognition. We also present related datasets. \subsection{Scene Text Detection and Recognition} A scene text recognition system usually consists of two main components: a scene text detector and a recognizer. The former module localizes characters/texts in images in the form of bounding boxes, while the latter identifies texts (character sequences) from the cropped images inside the bounding boxes. There are also a few attempts that aim to directly output the text in an ``end-to-end" manner, i.e., seamlessly integrating scene text detection and recognition in a single neural network (procedure). However, we argue that it is not mandatory for scene text recognition systems to be ``end-to-end", because in some cases the image patches inside the bounding boxes predicted by scene text detectors may be too vague or tiny to be recognized. Yet the advantage of ``end-to-end" approaches could be the internal feedback and the seamless interaction between the detection and recognition modules. \textbf{A. Scene Text Detection} Most of the early DL-based approaches to scene text detection only support horizontal text detection. In~\cite{Seglink}, Shi et al. propose to detect texts with segments and links: they first detect a number of text parts, then predict the linking relationships between neighboring parts to form text bounding boxes. CTPN~\cite{CTPN} first detects text in sequences of fine-scale proposals, then recurrently connects these sequential proposals using a BLSTM. TextBoxes~\cite{Textboxes} was designed based on SSD, but it adopts long default boxes with large aspect ratios (as well as vertical offsets), because texts tend to have larger aspect ratios than general objects. Initially it only supported the detection of horizontal (or vertical) texts; later on, the same authors proposed TextBoxes++~\cite{TextBoxesplus} to support multi-oriented scene text detection. TextBoxes++ improves upon TextBoxes by replacing the rectangular box representation of conventional object detectors with a quadrilateral representation. Moreover, the authors adopt a few long convolutional kernels to enable long but narrow receptive fields. It directly outputs word bounding boxes at multiple layers by jointly predicting text presence and coordinate offsets to anchor boxes. EAST~\cite{EAST} is a U-shaped fully convolutional network for detecting multi-oriented texts; it uses PVANet to speed up the computation. \textbf{B. Scene Text Recognition} In~\cite{CRNN}, Shi et al. propose to use a CNN and an RNN to model image features and Connectionist Temporal Classification (CTC)~\cite{CTC} to transcribe the feature sequences into texts. In~\cite{ShiWLYB16}, Shi et al. recognize scene text via an attention-based sequence-to-sequence model. In~\cite{SlidingCNN}, Yin et al. propose the sliding convolutional character model, in which a sliding window transforms a text-line image into sequential character-size crops. For each crop (of character size, e.g., $32\times 40$), they extract deep features using convolutional neural networks and make predictions. The outputs from the sequential sliding windows are finally decoded with CTC. Sliding CNN can avoid the vanishing/exploding gradients encountered when training RNN-LSTM based models.
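To make the CRNN-style recognition pipeline concrete, below is a minimal PyTorch sketch of a convolutional encoder followed by a bidirectional LSTM trained with CTC (the layer sizes and the class count are illustrative assumptions, not the original CRNN configuration):

\begin{verbatim}
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_classes):           # num_classes includes blank
        super().__init__()
        self.cnn = nn.Sequential(               # 32-pixel-high gray input
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                       # x: (N, 1, 32, W)
        feat = self.cnn(x)                      # (N, 128, 8, W/4)
        feat = feat.permute(3, 0, 1, 2).flatten(2)  # (W/4, N, 128*8)
        out, _ = self.rnn(feat)
        return self.fc(out).log_softmax(2)      # (T, N, C) for CTC

# e.g., several thousand Chinese characters plus one CTC blank (index 0)
model = TinyCRNN(num_classes=4073)
x = torch.randn(4, 1, 32, 128)                  # a batch of text-line crops
log_probs = model(x)                            # T = 32 time steps
targets = torch.randint(1, 4073, (4, 10))       # dummy label sequences
loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((4,), 32, dtype=torch.long),
    target_lengths=torch.full((4,), 10, dtype=torch.long))
loss.backward()
\end{verbatim}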
\textbf{C. End-to-End Frameworks} In~\cite{LiWS17,DeepText}, two end-to-end methods were proposed to localize and recognize text in a unified network, but they require relatively complex training procedures. In~\cite{MaskTextSpotter}, the authors design an end-to-end framework which is able to detect and recognize arbitrary-shape (horizontal, oriented, and curved) scene texts. \textbf{Short Summary}. It is noticeable that most state-of-the-art approaches for scene text detection and recognition focus on the English language, while very little effort has been put into Chinese scene text recognition. \subsection{Related Datasets (English, Chinese)} For English scene text detection, ICDAR2013, ICDAR2015~\cite{ICDAR2015} and COCO-Text~\cite{COCOText} are well-known real-world datasets, while SynthText~\cite{Synthetic} is a commonly used synthetic English scene text dataset. The training sets of ICDAR2013 and ICDAR2015 are rather small, with 229 and 1,000 images, respectively. COCO-Text has 43,686 training images (yet the annotations of some images are not very accurate), whereas SynthText has 800,000 images with 8 million synthetic cropped image patches. For Chinese scene text detection and recognition, the three datasets most related to ours are RCTW~\cite{RCTW}, CTW~\cite{CTW} and the ICPR 2018 MTWI challenge dataset~\cite{MTWI}, all of which were released recently (in 2017 and 2018). RCTW (a.k.a. CTW-12k) is an ICDAR-2017 competition dataset for scene text detection and recognition. It has 12,263 annotated images, of which 8,034 are used as training data. Most images in RCTW were captured using smart phone cameras, but it also includes screenshots from computers and smart phones, so these images are ``born-digital". The texts in the images of RCTW were annotated at the level of text lines using quadrilaterals, to enable multi-oriented scene text detection. CTW is a large dataset of Tencent street view images in China. It has 32,285 natural images with 1,018,402 Chinese characters, which is much larger than previous datasets including RCTW. All the natural images in CTW have the same resolution of $2048\times 2048$, and all the street view images were captured at fixed intervals (10 to 20 meters). Moreover, if two successive images have 70\% overlap, one of them was removed. For each Chinese character in a natural scene image, they annotate its underlying character class, its bounding box, and 6 attributes indicating whether it is occluded, has a complex background, is distorted, is 3D raised, is wordart, and is handwritten, respectively. The CTW images were annotated in a crowd-sourcing manner by a third-party image annotation company, yet characters in English and other languages were not annotated. Finally, CTW has 3,850 unique Chinese characters (categories); 13.2\% of the character instances are occluded, 28.0\% have a complex background, 26.0\% are distorted, and 26.9\% are 3D raised. The authors argue that the majority of the 3,850 character categories are rarely used and many of them have very few samples in the training data (class imbalance), so they only consider the recognition performance of the top 1,000 most frequent characters in their benchmark experiments. \textbf{Discussion}. DL-based methods are data-driven; they need to consume huge amounts of data (images) to achieve good recognition performance. Therefore, a fundamental question for DL-based Chinese Photo OCR is: how many Chinese images do DL-based methods need to achieve high recognition accuracy on Chinese characters?
The English language has only 26 letters, yet SynthText, with 800,000 synthetic images, is needed to achieve good English text recognition accuracy (around 80\%). Given that there are 6,763 commonly used Chinese characters, do DL-based Chinese OCR methods then need 200 million images with Chinese texts to reach the same recognition accuracy as DL-based English Photo OCR approaches? Due to the difficulty of collecting and annotating such a huge number of photos, researchers need to investigate how to artificially generate photos with Chinese sentences. A simpler solution is to use tools such as~\cite{Synthetic} to generate synthetic Chinese scene images. Indeed, different methods for generating synthetic scene images with Chinese texts/characters should be jointly adopted to enhance the diversity, complexity and difficulty of the generated Chinese scene images. Moreover, semi-supervised approaches can also be helpful, to fully utilize the limited annotated images and the vast amount of unannotated ones, especially Web images from the Internet. \section{The ShopSign Dataset} \subsection{Dataset Collection and Annotation} In developed countries such as the USA, Italy and France, there is a very limited number of characters on shop signs, and the sizes of the shop signs are usually small. Moreover, the backgrounds of most shop signs in these countries are bare walls (whereas most Chinese shop signs use curtain/glass/wooden backgrounds). Owing to the differences in language, culture, history and degree of development, Chinese shop signs have distinctive features and are difficult to recognize. Even inside China, there is a big diversity in the materials and styles of shop signs across different regions. For instance, in major cities such as Shanghai and in the downtown areas of many cities, shops usually adopt fiber-reinforced plastic (FRP) and neon sign boards; but in suburban or developing regions, economical wooden, outdoor-inkjet and acrylic shop signs are very common. The styles of the shop signs also vary across provinces, e.g., shop signs in Inner Mongolia and the northwestern Xinjiang province are significantly different from those in Shanghai. \begin{figure*} \begin{center} \includegraphics[width=0.80\linewidth]{A1.jpg} \includegraphics[width=0.80\linewidth]{A2.jpg} \includegraphics[width=0.80\linewidth]{A3.jpg} \end{center} \caption{Sample images of ShopSign.} \label{fig9} \end{figure*} \begin{figure*} \centering \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth]{h1mirror.jpg} \caption{mirror} \label{fig:hard1} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth]{h2wooden.jpg} \caption{wooden} \label{fig:hard2} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth]{h3deformed.jpg} \caption{deformed} \label{fig:hard3} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth]{h4exposed.jpg} \caption{exposed} \label{fig:hard4} \end{subfigure} \hfill \begin{subfigure}[t]{0.19\textwidth} \includegraphics[width=\textwidth]{h5obscured.jpg} \caption{obscured} \label{fig:hard5} \end{subfigure} \caption{Examples of the 5 categories of hard images.} \label{fig:hard} \end{figure*} Building a large-scale dataset of Chinese shop signs is a fundamental but demanding task that requires enormous manual collection and annotation effort.
We asked 40 students from our institution to help collect shop sign images in various cities/regions of China, including Shanghai, Beijing, Inner Mongolia, Xinjiang, Heilongjiang, Liaoning, Fujian (Xiamen) as well as several cities/towns in Henan Province (Zhengzhou, Kaifeng, Xinyang, and a few counties/towns in Shangqiu and Zhoukou cities), over a period of more than two years. A total of 50 different cameras and smart phones were used in the collection, and many of the images carry GPS locations. After the collection of the images, two faculty members and ten graduate students were involved in the annotation of these images (in a text-line manner using quadrilaterals), which took another three months. Finally, the ShopSign dataset we built contains 25,770 well-annotated images of Chinese shop signs in street views. In Figure \ref{fig9}, we showcase a few representative images in ShopSign. \subsection{Dataset Characteristics} \begin{table}[!ht] \begin{center} \begin{tabular}{|l|c|} \hline Item & Number / Ratio \\ \hline\hline \textbf{Total Number of Images} & \textbf{25,770}\\ Training images & 20,738 \\ Testing images & 5,032 \\ Text-lines & \textbf{196,010} \\ Text-lines in Training set & 146,570\\ Text-lines in Testing set & 49,440\\ Chinese Characters & \textbf{626,280}\\ Unique Chinese Characters & \textbf{4,072} \\ \hline\hline \textbf{Number of Unique Characters with} & \\ 1 occurrence & 537 \\ 2-10 occurrences & 1,150 \\ 11-50 occurrences & 958 \\ 51-100 occurrences & 395 \\ 101-200 occurrences & 338 \\ 201-500 occurrences & 360 \\ 501-1000 occurrences & 188 \\ 1001-2000 occurrences & 108 \\ 2001-3000 occurrences & 22 \\ 3001-5000 occurrences & 15 \\ $ \geq 50$ occurrences & 1,439 \\ $ \geq 100$ occurrences & 1,039 \\ $ \geq 200$ occurrences & 694 \\ $ \geq 500$ occurrences & 333 \\ $ \geq 1000$ occurrences & 145 \\ $ \geq 2000$ occurrences & 37 \\ \hline\hline \textbf{Ratio of Unique Characters with} & \\ 1 occurrence & 13.2\% \\ 2-10 occurrences & 28.2\% \\ 11-50 occurrences & 23.5\% \\ $ \leq 50$ occurrences & 65\% \\ $ \leq 100$ occurrences & 74.7\% \\ 101-500 occurrences & 17.1\%\\ $ \geq 500$ occurrences & 8.2\% \\ $ \geq 1000$ occurrences & 3.6\% \\ $ \geq 2000$ occurrences & 0.9\% \\ \hline\hline \textbf{Number of Occurrences for} & \\ No. 1 most frequent character & 7,603 \\ No. 2 most frequent character & 6,503 \\ No. 3 most frequent character & 5,276 \\ No. 4 most frequent character & 5,121 \\ No. 5 most frequent character & 5,074 \\ \hline\hline \textbf{Distribution (Class) Imbalance} & \\ Characters with $ \geq 500$ occurrences & 8.2\%\\ Their total number of occurrences &64.7\%\\ Characters with $ \leq 100$ occurrences & 74.7\%\\ Their total number of occurrences &9.5\%\\ \hline \end{tabular} \end{center} \caption{Basic Statistics of ShopSign.} \label{tab8} \end{table} In Table \ref{tab8}, we present the basic statistics of ShopSign. It contains 25,770 Chinese natural scene images and 196,010 text-lines. The total number of unique Chinese characters is 4,072, with 626,280 occurrences in total. Overall, ShopSign has the following characteristics. \begin{enumerate} \item \textbf{Large-scale}. It contains both images with horizontal and multi-directional texts: more than 10,000 images with horizontal texts and over 10,000 images with multi-directional texts. \item \textbf{Night images}. It includes nearly 4,000 night images (captured at night).
In these night images, the sign boards stand out while the remaining background areas are comparatively dark. Such night images rarely exist in other datasets. \item \textbf{Special categories of hard images}. It consists of 5 special categories of hard images, namely mirror, wooden, deformed, exposed and obscured, as depicted in Figure \ref{fig:hard}. Text detection and recognition on such hard images should be more challenging than on ordinary natural scene images. Besides, it has 500 very difficult scene images, with complex layouts or gloomy backgrounds, as we can observe from Figure \ref{fig9}. \item \textbf{Sparsity and class imbalance}. The number of unique characters with 500 or more occurrences is 333 (8.2\%), but their occurrences account for 64.7\% of the total character occurrences in the dataset. In comparison, the ratio of characters with 100 or fewer occurrences is 74.7\%, but their total number of occurrences is only 9.5\% of the dataset. Furthermore, 537 characters have only 1 occurrence, and 1,687 characters (41.4\%) have 10 or fewer occurrences. Hence, the distribution of the Chinese characters' occurrence frequencies in ShopSign is highly skewed. \item \textbf{Diversity}. The dataset spans several provinces in China, from Beijing and Shanghai to northeastern and northwestern provinces, and from downtown areas to developing regions. The photos were captured in different seasons of the year, using 50 different smart phones. The styles of the sign boards, their backgrounds and texts are also very diverse. \item \textbf{Pair images}. Our dataset contains 2,516 pairs of images. In each pair of images, the same sign board was shot twice, from both frontal and tilted perspectives. Pair images facilitate the evaluation/comparison of an algorithm's performance on horizontal and multi-oriented text detection. \end{enumerate} In Table \ref{tab1}, we make comparisons between our ShopSign dataset and the existing CTW dataset, which is the most relevant dataset to ours. \begin{table*}[ht] \begin{center} \begin{tabular}{p{3cm}|p{5.5cm}|p{7.5cm}} \hline & CTW & ShopSign \\ \hline\hline\hline Themes & Street views (roads, buildings, trees, etc.) & Shop signs (sign boards) with texts \\ \hline Equipment & Street-view collection vehicles with identical Nikon SLR cameras & Smart phones. 50 different smart phones from various brands (e.g. iPhone, Huawei, Samsung, Vivo, Xiaomi, etc.) \\ \hline Collection Manner & Automated (by vehicles) & Manual (by 40 different research assistants) \\ \hline Shooting Angles & fixed angles & arbitrary angles \\ \hline Capture Distance & fixed distance: $\geq$ 10 meters (vehicles inside motor vehicle lanes) & varying distances to the targets, 2-8 meters, on pavements \\ \hline Time Span & around 3 months & 2 years and 4 months \\ \hline Sites & a few major (developed) cities & wide geographical coverage (Beijing, Shanghai, Xiamen, Xinjiang, Mongolia, Mudanjiang, Huludao and a few cities and small towns in Henan province), including many developing regions or small towns that street view vehicles do not reach. \\ \hline Photo Resolutions & uniformly $2048\times 2048$ & various resolutions ($3024\times 4032$, $1920\times 1080$, $1280\times 720$, ...) \\ \hline Annotation Methods & Third-party crowd-sourcing platform, per character. Suffers from the well-known quality issues of crowd-sourcing. & 10 full-time research assistants (manually annotated, with calibrations), per text-line. Highly precise annotations.
\\ \hline Diversity of the Sign Boards & Limited (medium or expensive sign boards) & Various materials for sign boards, including many inexpensive ones (e.g., cloth or wood) used in rural or developing urban areas. \\ \hline Languages & Chinese only & Chinese and English \\ \hline Richness of texts & Medium or large signboards; no tiny ads, no indoor texts, no tiny texts, no texts on mirror boards. & Various scales of text areas: embossed texts on buildings; texts on mirrors; indoor texts; tiny ads; rich texts on sign boards of different styles. \\ \hline Night Photos & No & Yes, with nearly 4,000 photos shot at night. \\ \hline Pair Photos & No & Yes, with 2,516 pairs of photos, in which two photos were taken of each shop sign, from frontal and tilted angles. \\ \hline Very difficult photos & Very few & More than 500 very difficult photos. \\ \hline \end{tabular} \end{center} \caption{Comparison between CTW and our ShopSign dataset.} \label{tab1} \end{table*} \subsection{Dataset Split} The ShopSign dataset will be completely shared with the research community. We will not withhold the testing set but release it to the public as well. However, we suggest that researchers train and tune parameters on the training set, and minimize the number of runs on the testing set. First, we split ShopSign into a training set (Train1) and a testing set (Test1), which contain 20,738 and 5,032 images, respectively. The testing set comprises 2,516 pairs of images, which were independently collected and annotated in a later period (by a different group of research assistants) than the training set. We collected the testing set in this way so that the real performance of state-of-the-art text detection algorithms is more authentically reflected. The pair images also reveal an algorithm's ability to detect horizontal and multi-oriented scene texts. Train1 and Test1 will also be used for assessing the performance of text recognition algorithms: the collection of cropped text-lines from Train1 (since they have annotations) will be used for training text recognition models, while the set of cropped text-lines from Test1 will be used for testing their recognition performance. It should be noted that, due to the large number of classes in ShopSign (with 4,072 unique characters) and the text-line based annotation manner, the data is very sparse and imbalanced (the class imbalance in the number of samples for different Chinese characters has been discussed above). Second, for the specific evaluation of text detection performance on the five ``hard'' categories of images, ShopSign is re-split into another training set (Train2) and testing set (Test2). Test2 comprises half of the images from each of the five ``hard'' categories, whereas all the other images of ShopSign are used as the new training set, i.e., Train2. In short, Train1 and Test1 are for evaluating both text detection and recognition algorithms, whereas Train2 and Test2 are specially designed for assessing the performance of text detection algorithms on the five ``hard'' categories of images. In view of the large number of Chinese character classes (4,072 in our dataset), ShopSign is very sparse and imbalanced, as described above; the text-line based annotation manner makes the data even sparser and more challenging. \section{Experiments} Having the ShopSign dataset at hand, we now proceed to illustrate how it helps improve Chinese scene text detection and recognition results.
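The hard-category experiments below rely on the Train2/Test2 re-split described above. A minimal sketch of this re-split logic is given here for concreteness (the \texttt{category} field and function name are hypothetical, for illustration only; they are not part of any released dataset tooling):
\begin{verbatim}
# Sketch of the Train2/Test2 re-split: half of the images in each of the
# five hard categories go to Test2; all remaining images form Train2.
import random

HARD = ("mirror", "wooden", "deformed", "exposed", "obscured")

def resplit(images, seed=0):
    """images: list of dicts, each carrying a (hypothetical) 'category' key."""
    rng = random.Random(seed)
    train2, test2 = [], []
    for cat in HARD:
        subset = [im for im in images if im["category"] == cat]
        rng.shuffle(subset)
        half = len(subset) // 2
        test2.extend(subset[:half])
        train2.extend(subset[half:])
    train2.extend(im for im in images if im["category"] not in HARD)
    return train2, test2
\end{verbatim}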
\subsection{Influence of language on text detection} For scene text detection, one only needs to localize the areas of the candidate text lines, without recognizing the content (characters) of the texts. Furthermore, some text lines may be too tiny or vague to be recognized, even though they can still be detected as textual lines/areas. Hence, the intrinsic image features utilized by deep learning based scene text detection algorithms should be very different from those used by deep learning based text recognition algorithms. Early research on scene text detection focused only on English natural scene texts, but researchers in this field often raise the following questions: do text detection models trained on English natural scene images perform well on Chinese ones, and vice versa? Does the performance of scene text detection algorithms differ between English and Chinese natural scene images? To investigate whether text language has a significant influence on the performance of scene text detection algorithms, we first present, in Table \ref{tab2}, the text detection results of the EAST and TextBoxes++ models trained on SynthText, evaluated on the test set of ShopSign (Test1). We see that these two models achieve very low recall and precision on ShopSign; their recall rates are only 12\%-15\%. This demonstrates that deep learning based scene text detection models trained on English natural scene images perform very poorly on Chinese ones. Therefore, text language has a very significant influence on the performance of scene text detection algorithms, and datasets with Chinese natural scene images are needed for Chinese scene text detection. \subsection{Baseline Experiments on ShopSign} We next report the performance of baseline text detection algorithms on ShopSign, using Train1 and Test1. In Table \ref{tab2}, we report the prediction performance of the major text detection algorithms (trained on Train1 of ShopSign) on the test set (i.e., Test1) of ShopSign, which contains 2,516 pairs of images, with each pair containing a horizontal and a multi-oriented image captured for the same shop sign(s). The pair images allow a more comprehensive evaluation of the relative performance of scene text detection models, since the two images were shot of the same shop sign but from different perspectives (one frontal, the other tilted). We observe that EAST achieves the best text detection performance on ShopSign, with a recall of 57.9\%-58.4\%. We also notice that CTPN obtains better text detection results than TextBoxes++ on the horizontal test set of ShopSign, but the latter outperforms the former on the multi-oriented scene texts. Therefore, CTPN is better suited to horizontal text detection. \subsection{Cross-dataset Text Detection Performance} We now compare the relative difficulty of ShopSign with RCTW/CTW/MTWI for scene text detection, using EAST, TextBoxes++ and CTPN. We choose the RCTW/CTW/MTWI datasets for cross-dataset generalization because they are the only available scene text datasets that contain Chinese texts/characters. Cross-dataset generalization evaluates the generalization ability of a dataset. Because there are no officially released ground-truth labels for the test sets of RCTW/CTW/MTWI, we use their official training data to train the corresponding text detection models, then test these models on the test set (i.e., Test1) of ShopSign.
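The tables that follow report recall (R), precision (P) and their harmonic mean (H). For reference, below is a minimal sketch of the standard IoU-based matching underlying such metrics, simplified to axis-aligned rectangles (ShopSign annotations are quadrilaterals, and the exact matching protocol of each benchmark may differ):
\begin{verbatim}
# A detection counts as a true positive if its IoU with some unmatched
# ground-truth box exceeds a threshold (0.5 here).
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def recall_precision_h(gt_boxes, det_boxes, thr=0.5):
    matched, tp = set(), 0
    for d in det_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(d, g) >= thr:
                matched.add(i)
                tp += 1
                break
    r = tp / len(gt_boxes) if gt_boxes else 0.0
    p = tp / len(det_boxes) if det_boxes else 0.0
    h = 2 * r * p / (r + p) if r + p else 0.0
    return r, p, h
\end{verbatim}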
In Table \ref{tab2}, we use the three scene text detection algorithms to train models on RCTW/CTW/MTWI, then report their performance on the test set of ShopSign. We see that text detection models trained on CTW and MTWI by all three algorithms only obtain a recall of 16.1\%-38\% on ShopSign, which is very low. This indicates that ShopSign is a more challenging Chinese natural scene image set than CTW and MTWI. We also see that EAST trained on RCTW obtains the best recall on ShopSign, between 50.5\% and 53.2\%, while the other two algorithms only achieve recalls below 44.2\%. In short, ShopSign is a more comprehensive and difficult Chinese natural scene text dataset than RCTW/CTW/MTWI. \begin{table*}[ht] \begin{center} \begin{tabular}{|c|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods}& \multicolumn{3}{c|}{Horizontal} & \multicolumn{3}{c|}{Multi-oriented}\\ \cline{3-8} &&R &P &H &R &P &H\\ \hline\hline\hline \multirow{2}{*}{SynthText} &EAST &0.124& 0.125& 0.124 & 0.133 & 0.174 & 0.150\\ \cline{2-8} &TextBoxes++ & 0.115 & 0.287 & 0.165&0.150 & 0.330 & 0.206\\ \hline\hline \multirow{3}{*}{ShopSign} &EAST &0.584 &0.364 &0.448 & 0.579 &0.410 &0.480\\ \cline{2-8} &TextBoxes++ &0.471 &0.501 &0.486 &0.479 &0.476 &0.478\\ \cline{2-8} &CTPN &0.535 &0.566 &0.550 &0.444 &0.518 &0.478\\ \hline\hline \multirow{3}{*}{CTW} &EAST &0.241 &0.261 &0.250 &0.215 &0.279 &0.243\\ \cline{2-8} &TextBoxes++ &0.346 &0.084 &0.135 &0.313 &0.075 &0.121\\ \cline{2-8} & CTPN &0.182 &0.371 &0.244 &0.161 &0.379 &0.226\\ \hline\hline \multirow{3}{*}{RCTW} &EAST &0.532 &0.371 &0.437 &0.505 &0.412 &0.454\\ \cline{2-8} &TextBoxes++ &0.401 &0.516 &0.451 &0.380 &0.432 &0.405\\ \cline{2-8} &CTPN &0.442 &0.446 &0.444 &0.373 &0.407 &0.389\\ \hline\hline \multirow{3}{*}{MTWI} &EAST &0.360 &0.250 &0.295 &0.319 &0.274 &0.294\\ \cline{2-8} &TextBoxes++ &0.328 &0.392 &0.357 &0.306 &0.340 &0.322\\ \cline{2-8} &CTPN &0.380 &0.490 &0.428 &0.336 &0.480 &0.395\\ \hline\hline \end{tabular} \end{center} \caption{Results of state-of-the-art text detection methods on the test set of ShopSign.} \label{tab2} \end{table*} \subsection{Specific Performance on the Hard Categories} \begin{table*}[ht] \begin{center} \begin{tabular}{|c|l|l|l|l|l|l|} \hline Methods & Datasets & Mirror &Wooden &Deformed &Exposed &Obscured\\ \hline\hline \multirow{6}{*}{EAST} & CTW &0.155 &0.242 &0.219 &0.274 &0.209\\ \cline{2-7} & CTW+ShopSign &0.515 &0.561 &0.561 &0.534 &0.549\\ \cline{2-7} & MTWI &0.295 &0.328 &0.351 &0.347 &0.329\\ \cline{2-7} &MTWI+ShopSign &0.542 &0.542 &0.560 &0.574 &0.542\\ \cline{2-7} & RCTW &0.452 &0.467 &0.504 &0.490 &0.485\\ \cline{2-7} & RCTW+ShopSign &0.533 &0.558 &0.569 &0.595 &0.558\\ \hline \hline \multirow{6}{*}{TextBoxes++} &CTW &0.324 &0.287 &0.285 &0.300 &0.318\\ \cline{2-7} & CTW+ShopSign &0.494 &0.398 &0.391 &0.440 &0.422 \\ \cline{2-7} & MTWI &0.360 &0.272 &0.273 &0.297 &0.291 \\ \cline{2-7} & MTWI+ShopSign& 0.527 &0.404 &0.394 &0.466 &0.431\\ \cline{2-7} & RCTW &0.449 &0.334 &0.330 &0.397 &0.357 \\ \cline{2-7} &RCTW+ShopSign &0.515 &0.397 &0.387 &0.478 &0.431\\ \hline \hline \multirow{6}{*}{CTPN} &CTW &0.136 &0.152 &0.167 &0.114 &0.173\\ \cline{2-7} & CTW+ShopSign &0.565 &0.496 &0.465 &0.503 &0.507\\ \cline{2-7} & MTWI &0.283 &0.333 &0.352 &0.334 &0.353\\ \cline{2-7} & MTWI+ShopSign &0.564 &0.516 &0.486 &0.527 &0.511\\ \cline{2-7} & RCTW &0.398 &0.371 &0.380 &0.404 &0.403\\ \cline{2-7} & RCTW+ShopSign &0.565 &0.499 &0.489 &0.568 &0.522\\ \hline \hline \end{tabular} \end{center}
\caption{Recall of baseline text detection methods on the difficult images of ShopSign (whole-image level).} \label{tab3} \end{table*} \begin{table*}[ht] \begin{center} \begin{tabular}{|c|l|l|l|l|l|l|} \hline Methods & Datasets & Mirror &Wooden &Deformed &Exposed &Obscured\\ \hline\hline \multirow{6}{*}{EAST} & CTW &0.096 &0.152 &0.201 &0.239 &0.112\\ \cline{2-7} & CTW+ShopSign &0.376 &0.488 &0.462 &0.543 &0.341\\ \cline{2-7} & MTWI &0.196 &0.264 &0.316 &0.351 &0.155\\ \cline{2-7} &MTWI+ShopSign &0.388 &0.492 &0.496 &0.564 &0.343\\ \cline{2-7} & RCTW &0.296 &0.389 &0.444 &0.452 &0.278\\ \cline{2-7} & RCTW+ShopSign &0.380 &0.492 &0.479 &0.585 &0.359\\ \hline \hline \multirow{6}{*}{TextBoxes++} &CTW&0.244&0.373&0.333&0.282&0.302\\ \cline{2-7} & CTW+ShopSign &0.488 &0.635 &0.427 &0.521 &0.473 \\ \cline{2-7} & MTWI &0.356 &0.450 &0.359 &0.372 &0.320 \\ \cline{2-7} & MTWI+ShopSign& 0.524 &0.645 &0.466 &0.548 &0.486\\ \cline{2-7} & RCTW &0.436 &0.535 &0.380 &0.468 &0.402 \\ \cline{2-7} &RCTW+ShopSign &0.500 &0.637 &0.432 &0.569 &0.482\\ \hline \hline \multirow{6}{*}{CTPN} &CTW &0.096 &0.177 &0.248 &0.133 &0.124\\ \cline{2-7} & CTW+ShopSign &0.332 &0.373 &0.350 &0.362 &0.342 \\ \cline{2-7} & MTWI &0.340 &0.401 &0.346 &0.356 &0.328 \\ \cline{2-7} & MTWI+ShopSign &0.412 &0.455 &0.368 &0.415 &0.372 \\ \cline{2-7} & RCTW &0.368 &0.420 &0.359 &0.426 &0.366 \\ \cline{2-7} & RCTW+ShopSign &0.372 &0.420 &0.372 &0.367 &0.407 \\ \hline \hline \end{tabular} \end{center} \caption{Recall of baseline text detection methods on the difficult images of ShopSign (specific text-line level).} \label{tab4} \end{table*} To characterize the detection difficulty of the five ``hard'' categories of ShopSign (mirror, wooden, deformed, exposed and obscured), we report the specific text detection results on each category, using Train2 and Test2. In Table \ref{tab3}, we first show the overall results of the three methods on each ``hard'' category of ShopSign. Without ShopSign, the three text detection algorithms again achieve low recall on each ``hard'' category, but the difference from the performance observed in Table \ref{tab2} is not significant. This is because the results in Table \ref{tab3} are image-level: they include all the text lines of each image that contains hard text lines (such as the text lines with mirrors). When ShopSign is combined with RCTW/CTW/MTWI, we observe very significant performance improvements. Therefore, the ShopSign dataset improves the performance of scene text detection algorithms on the ``hard'' examples. To further check the performance of scene text detection algorithms on the specific ``hard'' examples of ShopSign images, in Table \ref{tab4} we pick the corresponding ``hard'' text lines from each ``hard'' image and separately calculate their recall. That is, for each image belonging to a ``hard'' category, we only measure the recall of the specific hard text lines of that image. From Table \ref{tab4}, it is clear that the recall of the scene text detection algorithms on the hard text lines of ShopSign is much lower than reported in Table \ref{tab2}. This shows that the five specific categories of ``hard'' examples in ShopSign are more challenging. Moreover, we see that TextBoxes++ is less affected by the hard examples than EAST and CTPN. In particular, both EAST and CTPN perform poorly on the mirror and obscured images, with a recall under 41.2\%, while the performance of CTPN also drops on the exposed scene text images.
Overall, the ShopSign dataset is useful for the detection of ``hard'' Chinese scene text images. \section{Conclusion and Future Work} In this work, we present the ShopSign dataset, a diverse, large-scale dataset of natural scene images with Chinese texts. We describe the properties of the dataset and carry out baseline experiments to demonstrate its challenges. In future work, we will conduct experiments and report the performance of baseline text recognition algorithms on ShopSign. We will also show the corresponding difficulties of text recognition on the ``hard'' images of ShopSign. Given the large number of Chinese characters, data sparsity is very common (even inevitable) in datasets of Chinese natural scene images, although the sizes of CTW and ShopSign are already very large. The text-line based annotation manner further increases the data sparsity when we consider the combinations of Chinese characters in a sequence. Besides, the number of samples for different characters is also highly imbalanced. A large-scale synthetic dataset of scene images with Chinese texts, which may contain tens of millions of images, is needed by the community. To this end, we are designing GAN (generative adversarial network) based techniques to generate such synthetic datasets. \section{Acknowledgments} We are very grateful to the 40 students who contributed to the collection of the ShopSign images, including (but not limited to) Mr. Yikun Han, Mr. Jiashuai Zhang, Mr. Kai Wu, Ms. Xiaoyi Chen, Mr. Cheng Zhang, Ms. Shi Wang, Ms. Mengjing Sun, Ms. Jia Shi, Ms. Xin Wang, Ms. Huihui Wang, Ms. Lumin Wang, Mr. Weizhen Chen, Mr. Menglei Jiao, Mr. Muye Zhang, Mr. Zhiqiang Guo and Dr. Wei Jiang from NCWU, etc. We deeply acknowledge the great efforts in the annotation of the ShopSign dataset made by Ms. Guowen Peng, Ms. Lumin Wang, Mr. Yikun Han, Mr. Yuefeng Tao, Mr. Jingzhe Yan, Mr. Hongsong Yan, Ms. Feifei Fu, Ms. Mingjue Niu, etc. Ms. Guowen Peng has also spent a huge amount of time on the re-arrangement of the images and the merging and correction of the annotation results. We also thank the useful feedback from Prof. Xucheng Yin (USTB), Mr. Chang Liu, and Dr. Chun Yang. {\small \bibliographystyle{ieee}}
\section{Introduction} This paper focuses on the transition to turbulence in particulate flows. Transition to turbulence has been a widely studied topic since Reynolds first documented the phenomenon experimentally \cite{reynolds1883experimental,kerswell2005recent}. While much of the research on the topic has focused on single phase flows, there is a growing interest in particulate flows, due to their many applications. Examples range from the precise determination of the volume fraction of oil in the oil-water-sand-gas mixture extracted from offshore wells, to needs in the food processing industry \citep{ismail2005_fmi}, and flows of molten metal carrying impurities during recycling processes \citep{kolesnikov2011_mmtb}. Transition to turbulence is, even for single phase flows, a complex problem: in the case of the pipe flow, there is no clearly defined critical Reynolds number. The problem is even more complex for particulate flows, due to the large number of parameters characterising the solid phase. To the complexity of the dynamics is added the inherent difficulty of accounting, theoretically or numerically, for a large number of independent objects. Nonetheless, experimental \cite{segre1962behaviour,matas2003transition} and theoretical \cite{saffman1962stability,asmolov1999inertial,klinkenberg2011modal} knowledge has been amassed on the topic of transition to turbulence in particulate pipe flows. Regarding the particles' influence on pipe flow stability in particular, the effect on the transition to turbulence depends non-trivially on the size and volume fraction of the particles. Matas et al. observed that, for small particles, transition occurs at lower flow rates after their addition, while the effect is reversed for large particles. In a similar fashion, particles tend to have a destabilising effect on the pipe flow stability at low particle volume concentrations but a stabilising one at high volume concentrations \cite{matas2003transition}. Numerical simulations based on accurate modelling of individual solid particles recovered this phenomenology for pipe flows \citep{yu2013numerical}. The present knowledge on this topic is mostly empirical and there is a need for a better understanding of the mechanisms underlying the transition to turbulence of particulate pipe flows. A previous study of the linear stability of particulate pipe flows uncovered a mechanism for instability \cite{rouquier2018instability}. However, even when the flow is stable to infinitesimal disturbances, interactions between disturbances and the underlying flow can lead to large distortions of the base flow due to the non-normality of the linearised equations. Perturbations can experience large growth at finite time \citep{waleffe1995transition,bergstrom1993optimal}, a phenomenon generally referred to as \textit{transient growth}. This paper aims to further this understanding by means of a linear transient growth analysis, in order to study the flow behaviour below the critical Reynolds number. The paper starts with an introduction of the two-fluid model used and the assumptions it relies on, the details of the variational method used to obtain the transient growth, and the numerical methods employed (section \ref{sec:model}). We then consider the envelope of the optimal gain as a function of time in section \ref{sec:envelope}.
Section \ref{sec:homog} focuses on the effect of the Reynolds number and particle size on the optimal gain for homogeneously distributed particles. This analysis is extended to the case of a nonhomogeneous particle distribution in section \ref{sec:inhomog}. Finally, the topology of the optimal perturbations is studied in section \ref{sec:topology}. \section{Model and governing equations}\label{sec:model} The complexity of particulate flows means they are usually studied through modelling assumptions and approximations, in order to simplify the problem while retaining as much of the underlying dynamics as possible. In general, a trade-off has to be struck between how accurately the model represents the particulate flow and its complexity. Models with an accurate particle description, such as fully Lagrangian models \citep{hu2006multi,sakai2012lagrangian,sun2013three} and immersed boundary methods \citep{glowinski1999distributed,prosperetti2001physalis,uhlmann2005immersed}, suffer from a high computational cost. In order to avoid the computational cost incurred when accounting for particles as individual solids, here we describe the particulate flow using the `two-fluid' model first derived in \cite{saffman1962stability}. The fluid and solid phases are treated as two inter-penetrating media, with the particles being described as a continuous field rather than as discrete entities. It is a two-way coupled model, taking into account the feedback of the solid phase on the fluid. On the other hand, particle-particle interactions such as collisions or clustering, as well as the deflection of the flow around the particles, are neglected. The two-fluid model is therefore valid for low concentrations and in the limit where particles are sufficiently smaller than the characteristic scale of the flow. This model has been used in the context of channel flow \cite{klinkenberg2011modal,boronin12} and boundary layer flow \cite{boronin2014modal}. \subsection{Two-fluid model} We consider the flow of a fluid of density $\rho_f$ and dynamic viscosity $\mu$ through a straight pipe with a constant circular cross-section of radius $r_0$, driven by a constant pressure gradient. The fluid carries spherical particles of radius $a$. The particles are treated as a continuous field with a spatially varying density $N$. Their motion is coupled to the fluid via the Stokes drag force, $S_d=6\pi a\mu |\mathbf{u_p}-\mathbf{u}|$, where $\mathbf{u}$ and $\mathbf{u_p}$ are the fluid and particle velocities respectively. When working with an averaging method, one has to ensure that the system of equations is closed; if only the Stokes drag is considered, no specific correction is required \cite{jackson2000dynamics}. The Stokes drag force is proportional to $a$. By contrast, the other forces commonly considered (such as the virtual mass force, buoyancy, the Magnus force, the Saffman force and the Basset history force) are all quadratic or higher order in the particle radius, and can therefore in general be neglected. The Saffman lift force may however become significant if the background shear is large rather than $O(1)$ \citep{boronin2008stability}. Similarly, buoyancy is proportional to $\rho_f a^3$, regardless of which exact definition is chosen, and becomes vanishingly small compared with the Stokes drag $S_d$ in the limit $a\rightarrow0$. More details on the relevance of the drag-only, two-fluid model used in this paper can be found in \cite{rouquier2018instability}.
We use the standard cylindrical coordinates $(r,\theta,z)$ aligned with the pipe, with the respective velocity components $\mathbf{u}=(u_r,u_{\theta},u_z)$ and $\mathbf{u_p}=(u_{pr},u_{p \theta},u_{pz})$ for the fluid and particulate phases. Where relevant, we distinguish quantities associated with the particles from those associated with the fluid by means of a subscript $p$. The fluid velocity is described by the standard Navier-Stokes equations, to which a Stokes drag force is added to account for the interaction between the fluid and solid phases. The solid phase is characterised by the conservation of the particles' momentum and density. Nondimensionalising by the centreline velocity $U_0$, the pipe radius $r_0$ and the fluid density $\rho_f$ yields the following set of governing equations: \begin{gather} \label{adi1} \frac{\partial \mathbf{u}}{\partial t} = -\nabla p \, - (\mathbf{u} \cdot \nabla ) \mathbf{u} \, + \frac{1}{Re} \nabla^2 \mathbf{u} \, + \frac{f N}{S Re} (\mathbf{u_p} -\mathbf{u} ) \; , \\[1.0em] \label{adi2} \frac{\partial \mathbf{u_p}}{\partial t} = - (\mathbf{u_p} \cdot \nabla) \mathbf{u_p} \, + \frac{1}{S Re} ( \mathbf{u} -\mathbf{u_p} ) \; , \\[1.0em] \label{adii3} \frac{\partial N}{\partial t} = - \nabla \cdot (N \mathbf{u_p}) \; , \\[1.0em] \label{adi4} \nabla \cdot \mathbf{u} = 0 \; , \end{gather} where $p$ is the flow pressure and $N$ the local particle concentration. This system is governed by three non-dimensional parameters: the Reynolds number $Re=U_0r_0\rho_f/\mu$, the Stokes number $S=2a^2\rho_p/9r_0^2\rho_f$ (with $\rho_p$ the particle material density), which expresses a dimensionless particle relaxation time, and the mass concentration $f=m_p/m_f$, the ratio between the particle and fluid masses over the entire pipe. $N$ is normalised such that $\int N \, dV=1$; for a given position $\mathbf{x}$, $N(\mathbf{x}) >1$ implies that the local concentration of particles is higher than the pipe average. These equations are subject to impermeability and no-slip boundary conditions for the fluid, \begin{equation} \mathbf{u}\vert_{r=1}=0 \label{eqn:bc_f} \; , \end{equation} and a no-penetration boundary condition for the radial particle velocity, \begin{equation} u_{pr}\vert_{r=1}=0 \label{eqn:bc_p} \; . \end{equation} The stability of the flow is studied through the addition of a small perturbation to the steady solution $\mathbf{U}=\mathbf{U}_p=(1-r^2)\hat{\mathbf{z}}$: \[ \mathbf{u}=\mathbf{U}+\mathbf{u}' , \; \; \mathbf{u_p} = \mathbf{U} + \mathbf{u_p}', \; \; p = P + p' , \; \; N = N_0+N' \, , \] where $N_0$ is the base particle concentration. Linearising equations (\ref{adi1}) - (\ref{adi4}) around this base state yields: \begin{gather} \label{lin1} \partial_t \mathbf{u}' = -\nabla p' \; -\mathbf{U} \cdot \nabla \mathbf{u}' \; -\mathbf{u}' \cdot \nabla \mathbf{U} \, + \frac{1}{Re} \nabla^2 \mathbf{u'} \, + \frac{f N_0}{S Re } \, (\mathbf{u_p'} -\mathbf{u'}) \; , \\[1.0em] \label{lin2} \partial_t \mathbf{u_p}' = -\mathbf{u_p}'\cdot \nabla \mathbf{U} \, - \mathbf{U} \cdot \nabla \mathbf{u_p}' \, + \frac{1}{S Re} \,( \mathbf{u'} - \mathbf{u_p'} ) \; , \\[1.0em] \label{lin3} \partial_t N' = - N_0 \nabla \cdot \mathbf{u_p'} - \mathbf{u_p'} \cdot \nabla N_0 - \mathbf{U} \cdot \nabla N' \; , \\[1.0em] \label{lin4} \nabla \cdot \mathbf{u'} = 0 \; \text{.} \end{gather} The boundary conditions for the perturbations $\mathbf{u'}$ and $\mathbf{u_p'}$ are the same as for the full flow, $\mathbf{u}$ and $\mathbf{u_p}$.
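To give a sense of the magnitudes involved, the following short computation evaluates $Re$ and $S$ from dimensional inputs (the numerical values are purely illustrative and are not taken from any experiment); note in particular that $S$ grows quadratically with the particle radius $a$:
\begin{verbatim}
# Illustrative evaluation of the non-dimensional parameters (assumed values).
rho_f, rho_p = 1000.0, 2500.0   # fluid / particle densities [kg/m^3]
mu = 1.0e-3                     # dynamic viscosity [Pa s]
r0, U0 = 0.01, 0.1              # pipe radius [m], centreline velocity [m/s]

Re = U0 * r0 * rho_f / mu       # Reynolds number -> 1000
for a in (1e-5, 1e-4, 1e-3):    # particle radii [m]
    S = 2 * a**2 * rho_p / (9 * r0**2 * rho_f)   # Stokes number, S ~ a^2
    print(f"a = {a:.0e} m  ->  S = {S:.2e}")
\end{verbatim}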
From here on, the primes are dropped for the sake of readability. The gain is defined as the largest possible ratio between the energy of a perturbation at a time $T$ and its initial energy: \begin{equation} G(T,Re) = \max\limits_{\mathbf{u}(0)} \frac{E(\mathbf{u}(T))}{E(\mathbf{u}(0))} \; , \end{equation} where $E$ denotes the total kinetic energy of the fluid and particle perturbations. The initial perturbation $\mathbf{u}(0)$ achieving this maximum is the one causing the largest amount of growth, and is often referred to as the optimal disturbance. By optimising $G(T,Re)$ over $T$, one can find the maximum possible gain, or \textit{optimal gain}, at a given Reynolds number. This paper focuses on the optimal gain and the associated time of occurrence. A variational method, adapted from the single phase flow problem \cite{pringle2010using}, is used to solve this optimisation problem. The problem described by equations (\ref{lin1})-(\ref{lin4}) can be characterised by the following functional $\mathcal{L}$: \begin{align} \label{func1} \mathcal{L} = & \left\langle \frac{1}{2} \Big(m_f \mathbf{u}^2(T) + m_p \mathbf{u^2_p}(T) \Big) \right\rangle - \lambda \left[ \left\langle \frac{1}{2} \Big(m_f \mathbf{u^2}(0) + m_p \mathbf{u^2_p}(0) \Big) \right\rangle - E_0 \right] \nonumber \\[0.9em] & - \, \int_0^T \! \left\langle \boldsymbol{\Upsilon}\, \cdot \, \Big( \partial_t \mathbf{u} + \nabla p \; + \mathbf{U} \cdot \nabla \mathbf{u} \; + \mathbf{u} \cdot \nabla \mathbf{U} \, - \frac{1}{Re} \nabla^2 \mathbf{u} \, - \frac{f N_0}{S Re} \, (\mathbf{u_p} -\mathbf{u}) \Big) \right\rangle \mathrm{d}t \, \nonumber \\[0.9em] & -\int_0^T \! \left\langle \boldsymbol{\Upsilon_p} \, \cdot \, \Big( \partial_t \mathbf{u_p} + \mathbf{u_p} \cdot \nabla \mathbf{U} \, + \mathbf{U} \cdot \nabla \mathbf{u_p} \, - \frac{1}{S Re} \,( \mathbf{u} - \mathbf{u_p} ) \Big) \right\rangle \mathrm{d}t \nonumber \\[0.9em] & - \, \int_0^T \! \left\langle \Pi \, \cdot \, \nabla \cdot \mathbf{u} \right\rangle \mathrm{d}t \, - \int_0^T \! \left\langle \Gamma \, \cdot \, ( \partial_t N + N_0 \nabla \cdot \mathbf{u_p} + \mathbf{u_p} \cdot \nabla N_0 + \mathbf{U} \cdot \nabla N ) \right\rangle \, \mathrm{d}t \; , \end{align} where $ \lambda $, $\boldsymbol{\Upsilon} $, $\boldsymbol{\Upsilon_p} $, $\Gamma$ and $\Pi$ are the Lagrange multipliers enforcing the constraints of the problem: $ \lambda $ enforces that the initial energy is fixed; $\boldsymbol{\Upsilon} $ and $\boldsymbol{\Upsilon_p} $ enforce that equations (\ref{lin1}) and (\ref{lin2}) hold over $ t \in [0, T] $; $\Gamma$ enforces the particle concentration equation (\ref{lin3}), and thereby the conservation of the total number of particles; and $\Pi$ enforces the incompressibility of the flow. The brackets represent a normalised volume integral over the pipe: for any field $g$, $ \; \left\langle g \right\rangle = \int \! g \, \mathrm{d}V / V_p $, with $V_p$ the pipe volume. Finding the initial perturbation that maximises the energy growth is equivalent to maximising $\mathcal{L}$, done here by finding the roots of its variational derivative $\delta \mathcal{L}$. By reordering $\delta \mathcal{L}$, one obtains the adjoint system of equations of our problem, together with an additional set of conditions.
The adjoint system of equations is: \begin{gather} \label{Adj1} \partial_t \boldsymbol{\Upsilon} = - \, \mathbf{U} \cdot \nabla \boldsymbol{\Upsilon} \; + \, \boldsymbol{\Upsilon} \cdot \nabla \mathbf{U} \, - \nabla \Pi -\frac{1}{Re} \nabla^2 \boldsymbol{\Upsilon}\,+ \frac{f N_0}{S Re} \, \boldsymbol{\Upsilon}\,-\,\frac{1}{S Re} \,\boldsymbol{\Upsilon_p } \; , \\[1.0em] \label{Adj2} \partial_t \boldsymbol{\Upsilon_p} = - \, \mathbf{U} \cdot \nabla \boldsymbol{\Upsilon_p} \, + \, \boldsymbol{\Upsilon_p} \cdot \nabla \mathbf{U} - N_0 \, \nabla \Gamma - \frac{f N_0}{S Re} \, \boldsymbol{\Upsilon} \, + \, \frac{1}{S Re} \boldsymbol{\Upsilon_p} \; , \\[1.0em] \label{Adj3} \partial_t \Gamma = - \, \mathbf{U} \cdot \nabla \Gamma - \mathbf{u_p} \cdot \nabla \Gamma \; , \\[1.0em] \label{Adj4} \nabla \cdot \boldsymbol{\Upsilon} = 0 \; , \end{gather} where $\boldsymbol{\Upsilon}$ and $\boldsymbol{\Upsilon_p}$ are the adjoint fluid and particle velocities respectively, $\Gamma$ is the adjoint particle local concentration and $\Pi$ is the adjoint pressure. The adjoint equations must hold for $\delta \mathcal{L}$ to be equal to $0$. Enforcing $\delta \mathcal{L} = 0$ yields another set of conditions: \begin{gather} \label{Adj5} \mathbf{u}(T) = \boldsymbol{\Upsilon}(T) \quad , \quad \mathbf{u_p}(T) = \boldsymbol{\Upsilon_p}(T) \; , \\[1.0em] \label{Adj6} \lambda \mathbf{u}(0) - \boldsymbol{\Upsilon}(0) = 0 \quad , \quad \lambda \mathbf{u_p}(0) - \boldsymbol{\Upsilon_p}(0) = 0 \; . \end{gather} In this paper we consider homogeneous and nonhomogeneous particle distributions. In the case of a homogeneous particle distribution, $N_0$ is spatially constant. However, particles are not necessarily uniformly distributed in practice. In particular, they tend to aggregate in the radial direction, around $r=0.6-0.8$ \citep{segre1962behaviour, matas2004inertial}. We parametrise this phenomenon by means of a particle distribution of the form \begin{equation} N_0(r)=\tilde{N}\exp\{-(r-r_d)^2/2\sigma^2\}, \end{equation} with $\tilde{N}$ chosen such that $\int_0^1 N_0(r)r\,dr=1$. The distribution is then, in the radial direction, a Gaussian centred around radius $r_d$ with a standard deviation $\sigma$; $N_0$ remains homogeneous in the axial and azimuthal directions. A point of note is that, as opposed to the single phase pipe flow, which is well known to be linearly stable, the particulate flow can, within our theoretical framework, be linearly unstable in the case of nonhomogeneous particle distributions \cite{rouquier2018instability}. However, only linearly stable cases are considered in this work. \subsection{Iterative variational method} We use an iterative procedure to find the roots of $\delta \mathcal{L}$, akin to the one used in \cite{pringle2012minimal}. Initially, a first guess of the initial perturbation is made for the fluid velocity, $\mathbf{u}^{(0)}(t=0) = \mathbf{u_0}^{(0)}$, and the particle velocity, $\mathbf{u_p}^{(0)}(t=0) = \mathbf{u_{p0}}^{(0)}$. The initial perturbations of the fluid and solid phases for iteration $(i+1)$ are: \begin{equation} \mathbf{u}^{(i+1)}(0) = \mathbf{u}^{(i)}(0) + \epsilon \, ( \lambda \mathbf{u}^{(i)}(0) - \boldsymbol{\Upsilon}^{(i)}(0) ) \end{equation} for the fluid velocity, and \begin{equation} \mathbf{u_p}^{(i+1)}(0) = \mathbf{u_p}^{(i)}(0) + \epsilon_p \, (\lambda_p \mathbf{u_p}^{(i)}(0) - \boldsymbol{\Upsilon_p}^{(i)}(0)) \; \end{equation} for the particle velocity.
This entails that $\boldsymbol{\Upsilon}(0)$ and $\boldsymbol{\Upsilon_p}(0)$ have to be computed at each iteration. To that effect, the iteration process is as follows: \begin{itemize} \item At the $i$-th iteration, equations (\ref{lin1})-(\ref{lin4}) are advanced from $t=0$ until a target time $t=T$ is reached, in order to obtain $\mathbf{u}^{(i)}(T)$ and $\mathbf{u_p}^{(i)}(T)$. \item $\boldsymbol{\Upsilon}^{(i)}(T)$ and $\boldsymbol{\Upsilon_p}^{(i)}(T) $ are then computed using conditions (\ref{Adj5}). \item The adjoint system of equations (\ref{Adj1})-(\ref{Adj4}) is integrated backwards from $t=T$ to $t=0$ to find $\boldsymbol{\Upsilon}^{(i)}(0)$ and $\boldsymbol{\Upsilon_p}^{(i)}(0)$. \item The conditions \begin{equation} \frac{\partial \mathcal{L}}{\partial\mathbf{u_0}} = \lambda \mathbf{u_0} - \boldsymbol{\Upsilon}(0) \; , \quad \frac{\partial \mathcal{L}}{\partial\mathbf{u_{p0}}} = \lambda_p \mathbf{u_{p0}} - \boldsymbol{\Upsilon_p}(0) \; \end{equation} give the gradients in $\mathbf{u_0}$ and $\mathbf{u_{p0}}$, and the initial conditions are updated as \begin{equation} \mathbf{u}^{(i+1)}(0) = \mathbf{u}^{(i)}(0) + \epsilon \frac{\partial \mathcal{L}}{\partial\mathbf{u_0}} \; , \quad \mathbf{u_p}^{(i+1)}(0) = \mathbf{u_p}^{(i)}(0) + \epsilon_p \frac{\partial \mathcal{L}}{\partial\mathbf{u_{p0}}}, \end{equation} where $\epsilon$ and $\epsilon_p$ are the step sizes. \end{itemize} The process is repeated until the norms of $\partial \mathcal{L}/\partial\mathbf{u_0}$ and $\partial \mathcal{L}/\partial\mathbf{u_{p0}}$ fall below a threshold chosen for convergence. \subsection{Computational method} The code is derived from a standard DNS code \cite{openpipeflow}. Temporal discretisation is performed with a predictor-corrector scheme. Spatial discretisation uses a fourth-order finite difference method in the radial direction and a Fourier decomposition, with $128$ mesh points, in the azimuthal and streamwise directions. Any field $\mathbf{g}$ can then be written as: \begin{equation} \mathbf{g}(r,\theta,z ,t ) = \sum_{\alpha} \sum_{m} \hat{g}(r) e^{i (\alpha z + m \theta - \omega t) } \; , \end{equation} where $\alpha$ and $m$ are the wavenumbers in the streamwise and azimuthal directions respectively. The numerical code has been modified to incorporate the solid phase, using a fully Eulerian method. We add a set of equations for the particle velocity, $\mathbf{u_p}$, for both the direct and adjoint problems (equations (\ref{lin2}) and (\ref{Adj2}) respectively), together with the initial and boundary conditions for the particle velocity (equation \ref{eqn:bc_p}). The initial fluid velocity is obtained from a previously saved state. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{fluid_c.eps} \caption{Optimal gain $G_f$ (green) and time of optimal gain $T_f$ (red) for the single phase flow as a function of the Reynolds number. The points correspond to values obtained with our code. The lines correspond to the scalings given in \cite{schmid2012stability}: $G_f=72.40 Re^2 \times 10^{-6}$, $T_f=48.77 Re \times 10^{-3}$. } \label{fig:fl} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{conv_TG.eps} \caption{ $S=10^{-3}$, $f=0.1$, $Re=1000$. \textbf{Left:} Optimal gain as a function of the number of iterations $n$ within the optimisation process, for the single phase flow (blue dots) and two particulate cases, where $G$ is computed either with a fixed value $T=50$ (red dots) or optimised over $T$ (green dots).
\textbf{Right:} Optimal gain as a function of the time step, for single phase (red) and particulate (green) flows. }\label{fig:conv} \end{figure} \subsection{Code validation and convergence} The code was first verified against the literature on the single phase pipe flow, which is recovered by setting the particle mass concentration $f$ to $0$. \cite{bergstrom1992initial} found that the time of the energy peak increases linearly with the Reynolds number, while the optimal gain scales with $Re^2$ for all modes, with $G_f=72.40 Re^2 \times 10^{-6}$ and $T_f=48.77 Re \times 10^{-3}$ \cite{schmid2012stability}. These scalings are recovered with our code, as illustrated in figure \ref{fig:fl}. Second, the growth rate of the leading eigenvalue obtained from a linear stability analysis of the system of equations (\ref{lin1})-(\ref{lin4}) is proportional to the energy decay rate of the linearised DNS (LDNS) at large times, and therefore offers a convenient way to test the long-term evolution of individual modes in the DNS code. Tables \ref{comparison1} and \ref{comparison2} show the leading eigenvalue found with linear stability analysis and with the LDNS, for a single phase and a particulate flow respectively. The normalised error is always below $10^{-3}$, and the difference between the linear stability analysis and linear DNS results is not increased by the addition of particles.\\ Figure \ref{fig:conv} shows the difference between the value of the optimal gain obtained after a given number of iterations and a fully converged value, $G^{(500)}$. The gain $G$ converges as the process is iterated, reaching fully converged values after a sufficient number of iterations in the three cases considered in figure \ref{fig:conv}. The number of iterations needed to fully converge depends on the case: convergence is significantly faster for the single phase flow, where $30$ iterations are typically needed to reach machine precision, whereas for particulate flows this number varies between $80$ and $100$. The number of iterations needed to reach machine precision can also be decreased by choosing initial velocity profiles closer to the ones leading to optimal growth.\\ The optimal gain also converges as the time step decreases, following a power law, as illustrated in figure \ref{fig:conv}. The time step chosen in this study is, unless otherwise specified, $\Delta t =10^{-3}$, a good compromise between accuracy and computational cost. Since asymptotic behaviour is observed for extreme values of $S$, which are therefore less relevant, we use values of $S$ ranging from $10^{-4}$ to $10^{-1}$, the region where the interesting behaviour is observed. We keep $f$ constant at $f=0.1$, as $f$ was not found to significantly impact the results, similarly to what has been observed in the linear stability analysis with the same model \cite{rouquier2018instability}. Reynolds numbers are considered up to $Re=10^4$, as the behaviour shows little change with $Re$ and large values are less relevant within the linear approximation we consider.
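The quantities computed by this machinery can be illustrated on a toy problem. For a linear system $\mathrm{d}\mathbf{u}/\mathrm{d}t = A\mathbf{u}$ with a non-normal matrix $A$, the gain $G(T)$ maximised over initial conditions is the squared largest singular value of the propagator $\Phi = e^{AT}$, and the forward/adjoint loop above reduces to a power iteration on $\Phi^T\Phi$. The sketch below (a stand-in for, not a reproduction of, the pipe-flow solver) computes the envelope of $G(T)$ both ways:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Strongly non-normal, linearly stable matrix: transient growth despite decay.
A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])

def gain_svd(T):
    """Optimal gain at time T from the SVD of the propagator."""
    return np.linalg.svd(expm(A * T), compute_uv=False)[0] ** 2

def gain_adjoint(T, n_iter=100):
    """Same quantity via forward (Phi) / adjoint (Phi^T) sweeps."""
    Phi = expm(A * T)
    u0 = np.random.default_rng(0).standard_normal(2)
    u0 /= np.linalg.norm(u0)
    for _ in range(n_iter):
        uT = Phi @ u0                 # forward sweep to t = T
        g = Phi.T @ uT                # adjoint sweep back to t = 0
        u0 = g / np.linalg.norm(g)    # renormalise: optimal perturbation
    return np.linalg.norm(Phi @ u0) ** 2

Ts = np.linspace(0.1, 5.0, 200)       # envelope: optimise the gain over T
G = np.array([gain_svd(T) for T in Ts])
T_opt = Ts[G.argmax()]
print(f"max gain {G.max():.1f} at T = {T_opt:.2f}")
print(f"adjoint iteration at T_opt: {gain_adjoint(T_opt):.1f}")
\end{verbatim}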
\newpage \begin{center} \begin{tabular}{c c c c c c} \toprule $Re$ & $\alpha$ & $m$ & Eigenvalue solver & LDNS & $\epsilon$\\ \midrule $1000$ & $0$ & $1$ & $-1.4682 \times 10^{-2}$ & $-1.4681 \times 10^{-2}$ & $5.5853 \times 10^{-5}$ \\ $3000$ & $0$ & $1$ & $-4.8940 \times 10^{-3}$ & $-4.8866 \times 10^{-3}$ & $1.5121 \times 10^{-3}$ \\ $5000$ & $0$ & $1$ & $-2.9364 \times 10^{-3}$ & $-2.9344 \times 10^{-3}$ & $6.9658 \times 10^{-4}$ \\ $1000$ & $1$ & $0$ & $-7.0864 \times 10^{-2}$ & $-7.0898 \times 10^{-2}$ & $4.7956 \times 10^{-4}$ \\ $3000$ & $1$ & $0$ & $-4.1276 \times 10^{-2}$ & $-4.1317 \times 10^{-2}$ & $1.0131 \times 10^{-3}$ \\ $5000$ & $1$ & $0$ & $-3.2043 \times 10^{-2}$ & $-3.2087 \times 10^{-2}$ & $1.3604 \times 10^{-3}$ \\ $1000$ & $1$ & $1$ & $-9.0443 \times 10^{-2}$ & $-9.0483 \times 10^{-2}$ & $4.3953 \times 10^{-4}$ \\ $3000$ & $1$ & $1$ & $-5.1973 \times 10^{-2}$ & $-5.2018 \times 10^{-2}$ & $8.7257 \times 10^{-4}$ \\ $5000$ & $1$ & $1$ & $-4.0200 \times 10^{-2}$ & $-4.0246 \times 10^{-2}$ & $1.1504 \times 10^{-3}$ \\ \bottomrule \end{tabular} \end{center} \vspace{-0.5\baselineskip} \captionof{table}{Comparison of long-term decay rates of linearly stable eigenmodes obtained from linear stability analysis (eigenvalue solver) and from our linearised DNS code, for a single phase flow. $\epsilon = \frac{\vert \omega_{lsa}- \omega_{LDNS} \vert}{\vert\omega_{LDNS}\vert} $, $\Delta t = 10^{-3} $.} \label{comparison1} \vspace{1\baselineskip} \begin{center} \begin{tabular}{c c c c c c} \toprule $S$ & $\alpha$ & $m$ & Eigenvalue solver & LDNS & $\epsilon$\\ \midrule $10^{-4}$ & $0$ & $1$ & $-1.4526 \times 10^{-2}$ & $-1.4526 \times 10^{-2}$ & $5.5075 \times 10^{-6} $ \\ $10^{-3}$ & $0$ & $1$ & $-1.4536 \times 10^{-2}$ & $-1.4523 \times 10^{-2}$ & $8.3513 \times 10^{-4} $ \\ $10^{-2}$ & $0$ & $1$ & $-1.4513 \times 10^{-2}$ & $-1.4501 \times 10^{-2}$ & $8.7025 \times 10^{-4} $ \\ $10^{-1}$ & $0$ & $1$ & $-8.4935 \times 10^{-3}$ & $-8.4931 \times 10^{-3}$ & $4.8274 \times 10^{-5} $ \\ $10^{-4}$ & $1$ & $0$ & $-8.9988 \times 10^{-2}$ & $-9.0029 \times 10^{-2}$ & $4.5108 \times 10^{-4} $ \\ $10^{-3}$ & $1$ & $0$ & $-8.9981 \times 10^{-2}$ & $-8.9977 \times 10^{-2}$ & $4.7790 \times 10^{-5} $ \\ $10^{-2}$ & $1$ & $0$ & $-8.9791 \times 10^{-2}$ & $-8.9855 \times 10^{-2}$ & $7.5478 \times 10^{-4} $ \\ \bottomrule \end{tabular} \end{center} \vspace{-0.5\baselineskip} \captionof{table}{Comparison of long-term decay rates of linearly stable eigenmodes obtained from linear stability analysis (eigenvalue solver) and from our linearised DNS code, for particulate flows. $\epsilon = \frac{\vert \omega_{lsa}-\omega_{LDNS} \vert}{\vert\omega_{LDNS}\vert}$, $Re=1000$, $f=0.01$, $\Delta t = 10^{-3} $. } \label{comparison2} \vspace{2\baselineskip} \section{Growth envelope}\label{sec:envelope} \begin{figure} \centering \includegraphics[width=1\textwidth]{env2_3.eps} \caption{Maximal growth as a function of the optimisation time $T$, with $Re=1500 $. From left to right: single phase flow; uniform particle distribution with $S=10^{-3} $ and $f=0.1 $; Gaussian particle distribution with $r_d = 0.65$, $\sigma=0.104$, $S=10^{-3}$ and $f=0.1 $. Wavenumbers ($\alpha,m$) = ($1,1$) in red, ($\alpha,m$) = ($0,1$) in green. } \label{env1} \end{figure} The value of the maximum transient growth depends on the chosen target time. While we are mostly interested in optimising over $T$, it is still instructive to see how $G$ depends on $T$.
Figure \ref{env1} shows the growth envelope (from left to right) for a single phase flow and two examples of particulate flows, with homogeneous and nonhomogeneous particle distributions. The two modes showing the largest growth, ($\alpha,m$) = ($0,1$) and ($\alpha,m$) = ($1,1$), are plotted independently. The envelopes have similar shapes in all three cases. We observe two competing mechanisms for growth: at small times, below $T \approx20$ in the single phase case and $T\approx25-30$ for the particulate flows shown in figure \ref{env1}, the mode producing the most growth is ($\alpha,m$) = ($1,1$). The growth produced by this mode quickly decreases as the time increases. At larger values of $T$, the mode producing the most growth is ($\alpha,m$) = ($0,1$). For single phase pipe flows, ($\alpha,m$) = ($0,1$) is the mode that yields the maximal gain when optimising over the target time $T$ \cite{bergstrom1992initial}. This is also the case for particulate flows, whether the particle distribution is homogeneous or not. This result, together with the similar shapes of the envelopes, suggests that the mechanisms producing growth are the same for single phase and particulate flows. The mode ($\alpha,m$) = ($1,1$) is the most affected by the addition of particles, especially for nonhomogeneous particle distributions, as illustrated in figure \ref{env1}. However, ($\alpha,m$) = ($0,1$) remains the mode for which the gain is the strongest. From now on, $G$ is optimised over $T$ when studying the optimal gain, and the mass fraction is kept constant at $f=0.1$. \section{Homogeneous particle distribution}\label{sec:homog} \begin{figure} \centering \includegraphics[width=1\textwidth]{growth_uni.eps} \caption{Ratio of growth between particulate and single phase flows (at the same $Re$) as a function of $S$ for $f = 0.1$. \textbf{Left:} Ratio of optimal gains. \textbf{Right:} Ratio of the times of maximum growth. $Re = 500$ (red), $1000$ (green), $2000$ (dotted blue), $3000$ (purple), $5000$ (blue dashed).} \label{fig:uniG} \end{figure} We first examine the effect of adding homogeneously distributed particles to the flow. In order to illustrate the effect of particles, we define the ratio between the optimal gain for the particulate flow with a given set of parameters and that of the single phase flow at the same Reynolds number: \begin{equation} G' = \frac{G_p(Re , S,f)}{G_f(Re)}, \end{equation} where $G_p$ and $G_f$ are the optimal gains for particulate and single phase flows respectively, both maximised over all values of $T$. A similar ratio is defined between the times of optimal growth for particulate and single phase flows, \begin{equation} T' = \frac{T_p(Re ,S,f)}{T_f(Re)}, \end{equation} where $T_p$ and $T_f$ are the target times associated with $G_p$ and $G_f$. \subsection{Effect of the Stokes number on the gain} We first consider variations of these quantities with the Stokes number. Figure \ref{fig:uniG} shows $G'$ and $T'$ as functions of $S$, for different values of $Re$.\\ The addition of particles increases the optimal gain for all values of $S$. The curves are non-monotonic, with $G'$ increasing until it reaches a peak, denoted $G'_{peak}$, at an associated Stokes number $S_G$. For large values of $S$, the ratio $G'$ decreases towards $1$: in the limit $S \rightarrow \infty$, the particles are so heavy that they are effectively decoupled from the flow and have no effect on it. When $S \to 0$, $G' \approx 1.21 $.
The difference between single phase and particulate flows in this limit is due to the modification of the average density of the flow caused by the particles. With $f=0.1 $, the modified Reynolds number is $Re' = (1+f) Re= 1.1 Re $. Since, for the single phase pipe flow, $G_f \propto Re^2 $, we obtain $G_p \propto (1+f)^2 Re^2 = 1.21 \, Re^2$, as observed in figure \ref{fig:uniG}. There is thus an optimal Stokes number for which the influence of homogeneously distributed particles on the optimal gain is greatest. A similar behaviour is observed for $T'$. For all values of $Re$ and $S$ considered, the growth is delayed for particulate flows compared to single phase flows. As $S \to 0$, the time at which the growth is maximised increases by $10 \%$ compared to the single phase flow. This corresponds to the time of optimal growth for the modified Reynolds number $Re' = (1+f) Re$ since, as discussed in the previous section, the time of maximum growth increases linearly with $Re$. Similarly, a peak of the time ratio, $T'_{peak}$, occurs at a Stokes number $S_T$. The time ratio then decreases as the Stokes number continues to increase, in a similar fashion to the ratio of gains. For all Reynolds numbers considered and $f=0.1$, the peak Stokes number is around $S_T = 5 \times 10^{-2}$. \subsection{Effect of the Reynolds number} The Reynolds number has little influence on $G'$, as the envelopes have a very similar shape when $Re$ is varied. For all $Re$ considered, the curves of figure \ref{fig:uniG} exhibit a peak at approximately the same Stokes number, $S_G \simeq 2.5 \times 10^{-2}$. The peak value $G'_{peak}$ shows little change, varying by only $0.25\%$ for $Re$ ranging from $500$ to $5000$. Since $G'_{peak}$ is almost constant across Reynolds numbers and the optimal gain for the single phase flow, $G_f(Re)$, scales with $Re^2$ \citep{bergstrom1992initial}, it follows that the optimal gain for particulate flows optimised over $S$ also scales with $Re^2$. Similarly, the ratio of delays $T'_{peak}$ varies little with the Reynolds number, with variations just under $1\%$ for $Re$ ranging from $500$ to $5000$. Moreover, $T_f$ scales linearly with the Reynolds number; therefore $T_p$ optimised over $S$ scales linearly with the Reynolds number as well. \section{Inhomogeneous particle distribution}\label{sec:inhomog} Particles tend to cluster in laminar pipe flows \cite{matas2003transition}, so that the assumption of homogeneously distributed particles is not always realistic. Moreover, allowing for an inhomogeneous distribution dramatically increases the effect of the solid phase on the linear stability \cite{rouquier2018instability}. \begin{figure} \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\textwidth]{growth_rd1000.eps} \caption{$Re=1000$ and $r_d=0.7$. } \label{fig:g1} \end{subfigure}\hfill \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\textwidth]{growth_rd03.eps} \caption{$Re=500$ and $r_d=0.3$. } \label{fig:g2} \end{subfigure} \caption{Ratio of optimal gain $G'$ (left) and time of optimal gain $T'$ (right) as a function of $S$ for $f = 0.1$ in the case of a Gaussian particle distribution.
Uniform distribution (red), $\sigma=0.15$ (green), $\sigma=0.12$ (dotted blue), $\sigma=0.10$ (purple).} \label{fig:growth2} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{growth_sd.eps} \caption{Ratio of growth between particulate and single phase flows as a function of $S$ for $f = 0.1$ and $Re=1000$, in the case of a Gaussian particle distribution with $\sigma=0.1$. \textbf{Left:} Optimal gain. \textbf{Right:} Time of optimal gain. Uniform distribution (red), $r_d=0.3$ (green), $r_d=0.5$ (dotted blue), $r_d=0.6$ (purple), $r_d=0.7$ (dashed blue), $r_d=0.8$ (yellow).} \label{grsd} \end{figure} \subsection{Influence of the distribution standard deviation} Figure \ref{fig:growth2} shows $G'$ and $T'$ for varying values of $\sigma$, centred around $r_d=0.7$ (figure \ref{fig:g1}) and $r_d=0.3$ (figure \ref{fig:g2}). The overall shape of the curves is the same as in the case of a homogeneous particle distribution, but the effect of the solid phase on the gain is significantly stronger, and it varies significantly with $\sigma$. The more concentrated the particles, \emph{i.e.} the smaller $\sigma$, the larger both $G'$ and $T'$ are, as illustrated in figure \ref{fig:growth2}. Varying $\sigma$ from $0.15$ to $0.10$, $G'_{peak}$ increases by $ 24\%$ and $T'_{peak}$ by $ 18\%$ for $r_d=0.7$ (table \ref{tab:inhomog}).\\ The effect of $\sigma$ is similar for $r_d=0.3$, as seen in figure \ref{fig:g2}. However, the values of $G'_{peak}$ and $T'_{peak}$ are noticeably smaller for equivalent values of $\sigma$. This indicates that the position of the particles also determines the amount of transient growth. Nevertheless, both $G'$ and $T'$ still tend towards $1$ as $S \to \infty$ in all cases. Compared to the homogeneous particle distribution, the value of $S$ yielding the maximal gain shifts to a larger value for $r_d=0.7$, and the time at which the optimal growth occurs is delayed as well. On the other hand, these effects are reversed for $r_d=0.3$. Moreover, while changing $\sigma$ affects the growth ratio, it has little effect on the values of $S_G$ and $S_T$ in all cases observed. \begin{center} \begin{tabular}{c c c c c c} \toprule $r_d$ & $\sigma$ & $G'_{peak}$ & $T'_{peak}$ & $S_G$ & $S_T$\\ \midrule \multicolumn{2}{l}{Homogeneous distribution} & $ 1.30$ & $1.20$ & $2.5 \times 10^{-2}$ & $5 \times 10^{-2} $ \\ $0.3$ & $0.15$ & $1.49$ & $1.29$ & $1.7 \times 10^{-2}$ & $ 3.8 \times 10^{-2} $ \\ $0.3$ & $0.10$ & $1.63$ & $1.35$ & $2.0 \times 10^{-2}$ & $ 4.3 \times 10^{-2} $ \\ $0.6$ & $0.10$ & $3.50$ & $2.20$ & $4.2 \times 10^{-2}$ & $ 8.0 \times 10^{-2} $ \\ $0.7$ & $0.15$ & $1.90$ & $1.65$ & $4.0\times 10^{-2}$ & $ 7.5 \times 10^{-2} $ \\ $0.7$ & $0.10$ & $2.35$ & $1.95$ & $4.9 \times 10^{-2}$ & $ 9 \times 10^{-2} $ \\ \bottomrule \end{tabular} \end{center} \vspace{-0.5\baselineskip} \captionof{table}{Values of interest for varying particle distributions.} \label{tab:inhomog} \subsection{Influence of the radial distribution of particles} Figure \ref{grsd} shows the ratios of gains as a function of $S$ for several average radii $r_d$ of the particle distribution, with $Re=1000$ and $\sigma=0.1$. $G'_{peak}$ and $T'_{peak}$ show strong variations with $r_d$.
Indeed, $G'_{peak}$ ranges from $1.95$ to $3.50$ while $T'_{peak}$ ranges from $1.40$ to $2.20$.\\ The effect of particles on the flow is strongest for $r_d$ in the range $0.5 - 0.6$, both for the ratio of maximum growths and for the ratio of optimal times, as shown in figure \ref{grsd}. This value is relatively close to the Segr\'{e}-Silberberg radius where particles are known to naturally cluster \cite{segre1962behaviour}, albeit a little closer to the pipe centre. Unlike the Reynolds number, $r_d$ has a very strong influence on the optimal Stokes number. Both $S_G$ and $S_T$ are larger than their counterparts in the case of a uniform particle distribution for all $S$, with the exception of $r_d=0.3$, where this is only the case for $S\lesssim0.07$. \section{Topology of the optimal perturbations}\label{sec:topology} \begin{figure} \centering \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1\textwidth]{fluid14.eps} \caption{Single phase flow, $T = 14$, $\mathbf{u_0}$ } \label{fT14} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1\textwidth]{fluid90.eps} \caption{Single phase flow, $T = 90$, $\mathbf{u_0}$} \label{fT90} \end{subfigure} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1.0\textwidth]{gauss14.eps} \caption{Particulate flow, $T = 14$, $\mathbf{u_0}$ } \label{gT14} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1.0\textwidth]{gauss90.eps} \caption{Particulate flow, $T = 90$, $\mathbf{u_0}$ } \label{gT90} \end{subfigure} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1.0\textwidth]{gauss14p.eps} \caption{Particulate flow, $T = 14$, $\mathbf{u_{p0}}$ } \label{gpT14} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=1.0\textwidth]{gauss90p.eps} \caption{Particulate flow, $T = 90$, $\mathbf{u_{p0}}$ } \label{gpT90} \end{subfigure} \caption{Velocity contours of the optimal perturbation of a single phase flow and of a particulate flow with a Gaussian particle distribution, $Re=1500$, $f=0.1$, $S=10^{-3}$, $r_d=0.65$, $\sigma=0.104$.} \end{figure} In this section we study the topology of the optimal velocity fields. First, we consider the velocity fields at $t=0$, subsequently called the optimal perturbation and denoted $\mathbf{u_{0}}$ for the fluid and $\mathbf{u_{p0}}$ for the particle velocity. Second, we consider the velocity fields at $t=T$, referred to as the velocity peak and denoted $\mathbf{u_{T}}$ for the fluid and $\mathbf{u_{pT}}$ for the particle velocity. Two target times are shown here: $T = 14$, for which the mode ($\alpha,m$) = ($1,1$) is dominant, and $T=90$, for which ($\alpha,m$) = ($0,1$) is dominant. The radial sections of the optimal perturbations are very different depending on whether the dominant mode is ($\alpha,m$) = ($1,1$) or ($\alpha,m$) = ($0,1$). Figures \ref{fT14}-\ref{gpT90} show the contours of streamwise velocity and sectional fluid velocity vectors, for a single phase flow and a particulate flow with a nonhomogeneous distribution. The optimal perturbation contours are weakly affected by the addition of particles in this case. For $T=14$ (figures \ref{fT14} and \ref{gT14}), the profile of the optimal perturbation shows two symmetric rolls in the spanwise direction. The streamwise velocity has a peak in the shape of an antisymmetric annulus between $r=0.5$ and $r=0.7$. Streamwise and spanwise velocities are of the same order of magnitude in both cases.
For a larger target time $T$, the streamwise-independent mode dominates, as illustrated in figures \ref{fT90} and \ref{gT90}. In the spanwise direction we observe two rolls that are distinctive of the usual single phase transient growth. Figures \ref{gpT14} and \ref{gpT90} show the contours of streamwise velocity and sectional particle velocity vectors: both the streamwise and sectional particle velocities are strongest in the region where the particle concentration is the highest. \begin{figure} \centering \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{fluid14f.eps} \caption{Single phase flow, $T = 14$, $\mathbf{u_T}$} \label{fT14f} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{fluid90f.eps} \caption{Single phase flow, $T = 90$, $\mathbf{u_T}$} \label{fT90f} \end{subfigure} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{part14pf.eps} \caption{Particulate flow, $T = 14$, $\mathbf{u_T}$ } \label{gT14f} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{part90pf.eps} \caption{Particulate flow, $T = 90$, $\mathbf{u_T}$ } \label{gT90f} \end{subfigure} \centering \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{gauss14pf.eps} \caption{Particulate flow, $T = 14$, $\mathbf{u_{pT}}$ } \label{gTp14f} \end{subfigure}\hfill \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.9\textwidth]{gauss90pf.eps} \caption{Particulate flow, $T = 90$, $\mathbf{u_{pT}}$ } \label{gpT90f} \end{subfigure} \caption{Peak velocity contours of a single phase flow and a particulate flow with a Gaussian particle distribution at $t=T$, $Re=1500$, $f=0.1$, $S=10^{-3}$, $r_d=0.65$, $\sigma=0.104$.} \label{fig:peak} \end{figure} Examples of peak velocity contours are given in figure \ref{fig:peak}. In all cases considered the streamwise velocity is larger than the spanwise velocity, for both $\mathbf{u_{T}}$ and $\mathbf{u_{pT}}$. The effect is even more pronounced for $T=90$. The spanwise component of $\mathbf{u_T}(T=14)$ is similar to that of $\mathbf{u_0}(T=14)$, but the streamwise component is different, with two opposed currents, one close to the centre and the other around it. In the case $T=90$ the streamwise velocity takes the form of two antisymmetric rolls. Figure \ref{fig:peak} shows that the fluid and particle profiles are, in the case of the peak velocity, almost identical, due to the strong coupling between the fluid and solid phases.\\ In summary, the addition of particles does not alter the transient growth mechanisms nor the topology of the optimal modes, even if the particles are distributed inhomogeneously. Particles tend to be accelerated by the flow; however, their effect on the growth itself is significant when they are inhomogeneously distributed. \newpage \section{Conclusion and discussion} This paper presented a study of the linear transient growth of particulate pipe flow through a simple two-fluid, two-way model for the solid and the liquid phases. The addition of particles has been found to increase the amount of transient growth regardless of Stokes number. However, the modes that are responsible for the transient growth remain the same as those in flows without particles. The growth itself still varies with the Stokes number, with a sweet spot for which it is maximised. Interestingly, the corresponding Stokes number is independent of the Reynolds number.
Moreover, the ratio of growths optimised over time for the particulate to non-particulate flow, $G'_{peak}$, is also independent of the Reynolds number, implying that the growth for the particulate flows scales as $Re^2$ as it does for the single phase flow \cite{bergstrom1993optimal}. We also showed that the solid phase has a delaying effect on the transient growth, again regardless of the Stokes number considered. We observe that there is a value of $S$ for which the delay is maximised; this Stokes number is independent of the Reynolds number as well. Here too, the ratio of times of optimal growths for the particulate to the non-particulate flow, $T'_{peak}$, is independent of the Reynolds number. This implies that the time for which growth is optimised scales as $Re$ as it does for the single phase flow.\\ The most important result is that allowing for particles to be inhomogeneously distributed can drastically increase their impact on the transient growth, which is increased by more than $200\%$ depending on their size and the shape of the spatial distribution. The way in which the particles are distributed is important too. We have considered particles in a Gaussian distribution of standard deviation $\sigma$, located in an annulus at radius $r_d$. The transient growth increases monotonically as $\sigma$ decreases, \emph{i.e.} when the particles are more localised. The effect of the solid phase on the transient growth was also found to be weaker when the particles are localised close to the wall ($r_d$ close to 1) or at the pipe centre ($r_d$ close to 0), and strongest in the intermediate region ($r_d = 0.6-0.7$). This region seems to play a particularly important role both in the laminar state and in the process of transition to turbulence: not only do particles tend to naturally cluster there in the laminar state \citep{segre1962behaviour,matas2004inertial}, but particulate pipe flows have been found linearly unstable when particles of intermediate size are added in that region \cite{rouquier2018instability}. This raises the question of whether the actual pathway to turbulence is indeed sensitive to particles being present in that region, a question that could be answered in further analysis including fully nonlinear effects. Secondly, the question of the robustness of the model is also crucial: further studies with more physically accurate models for the solid phase and its interaction with the fluid phase are needed to confirm the nature of the role played by the mechanisms identified in this work.\\ \textit{AR is supported by TUV-NEL. CCTP is partially supported by EPSRC grant No. EP/P021352/1. AP acknowledges support from the Royal Society under the Wolfson Research Merit Award Scheme (Grant WM140032).} \bibliographystyle{vancouver}
\section{Introduction} The LIGO-Virgo collaboration has recently released a catalogue of compact object mergers observed via gravitational wave (GW) emission. The catalogue includes ten black hole (BH) binaries and one neutron star (NS) binary \citep{ligo2018}. From the first two observing runs, the LIGO-Virgo collaboration inferred a merger rate of $110$--$3840$ Gpc$^{-3}$ yr$^{-1}$ for binary NSs and a merger rate of $9.7$--$101$ Gpc$^{-3}$ yr$^{-1}$ for binary BHs. As the detector sensitivity improves and new instruments are developed, hundreds of merging binary signals are expected to be observed in the upcoming years, and the modeling of astrophysical channels that lead to the formation of BH and NS binaries has become of crucial importance. Several different astrophysical channels have been proposed to form merging compact objects. Possibilities include isolated binary evolution through a common envelope phase \citep{bel16b,giac2018,kruc2018} or chemically homogeneous evolution \citep{mand16,march16}, GW capture events in galactic nuclei \citep{olea09,rass2019}, mergers in star clusters \citep{askar17,baner18,frak18,rod18}, Kozai-Lidov (KL) mergers of binaries in galactic nuclei \citep{antoper12,petr17,fragrish2018,grish18}, in stellar triple \citep{ant17,sil17,arc2018} and quadruple systems \citep{fragk2019,liu2019}, and mergers in active galactic nuclei accretion disks \citep{bart17,sto17}. While typically each model accounts for roughly the same rate of the order of a $\sim\ \mathrm{few}$ Gpc$^{-3}$ yr$^{-1}$, the statistical contribution of different astrophysical channels can hopefully be disentangled using the distributions of their masses, spins, eccentricity and redshift \citep[see e.g.][]{olea16,gondan2018,zevin18}. Generally, theoretical predictions can match the rate inferred by LIGO for BHs, while the high rate estimated for NS-NS inferred from GW170817 can be explained only by assuming rather extreme prescriptions about natal kicks and common envelope in binary systems. Interestingly, BH-NS mergers have not been observed as of yet and LIGO has only set a $90\%$ upper limit of $610$ Gpc$^{-3}$ yr$^{-1}$ on the merger rate. The formation of such binaries is intriguing. The most natural formation channel is isolated binary evolution \citep{giac2018,kruc2018}. In star clusters, mass segregation and the strong heating by gravitational BH scattering prevent the NSs from forming BH-NS binaries \citep{frag2018}. As a result, BH-NS binaries are expected to be even rarer than NS-NS binaries in star clusters. Recent simulations by \citet{ye2019} have shown that only a very few BH-NS binaries are formed even in massive clusters. Another viable scenario is the dynamical formation in galactic nuclei, but aside from \citet{fragrish2018} this scenario has not been studied in detail. Bound stellar multiples are common. Surveys of massive stars, which are the progenitors of NSs and BHs, have shown that more than $\sim 50$\% of them have at least one stellar companion and more than $\sim 15$\% have at least two \citep{duq91,ragh10,sa2013AA,duns2015,sana2017,jim2019}. Most previous studies on bound multiples have exclusively focused on determining the BH-BH merger rate from isolated bound triples \citep{ant17,sil17} or quadruples \citep{fragk2019,liu2019}. In this paper, we study for the first time the dynamical evolution of triples comprised of an inner BH-NS binary by means of high-precision $N$-body simulations, including Post-Newtonian (PN) terms up to 2.5PN order and GW emission.
We denote the BH and NS masses in the inner binary as $\mbh$ and $\mns$, respectively, and the mass of the third companion as $m_3$. We start from the main-sequence (MS) progenitors of the compact objects and model the supernova (SN) events that lead to the formation of BHs and NSs. We adopt different prescriptions for the natal kick velocities that are imparted by SN events. We quantify how the probability of merger depends on the initial conditions and determine the parameter distribution of merging systems relative to the initial distributions, showing that BH-NS mergers in triples predict a high merger rate only if low or null natal velocity kicks are assumed. Most of these mergers have high eccentricity ($\gtrsim 0.1$) in the LIGO frequency band. The paper is organized as follows. In Section~\ref{sect:supern}, we discuss the SN mechanism in triple stars. In Section~\ref{sect:lk}, we discuss the KL mechanism in triple systems. In Section~\ref{sect:results}, we present our numerical methods to determine the rate of BH-NS mergers in triples, and discuss the parameters of merging systems. Finally, in Section~\ref{sect:conc}, we discuss the implications of our findings and draw our conclusions. \section{Supernovae in triples} \label{sect:supern} \begin{table*} \caption{Model parameters: name, dispersion of the BH kick-velocity distribution ($\sigbh$), dispersion of the NS kick-velocity distribution ($\signs$), eccentricity distribution ($f(e)$), maximum outer semi-major axis of the triple ($\amax$), fraction of stable triple systems after SNe ($f_{\rm stable}$), fraction of stable systems that merge from the $N$-body simulations ($f_{\rm merge}$).} \centering \begin{tabular}{lcccccc} \hline Name & $\sigbh$ ($\kms$) & $\signs$ ($\kms$) & $f(e)$ & $\amax$ (AU) & $f_{\rm stable}$ & $f_{\rm merge}$\\ \hline\hline A1 & $\signs\times m_\mathrm{NS}/m_\mathrm{BH}$ & $260$ & uniform & $2000$ & $1.1\times 10^{-6}$ & $0.11$\\ A2 & $0$ & $0$ & uniform & $2000$ & $1.6\times 10^{-2}$ & $0.08$\\ A3 & $\signs\times m_\mathrm{NS}/m_\mathrm{BH}$ & $100$ & uniform & $2000$ & $3.6\times 10^{-5}$ & $0.11$\\ B1 & $\signs\times m_\mathrm{NS}/m_\mathrm{BH}$ & $260$ & thermal & $2000$ & $1.6\times 10^{-6}$ & $0.09$\\ C1 & $\signs\times m_\mathrm{NS}/m_\mathrm{BH}$ & $260$ & uniform & $3000$ & $0.9\times 10^{-6}$ & $0.10$\\ C2 & $\signs\times m_\mathrm{NS}/m_\mathrm{BH}$ & $260$ & uniform & $5000$ & $0.7\times 10^{-6}$ & $0.10$\\ \hline \end{tabular} \label{tab:models} \end{table*} We consider a hierarchical triple system that consists of an inner binary of mass $m_{\rm in}=m_1+m_2$ and a third body of mass $m_3$ that orbits the inner binary \citep{pijloo2012,toonen2016,lu2019}. The triple can be described in terms of the Keplerian elements of the inner orbit, describing the relative motion of $m_1$ and $m_2$, and of the outer orbit, describing the relative motion of $m_3$ around the centre of mass of the inner binary. The semi-major axis and eccentricity of the inner orbit are $\ain$ and $\ein$, respectively, while the semi-major axis and eccentricity of the outer orbit are $\aout$ and $\eout$, respectively. The inner and outer orbital plane have initial mutual inclination $i_0$. When the primary star undergoes an SN event, we assume that it takes place instantaneously, i.e. on a time-scale shorter than the orbital period, with an instantaneous removal of mass. Under this assumption, the position of the body that undergoes the impulsive SN event does not change.
We ignore the SN-shell impact on the companion stars. As a consequence of the mass loss, a kick is imparted to the center of mass of the system \citep{bla1961}. Moreover, the system receives a natal kick due to recoil from an asymmetric supernova explosion. We assume that the velocity kick is drawn from a Maxwellian distribution \begin{equation} p(\vk)\propto \vk^2 e^{-\vk^2/\sigma^2}\ , \label{eqn:vkick} \end{equation} with a velocity dispersion $\sigma$. We consider first the SN event of the primary star in the inner binary. Before the SN takes place, the inner binary consists of two stars with masses $m_1$ and $m_2$, respectively, a relative velocity, $v=|{\bf{v}}|$, and a separation distance, $r=|{\bf{r}}|$. Conservation of the orbital energy implies \begin{equation} |{\bf{v}}|^2=\mu\left(\frac{2}{r}-\frac{1}{\ain}\right)\ , \label{eqn:vcons} \end{equation} with $\mu=G(m_1+m_2)$, while the specific relative angular momentum $\bf{h}$ is related to the orbital parameters as \begin{equation} |{\bf{h}}|^2=|{\bf{r}}\times {\bf{v}}|^2=\mu \ain(1-e_{\mathrm{in}}^2)\ . \label{eqn:hcons} \end{equation} After the SN event, the orbital semi-major axis and eccentricity change as a consequence of the mass loss $\Delta m$ and a natal kick velocity $\vk$. The total mass of the binary decreases to $m_{1,n}+m_2$, where $m_{1,n}=m_1-\Delta m$, while the relative velocity will become ${\bf{v_n}}={\bf{v}}+{\bf{\vk}}$. Since ${\bf{r_{\rm n}}}={\bf{r}}$, the new semi-major axis can be computed from Eq.~\ref{eqn:vcons}, \begin{equation} a_{\rm in,n}=\left(\frac{2}{r}-\frac{v_n^2}{\mu_{\rm in,n}}\right)^{-1}\ , \end{equation} where $\mu_{\rm in,n}=G(m_{1,n}+m_2)$, while the new eccentricity can be computed from Eq.~\ref{eqn:hcons}, \begin{equation} e_{\rm in,n}=\left(1-\frac{|{\bf{r}}\times {\bf{v_n}}|^2}{\mu_{\rm in,n} a_{\rm in,n}}\right)^{1/2}\ . \end{equation} Due to the SN of the primary in the inner binary, an effective kick ${\bf{V_{\rm cm}}}$ is imparted to its center of mass. As a consequence, its position ${\bf{R_0}}$ changes by ${\bf{\Delta R}}$ to ${\bf{R}}={\bf{R_0}}+{\bf{\Delta R}}$, and the outer semi-major axis $\aout$ and eccentricity $\eout$ change accordingly. To compute the change with respect to the pre-SN values, we again use Eq.~\ref{eqn:vcons} and Eq.~\ref{eqn:hcons}, now written for the outer orbit, whose relative velocity ${\bf{V_{\rm 3}}}$ receives the effective velocity kick ${\bf{V_{\rm cm}}}$, \begin{equation} a_{\rm out,n}=\left(\frac{2}{R}-\frac{V_n^2}{\mu_{\rm out,n}}\right)^{-1}\ , \end{equation} where $\mu_{\rm out,n}=G(m_{1,n}+m_2+m_3)$ and ${\bf{V_{\rm n}}}={\bf{V_{\rm 3}}}+{\bf{V_{\rm cm}}}$, \begin{equation} e_{\rm out,n}=\left(1-\frac{|{\bf{R}}\times {\bf{V_n}}|^2}{\mu_{\rm out,n} a_{\rm out,n}}\right)^{1/2}\ . \end{equation} Finally, the inclination of the outer binary orbital plane with respect to the inner binary orbital plane is tilted as a consequence of the kick. The new relative inclination $i_{\rm n}$ can be computed from, \begin{equation} \sin i_{\rm n}=\frac{|{\bf{R_0}}|}{|{\bf{R}}|} \sin i_0\ . \end{equation} The same prescriptions can be applied to the other two stars in the triple. After every SN event, if either $\ain\le 0$ or $\aout\le 0$ the triple is considered to have had a merger, while if either $\ein\ge 1$ or $\eout\ge 1$ the triple is considered to be unbound.
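This orbital-element update maps directly onto a few lines of code. Below is a minimal sketch for the inner orbit (our own helper, SI units assumed); the outer orbit is treated identically using ${\bf{V_{\rm 3}}}$ and the effective kick ${\bf{V_{\rm cm}}}$:
\begin{verbatim}
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def post_sn_orbit(r_vec, v_vec, m1, m2, dm, v_kick):
    # Positions are unchanged in an impulsive SN; only the velocity
    # and the total mass change.
    mu_n = G * ((m1 - dm) + m2)
    v_n = v_vec + v_kick
    r = np.linalg.norm(r_vec)
    a_n = 1.0 / (2.0 / r - np.dot(v_n, v_n) / mu_n)       # from Eq. (2)
    h_n = np.cross(r_vec, v_n)             # specific angular momentum
    e_n = np.sqrt(1.0 - np.dot(h_n, h_n) / (mu_n * a_n))  # from Eq. (3)
    return a_n, e_n   # a_n <= 0 or e_n >= 1 flags a disrupted orbit
\end{verbatim}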
\section{Kozai-Lidov mechanism} \label{sect:lk} We consider triple objects where the inner binary is made up of a BH and a NS, of masses $\mbh$ and $\mns$, respectively. A triple system undergoes KL oscillations in eccentricity whenever the initial mutual orbital inclination of the inner and outer orbit is in the window $i\sim 40^\circ$-$140^\circ$ \citep{koz62,lid62}. At the quadrupole order of approximation, the KL oscillations occur on a timescale \citep{nao16}, \begin{equation} T_{\rm KL}=\frac{8}{15\pi}\frac{m_{\rm tot}}{m_{\rm 3}}\frac{P_{\rm out,n}^2}{P_{\rm BHNS}}\left(1-e_{\rm out,n}^2\right)^{3/2}\ , \end{equation} where $m_{\rm 3}$ is the mass of the outer body orbiting the inner BH-NS binary, $m_{\rm tot}$ is the total mass of the triple system, and $P_{{\rm BHNS}}\propto a_{{\rm in,n}}^{3/2}$ and $P_{{\rm out,n}}\propto a_{{\rm out,n}}^{3/2}$ are the orbital periods of the inner BH-NS binary and of the outer binary, respectively. The maximal eccentricity is essentially a function of the initial mutual inclination \begin{equation} e_{\rm in,n}^{\max}=\sqrt{1-\frac{5}{3}\cos^2 i_\mathrm{n}}\ . \label{eqn:emax} \end{equation} Whenever $i_{\rm n}\sim 90^\circ$, the inner binary eccentricity approaches almost unity. If the octupole corrections are taken into account, the inner eccentricity can reach almost unity even if the initial inclination is outside of the $i_{\rm n}\sim 40^\circ$-$140^\circ$ KL range, provided the outer orbit is eccentric \citep{naoz13a,li14}. The large values reached by the eccentricity of the BH-NS binary during the KL cycles make its merger time shorter, since the binary efficiently dissipates energy when $e \sim e_{\rm in,n}^{\max}$ \citep[e.g., see][]{antognini14,fragrish2018}. However, KL oscillations can be suppressed by additional sources of precession, such as tidal bulges or relativistic precession. In our case, the most important effect is general relativistic precession, which can quench the excursions to high eccentricity on a timescale \citep{nao16}, \begin{equation} T_{\rm GR}=\frac{a_{\rm in,n}^{5/2}c^2(1-e_{\rm in,n}^2)}{3G^{3/2}(\mbh+\mns)^{3/2}}\ . \end{equation} In the region of parameter space where $T_{\rm KL}>T_{\rm GR}$, the KL cycles of the BH-NS orbital elements are damped by relativistic effects. \section{N-body simulations} \label{sect:results} \subsection{Initial conditions} The stellar triples in our simulations are initialized as described in what follows. In total, we consider six different models (see Table~\ref{tab:models}). For simplicity, we assume that every star in the mass range $8 \msun$--$20\msun$ will form a NS, while stars in the mass range $20 \msun$--$150\msun$ collapse to a BH. In all our models, we sample the mass $m_1$ of the most massive star in the inner binary from an initial mass function \begin{equation} \frac{dN}{dm} \propto m^{-\beta}\ , \label{eqn:bhmassfunc} \end{equation} in the mass range $20\msun$-$150\msun$, reflecting the progenitor of the BH. In our \textit{fiducial} model, $\beta=2.3$ (canonical \citet{kroupa2001} mass function; first model in Table~\ref{tab:models}). We adopt a flat mass ratio distribution for both the inner binary, $m_2/m_1$, and the outer binary, $m_3/(m_1+m_2)$. This is consistent with observations of massive binary stars, which suggest a nearly flat distribution of the mass ratio \citep{sana12,duch2013,sana2017}.
The mass of the secondary in the inner binary is sampled from the range $8\msun$-$20\msun$, thus it is the progenitor of the NS, while the mass of the third companion is drawn from the range $0.5\msun$-$150\msun$. The distribution of the inner and outer semi-major axis, $\ain$ and $\aout$, respectively, is assumed to be flat in log-space (\"{O}pik's law), roughly consistent with the results of \citet{kob2014}. We set the minimum separation to $10$ AU, and adopt different values for the maximum separation, $\amax=2000$ AU, $3000$ AU and $5000$ AU \citep{sana2014}. For the orbital eccentricities of the inner binary, $\ein$, and outer binary, $\eout$, we assume a flat distribution. We also run one additional model where we consider a thermal distribution of eccentricities, for comparison. The initial mutual inclination $i_0$ between the inner and outer orbit is drawn from an isotropic distribution (i.e. uniform in $\cos i_0$). The other relevant angles are drawn randomly. After sampling the relevant parameters, we check that the initial configuration satisfies the stability criterion of hierarchical triples of \citet{mar01}, \begin{equation} \frac{R_{\rm p}}{a_{\rm in}}\geq 2.8 \left[\left(1+\frac{m_{\rm 3}}{m_1+m_2}\right)\frac{1+\eout}{\sqrt{1-\eout}} \right]^{2/5}\left(1.0-0.3\frac{i_0}{\pi}\right)\ , \label{eqn:stabts} \end{equation} where $R_p=\aout(1-\eout)$ is the pericentre distance of the outer orbit. Given the above set of initial conditions, we let the primary and secondary stars in the inner binary undergo conversion to a BH and a NS, respectively. For simplicity, we assume that every star in the mass range $8 \msun$--$20\msun$ will form a NS of mass $\mns=1.3\msun$, while a star in the mass range $20 \msun$--$150\msun$ collapses to a BH of mass $\mbh=m/3$ \citep{sil17}, i.e. loses two-thirds of its mass. This is consistent with recent theoretical results on pulsational pair instability that limit the maximum BH mass to $\sim 50\msun$ \citep{bel2016b}. We note that the exact value of the mass intervals that lead to the formation of NSs and BHs and the mass lost during the MS phase and the following SN depends on the detailed physics of the star, importantly on the metallicity, stellar winds and rotation. These assumptions could affect the relative frequency of NSs and BHs, and the mass distribution of the latter. The orbital elements of the inner and outer orbit are updated as appropriate following the procedure discussed in Section~\ref{sect:supern}, to account both for mass loss and natal kicks. The distribution of natal kick velocities of BHs and NSs is unknown. We implement momentum-conserving kicks, in which we assume that the momentum imparted to a BH is the same as the momentum given to a NS. As a consequence, the kick velocities for the BHs will be reduced with respect to those of NSs by a factor of $\mbh/\mns$. In our fiducial model, we consider a non-zero natal kick velocity for the newly formed BHs and NSs, by adopting Eq.~\ref{eqn:vkick} with $\sigma=\signs=260 \kms$. This is consistent with the distribution deduced by \citet{hobbs2005}. We run an additional model where we adopt $\sigma=\signs=100 \kms$, consistent with the distribution of natal kicks found by \citet{arz2002}. Finally, we adopt a model where no natal kick is imparted during BH and NS formation. For NSs, this would be consistent with the electron-capture supernova process, where the newly-formed NSs would receive no kick at birth or only a very small one due to an asymmetry in neutrino emissions \citep{pod2004}. We note that even in this case, the triple experiences a kick to its center of mass because one of the massive components suddenly loses mass \citep{bla1961}.
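A minimal sketch of how this sampling can be implemented (our own helper functions; the range cuts on $m_2$ and $m_3$ and the stability check of Eq.~\ref{eqn:stabts} are omitted for brevity). Note that drawing each Cartesian component with scale $\sigma/\sqrt{2}$ reproduces the Maxwellian of Eq.~\ref{eqn:vkick}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_imf(m_min, m_max, beta=2.3, n=1):
    """Inverse-transform sampling of dN/dm ~ m^-beta (Eq. 10)."""
    g = 1.0 - beta
    u = rng.uniform(size=n)
    return (m_min**g + u * (m_max**g - m_min**g)) ** (1.0 / g)

def natal_kick(sigma, m_remnant=None, m_ns=1.3):
    """Isotropic natal kick vector [km/s].

    Components are N(0, sigma/sqrt(2)), so the speed follows
    p(v) ~ v^2 exp(-v^2/sigma^2).  For a BH remnant the kick is
    rescaled by m_NS/m_BH (momentum-conserving kicks).
    """
    scale = 1.0 if m_remnant is None else m_ns / m_remnant
    return scale * rng.normal(0.0, sigma / np.sqrt(2.0), size=3)

m1 = sample_imf(20.0, 150.0)[0]               # BH progenitor [Msun]
q_in, q_out = rng.uniform(size=2)             # flat mass ratios
a_in = 10.0 ** rng.uniform(np.log10(10.0), np.log10(2000.0))  # AU (Opik)
v_bh = natal_kick(260.0, m_remnant=m1 / 3.0)  # fiducial sigma = 260 km/s
\end{verbatim}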
If the third companion is also more massive than $8\msun$, we let it undergo an SN event and convert to a compact object. If the triple remains bound, we check again the triple stability criterion of \citet{mar01} with the updated masses and orbital parameters for the inner and outer orbits. Given the above set of initial parameters, we integrate the equations of motion of the three bodies \begin{equation} {\ddot{\textbf{r}}}_i=-G\sum\limits_{j\ne i}\frac{m_j(\textbf{r}_i-\textbf{r}_j)}{\left|\textbf{r}_i-\textbf{r}_j\right|^3}\ , \end{equation} with $i=1$,$2$,$3$, by means of the \textsc{ARCHAIN} code \citep{mik06,mik08}, a fully regularized code able to model the evolution of binaries of arbitrary mass ratios and eccentricities with high accuracy, and that includes PN corrections up to order 2.5PN. We performed $1000$ simulations for each model in Table~\ref{tab:models}. We fix the maximum integration time as \citep{sil17}, \begin{equation} T=\min \left(10^3 \times T_{\rm KL}, 10\ \gyr \right)\ , \label{eqn:tint} \end{equation} where $T_{\rm KL}$ is the triple KL timescale. If the third companion is not a compact object, i.e. $m_3\le 8\msun$, we set the maximum timescale to the minimum of Eq.~\ref{eqn:tint} and its MS lifetime, which is simply parametrised as \citep[e.g.][]{iben91,hurley00,maeder09}, \begin{equation} \tau_{\rm MS} = \max(10\ (m/\msun)^{-2.5}\,{\rm Gyr}, 7\,{\rm Myr})\ . \end{equation} In this situation, we also check if the third star overflows its Roche lobe \citep{egg83}. In such a case, we stop the integration\footnote{We do not model the process that leads to the formation of a white dwarf. If the tertiary becomes a white dwarf and the system remains bound, some of the systems could still merge via KL oscillations.}. \subsection{Results} \begin{figure} \centering \includegraphics[scale=0.55]{incl.pdf} \caption{Inclination ($i_{\rm n}$) distribution of BH-NS binaries that merge in triples. Most of the triples merge when $i_{\rm n}\sim 90^\circ$, where the effect of the KL oscillations is maximal.} \label{fig:incl} \end{figure} A BH-NS binary is expected to be significantly perturbed by the tidal field of the third companion whenever its orbital plane is sufficiently inclined with respect to the outer orbit \citep{lid62,koz62}. According to Eq.~\ref{eqn:emax}, the BH-NS eccentricity reaches almost unity when $i_{\rm n}\sim 90^\circ$. Figure~\ref{fig:incl} shows the probability distribution function (PDF) of the initial binary plane inclination angle $i_{\rm n}$, i.e. the relative inclination after all the SN events in the triple take place (see Sect.~\ref{sect:supern}), for the systems which ended up in a merger. The distributions are shown for $\amax=2000$ AU and different values of $\sigma$. Independently of the mean of the natal kick velocity, the majority of the BH-NS mergers in triples takes place when the inclination approaches $\sim 90^\circ$. In this case, the KL effect is maximal, the eccentricity oscillates up to almost unity, and the BH-NS binaries experience efficient gravitational energy dissipation near the pericentre, ending in a merger.
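To make the inclination dependence explicit, a short sketch (our own helpers) evaluating Eq.~\ref{eqn:emax} and the quadrupole KL timescale:
\begin{verbatim}
import numpy as np

def t_kl(m_tot, m3, p_in, p_out, e_out):
    """Quadrupole KL timescale (same units as the orbital periods)."""
    return (8.0 / (15.0 * np.pi) * (m_tot / m3)
            * p_out**2 / p_in * (1.0 - e_out**2) ** 1.5)

def e_max(i_n):
    """Maximal KL eccentricity, Eq. (11); i_n in radians."""
    val = 1.0 - 5.0 / 3.0 * np.cos(i_n) ** 2
    return np.sqrt(val) if val > 0.0 else 0.0

print(e_max(np.radians(90.0)))  # -> 1.0: mergers cluster here (Fig. 1)
print(e_max(np.radians(50.0)))  # -> ~0.56
\end{verbatim}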
BH-NS systems that merge with low relative inclinations typically have large initial eccentricities. Figure~\ref{fig:semi} illustrates the cumulative distribution function (CDF) of the inner (top) and outer (bottom) semi-major axis of BH-NS binaries in triples that lead to a merger, for different values of $\sigma$. In this figure, we plot $a_{\rm in,n}$ and $a_{\rm out,n}$, which are the inner and outer semi-major axes, respectively, after the SN events in the triple take place (see Sect.~\ref{sect:supern}). The larger the mean natal kick, the smaller the typical semi-major axis. This can be explained by considering that triples with wide orbits are generally unbound by large kick velocities, while they stay bound if the natal kick is not too high. We find that $\sim 50$\% of the systems have $a_{\rm in,n}\lesssim 70$ AU, $\lesssim 35$ AU, $\lesssim 25$ AU for $\sigma=0\kms$, $100\kms$, $260\kms$, respectively, and $\sim 50$\% of the systems have $a_{\rm out,n}\lesssim 2500$ AU, $\lesssim 800$ AU, $\lesssim 500$ AU for $\sigma=0\kms$, $100\kms$, $260\kms$, respectively. Since $T_{\rm KL}\propto a_{\rm out,n}^3/a_{\rm in,n}^{3/2}$, BH-NS systems that merge in models with $\sigma=0\kms$ are expected to typically merge on longer timescales compared to systems with non-null kick velocity, when the KL effect is at work (see Fig.~\ref{fig:tmerge}). \begin{figure} \centering \includegraphics[scale=0.55]{ain.pdf} \includegraphics[scale=0.55]{aout.pdf} \caption{Cumulative distribution function of inner (top) and outer (bottom) semi-major axis of BH-NS binaries in triples that lead to a merger, for different values of $\sigma$.} \label{fig:semi} \end{figure} The typical mean natal kick velocity also affects the distribution of BH masses in BH-NS binaries that merge in triple systems. We illustrate this in Figure~\ref{fig:mass}, where we plot the cumulative distribution function of the total mass (top) and chirp mass (bottom) of BH-NS binaries in triples that lead to a merger. In the case of $\sigma=0\kms$, we find that merging BHs typically have lower masses compared to the models with $\sigma=100\kms$ and $\sigma=260\kms$. In the $\sigma=0\kms$ case, $\sim 50\%$ of the BHs that merge have mass $\lesssim 17 \msun$, while for non-zero kick velocities we find that $\sim 50\%$ of the BHs that merge have mass $\lesssim 35 \msun$. This can be explained by our assumption of momentum-conserving kicks, where higher mass BHs receive, on average, lower velocity kicks and, as a consequence, are more likely to be retained in triples and (eventually) merge. However, other models for the kicks (e.g. scaling with the fraction of matter ejected by the SN event) may lead to different mass distributions.
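For reference, the chirp mass plotted in the bottom panel of Figure~\ref{fig:mass} follows the standard definition; a minimal sketch (our own helper):
\begin{verbatim}
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)^(3/5) / (m1+m2)^(1/5), masses in Msun."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(chirp_mass(17.0, 1.3))  # 17 Msun BH + 1.3 Msun NS -> ~3.6 Msun
\end{verbatim}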
The same holds for the mass of the third companion. In Figure~\ref{fig:mass3}, we show the cumulative distribution function of the tertiary mass, before it undergoes a SN (if any), for the BH-NS binaries in triples that lead to a merger, for different values of $\sigma$. All the third companions have initial mass $m_3\lesssim 100\msun$. Models with $\sigma=100\kms$ and $\sigma=260\kms$ typically predict a more massive tertiary, which will collapse to a BH of mass $m_3/3$, than the case of no natal kicks. Only in the model with $\sigma=0\kms$ are a few of the tertiaries NSs. The fraction of third companions that will not collapse to a compact object is $\lesssim 20\%$. \begin{figure} \centering \includegraphics[scale=0.55]{mbh_sig.pdf} \includegraphics[scale=0.55]{mchirp_sig.pdf} \caption{Cumulative distribution function of total mass (top) and chirp mass (bottom) of BH-NS binaries in triples that lead to a merger, for different values of $\sigma$.} \label{fig:mass} \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{m3_sig.pdf} \caption{Cumulative distribution function of the tertiary mass, before it undergoes a SN (if any), for the BH-NS binaries in triples that lead to a merger, for different values of $\sigma$.} \label{fig:mass3} \end{figure} Hierarchical configurations are expected to have larger eccentricities in the LIGO band ($10$ Hz) than binaries that merge in isolation, as a consequence of the perturbation by the third object and the KL cycles \citep[see e.g.][]{fragrish2018}. For the BH-NS binaries that merge in our simulations, we compute a proxy for the GW frequency, which we take to be the frequency corresponding to the harmonic that gives the maximal emission of GWs \citep{wen03}, \begin{equation} f_{\rm GW}=\frac{\sqrt{G(m_1+m_2)}}{\pi}\frac{(1+e_{\rm in,n})^{1.1954}}{[a_{\rm in,n}(1-e_{\rm in,n}^2)]^{1.5}}\ . \end{equation} In Figure~\ref{fig:ecc}, we illustrate the distribution of eccentricities at the moment the BH-NS binaries enter the LIGO frequency band for mergers produced by triples, for $\amax=2000$ AU and different $\sigma$'s (top panel), and for $\sigma=260\kms$ and different $\amax$'s (bottom panel). We also plot the minimum $e_{\rm 10Hz}=0.081$ above which the LIGO/Virgo/KAGRA network may distinguish eccentric sources from circular sources \citep{gond2019}. As expected for hierarchical configurations, a large fraction of BH-NS binaries formed in triples have a significant eccentricity in the LIGO band. On the other hand, BH-NS binaries that merge in isolation are essentially circular ($e\sim 10^{-8}$-$10^{-7}$) when they enter the LIGO frequency band. Thus, highly-eccentric mergers might be an imprint of BH-NS binaries that merge through this channel. A similar signature could be found in BH-NS binaries that merge in galactic nuclei in proximity to a supermassive black hole \citep{fragrish2018}, in mergers that follow from the GW capture scenario in clusters \citep{sam2018,sam2018b}, from hierarchical triples \citep{ant17} and quadruples \citep{fragk2019}, and from BH binaries orbiting intermediate-mass black holes in star clusters \citep{fragbr2019}. \begin{figure} \centering \includegraphics[scale=0.55]{ecc.pdf} \includegraphics[scale=0.55]{ecc2.pdf} \caption{Distribution of eccentricities at the moment the BH-NS binaries enter the LIGO frequency band ($10$ Hz) for mergers produced by triples. Top panel: $\amax=2000$ AU and different $\sigma$'s; bottom panel: $\sigma=260\kms$ and different $\amax$'s. The vertical line shows the minimum $e_{\rm 10Hz}=0.081$ above which the LIGO/Virgo/KAGRA network may distinguish eccentric sources from circular sources \citep{gond2019}. A significant fraction of binaries formed in triples have a significant eccentricity in the LIGO band.} \label{fig:ecc} \end{figure}
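A minimal sketch of evaluating this peak frequency (our own helper; SI constants, masses in $\msun$ and semi-major axis in AU):
\begin{verbatim}
import numpy as np

G, MSUN, AU = 6.674e-11, 1.989e30, 1.496e11  # SI constants

def f_gw_peak(m1, m2, a, e):
    # Peak GW frequency (Wen 2003); returns f in Hz.
    mu = G * (m1 + m2) * MSUN
    return (np.sqrt(mu) / np.pi * (1.0 + e) ** 1.1954
            / (a * AU * (1.0 - e ** 2)) ** 1.5)

# e.g. f_gw_peak(10.0, 1.3, 1.0, 0.999) -> ~5e-3 Hz, still far
# below the 10 Hz band; the orbit must shrink further via GW emission.
\end{verbatim}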
\subsection{Merger times and rates} \begin{figure} \centering \includegraphics[scale=0.55]{tmerge.pdf} \caption{Merger time distribution of BH-NS binaries in triples that lead to merger for all models (see Table~\ref{tab:models}).} \label{fig:tmerge} \end{figure} Figure~\ref{fig:tmerge} shows the merger time cumulative distribution functions of BH-NS binaries in triples that lead to merger for all models. The cumulative function depends essentially on the $\sigma$ of the natal velocity kick distribution. Larger kick velocities imply a smaller outer semi-major axis for the surviving triples, thus a smaller KL timescale since $T_{\rm KL}\propto a_{\rm out,n}^{3}$. We find that BH-NS systems in models with $\sigma=0\kms$ merge on longer timescales compared to systems with non-null kick velocity, when the KL effect is at work. In our simulations, $\sim 50\%$ of the mergers happen within $\sim 8\times 10^7$ yr, $\sim 6\times 10^6$ yr, $\sim 2\times 10^6$ yr for $\sigma=0\kms$, $100\kms$, $260\kms$, respectively. Different $\amax$'s and a thermal distribution of the inner and outer eccentricity do not affect the merger time distribution significantly. In order to compute the merger rate of BH-NS binaries, we assume that the local star formation rate is $0.025 \msun$ Mpc$^{-3}$ yr$^{-1}$, thus the number of stars formed per unit mass is given by \citep{both2011}, \begin{equation} N(m)dm=5.4\times 10^6 m^{-2.3}\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}\ . \end{equation} Adopting a constant star-formation rate per comoving volume unit, the merger rate of binary BH-NS in triples is then, \begin{equation} \Gamma_\mathrm{BH-NS}=8.1\times 10^4 f_{\rm 3} f_{\rm stable} f_{\rm merge}\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}\ , \end{equation} where $f_{\rm 3}$ is the fraction of massive stars in triples, $f_{\rm stable}$ is the fraction of sampled systems that are stable after the SN events take place, and $f_{\rm merge}$ is the conditional probability that systems (stable after all the SNe) merge. In our calculations, we adopt $f_{\rm 3}=0.15$. The fraction of stable systems after SNe depends mainly on the value of $\sigma$ for the natal velocity kick distribution. We find $f_{\rm stable}\approx 1.6\times 10^{-2}$, $3.6\times 10^{-5}$, $1.1\times 10^{-6}$ for $\sigma=0\kms$, $100\kms$, $260\kms$, respectively, when $\amax=2000$ AU, and $f_{\rm stable}\approx 1.1\times 10^{-6}$, $0.9\times 10^{-6}$, $0.7\times 10^{-6}$ for $\amax=2000$ AU, $3000$ AU, $5000$ AU, respectively, when $\sigma=260\kms$. The typical fraction of systems that merge is $f_{\rm merge}=0.1$ (see Tab.~\ref{tab:models}). For instance, the zero-kick model gives $8.1\times 10^4 \times 0.15 \times 1.6\times 10^{-2} \times 0.1 \approx 19$ Gpc$^{-3}$ yr$^{-1}$. Therefore, our final estimated rate is in the range, \begin{equation} \Gamma_\mathrm{BH-NS}=1\times 10^{-3}-19 \ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}\ . \end{equation} Our rate overlaps with that of BH-NS mergers in binaries \citep{giac2018,kruc2018} and is entirely within the LIGO allowed values \citep{ligo2018}. \section{Discussion and conclusions} \label{sect:conc} While BH-NS mergers have not been found yet by current GW observatories, many of these systems are expected to be observed in the upcoming years. LIGO has set a $90\%$ upper limit of $610$ Gpc$^{-3}$ yr$^{-1}$ for the merger rate. The formation of BH-NS binaries is particularly difficult in star clusters, since the strong heating by BH interactions prevents the NSs from efficiently forming these binaries \citep{frag2018,ye2019}. This favours binary evolution in isolation \citep{giac2018,kruc2018}. However, surveys of massive stars have shown that a significant fraction of them resides in triple systems \citep{duq91,sana2017}. We have carried out for the first time a systematic statistical study of the dynamical evolution of triples comprised of an inner BH-NS binary by means of direct high-precision $N$-body simulations, including Post-Newtonian (PN) terms up to 2.5PN order.
In our calculations, we have started from the MS progenitors of the compact objects and have modeled the SN events that lead to the formation of BHs and NSs. Our findings are as follows: \begin{itemize} \item The majority of the BH-NS mergers in triples takes place when the inclination is $\sim 90^\circ$ and the KL effect is maximized, independently of the mean kick velocity. \item $\sigma$ affects the distribution of orbital elements of the merging BH-NS binaries in triples; the larger the mean natal kick, the smaller the typical semi-major axis. \item Larger $\sigma$'s lead to larger BH and chirp masses. \item A large fraction of merging BH-NS binaries in triples have a significant eccentricity ($\gtrsim 0.1$) in the LIGO band. \item Models with $\sigma=0\kms$ merge on longer timescales compared to systems with a natal kick velocity. \end{itemize} We have also computed the merger rate of BH-NS binaries, and found a rate $\Gamma_\mathrm{BH-NS}=1\times 10^{-3}-19 \ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}$, which overlaps with the rate of BH-NS mergers in binaries \citep{giac2018,kruc2018} and is entirely within the LIGO range of allowed values \citep{ligo2018}. The large uncertainty in the rate originates from the unknown physics and natal kick velocity distribution of compact objects in triple systems. Therefore, the observed merger rate of BH-NS binaries can be used to constrain the magnitude of the velocity kicks that are imparted during the SN process. Future work should also be devoted to studying different assumptions on the final masses of the NSs and BHs. The exact value of the mass intervals that lead to the formation of NSs and BHs and the mass lost during the MS phase and the following SN depends on the stellar metallicity, winds and rotation. These assumptions could affect the relative frequency of NSs and BHs, and the BH mass distribution. BH-NS mergers are also of interest for their electromagnetic (EM) counterparts, such as short gamma-ray bursts, which can provide crucial information on the origin of BH-NS mergers, since they can potentially provide a much better localization and redshift determination compared to GWs alone. Mergers of BH-NS binaries could be accompanied by the formation of a hyperaccreting disk whenever the mass ratio does not exceed $\sim 5$--$6$ \citep{Pannarale2011,Foucart2012,Foucart2018}. We find that $\sim 5$\% of mergers for $\sigma=0\kms$ have mass ratios $\lesssim 5$--$6$. For $\sigma=100\kms$ and $\sigma=260\kms$ we find no such systems. No disk will form for larger mass ratios since the tidal disruption radius of the NS is smaller than the radius of the innermost stable circular orbit, thus resulting in a direct plunge into the BH \citep{Bartos2013}. Larger mass ratios would instead reveal important information on the NS magnetosphere and crust \citep{tsang2012,dora2013,dora2016}. \section*{Acknowledgements} GF is supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities. GF also acknowledges support from an Arskin postdoctoral fellowship. GF thanks Seppo Mikkola for helpful discussions on the use of the code \textsc{archain}. Simulations were run on the \textit{Astric} cluster at the Hebrew University of Jerusalem. This work was also supported by the Black Hole Initiative at Harvard University, which is funded by a JTF grant (to AL). \bibliographystyle{mn2e}
\section{Introduction} The history of artificial intelligence (AI) can be mapped by its achievements playing and winning various games. From the early days of Chess-playing machines to the most recent accomplishments of Deep Blue~\cite{deep-blue}, AlphaGo~\cite{alpha-go}, and AlphaStar~\cite{AlphaStar}, {game-playing} AI\footnote{We refer to game-playing AI as any artificial intelligence solution that powers an agent in the game to simulate a player (user). This can range from rule-based agents to the state-of-the-art deep reinforcement learning agents.} has advanced from competent, to competitive, to champion in even the most complex games. Games have been instrumental in advancing AI, and most notably in recent times through Monte Carlo tree search and \blue{deep} reinforcement learning (RL). \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{Agent_PerformancexStyle_graph_v2.png} \caption{\blue{A depiction of the possible ranges of AI agents and the possible tradeoff/balance between skill and style. In this tradeoff, there is a region that captures human-like skill and style. AI Agents may not necessarily land in the human-like region. High-skill AI agents land in the green region while their style may fall out of the human-like region.} } \vspace{-.15in} \label{Figure:agent_performance_x_style} \end{figure} \blue{Complementary to these great efforts on training high-skill gameplaying agents, at Electronic Arts, our primary goal is to train agents that assist in the game design process, which is iterative and laborious. The complexity of modern games steadily increases, making the corresponding design tasks even more challenging. To support designers in this context, we train game-playing AI agents as user simulators to perform tasks ranging from automated playtesting to interaction with human players tailored to enhance game-play immersion.} \blue{ To approach the challenge of creating agents that can generate meaningful interaction data to inform game developers, we propose simple techniques to model different user behaviors. Each agent has to strike a different balance between {\em style} and {\em skill}. We define skill as how efficient the agent is at completing the task it is designed for. Style is vaguely defined as how the player engages with the game and what makes the player enjoy their game-play. Defining and gauging skill is usually much easier than that of style. Gameplay style in itself is a complex concept, spawning a field of its own. While a comprehensive study of style is outside the scope of this work, we refer the interested reader to ~\cite{drachen2009playerstyle, gow2012unsupervisedplayerstyle} on modeling player style. In this work, we attempt to evaluate style of an artificial agent using statistical properties of the underlying simulator model. } \blue{One of the most crucial tasks in game design is the process of playtesting. Game designers usually rely on playtesting sessions and feedback they receive from playtesters to make design choices in the game development process. Playtesting is performed to guarantee quality game-play that is free of game-breaking exceptions (e.g., bugs and glitches) and delivers the experience intended by the designers. Since games are complex entities with many moving parts, solving this multi-faceted optimization problem is even more challenging. 
An iterative loop, in which data is gathered from the game by one or more playtesters and then analyzed by designers, is repeated many times throughout the game development process.} \blue{ To mitigate this expensive process, one of our major efforts is to implement agents that can help automate aspects of playtesting. These agents are meant to play through the game, or a slice of it, trying to explore behaviors that can generate meaningful data to assist in answering questions that designers pose. These can range from exhaustively exploring certain sequences of actions, to trying to play a scenario from start to finish with the shortest sequence of actions possible. We showcase use-cases focused on creating AI agents to playtest games at Electronic Arts and discuss the related challenges.} \blue{Another key task in game development is the creation of in-game characters that interact with real human players. Agents must be trained and delicate tuning has to be performed to guarantee a quality experience (e.g., engagingness and humanness). An AI adversary with an unreasonably fast reaction time can be deemed unfair rather than challenging. On the other hand, a pushover agent might be an appropriate introductory opponent for novice players, while it fails to retain player interest after a handful of matches. While traditional AI solutions can provide excellent experiences for the players, it is becoming increasingly difficult to scale those traditional solutions up as the game worlds are becoming larger and the content is becoming dynamic.} \blue{ In our experience, as Fig. \ref{Figure:agent_performance_x_style} shows, there is a range of style/skill pairs that are achievable by human players, and hence called human-like. High-skill game-playing agents may have an unrealistic style rating if they rely on high computational power and memory size, and reaction times unachievable by humans. Evaluation of techniques to emulate human-like behavior has been presented~\cite{ortega2013imitating}, but measuring non-objective metrics such as fun and immersion is an open research question~\cite{fun-in-games, immersion-in-games}. Further, we cannot evaluate player engagement prior to the game launch, so we rely on our best approximation: {\em designer feedback}. Through an iterative process, designers evaluate the game-play experience by interacting with the agents to measure whether the intended game-play experience is provided.} \blue{These challenges each require a unique equilibrium between style and skill. Certain agents could take advantage of superhuman computation to perform exploratory tasks, most likely relying more heavily on skill. Others need to interact with human players, requiring a style that won't break player immersion. Then there are agents that need to play the game with players cooperatively, which makes them rely on the much more delicate balance that is required to pursue a human-like play style. Each of these individual problems calls for a different approach and has significant challenges. Pursuing human-like style and skill can be as challenging as (if not more challenging than) achieving high-performance agents.} Finally, training an agent to a specific need is often more efficient than achieving such a solution through high-skill AI agents. This is the case, for example, when using game-playing AI to run multiple playthroughs of a specific in-game scenario to trace the origin of an unintended game-play behavior.
In this scenario, an agent that would explore the game space would potentially be a better option than one that reaches the goal state of the level more quickly. Another advantage in creating specialized solutions (as opposed to artificial general intelligence) is the cost of implementation and training. The agents needed for these tasks are commonly less complex in terms of the training resources they require. To summarize, we mainly pursue two use-cases for having AI agents enhance the game development process. \begin{enumerate} \item {\em playtesting AI agents} to provide feedback during the game's development. \item \blue{{\em game-playing AI agents} to interact with real human players to shape their game-play experience.} \end{enumerate} The rest of the paper is organized as follows. In Section~\ref{sec:related_work}, we review the related work on training agents for playtesting and NPCs. In Section~\ref{sec:pipeline}, we describe our training pipeline. \blue{In Sections~\ref{sec:feedback} and~\ref{sec:NPC}, we provide four case studies that cover playtesting and game-playing, respectively. These studies are performed to help with the development process of multiple games at Electronic Arts. These games vary considerably in many aspects, such as the game-play platform, the target audience, and the engagement duration. The solutions in these case studies were created in constant collaboration with the game designers. The first case study in Section~\ref{sec:sims-mobile}, which covers game balancing and playtesting, was conducted in conjunction with the development of The Sims Mobile. The other case studies are performed on games that were still under development at the time this paper was written. Hence, we purposely omit specific details to comply with company confidentiality.} Finally, the concluding remarks are provided in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} \subsection{Playtesting AI agents (user simulators)} To validate their design, game designers conduct playtesting sessions. Playtesting consists of having a group of players interact with the game in the development cycle not only to gauge player engagement, but also to discover states that result in undesirable outcomes. As a game goes through the various stages of development, it is essential to continuously iterate and improve the relevant aspects of the game-play and its balance. Relying exclusively on playtesting conducted by humans can be costly and inefficient. Artificial agents could perform much faster play sessions, allowing the exploration of much more of the game space in a much shorter time. When applied to playing computer games, RL assumes that the goal of a trained agent is to achieve the best possible performance with respect to clearly defined rewards while the game itself remains fixed for the foreseeable future. In contrast, during game development the objectives and the settings are quite different and vary over time. Agents can play a variety of roles with rewards that are not obvious to define formally, e.g., the objective of exploring a game level is different from that of defeating all adversaries. In addition, the environment often changes between game builds. In such settings, it is desirable to quickly train agents that help with automated testing and data generation for game balance and feature evaluation. It is also desirable that the agent be re-usable despite new aesthetics and game-play features.
A strategy of relying on increasing computational resources combined with substantial engineering efforts to train agents in such conditions is far from practical and calls for a different approach. The idea of using artificial agents for playtesting is not new. Algorithmic approaches have been proposed to address the issue of game balance, in board games~\cite{de2017contemporaryboardgameai,hom2007automatic} and card games~\cite{r2014,mahlmann2012evolving, silva2019evolving}. More recently, Holmgard {\em et al.}~\cite{holmgard}, as well as Mugrai {\em et al.}~\cite{mugrai2019automated}, built variants of MCTS to create a player model for AI Agent based playtesting. \blue{Guerrero-Romero {\em et al.} created different goals for general game-playing agents in order to playtest games emulating players of different profiles~\cite{guerrero2018using}}. These techniques are relevant to creating rewarding mechanisms for mimicking player behavior. AI and machine learning can also play the role of a co-designer, making suggestions during the development process~\cite{yannakakis2014mixed}. Tools for creating game maps~\cite{liapis2013sentient} and level design~\cite{smith2010tanagra,shaker2013ropossum} have also been proposed. See~\cite{search-based-generation,pcgml} for a survey of these techniques in game design. In this paper, we describe our framework that supports game designers with automated playtesting. This also entails a training pipeline that universally applies this framework to a variety of games. We then provide two case studies that entail different solution techniques. \subsection{Game-playing AI agents (Non-Player Characters)} Game-playing AI has been a main constituent of games since the dawn of video gaming. \blue{Analogously, games, given their challenging nature, have been a target for AI research~\cite{yannakakis2018artificial}. Over the years, AI agents have become more sophisticated and have been providing excellent experiences to millions of players as games have grown in complexity.} Scaling traditional AI solutions in ever-growing worlds with thousands of agents and dynamic content is a challenging problem calling for alternative approaches. \blue{The idea of using machine learning for game-playing AI dates back to} Arthur Samuel~\cite{samuel-checkers}, who applied some form of tree search combined with basic reinforcement learning to the game of checkers. His success motivated researchers to target other games using machine learning, and particularly reinforcement learning. IBM Deep Blue followed the tree search path and was the first artificial game agent to beat the chess world champion, Garry Kasparov~\cite{deep-blue}. A decade later, Monte Carlo Tree Search (MCTS)~\cite{MCTS,UCT} was a big leap in AI for training game agents. MCTS agents for playing Settlers of Catan were reported in~\cite{settlersszita2009monte,settlerschaslot2008monte} and shown to beat previous heuristics. Other work compares multiple agent approaches to one another in the two-player variant of Carcassonne and discusses variations of MCTS and Minimax search for playing the game~\cite{Carcassonneheyden2009implementing}. MCTS has also been applied to the game of 7 Wonders~\cite{7wonders} and Ticket to Ride~\cite{MCTSTicketToRide}.
\blue{Furthermore, Baier {\em et al.} biased MCTS with a player model, extracted from game-play data, to have an agent that was competitive while approximating human-like play~\cite{baier2018emulating}.} Tesauro~\cite{TDGammon}, on the other hand, used TD-Lambda to train Backgammon agents at a superhuman level. The impressive recent progress on RL to solve video games is partly due to the advancements in processing power and AI computing technology.\footnote{The amount of AI compute has been doubling every 3-4 months in the past few years~\cite{openai-compute}.} More recently, \blue{following the success stories in deep learning, deep Q networks (DQNs) use deep neural networks as function approximators within Q-learning~\cite{DQN}. DQNs can use convolutional function approximators as a general representation learning framework from the pixels in a frame buffer without the need for task-specific feature engineering.} DeepMind remarried the approaches by combining DQN with MCTS to create AI agents that play Go at a superhuman level~\cite{alpha-go}, and solely via self-play~\cite{alpha-go-zero,alpha-zero}. Subsequently, OpenAI researchers showed that a policy optimization approach with function approximation, called Proximal Policy Optimization (PPO)~\cite{PPO}, would lead to training agents at a superhuman level in Dota 2~\cite{openai-dota2}. \blue{Cuccu {\em et al.} proposed learning policies and state representations separately, but at the same time, and did so using two novel algorithms~\cite{cuccu2019playing}. With such an approach they were able to play Atari games with neural networks of 18 neurons or less.} Recently, progress was reported by DeepMind on StarCraft II, where AlphaStar was unveiled to play the game at a highly competitive human level by combining several techniques, including attention networks~\cite{AlphaStar}. \begin{figure} \centering \includegraphics[width=\linewidth]{pipeline.pdf} \caption{The AI agent training pipeline, \blue{which consists of two main components, game-play environment and agent environment. Agents submit actions to the game-play environment and receive the next state.}} \label{Figure:training_pipeline} \end{figure} \section{Training Pipeline} \label{sec:pipeline} \blue{To train AI agents efficiently, we developed a unified training pipeline applicable to all EA games, regardless of platform and genre. In this section, we present our training pipeline that is used for solving the case studies presented in the sections that follow.} \subsection{Gameplay and Agent Environments} The AI agent training pipeline, which is depicted in Fig.~\ref{Figure:training_pipeline}, consists of two key components: \begin{itemize} \item {\it Gameplay environment} refers to the simulated game world that executes the game logic with actions submitted by the agent every timestep and produces the next state. \item {\it Agent environment} refers to where the agent interacts with the game world. The agent observes the game state and produces an action. It is where training occurs. \end{itemize} In practice, the game architecture can be complex, making it infeasible for the game to directly communicate the complete state space at every timestep.
To train artificial agents, we create an interface between the game-play environment and the agent (learning) environment.\footnote{These environments may be physically separated, and hence, we prefer a client that supports fast cloud execution, and is not tied to frame rendering.} The interface extends OpenAI Gym~\cite{openai-gym} and supports actions that take arguments, which is necessary to encode action functions and is consistent with PySC2~\cite{starcraft2}. We also adapt Dopamine~\cite{dopamine} to this pipeline to make DQN~\cite{DQN}, Rainbow~\cite{rainbow} and PPO~\cite{schulman2017proximal} agents available for training in the game. Additionally, we add support for more complex preprocessing of game-state features beyond the usual frame-buffer stacking. \subsection{State Abstraction} The use of the frame buffer as an observation of the game state has proved advantageous in eliminating the need for manual feature engineering in Atari games~\cite{DQN}. However, to enable efficient training (using imitation learning or reinforcement learning) in a fast-paced game development process, the drawbacks of using the frame buffer outweigh its advantages. The main considerations that lead us to decide in favor of a lower-dimensional engineered representation of the game state are: \begin{enumerate}[(a)] \item During almost all stages of development, the game parameters are evolving. In particular, the art may change at any moment, and the look of already learned environments can change overnight. It is desirable to train agents using more stable features that can transfer to new environments with little need for retraining. \item State abstraction allows us to train much smaller models (networks) because of the smaller input size and the use of carefully engineered features. This is critical for real-time application environments where rendering, animation, and physics occupy much of the GPU and CPU power at inference time. \item In playtesting, the game-play environment and the agent environment may reside in physically separate nodes. Naturally, the RL state-action-reward loop in such a setup requires a lot of network communication. Frame buffers would significantly increase this communication cost, whereas derived game-state features enable a more compact encoding. \item Obtaining an artificial agent in a reasonable time usually requires that the game be clocked at a rate much higher than the usual game-play speed. As rendering each frame takes significant time, overclocking with rendering enabled is not practical. Additionally, moving large amounts of data from the GPU to main memory drastically slows down execution and can potentially introduce simulation artifacts by interfering with the target timestep rate. \item Last but not least, we can leverage access to the game code to have the game engine distill a compact state representation and pass it to the agent environment. By doing so, we also have a better hope of learning in environments where the pixel frames contain only partial information about the state space. \end{enumerate} Feature selection for the compact state representation may require some engineering effort, but it is straightforward after familiarization with the game mechanics. It is often similar to that of traditional game-playing AI, which is informed by the game designer. We remind the reader that our goal is not to train agents to win, but to simulate human-like behavior, so we train on information accessible to human players.
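To make the preceding points concrete, the following is a minimal, hypothetical sketch of the kind of interface described above: a Gym-style environment that exposes an engineered feature vector instead of a frame buffer and accepts actions that take arguments. The \texttt{client} object and its methods are illustrative stand-ins for the game-side connection, not our actual implementation.
\begin{verbatim}
import gym
import numpy as np

class GameplayEnv(gym.Env):
    """Gym-style wrapper around a (hypothetical) game client that
    communicates a compact, engineered game-state vector."""

    def __init__(self, client, num_features=150):
        self.client = client  # connection to the game-play environment
        self.observation_space = gym.spaces.Box(
            low=-np.inf, high=np.inf,
            shape=(num_features,), dtype=np.float32)
        # Actions take arguments: an action class plus a scalar
        # argument, similar in spirit to the PySC2 action interface.
        self.action_space = gym.spaces.Tuple((
            gym.spaces.Discrete(25),
            gym.spaces.Box(low=0.0, high=1.0, shape=(1,),
                           dtype=np.float32)))

    def reset(self):
        return np.asarray(self.client.reset(), dtype=np.float32)

    def step(self, action):
        action_class, argument = action
        # The game server executes the logic and returns the next
        # engineered state, the reward, and a termination flag.
        state, reward, done = self.client.submit(action_class, argument)
        return np.asarray(state, dtype=np.float32), reward, done, {}
\end{verbatim}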
In the rest of this paper, we present four case studies on training intelligent agents for game development; two of them focus on playtesting AI agents, and the next two on game-playing AI agents. \section{\blue{Playtesting AI Agents}} \label{sec:feedback} \subsection{\blue{Measuring player experience for different player styles}} \label{sec:sims-mobile} In this section, we consider the early development of The Sims Mobile, whose game-play is about ``emulating life'': players create avatars, called Sims, and conduct them through a variety of everyday activities. In this game, there is no single predetermined goal to achieve. Instead, players craft their own experiences, and the designer's objective is to evaluate different aspects of that experience. In particular, each player can pursue different careers, and each will have a different experience and trajectory in the game. \blue{In this specific case study, the designer's goal is to evaluate whether the current tuning of the game achieves the intended balanced game-play experience across different careers. For example, different careers should prove similarly difficult to complete.} We refer the interested reader to \cite{Fernando-A-star} for a more comprehensive study of this problem. The game is \blue{single-player, deterministic, real-time,} and fully observable, and its dynamics are fully known. \blue{We also have access to the complete game state, which is composed mostly of character and ongoing-action attributes.} This simplified case allows for the extraction of a lightweight model of the game \blue{(i.e., state transition probabilities).} While this requires some additional development effort, we can achieve a dramatic speedup in training agents by avoiding (reinforcement) learning and resorting to planning techniques instead. In particular, we use the A* algorithm to explore the state transition graph, \blue{owing to the simplicity of proposing a heuristic that can be tailored to the specific needs of the designer}. The customizable heuristics and the target states corresponding to different game-play objectives, \blue{which represent the style we are trying to achieve}, provide sufficient control to conduct various experiments and explore multiple aspects of the game. \blue{Our heuristic for the A* algorithm is a weighted sum of the three main parameters that contribute to career progression: career level, current career experience points, and the number of completed career events. These parameters are directly related: to gain career levels, players have to accumulate career experience points, and to obtain experience, players have to complete career events. The weights are assigned based on the order of magnitude of each parameter. Since the career level is the most important, it receives the highest weight. The total number of completed career events has the lowest weight because it is already partially factored into the career points received so far.} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{career_approach_comparison.png} \caption{Comparison of the average number of actions (appointments) taken to complete each career using A* search and the evolution strategy, adapted from~\cite{Fernando-A-star}.} \label{Figure:career_approach_comp} \end{figure}
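As a rough illustration of the heuristic described above, the sketch below shows one possible encoding; the weights and field names are hypothetical and merely reflect the order-of-magnitude reasoning, with higher progression translating into a lower estimated cost-to-go for A*.
\begin{verbatim}
W_LEVEL, W_POINTS, W_EVENTS = 100.0, 10.0, 1.0  # order-of-magnitude weights

def career_heuristic(state):
    """A*-style cost-to-go estimate: the further the career has
    progressed, the smaller the estimated remaining cost."""
    progress = (W_LEVEL * state.career_level
                + W_POINTS * state.career_points
                + W_EVENTS * state.completed_events)
    return -progress  # A* expands nodes with the lowest f = g + h first
\end{verbatim}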
\blue{We also compare the A* results to the results of an optimization problem over a subspace of utility-based policies, approximately solved with an evolution strategy (ES)~\cite{openai-es}. Our goal, in this case, is to achieve a high environment reward against a selected objective, e.g., reach the end of a career track while maximizing earned career event points. We design the ES objective accordingly. The agent performs an action $a$ based on a probabilistic policy obtained by taking a softmax over the utility measure $U(a, s)$ of the actions in a game state $s$. The utility here serves as an action-selection mechanism to compactly represent a policy. In a sense, it is a proxy for a state-action value $Q$-function $Q(a, s)$ in RL. However, we do not attempt to derive the utility from Bellman's equation and the actual environment reward $R$. Instead, we learn the parameters that define $U(a, s)$ to optimize the environment rewards using the black-box ES optimization technique. In that sense, optimizing $R$ by learning the parameters of $U$ is similar to Proximal Policy Optimization (PPO), albeit in a much more constrained setting. To this end, we design the utility of an action as a weighted sum of the immediate action rewards $r(a)$ and costs $c(a)$. These are vector-valued quantities that are explicitly present in the game tuning data, describing the outcome of executing such actions. The parameters evolved by the ES are the linear weights of the utility function $U$, explained below, and the temperature of the softmax function. An additional advantage of the proposed linear design of the utility function is a certain level of interpretability: the weights correspond to the utilities, as perceived by the agent, of the individual components of the resources or the immediate rewards. Such interpretability can guide changes to the tuning data.} { Concretely, given the game state $s$, we design the utility $U$ of an action $a$ as $U(s,a) = r(a)v(s) + c(a)w(s).$ The immediate reward $r(a)$ here is a vector that can include quantities like the amount of experience, the amount of career points earned for the action, and the events triggered by it. The cost $c(a)$ is a vector defined similarly. The action costs specify the resources required to execute such an action, e.g., how much time, energy, etc. a player needs to spend to successfully trigger and complete the action. The design of the tuning data makes both quantities $r$ and $c$ depend only on the action itself. Since both the immediate reward $r(a)$ and the cost $c(a)$ are vector-valued, the products in the definition of $U$ above are dot products. The vectors $v(s)$ and $w(s)$ introduce the dependence of the utility on the current game state and are the weights defining the relative contributions of the immediate resource costs and immediate rewards towards the current goals of the agent. The inferred utilities of the actions depend on the state: some actions are more beneficial in certain states than in others, e.g., triggering a career event when not having enough resources to complete it successfully. The relevant state components $s=(s_1,\dots,s_k)$ include available commodities like energy and hunger and a categorical event indicator (0 if outside of the event and 1 otherwise), wrapped into a vector. The total number of relevant dimensions here is $k$. We design the weights $v(s)$ and $w(s)$ as linear functions of the state with coefficients ${\bf p}=(p_1,\dots,p_k)$ and ${\bf q}=(q_1,\dots,q_k)$ that we learn: $v(s)=\sum_{i=1,\dots,k} p_i s_i$ and $w(s)=\sum_{i=1,\dots,k} q_i s_i$. To define the optimization objective $J$, we construct it as a function of the number of successfully completed events $N$ and the number of attempted events $M$.
We aim to maximize the ratio of successful to attempted events times the total number of successful events in the episode, as follows: $J(N, M) = N (N + \epsilon) / (M + \epsilon)$, where $\epsilon$ is a small positive real number that ensures stability when the policy fails to attempt any events. The overall optimization problem is therefore $\max_{{\bf p}, {\bf q}} J(N, M)$ subject to the policy parameterized with the parameters $\bf p$ and $\bf q$. The utility-based ES, as we describe it here, captures the design intention of driving career progression in the game-play by successful completion of career events. Due to the emphasis on event completion, our evolution strategy setup does not necessarily result in an optimization problem equivalent to the one tackled with $A^*$. However, as we discuss below, it has a similar optimum most of the time, supporting the design view on the progression. A similar approach works for evaluating relationship progression, which is another important element of the game-play. } We compare the number of actions that it takes to reach the goal for each career in Fig.~\ref{Figure:career_approach_comp}, as computed by the two approaches. \blue{We can see that the more expensive optimization-based evolution strategy performs similarly to the much simpler A* search.} The largest discrepancy arises for the Barista career. This may stem from the fact that this career has an action that does not reward experience by itself but rather enables another action that does. This action can be repeated often, which can explain the high numbers. Also, we observe that in the case of the medical career, the 2,000-node A* cutoff was potentially responsible for the underperformance of that solution. When running the two approaches, we can compare how many sample runs are required to obtain statistically significant results. The A* agent learns a deterministic playstyle, with no variance. We performed 2,000 runs for the evolution strategy; its agent has high variance and requires a sufficiently large number of runs to approach a final reasonable strategy~\cite{Fernando-A-star}. \blue{In this use case, we were able to use a planning algorithm, A*, to explore the game space and gather data for the game designers to evaluate the current tuning of the game. This was possible because the goal was straightforward: to evaluate progression in the different careers. As such, the skill and style requirements for the agent were achievable and simple to model. In the next use cases, we analyze scenarios that call for different approaches as a consequence of more complex requirements and subjective agent goals.} \begin{figure*} \centering \includegraphics[width=0.48\linewidth]{RL_train.png} \includegraphics[width=0.48\linewidth]{RL_eval.png} \caption{\blue{This plot belongs to Section~\ref{sec:player-progression}.} Average cumulative reward (return) in training and evaluation for the agents as a function of the number of iterations. Each iteration is worth $\sim$60 minutes of game-play. The trained agents are: (1) a DQN agent with the complete state space, (2) a Rainbow agent with the complete state space, (3) a DQN agent with the augmented observation space, and (4) a Rainbow agent with the augmented observation space. The augmented space is the space observable by humans in addition to inferred information, which is much smaller than the complete space.}\label{fig:result} \end{figure*}
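A minimal sketch of the utility-based softmax policy follows. One consistent reading of the dot products above treats the coefficients as matrices $P$ and $Q$ that map the state to the weight vectors $v(s)$ and $w(s)$; the ES then evolves the entries of $P$ and $Q$ together with the softmax temperature. This is an assumption made for the sketch, not a verbatim transcription of our implementation.
\begin{verbatim}
import numpy as np

def utility(s, r, c, P, Q):
    # U(s, a) = r(a) . v(s) + c(a) . w(s), with v(s) = P s, w(s) = Q s
    return r @ (P @ s) + c @ (Q @ s)

def sample_action(s, actions, P, Q, temperature):
    # `actions` is a list of (r(a), c(a)) pairs from the tuning data.
    u = np.array([utility(s, r, c, P, Q) for r, c in actions])
    z = np.exp((u - u.max()) / temperature)  # numerically stable softmax
    return np.random.choice(len(actions), p=z / z.sum())
\end{verbatim}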
\subsection{Measuring competent player progression} \label{sec:player-progression} \blue{In the next case study, we consider a real-time multi-player mobile game with a stochastic environment and sequential actions. The game dynamics are governed by a complex physics engine, which makes it impractical to apply planning methods. This game is more complex than The Sims Mobile in the sense that strategic decision making is required for progression.} When the game dynamics are unknown \blue{or complex}, most recent success stories are based on \blue{model-free} RL (and particularly variants of DQN and PPO). In this section, we show how such model-free control techniques fit into the paradigm of playtesting modern games. \blue{In this game, the goal is to level up and reach milestones in the game. To this end, the players need to make decisions in terms of resource mining and management for different tasks. In the process, the agent needs to perform upgrades that require certain resources. If such resources are insufficient, a human player will be able to visually discern the validity of such an action by clicking on the particular upgrade. The designer's primary concern in this case study is to measure how a competent player would progress in the early stages of this game. In particular, players are required to balance resources and make strategic choices, which the agent needs to discern as well.} \blue{We consider a simplified state space that contains information about the early game, ignoring the full state space.} The \blue{relevant part of the} state \blue{space} consists of $\sim$50 continuous and $\sim$100 discrete state variables. The set of possible actions $\alpha$ is a subset of a space $A$, which consists of $\sim$25 action classes, some of which take values from a continuous range, and some from a discrete set of action choices. The agent has the ability to generate actions $\alpha \in A$, but not all of them are valid at every game state, since $\alpha=\alpha(s, t)$, i.e., $\alpha$ depends on the timestep and the game state. Moreover, the subset $\alpha(s, t)$ of valid actions \blue{may} only partially \blue{be} known to the agent. \blue{If the agent attempts to take an unavailable action, such as a building upgrade without sufficient resources, the action will be deemed invalid and no {\em actual} action will be taken by the game server.} While the problems of a huge state space~\cite{poupart-pomdp-continuous,pomdp-continuous,poupart-pomdp-continuous2}, a continuous action space~\cite{silver-cont-action}, and a parametric action space~\cite{stone-parameterized} could each be dealt with, these techniques are not directly applicable to our problem. This is because, as we shall see, some actions will be invalid at times, and inferring that information may not be fully possible from the observation space. Finally, the game is designed to last tens of millions of timesteps, taking the problem of training a functional agent in such an environment outside the domain of previously explored problems. We study game progression while taking only valid actions. As we already mentioned, the set of valid actions $\alpha$ \blue{may} not \blue{be} fully determined by the current observation, and hence, we deal with a partially observable Markov decision process (POMDP). Given the practical constraints outlined above, it is infeasible to apply deep reinforcement learning to train agents on the game in its entirety.
In this section, we show progress toward training an artificial agent that takes valid actions and progresses in the game like \blue{a competent} human player. \blue{To this end, we wrap this game in the game-play environment and connect it} to our training pipeline with DQN and Rainbow agents. \blue{In the agent environment,} we use a \blue{feedforward neural} network with two fully connected hidden layers, \blue{each with 256 neurons followed by a} ReLU activation. \blue{As a first step in measuring game progression,} we \blue{define} an episode by setting an early goal state in the game that takes an expert human player $\sim$5 minutes to reach. We let the agent submit actions to the game server every second. \blue{We may have to revisit this assumption for longer episodes, where the human player is expected to interact with the game only periodically.} \blue{We use a simple rewarding mechanism, where} we reward the agent with `+1' when it reaches the goal state, `-1' when it submits an invalid action, `0' when it takes a valid action, and `-0.1' when it chooses the ``do nothing'' action. The game is such that at times the agent has no other valid action to choose, and hence it should choose ``do nothing'', but such periods do not last more than a few seconds \blue{in the early stages of the game, which is the focus of this case study.} We consider two different versions of the observation space, both extracted from the game engine \blue{(state abstraction)}. The first is the ``complete'' state space, which contains information that is not straightforward to infer from the real observation in the game and is used only as a baseline for the agent. \blue{In particular, the complete state space also includes the list of available actions at each state.} The polar opposite of this state space could be called the ``naive'' state space, which contains only straightforward information \blue{that is always shown on the screen of the player.} The second state space we consider is what we call the ``augmented'' observation space, which contains information from the naive state space together with information the agent would infer and retain from the current and previous gameplays. \blue{For example, this includes the amount of resources needed for an upgrade after the agent has checked a particular building for an upgrade. The augmented observation space does not include the set of all available actions; we instead rely on the game server to validate whether a submitted action is available, because it is not possible to encode and pass the set $\alpha$ of available actions. Thus, if an invalid action is chosen by the agent, the game server will ignore the action and will flag it so that we can provide a `-1' reward.}
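The rewarding mechanism above amounts to a small lookup, sketched here with illustrative outcome labels:
\begin{verbatim}
def step_reward(outcome):
    # Encoding of the rewarding mechanism described in the text.
    rewards = {
        "goal_reached": 1.0,     # early goal state reached
        "invalid_action": -1.0,  # flagged by the game server
        "do_nothing": -0.1,      # idle action
        "valid_action": 0.0,     # any other valid action
    }
    return rewards[outcome]
\end{verbatim}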
We trained four types of agents, as shown in Fig.~\ref{fig:result}, where we plot the average undiscounted return per episode. By design, this quantity is upper bounded by `+1', which is achieved by taking valid actions until reaching the final goal state. In reality, this may not always be achievable, as there are periods of time where no action is available and the agent has to choose the ``do nothing'' action and be rewarded with `-0.1'. Hence, the best a \blue{competent} human player would achieve on these episodes would be around zero. We see that after a few iterations, both the Rainbow and DQN agents converge to their asymptotic performance values. The Rainbow agent converges to a better performance level compared to the DQN agent. However, in the transient behavior, we observe that the DQN agent achieves the asymptotic behavior faster than the Rainbow agent. \blue{We believe this is due to the fact that the hyperparameters of prioritized experience replay~\cite{prioritized-experience-replay} and distributional RL~\cite{distributional-RL} were not tuned.\footnote{\blue{This is consistent with the results of Section~\ref{sec:team-sports}, where Rainbow with default hyperparameters does not outperform DQN either.}} We used the default values that worked best on Atari games with the frame buffer as the state space. Extra hyperparameter tuning would have been costly in terms of cloud infrastructure and time for this particular problem, since the game server does not allow speedup, building the game takes several hours, and training one agent takes several hours as well.} As expected, Fig.~\ref{fig:result} shows that the augmented observation space makes the training slower and results in worse performance of the final strategy. In addition, the agent keeps attempting invalid actions in some cases, as the state remains mostly unchanged after each attempt and the policy is (almost) deterministic. This results in accumulating large negative returns in such episodes, which accounts for the dips in the right-hand-side panel of Fig.~\ref{fig:result} at evaluation time. The observed behavior drew our attention to the question of whether it is also too difficult for a human player to discern and keep track of the set of valid actions. In fact, after seeking more extensive human feedback, the game designers concluded that better visual cues were needed to give a human player information about the valid actions at each state, so that human players could progress more smoothly without being blocked by invalid actions. As next steps, we intend to experiment with shaping the reward function to achieve different play styles, so as to better model different player clusters. A comparison between human play styles and agent-emulated styles is discussed in \cite{kemmerling2013making}. We also intend to investigate augmenting the replay buffer with expert demonstrations, both for faster training and for generative adversarial imitation learning~\cite{GAIL}, once the game is released and human play data is available. \blue{We remark that without state abstraction (streamlined access to the game state), the neural network function approximator used for Q-learning would have needed to discern all such information from the pixels in the frame buffer, and hence we would not have been able to get away with such a simple two-layer feedforward function approximator to solve this problem. However,} we observe that training within the paradigm of \blue{model-free} RL remains costly. Specifically, even using the complete state space, it takes several hours to train \blue{an agent} that achieves a level of performance expected of a \blue{competent} human player on this relatively short episode of $\sim$5 minutes. This calls for the exploration of complementary approaches to augment the training process.
\blue{In particular, we also would like to streamline this process by training reusable agents and capitalizing on existing human data through imitation learning.} \section{Game-playing AI} \label{sec:NPC} We have shown the value of simulated agents in a fully modeled game, and the potential of training agents in a complex game to model player progression \blue{for game balancing.} We can take these techniques a step further and make use of agent training to help build the game itself. Instead of applying RL to capture player behaviors, we consider an approach to game-play design where the player agents learn behavior policies from the game designers. \blue{The primary motivation is to give direct control into the designers' hands and enable easy interactive creation of various behavior types. At the same time, we aim to complement organic demonstrations with bootstrapping and heuristics, to eliminate the need for a human to train an agent on states normally not encountered by humans, e.g., unblocking an agent using obstacle avoidance.} \subsection{Human-Like Exploration in an Open-World Game} To bridge the gap between the agent and the designer, we introduce imitation learning (IL) to our system~\cite{GAIL,IL1, IL2,IL3}. In the present application, IL allows us to translate the intentions of the game designer into a primer and a target for our agent learning system. Learning from expert demonstrations has traditionally proved very helpful in training agents, \blue{including in games~\cite{thurau2007bayesian}}. In particular, the original AlphaGo~\cite{alpha-go} used expert demonstrations in training a deep Q network. While subsequent work argued that learning via self-play could have a better asymptotic return, the performance gain comes with significantly higher computational training costs, and superhuman performance is not sought in this work. Other cases preferred training agents on relatively short demonstrations played by developers or designers~\cite{oneshot-IL}. \begin{figure*} \centering \includegraphics[width=0.77\linewidth]{interactive_training.png} \vspace{-.08in} \caption{Model performance measures the probability of the event that the Markov agent finds at least one previous action from human-played demonstration episodes in the current game state. The goal of interactive learning is to add support for new game features to the already trained model or to improve its performance in underexplored game states. Plotted is the model performance during interactive training from demonstrations in a proprietary open-world game as a function of time measured in \blue{milliseconds (with the total duration around 10 minutes).}} \label{Figure:interactive_training} \vspace{-.08in} \end{figure*} In this application, we consider training artificial agents in an open-world video game, where the game designer is interested in training non-player characters that \blue{exhibit} certain behavioral styles. \blue{The game we are exploring is a shooter with contextual game-play and a destructible environment. We focus on single-player mode, which provides an environment tractable yet rich enough to test our approach. Overall, the dimensionality of the agent state can grow to several dozen continuous and categorical variables. We construct similar states for interactable NPCs.} \blue{The NPCs in this environment represent adversarial entities trying to attack the agent. Additionally, the environment can contain objects of interest, like ammo boxes, dropped weapons, etc.
The environment itself is stochastic; i.e., there is no single random seed that we can set to control all random choices in the environment. Additionally, frequent saving and reloading of the game state is not practical due to relatively long loading times. } \blue{Our main objective in this use case is to provide the designer with a tool to playtest the game by interacting with it in a number of particular styles that emulate different players. The styles can include: \begin{itemize} \item Aggressive: the agent tries to defeat adversarial NPCs, \item Sniper: the agent finds a good sniping spot and waits for adversaries to appear in its cone of sight to shoot them, \item Exploratory: the agent attempts to explore as many locations and objects of interest as possible while actively trying to avoid combat, \item Sneaky: the agent tries to focus on its objectives while avoiding combat. \end{itemize} } \blue{An agent trained this way can also play as an ``avatar'' of an actual human player, standing in for players when they are not online or filling a vacant spot in a squad. The agents are not designed to have any specific level of performance, and they may not necessarily follow any long-term goals. These agents are intended to explore the game and also to be able to interact with human players at a relatively shallow level of engagement. In summary,} we want to efficiently train an agent using demonstrations capturing only certain elements of the game-play. The training process has to be computationally inexpensive, and the agent has to imitate the behavior of the teacher(s) by mimicking their relevant style (in a statistical sense), providing an \emph{implicit} representation of the teacher's objectives. Casting this problem directly into the RL framework is complicated. First, it is not straightforward to design a rewarding mechanism for imitating the style of the expert. While inverse RL aims at solving this problem, its applicability is not obvious, given the reduced representation of the huge state-action space that we deal with and the ill-posed nature of the inverse RL problem~\cite{apprenticeship-learning,IRL}. Second, the RL training loop often requires thousands of episodes to learn useful policies, directly translating to a high cost of training in terms of time and computational resources. \blue{Hence, rather than using more complex solutions such as generative adversarial imitation learning~\cite{GAIL}, which uses an RL network as its generator,} we propose a solution to the stated problem based on an ensemble of multi-resolution Markov models. \blue{One of the major benefits of the proposed model is the ability to perform interactive training within the same episode. As a useful byproduct of our formulation, we can also sketch a mechanism for the numerical evaluation of the style associated with the agents we train. We outline the main elements of the approach next; for additional details, refer to \cite{ICML-HILL,Igor-NeurIPS18,AAAI-Make, LLNL_CASIS_2019} }. { \subsubsection{Markov Decision Process with Extended State} We place the problem into the standard MDP framework and augment it as follows. Firstly, we ignore differences between the observation and the actual state $s$ of the environment. The actual state may be impractical to expose to the agent. To mitigate partial observability, we extend observations with a short history of previously taken actions.
In addition to implicitly encoding the intent of a teacher and their reactions to potentially richer observations, this helps to preserve the stylistic elements of human demonstrations. Concretely, we assume the following. The interaction of the agent and the environment takes place at discrete moments $t=1,\dots, T$, with the value of $t$ trivially observable by the agent. After receiving an observation $s_t$ at time $t$, the agent can take an action $a_t$ from the set of allowed actions $A(s, t)$, using a policy $\pi: s \to a$. Executing an action results in a new state $s_{t+1}$. Since we focus on the stylistic elements of the agent behavior, the rewards are inconsequential for the model we build, and we drop them from the discussion. Considering the episode-based environment, a complete episode is then $E=\{(s_t, a_t)\}_{t \in \{1,\dots,T\}}$. The fundamental assumption regarding the described decision process is that it has the Markov property. We also consider a recent history of the past $n$ actions, where $1 \leq n < T$, $\alpha_{t, n} := a_{t-n}^{t-1}=\{a_{t-n}, \dots, a_{t-1}\}$, whenever it is defined in episode $E$. For $n=0$, we define $\alpha_{t,0}$ as the empty sequence. We augment the observed state $s_t$ with the action history $\alpha_{t,n}$ to obtain the \textit{extended state} $S_{t,n}=(s_t, \alpha_{t,n})$. The purpose of including the action history is to capture additional information from human input during interactive demonstrations. An extended policy $\pi_{n}$, which operates on the extended states, $\pi_{n} : S_{t,n} \to a_t$, is useful for modeling human actions in a manner similar to $n$-gram text models in natural language processing (NLP) (e.g., \cite{KaminskiMP}, \cite{davidwrite}, \cite{andresen2017approximating}). Of course, the analogy with $n$-gram models in NLP works only if both the state and action spaces are discrete. We address this restriction in the next subsection using multi-resolution quantization. For a discrete state-action space and various $n$, we can compute the probabilities $P\{a_t|S_{t,n}\}$ of transitions $S_{t,n} \to a_t$ occurring in the demonstrations and use them as a Markov model $M_n$ of order $n$. We say that the model $M_n$ is defined on an extended state $S_{.,n}$ if the demonstrations contain at least one occurrence of $S_{.,n}$. When a model $M_n$ is defined on $S$, we can use $P\{a_t|S_{t,n}\}$ to sample the next action from all next actions ever observed in state $S_{.,n}$. Hence, $M_n$ defines a partial stochastic mapping $M_n: S_{.,n} \to A$ from extended states to the action space $A$. \subsubsection{Stacked Markov models} We call a sequence of Markov models $\mathcal{M}_n = \{M_i\}_{i=0,\dots,n}$ a stack of models. A (partial) policy defined by $\mathcal{M}_n$ computes the next action at a state $s_t$; see \cite{ICML-HILL} for the pseudo-code of the corresponding algorithm. Such a policy performs simple behavior cloning. The policy is partial, since it may not be defined on all possible extended states, and needs a fallback policy $\pi_*$ to provide a functional agent acting in the environment. Note that it is possible to implement sampling from a Markov model as an $\mathcal{O}(1)$ complexity operation with hash tables, making the inference very efficient and suitable for real-time execution in a video game.
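For illustration, an order-$n$ model and the stacked policy might be sketched as follows; states and actions are assumed hashable (e.g., already quantized), and the pseudo-code in \cite{ICML-HILL} remains the authoritative description.
\begin{verbatim}
import random
from collections import defaultdict

class MarkovModel:
    """Order-n model: extended state (s_t, last n actions) ->
    observed next actions."""

    def __init__(self, n):
        self.n = n
        self.table = defaultdict(list)  # extended state -> actions

    def fit(self, episode):  # episode: list of (state, action) pairs
        states, actions = zip(*episode)
        for t in range(self.n, len(episode)):
            self.table[(states[t], actions[t - self.n:t])].append(
                actions[t])

    def act(self, extended_state):
        candidates = self.table.get(extended_state)
        # O(1) hash-table lookup, suitable for real-time inference.
        return random.choice(candidates) if candidates else None

def stacked_act(stack, state, history, fallback):
    # Consult higher-order models first; fall back to pi_* on no match.
    for model in sorted(stack, key=lambda m: m.n, reverse=True):
        key = (state, tuple(history[-model.n:]) if model.n else ())
        action = model.act(key)
        if action is not None:
            return action
    return fallback(state)
\end{verbatim}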
\subsubsection{Quantization} Quantization (aka discretization) works around the restriction to discrete state-action spaces, enabling the application of the Markov Ensemble approach to environments with continuous dimensions. Quantization is commonly used in solving MDPs \cite{RL_state_of_the_art} and has been extensively studied in the signal processing literature \cite{digital_signal_proc}, \cite{vect_quant}. Quantization schemes that have been optimized for specific objectives can lead to significant gains in model performance, improving various metrics compared to ad-hoc quantization schemes \cite{RL_state_of_the_art}, \cite{Pages}. Instead of trying to pose and solve the problem of optimal quantization, we use a set of quantizers covering a range of schemes from coarse to fine. At the conceptual level, such an approach is similar to multi-resolution methods in image processing, and to mip-mapping and Level-of-Detail (LoD) representations in computer graphics \cite{comp_graph}. The simplest quantization is a uniform one with bin size $\sigma$, i.e., $Q_{\sigma}(x) = \sigma \floor*{\frac{x}{\sigma}}.$ For each continuous variable in the state-action space, we consider a sequence of quantizers with decreasing step size $Q = \{Q_{\sigma_j}\}_ {j=0,\dots, K}$, $\sigma_j > \sigma_{j+1}$, which naturally gives a quantization sequence $\Bar{Q}$ for the entire state-action space, provided $K$ is fixed across the continuous dimensions. To simplify notation, we collapse the subindex and write $Q_{j}$ to stand for $Q_{\sigma_j}$. For more general quantization schemes, the main requirement is a decreasingly smaller reconstruction error for $Q_{j+1}$ in comparison to $Q_j$. For an episode $E$, we compute its quantized representation in the obvious component-wise manner: \begin{equation} \label{eq:quantization} E_j = \Bar{Q}_j(E)=\{(\Bar{Q}_j(s_t), \Bar{Q}_j(a_t))\}_{t \in \{1,\dots,T\}}, \end{equation} which defines a multi-resolution representation of the episode as the corresponding ordered set $\{E_j\}_{j \in \{0, \dots, K\}}$ of quantized episodes, where $\Bar{Q}$ is the vector version of the quantization $Q$. In the quantized Markov model $M_{n,j}= \Bar{Q}_j(M_n)$, constructed from the episode $E_j$, we compute extended states using the corresponding quantized values. The extended state is $\Bar{Q}_j(S_{t,n})=(\Bar{Q}_j(s_t), \Bar{Q}_j(\alpha_{t,n}))$. Further, we define the model $\Bar{Q}_j(M_n)$ to contain the probabilities $P\{a_t|\Bar{Q}_j(S_{t,n})\}$ for the \emph{original} action values. In other words, we do not rely on the reconstruction mapping $\Bar{Q}_j^{-1}$ to recover actions, but store the original actions explicitly. Continuous action values tend to be unique, and the model samples from the set of values observed after the occurrences of the corresponding extended state. Our experiments show that replaying the original actions instead of their quantized representation provides better continuity and a natural, true-to-the-demonstration look of the cloned behavior. \subsubsection{Markov Ensemble} Combining stacking and multi-resolution quantization of Markov models, we obtain the Markov Ensemble $\mathcal{E}$ as an array of Markov models parameterized by the model order $n$ and the quantization schema $Q_{j}$. Note that, with the coarsest quantization $\sigma_0$ present in the multi-resolution schema, the policy always returns an action sampled using one of the quantized models, which at level $0$ always finds a match. Hence, such models always ``generalize'' by resorting to simple sampling of actions when no better match is found in the observations. Excluding too-coarse quantizers and Markov order 0 will result in executing some ``default policy'' $\pi_*$, which we discuss in the next section.
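A sketch of the uniform quantizers and the coarse-to-fine lookup is given below; the step sizes are hypothetical, and \texttt{stacked\_act} refers to the stacked-policy sketch above.
\begin{verbatim}
import math

SIGMAS = [4.0, 2.0, 1.0, 0.5]  # sigma_0 (coarsest) to sigma_K (finest)

def quantize(x, sigma):
    # Uniform quantizer Q_sigma(x) = sigma * floor(x / sigma).
    return sigma * math.floor(x / sigma)

def quantize_state(state, sigma):
    # Component-wise quantization of a continuous state vector.
    return tuple(quantize(x, sigma) for x in state)

def ensemble_act(ensemble, state, history, fallback):
    # ensemble[j] is a stack of Markov models built at resolution j;
    # finer levels are tried first, coarser ones generalize. Histories
    # are assumed to be quantized consistently with each level.
    for j in range(len(SIGMAS) - 1, -1, -1):
        action = stacked_act(ensemble[j],
                             quantize_state(state, SIGMAS[j]),
                             history, lambda s: None)
        if action is not None:
            return action
    return fallback(state)  # default policy pi_*
\end{verbatim}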
The agent execution with the outlined ensemble of quantized stacked Markov models is easy to express as an algorithm, which in essence boils down to a look-up table \cite{ICML-HILL}. \subsubsection{Interactive Training of the Markov Ensemble with human-in-the-loop (HITL) learning} If the environment allows a human to override the current policy and record new actions, then we can generate demonstrations produced interactively. For each demonstration, we construct a new Markov Ensemble and add it to the sequence of existing models. The policy based on these models consults the latest one first. If that model fails to produce an action, the next model is consulted, until there are no other models. Thanks to this sequential organization, the latest demonstrations take precedence over the earlier ones, allowing the designer to correct previous mistakes or to add new behavior for previously unobserved situations. We illustrate the logic of such an interaction with the sample git repository \cite{ibor_github_jun2019}. In our case studies, we show that often even a small number of strategically provided demonstrations results in a well-behaving policy. While the spirit of the outlined idea is similar to that of DAgger~\cite{dagger}, providing corrective labels on the newly generated samples is more time-consuming than providing new demonstration data in under-explored states. The interactivity could also be used to support newly added features or to otherwise update the existing model. The designer can directly interact with the game, select a particular moment where a new human demonstration is required, and collect new demonstration data without reloading the game. The interactivity eliminates most of the complexity of the agent design process and brings down the cost of gathering data from under-explored parts of the state space. We report an example chart for such an interactive training session in Fig.~\ref{Figure:interactive_training}. The goal in this example is to train an agent capable of an attack behavior. The training in the figure starts with the most basic game-play, when the designer provides a demonstration for approaching the target. The next training period happens after observing the trained model for a short period of time. In between the training periods, the designer makes sure that the agent reaches the intended state and is capable of executing the already learned actions. The second training period adds more elements to the behavior. The figure covers several minutes of game-play, and the sliding window size is approximately one second, or 30 frames. The competence here is equated to the model performance and is a metric of how many states the model can handle by returning an action. The figure shows that the model competence grows as it accumulates more demonstrations in each of the two training segments. The confidence metric is a natural proxy for evaluating how stylistically close the model behavior is to the demonstrations. Additional details on the interactive training are available from \cite{ICML-HILL} and the repository \cite{ibor_github_jun2019}, which allows experimentation with two classic control OpenAI environments.
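The sequential organization of interactively collected models can be sketched as a simple chain that consults the newest model first; the \texttt{act} interface below is assumed to match the ensemble lookup sketched earlier.
\begin{verbatim}
class InteractivePolicy:
    """Chain of Markov ensembles; the newest demonstration wins."""

    def __init__(self, fallback):
        self.ensembles = []       # ordered oldest -> newest
        self.fallback = fallback  # default policy pi_*

    def add_demonstration(self, ensemble):
        # Called whenever the designer records a new demonstration.
        self.ensembles.append(ensemble)

    def act(self, state, history):
        for ensemble in reversed(self.ensembles):  # newest first
            action = ensemble.act(state, history)
            if action is not None:
                return action
        return self.fallback(state)
\end{verbatim}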
The prolonged period of training may increase the size of the model, with many older demonstrations already irrelevant but still contributing to the model size. Instead of using rule-based compression of the resulting model ensemble, in the next subsection we discuss the creation of a DNN model trained from the ensemble of Markov models via a novel bootstrap approach, using the game itself as the way to compress the model representation and strip off obsolete demonstration data. Using the proposed approach, we train an agent that satisfies the design needs in only a few hours of interactive training. \subsubsection{A sketch of style distance with the Markov Ensemble} The models defined above allow us to introduce a candidate metric for measuring the stylistic difference between behaviors $V$ and $W$, represented by the corresponding sets of episodes. For a fixed quantization scheme, we can compute a sample distribution of the $n$-grams for both behaviors, which we denote by $v_n$ and $w_n$. Then the ``style'' distance $D=D_{\lambda, N}(V, W)$ between $V$ and $W$ can be estimated using the formula: $$D(V, W) = \frac{\lambda}{1 - \lambda} \sum_{n=0}^{N} \lambda^{n} d(v_n, w_n) + \frac{\lambda^{N+1}}{1 - \lambda} d(v_N, w_N),$$ where $\lambda \in (0,1)$ controls the contributions of the different $n$-gram models. As defined, a larger $\lambda$ puts more weight on longer $n$-grams and as such values more complex sequences of actions more. The function $d$ is one of the probability distances. We used the Jensen-Shannon (JSD) and Hellinger (HD) distances; both are in the range $[0,1]$, hence for all $V$ and $W$, $D \in [0,1]$. The introduced distance can augment the traditional RL rewards to preserve style during training without human input, as we discuss in \cite{LLNL_CASIS_2019}. However, the main motivation for introducing the distance $D$ is to provide a numerical metric for evaluating how the demonstrations and the learned policy differ in terms of style, without visually inspecting them in the environment. } \subsection{Bootstrapped DNN agent} \blue{The ensemble of multi-resolution Markov models described in the previous section suffers from several drawbacks. One is the linear growth of the model size with the demonstration data. The other problem stems from the limited nature of the human demonstrations. In particular, humans proactively take certain actions, and there are few if any ``negative'' examples where humans fail to navigate smoothly. Imitation learning is well known to suffer from the propagation of errors upon reaching states that are not represented in the demonstration data. In these situations, the Markov agent may get blocked and act erratically. To address both of these issues, we introduce a bootstrapped DNN agent.} \blue{When generating bootstrapped episodes, we use the Markov agent augmented with some common-sense rules (heuristics addressing the states not encountered in demonstrations). For instance, in a blocked state, a simple obstacle-avoidance fall-back policy can help the agent unblock. Combining such rules with the supervised learning from human demonstrations allows us to make the bootstrapped training dataset much richer.} We treat the existing demonstrations as a training set for a supervised policy, which predicts the next action conditioned on the gameplay history, i.e., a sequence of observed state-action pairs. Since our demonstration dataset is relatively small, we generate more data (bootstrap) by letting the trained Markov agent interact with the game environment.
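A sketch of this bootstrap loop is shown below: the Markov agent, backed by the common-sense rules, rolls out episodes in the game to produce a supervised training set. The \texttt{env} interface is hypothetical.
\begin{verbatim}
def bootstrap_dataset(markov_agent, rules, env, num_episodes):
    """Generate (state, action) pairs for supervised DNN training."""
    data = []
    for _ in range(num_episodes):
        state, done = env.reset(), False
        while not done:
            action = markov_agent.act(state)
            if action is None:         # state not covered by demos
                action = rules(state)  # e.g., obstacle avoidance
            data.append((state, action))
            state, done = env.step(action)
    # Copies of this loop can run in parallel: the policy is fixed,
    # so workers need not cross-interact (unlike online RL like A3C).
    return data
\end{verbatim}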
{ \renewcommand{\arraystretch}{1.3} \begin{table}[t] \caption{Comparison between the OpenAI 1v1 Dota 2 bot \cite{openai-dota2} training metrics and training an agent via bootstrap. \blue{The comparison is not 1-to-1 because the training objectives are very different. However, the environments are similar in complexity. These metrics highlight the practical training of agents during the game development cycle. The point is to illustrate that the training objectives play a critical role.}} \label{stats-table} \centering \vspace{0.12in} \begin{tabular}{l|l|l} & OpenAI & Bootstrapped \\ & 1v1 Bot & Agent \\ \hline Experience & $\sim$300 years & $\sim$5 min \\ & (per day) & demonstrations\\ \hline Bootstrap using & N/A & $\times$5-20 \\ game client & & \\ \hline CPU & 60,000 CPU & 1 local CPU \\ & cores on Azure & \\ \hline GPU & 256 K80 GPUs & N/A \\ & on Azure & \\ \hline Size of & $\sim$3.3kB & $\sim$0.5kB\\ observation & & \\ \hline Observations & 10 & 33 \\ per second & & \\ \end{tabular} \end{table} } Such a bootstrap process is easy to parallelize in offline policy training, as we can have multiple copies of the agent running without the need to cross-interact, as opposed to online algorithms such as A3C \cite{async_rl}. The generated augmented dataset is used to train a more sophisticated policy (a deep neural network). Due to partial observability, the low dimensionality of the feature space results in fast training across a wide range of model architectures, allowing a quick experimentation loop. We converged on a simple model with a single ``wide'' hidden layer for the motion control channels and a fully connected DNN for the discrete channels responsible for turning on/off actions like sprinting, firing, and climbing. The approach shows promise, even with many yet-unexplored opportunities to improve its efficiency. \blue{A reasonable architecture for both DNN models can be inferred from the tasks they solve. For the motion controller, the only hidden layer roughly corresponds to the temporal-spatial quantization levels in the base Markov model. When using ReLUs for the motion-controller hidden layer, we start experimentation with their number equal to double the number of quantization steps per input variable. Intuitively, training encodes those quantization levels into the layer weights. Adding more depth may help to better capture stylistic elements of the motion. To prevent overfitting, the number of model parameters should be chosen based on the size of the training dataset. In our case, overfitting to the few demonstrations may result in a better representation of the style, yet may lead to degraded in-game performance; e.g., the agent will not achieve game-play objectives as efficiently. In our experiments, we find that consistent (vs. random) demonstrations require only a single hidden layer for the motion controller to reproduce the basic stylistic features of the agent motion. A useful rule of thumb for the discrete-action DNN is to start with the number of layers roughly equal to the maximum order of the Markov model used to drive the bootstrap and to conservatively increase the model complexity only as needed. For such a DNN, we use fully connected layers with the number of ReLUs per layer roughly equal to double the dimensionality of the input space. We also observed that recurrent networks do not provide much improvement, perhaps because the engineered features capture the essence of the game state.} Table \ref{stats-table} illustrates the computational resources required by this approach as compared to training 1v1 agents in Dota 2~\cite{openai-dota2}.
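The two model heads described above might be sketched as follows (PyTorch, with hypothetical dimensions; the layer sizing follows the rules of thumb from the text rather than any exact production configuration):
\begin{verbatim}
import torch
import torch.nn as nn

class BootstrappedAgent(nn.Module):
    """A wide single hidden layer for continuous motion channels and
    a small fully connected net for discrete on/off actions."""

    def __init__(self, state_dim=40, motion_dim=2, num_discrete=4,
                 quantization_steps=16):
        super().__init__()
        # Rule of thumb: about twice the number of quantization steps
        # per input variable for the motion-controller hidden layer.
        hidden = 2 * quantization_steps * state_dim
        self.motion = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim))
        # Discrete head: roughly 2x the input dimensionality per layer.
        self.discrete = nn.Sequential(
            nn.Linear(state_dim, 2 * state_dim), nn.ReLU(),
            nn.Linear(2 * state_dim, num_discrete))

    def forward(self, state):
        # Continuous motion outputs, plus per-channel on/off
        # probabilities (sprint, fire, climb, ...).
        return self.motion(state), torch.sigmoid(self.discrete(state))
\end{verbatim}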
While we acknowledge that the goal of our agent is not to play optimally against the opponent, we observe that using model-based training augmented with expert demonstrations to solve the Markov decision process in a complex game results in huge computational savings compared to an optimal reinforcement learning approach. \subsection{\blue{Cooperative (Assistive) game-playing AI}} \label{sec:team-sports} \blue{Our last case study involves a team sports game, where the designer's goal is to train agents that can learn strategic teamplay to complement arbitrary human player styles. For example, if the human player is more offensive, we would like their teammate agent to be more defensive, and vice versa. The game in question involves two teams trying to score the most points before time runs out. To score a point, a team needs to put the ball past the goal line on the opponent's side of the field. As in several other team sports games, the players have to fight for ball possession to be able to score, and hence ball control is a big component of this game. This is a more complex challenge compared to the previous case study, which concerned the exploration of a game world. As the agent in this game is required to make strategic decisions, we resort to reinforcement learning.} \begin{figure} \centering \includegraphics[width=0.4\linewidth, angle=90]{STS2-2} \caption{A screenshot of the simple team sports simulator (STS2). Red agents are home agents attempting to score at the left end, and white agents are away agents attempting to score at the right end. The highlighted player has possession of the ball, and the arrows demonstrate a pass/shoot attempt.}\label{fig:STS2} \end{figure} Our training takes place in the simple team sports simulator (STS2).\footnote{We intend to release this environment as an open-source package.} A screenshot of STS2 game-play is shown in Fig.~\ref{fig:STS2}. The simulator embeds the rules of the game and the physics at a high level. Each of the players can be controlled by a human, a pre-built rule-based agent, or any other learned policy. The behavior of a rule-based agent is controlled by a handful of rules and constraints that govern its game-play strategy, and it is most similar to the game-controlled opponents usually implemented in adversarial games. The STS2 state space consists of the player coordinates, their velocities, and an indicator of ball possession. The action space is discrete and consists of left, right, forward, backward, pass, and shoot. Although a player can press two or more of these actions together, we exclude that possibility to keep the complexity of the action space manageable.
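For concreteness, an STS2-like observation and action encoding could look as follows; the field names are illustrative rather than the simulator's actual API.
\begin{verbatim}
import numpy as np

ACTIONS = ["left", "right", "forward", "backward", "pass", "shoot"]

def encode_state(players, ball_owner_index):
    """Flatten player coordinates and velocities plus a
    ball-possession indicator into one observation vector."""
    features = []
    for p in players:  # home and away players, in a fixed order
        features.extend([p.x, p.y, p.vx, p.vy])
    features.append(float(ball_owner_index))
    return np.asarray(features, dtype=np.float32)
\end{verbatim}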
{ Our goal is to train a teammate agent that can adapt to a human player's style. As the simplest multi-agent mode, we consider a 2v2 game. We show two scenarios. In the first case, we train an agent to cooperate with a novice human player. In the second case, we train a defensive agent that complements a high-skill offensive agent teammate. A more comprehensive study of the material in this section appears in~\cite{MAL-team-sports}. \subsubsection{Game-playing AI to assist a low-skill player} We consider training an agent in a 2v2 game \blue{that can assist a low-skill player.} We let rule-based agents take control of the opponent players. We also choose a low-skill rule-based agent to control the teammate player. The goal is to train a cooperative agent that complements the low-skill agent. In this experiment, we provide a `+/-1' reward for scoring. We also provide a `+/-0.8' {\em individual} reward for the agent for gaining/losing possession of the ball. This reward encourages the agent to regain the ball from the opponent and score. We ran the experiment using DQN, PPO, and Rainbow (with its default hyperparameters). PPO requires an order of magnitude fewer trajectories for convergence, and the final policy is similar to that of DQN. However, Rainbow did not converge at all with the default hyperparameters, and we suspect that prioritized experience replay~\cite{prioritized-experience-replay} is sensitive to its hyperparameters. The team statistics for this agent are shown in Table~\ref{tab:1w1v2_offensive}. As can be seen, the agent has learned an offensive game-play style, where it scores most of the time. It also keeps more possession than the rest of the agents in the game. { \renewcommand{\arraystretch}{1.3} \begin{table}[h] \caption{Offensive DQN agent in a 2v2 game, partnered with the rule-based agent versus two rule-based agents. Rewards: a sparse `+/-1' for scoring and an individual `+/-0.8' for gaining/losing possession of the ball.} \label{tab:1w1v2_offensive} \centering \begin{tabular}{l|c|c|c|c} Statistics & DQN-1 & Rule-based Agent & Opponent 1 & Opponent 2\\ \hline Score rate & 54\% & 20\% & 13\% & 13\%\\ \hline Possession & 30\% & 18\% & 26\% & 26\% \\ \end{tabular} \end{table} } \subsubsection{Game-playing AI to assist a high-skill offensive player} Next, we report on training an agent that complements a high-skill offensive player. In particular, we train an agent that complements the DQN-1 agent trained in the previous experiment. We train another agent as the teammate using exactly the same rewarding mechanism as the one used in training the offensive DQN-1 agent. The statistics of the game-play for the two agents playing together against the rule-based agents are shown in Table~\ref{tab:2v2_offensive}. While the second agent is trained with the same reward function as the first one, it is trained in a different environment, as it is partnered with the previously trained offensive DQN-1 agent. As can be seen, the second agent now becomes defensive and is more interested in protecting the net, recovering ball possession, and passing the ball to the offensive teammate. We can also see that the game stats for DQN-2 are similar to those of the rule-based agent in the previous experiment. { \renewcommand{\arraystretch}{1.3} \begin{table}[h] \caption{Two DQN agents in a 2v2 match against two rule-based agents, with a sparse `+/-1' reward for scoring and a `+/-0.8' individual reward for gaining/losing possession of the ball.} \label{tab:2v2_offensive} \centering \begin{tabular}{l|c|c|c|c} Statistics & DQN-1& DQN-2 & Opponent 1 & Opponent 2\\ \hline Score rate & 50\% & 26\% & 12\% & 12\%\\ \hline Possession & 28\% & 22\% & 25\% & 25\% \\ \end{tabular} \end{table} } } We repeated these experiments using PPO and Rainbow as well. We observe that the PPO agent's policy converges quickly to a simple one: when it is in possession of the ball, it wanders around in its own half without attempting to cross the half-line or to shoot until the game times out. This happens because the rule-based agent is programmed not to chase the opponent into their own half when the opponent has the ball; hence, the game goes on as described until timeout, with no scoring on either side. PPO has clearly reached a local minimum in the space of policies, which is not unexpected, as it optimizes the policy directly.
Finally, the Rainbow agent does not learn a useful policy in this case. \section{Concluding Remarks} \label{sec:conclusion} In this paper, we presented our efforts to create intelligent agents that can assist game designers in building games. To this end, we outlined a training pipeline designed to train agents in games. We presented four case studies, two on creating playtesting agents and two on creating game-playing agents. Each use case showcased intelligent agents that strike a balance between skill and style. In the first case study, we considered The Sims Mobile in its early development stage. We showed that the game dynamics could be fully extracted into a lightweight model of the game. This removed the need for learning, and the game-play experience was modeled using much simpler planning methods. The playtesting agent modeled with the A* algorithm proved effective because of the straightforward skill requirement, i.e., fast progression with a minimum number of actions taken. In the second case study, we considered a mobile game with large state and action spaces, where the designer's objective was to measure an average player's progression. We showed how model-free RL could inform the designer of their design choices for the game. The game presented a challenge in which the player's choice of actions on resource management would manifest itself near the end of the game in interactions with other players (creating an environment with delayed rewards). Our experiments demonstrated that the choice of the observation space could dramatically impact the effectiveness of the solutions trained using deep RL. We are currently investigating proper reward-shaping schemes as part of a hierarchical gameplay solution for this game. In the third case study, we considered an open-world HD game with the goal of imitating gameplay demonstrations provided by the game designer. We used a multi-resolution ensemble of Markov models as a baseline in this environment. While the baseline model performed well in most settings, it generalized poorly in underexplored states. We addressed this challenge on three fronts: basic rules, human-in-the-loop learning, and compressing the ensemble into a compact representation. We augmented the model with simple rules to avoid unintended states not present in the human demonstrations. The end-to-end training of the baseline, taking only a few hours, allowed us to quickly iterate with the game designer in a human-in-the-loop setting to correct any unintended behavior. Finally, we bootstrapped a supervised DNN model using the ensemble model as a simulator to generate training data, resulting in a compressed model with fast inference and better generalization. In the last case study, we considered a team sports game, where the goal was to train game-playing agents that could complement human players with different skills to win against a competent opposing team. In addition to the reward function, the emergent behavior of an agent trained using deep RL was also impacted by the style of its teammate player. This made reward shaping extremely challenging in this setting. As part of this investigation, we also observed that the state-of-the-art open-source deep RL models are heavily tuned to perform well on benchmark environments, including Atari games. We are currently investigating meta-policies that could adapt to a variety of teammates and opponents without much tuning.
The four case studies presented in this work highlight the challenges faced by game designers in training intelligent agents for the purposes of playtesting and game-playing AI. We would like to share two main takeaways learned throughout this work as guiding principles for the community: (1) depending on the problem at hand, we need to resort to a variety of techniques, ranging from planning to deep RL, to effectively accomplish the objectives of designers; (2) the learning potential of state-of-the-art deep RL models does not seamlessly transfer from benchmark environments to target ones without heavy tuning of their hyperparameters, so that the engineering effort and computational cost scale linearly with the number of target domains. \section*{Acknowledgement} The authors are thankful to EA Sports and other game team partners for their support and collaboration. The authors would also like to thank the anonymous reviewers and the EIC for their constructive feedback. \bibliographystyle{IEEEtran}
\section{Introduction} Operator algebraists often refer (for good reasons, of course) to UHF-algebras, such as the CAR-algebra, as noncommutative analogues of the Cantor set $2^\mathbb N$, or more precisely of the commutative $C^*$-algebra $C(2^\mathbb N)$. We introduce a different class of separable AF-algebras, which we call ``AF-algebras with Cantor property" (Definition \ref{Bratteli}), and which in some contexts are more suitable noncommutative analogues of $C(2^\mathbb N)$. One of the main features of AF-algebras with Cantor property is that they are direct limits of sequences of finite-dimensional $C^*$-algebras where the connecting maps are left-invertible homomorphisms. This property, for example, guarantees that if the algebra is infinite-dimensional, then it has plenty of nontrivial ideals and quotients, while UHF-algebras are simple. The Cantor set is a ``special and unique" space in the category of all compact (zero-dimensional) metrizable spaces, in the sense that it bears some universality and homogeneity properties: it maps onto any compact (zero-dimensional) metrizable space, and any homeomorphism between its finite quotients lifts to a homeomorphism of the Cantor set (see \cite{Kubis-FS}). Moreover, the Cantor set is the unique compact zero-dimensional metrizable space with the following property (stated algebraically): for every $m,n\in \mathbb N$ and unital embeddings $\phi: \mathbb C^n \to \mathbb C^m$ and $\alpha: \mathbb C^n \to C(2^\mathbb N)$ there is an embedding $\beta:\mathbb C^m \to C(2^\mathbb N)$ such that the diagram \begin{equation*} \begin{tikzcd} & C(2^\mathbb N) \\ \mathbb C^n \arrow[ur, hook, "\alpha"] \arrow[r, hook, "\phi"] & \mathbb C^m \arrow[u, hook, dashed, "\beta"] \end{tikzcd} \end{equation*} commutes. Note that the map $\phi$ above is automatically left-invertible, and if $\alpha$ is left-invertible then $\beta$ can be chosen to be left-invertible. Recall that a homomorphism $\phi: \mathcal{B}\to \mathcal{A}$ is left-invertible if there is a homomorphism $\pi: \mathcal{A} \to \mathcal{B}$ such that $\pi \circ \phi = \operatorname{id}_\mathcal{B}$. The AF-algebras with Cantor property satisfy similar universality and homogeneity properties in their corresponding categories of finite-dimensional $C^*$-algebras and left-invertible homomorphisms. Although in general AF-algebras with Cantor property are not assumed to be unital, when restricting to categories with unital maps one obtains unital AF-algebras with the same properties, subject to the condition that all maps are unital. For instance, the ``truly" noncommutative AF-algebra with Cantor property $\mathcal{A}_\mathfrak F$ that was mentioned in the abstract is the unique (nonunital) AF-algebra which is the limit of a sequence of finite-dimensional $C^*$-algebras and left-invertible homomorphisms (necessarily embeddings), with the property that for all finite-dimensional $C^*$-algebras $\mathcal{D}, \mathcal{E}$ and (not necessarily unital) left-invertible embeddings $\phi: \mathcal{D} \to \mathcal{E}$ and $\alpha: \mathcal{D} \to \mathcal{A}_\mathfrak F$ there is a left-invertible embedding $\beta:\mathcal{E} \to \mathcal{A}_\mathfrak F$ such that the diagram \begin{equation*} \begin{tikzcd} & \mathcal{A}_\mathfrak F \\ \mathcal{D} \arrow[ur, hook, "\alpha"] \arrow[r, hook, "\phi"] & \mathcal{E} \arrow[u, hook, dashed, "\beta"] \end{tikzcd} \end{equation*} commutes (Theorem \ref{A_F-char}).
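To illustrate the commutative extension property above in its simplest instance, take $n=1$ and $m=2$, let $\phi:\mathbb C\to \mathbb C^2$ be the diagonal embedding $\lambda \mapsto (\lambda, \lambda)$ and let $\alpha: \mathbb C \to C(2^\mathbb N)$ embed scalars as constant functions. Splitting the Cantor set into the clopen halves $U_i = \{x\in 2^\mathbb N : x(1) = i\}$, for $i = 0,1$, one may take $$\beta(\lambda_0, \lambda_1) = \lambda_0 \chi_{U_0} + \lambda_1 \chi_{U_1},$$ which satisfies $\beta \circ \phi = \alpha$; moreover, $\beta$ is left-invertible, with left inverse $f \mapsto (f(p_0), f(p_1))$ for any choice of points $p_i \in U_i$.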
One of our main results (Theorem \ref{universal AF-algebra}) states that $\mathcal{A}_\mathfrak F$ maps surjectively onto any separable AF-algebra. However, this universality property is not unique to $\mathcal{A}_\mathfrak F$ (Remark \ref{not unique}). The properties of the Cantor set mentioned above can be viewed as consequences of the fact that it is the ``Fra\"\i ss\'e limit" of the class of all nonempty finite spaces and surjective maps (as well as of the class of all nonempty compact metric spaces and continuous surjections); see \cite{Kubis-ME}. The theory of Fra\"\i ss\'e limits was introduced by R. Fra\"\i ss\'e \cite{Fraisse} in 1954 as a model-theoretic approach to the back-and-forth argument. Roughly speaking, Fra\"\i ss\'e theory establishes a correspondence between classes of finite (or finitely generated) models of a first-order language with certain properties (the joint-embedding property, the amalgamation property and having countably many isomorphism types), known as \emph{Fra\"\i ss\'e classes}, and unique (ultra-)homogeneous and universal countable structures, known as \emph{Fra\"\i ss\'e limits}, which can be represented as unions of chains of models from the corresponding class. Fra\"\i ss\'e theory has recently been extended well beyond countable first-order structures, covering in particular some topological spaces, Banach spaces and, even more recently, some $C^*$-algebras. Usually in these extensions the classical Fra\"\i ss\'e theory is replaced by its ``approximate" version. Approximate Fra\"\i ss\'e theory was developed by Ben Yaacov \cite{Ben Yaacov} in continuous model theory (an earlier approach was developed in \cite{Schoretsanitis}) and, independently, in the framework of metric-enriched categories, by the second author~\cite{Kubis-ME}. The Urysohn metric space, the separable infinite-dimensional Hilbert space \cite{Ben Yaacov}, and the Gurari\u{\i} space \cite{Kubis-Solecki} are some of the other well-known examples of Fra\"\i ss\'e limits of metric structures (see also \cite{Martino} for more on Fra\"\i ss\'e limits in functional analysis). Fra\"\i ss\'e limits of $C^*$-algebras are studied in \cite{Eagle} and \cite{Masumoto}, where it has been shown that the Jiang-Su algebra, all UHF algebras, and the hyperfinite II$_1$-factor are Fra\"\i ss\'e limits of suitable classes of finitely generated $C^*$-algebras with distinguished traces. Here we investigate the separable AF-algebras that arise as limits of Fra\"\i ss\'e classes of finite-dimensional $C^*$-algebras. Apart from $C(2^\mathbb N)$, which is the Fra\"\i ss\'e limit of the class of all commutative finite-dimensional $C^*$-algebras and unital (automatically left-invertible) embeddings, all UHF-algebras \cite[Theorem 3.4]{Eagle} and a class of simple monotracial AF-algebras described in \cite[Theorem 3.9]{Eagle} are Fra\"\i ss\'e limits of classes of finite-dimensional $C^*$-algebras. It is also worth noting that the $C^*$-algebra $\mathcal K(H)$ of all compact operators on a separable Hilbert space $H$ and the universal UHF-algebra $\mathcal{Q}$ (see Section \ref{universal uhf}) are both Fra\"\i ss\'e limits of, respectively, the category of all matrix algebras and (not necessarily unital) embeddings, and the category of all matrix algebras and unital embeddings.
In general, however, obstacles arising from the existence of traces prevent many classes of finite-dimensional $C^*$-algebras from having the amalgamation property (\cite[Proposition 3.3]{Eagle}), therefore making it difficult to realize AF-algebras as Fra\"\i ss\'e limits of such classes. The AF-algebra $C(2^\mathbb N)$ is neither a UHF-algebra nor is it among the AF-algebras considered in \cite[Theorem 3.9]{Eagle}. Therefore, it is natural to ask whether $C(2^\mathbb N)$ belongs to any larger nontrivial class of AF-algebras whose elements are Fra\"\i ss\'e limits of some class of finite-dimensional $C^*$-algebras. This was our initial motivation behind introducing the class of separable AF-algebras with Cantor property (Definition \ref{Bratteli}). This class properly contains the AF-algebras of the form $M_n \otimes C(2^\mathbb N)$, for any matrix algebra $M_n$. If $\mathcal{A}$ and $\mathcal{B}$ are $C^*$-algebras, $\phi: \mathcal{B} \hookrightarrow \mathcal{A}$ is a left-invertible embedding and $\pi:\mathcal{A} \twoheadrightarrow \mathcal{B}$ is a left inverse of $\phi$, then we have the short exact sequence \begin{equation*} \begin{tikzcd}[ ampersand replacement=\&] 0 \arrow[r, hook] \& \ker(\pi) \arrow[r, hook, "\iota"] \& \mathcal{A} \arrow[r, two heads, shift left, "\pi"] \& \mathcal{B} \arrow[l, hook', shift left, "\phi"] \arrow[r, hook] \& 0. \end{tikzcd} \end{equation*} Therefore $\mathcal{A}$ is a ``split-extension" of $\mathcal{B}$. In this case we say that $\mathcal{B}$ is a ``retract" of $\mathcal{A}$. It will be more convenient for us to say ``$\mathcal{B}$ is a retract of $\mathcal{A}$" rather than the phrase more familiar to $C^*$-algebraists, ``$\mathcal{A}$ is a split-extension of $\mathcal{B}$". In Section \ref{LI-section} we consider direct sequences of finite-dimensional $C^*$-algebras $$\mathcal{A}_1 \xrightarrow{\phi_1^2}\mathcal{A}_2 \xrightarrow{\phi_2^3}\mathcal{A}_3 \xrightarrow{\phi_3^4} \dots $$ where each $\phi_n^{n+1}$ is a left-invertible embedding. The AF-algebra $\mathcal{A}$ that arises as the limit of such a sequence has the property that every matrix algebra $M_k$ appearing as a direct-sum component (an ideal) of some $\mathcal{A}_n$ is a retract of $\mathcal{A}$ (equivalently, $\mathcal{A}$ is a split-extension of each $\mathcal{A}_n$ and of each such $M_k$). Moreover, every retract of $\mathcal{A}$ which is a matrix algebra appears as a direct-sum component of some $\mathcal{A}_n$ (Lemma \ref{sub-retract}). The AF-algebras with Cantor property are defined and studied in Section \ref{CP-section}. They are characterized by the set of their matrix algebra retracts. That is, two AF-algebras with Cantor property are isomorphic if and only if they have exactly the same matrix algebras as their retracts (Corollary \ref{retract-coro}), i.e., if and only if they are split-extensions of the same class of matrix algebras. We will use the Fra\"\i ss\'e-theoretic framework of (\emph{metric-enriched}) categories described in \cite{Kubis-ME}, rather than the (metric) model-theoretic approach to Fra\"\i ss\'e theory. A brief introduction to Fra\"\i ss\'e categories is provided in Section \ref{Flim-intro-section}. We show (Theorem \ref{closed-Fraisse}) that any category of finite-dimensional $C^*$-algebras and (not necessarily unital) left-invertible embeddings which is closed under taking direct sums and ideals of its objects (we call these categories $\oplus$-stable) is a Fra\"\i ss\'e category.
Moreover, Fra\"\i ss\'e limits of these categories have the Cantor property (Lemma \ref{A_K-CS}) and in fact any AF-algebra $\mathcal{A}$ with Cantor property can be realized as Fra\"\i ss\'e limit of such a category, where the objects of this category are precisely the finite-dimensional retracts of $\mathcal{A}$ (see Definition \ref{retract-def} and Theorem \ref{Cantor-closed-Fraisse}). In particular, the category $\mathfrak F$ of \emph{all} finite-dimensional $C^*$-algebras and left-invertible embeddings is a Fra\"\i ss\'e category (Section \ref{universal-section}). A priori, the Fra\"\i ss\'e limit $\mathcal A_{\mathfrak F}$ of this category is a separable AF-algebra with the universality property that any separable AF-algebra $\mathcal A$ which is the limit of a sequence of finite-dimensional $C^*$-algebras with left-invertible embeddings as connecting maps, can be embedded into $\mathcal{A}_\mathfrak F$ via a left-invertible embedding, i.e., $\mathcal{A}_\mathfrak F$ is a split-extension of $\mathcal{A}$. In particular, there is a surjective homomorphism $\theta :\mathcal{A}_\mathfrak F \twoheadrightarrow \mathcal{A}$. Also any separable AF-algebra is isomorphic to a quotient (by an essential ideal) of an AF-algebra which is the limit of a sequence of finite-dimensional $C^*$-algebras with left-invertible embeddings (Proposition \ref{ess-quotient}). Combining the two quotient maps, we have the following result, which is later restated as Theorem \ref{universal AF-algebra}. \begin{theorem}\label{main theorem} The category of all finite-dimensional $C^*$-algebras and left-invertible embeddings is a Fra\"\i ss\'e category. Its Fra\"\i ss\'e limit $\mathcal A_{\mathfrak F}$ is a separable AF-algebra such that \begin{itemize} \item $\mathcal A_{\mathfrak F}$ is a split-extension of any AF-algebra which is the limit of a sequence of finite-dimensional $C^*$-algebras and left-invertible connecting maps. \item there is a surjective homomorphism from $\mathcal{A}_{\mathfrak F}$ onto any separable AF-algebra. \end{itemize} \end{theorem} The Bratteli diagram of $\mathcal{A}_\mathfrak F$ is described in Proposition \ref{Bratteli diagram-A_F}, using the fact that it has the Cantor property. It is the unique AF-algebra with Cantor property which is a split-extension of every finite-dimensional $C^*$-algebra. The unital versions of these results are given in Section \ref{unital-sec} (with a bit of extra work, since unlike $\mathfrak F$, the category of all finite-dimensional $C^*$-algebras and \emph{unital} left-invertible maps is not a Fra\"\i ss\'e category, namely, it lacks the joint embedding property). Separable AF-algebras are famously characterized \cite{Elliott} by their $K_0$-invariants which are scaled countable dimension groups (with order-unit, in the unital case). By applying the $K_0$-functor to Theorem \ref{main theorem} we have the following result. \begin{corollary} There is a scaled countable dimension group (with order-unit) which maps onto any scaled countable dimension group (with order-unit). \end{corollary} The corresponding characterizations of these dimension groups are mentioned in Section \ref{K0-section}. Finally, this paper could have been written entirely in the language of partially ordered abelian groups, where the categories of ``simplicial groups" and left-invertible positive embeddings replace our categories. However, we do not see any clear advantage in doing so. 
\section{Preliminaries}\label{pre-sec} Recall that an approximately finite-dimensional (AF) algebra is a $C^*$-algebra which is an inductive limit of a sequence of finite-dimensional $C^*$-algebras. We review a few basic facts about separable AF-algebras. The background needed regarding AF-algebras is quite elementary and \cite{Davidson} is more than sufficient. The AF-algebras considered here are always separable, and therefore by ``AF-algebra" we always mean ``separable AF-algebra". AF-algebras can be characterized up to isomorphism by their Bratteli diagrams \cite{Bratteli}. However, there is no efficient way (at least visually) to decide whether two Bratteli diagrams are isomorphic, i.e., whether they correspond to isomorphic AF-algebras. A much better characterization of AF-algebras uses $K$-theory. To each $C^*$-algebra the $K_0$-functor assigns a partially ordered abelian group (its $K_0$-group), which turns out to be a complete invariant for AF-algebras \cite{Elliott}. Moreover, there is a complete description of all possible $K_0$-groups of AF-algebras. Namely, a partially ordered abelian group is isomorphic to the $K_0$-group of an AF-algebra if and only if it is a countable dimension group. We mostly use the notation of \cite{Davidson}, with minor adjustments. Let $M_k$ denote the $C^*$-algebra of all $k\times k$ matrices over $\mathbb C$. Suppose $\mathcal{A} = \varinjlim (\mathcal{A}_n, \phi_n^m)$ is an AF-algebra with Bratteli diagram $\mathfrak D$ such that each $\mathcal{A}_n \cong \mathcal{A}_{n,1} \oplus \dots\oplus \mathcal{A}_{n,\ell}$ is a finite-dimensional $C^*$-algebra and each $\mathcal{A}_{n,s}$ is a full matrix algebra. The node of $\mathfrak D$ corresponding to $\mathcal{A}_{n,s}$ is ``officially" denoted by $(n,s)$, while intrinsically it carries a natural number $\dim(n,s)$, which represents the size of the matrix algebra $\mathcal{A}_{n,s}$, i.e., $\mathcal{A}_{n,s} \cong M_{\dim(n,s)}$. For $(n,s) , (m,t) \in \mathfrak D$ we write $(n,s) \to (m,t)$ if $(n,s)$ is connected to $(m,t)$ by at least one path in $\mathfrak D$, i.e., if $\phi_n^m$ sends $\mathcal{A}_{n,s}$ faithfully into $\mathcal{A}_{m,t}$. The ideals of AF-algebras are also AF-algebras and they can be recognized from the Bratteli diagram of the algebra. Namely, the Bratteli diagrams of ideals correspond to \emph{directed} and \emph{hereditary} subsets of the Bratteli diagram of the algebra (see \cite[Theorem III.4.2]{Davidson}). Recall that an essential ideal $\mathcal{J}$ of $\mathcal{A}$ is an ideal which has nonzero intersection with every nonzero ideal of $\mathcal{A}$. Suppose $\mathfrak D$ is the Bratteli diagram of an AF-algebra $\mathcal{A}$ and $\mathcal{J}$ is an ideal of $\mathcal{A}$ whose Bratteli diagram corresponds to $\mathfrak J \subseteq \mathfrak D$. Then $\mathcal{J}$ is essential if and only if for every $(n,s)\in \mathfrak D$ there is $(m,t)\in \mathfrak J$ such that $(n,s) \to (m,t)$. If $\mathcal{D} =\mathcal{D}_1 \oplus \dots \oplus \mathcal{D}_l $ and $ \mathcal{E} = \mathcal{E}_1 \oplus \dots \oplus \mathcal{E}_k$ are finite-dimensional $C^*$-algebras, where $\mathcal{D}_i$ and $\mathcal{E}_j$ are matrix algebras, and $\phi: \mathcal{D} \to \mathcal{E}$ is a homomorphism, we denote the ``multiplicity of $\mathcal{D}_i$ in $\mathcal{E}_j$ along $\phi$" by $\operatorname{Mult}_\phi(\mathcal{D}_i, \mathcal{E}_j) $.
Also let $\operatorname{Mult}_\phi(\mathcal{D}, \mathcal{E}_j) $ denote the tuple $$(\operatorname{Mult}_\phi(\mathcal{D}_1, \mathcal{E}_j), \dots , \operatorname{Mult}_\phi(\mathcal{D}_l, \mathcal{E}_j))\in \mathbb N^l.$$ Suppose $\pi_j: \mathcal{E} \to \mathcal{E}_j$ is the canonical projection. If $\operatorname{Mult}_\phi(\mathcal{D}, \mathcal{E}_j) = (x_1, \dots, x_l) $ then the group homomorphism $K_0(\pi_j \circ \phi): \mathbb Z^l \to \mathbb Z$ sends $(y_1, \dots, y_l)$ to $\sum_{i\leq l} x_i y_i$. Therefore if $\phi, \psi : \mathcal{D} \to \mathcal{E}$ are homomorphisms, we have $K_0(\phi) = K_0(\psi)$ if and only if $\operatorname{Mult}_\phi(\mathcal{D}, \mathcal{E}_j)= \operatorname{Mult}_\psi(\mathcal{D}, \mathcal{E}_j)$ for every $j\leq k$. The following well-known facts about AF-algebras will be used several times throughout the article. We denote the unitization of $\mathcal{A}$ by $\widetilde \mathcal{A}$ and, if $u$ is a unitary in $\widetilde\mathcal{A}$, then $\operatorname{Ad}_u$ denotes the inner automorphism of $\mathcal{A}$ given by $a \mapsto u^* a u$. \begin{lemma}{\cite[Lemma III.3.2]{Davidson}}\label{twisting} Suppose $\epsilon >0$ and $\{\mathcal{A}_n\}$ is an increasing sequence of finite-dimensional $C^*$-algebras such that $\mathcal{A} = \overline{\bigcup \mathcal{A}_n}$. If $\mathcal{F}$ is a finite-dimensional subalgebra of $\mathcal{A}$, then there are $m\in \mathbb N$ and a unitary $u$ in $\widetilde \mathcal{A}$ such that $u^* \mathcal{F} u \subseteq \mathcal{A}_m $ and $\|1-u\| < \epsilon$. \end{lemma} \begin{lemma}\label{fd-af} Suppose $\mathcal{D}$ is a finite-dimensional $C^*$-algebra, $\mathcal{A}$ is a separable AF-algebra and $\phi,\psi: \mathcal{D} \to \mathcal{A}$ are homomorphisms such that $\|\phi - \psi\|< 1$. Then there is a unitary $u\in \widetilde \mathcal{A}$ such that $\operatorname{Ad}_u \circ \psi = \phi$. \end{lemma} \begin{proof} We have $K_0(\phi) = K_0(\psi)$, since otherwise for some nonzero projection $p$ in $\mathcal{D}$ the dimensions of the projections $\phi(p)$ and $\psi(p)$ differ and hence $\|\psi - \phi \| \geq 1$. Therefore there is a unitary $u$ in $\widetilde \mathcal{A}$ such that $\operatorname{Ad}_u \circ \psi = \phi$, by \cite[Lemma 7.3.2]{K-theory Rordam}. \end{proof} \begin{lemma}\label{matrix-absorbing} Suppose $\mathcal{D} = \mathcal{D}_1 \oplus \dots \oplus \mathcal{D}_l$ is a finite-dimensional $C^*$-algebra, where each $\mathcal{D}_i$ is a matrix algebra. Assume $\gamma: \mathcal{D} \hookrightarrow M_k$ and $\phi: \mathcal{D} \hookrightarrow M_\ell$ are embeddings. The following are equivalent. \begin{enumerate} \item There is an embedding $\delta: M_k \hookrightarrow M_\ell$ such that $\delta \circ \gamma = \phi$. \item There is an embedding $\delta: M_k \hookrightarrow M_\ell$ such that $\|\delta \circ \gamma - \phi \| <1$. \item There is a natural number $c\geq 1$ such that $\ell \geq c k$ and $\operatorname{Mult}_\phi(\mathcal{D}, M_\ell) = c \operatorname{Mult}_\gamma(\mathcal{D}, M_k) $. \end{enumerate} \end{lemma} \begin{proof} (1) trivially implies (2). To see (2)$\Rightarrow$(3), note that we have $$\operatorname{Mult}_\phi(\mathcal{D}_i, M_\ell) = \operatorname{Mult}_\delta(M_k, M_\ell) \operatorname{Mult}_\gamma(\mathcal{D}_i, M_k), $$ for every $i\leq l$, since otherwise $\|\delta \circ \gamma - \phi \| \geq 1$. Let $c = \operatorname{Mult}_\delta(M_k, M_\ell)$.
To see (3)$\Rightarrow$(1), let $\delta': M_k \to M_\ell$ be the embedding which sends an element of $M_k$ to $c$ identical copies of it along the diagonal of $M_\ell$. Then we have $K_0(\phi) = K_0(\delta' \circ \gamma)$, by the assumption of (3). Therefore there is a unitary $u$ in $M_\ell$ such that $\operatorname{Ad}_u \circ\delta' \circ \gamma = \phi$. Let $\delta= \operatorname{Ad}_u \circ\delta' $. \end{proof} \section{AF-algebras with left-invertible connecting maps}\label{LI-section} Suppose $\mathcal{A}, \mathcal{B}$ are $C^*$-algebras. A homomorphism $\phi: \mathcal{B} \to \mathcal{A}$ is \emph{left-invertible} if there is a (necessarily surjective) homomorphism $\pi: \mathcal{A} \twoheadrightarrow \mathcal{B}$ such that $\pi \circ \phi = \operatorname{id}_{\mathcal{B}}$. Clearly a left-invertible homomorphism is necessarily an embedding. \begin{definition}\label{retract-def} We say $\mathcal{B}$ is a \emph{retract} of $\mathcal{A}$ if there is a left-invertible embedding from $\mathcal{B}$ into $\mathcal{A}$. We say a subalgebra $\mathcal{B}$ of $\mathcal{A}$ is an \emph{inner} retract if and only if there is a homomorphism $\theta: \mathcal{A} \twoheadrightarrow \mathcal{B}$ such that $\theta|_{\mathcal{B}} = \operatorname{id}_\mathcal{B}$. \end{definition} The image of a left-invertible embedding $\phi: \mathcal{B} \hookrightarrow \mathcal{A}$ is an inner retract of $\mathcal{A}$. Note that $\mathcal{B}$ is a retract of $\mathcal{A}$ if and only if $\mathcal{A}$ is a split-extension of $\mathcal{B}$. The next proposition contains some elementary facts about retracts of finite-dimensional $C^*$-algebras and left-invertible maps between them. They follow from elementary facts about finite-dimensional $C^*$-algebras, e.g., that matrix algebras are simple. \begin{proposition} \label{fact1} A $C^*$-algebra $\mathcal{D}$ is a retract of a finite-dimensional $C^*$-algebra $\mathcal{E}$ if and only if $\mathcal{E} \cong \mathcal{D} \oplus \mathcal{F}$, for some finite-dimensional $C^*$-algebra $\mathcal{F}$. In other words, $\mathcal{D}$ is a retract of $\mathcal{E}$ if and only if $\mathcal{D}$ is isomorphic to an ideal of $\mathcal{E}$. Suppose $\phi: \mathcal{D} \hookrightarrow \mathcal{E}$ is a (unital) left-invertible embedding and $\pi: \mathcal{E} \twoheadrightarrow \mathcal{D}$ is a left inverse of $\phi$. Then $\mathcal{E}$ can be written as $\mathcal{E}_0 \oplus \mathcal{E}_1$ and there are $\phi_0, \phi_1$ such that $\phi_0 : \mathcal{D} \to \mathcal{E}_0$ is an isomorphism, $\phi_1: \mathcal{D} \rightarrow \mathcal{E}_1$ is a (unital) homomorphism and \begin{itemize} \item $\phi(d)= (\phi_0(d), \phi_1(d))$, for every $d\in \mathcal{D}$, \item $\pi(e_0, e_1) = \phi^{ -1}_0(e_0)$, for every $(e_0, e_1) \in \mathcal{E}_0 \oplus \mathcal{E}_1$. \end{itemize} \end{proposition} Suppose $(\mathcal{A}_n, \phi_n^m)$ is a sequence where each connecting map $\phi_n^m: \mathcal{A}_n \hookrightarrow \mathcal{A}_m$ is left-invertible. Let $\pi_n^{n+1}: \mathcal{A}_{n+1} \twoheadrightarrow \mathcal{A}_n$ be a left inverse of $\phi_n^{n+1}$, for each $n$. For $m> n$ define $\pi_n^m: \mathcal{A}_m \twoheadrightarrow \mathcal{A}_n$ by $\pi_n^m = \pi^{n+1}_n \circ \dots \circ \pi_{m-1}^m$. Then $\pi_n^m$ is a left inverse of $\phi_n^m$ which satisfies $\pi_n^m \circ \pi^k_m = \pi_n^k$, for every $n\leq m\leq k$.
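For a concrete instance of Proposition \ref{fact1}, consider the left-invertible embedding $$\phi: M_2 \hookrightarrow M_2 \oplus M_4, \qquad \phi(d) = (d, \operatorname{diag}(d,d)),$$ whose left inverse $\pi(e_0, e_1) = e_0$ is the projection onto the first direct summand. In the notation of Proposition \ref{fact1} we have $\mathcal{E}_0 = M_2$, $\phi_0 = \operatorname{id}_{M_2}$, $\mathcal{E}_1 = M_4$ and $\phi_1(d) = \operatorname{diag}(d,d)$; in particular $M_2$ is a retract of $M_2 \oplus M_4$, in accordance with the fact that the retracts of a finite-dimensional $C^*$-algebra are exactly its ideals.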
\begin{definition} \label{left-invertible-AF} We say $(\mathcal{A}_n, \phi_n^m)$ is a \emph{left-invertible sequence} if each $\phi_n^m$ is left-invertible and $\phi_n^n = \operatorname{id}_{\mathcal{A}_n}$. We call $(\pi_n^m)$ a \emph{compatible} left inverse of the left-invertible sequence $(\mathcal{A}_n, \phi_n^m)$ if $\pi_n^m: \mathcal{A}_m \twoheadrightarrow \mathcal{A}_n$ are surjective homomorphisms such that $\pi_n^m \circ \pi^k_m = \pi_n^k$ and $\pi_n^m \circ \phi_n^m = \operatorname{id}_{\mathcal{A}_n}$, for every $n\leq m\leq k$. \end{definition} The following simple lemma is true for arbitrary categories, see~\cite[Lemma 6.2]{Kubis-FS}. \begin{lemma}\label{reverse-maps} Suppose $(\mathcal{A}_n, \phi_n^m)$ is a left-invertible sequence of $C^*$-algebras with a compatible left inverse $(\pi_n^m)$ and $\mathcal{A} =\varinjlim (\mathcal{A}_n, \phi_n^m) $. Then for every $n$ there are surjective homomorphisms $\pi_n^\infty: \mathcal{A} \twoheadrightarrow \mathcal{A}_n$ such that $\pi_n^\infty \circ \phi_n^\infty = \operatorname{id}_{\mathcal{A}_n}$ and $ \pi_n^m \circ \pi_m^\infty = \pi_n^\infty$ for each $n\leq m$. \end{lemma} \begin{proof} First define $\pi_n^\infty$ on $\bigcup_i \phi_{i}^\infty [\mathcal{A}_i]$, which is dense in $\mathcal{A}$. If $a= \phi_m^\infty(a_m)$ for some $m$ and $a_m \in \mathcal{A}_m$, then let $$ \pi_n^\infty(a) = \begin{cases} \pi_n^m (a_m) &\mbox{if } n \leq m \\ \phi_m^n(a_m) &\mbox{if } n >m \end{cases} $$ These maps are well-defined (norm-decreasing) homomorphisms, so they extend to $\mathcal{A}$ and satisfy the requirements of the lemma. \end{proof} In particular, each $\mathcal{A}_n$, or any retract of it, is a retract of $\mathcal{A}$. The converse is also true. \begin{lemma}\label{sub-retract} Suppose $(\mathcal{A}_n, \phi_n^m)$ is a left-invertible sequence of finite-dimensional $C^*$-algebras with $\mathcal{A} =\varinjlim (\mathcal{A}_n, \phi_n^m) $. \begin{enumerate} \item If $\mathcal{D}$ is a finite-dimensional subalgebra of $\mathcal{A}$, then $\mathcal{D}$ is contained in an inner retract of $\mathcal{A}$. \item If $\mathcal{D}$ is a finite-dimensional retract of $\mathcal{A}$, then there is $m\in \mathbb N$ such that $\mathcal{D}$ is a retract of $\mathcal{A}_{m'}$ for every $m'\geq m$. \end{enumerate} \end{lemma} \begin{proof} Let $(\pi_n^m)$ be a compatible left inverse of $(\mathcal{A}_n, \phi_n^m)$. (1) If $\mathcal{D}$ is a finite-dimensional subalgebra of $\mathcal{A}$, then for some $m\in \mathbb N$ and a unitary $u\in \widetilde \mathcal{A}$, it is contained in $u \phi_m^\infty [\mathcal{A}_m]u^*$ (Lemma~\ref{twisting}). The latter is an inner retract of $\mathcal{A}$. (2) If $\mathcal{D}$ is a retract of $\mathcal{A}$, there is an embedding $\phi:\mathcal{D} \hookrightarrow \mathcal{A}$ with a left inverse $\pi: \mathcal{A}\twoheadrightarrow\mathcal{D}$. Find $m$ and a unitary $u$ in $\widetilde \mathcal{A}$ such that $u^* \phi[\mathcal{D}] u \subseteq \phi_m^\infty [\mathcal{A}_m]$. This implies that $$ \phi_m^\infty\circ \pi_m^\infty(u^*\phi(d) u) = u^*\phi(d) u$$ for every $d\in\mathcal{D}$. Define $\psi: \mathcal{D} \hookrightarrow \mathcal{A}_m$ by $\psi(d) = \pi_m^\infty (u^*\phi(d) u)$. Then $\psi$ has a left inverse $\theta: \mathcal{A}_m \twoheadrightarrow \mathcal{D}$ defined by $\theta(x) =\pi(u\phi_m^\infty(x) u^*)$, since for every $d\in\mathcal{D}$ we have $$ \theta(\psi(d)) = \theta(\pi_m^\infty (u^*\phi(d) u)) =\pi(u \phi_m^\infty\circ \pi_m^\infty(u^*\phi(d) u) u^*) = \pi(\phi(d))=d.
$$ Because $\mathcal{A}_{m}$ is a retract of $\mathcal{A}_{m'}$ for every $m' \geq m$, we conclude that $\mathcal{D}$ is also a retract of $\mathcal{A}_{m'}$. \end{proof} \begin{remark}\label{nonexamples-remark} It is not surprising that many AF-algebras are not limits of left-invertible sequences of finite-dimensional $C^*$-algebras. This is because, for instance, such an AF-algebra has infinitely many ideals (unless it is finite-dimensional), and admits finite traces, as it maps onto finite-dimensional $C^*$-algebras. Therefore, for example, $\mathcal K(\ell_2)$, the $C^*$-algebra of all compact operators on $\ell_2$, and infinite-dimensional UHF-algebras are not limits of left-invertible sequences of finite-dimensional $C^*$-algebras. Recall that a $C^*$-algebra is \emph{stable} if its tensor product with $\mathcal K(\ell_2)$ is isomorphic to itself. Blackadar's characterization of stable AF-algebras \cite{Blackadar-AF} (see also \cite[Corollary 1.5.8]{Rordam-Stormer}) states that a separable AF-algebra $\mathcal{A}$ is stable if and only if no nonzero ideal of $\mathcal{A}$ admits a nonzero finite (bounded) trace. Therefore no stable AF-algebra is the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras. \end{remark} The following proposition gives another criterion to distinguish these AF-algebras. For example, it can be used directly to show that infinite-dimensional UHF-algebras are not limits of left-invertible sequences of finite-dimensional $C^*$-algebras. \begin{proposition}\label{Theorem III.3.5} Suppose $\mathcal{A}$ is an AF-algebra isomorphic to the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras and $\mathcal{A} = \overline{\bigcup_n \mathcal{B}_n}$ for an increasing sequence $(\mathcal{B}_n)$ of finite-dimensional subalgebras. Then there is an increasing sequence $(n_i)$ of natural numbers and an increasing sequence $(\mathcal{C}_{i})$ of finite-dimensional subalgebras of $\mathcal{A}$ such that $\mathcal{A} = \overline{\bigcup_n \mathcal{C}_n}$ and $\mathcal{B}_{n_i}\subseteq \mathcal{C}_{i} \subseteq \mathcal{B}_{n_{i+1}}$ and $\mathcal{C}_{i}$ is an inner retract of $\mathcal{C}_{i+1}$ for every $i\in \mathbb N$. \end{proposition} \begin{proof} Suppose $\mathcal{A}$ is the limit of a left-invertible direct sequence $(\mathcal{A}_n, \phi_n^m)$ of finite-dimensional $C^*$-algebras. Theorem III.3.5 of \cite{Davidson}, applied to the sequences $(\mathcal{B}_n)$ and $(\phi_n^\infty[\mathcal{A}_n])$, shows that there are sequences $(n_i)$, $(m_i)$ of natural numbers and a unitary $u\in \widetilde \mathcal{A}$ such that $$\mathcal{B}_{n_i}\subseteq u^*\phi_{m_i}^\infty[\mathcal{A}_{m_i}]u \subseteq \mathcal{B}_{n_{i+1}}$$ for every $i\in \mathbb N$. Let $\mathcal{C}_i = u^*\phi_{m_i}^\infty[\mathcal{A}_{m_i}]u $. \end{proof} However, the next proposition shows that any AF-algebra is a quotient of an AF-algebra which is the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras. \begin{proposition}\label{ess-quotient} For every (unital) AF-algebra $\mathcal{B}$ there is a (unital) AF-algebra $\mathcal{A}\supseteq \mathcal{B}$ which is the limit of a (unital) left-invertible sequence of finite-dimensional $C^*$-algebras and $\mathcal{A}/\mathcal{J}\cong \mathcal{B}$ for an essential ideal $\mathcal{J}$ of $\mathcal{A}$. \end{proposition} \begin{proof} Suppose $\mathcal{B}$ is the limit of the sequence $(\mathcal{B}_n, \psi_n^m)$ of finite-dimensional $C^*$-algebras and homomorphisms.
Let $\mathcal{A}$ denote the limit of the following diagram: \begin{equation*}\label{diag2} \begin{tikzcd}[row sep=small, column sep=large] \mathcal{B}_1 \arrow[r, "\psi_1^2"] \arrow[rd, "id"] & \mathcal{B}_{2} \arrow[rd, "id"] \arrow[r, "\psi_{2}^{3}"] & \mathcal{B}_{3} \arrow[rd, "id"] \arrow[r, "\psi_{3}^{4}"] & \mathcal{B}_{4} \, {\dots \atop \begin{turn}{-30} \dots \end{turn}}\\ & \mathcal{B}_{1} \arrow[rd, "id"] & \mathcal{B}_{2} \arrow[rd, "id"] & \mathcal{B}_{3} \, \begin{turn}{-30} \dots \end{turn}& \\ & & \mathcal{B}_{1} \arrow[rd, "id"] & \mathcal{B}_{2} \, \begin{turn}{-30} \dots \end{turn}& \\ &&& \mathcal{B}_1 \, \begin{turn}{-30} \dots \end{turn}& \\ & & & & \null \end{tikzcd} \end{equation*} Then $\mathcal{A}$ is an AF-algebra which contains $\mathcal{B}$ and the connecting maps are left-invertible embeddings. The ideal $\mathcal{J}$ corresponding to the (directed and hereditary) subdiagram of the above diagram which contains all the nodes except the ones on the top line is essential and clearly $\mathcal{A}/\mathcal{J} \cong \mathcal{B}$. \end{proof} \section{AF-algebras with the Cantor property} \label{CP-section} We define the notion of the ``Cantor property" for an AF-algebra. These algebras have properties which are, in a sense, generalizations of the ones satisfied (some trivially) by $C(2^\mathbb N)$. It is easier to state these properties using the notation for Bratteli diagrams that we fixed in Section \ref{pre-sec}. For example, every node of the Bratteli diagram of $C(2^\mathbb N)$ splits in two, which here is generalized to ``each node splits into at least two nodes with the same dimension at some further stage", which of course guarantees that there are no minimal projections in the limit algebra. \begin{definition}\label{Bratteli} We say an AF-algebra $\mathcal{A}$ has \emph{the Cantor property} if there is a sequence $(\mathcal{A}_n, \phi_n^m)$ of finite-dimensional $C^*$-algebras and embeddings such that $\mathcal{A} = \varinjlim (\mathcal{A}_n, \phi_n^m)$ and the Bratteli diagram $\mathfrak D$ of $(\mathcal{A}_n, \phi_n^m)$ has the following properties: \begin{enumerate} \item[(D0)] For every $(n,s)\in \mathfrak D$ there is $(n+1,t)\in \mathfrak D$ such that $\dim(n,s) = \dim(n+1,t)$ and $(n,s)\to (n+1,t)$. \item[(D1)] For every $(n,s)\in \mathfrak D$ there are distinct nodes $(m,t),(m,t')\in \mathfrak D$, for some $m>n$, such that $\dim(n,s) = \dim(m,t) = \dim(m,t')$ and $(n,s)\to (m,t)$ and $(n,s) \to (m,t')$. \item[(D2)] For every $(n,s_1), \dots, (n,s_k), (n', s')\in \mathfrak D$ and $\{x_1, \dots , x_k\}\subseteq \mathbb N$ such that $\sum_{i=1}^k x_i\dim(n, s_i) \leq \dim(n', s')$, there is $m\geq n$ such that for some $(m,t)\in \mathfrak D$ we have $\dim(m,t) = \dim(n',s')$ and there are exactly $x_i$ distinct paths from $(n, s_i)$ to $(m,t)$ in $\mathfrak D$. \end{enumerate} \end{definition} The Bratteli diagram of $C(2^\mathbb N)$ trivially satisfies these conditions and therefore $C(2^\mathbb N)$ has the Cantor property. \begin{remark}{\label{remark-CP-def}} Condition (D0) states that $(\mathcal{A}_n, \phi_n^m)$ is a left-invertible sequence. Dropping (D0) from Definition \ref{Bratteli} does not change the definition (i.e., $\mathcal{A}$ has the Cantor property if and only if it has a representing sequence satisfying (D1) and (D2)). This is because (D1) alone implies the existence of a left-invertible sequence with limit $\mathcal{A}$ that still satisfies (D1) and (D2). 
In fact, $\mathcal{A}$ has the Cantor property if and only if any representing sequence satisfies (D1) and (D2). However, we add (D0) for simplicity to make sure that $(\mathcal{A}_n, \phi_n^m)$ is already a left-invertible sequence, since, as we shall see later, being the limit of a left-invertible direct sequence of finite-dimensional $C^*$-algebras is a crucial and helpful property of AF-algebras with the Cantor property. Condition (D2) can be rewritten as \begin{enumerate} \item[(D$2'$)] For every ideal $\mathcal{D}$ of some $\mathcal{A}_n$, if $M_\ell$ is a retract of $\mathcal{A}$ and $\gamma : \mathcal{D} \hookrightarrow M_\ell$ is an embedding, then there is $m\geq n$ and $\mathcal{A}_{m,t}\subseteq \mathcal{A}_m$ such that $\mathcal{A}_{m,t} \cong M_\ell$ and $\operatorname{Mult}_{\phi_n^m}(\mathcal{D} , \mathcal{A}_{m,t} ) = \operatorname{Mult}_{\gamma}(\mathcal{D}, M_\ell)$. \end{enumerate} \end{remark} Definition \ref{Bratteli} may be adjusted for unital AF-algebras, where all the maps are considered to be unital. \begin{definition}\label{CS-unital-def} A unital AF-algebra $\mathcal{A}$ has the Cantor property if and only if it satisfies the conditions of Definition \ref{Bratteli}, where the $\phi_n^m$ are unital and in condition (D2) the inequality $\sum_{i=1}^k x_i \dim(n, s_i) \leq \dim(n', s')$ is replaced with equality. \end{definition} \begin{proposition}\label{sum-retract} Suppose $\mathcal{A}$ is an AF-algebra with Cantor property. If $\mathcal{D}, \mathcal{E}$ are finite-dimensional retracts of $\mathcal{A}$, then so is $\mathcal{D}\oplus \mathcal{E}$. \end{proposition} \begin{proof} Suppose $\mathcal{D}= \mathcal{D}_1 \oplus \mathcal{D}_2 \oplus \dots \oplus \mathcal{D}_l$ and $\mathcal{E}= \mathcal{E}_1 \oplus \mathcal{E}_2 \oplus \dots \oplus \mathcal{E}_k$, where $\mathcal{D}_i,\mathcal{E}_i$ are isomorphic to matrix algebras. By Lemma \ref{sub-retract} both $\mathcal{D}$ and $\mathcal{E}$ are retracts of some $\mathcal{A}_m$, which means that all $\mathcal{D}_i$ and $\mathcal{E}_i$ appear in $\mathcal{A}_m$ as retracts (ideals). By (D1) and enlarging $m$, if necessary, we can make sure these retracts in $\mathcal{A}_m$ are orthogonal, meaning that $\mathcal{A}_m \cong \mathcal{D} \oplus \mathcal{E} \oplus \mathcal{F}$, for some finite-dimensional $C^*$-algebra $\mathcal{F}$. Therefore $\mathcal{D} \oplus \mathcal{E}$ is a retract of $\mathcal{A}_m$ and as a result, it is a retract of $\mathcal{A}$. \end{proof} \begin{lemma}\label{absorbing} Suppose $\mathcal{A}$ is an AF-algebra with the Cantor property, witnessed by $(\mathcal{A}_n, \phi_n^m)$ satisfying Definition \ref{Bratteli}, and $\mathcal{E}$ is a finite-dimensional retract of $\mathcal{A}$. If $\gamma: \mathcal{A}_n \hookrightarrow \mathcal{E}$ is a left-invertible embedding then there are $m\geq n$ and a left-invertible embedding $\delta: \mathcal{E} \hookrightarrow \mathcal{A}_m$ such that $\delta\circ \gamma = \phi_n^m$. \end{lemma} \begin{proof} Suppose $\mathcal{A}_n = \mathcal{A}_{n,1} \oplus \dots \oplus \mathcal{A}_{n,l}$ and $\mathcal{E}= \mathcal{E}_1 \oplus \mathcal{E}_2 \oplus \dots \oplus \mathcal{E}_k$ where $\mathcal{E}_i$ and $\mathcal{A}_{n,j}$ are all matrix algebras. Let $\pi_i$ denote the canonical projection from $\mathcal{E}$ onto $\mathcal{E}_i$. For every $i\leq k $ put $$Y_i = \{j \leq l : \pi_i(\gamma[\mathcal{A}_{n,j}]) \neq 0\},$$ and let $\mathcal{A}_{n,Y_i} = \bigoplus_{j\in Y_i} \mathcal{A}_{n,j}$.
Then $\mathcal{A}_{n,Y_i}$ is an ideal (a retract) of $\mathcal{A}_n$ and the map $\gamma_i : \mathcal{A}_{n,Y_i} \hookrightarrow \mathcal{E}_i$, the restriction of $\gamma$ to $\mathcal{A}_{n,Y_i}$ composed with $\pi_i$, is an embedding. Since $\mathcal{E}$ is a finite-dimensional retract of $\mathcal{A}$, it is a retract of some $\mathcal{A}_{n^\prime}$ (Lemma \ref{sub-retract}). So each $\mathcal{E}_i$ is a retract of $\mathcal{A}_{n^\prime}$. By applying (D2), for each $i\leq k$ there are $m_i\geq n $ and $(m_i,t_{i}) \in \mathfrak D$ such that $\dim(m_i, t_{i}) = \dim(\mathcal{E}_i)$ and $\operatorname{Mult}_{\phi_n^{m_i}}(\mathcal{A}_{n,Y_i} , \mathcal{A}_{m_i, t_i}) = \operatorname{Mult}_{\gamma_i}(\mathcal{A}_{n,Y_i}, \mathcal{E}_i)$. Let $m = \max\{m_i : i\leq k\}$ and by (D0) find $(m,s_i)$ such that $\dim(m_i, t_i) = \dim(m,s_i)$ and $(m_i, t_i) \to (m,s_i)$. Applying (D1) and possibly increasing $m$ allows us to make sure that $(m,s_i) \neq (m,s_j)$ for distinct $i,j$ and therefore the $\mathcal{A}_{m, s_{i}} $ are pairwise orthogonal. Then $\{\mathcal{A}_{m,s_i} : i\leq k\}$ is a sequence of pairwise orthogonal subalgebras (retracts) of $\mathcal{A}_m$ such that $\mathcal{A}_{m,s_i} \cong \mathcal{E}_i$ and $$\operatorname{Mult}_{\phi_n^{m}}(\mathcal{A}_{n,Y_i} , \mathcal{A}_{m,s_i}) = \operatorname{Mult}_{\gamma_i}(\mathcal{A}_{n,Y_i}, \mathcal{E}_i).$$ By Lemma \ref{matrix-absorbing} there are isomorphisms $\delta_i : \mathcal{E}_i \hookrightarrow \mathcal{A}_{m,s_i}$ such that $\delta_i \circ \gamma_i $ is equal to the restriction of $\phi_n^m$ to $\mathcal{A}_{n,Y_i}$ projected onto $\mathcal{A}_{m,s_i}$. Suppose $1_m$ is the unit of $\mathcal{A}_m$ and $q_i$ is the unit of $\mathcal{A}_{m,s_i}$. Each $q_i$ is a central projection of $\mathcal{A}_m$, because the $\mathcal{A}_{m,s_i}$ are ideals of $\mathcal{A}_m$. Since $\gamma$ is left-invertible, for each $j\leq l$ there is $k(j) \leq k$ such that $\mathcal{A}_{n,j}\cong \mathcal{E}_{k(j)}$ and $\hat\gamma_{j} = \pi_{k(j)}\circ \gamma|_{\mathcal{A}_{n,j}}$ is an isomorphism. Also for $j\leq l$ let $$X_j = \{i\leq k : \pi_i(\gamma[\mathcal{A}_{n,j}]) \neq 0\}.$$ Note that \begin{enumerate} \item $k(j) \in X_j$, \item $k(j') \notin X_j$ if $j\neq j'$, \item $i\in X_j \Leftrightarrow j\in Y_i$. \end{enumerate} Let $\hat\delta_j: \mathcal{E}_{k(j)} \to (1_m - \sum_{i\in X_j} q_i) \mathcal{A}_m (1_m - \sum_{i\in X_j} q_i)$ be the homomorphism defined by $$\hat\delta_j (e) = (1_m - \sum_{i\in X_j}q_i)\phi_n^m(\hat\gamma_j^{-1}(e)) (1_m - \sum_{i\in X_j} q_i).$$ Define $\delta: \mathcal{E} \hookrightarrow \mathcal{A}_m $ by $$\delta(e_1, \dots ,e_k) = \hat\delta_1(e_{k(1)}) + \dots + \hat\delta_l(e_{k(l)}) + \delta_1 (e_1) + \dots +\delta_k (e_k).$$ Since each $\delta_i$ is an isomorphism, it is clear that $\delta$ is left-invertible. To check that $\delta \circ \gamma = \phi_n^m$, by linearity of the maps it is enough to check it only for $\bar a = (0, \dots, 0,a_j, 0, \dots,0)\in \mathcal{A}_n$. If $\gamma(\bar a) = (e_1, \dots , e_k)$ then $$ e_i = \begin{cases} 0 & i\notin X_j \\ \gamma_i(\bar a) & i \in X_j \end{cases} $$ for $i\leq k$. Also note that $e_{k(j)}=\hat \gamma_j(a_j)$. Assume $X_j = \{r_1, \dots, r_\ell\}$.
Then by (1)-(3) we have \begin{align*} \delta \circ \gamma (\bar a) & = \hat \delta_j (\hat \gamma_j (a_j))+ \delta_{r_1}(\gamma_{r_1}(\bar a))+ \dots+ \delta_{r_\ell}(\gamma_{r_\ell}(\bar a)) \\ &= (1_m - \sum_{i\in X_j}q_i)\phi_n^m(\bar a) (1_m - \sum_{i\in X_j} q_i) + q_{r_1} \phi_n^m(\bar a) q_{r_1} +\dots + q_{r_\ell} \phi_n^m(\bar a) q_{r_\ell} \\ &= \phi_n^m(\bar a). \end{align*} This completes the proof. \end{proof} \subsection{AF-algebras with the Cantor property are $C(2^\mathbb N)$-absorbing} Suppose $\mathcal{A}$ is an AF-algebra with Cantor property. Define $\mathcal{A}^\mathcal C$ to be the limit of the sequence $(\mathcal{B}_n, \psi_n^m)$ such that $\mathcal{B}_n = \bigoplus_{i \leq 2^{n-1}} \mathcal{A}_{n} \cong \mathbb C^{2^{n-1}} \otimes \mathcal{A}_n$ and $\psi_n^{n+1} = \bigoplus_{i \leq 2^n} \phi_n^{n+1}$, as shown in the following diagram \begin{equation}\label{A^N-diag} \begin{tikzcd} [row sep=0.05mm , column sep=large] & & & ... \\ & & \mathcal{A}_{3} \arrow[ru, hook] \arrow[rd, hook] \\ &&& ...\\ & \mathcal{A}_{2} \arrow[ruu, hook, "\phi_2^3"] \arrow[rdd, hook, "\phi_2^3"] & & \\ &&& ...\\ & & \mathcal{A}_{3} \arrow[ru, hook] \arrow[rd, hook] \\ &&& ...\\ \mathcal{A}_1 \arrow[ruuuu, hook, "\phi_1^2"] \arrow[rdddd, hook, "\phi_1^2"] \\ &&& ...\\ & & \mathcal{A}_{3} \arrow[ru, hook] \arrow[rd, hook] \\ &&& ...\\ & \mathcal{A}_{2} \arrow[ruu, hook, "\phi_2^3"] \arrow[rdd, hook, "\phi_2^3"] & & \\ &&& ...\\ & & \mathcal{A}_{3} \arrow[ru, hook] \arrow[rd, hook] \\ &&& ...\\ \end{tikzcd} \end{equation} It is straightforward to check that $\mathcal{A}^\mathcal C \cong \mathcal{A} \otimes C(2^\mathbb N)\cong C(2^\mathbb N, \mathcal{A})$. \begin{lemma}\label{A^N} $\mathcal{A}^\mathcal C$ has the Cantor property. \end{lemma} \begin{proof} We check that $(\mathcal{B}_n, \psi_n^m)$ satisfies (D0)--(D2). Each $\psi_n^{n+1}$ is left-invertible by Proposition \ref{fact1}, since $\phi_n^{n+1}$ is left-invertible; therefore (D0) holds. Conditions (D1) and (D2) follow by inspecting the Bratteli diagram (\ref{A^N-diag}), since $\mathcal{A}$ satisfies them. \end{proof} \begin{lemma}\label{A^N=A} Suppose $\mathcal{A}$ is an AF-algebra with Cantor property. Then $\mathcal{A} \otimes C(2^\mathbb N)$ is isomorphic to $\mathcal{A}$. \end{lemma} \begin{proof} Identify $\mathcal{A} \otimes C(2^\mathbb N)$ with $\mathcal{A}^\mathcal C$. Find sequences $(m_i)$ and $(n_i)$ of natural numbers and left-invertible embeddings $\gamma_i : \mathcal{A}_{n_i} \hookrightarrow \mathcal{B}_{m_{i+1}}$ and $\delta_i : \mathcal{B}_{m_i} \hookrightarrow \mathcal{A}_{n_{i}}$ such that $n_1 = m_1 = 1$, $m_2 = 2$, $\gamma_1 = \psi_1^2$, and the diagram below commutes.
\begin{equation}\label{intertwining} \begin{tikzcd} \mathcal{B}_1 \arrow[r, hook, "\psi_1^2"] \ar[d,equal] & \mathcal{B}_{m_2} \arrow[rd, hook, "\delta_2"] \arrow[rr, hook, "\psi_{m_2}^{m_3}"] & & \mathcal{B}_{m_3} \arrow[rd, hook, "\delta_3"] \arrow[rr, hook, "\psi_{m_3}^{m_4}"] & & \dots & \mathcal{A}^\mathcal C \arrow[d, "\phi"] \\ \mathcal{A}_{1} \arrow[rr, hook, "\phi_1^{n_2}"] \arrow[ru, hook, "\gamma_1"] & & \mathcal{A}_{n_2} \arrow[rr, hook, "\phi_{n_2}^{n_3}"] \arrow[ru, hook, "\gamma_2"] & & \mathcal{A}_{n_3} \arrow[r, hook, "\phi_{n_3}^{n_4}"]\arrow[ru, hook, "\gamma_3"] & \dots & \mathcal{A} \end{tikzcd} \end{equation} The existence of such $\gamma_i$ and $\delta_i$ is guaranteed by Lemma \ref{absorbing}, since each $\mathcal{B}_i$ is a retract of $\mathcal{A}$, by Lemma \ref{sub-retract} and Proposition \ref{sum-retract}, and of course each $\mathcal{A}_i$ is a retract of $\mathcal{B}_i$. The universal property of inductive limits implies the existence of an isomorphism between $\mathcal{A}$ and $\mathcal{A}^\mathcal C$. \end{proof} \begin{remark} As we will see in Section \ref{tensor-cp}, the tensor product of two AF-algebras with Cantor property does not necessarily have the Cantor property. \end{remark} \subsection{Ideals} Let $\mathcal{A} = \varinjlim_n (\mathcal{A}_n, \phi_n^m)$ be an AF-algebra with Cantor property, such that the Bratteli diagram $\mathfrak D$ of $(\mathcal{A}_n,\phi_n^m)$ satisfies (D0)--(D2) of Definition \ref{Bratteli}. Let $\mathfrak J\subseteq \mathfrak D$ denote the Bratteli diagram of an ideal $\mathcal{J}\subseteq \mathcal{A}$. Put $\mathcal{J}_n = \bigoplus_{(n,s)\in \mathfrak J} \mathcal{A}_{n,s}$, which is an ideal (a retract) of $\mathcal{A}_n$. Then $\mathcal{J} = \varinjlim_n (\mathcal{J}_n, \phi_n^m|_{\mathcal{J}_n})$. It is automatic from the fact that $\mathfrak J$ is a directed subdiagram of $\mathfrak D$ that each $\phi_n^m|_{\mathcal{J}_n}: \mathcal{J}_n \hookrightarrow \mathcal{J}_m$ is left-invertible and that $(\mathcal{J}_n, \phi_n^m|_{\mathcal{J}_n})$ satisfies (D0)--(D2). In particular: \begin{proposition} Any ideal of an AF-algebra with Cantor property also has the Cantor property. \end{proposition} Here is another elementary fact about $C(2^\mathbb N)$ that is (essentially by Lemma \ref{A^N=A}) passed on to AF-algebras with Cantor property. \begin{proposition}\label{prop} Suppose $\mathcal{A}$ is an AF-algebra with Cantor property and $\mathcal{Q}$ is a quotient of $\mathcal{A}$. Then there is a surjection $\eta: \mathcal{A} \twoheadrightarrow \mathcal{Q}$ such that $\ker(\eta)$ is an essential ideal of $\mathcal{A}$. \end{proposition} \begin{proof} It is enough to show that there is an essential ideal $\mathcal{J}$ of $\mathcal{A}$ such that $\mathcal{A}/\mathcal{J}$ is isomorphic to $\mathcal{A}$. In fact, we will show that there is an essential ideal $\mathcal{J}$ of $\mathcal{A}^\mathcal C$ such that $\mathcal{A}^\mathcal C/ \mathcal{J}$ is isomorphic to $\mathcal{A}$. This is enough since $\mathcal{A}^\mathcal C$ is isomorphic to $\mathcal{A}$ (Lemma \ref{A^N=A}). Let $\mathfrak D$ be the Bratteli diagram of $\mathcal{A}^\mathcal C$ as in Diagram (\ref{A^N-diag}). Let $\mathfrak J$ be the directed and hereditary subdiagram of $\mathfrak D$ containing all the nodes in Diagram (\ref{A^N-diag}) except those on the lowest line. Being directed and hereditary, $\mathfrak J$ corresponds to an ideal $\mathcal{J}$; moreover, $\mathfrak J$ intersects every nonempty directed and hereditary subdiagram of $\mathfrak D$.
Therefore $\mathcal{J}$ is an essential ideal of $\mathcal{A}^\mathcal C$ and $\mathcal{A}^\mathcal C/\mathcal{J}$ is isomorphic to the limit of the sequence $\mathcal{A}_1 \xrightarrow{\phi_1^2} \mathcal{A}_{2} \xrightarrow{\phi_2^3} \mathcal{A}_{3}\xrightarrow{\phi_3^4} \dots $ in the lowest line of Diagram (\ref{A^N-diag}), which is $\mathcal{A}$. \end{proof} \section{Fra\"\i ss\'e categories}\label{Flim-intro-section} Suppose $\mathfrak{K}$ is a category of metric structures with non-expansive (1-Lipschitz) morphisms. We refer to objects and morphisms (arrows) of $\mathfrak{K}$ as $\mathfrak{K}$-objects and $\mathfrak{K}$-arrows, respectively. We write $A\in \mathfrak{K}$ if $A$ is a $\mathfrak{K}$-object and $\mathfrak{K}(A, B)$ to denote the set of all $\mathfrak{K}$-arrows from $A$ to $B\in \mathfrak{K}$. The category $\mathfrak{K}$ is \emph{metric-enriched} or \emph{enriched over metric spaces} if for all $\mathfrak{K}$-objects $A$ and $B$ there is a metric $d$ on $\mathfrak{K}(A,B)$ satisfying $$d(\psi_0 \circ \phi,\psi_1 \circ \phi) \leq d(\psi_0 ,\psi_1) \qquad \text{and} \qquad d(\psi \circ \phi_0,\psi \circ \phi_1) \leq d(\phi_0 ,\phi_1)$$ whenever the compositions make sense. We say $\mathfrak{K}$ is \emph{enriched over complete metric spaces} if $\mathfrak{K}(A,B)$ is a complete metric space for all $\mathfrak{K}$-objects $A$, $B$. A \emph{$\mathfrak{K}$-sequence} is a direct sequence in $\mathfrak{K}$, that is, a covariant functor from the category of all positive integers (treated as a poset) into $\mathfrak{K}$. In our case, $\mathfrak{K}$ will always be a category of finite-dimensional $C^*$-algebras with left-invertible embeddings. However, we would like to invoke the general theory of Fra\"\i ss\'e categories, which is possibly applicable to other similar contexts. \begin{definition} \label{Fraisse-def} We say $\mathfrak{K}$ is a \emph{Fra\"\i ss\'e category} if \begin{enumerate} \item[(JEP)] $\mathfrak{K}$ has \emph{the joint embedding property}: for $A, B \in \mathfrak{K}$ there is $C\in \mathfrak{K}$ such that $\mathfrak{K}(A, C)$ and $\mathfrak{K}(B,C)$ are nonempty. \item[(NAP)] $\mathfrak{K}$ has \emph{the near amalgamation property}: for every $\epsilon>0$, objects $A,B,C\in \mathfrak{K}$, arrows $\phi\in \mathfrak{K}(A, B)$ and $\psi\in \mathfrak{K}(A, C)$, there are $D\in \mathfrak{K}$ and $\phi' \in \mathfrak{K}(B,D)$ and $\psi'\in \mathfrak{K}(C,D)$ such that $d(\phi'\circ \phi , \psi' \circ \psi)<\epsilon$. \item[(SEP)] $\mathfrak{K}$ is \emph{separable}: there is a countable \emph{dominating} subcategory $\mathfrak C$, that is, \begin{itemize} \item for every $A\in \mathfrak{K}$ there is $C\in \mathfrak C$ and a $\mathfrak{K}$-arrow $\phi: A \to C$, \item for every $\epsilon >0$ and a $\mathfrak{K}$-arrow $\phi: A \to B$ with $A\in \mathfrak C$, there exist a $\mathfrak{K}$-arrow $\psi: B \to C$ with $C \in \mathfrak C$ and a $\mathfrak C$-arrow $\alpha: A \to C$ such that $d(\alpha, \psi \circ \phi)< \epsilon$. \end{itemize} \end{enumerate} \end{definition} Now suppose that $\mathfrak{K}$ is contained in a bigger metric-enriched category $\mathfrak{L}$ so that every sequence in $\mathfrak{K}$ has a limit in $\mathfrak{L}$.
We say that $\mathfrak{K} \subseteq \mathfrak{L}$ has the \emph{almost factorization property} if given any sequence $(X_n,f_n^m)$ in $\mathfrak{K}$ with limit $X_\infty$ in $\mathfrak{L}$, for every $\epsilon>0$, for every $\mathfrak{L}$-arrow $g \colon A \to X_\infty$ with $A \in \mathfrak{K}$ there is a $\mathfrak{K}$-arrow $g' \colon A \to X_n$ for some positive integer $n$, such that $d(f_n^\infty \circ g', g) \leq \epsilon$, where $f_n^\infty \colon X_n \to X_\infty$ comes from the limiting cocone\footnote{Formally, the limit, or rather \emph{colimit} of $(X_n, f_n^m)$ is a pair consisting of an $\mathfrak{L}$-object $X_\infty$ and a sequence of $\mathfrak{L}$-arrows $f_n^\infty \colon X_n \to X_\infty$ satisfying suitable conditions. This sequence is called the (co-)limiting cocone. We use the word ``limit" instead of ``colimit" as we consider only covariant functors from the positive integers, called \emph{sequences}.}. \begin{theorem}{\cite[Theorem 3.3]{Kubis-ME}}\label{generic} Suppose $\mathfrak{K}$ is a Fra\"\i ss\'e category. Then there exists a sequence $(U_n, \phi_n^m)$ in $\mathfrak{K}$ satisfying \begin{itemize} \item[{\rm(F)}] for every $n\in \mathbb N$, for every $\epsilon>0$ and for every $\mathfrak{K}$-arrow $\gamma: U_n \to D$, there are $m\geq n$ and a $\mathfrak{K}$-arrow $\delta: D \to U_m$ such that $d(\phi_n^m , \delta \circ \gamma)<\epsilon$. \end{itemize} \end{theorem} If $\mathfrak{K}$ is a Fra\"\i ss\'e category, the $\mathfrak{K}$-sequence $(U_n, \phi_n^m)$ from Theorem \ref{generic} is uniquely determined by the ``Fra\"\i ss\'e condition" (F). That is, any two $\mathfrak{K}$-sequences satisfying (F) can be approximately intertwined (there is an approximate back-and-forth between them), and hence the limits of the sequences (typically in a bigger category containing $\mathfrak{K}$) must be isomorphic (see \cite[Theorem 3.5]{Kubis-ME}). Therefore the $\mathfrak{K}$-sequence satisfying (F) is usually referred to as ``the" \emph{Fra\"\i ss\'e sequence}. The limit of the Fra\"\i ss\'e sequence is called the \emph{Fra\"\i ss\'e limit} of the category $\mathfrak{K}$. In our case, $\mathfrak{K}$ will be a category of finite-dimensional $C^*$-algebras and the limit is just the inductive limit (also called colimit) in the category of all (or just separable) $C^*$-algebras. \begin{theorem}[{cf. \cite{Kubis-ME}}]\label{uni-hom} Assume $\mathfrak{K}$ is a Fra\"\i ss\'e category contained in a category $\mathfrak{L}$ such that every sequence in $\mathfrak{K}$ has a limit in $\mathfrak{L}$ and every $\mathfrak{L}$-object is the limit of some sequence in $\mathfrak{K}$. Let $U\in \mathfrak{L}$ be the Fra\"\i ss\'e limit of $\mathfrak{K}$. Then \begin{itemize}[leftmargin=*] \item (uniqueness) $U$ is unique, up to isomorphisms. \item(universality) For every $\mathfrak{L}$-object $B$ there is an $\mathfrak{L}$-arrow $\phi: B \to U$. \end{itemize} Furthermore, if $\mathfrak{K} \subseteq \mathfrak{L}$ has the almost factorization property then \begin{itemize}[leftmargin=*] \item(almost $\mathfrak{K}$-homogeneity) For every $\epsilon>0$, $\mathfrak{K}$-object $A$ and $\mathfrak{L}$-arrows $\phi_i : A \to U$ ($i=0,1$), there is an automorphism $\eta: U \to U$ such that $d(\eta \circ \phi_0 , \phi_1) <\epsilon$. 
\end{itemize} $$\begin{tikzcd} & & U \ar[dd, "\eta"] \\ A \ar[rru, "\phi_0"] \ar[rrd, "\phi_1"] & & \\ & & U \end{tikzcd}$$ \end{theorem} \begin{definition} Let $\ddagger\mathfrak{K}$ denote the category with the same objects as $\mathfrak{K}$, but a $\ddagger \mathfrak{K}$-arrow from $A$ to $B$ is a pair $(\phi, \pi)$ where $\phi, \pi$ are $\mathfrak{K}$-arrows, $\phi:A \to B$ is left-invertible and $\pi: B \to A$ is a left inverse of $\phi$. We will denote such a $\ddagger \mathfrak{K}$-arrow by $(\phi, \pi): A \to B$. The composition is $(\phi, \pi) \circ (\phi', \pi') = (\phi \circ \phi', \pi' \circ \pi)$. The category $\ddagger \mathfrak{K}$ is usually called the category of \emph{embedding-projection pairs} or briefly EP-pairs over $\mathfrak{K}$ (see \cite{Kubis-FS}). \end{definition} \begin{definition} We say $\ddagger \mathfrak{K}$ has the \emph{near proper amalgamation property} if for every $\epsilon>0$, objects $A, B, C\in \mathfrak{K}$, arrows $(\phi, \pi)\in \ddagger\mathfrak{K}(A, B)$ and $(\psi, \theta)\in \ddagger\mathfrak{K}(A, C)$, there are $D\in \ddagger\mathfrak{K}$ and $(\phi', \pi') \in \ddagger\mathfrak{K}(B, D)$ and $(\psi', \theta')\in \ddagger\mathfrak{K}(C, D)$ such that the diagram \begin{equation*}\label{diag-PAP} \begin{tikzcd} & B \arrow[dr, hook', "\phi'"] \arrow[dl, shift left=1ex, two heads, "\pi"] \\ A \arrow[ur, hook', "\phi"] \arrow[dr, hook, "\psi"] & & D \arrow[ul, shift left=1ex, two heads, "\pi'"] \arrow[dl,shift left=1ex, two heads, "\theta'"]\\ & C \arrow[ur, hook, "\psi^\prime "] \arrow[ul, shift left=1ex, two heads, "\theta"] \end{tikzcd} \end{equation*} ``fully commutes" up to $\epsilon$, meaning that $d( \phi' \circ \phi , \psi' \circ \psi )$, $d( \pi \circ \pi' , \theta \circ \theta' )$, $d( \phi \circ \theta , \pi' \circ \psi' )$ and $d( \psi \circ \pi , \theta' \circ \phi' )$ are all less than or equal to $\epsilon$. We say $\ddagger \mathfrak{K}$ has the ``proper amalgamation property" if the above holds with $\epsilon = 0$. \end{definition} Let us denote by $\dagger \mathfrak{K}$ the category of left-invertible $\mathfrak{K}$-arrows. In other words, $\dagger \mathfrak{K}$ is the image of $\ddagger \mathfrak{K}$ under the functor that forgets the left inverse, namely mapping $(\phi, \pi)$ to $\phi$. Note that all three categories $\mathfrak{K}$, $\dagger \mathfrak{K}$, and $\ddagger \mathfrak{K}$ have the same objects. \begin{lemma}\label{EPlim-retract} Suppose $\mathfrak{L}$ is enriched over complete metric spaces, $\dagger \mathfrak{K}$ is a Fra\"\i ss\'e category with Fra\"\i ss\'e limit $U$, and $\ddagger \mathfrak{K}$ has the proper amalgamation property. Then for every $\mathfrak{L}$-object $B$ isomorphic to the limit of a sequence in $\dagger \mathfrak{K}$ there is a pair of $\mathfrak{L}$-arrows $\alpha \colon B \to U$, $\beta \colon U \to B$ such that $$\beta \circ \alpha = \operatorname{id}_{B}.$$ \end{lemma} \begin{proof} Suppose $(U_n, \phi_n^m)$ is a Fra\"\i ss\'e sequence in $\dagger \mathfrak{K}$. Suppose first that the sequence satisfies (F) with $\epsilon=0$ and that $\ddagger \mathfrak{K}$ has the proper amalgamation property, namely with $\epsilon=0$ (this will be the case in the next section). In this case we do not use the fact that $\mathfrak{L}$ is enriched over complete metric spaces. Fix a $\dagger \mathfrak{K}$-sequence $(B_n, \psi_n^m)$ whose direct limit is $B$.
For each $n$ we may choose a left inverse $\theta_n^{n+1}$ to $\psi_n^{n+1}$ and next, setting $\theta_n^m = \theta_n^{n+1} \circ \dots \circ \theta_{m-1}^m$ for every $n<m$, we obtain a $\ddagger \mathfrak{K}$-sequence $(B_n, (\psi_n^m, \theta_n^m))$ whose direct limit is $B$. Using (JEP) of $\dagger \mathfrak{K}$ and fixing arbitrary left inverses, find $F_1\in \mathfrak{K}$ and $\ddagger \mathfrak{K}$-arrows $(\gamma_1, \eta_1): U_1 \to F_1$ and $(\mu_1, \nu_1): B_1 \to F_1$. By (F), and again fixing arbitrary left inverses, there are $n_1 \geq 1$ and a $\ddagger \mathfrak{K}$-arrow $(\delta_1, \lambda_1): F_1 \to U_{n_1}$ such that $\phi_1^{n_1} = \delta_1 \circ \gamma_1$ (see Diagram (\ref{diag-retract}) below). Consider the composition arrow $(\delta_1 \circ \mu_1, \nu_1 \circ \lambda_1): B_1 \to U_{n_1}$ together with $(\psi_1^2, \theta_1^2): B_1 \to B_2$, and use the proper amalgamation property to find $F_2\in \mathfrak{K}$ and $\ddagger \mathfrak{K}$-arrows $(\mu_2, \nu_2): B_2 \to F_2$ and $(\gamma_2, \eta_2): U_{n_1} \to F_2$ such that \begin{equation}\label{eq2} \gamma_2 \circ \delta_1 \circ \mu_1 = \mu_2 \circ \psi_1^2 \qquad \text{and} \qquad \nu_2 \circ \gamma_2 = \psi_1^2 \circ \nu_1 \circ \lambda_1. \end{equation} Again using (F) we can find $n_2 \geq n_1$ and $(\delta_2, \lambda_2): F_2 \to U_{n_2}$ such that \begin{equation}\label{eq3} \phi_{n_1}^{n_2} = \delta_2 \circ \gamma_2. \end{equation} Combining the equations in (\ref{eq2}) and (\ref{eq3}) we have (this can also be checked directly in Diagram (\ref{diag-retract})): \begin{equation*}\label{eq4} \phi_{n_1}^{n_2} \circ \delta_1 \circ \mu_1 = \delta_2 \circ \mu_2 \circ \psi_1^2 \qquad \text{and} \qquad \psi_{1}^{2} \circ \nu_1 \circ \lambda_1 = \nu_2 \circ \lambda_2 \circ \phi_{n_1}^{n_2}. \end{equation*} Again use the proper amalgamation property to find $F_3\in \mathfrak{K}$ and $(\mu_3, \nu_3): B_3 \to F_3$ and $(\gamma_3, \eta_3): U_{n_2} \to F_3$.
Continuing this procedure, find a $\ddagger \mathfrak{K}$-arrow $(\delta_3, \lambda_3): F_3 \to U_{n_3}$, for some $n_3\geq n_2$, such that \begin{equation*} \phi_{n_2}^{n_3} \circ \delta_2 \circ \mu_2 = \delta_3 \circ \mu_3 \circ \psi_2^3 \qquad \text{and} \qquad \psi_{2}^{3} \circ \nu_2 \circ \lambda_2 =\nu_3 \circ \lambda_3 \circ \phi_{n_2}^{n_3}. \end{equation*}
\begin{equation}\label{diag-retract} \begin{tikzcd}[column sep=normal] U_1 \arrow[rr, hook, "\phi_1^{n_1}"] \arrow[dr, hook, shift left=1ex, "\gamma_1"] & & U_{n_1} \arrow[rr, hook, "\phi_{n_1}^{n_2}"] \arrow[dr, hook, shift left=1ex, "\gamma_2"] \arrow[dl, two heads, "\lambda_1"] & & U_{n_2} \arrow[rr, hook, "\phi_{n_2}^{n_3}"] \arrow[dr, hook, shift left=1ex, "\gamma_3"] \arrow[dl, two heads, "\lambda_2"] & & U_{n_3} \arrow[r, hook, "\phi_{n_3}^{n_4}"] \arrow[dl, two heads, "\lambda_3"] & \dots \ \ U \arrow[dd, two heads, shift left=3ex, "\beta"] \\ & F_1 \arrow[dl, two heads, "\nu_1"] \arrow[ul, two heads, "\eta_1"] \arrow[ur, hook, shift left=1ex, "\delta_1"] & & F_2 \arrow[dl, two heads, "\nu_2"] \arrow[ul, two heads, "\eta_2"] \arrow[ur, hook, shift left=1ex, "\delta_2"] & & F_3 \arrow[dl, two heads, "\nu_3"] \arrow[ul, two heads, "\eta_3"] \arrow[ur, hook, shift left=1ex, "\delta_3"] & & \\ B_1 \arrow[rr, hook, shift left=1ex, "\psi_1^2"] \arrow[ur, hook, shift left=1ex, "\mu_1"] & & B_2 \arrow[rr, hook,shift left=1ex, "\psi_2^3"] \arrow[ur, hook, shift left=1ex, "\mu_2"] \arrow[ll, two heads, "\theta_1^2"] & & B_3 \arrow[rr, hook, shift left=1ex, "\psi_3^4"] \arrow[ur, hook, shift left=1ex, "\mu_3"] \arrow[ll, two heads, "\theta_2^3"] & & \dots \arrow[ll, two heads, "\theta_3^4"] & \ \ \ \ \ B \arrow[uu, hook, shift right=2ex, "\alpha"] \end{tikzcd} \end{equation}
Let $\alpha_i = \delta_i \circ \mu_i$ and $\beta_i = \nu_i \circ \lambda_i$. By the construction, for every $i\in \mathbb N$ we have $$\phi_{n_i}^{n_{i+1}} \circ \alpha_i = \alpha_{i+1} \circ \psi_{i}^{i+1} \qquad \text{and} \qquad \psi_i^{i+1} \circ \beta_i = \beta_{i+1} \circ \phi_{n_i}^{n_{i+1}},$$ and $\beta_i$ is a left inverse of $\alpha_i$. Then $\alpha = \lim_i \alpha_i$ is a well-defined arrow from $B$ to $U$ and $\beta = \lim_i \beta_i$ is a well-defined arrow from $U$ onto $B$ such that $\beta \circ \alpha = \operatorname{id}_{B}$. Finally, if $\ddagger \mathfrak{K}$ has the near proper amalgamation property and the sequence $(U_n, \phi_n^m)$ satisfies (F) with arbitrary $\epsilon>0$, we repeat the arguments above, except that Diagram~(\ref{diag-retract}) is no longer commutative. On the other hand, at step $n$ we may choose $\epsilon = 2^{-n}$, and then the arrows $\alpha$ and $\beta$ are obtained as limits of suitable Cauchy sequences in $\mathfrak{L}(B, U)$ and $\mathfrak{L}(U, B)$, respectively. This is the only place where we need to know that $\mathfrak{L}$ is enriched over complete metric spaces. \end{proof}
Let us mention that the concept of EP-pairs has already been used by Garbuli\'nska-W\c egrzyn \cite{JGW} in the category of finite-dimensional normed spaces, obtaining isometric uniqueness of a complementably universal Banach space.
\section{Categories of finite-dimensional $C^*$-algebras and left-invertible mappings}
In this section $\mathfrak{K}$ always denotes a (naturally metric-enriched) category whose objects are (not necessarily all) finite-dimensional $C^*$-algebras, closed under isomorphisms, and $\mathfrak{K}$-arrows are left-invertible embeddings.
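Before introducing the relevant categories, it may help to have a concrete, finitary picture of such arrows. The sketch below (in Python; the function names are ours, and the stated criteria are our informal rendering of the standard description of homomorphisms between finite-dimensional $C^*$-algebras via multiplicity matrices, not a result proved in this paper) records an object $\bigoplus_i M_{k_i}$ as the list of its matrix sizes and an embedding into $\bigoplus_j M_{m_j}$ as a non-negative integer matrix $A$ with nonzero columns and $\sum_i A_{ji}k_i \leq m_j$; left-invertibility is tested by looking for pairwise distinct summands $M_{m_j}\cong M_{k_i}$ receiving the $i$-th summand with multiplicity one and nothing else.
\begin{verbatim}
def is_embedding(k, m, A):
    # A[j][i] = multiplicity of M_{k[i]} inside M_{m[j]}
    cols_nonzero = all(any(A[j][i] for j in range(len(m)))
                       for i in range(len(k)))
    fits = all(sum(A[j][i] * k[i] for i in range(len(k))) <= m[j]
               for j in range(len(m)))
    return cols_nonzero and fits

def is_left_invertible(k, m, A):
    # greedily match each summand M_{k[i]} to an isomorphic copy in the
    # codomain that receives it with multiplicity one and nothing else
    if not is_embedding(k, m, A):
        return False
    used = set()
    for i in range(len(k)):
        js = [j for j in range(len(m)) if j not in used
              and m[j] == k[i] and A[j][i] == 1
              and all(A[j][i2] == 0 for i2 in range(len(k)) if i2 != i)]
        if not js:
            return False
        used.add(js[0])
    return True

print(is_left_invertible([2], [2, 4], [[1], [1]]))  # True
print(is_left_invertible([2], [4], [[2]]))          # False: no M_2 summand
\end{verbatim}
The greedy matching is adequate for such small examples; the point is only that, under these assumptions, left-invertibility is a visible combinatorial feature of a Bratteli diagram, which is what the conditions (D0)--(D2) of Definition \ref{Bratteli} exploit below.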
For such $\mathfrak{K}$, let $\mathfrak{L}\mathfrak{K}$ denote the ``category of limits'' of $\mathfrak{K}$: the category whose objects are limits of $\mathfrak{K}$-sequences, and if $\mathcal{B}$ and $\mathcal C$ are $\mathfrak{L}\mathfrak{K}$-objects, then an $\mathfrak{L}\mathfrak{K}$-arrow from $\mathcal{B}$ into $\mathcal C$ is a left-invertible embedding $\phi:\mathcal{B} \hookrightarrow \mathcal C$. Clearly $\mathfrak{L}\mathfrak{K}$ contains $\mathfrak{K}$ as a full subcategory. The metric defined between $\mathfrak{L}\mathfrak{K}$-arrows $\phi$ and $\psi$ with the same domain and codomain is $\|\phi - \psi\|$. For every such category $\mathfrak{K}$, let $\widehat{\mathfrak{K}}$ denote the category whose objects are exactly the objects of $\mathfrak{K}$, but the $\widehat{\mathfrak{K}}$-morphisms are all homomorphisms between the objects. Then we can define the corresponding category of EP-pairs $\ddagger\widehat{\mathfrak{K}}$ as in the previous section. In what follows, let us agree to write $\ddagger\mathfrak{K}$ instead of $\ddagger\widehat{\mathfrak{K}}$. Hence, the $\ddagger\mathfrak{K}$-morphisms are of the form $(\phi,\pi)$, where $\phi$ is a $\mathfrak{K}$-morphism and $\pi$ is a homomorphism which is a left inverse of $\phi$.
\begin{remark}\label{e=0} If $\mathfrak{K}$ is a category of finite-dimensional $C^*$-algebras and embeddings, then it has the near amalgamation property (NAP) if and only if it has the amalgamation property (\cite[Lemma 3.2]{Eagle}), namely, with $\epsilon = 0$. Similarly, the near proper amalgamation property of $\ddagger \mathfrak{K}$ is equivalent to the proper amalgamation property of $\ddagger \mathfrak{K}$. Also in this case, the Fra\"\i ss\'e sequence $(\mathcal{U}_n,\phi_n^m)$, whenever it exists for $\mathfrak{K}$, satisfies the Fra\"\i ss\'e condition (F) of Theorem \ref{generic} with $\epsilon = 0$. Therefore in this section (F) refers to the following condition. \begin{itemize} \item[(F)] for every $n\in \mathbb N$ and for every $\mathfrak{K}$-arrow $\gamma: \mathcal U_n \to \mathcal{D}$, there are $m\geq n$ and a $\mathfrak{K}$-arrow $\delta: \mathcal{D} \to \mathcal U_m$ such that $\phi_n^m = \delta \circ \gamma$. \end{itemize} \end{remark}
\begin{lemma} \label{L1-L3} $\mathfrak{K} \subseteq \mathfrak{L}\mathfrak{K}$ has the almost factorization property. \end{lemma}
\begin{proof} Suppose $\mathcal{B} \in \mathfrak{L}\mathfrak{K}$ is the limit of the $\mathfrak{K}$-sequence $(\mathcal{B}_n, \psi_n^{m})$ and $(\theta_n^m)$ is a compatible sequence of left inverses for $(\psi_n^{n+1})$. Assume $\mathcal{D}$ is a $\mathfrak{K}$-object and $\phi: \mathcal{D} \hookrightarrow \mathcal{B}$ is an $\mathfrak{L}\mathfrak{K}$-arrow with a left inverse $\pi: \mathcal{B} \twoheadrightarrow \mathcal{D}$. For given $\epsilon>0$, find $n$ and a unitary $u$ in $\widetilde{\mathcal{B}}$ such that $u^* \phi[\mathcal{D}] u \subseteq \psi_n^\infty [\mathcal{B}_n]$ and $\|u - 1\|<\epsilon/2$ (Lemma \ref{twisting}). Define $\psi: \mathcal{D} \hookrightarrow \mathcal{B}_n$ by $\psi(d) = \theta_n^\infty (u^*\phi(d) u)$. Then $\psi$ has a left inverse $\theta: \mathcal{B}_n \twoheadrightarrow \mathcal{D}$ defined by $\theta(x) =\pi(u\psi_n^\infty(x) u^*)$ (see the proof of Lemma \ref{sub-retract} (2)). The condition $\|u - 1\|<\epsilon/2$ implies that $\| \psi_n^\infty(\psi(d)) - \phi(d)\|<\epsilon$ for every $d$ in the unit ball of $\mathcal{D}$. \end{proof}
\begin{lemma} $\mathfrak{K}$ is separable.
\end{lemma}
\begin{proof} There are, up to isomorphism, only countably many $\mathfrak{K}$-objects, namely finite direct sums of matrix algebras. The set of all embeddings between two fixed finite-dimensional $C^*$-algebras is a separable metric space. Thus, $\mathfrak{K}$ trivially has a countable dominating subcategory. \end{proof}
The following statement is a direct consequence of Lemma~\ref{EPlim-retract}.
\begin{corollary}\label{CORlim-retract} Suppose $\mathfrak{K}$ is a Fra\"\i ss\'e category of $C^*$-algebras with the Fra\"\i ss\'e limit $\mathcal U$ and $\ddagger \mathfrak{K}$ has the proper amalgamation property. Then $\mathcal U$ is a split-extension of every AF-algebra $\mathcal{B}$ in $\mathfrak{L}\mathfrak{K}$. In particular, $\mathcal U$ maps onto any AF-algebra in $\mathfrak{L}\mathfrak{K}$. \end{corollary}
\section{AF-algebras with Cantor property as Fra\"\i ss\'e limits}\label{Ax-sec}
Suppose $\mathfrak{K}$ is a category of (not necessarily all) finite-dimensional $C^*$-algebras, closed under isomorphisms, and $\mathfrak{K}$-arrows are left-invertible embeddings.
\begin{definition}\label{closed-def} We say $\mathfrak{K}$ is $\oplus$-\emph{stable} if it satisfies the following conditions. \begin{enumerate} \item If $\mathcal{D}$ is a $\mathfrak{K}$-object, then so is any retract (ideal) of $\mathcal{D}$; \item $\mathcal{D}\oplus \mathcal{E} \in \mathfrak{K}$ whenever $\mathcal{D}, \mathcal{E}\in \mathfrak{K}$. \end{enumerate} \end{definition}
In general $0$ is a retract of any $C^*$-algebra, and therefore it is the initial object of any $\oplus$-stable category, except when working with unital categories (where all the $\mathfrak{K}$-arrows are unital), in which case $0$ is no longer a $\mathfrak{K}$-object. Unital categories are briefly discussed in Section \ref{unital-sec}.
\begin{theorem}\label{closed-Fraisse} Suppose $\mathfrak{K}$ is a $\oplus$-stable category. Then $\ddagger \mathfrak{K}$ has the proper amalgamation property. In particular, $\mathfrak{K}$ is a Fra\"\i ss\'e category. \end{theorem}
\begin{proof} Suppose $\mathcal{D}, \mathcal{E}$ and $\mathcal{F}$ are $\mathfrak{K}$-objects and $\ddagger\mathfrak{K}$-arrows $(\phi, \pi):\mathcal{D} \to \mathcal{E} $ and $(\psi, \theta): \mathcal{D} \to \mathcal{F}$ are given. Since $\phi$ and $\psi$ are left-invertible, by Proposition \ref{fact1} we can identify $\mathcal{E}$ and $\mathcal{F}$ with $\mathcal{E}_0 \oplus\mathcal{E}_1$ and $\mathcal{F}_0 \oplus \mathcal{F}_1$, respectively, and find $\phi_0, \phi_1, \psi_0, \psi_1$ such that \begin{itemize} \item $\phi_0: \mathcal{D} \to \mathcal{E}_0$ and $\psi_0: \mathcal{D} \to \mathcal{F}_0$ are isomorphisms, \item $\phi_1: \mathcal{D} \to \mathcal{E}_1$ and $\psi_1: \mathcal{D} \to \mathcal{F}_1$ are homomorphisms, \item $\phi(d) = (\phi_0(d), \phi_1(d))$ and $\psi(d) = (\psi_0(d), \psi_1(d))$ for every $d\in \mathcal{D}$, \item $\pi(e_0, e_1) = \phi_0^{-1}(e_0)$ and $\theta(f_0, f_1) = \psi_0^{-1}(f_0)$. \end{itemize} Define homomorphisms $\mu: \mathcal{E} \rightarrow \mathcal{F}_1$ and $\nu: \mathcal{F} \rightarrow \mathcal{E}_1$ by $\mu = \psi_1 \circ \pi$ and $\nu = \phi_1 \circ \theta$ (see Diagram (\ref{diag1})). Since $\mathfrak{K}$ is $\oplus$-stable, $\mathcal{D}\oplus \mathcal{E}_1\oplus \mathcal{F}_1$ is a $\mathfrak{K}$-object.
Define $\mathfrak{K}$-arrows $\phi': \mathcal{E} \hookrightarrow \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1$ and $\psi': \mathcal{F} \hookrightarrow \mathcal{D}\oplus \mathcal{E}_1\oplus \mathcal{F}_1$ by $$\phi'(e_0,e_1) = (\phi_0^{-1}(e_0), e_1, \mu(e_0, e_1))$$ and $$\psi'(f_0, f_1) = (\psi_0^{-1}(f_0), \nu(f_0, f_1), f_1).$$ For every $d\in \mathcal{D}$ we have $$ \phi'(\phi(d)) = \phi'(\phi_0(d), \phi_1(d))= (d, \phi_1(d), \mu(\phi(d))) = (d, \phi_1(d), \psi_1(d)) $$ and $$ \psi'(\psi(d)) =\psi'(\psi_0(d), \psi_1(d)) = (d, \nu(\psi(d)), \psi_1(d)) = (d, \phi_1(d), \psi_1(d)) . $$ \begin{equation}\label{diag1} \begin{tikzcd}[column sep=large] & \mathcal{E} \arrow[dr, hook', "\phi'"] \arrow[dd, shift left=0.5ex, "\mu"] \arrow[dl, shift left=1ex, two heads, "\pi"] \\ \mathcal{D} \arrow[ur, hook', "\phi"] \arrow[dr, hook, "\psi"] & & \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1 \arrow[ul, shift left=1ex, two heads, "\pi'"] \arrow[dl,shift left=1ex, two heads, "\theta'"]\\ & \mathcal{F} \arrow[ur, hook, "\psi^\prime "] \arrow[uu,shift left=0.5ex, "\nu"] \arrow[ul, shift left=1ex, two heads, "\theta"] \end{tikzcd} \end{equation} Therefore $\phi^\prime \circ \phi = \psi^\prime \circ \psi $. The map $\pi^\prime: \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1 \to \mathcal{E}$ defined by $\pi^\prime(d,e_1, f_1) = (\phi_0(d), e_1)$ is a left inverse of $\phi^\prime$. Similarly the map $\theta^\prime: \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1 \to \mathcal{F}$ defined by $\theta^\prime(d,e_1, f_1) = (\psi_0(d), f_1)$ is a left inverse of $\psi^\prime$. Therefore $(\phi^\prime, \pi^\prime): \mathcal{E} \to \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1 $ and $(\psi^\prime, \theta^\prime): \mathcal{F} \to \mathcal{D} \oplus \mathcal{E}_1\oplus \mathcal{F}_1 $ are $\ddagger\mathfrak{K}$-arrows. We have $$\pi \circ \pi^\prime (d, e_1, f_1) = \pi (\phi_0(d), e_1) = d, $$ $$\theta \circ \theta^\prime (d, e_1, f_1) = \theta (\psi_0(d), f_1) = d. $$ Hence $\pi \circ \pi^\prime =\theta \circ \theta^\prime $. Also \begin{align*} \theta^\prime \circ \phi^\prime(e_0, e_1) &= \theta^\prime (\phi_0^{-1}(e_0), e_1, \mu(e_0, e_1)) = (\psi_0(\phi_0^{-1}(e_0)), \mu(e_0,e_1)) \\ & =(\psi_0(\pi(e_0, e_1)), \psi_1(\pi(e_0,e_1))) = \psi (\pi(e_0,e_1)). \end{align*} So $\theta^\prime \circ \phi^\prime = \psi \circ\pi$ and similarly $\phi \circ \theta = \pi^\prime \circ \psi^\prime$. This shows that $\ddagger\mathfrak{K}$ has the proper amalgamation property. Since $\mathfrak{K}$ is moreover separable and has an initial object, it is in particular a Fra\"\i ss\'e category. \end{proof}
Therefore any $\oplus$-stable category $\mathfrak{K}$ has a unique Fra\"\i ss\'e sequence: a $\mathfrak{K}$-sequence which satisfies (F).
\begin{notation} Let $\mathcal{A}_\mathfrak{K}$ denote the Fra\"\i ss\'e limit of the $\oplus$-stable category $\mathfrak{K}$. \end{notation}
The AF-algebra $\mathcal{A}_\mathfrak{K}$ is $\mathfrak{K}$-universal and almost $\mathfrak{K}$-homogeneous, by Theorem~\ref{uni-hom} and Lemma~\ref{L1-L3}. In fact, $\mathcal{A}_\mathfrak{K}$ is $\mathfrak{K}$-homogeneous (i.e., with $\epsilon$ equal to zero). To see this, suppose $\mathcal{F}$ is a finite-dimensional $C^*$-algebra in $\mathfrak{K}$ and $\phi_i: \mathcal{F} \hookrightarrow \mathcal{A}_\mathfrak{K}$ $(i=0,1)$ are left-invertible embeddings. By the almost $\mathfrak{K}$-homogeneity, there is an automorphism $\eta: \mathcal{A}_\mathfrak{K} \to \mathcal{A}_\mathfrak{K}$ such that $\|\eta \circ \phi_0 - \phi_1\|< 1$.
There exists (Lemma \ref{fd-af}) a unitary $u\in\widetilde{\mathcal{A}_\mathfrak{K}}$ such that $\operatorname{Ad}_u \circ \eta \circ \phi_0 = \phi_1$. The automorphism $\operatorname{Ad}_u \circ \eta$ witnesses the $\mathfrak{K}$-homogeneity of $\mathcal{A}_\mathfrak{K}$. Moreover, since $\ddagger \mathfrak{K}$ has the proper amalgamation property, every AF-algebra in $\mathfrak{L}\mathfrak{K}$ is a retract of $\mathcal{A}_\mathfrak{K}$ (Corollary \ref{CORlim-retract}).
\begin{corollary}\label{inj. uni. AF} Suppose $\mathfrak{K}$ is a $\oplus$-stable category. Then \begin{itemize} \item (universality) Every AF-algebra which is the limit of a $\mathfrak{K}$-sequence is a retract of $\mathcal{A}_\mathfrak{K}$. \item ($\mathfrak{K}$-homogeneity) For every finite-dimensional $C^*$-algebra $\mathcal{F}\in \mathfrak{K}$ and left-invertible embeddings $\phi_i: \mathcal{F} \hookrightarrow \mathcal{A}_\mathfrak{K}$ ($i=0,1$), there is an automorphism $\eta: \mathcal{A}_\mathfrak{K} \to \mathcal{A}_\mathfrak{K}$ such that $\eta \circ \phi_0 = \phi_1$. \end{itemize} \end{corollary}
We will describe the structure of $\mathcal{A}_\mathfrak{K}$ by showing that it has the Cantor property.
\begin{lemma}\label{A_K-CS} Suppose $\mathfrak{K}$ is a $\oplus$-stable category. Then $\mathcal{A}_\mathfrak{K}$ has the Cantor property. \end{lemma}
\begin{proof} Suppose $\mathcal{A}_\mathfrak{K} = \varinjlim_n (\mathcal{A}_n, \phi_n^m)$, where $(\mathcal{A}_n, \phi_n^m)$ is a $\mathfrak{K}$-sequence, i.e., a left-invertible sequence of finite-dimensional $C^*$-algebras in $\mathfrak{K}$. Since $\mathcal{A}_\mathfrak{K}$ is the Fra\"\i ss\'e limit of $\mathfrak{K}$, we can suppose $(\mathcal{A}_n, \phi_n^m)$ satisfies (F). We claim that $(\mathcal{A}_n, \phi_n^m)$ satisfies (D0)--(D2) of Definition \ref{Bratteli}. Suppose $\mathfrak D$ is the Bratteli diagram of $(\mathcal{A}_n, \phi_n^m)$ and $\mathcal{A}_n = \mathcal{A}_{n,1} \oplus \dots \oplus \mathcal{A}_{n,k_n}$ for every $n$, such that each $\mathcal{A}_{n,s}$ is a matrix algebra. The condition (D0) is trivial, since the $\phi_n^m$ are left-invertible. To see (D1), fix $\mathcal{A}_{n,s}$. Note that since $\mathcal{A}_n$ is a $\mathfrak{K}$-object and $\mathfrak{K}$ is $\oplus$-stable, we have $\mathcal{A}_{n} \oplus \mathcal{A}_{n} \in \mathfrak{K}$. Let $\gamma: \mathcal{A}_{n} \hookrightarrow \mathcal{A}_{n} \oplus \mathcal{A}_{n}$ be the left-invertible embedding defined by $\gamma(a) = (a,a)$. Use the Fra\"\i ss\'e condition (F) to find $\delta: \mathcal{A}_n \oplus \mathcal{A}_{n} \hookrightarrow \mathcal{A}_{m}$, for some $m\geq n$, such that $\delta \circ \gamma = \phi_n^m$. Since $\delta$ is left-invertible, there are distinct $(m,t)$ and $(m,t')$ in $\mathfrak D$ such that $\mathcal{A}_{m,t} \cong \mathcal{A}_{m,t'} \cong \mathcal{A}_{n,s}$. Then $\delta \circ \gamma = \phi_n^m$ implies that $(n,s)\to (m,t)$ and $(n,s) \to (m,t')$ in $\mathfrak D$. To see (D2), assume $\mathcal{D} \subseteq \mathcal{A}_n$ is an ideal of $\mathcal{A}_n$, $M_\ell$ is a retract of $\mathcal{A}_\mathfrak{K}$, and there is an embedding $\gamma:\mathcal{D} \hookrightarrow M_\ell$. Suppose $\mathcal{A}_n= \mathcal{D} \oplus \mathcal{E}$ for some $\mathcal{E}$. Since $\mathfrak{K}$ is $\oplus$-stable, $\mathcal{D} \oplus \mathcal{E} \oplus M_\ell$ is a $\mathfrak{K}$-object. Therefore $\gamma': \mathcal{D}\oplus \mathcal{E} \hookrightarrow \mathcal{D} \oplus \mathcal{E} \oplus M_\ell$ defined by $\gamma'(d,e) = (d, e, \gamma(d))$ is a $\mathfrak{K}$-arrow.
Then by (F) there is a left-invertible embedding $\delta': \mathcal{D} \oplus \mathcal{E} \oplus M_\ell \hookrightarrow \mathcal{A}_m$, for some $m\geq n$, such that \begin{equation}\label{eq-delta} \delta'\circ \gamma' = \phi_n^m. \end{equation} Since $\delta'$ is left-invertible, there is $(m,t)$ such that $\dim(\mathcal{A}_{m,t}) = \ell$ and $$\delta_{m,t}=\pi_{\mathcal{A}_{m,t}} \circ \delta'|_{M_\ell} : M_\ell \hookrightarrow \mathcal{A}_{m,t}$$ is an isomorphism, where $\pi_{\mathcal{A}_{m,t}}: \mathcal{A}_m \twoheadrightarrow \mathcal{A}_{m,t}$ is the canonical projection. Let $$\phi_{m,t}=\pi_{\mathcal{A}_{m,t}} \circ \phi_n^m|_\mathcal{D} : \mathcal{D} \to \mathcal{A}_{m,t}.$$ By the definition of $\gamma'$ and (\ref{eq-delta}) it is clear that $\phi_{m,t} = \delta_{m,t}\circ \gamma$ and that $\phi_{m,t}$ is also an embedding. By Lemma \ref{matrix-absorbing} we have $ \operatorname{Mult}_{\phi_{m,t}}(\mathcal{D}, \mathcal{A}_{m,t}) = c \operatorname{Mult}_\gamma(\mathcal{D} , M_\ell ) $ for some natural number $c\geq 1$. Since $\delta_{m,t}$ is an isomorphism, we have $c=1$. This proves (D2). \end{proof}
Next we show that every AF-algebra with the Cantor property can be realized as the Fra\"\i ss\'e limit of a suitable $\oplus$-stable category of finite-dimensional $C^*$-algebras and left-invertible embeddings.
\subsection{The category $\mathfrak{K}_\mathcal{A}$}
Suppose $\mathcal{A}$ is an AF-algebra with the Cantor property. Let $\mathfrak{K}_\mathcal{A}$ denote the category whose objects are finite-dimensional retracts of $\mathcal{A}$ and whose arrows are left-invertible embeddings. Let $\mathfrak{L}_\mathcal{A}$ be the category whose objects are limits of $\mathfrak{K}_\mathcal{A}$-sequences. If $\mathcal{B}$ and $\mathcal C$ are $\mathfrak{L}_\mathcal{A}$-objects, an $\mathfrak{L}_\mathcal{A}$-arrow from $\mathcal{B}$ into $\mathcal C$ is a left-invertible embedding $\phi: \mathcal{B} \hookrightarrow \mathcal C$.
\begin{lemma}\label{K_A closed} $\mathfrak{K}_\mathcal{A}$ is a Fra\"\i ss\'e category and $\ddagger \mathfrak{K}_\mathcal{A}$ has the proper amalgamation property. \end{lemma}
\begin{proof} By Theorem \ref{closed-Fraisse}, it is enough to show that $\mathfrak{K}_\mathcal{A}$ is a $\oplus$-stable category. Condition (1) of Definition \ref{closed-def} is trivial. Condition (2) follows from Proposition \ref{sum-retract}. \end{proof}
Again, Theorem \ref{uni-hom} guarantees the existence of a unique $\mathfrak{K}_\mathcal{A}$-universal and $\mathfrak{K}_\mathcal{A}$-homogeneous AF-algebra in $\mathfrak{L}_\mathcal{A}$, namely the Fra\"\i ss\'e limit of $\mathfrak{K}_\mathcal{A}$.
\begin{theorem}\label{Cantor-closed-Fraisse} The Fra\"\i ss\'e limit of $\mathfrak{K}_\mathcal{A}$ is $\mathcal{A}$. \end{theorem}
\begin{proof} There is a sequence $(\mathcal{A}_n, \phi_n^m)$ of finite-dimensional $C^*$-algebras and embeddings such that $\mathcal{A} = \varinjlim (\mathcal{A}_n, \phi_n^m)$ and the sequence satisfies (D0)--(D2) of Definition \ref{Bratteli}. First note that by (D0), $(\mathcal{A}_n, \phi_n^m)$ is a $\mathfrak{K}_\mathcal{A}$-sequence and therefore $\mathcal{A}$ is an $\mathfrak{L}_\mathcal{A}$-object. In order to show that $\mathcal{A}$ is the Fra\"\i ss\'e limit of $\mathfrak{K}_\mathcal{A}$, we need to show that $(\mathcal{A}_n, \phi_n^m)$ satisfies condition (F). This is Lemma \ref{absorbing}. \end{proof}
\begin{theorem}\label{A_K-characterization} Suppose $\mathfrak{K}$ is a $\oplus$-stable category.
Then $\mathcal{A}_\mathfrak{K}$ is the unique AF-algebra such that \begin{enumerate} \item it has the Cantor property, \item a finite-dimensional $C^*$-algebra is a retract of $\mathcal{A}_\mathfrak{K}$ if and only if it is a $\mathfrak{K}$-object. \end{enumerate} \end{theorem}
\begin{proof} We have already shown that $\mathcal{A}_\mathfrak{K}$ has the Cantor property (Lemma \ref{A_K-CS}). By Lemma \ref{sub-retract}(2), every finite-dimensional retract of $\mathcal{A}_\mathfrak{K}$ is a $\mathfrak{K}$-object, and every finite-dimensional $C^*$-algebra in $\mathfrak{K}$ is a retract of $\mathcal{A}_\mathfrak{K}$, by the $\mathfrak{K}$-universality of $\mathcal{A}_\mathfrak{K}$. If $\mathcal{A}$ is an AF-algebra satisfying (1) and (2), then by definition $\mathfrak{K}_\mathcal{A}= \mathfrak{K}$. The uniqueness of the Fra\"\i ss\'e limit and Theorem \ref{Cantor-closed-Fraisse} imply that $\mathcal{A} \cong \mathcal{A}_\mathfrak{K}$. \end{proof}
\begin{corollary}\label{retract-coro} Two AF-algebras with the Cantor property are isomorphic if and only if they have the same set of matrix algebras as retracts. \end{corollary}
\subsection{Examples}\label{tensor-cp}
Corollary \ref{retract-coro} shows that there is a one-to-one correspondence between AF-algebras with the Cantor property and collections of pairwise non-isomorphic matrix algebras (hence, with subsets of the natural numbers). More precisely, given any collection $X$ of non-isomorphic matrix algebras, let $\mathfrak{K}_X$ denote the $\oplus$-stable category whose objects are finite direct sums of the matrix algebras in $X$ (finite direct sums of a member of $X$ with itself are of course allowed) and left-invertible embeddings as arrows. Then the Fra\"\i ss\'e limit of $\mathfrak{K}_X$ is the unique AF-algebra whose matrix algebra retracts are exactly the members of $X$. The class of AF-algebras with the Cantor property is not closed under direct sums (for instance, $(M_2 \oplus M_3) \otimes C(2^\mathbb N)$ does not have the Cantor property, as its Bratteli diagram easily reveals, while $M_2\otimes C(2^\mathbb N)$ and $M_3 \otimes C(2^\mathbb N)$ do). The following example shows that this class is also not closed under tensor products. Let $\mathcal{A}$ denote the unique AF-algebra with the Cantor property whose matrix algebra retracts are exactly $\{M_2, M_3, M_5, M_{11}\}$. We claim that $\mathcal{A} \otimes \mathcal{A}$ does not have the Cantor property. Suppose $\mathcal{A} = \varinjlim (\mathcal{A}_n, \phi_n^m)$, where the sequence satisfies (D0)--(D2) of Definition \ref{Bratteli}. Clearly $\mathcal{A}\otimes \mathcal{A}$ is the limit of the left-invertible sequence $ (\mathcal{A}_n\otimes \mathcal{A}_n, \phi_n^m\otimes \phi_n^m)$. Therefore by Lemma \ref{sub-retract} every matrix algebra retract of $\mathcal{A}\otimes \mathcal{A}$ is isomorphic to $\mathcal{D} \otimes \mathcal{E}$, where $\mathcal{D}, \mathcal{E}\in \{M_2, M_3, M_5 , M_{11}\}$. Take a retract of $\mathcal{A}_n\otimes \mathcal{A}_n$ isomorphic to $M_3 \otimes M_5$ (for large enough $n$ there is such a retract) and let $\gamma : M_3 \otimes M_5 \to M_2 \otimes M_{11}$ be an embedding of multiplicity $1$. However, there is no embedding $\phi\otimes \psi : M_3 \otimes M_5 \to M_{22} \cong M_2 \otimes M_{11}$ which corresponds to a path in the Bratteli diagram of the sequence $ (\mathcal{A}_n\otimes \mathcal{A}_n, \phi_n^m\otimes \phi_n^m)$.
This is because the codomain of any such $\phi$ should be either $M_3$ or $M_5$ or $M_{11}$ (since $\phi$ corresponds to a path in the Bratteli diagram of the sequence $ (\mathcal{A}_n, \phi_n^m)$, and $M_3$ does not embed into $M_2$), and similarly the codomain of $\psi$ could only be $M_5$ or $M_{11}$, while the tensor product of their codomains should be isomorphic to $M_{22}$, which is not possible. Thus condition (D2) is satisfied neither by the sequence $ (\mathcal{A}_n\otimes \mathcal{A}_n, \phi_n^m\otimes \phi_n^m)$, nor by any sequence of finite-dimensional $C^*$-algebras whose limit is $\mathcal{A} \otimes \mathcal{A}$ (see Remark \ref{remark-CP-def}), which means that $\mathcal{A}\otimes \mathcal{A}$ does not have the Cantor property.
\section{Universal AF-algebras}\label{universal-section}
Let ${\mathfrak F}$ denote the category of \emph{all} finite-dimensional $C^*$-algebras and left-invertible embeddings. The category ${\mathfrak F}$ is $\oplus$-stable and therefore it is a Fra\"\i ss\'e category by Theorem \ref{closed-Fraisse}. The Fra\"\i ss\'e limit $\mathcal{A}_\mathfrak F$ of this category has the universality property (Corollary \ref{inj. uni. AF}) that any AF-algebra which is the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras can be embedded via a left-invertible embedding into $\mathcal{A}_\mathfrak F$. In fact, $\mathcal{A}_\mathfrak F$ is surjectively universal in the category of all (separable) AF-algebras.
\begin{theorem}\label{universal AF-algebra} There is a surjective homomorphism from $\mathcal{A}_{\mathfrak F}$ onto any separable AF-algebra. \end{theorem}
\begin{proof} Suppose $\mathcal{B}$ is a separable AF-algebra. Proposition \ref{ess-quotient} states that there is an AF-algebra $\mathcal{A}$ which is the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras, such that $\mathcal{A}/\mathcal{J} \cong \mathcal{B}$ for some ideal $\mathcal{J}$. By the universality of $\mathcal{A}_{\mathfrak F}$ (Corollary \ref{inj. uni. AF}) there is a left-invertible embedding $\phi: \mathcal{A} \hookrightarrow \mathcal{A}_{\mathfrak F}$. If $\theta: \mathcal{A}_{\mathfrak F} \twoheadrightarrow \mathcal{A}$ is a left inverse of $\phi$, then its composition with the quotient map $\pi: \mathcal{A} \twoheadrightarrow \mathcal{A} / \mathcal{J}$ gives a surjective homomorphism from $\mathcal{A}_{\mathfrak F}$ onto $\mathcal{B}$. \end{proof}
\begin{remark}\label{not unique} Since $\mathcal{A}_\mathfrak F$ has the Cantor property (Lemma \ref{A_K-CS}), it does not have any minimal projections. Therefore, for example, it cannot be isomorphic to $\mathcal{A}_\mathfrak F \oplus \mathbb C$, although the latter is also surjectively universal. Hence the property of being a surjectively universal AF-algebra is not unique to $\mathcal{A}_\mathfrak F$. \end{remark}
\begin{corollary}\label{main-corollary} An AF-algebra $\mathcal A$ is surjectively universal if and only if $\mathcal{A}_\mathfrak F$ is a quotient of $\mathcal{A}$. \end{corollary}
Theorem \ref{A_K-characterization} provides a characterization of $\mathcal{A}_\mathfrak F$, up to isomorphism, in terms of its structure.
\begin{corollary}\label{Bratteli diagram-A_F} $\mathcal{A}_\mathfrak F$ is the unique separable AF-algebra $\mathcal{A}$ with the Cantor property such that every matrix algebra $M_k$ is a retract of $\mathcal{A}$.
Equivalently, an AF-algebra $\mathcal{A}$ is isomorphic to $\mathcal{A}_\mathfrak F$ if and only if there is a sequence $(\mathcal{A}_n, \phi_n^m)$ of finite-dimensional $C^*$-algebras and embeddings such that $\mathcal{A} = \varinjlim (\mathcal{A}_n, \phi_n^m)$ and the Bratteli diagram $\mathfrak D$ of $(\mathcal{A}_n, \phi_n^m)$ satisfies (D0)--(D2) and \begin{enumerate} \item[(D3)] for every $k$ there is $(n,s)\in \mathfrak D$ such that $\dim(n,s) = k$. \end{enumerate} \end{corollary}
\begin{theorem}\label{A_F-char} $\mathcal{A}_\mathfrak F$ is the unique AF-algebra that is the limit of a left-invertible sequence of finite-dimensional $C^*$-algebras and has the following property: for any finite-dimensional $C^*$-algebras $\mathcal{D}, \mathcal{E}$ and left-invertible embeddings $\phi: \mathcal{D} \hookrightarrow \mathcal{E}$ and $\alpha: \mathcal{D} \hookrightarrow \mathcal{A}_\mathfrak F$ there is a left-invertible embedding $\beta:\mathcal{E} \hookrightarrow \mathcal{A}_\mathfrak F$ such that $\beta \circ \phi = \alpha$. \end{theorem}
\begin{proof} Suppose $\mathcal{A}_\mathfrak F$ is the limit of the Fra\"\i ss\'e $\mathfrak F$-sequence $(\mathcal{A}_n, \phi_n^m)$. By definition, $\alpha$ and $\phi$ are $\mathfrak F$-arrows. There is (Lemma~\ref{L1-L3}) a natural number $n$ and an $\mathfrak F$-arrow (a left-invertible embedding) $\psi: \mathcal{D} \hookrightarrow \mathcal{A}_n$ such that $\|\phi_n^\infty \circ \psi - \alpha\| <1$. Use the amalgamation property to find a finite-dimensional $C^*$-algebra $\mathcal{G}$ and left-invertible embeddings $\phi' : \mathcal{E} \hookrightarrow \mathcal{G}$ and $\psi': \mathcal{A}_n \hookrightarrow \mathcal{G}$ such that $\phi' \circ \phi = \psi' \circ \psi$ (see Diagram (\ref{G2-A})). The Fra\"\i ss\'e condition (F) implies the existence of $m\geq n$ and a left-invertible embedding $\delta : \mathcal{G} \hookrightarrow \mathcal{A}_m$ such that $\delta \circ \psi'= \phi_n^m$. Let $\beta' = \phi_m^\infty \circ \delta \circ \phi'$. It is clearly left-invertible. \begin{equation}\label{G2-A} \begin{tikzcd}[row sep=small] \mathcal{A}_1 \arrow[r, hook, "\phi_1^2"] & \mathcal{A}_2 \arrow[r, hook, "\phi_2^3"] & \dots \arrow[r, hook] &\mathcal{A}_n \arrow[dr, hook, "\psi'"] \arrow[rr, hook, "\phi_n^m"]& & \mathcal{A}_m \arrow[rr, hook, "\phi_m^\infty"]& &\mathcal{A}_\mathfrak F\\ &&& & \mathcal{G} \arrow[ur, hook, "\delta"]& \\ && \mathcal{D} \arrow[uur, hook', "\psi"] \arrow[rr, hook, "\phi"] \arrow[uurrrrr, hook, bend right=40, crossing over, "\alpha"] & & \mathcal{E} \arrow[u, hook, "\phi'"] \arrow[uurrr, hook, dashed, "\beta"]& \end{tikzcd} \end{equation} For every $d$ in $\mathcal{D}$ we have $$ \beta' \circ \phi (d) = \phi_m^\infty \circ \delta \circ \phi' \circ \phi (d) = \phi_m^\infty \circ \delta \circ \psi' \circ \psi (d) = \phi_m^\infty \circ \phi_n^m\circ \psi (d) = \phi_n^\infty \circ \psi (d). $$ Therefore $\|\beta' \circ \phi - \alpha\|<1$. Conjugating $\beta'$ with a unitary in $\widetilde{\mathcal{A}_\mathfrak F}$ gives the required left-invertible embedding $\beta$ (Lemma \ref{fd-af}). For the uniqueness, suppose $\mathcal{B}$ is the limit of a left-invertible sequence $(\mathcal{B}_n, \psi_n^m)$ of finite-dimensional $C^*$-algebras satisfying the assumption of the theorem. Using this assumption one can show that $(\mathcal{B}_n, \psi_n^m)$ satisfies the Fra\"\i ss\'e condition (F), and therefore $\mathcal{B}$ is the Fra\"\i ss\'e limit of $\mathfrak F$. Uniqueness of the Fra\"\i ss\'e limit implies that $\mathcal{B}$ is isomorphic to $\mathcal{A}_\mathfrak F$.
\end{proof}
Let us conclude this section with another example of a Fra\"\i ss\'e category of finite-dimensional $C^*$-algebras.
\begin{remark} Note that an argument similar to the one in Subsection \ref{tensor-cp} shows that $\mathcal{A}_{\mathfrak F} \otimes \mathcal{A}_{\mathfrak F}$ does not have the Cantor property. In particular, $\mathcal{A}_{\mathfrak F}$ is not self-absorbing, i.e., $\mathcal{A}_{\mathfrak F} \otimes \mathcal{A}_{\mathfrak F}$ is not isomorphic to $\mathcal{A}_{\mathfrak F}$. \end{remark}
\subsection{The universal UHF-algebra}\label{universal uhf}
Recall that a UHF-algebra is the (inductive) limit of a sequence $$M_{k_1} \xrightarrow{\phi_1^2} M_{k_2} \xrightarrow{\phi_2^3} M_{k_3} \xrightarrow{\phi_3^4} \dots $$ of full matrix algebras, with unital connecting maps $\phi_n^{n+1}$. In particular $k_j|k_{j+1}$ for each $j$. To each sequence of natural numbers $\{k_j\}_{j\in \mathbb N}$ (hence to the corresponding UHF-algebra) a \emph{supernatural number} $n$ is associated, which is the formal product $$n = \prod_{p \text{ prime}} p^{n_p}$$ where $n_p\in \{0,1,2, \dots, \infty\}$ and, for each prime number $p$, $$n_p =\sup\{m \geq 0: p^m|k_j \text{ for some } j\}.$$ Also to each supernatural number $n$ there is an associated UHF-algebra denoted, as is common, by $M_n$ (e.g., the CAR-algebra is $M_{2^\infty}$). Glimm \cite{Glimm} showed that a supernatural number is a complete invariant for the associated UHF-algebra. Recall that the \emph{universal UHF-algebra} (see \cite{Rordam-Stormer}), denoted by $\mathcal Q$, is the UHF-algebra associated to the supernatural number $$n_\infty = \prod_{p \text{ prime}} p^{\infty}.$$ The universal UHF-algebra $\mathcal{Q}$ is also the unique unital AF-algebra such that $$\langle K_0(\mathcal{Q}), K_0(\mathcal{Q})^+, [1_\mathcal{Q}] \rangle \cong \langle \mathbb Q, \mathbb Q^+, 1 \rangle.$$ The multiplication of supernatural numbers is defined in the obvious way (by adding the exponents), so that for supernatural numbers $n,m$ we have $M_n \otimes M_m \cong M_{nm}$. This in particular implies that $\mathcal{Q} \otimes \mathcal M \cong \mathcal{Q}$ for any UHF-algebra $\mathcal M$. Now suppose $\mathfrak M$ is the category of all nonzero matrix algebras and unital embeddings. Then $\mathfrak M$ is a Fra\"\i ss\'e category. The only nontrivial part of the latter statement is to show that $\mathfrak M$ has the amalgamation property, but this is quite easy: it is enough to arrange that the two composed maps have the same multiplicity, and conjugating by a unitary then makes the compositions equal (this is similar to the proof of the amalgamation property in \cite[Theorem 3.4]{Eagle}). The Fra\"\i ss\'e limit of $\mathfrak M$ is $\mathcal Q$, since the universality property of the Fra\"\i ss\'e limit implies that the supernatural number associated to it must be $n_\infty$.
\section{Unital categories}\label{unital-sec}
The proof of Theorem \ref{closed-Fraisse} also shows that the category of all finite-dimensional $C^*$-algebras (or any $\oplus$-stable category) and \emph{unital} left-invertible embeddings has the (proper) amalgamation property. However, this category fails to have the joint embedding property (note that $0$ is no longer an object of the category), since for example one cannot jointly embed $M_2$ and $M_3$ into a finite-dimensional $C^*$-algebra with unital left-invertible maps.
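The arithmetic behind this failure can be checked mechanically. A unital embedding of $M_k$ into $M_{m_1}\oplus\dots\oplus M_{m_r}$ forces $k$ to divide every $m_j$, while left-invertibility forces some summand to be isomorphic to $M_k$ itself. The brute-force sketch below (in Python; the function name is ours, and the two tested conditions are our rendering of these standard facts) confirms that no small finite-dimensional $C^*$-algebra admits unital left-invertible embeddings of both $M_2$ and $M_3$:
\begin{verbatim}
from itertools import combinations_with_replacement

def admits_unital_left_inv(k, summands):
    # unital embedding: k must divide the size of every summand;
    # left inverse: some summand must be a copy of M_k itself
    return all(m % k == 0 for m in summands) and k in summands

# search all algebras M_{m_1} (+) ... (+) M_{m_r}, r <= 4, m_j <= 36
found = [s for r in range(1, 5)
         for s in combinations_with_replacement(range(1, 37), r)
         if admits_unital_left_inv(2, s) and admits_unital_left_inv(3, s)]
print(found)   # prints []: no such algebra exists
\end{verbatim}
The empty search is just the divisibility clash made explicit: every $m_j$ would have to be divisible by both $2$ and $3$, while one of them would have to equal $2$ and another would have to equal $3$.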
\subsection{The category $\widetilde{\mathfrak F}$}
Let $\widetilde{\mathfrak F}$ denote the category of all finite-dimensional $C^*$-algebras isomorphic to $\mathbb C \oplus \mathcal{D}$, for some finite-dimensional $C^*$-algebra $\mathcal{D}$, with unital left-invertible embeddings as arrows. This category is no longer $\oplus$-stable; however, a proof similar to that of Theorem \ref{closed-Fraisse}, with all maps unital, shows that $\ddagger \widetilde{\mathfrak F}$ has the proper amalgamation property. Therefore $\widetilde{\mathfrak F}$ is a Fra\"\i ss\'e category: $\mathbb C$ is the initial object of this category, and hence the joint embedding property is a consequence of the amalgamation property. The Fra\"\i ss\'e limit $\mathcal{A}_{\widetilde{\mathfrak F}}$ of this category is a separable AF-algebra with the universality property that any unital AF-algebra which can be obtained as the limit of a left-invertible unital sequence of finite-dimensional $C^*$-algebras isomorphic to $\mathbb C \oplus \mathcal{D}$ can be embedded via a left-invertible unital embedding into $\mathcal{A}_{\widetilde{\mathfrak F}}$. The unital analogue of Theorem \ref{universal AF-algebra} states the following.
\begin{corollary} For every unital separable AF-algebra $\mathcal{B}$ there is a surjective homomorphism from $\mathcal{A}_{\widetilde{\mathfrak F}}$ onto $\mathcal{B}$. \end{corollary}
\begin{proof} Suppose $\mathcal{B}$ is an arbitrary unital AF-algebra. Using Proposition \ref{ess-quotient} we can find a unital AF-algebra $\mathcal{A}\supseteq \mathcal{B}$ which is the limit of a left-invertible unital sequence of finite-dimensional $C^*$-algebras, such that $\mathcal{B}$ is a quotient of $\mathcal{A}$. Thus $\mathbb C \oplus \mathcal{A}$ is the limit of a unital left-invertible sequence of finite-dimensional $C^*$-algebras of the form $\mathbb C \oplus \mathcal{D}$, for finite-dimensional $\mathcal{D}$. By the universality of $\mathcal{A}_{\widetilde{\mathfrak F}}$, there is a left-invertible unital embedding from $\mathbb C \oplus \mathcal{A}$ into $\mathcal{A}_{\widetilde{\mathfrak F}}$. Since $\mathcal{B}$ is a quotient of $\mathcal{A}$, there is a surjective homomorphism from $\mathbb C \oplus \mathcal{A}$ onto $\mathcal{B}$. Composing this surjection with a left inverse of the above embedding gives a surjective homomorphism from $\mathcal{A}_{\widetilde{\mathfrak F}}$ onto $\mathcal{B}$. \end{proof}
\begin{remark} Small adjustments in the proof of Lemma \ref{A_K-CS} show that $\mathcal{A}_{\widetilde{\mathfrak F}}$ has the Cantor property (in the sense of Definition \ref{CS-unital-def}). In fact, it is easy to check that $\mathcal{A}_{\widetilde{\mathfrak F}}$ is isomorphic to $\widetilde{\mathcal{A}_\mathfrak F}$, the unitization of $\mathcal{A}_\mathfrak F$. This, in particular, implies that $\mathcal{A}_\mathfrak F$ is not unital: if it were unital, then $\widetilde{\mathcal{A}_\mathfrak F}$ (and hence $\mathcal{A}_{\widetilde{\mathfrak F}}$) would be isomorphic to $\mathcal{A}_\mathfrak F \oplus \mathbb C$, but this is not possible since $\mathcal{A}_{\widetilde{\mathfrak F}}$ has the Cantor property and therefore has no minimal projections. \end{remark}
\begin{definition} We say $\mathcal{D}$ is a \emph{unital-retract} of the $C^*$-algebra $\mathcal{A}$ if there is a left-invertible unital embedding from $\mathcal{D}$ into $\mathcal{A}$.
\end{definition}
\subsection{The category $\widetilde{\mathfrak{K}}_\mathcal{A}$}
If $\mathcal{A}$ is a unital AF-algebra with the Cantor property (Definition \ref{CS-unital-def}), then let $\widetilde{\mathfrak{K}}_\mathcal{A}$ denote the category whose objects are finite-dimensional unital-retracts of $\mathcal{A}$ and whose morphisms are unital left-invertible embeddings. This category is not $\oplus$-stable, since it does not satisfy condition (1) of Definition \ref{closed-def}. However, $\ddagger\widetilde{\mathfrak{K}}_\mathcal{A}$ still has the proper amalgamation property.
\begin{proposition} $\ddagger\widetilde{\mathfrak{K}}_\mathcal{A}$ has the proper amalgamation property. \end{proposition}
\begin{proof} The proof is exactly the same as the proof of Theorem \ref{closed-Fraisse}, with the maps assumed to be unital. We only need to check that $\mathcal{D} \oplus \mathcal{E}_1 \oplus \mathcal{F}_1$ is a unital-retract of $\mathcal{A}$. By Lemma \ref{sub-retract}, for some $m$ both $\mathcal{E}\cong \mathcal{D} \oplus \mathcal{E}_1$ and $\mathcal{F} \cong \mathcal{D} \oplus \mathcal{F}_1$ are unital-retracts of $\mathcal{A}_m$. An easy argument using Proposition \ref{fact1} shows that $\mathcal{D} \oplus \mathcal{E}_1 \oplus \mathcal{F}_1$ is also a unital-retract of $\mathcal{A}_m$ and therefore a unital-retract of $\mathcal{A}$. \end{proof}
Also $\widetilde{\mathfrak{K}}_\mathcal{A}$ has a weakly initial object (by the next lemma). Therefore it is a Fra\"\i ss\'e category. Recall that an object is \emph{weakly initial} in $\mathfrak{K}$ if it has at least one $\mathfrak{K}$-arrow to any other object of $\mathfrak{K}$.
\begin{lemma}\label{initial} Suppose $\mathcal{A}$ is a unital AF-algebra with the Cantor property. The category $\widetilde{\mathfrak{K}}_\mathcal{A}$ has a weakly initial object, i.e., there is a finite-dimensional unital-retract of $\mathcal{A}$ which can be mapped into any other finite-dimensional unital-retract of $\mathcal{A}$ via a left-invertible unital embedding. \end{lemma}
\begin{proof} Let $M_{k_1} \oplus \dots \oplus M_{k_l}$ be an arbitrary $\widetilde{\mathfrak{K}}_\mathcal{A}$-object. Suppose that $\{k'_1, \dots, k'_t\}$ is the largest subset of $\{k_1, \dots, k_l\}$ such that, for any $i\leq t$, $k'_i$ cannot be written as $\sum_{\substack{j \leq t\\ j\neq i}} x_j k'_j$ for any set of natural numbers $\{x_j: j \leq t,\ j\neq i\}$. Since $\{k'_1, \dots, k'_t\}$ is the largest such subset, $\mathcal{D}= M_{k'_1} \oplus \dots \oplus M_{k'_t}$ is a unital-retract of $M_{k_1} \oplus \dots \oplus M_{k_l}$ and therefore a unital-retract of $\mathcal{A}$. Suppose $\mathcal{F}$ is an arbitrary $\widetilde{\mathfrak{K}}_\mathcal{A}$-object. Let $(\mathcal{A}_n, \phi_n^m)$ be a $\widetilde{\mathfrak{K}}_\mathcal{A}$-sequence with limit $\mathcal{A}$ such that $\mathcal{A}_1\cong \mathcal{F}$. Then $\mathcal{D}$ is a unital-retract of some $\mathcal{A}_m$, so $\mathcal{A}_m = \dot \mathcal{D} \oplus \mathcal{E}$ for some $\mathcal{E}$ and $\dot \mathcal{D} \cong \mathcal{D}$. Fix $i\leq t$. Since $\phi_1^m$ is a unital embedding, there is a subalgebra of $\mathcal{F}$ isomorphic to $M_{n_1} \oplus \dots \oplus M_{n_s}$ such that $\sum_{j=1}^s y_j n_j = k'_i$, for some $\{y_1, \dots, y_s\}\subseteq \mathbb N$. We claim that exactly one $n_j$ is equal to $k'_i$ and the remaining terms of the sum vanish. If not, then for every $j\leq s$ we have $0<n_j < k'_i$. Since $\phi_1^m$ is left-invertible, for every $j\leq s$ a copy of $M_{n_j}$ appears as a summand of $\mathcal{A}_m$.
Also, because there is a unital embedding from $\mathcal{D}$ into $\mathcal{A}_m$, for every $j\leq s$ there are natural numbers $x_{j,j'}$ such that $n_j = \sum_{\substack{j' \leq t\\ j'\neq i}} x_{j,j'} k'_{j'}$ (the coefficient of $k'_i$ vanishes, since $n_j < k'_i$). But then $$k'_i = \sum_{j=1}^s \sum_{\substack{j' \leq t\\ j'\neq i}} y_j x_{j,j'} k'_{j'}, $$ which contradicts the choice of $k_i'$. This means that $\mathcal{F} = \mathcal{F}_0 \oplus \mathcal{F}_1$, where $ \mathcal{F}_0 \cong \mathcal{D}$ and there is a unital homomorphism from $\mathcal{D}$ into $\mathcal{F}_1$. Therefore $\mathcal{D}$ is a unital-retract of $\mathcal{F}$. \end{proof}
\begin{corollary} Suppose $\mathcal{A}$ is a unital AF-algebra with the Cantor property. The category $\widetilde{\mathfrak{K}}_\mathcal{A}$ is a Fra\"\i ss\'e category and $\ddagger\widetilde{\mathfrak{K}}_\mathcal{A}$ has the proper amalgamation property. The Fra\"\i ss\'e limit of $\widetilde{\mathfrak{K}}_\mathcal{A}$ is $\mathcal{A}$. \end{corollary}
\begin{proof} The proof of the fact that $\mathcal{A}$ is the Fra\"\i ss\'e limit of $\widetilde{\mathfrak{K}}_\mathcal{A}$ is the same as that of Theorem \ref{Cantor-closed-Fraisse}, with all the maps unital. \end{proof}
\section{Surjectively universal countable dimension groups}\label{K0-section}
A countable partially ordered abelian group $\langle G, G^+ \rangle$ is a (countable) \textit{dimension group} if it is isomorphic to the inductive limit of a sequence $$\mathbb Z^{r_1} \xrightarrow{\alpha_1^2}\mathbb Z^{r_2} \xrightarrow{\alpha_2^3}\mathbb Z^{r_3} \xrightarrow{\alpha_3^4} \dots $$ for some natural numbers $r_n$, where the $\alpha_i^j$ are positive group homomorphisms and $\mathbb Z^r$ is equipped with the ordering given by $$(\mathbb Z^r)^+ = \{(x_1, x_2, \dots,x_r)\in \mathbb Z^r: x_i\geq 0 \text{ for } i=1,\dots,r\}.$$ A partially ordered abelian group that is isomorphic to $\langle \mathbb Z^r , (\mathbb Z^r)^+ \rangle $, for a non-negative integer $r$, is usually called a \emph{simplicial group}. A \emph{scale} $S$ on the dimension group $\langle G, G^+ \rangle$ is a generating, upward directed and hereditary subset of $G^+$ (see \cite[IV.3]{Davidson}).
\begin{notation}\label{Notation} If $\langle G, S \rangle$ is a scaled dimension group as above, we can recursively pick order-units $$\bar u_n = (u_{n,1}, u_{n,2}, \dots, u_{n,r_n})\in (\mathbb Z^{r_n})^+$$ of $\mathbb Z^{r_n}$ such that $\alpha_n^{n+1}(\bar u_n)\leq \bar u_{n+1} $ and $ S = \bigcup_n \alpha_n^\infty [[\bar 0, \bar u_n]]$. Then we say the scaled dimension group $\langle G, S\rangle$ is the limit of the sequence $( \mathbb Z^{r_n}, \bar u_n, \alpha_n^m )$. If $(\bar u_n)$ can be chosen such that $\alpha_n^{n+1} (\bar u_n) = \bar u_{n+1}$ for every $n\in \mathbb N$, then $G$ has an order-unit $u = \lim_n \alpha_n^\infty(\bar u_n)$. In this case we denote this dimension group with order-unit by $\langle G, u\rangle$. \end{notation}
An isomorphism between scaled dimension groups is a positive group isomorphism which sends the scale of the domain to the scale of the codomain. Given a separable AF-algebra $\mathcal{A}$, its $K_0$-group $\langle K_0(\mathcal{A}), K_0(\mathcal{A})^+\rangle$ is a (countable) dimension group and conversely, any dimension group is isomorphic to the $K_0$-group of a separable AF-algebra.
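To make Notation \ref{Notation} concrete, the following small sketch (in Python; the helper names and the toy sequence are ours and purely illustrative) represents a positive homomorphism $\mathbb Z^r \to \mathbb Z^s$ as a matrix with non-negative integer entries and checks the compatibility condition $\alpha_n^{n+1}(\bar u_n)\leq \bar u_{n+1}$ on a choice of order-units:
\begin{verbatim}
def apply(alpha, v):
    # positive homomorphism Z^r -> Z^s: a matrix with entries >= 0
    return [sum(row[i] * v[i] for i in range(len(v))) for row in alpha]

def leq(v, w):
    # coordinatewise order on the simplicial group Z^r
    return all(a <= b for a, b in zip(v, w))

# a toy sequence Z -> Z^2 -> Z^2 with candidate order-units u_n
alphas = [[[1], [1]],         # alpha_1^2 : n |-> (n, n)
          [[1, 1], [0, 2]]]   # alpha_2^3
units  = [[1], [1, 2], [3, 4]]

for n, alpha in enumerate(alphas):
    assert leq(apply(alpha, units[n]), units[n + 1])
print("alpha_n^{n+1}(u_n) <= u_{n+1} holds for this toy sequence")
\end{verbatim}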
The \emph{dimension range} of $\mathcal{A}$, $$\mathcal D(\mathcal{A}) = \{[p]: p \text{ is a projection of }\mathcal{A}\}\subseteq K_0(\mathcal{A})^+,$$ is a scale for $\langle K_0(\mathcal{A}), K_0(\mathcal{A})^+ \rangle$, and therefore $\langle K_0(\mathcal{A}), \mathcal{D}(\mathcal{A})\rangle$ is a scaled dimension group. Conversely, every scaled dimension group is isomorphic to $\langle K_0(\mathcal{A}), \mathcal{D}(\mathcal{A})\rangle$ for a separable AF-algebra $\mathcal{A}$. Elliott's classification of separable AF-algebras (\cite{Elliott}) states that $\langle K_0(\mathcal{A}), \mathcal{D}(\mathcal{A})\rangle$ is a complete isomorphism invariant for the separable AF-algebra $\mathcal{A}$.
\begin{theorem}[Elliott \cite{Elliott}] Two separable AF-algebras $\mathcal{A}$ and $\mathcal{B}$ are isomorphic if and only if their scaled dimension groups are isomorphic. If $\mathcal{A}$ and $\mathcal{B}$ are unital, then they are isomorphic if and only if $\langle K_0(\mathcal{A}), [1_\mathcal{A}]\rangle \cong \langle K_0(\mathcal{B}), [1_\mathcal{B}]\rangle$, as partially ordered abelian groups with order-units. \end{theorem}
\subsection{Surjectively universal dimension groups}
The universality property of $\langle K_0(\mathcal{A}_\mathfrak F), \mathcal{D}(\mathcal{A}_{\mathfrak F}) \rangle$ can be obtained by applying the $K_0$-functor to Theorem \ref{universal AF-algebra}.
\begin{corollary} The scaled (countable) dimension group $\langle K_0(\mathcal{A}_\mathfrak F), \mathcal{D}(\mathcal{A}_{\mathfrak F}) \rangle$ maps onto any countable scaled dimension group. \end{corollary}
By applying the $K_0$-functor to Corollary \ref{Bratteli diagram-A_F}, we immediately obtain the following result.
\begin{corollary}\label{coro-uni-K0} $\langle K_0(\mathcal{A}_\mathfrak F), \mathcal{D}(\mathcal{A}_{\mathfrak F}) \rangle$ is the unique scaled dimension group which is the limit of a sequence $( \mathbb Z^{r_n}, \bar u_n, \alpha_n^m )$ (as in Notation \ref{Notation}) satisfying the following conditions: \begin{enumerate} \item for every $n\in \mathbb N$ and $1 \leq i \leq r_n$ there are $m\geq n $ and $1\leq j , j^\prime \leq r_m$ such that $j\neq j^\prime$, $u_{n,i} = u_{m,j} = u_{m,j^\prime}$, $\pi_j \circ \alpha_n^m(u_{n,i}) = u_{m,j}$ and $\pi_{j^\prime} \circ \alpha_n^m(u_{n,i}) = u_{m,j^\prime}$, where $\pi_j$ is the canonical projection from $\mathbb Z^{r_m}$ onto its $j$-th coordinate; \item for every $n,n^\prime\in \mathbb N$, $1\leq i^\prime \leq r_{n^\prime}$ and $\{x_1, \dots, x_{r_n}\}\subseteq \mathbb N \cup \{0\}$ such that $\sum_{i=1}^{r_n} x_i u_{n,i} \leq u_{n^\prime, i^\prime} $ there are $m\geq n$ and $1\leq j\leq r_m$ such that $u_{n^\prime,i^\prime} \leq u_{m,j}$ and $\pi_j \circ \alpha_n^m(u_{n,i}) = x_i u_{n,i}$ for every $i \in \{1, \dots, r_n\}$; \item for every $k\in \mathbb N$ there are $n\in \mathbb N$ and $1\leq i \leq r_n$ such that $u_{n,i} = k$. \end{enumerate} \end{corollary}
\begin{corollary} The (countable) dimension group with order-unit $\langle K_0(\mathcal{A}_{\widetilde{\mathfrak F}}), [1_{\mathcal{A}_{\widetilde{\mathfrak F}}}] \rangle$ maps onto any countable dimension group with order-unit, in the sense that there is a surjective normalized positive group homomorphism.
\end{corollary}
A similar characterization holds for the dimension group with order-unit $\langle K_0(\mathcal{A}_{\widetilde{\mathfrak F}}), [1_{\mathcal{A}_{\widetilde{\mathfrak F}}}] \rangle$, where the $\alpha_n^m$ are order-unit preserving and, in condition (2) of Corollary \ref{coro-uni-K0}, the inequality $\sum_{i=1}^{r_n} x_i u_{n,i} \leq u_{n^\prime, i^\prime}$ is replaced with equality.
\paragraph{\bf Acknowledgements.} We would like to thank Ilijas Farah and Eva Perneck\'a for useful conversations and comments.
\bibliographystyle{amsplain}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} The growing interest in the theory and design of networked control systems is stimulated in part by a realization that complex networks of simple sensors, actuators, and communication links exhibit features that have no counterparts in traditional linear and nonlinear feedback designs. In particular, important capabilities can be realized by interconnections of large numbers of rudimentary sensors and simple actuators that have very limited abilities when acting alone, but whose aggregated actions can carry out prescribed tasks. Taking inspiration from neurobiology, what follows presents work with simple models aimed at understanding how imprecise measurements from an adequately diverse set of sensors and control actions taken by groups of limited-authority actuators can collectively achieve system objectives. Due to space limitations, the emphasis will be limited to groups of control actions. Within the broad area of networked control systems, a large body of work has been focused on the emergence of consensus and polarization among groups of intercommunicating agents (\cite{Hu,Altafini,Proskurnikov,Parsegov}). In these models, agents influence each other and evolve toward terminal states that indicate {\em consensus} (all states being the same) or {\em polarization} (individuals tending toward any of two or more distinct terminal states). In models of {\em neuromorphic control} considered below, agents themselves do not evolve, but rather they operate collectively in affinity groups that are associated with system-wide goals. Ongoing research is exploring how self-organizing behavior and Hebbian learning are achieved by reinforcement of different patterns of agent activations. The control mechanism to be considered is weighted voting, in which the weight assigned to any agent's input is a function of how effectively that input contributes to successful completion of a prescribed task. The overarching motivation for the research is the need for foundational principles and trustworthy algorithms to support autonomous mobility. A particular line of inquiry has been focused on the use of optical flow sensing to generate steering commands for both air and ground vehicles \cite{Sebesta,Kong,Seebacher,Corvese}. Leveraging the capabilities of feature detectors, descriptors, and matchers that are now available in the {\em OpenCV} corpus (\cite{Canclini}), the research is aimed at understanding how to focus attention so as to mitigate the uncertainties of visual perception. The goal is to achieve implementations that reflect the robustness of the stylized model discussed in the next section. Preliminary work in \cite{Baillieul1} examined the challenges of robot navigation based on reaction to rapidly evolving patterns of features in the visual field. What follows below is an extension of this work. The paper is organized as follows. Section 2 provides a foundational result on the use of optical flow to robustly navigate between two parallel corridor walls. The model is an extreme idealization in that it makes use of two perfect photoreceptors that compute optical flow associated with whatever wall point their gaze falls on. Perfect flow sensing is not possible in either the natural world or implementations using computer vision, and the practical hedge for flow signals degraded by noise, data dropouts, and perceptual aliasing is the averaged weighting of flows associated with large numbers of visual features.
This ``wisdom of crowds'' is the basis of the approach to {\em neuromorphic} control that is discussed in the remainder of the paper. Section 3 shows that the energy optimal cost of steering a finite dimensional linear system is always reduced by adding more input channels. Section 4 introduces the {\em standard parts} approach to control in which the goal is to realize system motions by activating groups of fixed motion primitives. The preliminary results described suggest a possible approach to what might be called Hebbian learning for control designs. Future research is discussed in the concluding Section 5.
\section{Idealized optical flow based on two and many pixels}
The optical flow based navigation discussed in \cite{Sebesta}, \cite{Kong}, \cite{Seebacher}, and \cite{Corvese} exploits visual cues based on {\em time-to-transit} (sometimes referred to in the computer vision and visual psychology literature as {\em time-to-contact}, \cite{Horn}). Time-to-transit (referred to in the literature as {\em tau}---and frequently denoted by the symbol $\tau$) is a perceived quantity that is believed to be computable in the visual cortex as animals move about their environments. The basic idea is that when an object or feature is registered on the retina or image plane of a camera, the position of the image moves as a result of the animal's movement. To explore the foundation of $\tau$-based navigation, we model a planar world in which both environmental features and their images lie in a common plane. A camera-fixed coordinate frame is chosen such that forward motion is always in the camera $x$-axis direction. If we suppose that unobstructed forward movement is at a constant speed in a given direction, then as the camera moves through a field of environmental features, it will at various points in time be directly abeam of each of the objects in its field of view. The instant at which the camera is directly abeam of an object or feature is referred to as the feature {\em transit time}. At this point, as illustrated in Fig.~\ref{fig:RobotFigs}{\bf (a)}, the object will lie instantaneously on the camera-frame $y$-axis. As seen in the figure, similar triangles may be constructed between the image plane and environmental scene through the focal point of the camera. Here, $D$ represents the real-world distance of an object from the axis of motion (the camera-frame $x$-axis) while $\tilde x(t)$ represents the distance to the point where the focal point will transit the feature (object). Let $d_i(t)$ be the location of the object image on the camera image plane. By comparing the ratios of corresponding sides in similar triangles, we see that \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{RobotFigs.jpeg} \end{center} \caption{Vehicle with kinematics (\ref{eq:jb:BasicVehicle}). Here, the stylized image plane is compressed to one dimension and represented by a line segment on the vehicle $y$-axis. The vehicle $x$-axis is the direction of travel, $\tan\varphi=f$, the pinhole camera focal length, and $\theta=\tilde\theta+\frac{\pi}2$. {\bf (a)} Time-to-transit, $\tau$, is available on the image plane as is evident from the geometry of similar triangles. {\bf (b)} The spatial features ${\cal O}_r$ and ${\cal O}_{\ell}$ are registered at $\pm 1$ respectively on the image plane ($y$-axis).} \label{fig:RobotFigs} \end{figure} \begin{equation} \frac{d(t)}{\tilde x(t)}=\frac{d_i(t)}{f}.
\label{eq:jb:similar} \end{equation} If we assume that $d(t)=D$ and speed $\dot x(t)=v$ are constant, then by differentiating the quantities in (\ref{eq:jb:similar}), we obtain \begin{equation} \dot d_i(t)\tilde x(t)-vd_i(t) = 0, \end{equation} and this may be rewritten as \begin{equation} \tau = \frac{\tilde x(t)}{v} = \frac{d_i(t)}{\dot d_i(t)}. \label{eq:jb:tau} \end{equation} This shows that the {\em time-to-transit} $\tau$ is computable from the movement of pixels on the image plane or retina. Referring to Fig.~\ref{fig:RobotFigs}{\bf (b)}, we adopt a simple kinematic model of planar motion \begin{equation} \left(\begin{array}{c} \dot x \\ \dot y \\ \dot\theta\end{array}\right) = \left(\begin{array}{c} v\cos\theta \\ v\sin\theta \\ u\end{array}\right), \label{eq:jb:BasicVehicle} \end{equation} where $v$ is the forward speed in the direction of the body-frame $x$-axis, and $u$ is the turning rate. Following the derivation in \cite{Sebesta}, we find that the time-to-transit a feature located at $(x_r,y_r)$ in the world frame by a vehicle traveling at constant speed $v$ and having configuration $(x,y,\theta)$ (in world frame coordinates) is given by \begin{equation} \tau(t) = \frac{\cos\theta(t)(x_r-x(t))+\sin\theta(t)(y_r-y(t))}{v}. \label{eq:jb:tau1} \end{equation} The effectiveness of $\tau(t)$ as a steering signal can be easily established by considering an idealized visual model in which there are two perfect photoreceptors---one on each side of the center of focus. In Fig.~\ref{fig:RobotFigs}{\bf (b)}, the photoreceptors are at $\pm 1$ along the camera frame $y$-axis. In the idealized model, it is assumed that instantaneous detections of perfectly clear features at both $d(t)=\pm 1$ and the corresponding derivatives $\dot d(t)$ are available to determine corresponding values of $\tau(t)$. If the camera-carrying robot is located between two long parallel walls and aligned roughly with the walls, as will be made precise below, the left (right) photoreceptor (at the corresponding body-frame point $(0,1)$ ($(0,-1)$)) will register a unique position along the right (left) wall. Assuming the vehicle moves forward at constant speed and that the feature point ${\cal O}_r=(x_r,y_r)$ is fixed in world frame coordinates and letting only $\theta$ vary, we find that $\tau=\tau(\theta)$ is maximized when the motion of the camera is aligned with its optical axis aimed directly at the feature point. If two features ${\cal O}_{\ell}=(x_{\ell},y_{\ell})$ and ${\cal O}_r=(x_r,y_r)$ are located at exactly the same distance in the forward direction from the camera on opposite walls of a corridor structure as depicted in Fig.~\ref{fig:RobotFigs}{\bf (b)} ($y_{\ell}=y_r$), then we would find that $\tau_{\ell} = \tau_r$. When this equality of left and right values of {\em tau} holds and the camera frame origin is nearer to one wall or the other, the orientation of the camera frame will have a heading that is directed away from the near wall. Only when the vehicle is centered between the two walls will the vehicle heading be parallel to the walls. Given these observations, the question we explore next is whether it is possible to steer the robot vehicle (\ref{eq:jb:BasicVehicle}) based on ego-centric comparisons of times to transit of feature points along opposite walls of the corridor.
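Before proceeding, a quick sanity check of (\ref{eq:jb:tau1}) can be carried out in a few lines of Python. The sketch below (the feature location, initial pose, speed, and step size are arbitrary illustrative choices, not values from the paper) integrates constant-heading motion and confirms that the formula predicts the instant at which the feature becomes directly abeam.

\begin{verbatim}
import numpy as np

def tau_formula(feature, pose, v=1.0):
    # Eq. (tau1): time-to-transit of a fixed world-frame feature point
    (xr, yr), (x, y, th) = feature, pose
    return (np.cos(th) * (xr - x) + np.sin(th) * (yr - y)) / v

def tau_simulated(feature, pose, v=1.0, dt=1e-4):
    # Integrate straight-line motion until the feature crosses the
    # body-frame y-axis (forward distance reaches zero).
    (xr, yr), (x, y, th) = feature, pose
    t = 0.0
    while np.cos(th) * (xr - x) + np.sin(th) * (yr - y) > 0:
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        t += dt
    return t

feature, pose = (2.0, 5.0), (0.0, 0.0, np.pi / 3)
print(tau_formula(feature, pose), tau_simulated(feature, pose))
# both ~5.33: the image-plane ratio d_i / (d_i)-dot equals the transit time
\end{verbatim}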
This question is partially answered by the following. \smallskip \begin{theorem} Consider a mobile camera moving along an infinitely long corridor with every point along both walls being a detectable feature that determines an accurate value of $\tau$. As depicted in Fig.~\ref{fig:RobotFigs}{\bf (b)}, let $\tau_r=\tau({\cal O}_r)$ and $\tau_{\ell}=\tau({\cal O}_{\ell})$ be the respective times to transit the two feature points whose images appear at points equidistant on either side of the optical axis which defines the center of coordinates (focal point or {\em focus of expansion} (FOE)) in the image plane. Then for any gain $k>0$ there is an open neighborhood $U$ of $(x,\theta)=(0,\frac{\pi}{2})$, $U\subset\{(x,\theta)\; :\; -R<x<R;\; \varphi<\theta<\pi-\varphi\}$, such that for all initial conditions $(x_0,y_0,\theta_0)$ with $(x_0,\theta_0)\in U$, the steering law \begin{equation} u(t)=k(\tau_{\ell}-\tau_r) \label{eq:jb:tau-balance} \end{equation} asymptotically guides the vehicle with kinematics (\ref{eq:jb:BasicVehicle}) onto the center line between the corridor walls. \label{th:jb:one} \end{theorem} \noindent{\em Proof:} \ There is no loss of generality in assuming that the image points at which transit times $\tau_r$ and $\tau_{\ell}$ are to be calculated lie at $\pm 1$ on the body frame $y$-axis (as depicted in Fig.\ \ref{fig:RobotFigs}{\bf (b)}). It is also assumed that the corridor width, $2R$, is large enough to allow the vehicle to pass with some margin for deviation from a straight and centered path, and it is convenient to assume the vehicle travels at unit speed, $v=1$, in the forward direction. (This means that the vehicle trajectory is parameterized by arc length.) Letting $(f,0)$ be the body-frame coordinate of the pinhole lens in our idealized camera, we can locate the global frame coordinates of the wall features in terms of vehicle (camera-frame) coordinates $x,y,\theta$. Specifically, elementary (but tedious) geometric arguments give the world coordinates corresponding to the feature images $(0,-1)$ and $(0,1)$ as: \[ {\cal O}_{\ell}=\left( \begin{array}{c} -R \\ y+f \sin (\theta )+\frac{(R+x+f \cos (\theta )) (\cos (\theta )+f \sin (\theta ))}{\sin (\theta )-f \cos (\theta )} \\ \end{array} \right) \] and \[ {\cal O}_r=\left( \begin{array}{c} R \\ y+f \sin (\theta )+\frac{(R-x-f \cos (\theta )) (f \sin (\theta )-\cos (\theta ))}{f \cos (\theta )+\sin (\theta )} \\ \end{array} \right) \] respectively. Using (\ref{eq:jb:tau1}) with $v=1$, these coordinates yield corresponding geometric times to transit: \[ \begin{array}{lc} \tau_r&=\cos (\theta ) (R-x) + \sin (\theta ) \Big( f \sin (\theta )\\[0.07in] &+\frac{(f \sin (\theta )-\cos (\theta )) (-f \cos (\theta )+R-x)}{f \cos (\theta )+\sin (\theta )} \Big) \end{array} \] and \[ \begin{array}{lc} \tau_{\ell}&=\cos (\theta ) (-R-x)+\sin (\theta ) \Big(f \sin (\theta ) \\[0.07in] &+\frac{(f \sin (\theta )+\cos (\theta )) (f \cos (\theta )+R+x)}{\sin (\theta )-f \cos (\theta )}\Big).\end{array} \] From these values and some further algebra, we obtain \begin{equation} k(\tau_{\ell}-\tau_r) = -\frac{2 f k (f \cos (\theta ) (\sin (\theta )+R)+x \sin (\theta ))}{f^2 \cos ^2(\theta )-\sin ^2(\theta )}. \label{eq:jb:x-theta} \end{equation} To complete the proof, we note that the form of (\ref{eq:jb:x-theta}) allows us to isolate the subsystem \begin{equation} \left(\begin{array}{c} \dot x \\ \dot\theta\end{array}\right) = \left(\begin{array}{c} \cos\theta \\ k[\tau_{\ell}(x,\theta)-\tau_r(x,\theta)]\end{array}\right).
\label{eq:jb:reduced} \end{equation} It is easy to see that $(x,\theta)=(0,\pi/2)$ is a rest point for (\ref{eq:jb:reduced}), and indeed is the only rest point in the domain $\{(x,\theta)\; :\; -R<x<R;\; \varphi<\theta<\pi-\varphi\}$. We can linearize about this point to get the first order approximation \begin{equation} \left(\begin{array}{c} \dot{\delta x}\\ \dot{\delta\theta} \end{array}\right)= \left( \begin{array}{cc} 0 & -1 \\ 2 f k & -2 \left(k f^2+k R f^2\right) \\ \end{array}\right) \left(\begin{array}{c} \delta x\\ \delta\theta \end{array}\right). \label{eq:jb:linear1} \end{equation} The eigenvalues of the coefficient matrix are \begin{equation} -f^2k(1+R)\pm\sqrt{fk[f^3k(1+R)^2-2]}. \label{eq:jb:eigenvalues} \end{equation} These are always in the left half plane, proving that the coefficient matrix is Hurwitz, and from this the theorem follows. \begin{flushright}$\Box$\end{flushright} \smallskip \begin{remark} \rm Letting the angle $\varphi$ be as depicted in Fig.\ \ref{fig:RobotFigs}{\bf (b)}, the condition that the initial orientation lies in the open interval $\varphi<\theta_0<\pi-\varphi$ is necessary. Otherwise one of the walls will not register any features on one of the two image points (i.e., either $\pm 1$). A margin for safety requires that $\varphi+\xi<\theta_0<\pi-\varphi-\xi$ for some $\xi>0$. The values $\varphi$ and $\pi-\varphi$ are referred to as {\em critical heading angles}. \end{remark} \begin{remark} \rm {\em The qualitative dynamics of two-pixel steering} (\ref{eq:jb:reduced}). ({\em i}) By adjusting the gain $k$ in equation (\ref{eq:jb:tau-balance}), one has considerable control over the rate at which $(x,\theta)$ approaches $(0,\pi/2)$, corresponding to the vehicle aligning itself on the corridor center line. ({\em ii}) While the eigenvalues (\ref{eq:jb:eigenvalues}) are always in the left half plane, in the range $0<k<2/(f^3(1+R)^2)$ they are complex numbers, implying that the linearized system (\ref{eq:jb:linear1}) oscillates. Indeed, the vehicle undergoes small amplitude oscillatory motion settling onto the centerline trajectory. ({\em iii}) As $k$ increases through the critical value $2/(f^3(1+R)^2)$, the eigenvalues become real, negative, and quickly diverge from one another in magnitude. It is well known that when the eigenvalues are negative with differing magnitudes, the phase portrait has trajectories aligned with the eigenaxis of the smaller magnitude eigenvalue. This eigenaxis is a linear approximation to the tau-balance curve, $\tau_{\ell}(x,\theta)-\tau_r(x,\theta) = 0$, in the $(x$-$\theta)$-plane. \end{remark} \begin{remark} \rm {\em The effect of delays and quantization}. In laboratory implementations, the computations needed to implement the {\em sense--act} cycle in the steering law (\ref{eq:jb:tau-balance}) are complex and may introduce latency. Simulations of delays in the systems under discussion point to the need for soft gains $k$ in (\ref{eq:jb:reduced}). Sampling also imposes limits, as follows. \end{remark} \smallskip \begin{theorem} Consider the planar vehicle (\ref{eq:jb:BasicVehicle}) for which the steering law is of the sample-and-hold type: \begin{equation} u(t)=k[\tau_{\ell}(x(t_i),\theta(t_i))-\tau_r(x(t_i),\theta(t_i))], \ \ t_i\le t<t_{i+1}, \label{eq:jb:sampled} \end{equation} where the sampling instants $t_0<t_1<\dots$ are uniformly spaced with $t_{i+1}-t_i = h>0$.
Then for any sufficiently small sampling interval $h>0$, there is a range of values of the gain $0<k<k_{crit}$ such that the sampled control law (\ref{eq:jb:sampled}) asymptotically guides the vehicle with kinematics (\ref{eq:jb:BasicVehicle}) onto the center line between the corridor walls. \label{thm:jb:sampled} \end{theorem} \noindent{\em Proof:} \ As in the previous theorem, we assume that the forward speed is constant ($v=1$). We also assume a normalization of scales such that $f=1$. It is convenient to consider the angular coordinate $\phi=\theta-\pi/2$. In terms of this, we have \[ \dot\phi = k[\tau_{\ell}(x(t_i),\phi(t_i)+\pi/{2})-\tau_r(x(t_i),\phi(t_i)+{\pi}/{2})] \] on the interval $t_i\le t < t_{i+1}$. Given the explicit formulas for $\tau_{\ell}$ and $\tau_r$, and given that the right hand side of the above differential equation is constant, we have the following discrete time evolution \begin{equation} \phi(t_{i+1})=\phi(t_i) + h k\,\frac{ 2 \sin \phi (t_i) (R+\cos \phi(t_i ))-2 x(t_i) \cos \phi(t_i )}{\sin ^2\phi (t_i)-\cos ^2\phi(t_i) }. \end{equation} In other words, the discrete time evolution of the heading $\phi$ is given by iterating the $x$-dependent mapping \begin{equation} g(\phi)=\phi+ h k\,\frac{ 2 \sin \phi\, (R+\cos \phi)-2 x \cos \phi}{\sin ^2\phi-\cos ^2\phi }. \label{eq:jb:iterate} \end{equation} Differentiating, we obtain \begin{equation} g^{\prime}(\phi) = 1 + \frac{h k \left(-2 -3 R \cos\phi +R \cos 3 \phi +3 x \sin \phi +x \sin 3\phi \right)}{\left(\cos ^2\phi -\sin ^2\phi\right)^2 }. \label{eq:jb:quantized} \end{equation} The numerator is negative in the parameter range of interest, while the denominator is positive. Hence, we can choose $k$ sufficiently small that $g$ is a contraction on $-\pi/4<\phi<\pi/4$, uniformly in $x$ in the range $-R<x<R$. Thus the iterates of $\phi$ under the mapping (\ref{eq:jb:iterate}) converge to 0, and because $\dot x= -\sin\phi$, this proves the theorem. \ \ \ \ \ \ \ \hfill $\Box$
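The contraction argument above, together with item ({\em ii}) of the remark on qualitative dynamics, is easy to probe numerically. The following sketch (the values of $f$, $R$, $h$, $k$, and the initial condition are arbitrary choices for illustration) computes the eigenvalues of (\ref{eq:jb:linear1}) on either side of the critical gain and then iterates the sampled-data map (\ref{eq:jb:iterate}) while the lateral offset evolves as $\dot x=-\sin\phi$ over each hold interval.

\begin{verbatim}
import numpy as np

f, R = 1.0, 2.0
k_crit = 2.0 / (f**3 * (1 + R) ** 2)

# Eigenvalues of the linearization (eq:jb:linear1) below and above k_crit
for k in (0.5 * k_crit, 2.0 * k_crit):
    J = np.array([[0.0, -1.0], [2 * f * k, -2 * k * f**2 * (1 + R)]])
    print(k, np.linalg.eigvals(J))  # complex pair below k_crit, real above

# Sample-and-hold steering via the iterated map (eq:jb:iterate)
h, k = 0.05, 0.2
phi, x = 0.3, 0.5
for _ in range(2000):
    num = 2 * np.sin(phi) * (R + np.cos(phi)) - 2 * x * np.cos(phi)
    dphi = h * k * num / (np.sin(phi) ** 2 - np.cos(phi) ** 2)
    x -= h * np.sin(phi)   # offset drifts under the currently held heading
    phi += dphi
print(phi, x)  # both ~0: the vehicle settles onto the corridor centerline
\end{verbatim}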
The model characterized by Theorem \ref{th:jb:one} is thus somewhat robust to both delays and input sampling. It can also be shown to be quite robust with respect to gaze misalignment. (Think of one of the photoreceptors being out of its nominal position at $y=\pm1$.) Apart from the simple temporal quantization implicit in sampling, spatial quantization of the perceived environment takes on paramount importance when vision based steering algorithms are designed. In the corridor-following example above, if photoreceptors are located near the center of focus, the gaze will fall on wall points that may be at great distance from the current location. There is considerable uncertainty in registering features that are far away due to the fact that tiny differences in pixel location near the center of focus correspond to very large differences in the spatial location of the features along the corridor. Important open questions regarding optimal visual sampling of the field of view include how to balance input from very nearby features (which will have short time-to-transit and thus be relatively ephemeral), midrange features, and more distant features which may be important for the purpose of on-line replanning of motions. Part of the challenge is selecting the portion of the field-of-view that requires the agent's primary attention. In all cases, the environment as perceived by a camera is quantized into patches of pixels surrounding keypoints selected by feature detectors (e.g.\ SIFT, SURF, FAST, BRISK, and ORB from the OpenCV library). In the approach described above, {\em tau}-values associated with the keypoints are computed, and the steering algorithms take weighted averages as input to the steering laws. Whereas the idealized model of Theorem \ref{th:jb:one} gives precise input to center the vehicle between corridor walls, {\em tau}-values computed for the keypoints are subject to many types of errors, including a typical rate of around 30\% failing to be matched from one frame to the next, \cite{Canclini}. The reason that optical flow-based navigation works reasonably well is that if a large number of features are detected in each frame (around 2,000 in our implementations), these errors will average out. As shown in Fig.\ \ref{fig:SparseFlow}, however, there may be portions of streaming video that have too few detectable features to provide a reliable steering signal. In such cases, algorithms that exploit alternative environmental cues are invoked. Some of these are discussed in \cite{Kong,Corvese}, and others are the focus of ongoing research. Continuing with analysis of highly idealized models, in the next section we consider the benefits of control designs for systems with relatively high dimensional input and output spaces. \begin{figure}[h] \begin{center} \includegraphics[scale=0.2]{Paper-Fig.jpg} \end{center} \caption{Feature dropouts and sparse flow challenge visual navigation.} \label{fig:SparseFlow} \end{figure} \section{Control input and observed output---why more channels are better} Because they are easy and revealing to analyze, we consider linear time-invariant (LTI) systems with $m$ inputs and $q$ outputs, whose evolution and output are given by \begin{equation} \begin{array}{l} \dot x(t)=Ax(t) + Bu(t), \ \ \ x\in\mathbb{R}^n, \ \ u\in\mathbb{R}^m, \ {\rm and}\\[0.07in] y(t)=Cx(t), \ \ \ \ \ \ \ \ \ \ \ \ \ \ y\in\mathbb{R}^q.\end{array} \label{eq:jb:linear} \end{equation} As in \cite{Baillieul1}, we shall be interested in the evolution and output of (\ref{eq:jb:linear}) in which a portion of the input or output channels may or may not be available over any given subinterval of time. Among cases of interest, channels may intermittently switch in or out of operation. In all cases, we are explicitly assuming that $m > 1$ (typically $m \gg 1$) and $q > 1$ (typically $q \gg 1$). Before addressing the problem of channel intermittency in Section 4, it will be shown that by increasing the number of input channels, the control energy is reduced. To see this, suppose the system (\ref{eq:jb:linear}) is controllable and consider the problem of finding the control input $u_0$ that steers the system from $x_0\in\mathbb{R}^n$ to $x_1\in\mathbb{R}^n$ so as to minimize \[ \eta=\int_0^T\, \Vert u(t)\Vert^2\, dt. \] It is well known that $u_0$ may be given explicitly, and the optimal cost is \[ \eta_0 = (x_1-e^{AT}x_0)^TW_0(0,T)^{-1} (x_1-e^{AT}x_0), \] where $W_0$ is the controllability grammian: \begin{equation} W_0(0,T) = \int_0^T\, e^{A(T-s)}BB^T{e^{A(T-s)}}^T\, ds. \label{eq:jb:grammian} \end{equation} (Here there is a slight abuse of notation in letting $T$ denote both a quantity of time and the matrix transpose operator. See e.g.\ \cite{Brockett}.)
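A compact numerical rendering of this computation is given below (a sketch, not code from the paper; the system is the double integrator used in the example of the next subsection, with an arbitrarily chosen horizon). For $A$ the double-integrator matrix, $B=(0,1)^T$, $T=1$, and the transfer $(0,0)\to(1,0)$, the grammian (\ref{eq:jb:grammian}) has the closed form $\big(\begin{smallmatrix}T^3/3 & T^2/2\\ T^2/2 & T\end{smallmatrix}\big)$, so the printed minimum energy should be $12$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def grammian(A, B, T, steps=4000):
    # Midpoint-rule approximation of W_0(0,T) in Eq. (grammian)
    dt = T / steps
    W = np.zeros_like(A, dtype=float)
    for i in range(steps):
        E = expm(A * (T - (i + 0.5) * dt))
        W += E @ B @ B.T @ E.T * dt
    return W

def min_energy_cost(A, B, x0, x1, T):
    # eta_0 = z^T W_0(0,T)^{-1} z  with  z = x1 - e^{AT} x0
    z = x1 - expm(A * T) @ x0
    return z @ np.linalg.solve(grammian(A, B, T), z)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(min_energy_cost(A, B, np.zeros(2), np.array([1.0, 0.0]), 1.0))  # ~12.0
\end{verbatim}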
\smallskip \begin{theorem} Suppose the system (\ref{eq:jb:linear}) is controllable, with $A,B$ $n\times n$ and $n\times m$ matrices respectively. Let $u_0(t)\in\mathbb{R}^m$ be the optimal control steering the system from $x_0\in\mathbb{R}^n$ to $x_1\in\mathbb{R}^n$ having minimum cost $\eta_0$ as above. Let $\bar b\in\mathbb{R}^n$ and consider the augmented $n\times (m+1)$ matrix $\hat B=(B\vdots \bar b)$. The $(m+1)$-dimensional control input $\hat u_0(t)$ that steers \begin{equation} \dot x(t)=Ax(t) + \hat B \hat u(t) \label{eq:jb:augmented} \end{equation} from $x_0\in\mathbb{R}^n$ to $x_1\in\mathbb{R}^n$ so as to minimize \[ \hat\eta=\int_0^T\, \Vert \hat u(t)\Vert^2\, dt \] has optimal cost $\eta_1\le\eta_0$. \label{thm:jb:augmented} \end{theorem} \smallskip \noindent{\em Proof:} \ It is easy to see that the augmented controllability grammian \[ \begin{array}{rcl} \hat W(0,T) & = & \displaystyle\int_0^T\, e^{A(T-s)}\hat B\hat B^T{e^{A(T-s)}}^T\, ds\\[0.2in] & = & W_0(0,T) + W_p(0,T), \end{array} \] where \[ W_p(0,T) = \int_0^T\, e^{A(T-s)}\bar b\bar b^T{e^{A(T-s)}}^T\, ds. \] To simplify notation, denote these matrices by $W_0$ and $W_p$ respectively, and let $z=x_1-e^{AT}x_0$. Then the optimal cost of steering the augmented system is $\eta_1 = z^T[W_0+W_p]^{-1}z$. Since (\ref{eq:jb:linear}) is assumed to be controllable, $W_0$ has a positive definite square root; call it $S$. Then write $\eta_1$ as \[ \begin{array}{ll} z^T[W_0+W_p]^{-1}z & =z^T[S(I+S^{-1}W_pS^{-1})S]^{-1}z\\[0.07in] &=z^TS^{-1}(I+M)^{-1}S^{-1}z, \end{array} \] where $M=S^{-1}W_pS^{-1}$. Because $M$ is positive semidefinite and symmetric, there is a proper orthogonal matrix $U$ and a diagonal $\Delta$ with nonnegative entries such that \[ \begin{array}{ll} \eta_1 &= z^TS^{-1}(I+U^T\Delta U)^{-1}S^{-1}z \\[0.07in] &=z^TS^{-1}[U^T(I+\Delta)U]^{-1}S^{-1}z\\[0.07in] &=z^TS^{-1}U^T(I+\Delta)^{-1}US^{-1}z = y^T(I+\Delta)^{-1}y, \end{array} \] where $y=US^{-1}z$. Clearly $ y^T(I+\Delta)^{-1}y \le y^Ty$, and noting that $y^Ty=z^TS^{-1}U^TUS^{-1}z = z^TW_0^{-1}z$, this proves the theorem. \begin{flushright}$\Box$\end{flushright} \smallskip \begin{remark} \rm We assume that (\ref{eq:jb:linear}) is controllable so that $W_0$ is positive definite. $W_p$ need not be positive definite; if it is, however, the inequality of costs is strict: $\eta_1<\eta_0$. \end{remark} \begin{example} \rm A simple illustration of Theorem \ref{thm:jb:augmented} is the planar special case of (\ref{eq:jb:linear}) where $A=\Big(\begin{array}{cc} 0 & 1\\ 0 & 0 \end{array}\Big)$ and $B=\Big(\begin{array}{c} 0 \\ 1 \end{array}\Big)$, $\Big(\begin{array}{cc} 0 &1 \\ 1 & 0\end{array}\Big)$, or $\Big(\begin{array}{ccc} 0 & 1 & 1\\ 1& 0 & 1\end{array}\Big)$, representing the cases of one, two, or three input channels. The system is ``optimally'' steered from the origin $(0,0)$ to points on the unit circle $(\cos\phi,\sin\phi)$ so as to minimize the $L_2$-norm of the control input. \begin{figure}[h] \begin{center} \includegraphics[scale=0.43]{Channels.jpg} \end{center} \caption{The special case of (\ref{eq:jb:linear}) for a planar system with one, two and three input channels. {\bf (a),(b)} and {\bf (c)} are trajectory plots of the one, two, and three channel systems corresponding to goal points at $(1,0),(0,1),(-1,0)$, and $(0,-1)$. {\bf (d)} and {\bf (e)} plot the costs of reaching points as a function of angular coordinate $\phi$.} \label{fig:Channels} \end{figure} \end{example}
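With the helpers grammian() and min\_energy\_cost() from the sketch above in scope, the monotone cost reduction promised by Theorem \ref{thm:jb:augmented} can be checked directly for the three coefficient matrices of the example (here for the goal point $(1,0)$, one of the four shown in the figure):

\begin{verbatim}
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
x0, x1 = np.zeros(2), np.array([1.0, 0.0])              # goal (1,0), phi = 0
for B in (np.array([[0.0], [1.0]]),                      # one channel
          np.array([[0.0, 1.0], [1.0, 0.0]]),            # two channels
          np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])): # three channels
    print(B.shape[1], min_energy_cost(A, B, x0, x1, 1.0))
# -> 12.0, then ~0.923 (= 12/13), then 0.6: adding the second channel
#    lowers the cost dramatically, the third much less so.
\end{verbatim}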
Fig.\ \ref{fig:Channels} illustrates optimal steering of (\ref{eq:jb:linear}) with these coefficient matrices to four points on the unit circle. Trajectories for the one, two, and three channel cases are depicted in panels {\bf (a),(b),(c)} respectively. Fig.\ \ref{fig:Channels}(a) is reduced in scale, and one sees a circuitous motion to the goal point. Whereas the system is controllable using only the first channel of $\Big(\begin{array}{cc} 0 &1 \\ 1 & 0\end{array}\Big)$, it is not controllable using only the second channel. Nevertheless, adding the second channel significantly changes the motion and dramatically lowers the energy cost $\eta$. As Theorem \ref{thm:jb:augmented} guarantees, the cost is further reduced by adding the third channel (panel {\bf (e)}), but the cost improvement is much less dramatic. \section{The Case of Large Numbers of Standard Inputs} Having noted the control energy advantages of having large numbers of input channels, it is of interest in the context of neuroinspired models to consider the robustness inherent in having redundant channels as well as the possibility of designs in which groups of control primitives are chosen from catalogues and aggregated on the fly to achieve desired ends. Biological motor control in higher animals is governed by networks of interconnected neurons, and drawing inspiration from the neural paradigm as well as from our recent work on {\em standard parts} control and control communication complexity (\cite{Wong1,Wong2}), we consider linear systems (\ref{eq:jb:linear}) in which control is achieved by means of selecting standard inputs from a catalogue of control functions that is large enough to ensure that the system can carry out a prescribed set of tasks. Using a class of nonlinear models, this general approach was carried out with catalogues of sinusoids in \cite{Wong1} and Fourier series in \cite{Wong2}. Here we briefly introduce dictionaries of {\em set-point stabilized} control primitives. Each control input will be of the form $u_j(t)=v_j+k_{j1}x_1 + \dots + k_{jn}x_n$, where the gains $k_{ji}$ are chosen to make the matrix $A+BK$ Hurwitz, and the $v_j$'s are then chosen to make the desired goal point $x_g\in\mathbb{R}^n$ an equilibrium of (\ref{eq:jb:linear}). Thus, given $x_g$, once the gain matrix $K$ has been chosen, the vector $v$ must be found to satisfy the equation \begin{equation} (A+BK)x_g+Bv=0. \label{OffSet} \end{equation} \begin{proposition} Let (\ref{eq:jb:linear}) be controllable, and let the $m\times n$ matrix $K$ be chosen such that the eigenvalues of $A+BK$ are in the open left half plane and the vector $v$ is chosen to satisfy (\ref{OffSet}). Then the $m$ control inputs \begin{equation} u_j(t)=v_j + k_{j1}x_1(t) + \cdots + k_{jn}x_n(t) \label{eq:jb:standard} \end{equation} steer (\ref{eq:jb:linear}) toward the goal $x_g$. \end{proposition} We note that under the assumption that $m>n$, the values of both the matrix $K$ and the vector $v$ are underdetermined. Hence there is some flexibility in parametric exploration of the design (\ref{eq:jb:standard}). Assuming the matrix $B$ has full rank $n$, we can define the $m\times n$ matrix $R=B^T(BB^T)^{-1}A$. Suppose $x_g\in\mathbb{R}^n$ is a desired goal to which we wish to steer the system (\ref{eq:jb:linear}).
Write \begin{equation} (R+K)x_g +\left(\begin{array}{c} v_1\\ \vdots\\ v_m\end{array} \right)= \left(\begin{array}{c} 0\\ \vdots\\ 0\end{array} \right), \label{OffSet1} \end{equation} where \[ K=\left(\begin{array}{ccc} k_{11} & \dots & k_{1n}\\ \vdots & \dots & \vdots\\ k_{m1} & \dots & k_{mn} \end{array}\right) \] is an $m\times n$ matrix of gain coefficients determined so as to place the poles of $A+BK$ at desired positions in the open left half plane. This determines the offset vector $v$ uniquely, but the resulting control inputs will not have the property of being robust with respect to channel intermittency. To examine the effect of channel unavailability and channel intermittency, let $P$ be an $m\times m$ diagonal matrix whose diagonal entries are $k$ 1's and $m-k$ 0's. For each of the $2^m$ such projection matrices, we shall be interested in cases where $(A,BP)$ is a controllable pair. We have the following: \begin{definition}\rm Let $P$ be such a projection matrix with $k$ 1's on the main diagonal. The system (\ref{eq:jb:linear}) is said to be $k$-{\em channel controllable with respect to} $P$ if for all $T>0$, the matrix \[ W_P(0,T)= \int_0^T\, e^{A(T-s)}BPB^T{e^{A(T-s)}}^T\, ds \] is nonsingular. \label{def:jb:One} \end{definition} \smallskip \begin{example} \rm Consider again the three input system \begin{equation} \left(\begin{array}{c} \dot x_1 \\ \dot x_2\end{array}\right) =\left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array} \right) \left(\begin{array}{c} x_1 \\ x_2\end{array}\right) +\left( \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ \end{array} \right). \label{eq:jb:kChannel} \end{equation} Adopting the notation \[ P[i,j,k]= \left( \begin{array}{ccc} i & 0 & 0 \\ 0 & j & 0 \\ 0 & 0 &k \end{array}\right), \] the system (\ref{eq:jb:kChannel}) is 3-channel controllable with respect to $P[1,1,1]$; it is 2-channel controllable with respect to $P[1,1,0],P[1,0,1],$ and $P[0,1,1]$. It is 1-channel controllable with respect to $P[1,0,0]$ and $P[0,0,1]$, but it fails to be 1-channel controllable with respect to $P[0,1,0]$. \end{example} It is natural to ask whether systems that are $k$-channel controllable are also $k$-{\em channel stabilizable} in the sense that loss of certain channels will not affect either the goal equilibrium or its stability. Once the $k_{ij}$'s are chosen as stabilizing feedback gains, the control offsets $v_i$ can be determined by solving (\ref{OffSet}). With $m>n$, this is an underdetermined system, a particular solution of which is given by (\ref{OffSet1}). To exploit the advantages of a large number of control input channels, we turn our attention to using the extra degrees of freedom in specifying the control offsets $v_1,\dots,v_m$ so as to make (\ref{eq:jb:linear}) resilient in the face of channels being intermittently unavailable. To see where this may be useful, consider using (\ref{OffSet1}) to steer (\ref{eq:jb:kChannel}) toward goal points on the unit circle. We compare $(\cos\phi,\sin\phi)$ when $\phi=0$ and $\phi=\frac{\pi}2$. If any two of the three input channels are operating, then the control inputs defined by these $k_{ij}$'s and $v_i$'s steer the system to the $\phi=0$ target $(1,0)$. But if any of the three channels is missing, these controls will fail to approach the target $(0,1)$. These cases are illustrated in Fig.\ \ref{fig:jb:2Channel}.
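For an LTI pair, nonsingularity of $W_P(0,T)$ for $T>0$ is equivalent to the Kalman rank condition on $(A,BP)$, which yields a one-line test. The sketch below (the helper name is hypothetical) reproduces the bookkeeping of the example:

\begin{verbatim}
import numpy as np
from itertools import product

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])

def k_channel_controllable(A, B, diag):
    # Kalman rank test on (A, BP), equivalent to W_P(0,T) nonsingular
    BP = B @ np.diag(diag)
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ BP for i in range(n)])
    return np.linalg.matrix_rank(C) == n

for diag in product([1, 0], repeat=3):
    print(diag, k_channel_controllable(A, B, diag))
# Setting aside the all-zero projection, the only failure is P[0,1,0]:
# channel 2 alone (column (1,0)^T) drives only x_1, leaving x_2 unreachable.
\end{verbatim}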
We note that the offset values $v_i$ that satisfy (\ref{OffSet1}) also satisfy $B\left[(\hat A + K)x_g +v\right]=0$, where $\hat A$ is any $m\times n$ matrix satisfying $B\hat A = A$. Under the assumption that $B$ has full rank $n$, such matrix solutions can be found, although such an $\hat A$ will not be unique. Once $\hat A$ and the gain matrix $K$ have been chosen, the offset vector $v$ is determined by the equation \begin{equation} (\hat A + K)x_g+v=0. \label{OffSet2} \end{equation} The following gives conditions under which $\hat A$ may be chosen to make (\ref{eq:jb:linear}) resilient to channel dropouts. \begin{theorem} Consider the linear system (\ref{eq:jb:linear}) in which the number of control inputs, $m$, is strictly larger than the dimension of the state, $n$, and in which rank $B=n$. Let the gain $K$ be chosen such that $A+BK$ is Hurwitz, and assume that \begin{description} \item {i\rm )} $P$ is a projection of the form considered in Definition \ref{def:jb:One} and (\ref{eq:jb:linear}) is ${\ell}$-channel controllable with respect to $P$; \item{ii\rm )} $A+BPK$ is Hurwitz; \item{iii\rm )} the solution $\hat A$ of $B\hat A=A$ is invariant under $P$---i.e., $P\hat A = \hat A$; and \item{iv\rm )} $BP$ has rank $n$. \end{description} Then the control inputs defined by (\ref{eq:jb:standard}) steer (\ref{eq:jb:linear}) toward the goal point $x_g$ whether or not the $(m-\ell)$ input channels that are mapped to zero by $P$ are available. \label{thm:jb:four} \end{theorem} \noindent{\em Proof:} \ Under the stated assumptions, the point $x_g$ is a stable rest point of (\ref{eq:jb:linear}). If the $(m-\ell)$ channels associated with the projection $P$ are unavailable, the evolution of (\ref{eq:jb:linear}) becomes \[ \dot x=Ax+BPu = B(\hat Ax+Pu). \] Because $\hat A$ is invariant under $P$, this may be rewritten as \[ \dot x=BP(\hat A x +u), \] and with $v$ defined by (\ref{OffSet2}), this can also be rendered as \[ \dot x = (A+BPK)x + BPv. \] The goal $x_g$ is the unique stable rest point of this system. \begin{flushright}$\Box$\end{flushright} \medskip \begin{example} \rm For (\ref{eq:jb:kChannel}) (the system considered in Example 2), define control inputs of the form (\ref{eq:jb:standard}) where the gains are chosen to simultaneously stabilize the three two-input systems \[ \left( \begin{array}{c} \dot x_1\\ \dot x_2 \end{array} \right) = \left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array} \right) \left( \begin{array}{c} x_1\\ x_2 \end{array} \right) + \left(\begin{array}{cc} k_{21} & k_{22}\\ k_{11} & k_{12} \end{array} \right)\left(\begin{array}{c} x_1\\ x_2 \end{array}\right), \] \[ \left( \begin{array}{c} \dot x_1\\ \dot x_2 \end{array} \right) = \left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array} \right) \left( \begin{array}{c} x_1\\ x_2 \end{array} \right) + \left(\begin{array}{cc} k_{31} & k_{32}\\ k_{11} + k_{31} & k_{12} +k_{32} \end{array} \right)\left(\begin{array}{c} x_1\\ x_2 \end{array}\right), \] and \[ \left( \begin{array}{c} \dot x_1\\ \dot x_2 \end{array} \right) = \left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array} \right) \left( \begin{array}{c} x_1\\ x_2 \end{array} \right) + \left(\begin{array}{cc} k_{21} + k_{31} & k_{22} + k_{32}\\ k_{31} & k_{32} \end{array} \right)\left(\begin{array}{c} x_1\\ x_2 \end{array}\right).
\] \smallskip \noindent For any choice of LHP eigenvalues, this requires solving six equations in six unknowns, and choosing to place the eigenvalues in all cases at $s_1=-1,s_2=-1$ yields the values $k_{11}=0 ,k_{12}= -1,k_{21}= -1,k_{22}= 0, k_{31}= -1/2,k_{32}= -1/2$. We note that for these choices of gain parameters and controls (\ref{eq:jb:standard}), the full three-input closed loop system also has all poles in the open left half plane. As in Example 1, the desired goal points will be on the unit circle: $(\cos\phi,\sin\phi)$. Once the $k_{ij}$'s are determined, we can use (\ref{OffSet}) to solve for $v_1,v_2,v_3$ as functions of $\phi$. A particular solution to (\ref{OffSet}) is given by (\ref{OffSet1}), and for this choice of stabilizing $k_{ij}$'s, the offsets are $(v_1,v_2,v_3) = (4/3\sin\phi,\cos\phi-2/3\sin\phi,1/2\cos\phi+1/6\sin\phi)$. As noted above, these values do not generally steer (\ref{eq:jb:kChannel}) toward the goal in cases where an input channel is unavailable. An alternative design approach utilizing Theorem \ref{thm:jb:four} is to rewrite (\ref{OffSet}) as \[ B\left[(\hat A + K)x_g+v\right]=0, \] where $\hat A$ is a solution of $B\hat A = A$. For the system (\ref{eq:jb:kChannel}), these solutions constitute a two parameter family: \[ \hat A= \left(\begin{array}{cc} -s & -\frac13 - t\\ -s & \frac23 - t \\ s & \frac13 +t \end{array} \right), \] with the values $s=t=0$ giving the particular solution $\hat A = R$ of equation (\ref{OffSet1}). Checking for the invariance of $\hat A$ under the various projections $P[i,j,k]$, we find that $\hat A=\left(\begin{array}{cc} 0 & 0\\ 0 & 1\\ 0 & 0\end{array} \right)$ is invariant under $P[1,1,1], \ P[0,1,1],$ and $P[1,1,0]$, but not under $P[1,0,1]$. The controls defined by the given $k_{ij}$'s and satisfying (\ref{OffSet2}) for this choice of $\hat A$ have the offsets $(v_1,v_2,v_3)=(\sin\phi,\cos\phi-\sin\phi,\frac12\cos\phi+\frac12\sin\phi)$. Using Theorem \ref{thm:jb:four} and as illustrated in Fig.\ \ref{fig:jb:2Channel}, the system (\ref{eq:jb:linear}) steered by the control inputs defined by (\ref{OffSet2}) reaches the desired goal state if either the first or the third channel drops out, but may fail to reach the target if the second channel drops out. \end{example} \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{CDC19Fig.jpg} \caption{Resilience to channel dropouts is illustrated. In (a), the controls defined by (\ref{OffSet}) are seen to reach the goal point $(1,0)$ whether or not the channels for all three inputs are available. In (b), however, the controls defined by (\ref{OffSet}) fail to reach the goal point $(0,1)$ if any channel becomes unavailable. For the goal point $(1,0)$, both (\ref{OffSet}) and (\ref{OffSet2}) define the same controls, but for the goal point $(0,1)$, the controls defined by (\ref{OffSet2}) are resilient to dropouts of either the first or third channel, but not to dropouts of the second.} \end{center} \label{fig:jb:2Channel} \end{figure}
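The claims of the example are easy to verify numerically. The sketch below (the matrices are exactly those of the example; the goal $\phi=\pi/2$ is one of the two discussed) checks that $\hat A$ solves $B\hat A=A$, tests its invariance under the projections, and confirms that the goal survives a dropout of the first channel:

\begin{verbatim}
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
K = np.array([[0.0, -1.0], [-1.0, 0.0], [-0.5, -0.5]])
A_hat = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

print(np.allclose(B @ A_hat, A))            # True: A_hat solves B A_hat = A
for diag in ([0, 1, 1], [1, 1, 0], [1, 0, 1]):
    print(diag, np.allclose(np.diag(diag) @ A_hat, A_hat))  # True, True, False

x_g = np.array([0.0, 1.0])        # goal (0,1), i.e., phi = pi/2
v = -(A_hat + K) @ x_g            # offsets from Eq. (OffSet2): [1, -1, 0.5]

P = np.diag([0, 1, 1])            # first channel drops out
Acl = A + B @ P @ K
print(np.linalg.eigvals(Acl))                   # [-1, -1]: still Hurwitz
print(np.allclose(Acl @ x_g + B @ P @ v, 0.0))  # True: x_g remains a rest point
\end{verbatim}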
We conclude by asking whether a small number of standard inputs can be modulated to produce motion toward a goal that differs from the goal points toward which any single input steers the system. For instance, if we consider the control inputs of the previous example that were designed to reach the goal points $(1,0)$ and $(0,1)$, we can examine switching between these with the objective of steering the system to the point $(\frac{1}{\sqrt 2},\frac{1}{\sqrt 2})$. We have considered simple modulation in which the inputs are switched in and out according to Markovian switching schemes with a variety of probabilities of switching. Generally speaking, the results were not encouraging, but using Markovian switching between the second and third channels produced a trajectory that came within 0.002 units of the goal $(1/\sqrt{2},1/\sqrt{2})$. Work on modulation of (possibly large) finite sets of standard inputs is ongoing. \section{Conclusions and further work} The paper has illustrated robustness and functionality advantages of having large numbers of input and output channels as well as benefits in being able to select activation patterns of simple control actions chosen from a catalogue. The examples have been low dimensional, but the qualitative features are expected to be present in higher dimensional systems. Further research is needed, however, to understand general approaches to feedback control and observer synthesis based on learning activation patterns in high dimensions. Given our projection operator formulation of the channel intermittency problem, we expect that subspace methods (\cite{Ho}) and the types of learning that have been successful in automated text recognition and visual search (\cite{Sivic}) will be of use going forward. Much of the work has been motivated by a desire to understand how the control algorithms we have proposed for optical flow-based robot navigation degrade when visual features become sparse and when it is advisable to switch from, say, flow balancing along corridor walls to some other algorithm that depends on different visual cues. The common thread is the goal of ``learning to move by moving to learn.'' \smallskip {\sc Acknowledgment:} This work has benefitted enormously from conversations with J. Paul Seebacher, Laura Corvese, and Shuai Wang.
\section{Introduction} Recent developments in human embodied cognition posit a learning and understanding mechanism called ``conceptual metaphor'' \citep{lakoff2012}, where knowledge is derived from repeated patterns of experience. Neural circuits in the brain are substrates for these metaphors \citep{lakoff2014} and, therefore, are the drivers of semantics. Semantic grounding can be understood as the inferences which are instantiated as activation of these learned neural circuits. While not using the same abstraction of conceptual metaphor, other theories of embodied cognition also cast semantic memory and inference as encoding and activation of neural circuitry, differing only in terms of which brain areas are the core components of the biological semantic system \citep{kiefer2012,ralph2017}. The common factor between these accounts of embodied cognition is the existence of transmodal knowledge representations, in the sense that circuits are learned in a modality-agnostic way. This means that correlations between sensory, motor, linguistic, and affective embodied experiences create circuits connecting different modality-specific neuron populations. In other words, the statistical structure of human multimodal experience, which is captured and encoded by the brain, is what defines semantics. Music semantics is no exception, also being embodied and, thus, musical concepts convey meaning in terms of somatosensory and motor concepts \citep{koelsch2019,kreyn2018,leman2014}. The statistical and multimodal imperative for human cognition has also been hinted at, at least in some form, by research across various disciplines, such as in aesthetics \citep{cook2000,davies1994,kivy1980,kurth1991,scruton1997}, semiotics \citep{azcarate2011,bennett2008,blanariu2013,lemke1992}, psychology \citep{brown2011,dehaene2007,eitan2006,eitan2011,frego1999,krumhansl1997,larson2004,roffler1968,sievers2013,silver2007,styns2007,wagner1981}, and neuroscience \citep{fujioka2012,janata2012,koelsch2019,nakamura1999,nozaradan2011,penhune1998,platel1997,spence1997,stein1995,widmann2004,zatorre1994}, namely, for natural language, music, and dance. In this work, we are interested in the semantic link between music and dance (movement-based expression). Therefore, we leverage this multimodal aspect of cognition by modeling expected semiotic correlations between these modalities. These correlations are expected because they are mainly surface realizations of cognitive processes following embodied cognition. This framework implies that there is a degree of determinism underlying the relationship between music and dance, that is, dance design and performance are heavily shaped by music. This evident and intuitive relationship is even captured in some natural languages, where words for music and dance are either synonyms or the same \citep{baily1985}. In this work, we claim that, just as human semantic cognition is based on multimodal statistical structures, joint semiotic modeling of music and dance, through statistical computational approaches, is expected to shed some light on the semantics of these modalities as well as provide intelligent technological applications in areas such as multimedia production. That is, we can automatically learn the symbols/patterns (semiotics), encoded in the data representing human expression, which correlate across several modalities.
Since this correlation defines and is a manifestation of underlying cognitive processes, capturing it effectively uncovers semantic structures for both modalities. Following the calls for technological applications based on sensorimotor aspects of semantics \citep{leman2010,matyja2016}, this work leverages semiotic correlations between music and dance, represented as audio and video, respectively, in order to learn latent cross-modal representations which capture underlying semantics connecting these two modes of communication. These representations are quantitatively evaluated in a cross-modal retrieval task. In particular, we perform experiments on a dataset of 592 music audio-dance video pairs, using \acp{MVNN}, and report 75\% rank accuracy and 57\% pair accuracy instance-level retrieval performances and 26\% \ac{MAP} class-level retrieval performance, which are all statistically very significant effects (p-values $<0.01$). We interpret these results as further evidence for embodied cognition-based music semantics. Potential end-user applications include, but are not limited to, the automatic retrieval of a song for a particular dance or choreography video and vice-versa. To the best of our knowledge, this is the first instance of such a joint music-dance computational model, capable of capturing semantics underlying these modalities and providing a connection between machine learning of these multimodal correlations and embodied cognition perspectives. The rest of this paper is structured as follows: Section \ref{sec:related} reviews related work on embodied cognition, semantics, and semiotics, motivating this approach based on evidence taken from research in several disciplines; Section \ref{sec:setup} details the experimental setup, including descriptions of the evaluation task, \ac{MVNN} model, dataset, features, and preprocessing; Section \ref{sec:results} presents the results; Section \ref{sec:discussion} discusses the impact of these results; and Section \ref{sec:conclusions} draws conclusions and suggests future work. \section{Related work} \label{sec:related} Conceptual metaphor \citep{lakoff2012} is an abstraction used to explain the relational aspect of human cognition as well as its biological implementation in the brain. Experience is encoded neurally and frequent patterns or correlations encountered across many experiences define conceptual metaphors. That is, a conceptual metaphor is a link established in cognition (often subconsciously) connecting concepts. An instance of such a metaphor implies a shared meaning of the concepts involved. Which metaphors get instantiated depends on the experiences had during a lifetime as well as on genetically inherited biological primitives (which are also learned based on experience, albeit across evolutionary time scales). These metaphors are physically implemented as neural circuits in the brain which are, therefore, also learned based on everyday experience. The learning process at the neuronal level of abstraction is called ``Hebbian learning'', where ``neurons that fire together, wire together'' is the motto \citep{lakoff2014}. In this theory, termed \ac{NTTL}, semantic grounding, which is understood as the set of semantic inferences, manifests in the brain as firing patterns of the circuits encoding such metaphorical inferences. These semantics are, therefore, transmodal: patterns of multimodal experience dictate which circuits are learned.
Consequently, semantic grounding triggers multimodal inferences in a natural, often subconscious, way. Central to this theory is the fact that grounding is rooted in primitive concepts, that is, inference triggers the firing of neuron populations responsible for perception and action/coordination of the material body interacting in the material world. These neurons encode concepts like movement, physical forces, and other bodily sensations, which are mainly located in the somatosensory and sensorimotor systems \citep{desai2011,guevara2018,koelsch2019,lakoff2014}. Other theories, such as the \ac{CSC} \citep{ralph2017}, share this core multimodal aspect of cognition but defend that a transmodal hub is located in the \acp{ATL} instead. \cite{kiefer2012} review and compare several semantic cognition theories and argue in favor of the embodiment views of conceptual representations, which are rooted in transmodal integration of modality-specific (e.g., sensory and motor) features. In the remainder of this section, we review related work providing evidence for the multimodal nature of cognition and the primacy of primitive embodied concepts in music. Aesthetics suggests that musical structures evoke emotion through isomorphism with human motion \citep{cook2000,davies1994,kivy1980,scruton1997} and that music is a manifestation of a primordial ``kinetic energy'' and a play of ``psychological tensions'' \citep{kurth1991}. \cite{blanariu2013} claims that, even though the design of choreographies is influenced by culture, its aesthetics are driven by ``pre-reflective'' experience, i.e., unconscious processes driving body movement expression. The choreographer interprets the world (e.g., a song) via ``kinetic thinking'' \citep{laban1960}, which is materialized in dance in such a way that its surface-level features retain this ``motivating character'' or ``invoked potential'' \citep{peirce1991}, i.e., the conceptual metaphors behind the encoded symbols can still be accessible. The symbols range from highly abstract cultural encodings to more concrete patterns, such as the movement patterns in space and time found in abstract (e.g., non-choreographed) dance \citep{blanariu2013}. \cite{bennett2008} characterizes movement and dance semantics as being influenced by physiological, psychological, and social factors and as based on space and forces primitives. In music, semantics is encoded symbolically in different dimensions (such as timbral, tonal, and rhythmic) and levels of abstraction \citep{juslin2013,schlenker2017}. These accounts of encoding of meaning imply a conceptual semantic system which supports several denotations \citep{blanariu2013}, i.e., what was also termed an ``underspecified'' semantics \citep{schlenker2017}. The number of possible denotations for a particular song can be reduced when considering accompanying communication channels, such as dance, video, and lyrics \citep{schlenker2017}. Natural language semantics is also underspecified according to this definition, albeit to a much lower degree. Furthermore, \cite{azcarate2011} emphasizes the concept of ``intertextuality'' as well as text being a ``mediator in the semiotic construction of reality''. Intertextuality refers to the context in which a text is interpreted, allowing meaning to be assigned to text \citep{lemke1992}. This context includes other supporting texts but also history and culture as conveyed by the whole range of semiotic possibilities, i.e., via other modalities \citep{lemke1992}.
That is, textual meaning is also derived via multimodal inferences, which improve the efficacy of communication. This ``intermediality'' is a consequence of human cognitive processes based on relational thinking (conceptual metaphor) that exhibit a multimodal and contextualized inferential nature \citep{azcarate2011}. \cite{peirce1991} termed this capacity to both encode and decode symbols, via semantic inferences, ``abstractive observation'', which he considered to be a feature required to learn and interpret by means of experience, i.e., required for being an ``intelligent consciousness''. Human behaviour reflects this fundamental and multimodal aspect of cognition, as shown by psychology research. For instance, \cite{eitan2011} found several correlations between music dimensions and somatosensory-related concepts, such as sharpness, weight, smoothness, moisture, and temperature. People synchronize walking tempo to the music they listen to, and this is thought to indicate that the perception of musical pulse is internalized in the locomotion system \citep{styns2007}. The biological nature of the link between music and movement is also suggested in studies that observed pitch height associations with vertical directionality in 1-year-old infants \citep{wagner1981} and with perceived spatial elevation in congenitally blind subjects and 4- to 5-year-old children who did not verbally make those associations \citep{roffler1968}. Tension ratings performed by subjects independently for either music or a corresponding choreography yielded correlated results, suggesting tension fluctuations are isomorphically manifested in both modalities \citep{frego1999,krumhansl1997}. \cite{silver2007} showed that the perception of ``beat'' is transferable across music and movement for humans as young as 7 months old. \cite{eitan2006} observed a kind of music-kinetic determinism in an experiment where music features were consistently mapped onto kinetic features of visualized human motion. \cite{sievers2013} found further empirical evidence for a shared dynamic structure between music and movement in a study that leveraged a common feature between these modalities: the capacity to convey affective content. Experimenters had human subjects independently control the shared parameters of a probabilistic model for generating either piano melodies or bouncing ball animations, according to specified target emotions: angry, happy, peaceful, sad, and scared. Similar emotions were correlated with similar slider configurations across both modalities and different cultures: American and Kreung (in a rural Cambodian village which maintained a high degree of cultural isolation). The authors argue that the isomorphic relationship between these modalities may play an important role in evolutionary fitness and suggest that music processing in the brain ``recycles'' \citep{dehaene2007} other areas evolved for older tasks, such as spatiotemporal perception and action \citep{sievers2013}. \cite{brown2011} suggest that this capacity to convey affective content is the reason why music and movement are more cross-culturally intelligible than language. A computational model for melodic expectation, which generated melody completions based on tonal movement driven by physical forces (gravity, inertia, and magnetism), outperformed every human subject, based on intersubject agreement \citep{larson2004}, further suggesting semantic inferences between concepts related to music and movement/forces.
There is also neurological evidence for multimodal cognition and, in particular, for an underlying link between music and movement. Certain brain areas, such as the superior colliculus, are thought to integrate visual, auditory, and somatosensory information \citep{spence1997,stein1995}. \cite{widmann2004} observed evoked potentials when an auditory stimulus was presented to subjects together with a visual stimulus that infringed expected spatial inferences based on pitch. The engagement of visuospatial areas of the brain during music-related tasks has also been extensively reported \citep{nakamura1999,penhune1998,platel1997,zatorre1994}. Furthermore, neural entrainment to beat has been observed as $\beta$ oscillations across auditory and motor cortices \citep{fujioka2012,nozaradan2011}. Moreover, \cite{janata2012} found a link between the feeling of ``being in the groove'' and sensorimotor activity. \cite{kreyn2018} also explains music semantics from an embodied cognition perspective, where tonal and temporal relationships in music artifacts convey embodied meaning, mainly via modulation of physical tension. These tonal relationships consist of manipulations of tonal tension, a core concept in musicology, in a tonal framework (musical scale). Tonal tension is physically perceived by humans as young as one-day-old babies \citep{virtala2013}, which further points to the embodiment of music semantics, since tonal perception is mainly biologically driven. The reason for this may be the ``principle of least effort'', where consonant sounds consisting of more harmonic overtones are more easily processed and compressed by the brain than dissonant sounds, creating a more pleasant experience \citep{bidelman2009,bidelman2011}. \cite{leman2007} also emphasizes the role of kinetic meaning as a translator between structural features of music and semantic labels/expressive intentions, i.e., corporeal articulations are necessary for interpreting music. Semantics are defined by the mediation process when listening to music, i.e., the human body and brain are responsible for mapping from the physical modality (audio) to the experienced modality \citep{leman2010}. This mediation process is based on motor patterns which regulate mental representations related to music perception. This theory, termed \ac{EMC}, also supports the idea that semantics is motivated by affordances (action), i.e., music is interpreted in a (kinetic) way that is relevant for functioning in a physical environment. Furthermore, \ac{EMC} also states that decoding music expressiveness in performance is a sense-giving activity \citep{leman2014}, which falls in line with the learning nature of \ac{NTTL}. The \ac{PC} framework of \cite{koelsch2019} also points to the involvement of transmodal neural circuits in both prediction and prediction error resolution (active inference) of musical content. The groove aspect of music perception entails an active engagement in terms of proprioception and interoception, where sensorimotor predictions are inferenced (by ``mental action''), even without actually moving. In this framework, both sensorimotor and autonomic systems can also be involved in resolution of prediction errors. Recently, \cite{pereira2018} proposed a method for decoding neural representations into statistically-modeled semantic dimensions of text. This is relevant because it shows statistical computational modeling (in this instance, ridge regression) is able to robustly capture language semantics in the brain, based on \ac{fMRI}. 
This language-brainwaves relationship is an analogue to the music-dance relationship in this work. The main advantage is that, theoretically, brain activity will directly correlate to stimuli, assuming we can perfectly decode it. Dance, however, can be viewed as an indirect representation, a kinetic proxy for the embodied meaning of the music stimulus, which is assumed to be encoded in the brain. This approach provides further insights motivating embodied cognition perspectives, in particular, to its transmodal aspect. \ac{fMRI} data was recorded for three different text concept presentation paradigms: using it in a sentence, pairing it with a descriptive picture, and pairing it with a word cloud (several related words). The best decoding performance across individual paradigms was obtained with the data recorded in the picture paradigm, illustrating the role of intermediality in natural language semantics and cognition in general. Moreover, an investigation into which voxels were most informative for decoding revealed that they were from widely distributed brain areas (language 21\%, default mode 15\%, task-positive 23\%, visual 19\%, and others 22\%), as opposed to being focalized in the language network, further suggesting an integrated semantic system distributed across the whole brain. A limitation of that approach in relation to the one proposed here is that regression is performed for each dimension of the text representation independently, failing to capture how all dimensions jointly covary across both modalities. \section{Experimental setup} \label{sec:setup} As previously stated, multimedia expressions referencing the same object (e.g., audio and dance of a song) tend to display semiotic correlations reflecting embodied cognitive processes. Therefore, we design an experiment to evaluate how correlated these artifact pairs are: we measure the performance of cross-modal retrieval between music audio and dance video. The task consists of retrieving a sorted list of relevant results from one modality, given a query from another modality. We perform experiments in a 4-fold cross-validation setup and report pair and rank accuracy scores (as done by \cite{pereira2018}) for instance-level evaluation and \ac{MAP} scores for class-level evaluation. The following sections describe the dataset (Section \ref{sub:dataset}), features (Section \ref{sub:features}), preprocessing (Section \ref{sub:preprocessing}), \ac{MVNN} model architecture and loss function (Section \ref{sub:model}), and evaluation details (Section \ref{sub:evaluation}). \subsection{Dataset} \label{sub:dataset} We ran experiments on a subset of the \emph{Let's Dance} dataset of 1000 videos of dances from 10 categories: ballet, breakdance, flamenco, foxtrot, latin, quickstep, square, swing, tango, and waltz \citep{castro2018}. This dataset was created in the context of dance style classification based on video. Each video is 10s long and has a rate of 30 frames per second. The videos were taken from YouTube at 720p quality and include both dancing performances and practicing. We used only the audio and pose detection data (body joint positions) from this dataset, which was extracted by applying a pose detector \citep{wei2016} after detecting bounding boxes in a frame with a real-time person detector \citep{redmon2016}. After filtering out all instances which did not have all pose detection data for 10s, the final dataset size is 592 pairs.
\subsection{Features} \label{sub:features} The audio features consist of logarithmically scaled Mel-spectrograms extracted from 16,000 Hz audio signals. Framing is done by segmenting chunks of 50ms of audio every 25ms. Spectra are computed via \ac{FFT} with a buffer size of 1024 samples. The number of Mel bins is set to 128, which results in a final matrix of 399 frames by 128 Mel-frequency bins per 10s audio recording. We segment each recording into 1s chunks (50\% overlap) to be fed to the \ac{MVNN} (detailed in Section \ref{sub:model}), which means that each of the 592 objects contains 19 segments (each containing 39 frames), yielding a dataset of 11,248 samples in total.

The pose detection features consist of body joint positions in frame space, i.e., pixel coordinates ranging from 0 (top left corner) to 1280 and 720 for width and height, respectively. The positions for the following key points are extracted: head, neck, shoulder, elbow, wrist, hip, knee, and ankle. There are 2 keypoints, left and right, for each of these except for head and neck, yielding a total of 28 features (14 keypoints with 2 coordinates, $\operatorname{x}$ and $\operatorname{y}$, each). Figure \ref{fig:keypoints} illustrates the keypoints. \begin{figure} \includegraphics[width=\linewidth]{keypoints} \caption{Pose detection illustration taken from \cite{chan2018}. Skeleton points represent joints.} \label{fig:keypoints} \end{figure} These features are extracted at 30fps for the whole 10s video duration ($t\in \{t_0 ... t_{299}\}$), normalized after extraction according to Section \ref{sub:preprocessing}, and then derived features are computed from the normalized data. The position and movement of body joints are used together for expression in dance. Therefore, we compute features that reflect the relative positions of body joints in relation to each other. This translates into computing the Euclidean distance between each pair of joints, yielding 91 derived features and a total of 119 movement features. As for audio, we segment this sequence into 1s segments (50\% overlap), each containing 30 frames.

\subsection{Preprocessing} \label{sub:preprocessing} We are interested in modeling movement as bodily expression. Therefore, we should focus on the temporal dynamics of joint positions relative to each other in a way that is as viewpoint- and subject-invariant as possible. However, the positions of subjects in frame space vary according to their distance to the camera. Furthermore, limb proportions are also different across subjects. Therefore, we normalize the joint position data in a similar way to \cite{chan2018}, whose purpose was to transform a pose from a source frame space to a target frame space. We select an arbitrary target frame and project every source frame to this space. We start by taking the maximum ankle $\tt{y}$ coordinate $\tt{ankl}^{\tt{clo}}$ (Equation \ref{eq:clo}) and the maximum ankle $\tt{y}$ coordinate which is smaller than (spatially above) the median ankle $\tt{y}$ coordinate $\tt{ankl}^{\tt{med}}$ (Equation \ref{eq:med}) and at about the same distance from it as the distance between it and $\tt{ankl}^{\tt{clo}}$ ($\tt{ankl}^{\tt{far}}$ in Equation \ref{eq:far}). These two keypoints represent the closest and furthest ankle coordinates to the camera, respectively.
Formally: \begin{equation*} \tt{ankl} = \{\tt{ankl\_y}^{\tt{L}}_t\} \cup \{\tt{ankl\_y}^{\tt{R}}_t\} \end{equation*} \begin{equation} \tt{ankl}^{\tt{clo}} = \operatorname{max}_t(\{\tt{y}_t : \tt{y}_t \in \tt{ankl}\}) \label{eq:clo} \end{equation} \begin{equation} \tt{ankl}^{\tt{med}} = \operatorname{median}_t(\{\tt{y}_t : \tt{y}_t \in \tt{ankl}\}) \label{eq:med} \end{equation} \begin{equation} \tt{ankl}^{\tt{far}} = \operatorname{max}_t(\{\tt{y}_t : \tt{y}_t \in \tt{ankl} \wedge \tt{y}_t < \tt{ankl}^{\tt{med}} \wedge |\tt{y}_t - \tt{ankl}^{\tt{med}}| - \alpha|\tt{ankl}^{\tt{clo}} - \tt{ankl}^{\tt{med}}| < \epsilon\}) \label{eq:far} \end{equation} where $\tt{ankl\_y}^{\tt{L}}_t$ and $\tt{ankl\_y}_t^{\tt{R}}$ are the $\tt{y}$ coordinates of the left and right ankles at timestep $\tt{t}$, respectively. Following \cite{chan2018}, we set $\tt{\alpha}$ to 1 and $\tt{\epsilon}$ to 0.7. Then, we compute a scale $s$ (Equation \ref{eq:scale}) to be applied to the $\tt{y}$-axis according to an interpolation between the ratios of the maximum heights between the source and target frames, $\tt{heig}^{\tt{far}}_{src}$ and $\tt{heig}^{\tt{far}}_{tgt}$, respectively. For each dance instance, frame heights are first clustered according to whether the corresponding ankle $\tt{y}$ coordinate is closer to $\tt{ankl}^{\tt{clo}}$ or to $\tt{ankl}^{\tt{far}}$, and then the maximum height value for each cluster is taken (Equations \ref{eq:heigclo} and \ref{eq:heigfar}). Formally: \begin{equation} s = \frac{\tt{heig}^{\tt{far}}_{tgt}}{\tt{heig}^{\tt{far}}_{src}} + \frac{\tt{ankl}^{\tt{avg}}_{src} - \tt{ankl}^{\tt{far}}_{src}}{\tt{ankl}^{\tt{clo}}_{src} - \tt{ankl}^{\tt{far}}_{src}} \left(\frac{\tt{heig}^{\tt{clo}}_{tgt}}{\tt{heig}^{\tt{clo}}_{src}} - \frac{\tt{heig}^{\tt{far}}_{tgt}}{\tt{heig}^{\tt{far}}_{src}}\right) \label{eq:scale} \end{equation} \begin{equation} \tt{heig}^{\tt{clo}} = \operatorname{max}_t(\{|\tt{head\_y}_t - \tt{ankl}^{\tt{LR}}_t| : |\tt{ankl}^{\tt{LR}}_t - \tt{ankl}^{\tt{clo}}| < |\tt{ankl}^{\tt{LR}}_t - \tt{ankl}^{\tt{far}}|\}) \label{eq:heigclo} \end{equation} \begin{equation} \tt{heig}^{\tt{far}} = \operatorname{max}_t(\{|\tt{head\_y}_t - \tt{ankl}^{\tt{LR}}_t| : |\tt{ankl}^{\tt{LR}}_t - \tt{ankl}^{\tt{clo}}| > |\tt{ankl}^{\tt{LR}}_t - \tt{ankl}^{\tt{far}}|\}) \label{eq:heigfar} \end{equation} \begin{equation*} \tt{ankl}^{\tt{LR}}_t = \frac{\tt{ankl\_y}^L_t + \tt{ankl\_y}^R_t}{2} \end{equation*} \begin{equation*} \tt{ankl}^{\tt{avg}} = \operatorname{average}_t(\{\tt{y}_t : \tt{y}_t \in \tt{ankl}\}) \end{equation*} where $\tt{head\_y}_t$ is the $\tt{y}$ coordinate of the head at timestep $\tt{t}$. After scaling, we also apply a 2D translation so that the position of the ankles of the subject is centered at 0. We do this by subtracting the median coordinates ($\tt{x}$ and $\tt{y}$) of the mean of the (left and right) ankles, i.e., the median of $\tt{ankl}^{\tt{LR}}_t$.

\subsection{Multi-view neural network architecture} \label{sub:model} The \ac{MVNN} model used in this work is composed of two branches, each modeling its own view. Even though the final embeddings define a shared, correlated space according to the loss function, the branches can be arbitrarily different from each other. The loss function is \ac{DCCA} \citep{andrew2013}, a non-linear extension of \ac{CCA} \citep{hotelling1936}, which has also been successfully applied to music by \cite{kelkar2018} and \cite{yu2019}.
\ac{CCA} linearly projects two distinct view spaces into a shared correlated space and was suggested to be a general case of parametric tests of statistical significance \citep{knapp1978}. Formally, \ac{DCCA} solves: \begin{equation} \left(w_{\bf{x}}^*,w_{\bf{y}}^*,\varphi_{\bf{x}}^*,\varphi_{\bf{y}}^*\right)=\underset{\left(w_{\bf{x}},w_{\bf{y}},\varphi_{\bf{x}},\varphi_{\bf{y}}\right)}{\operatorname{argmax}}\operatorname{corr}\left(w_{\bf{x}}^{\bf{T}}\varphi_{\bf{x}}\left(\bf{x}\right),w_{\bf{y}}^{\bf{T}}\varphi_{\bf{y}}\left(\bf{y}\right)\right) \end{equation} where $\bf{x}\in{\rm I\!R}^m$ and $\bf{y}\in{\rm I\!R}^n$ are the zero-mean observations for each view, $\varphi_{\bf{x}}$ and $\varphi_{\bf{y}}$ are non-linear mappings for each view, and $w_{\bf{x}}$ and $w_{\bf{y}}$ are the canonical weights for each view. We use backpropagation and minimize: \begin{equation} -\sqrt{\operatorname{tr}\left(\left(C_{XX}^{-1/2}C_{XY}C_{YY}^{-1/2}\right)^{\bf{T}}\left(C_{XX}^{-1/2}C_{XY}C_{YY}^{-1/2}\right)\right)} \end{equation} \begin{equation} C_{XX}^{-1/2}=Q_{XX}\Lambda_{XX}^{-1/2} Q_{XX}^{\bf{T}} \end{equation} where $X$ and $Y$ are the non-linear projections for each view, i.e., $\varphi_{\bf{x}}\left(\bf{x}\right)$ and $\varphi_{\bf{y}}\left(\bf{y}\right)$, respectively. $C_{XX}$ and $C_{YY}$ are the regularized, zero-centered covariances, while $C_{XY}$ is the zero-centered cross-covariance. $Q_{XX}$ is the matrix of eigenvectors of $C_{XX}$ and $\Lambda_{XX}$ is the diagonal matrix of eigenvalues of $C_{XX}$. $C_{YY}^{-1/2}$ can be computed analogously. We finish training by computing a forward pass with the training data and fitting a linear \ac{CCA} model on those non-linear mappings. The canonical components of these deep non-linear mappings implement our semantic embedding space to be evaluated in a cross-modal retrieval task. Functions $\varphi_{\bf{x}}$ and $\varphi_{\bf{y}}$, i.e., the audio and movement projections, are implemented as branches of typical neural networks, described in Tables \ref{tab:audio} and \ref{tab:movement}. We use \emph{tanh} activation functions after each convolution layer. Note that other loss functions, such as ones based on pairwise distances \citep{hermann2014,he2017}, can theoretically also be used for the same task. The neural network models were all implemented using TensorFlow \citep{abadi2015}.
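To make the objective above concrete, the following is a minimal NumPy sketch of the \ac{DCCA} loss computed from already-projected views; the function name and the regularization constant are illustrative assumptions and not part of our TensorFlow implementation.

\begin{verbatim}
import numpy as np

def dcca_loss(X, Y, reg=1e-4):
    # X, Y: (n x k) non-linear projections of each view.
    # Returns the negative total canonical correlation, to be minimized.
    n = X.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)   # zero-center each view
    Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(C):  # C^{-1/2} = Q Lambda^{-1/2} Q^T via eigendecomposition
        lam, Q = np.linalg.eigh(C)
        return Q @ np.diag(lam ** -0.5) @ Q.T

    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return -np.sqrt(np.trace(T.T @ T))
\end{verbatim}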
\begin{table}[htbp] \normalsize \begin{center} \caption{Audio Neural Network Branch} \label{tab:audio} \begin{tabular}{c|C{4mm}cC{4mm}cC{4mm}|c} \hline layer type & \multicolumn{5}{c|}{dimensions} & \# params\\ \hline input & 39 & $\times$ & 128 & $\times$ & 1 & 0\\ \hline batch norm & 39 & $\times$ & 128 & $\times$ & 1 & 4\\ \hline 2D conv & 39 & $\times$ & 128 & $\times$ & 8 & 200\\ \hline 2D avg pool & 13 & $\times$ & 16 & $\times$ & 8 & 0\\ \hline batch norm & 13 & $\times$ & 16 & $\times$ & 8 & 32\\ \hline 2D conv & 13 & $\times$ & 16 & $\times$ & 16 & 2064\\ \hline 2D avg pool & 3 & $\times$ & 4 & $\times$ & 16 & 0\\ \hline batch norm & 3 & $\times$ & 4 & $\times$ & 16 & 64\\ \hline 2D conv & 3 & $\times$ & 4 & $\times$ & 32 & 6176\\ \hline 2D avg pool & 1 & $\times$ & 1 & $\times$ & 32 & 0\\ \hline batch norm & 1 & $\times$ & 1 & $\times$ & 32 & 128\\ \hline 2D conv & 1 & $\times$ & 1 & $\times$ & 128 & 4224\\ \hline \multicolumn{6}{c|}{Total params} & 12892\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \normalsize \begin{center} \caption{Movement Neural Network Branch} \label{tab:movement} \begin{tabular}{c|C{4mm}cC{4mm}|c} \hline layer type & \multicolumn{3}{c|}{dimensions} & \# params\\ \hline input & 30 & $\times$ & 119 & 0\\ \hline batch norm & 30 & $\times$ & 119 & 476\\ \hline gru & 1 & $\times$ & 32 & 14688\\ \hline \multicolumn{4}{c|}{Total params} & 15164\\ \hline \end{tabular} \end{center} \end{table}

\subsection{Cross-modal retrieval evaluation} \label{sub:evaluation} In this work, cross-modal retrieval consists of retrieving a sorted list of videos given an audio query and vice-versa. We perform cross-modal retrieval on full objects even though the \ac{MVNN} is modeling semiotic correlation between segments. In order to do this, we compute object representations as the average of the \ac{CCA} projections of their segments (for both modalities) and compute the cosine similarity between these cross-modal embeddings. We evaluate the ability of the model to capture semantics and generalize semiotic correlations between both modalities by assessing whether relevant cross-modal documents for a query are ranked at the top of the retrieved document list. We define relevant documents in two ways: instance- and class-level. Instance-level evaluation considers the ground truth pairing of cross-modal objects as the criterion for relevance (i.e., the only relevant audio document for a dance video is the one that corresponds to the song that played in that video). Class-level evaluation considers that any cross-modal object sharing some semantic label is relevant (e.g., relevant audio documents for a dance video of a particular dance style are the ones that correspond to songs that played in videos of the same dance style). We perform experiments in a 4-fold cross-validation setup, where each fold partitioning is such that the distribution of classes is similar across folds. We also run the experiments 10 times for each fold and report the average performance across runs. We compute pair and rank accuracies for instance-level evaluation (similar to \cite{pereira2018}). Pair accuracy evaluates ranking performance in the following way: for each query from modality $X$, we consider every possible pairing of the relevant object (corresponding cross-modal pair) and non-relevant objects from modality $Y$.
We compute the similarities between the query and each of the two cross-modal objects, as well as the similarities between both cross-modal objects and the corresponding non-relevant object from modality $X$. If the corresponding cross-modal objects are more similar than the alternative, the retrieval trial is successful. We report the average values over queries and non-relevant objects. We also compute a statistical significance test in order to show that the model indeed captures semantics underlying the artifacts. We can think of each trial as aggregating two binomial outcomes, where the probability of success for a random model is $0.5\times 0.5=0.25$. Therefore, we can perform a binomial test and compute its p-value. Even though there are $144\times 143$ trials, we consider a more conservative number of trials of $144$ (the number of independent queries). If the p-value is lower than $0.05$, then we can reject the null hypothesis that the results of our model are due to chance.

Rank accuracy is the (linearly) normalized rank of the relevant document in the retrieval list: $\operatorname{ra}=1-\left(r-1\right)/\left(L-1\right)$, where $r$ is the rank of the relevant cross-modal object in the list with $L$ elements. This is similar to the pair accuracy evaluation, except that we only consider the query from modality $X$ and the objects from modality $Y$, i.e., each trial consists of one binomial outcome, where the probability of success for a random model is $0.5$. We also consider a conservative number of binomial test trials of $144$ for this metric.

Even though the proposed model and loss function do not explicitly optimize class separation, we expect the model to still learn embeddings which capture some aspects of the dance genres in the dataset. This is because different instances of the same class are expected to share semantic structures. Therefore, we perform class-level evaluation, in order to further validate that our model captures semantics underlying both modalities. We compute and report \ac{MAP} scores for each class separately, and perform a permutation test on these scores against random model performance (whose \ac{MAP} scores are computed according to \cite{bestgen2015}), so that we can show these results are statistically significant and not due to chance. Formally: \begin{equation} \operatorname{MAP}_C = \frac{1}{|Q_C|} \sum_{q\in Q_C} \operatorname{AP}_C\left(q\right) \end{equation} \begin{equation} \operatorname{AP}_C\left(q\right) = \frac{\sum_{j=1}^{|R|}\operatorname{pr}\left(j\right)\operatorname{rel}_C\left(r_j\right)}{|R_C|} \end{equation} where $C$ is the class, $Q_C$ is the set of queries belonging to class $C$, $\operatorname{AP}_C\left(q\right)$ is the \ac{AP} for query $q$, $R$ is the list of retrieved objects, $R_C$ is the set of retrieved objects belonging to class $C$, $\operatorname{pr}\left(j\right)$ is the precision at cutoff $j$ of the retrieved objects list, and $\operatorname{rel}_C\left(r\right)$ evaluates whether retrieved object $r$ is relevant or not, i.e., whether it belongs to class $C$ or not. Note that the retrieved objects list always contains the whole (train or test) set of data from modality $Y$ and that its size is equal to the total number of (train or test) evaluated queries from modality $X$. \ac{MAP} measures the quality of the sorting of retrieved item lists for a particular definition of relevance (dance style in this work).
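As a reference, here is a minimal NumPy sketch of the rank accuracy and per-query \ac{AP} defined above; the similarity-matrix convention (ground-truth pairs on the diagonal) and the function names are our own illustrative assumptions.

\begin{verbatim}
import numpy as np

def rank_accuracy(sim, q):
    # sim: (L x L) cross-modal cosine similarities, with sim[i, i] the
    # similarity of the i-th ground-truth pair; q: query index.
    L = sim.shape[0]
    r = 1 + np.sum(sim[q] > sim[q, q])      # rank of the relevant object
    return 1.0 - (r - 1) / (L - 1)

def average_precision(ranked_labels, c):
    # ranked_labels: class labels of the retrieved list, best match first.
    rel = (np.asarray(ranked_labels) == c).astype(float)
    prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)  # precision at cutoff j
    return float(np.sum(prec * rel) / np.sum(rel))
\end{verbatim}

MAP for a class is then the mean of \texttt{average\_precision} over all queries of that class.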
\section{Results} \label{sec:results} Instance-level evaluation results are reported in Tables \ref{tab:pair} and \ref{tab:rank} for pair and rank accuracies, respectively, for each fold. Values shown in the X / Y format correspond to results when using audio / video queries, respectively. The model was able to achieve 57\% and 75\% for pair and rank accuracies, respectively, which are statistically significantly better (p-values $<0.01$) than the random baseline performances of 25\% and 50\%, respectively. \begin{table*}[htbp] \normalsize \begin{center} \caption{Instance-level Pair Accuracy} \label{tab:pair} \begin{tabular}{cccc|c|c} \hline Fold 0 & Fold 1 & Fold 2 & Fold 3 & Average & Baseline\\ \hline 0.57 / 0.57 & 0.57 / 0.56 & 0.60 / 0.59 & 0.55 / 0.56 & 0.57 / 0.57 & 0.25\\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[htbp] \normalsize \begin{center} \caption{Instance-level Rank Accuracy} \label{tab:rank} \begin{tabular}{cccc|c|c} \hline Fold 0 & Fold 1 & Fold 2 & Fold 3 & Average & Baseline\\ \hline 0.75 / 0.75 & 0.75 / 0.75 & 0.77 / 0.76 & 0.74 / 0.74 & 0.75 / 0.75 & 0.50\\ \hline \end{tabular} \end{center} \end{table*} Class-level evaluation results (\ac{MAP} scores) are reported in Table \ref{tab:map} for each class and fold. The model achieved 26\%, which is statistically significantly better (p-value $<0.01$) than the random baseline performance of 13\%. \begin{table*}[htbp] \normalsize \begin{center} \caption{Class-level \ac{MAP}} \label{tab:map} \begin{tabular}{c|cccc|c|c} \hline Style & Fold 0 & Fold 1 & Fold 2 & Fold 3 & Average & Baseline\\ \hline Ballet & 0.43 / 0.40 & 0.33 / 0.31 & 0.51 / 0.41 & 0.37 / 0.32 & 0.41 / 0.36 & 0.10\\ \hline Breakdance & 0.18 / 0.17 & 0.18 / 0.14 & 0.18 / 0.14 & 0.23 / 0.22 & 0.19 / 0.17 & 0.09\\ \hline Flamenco & 0.20 / 0.18 & 0.16 / 0.19 & 0.15 / 0.16 & 0.16 / 0.17 & 0.17 / 0.17 & 0.12\\ \hline Foxtrot & 0.22 / 0.24 & 0.23 / 0.24 & 0.21 / 0.21 & 0.16 / 0.18 & 0.20 / 0.22 & 0.12\\ \hline Latin & 0.23 / 0.23 & 0.19 / 0.20 & 0.21 / 0.22 & 0.20 / 0.19 & 0.21 / 0.21 & 0.14\\ \hline Quickstep & 0.21 / 0.20 & 0.14 / 0.12 & 0.19 / 0.19 & 0.21 / 0.16 & 0.19 / 0.17 & 0.09\\ \hline Square & 0.28 / 0.26 & 0.34 / 0.29 & 0.30 / 0.26 & 0.30 / 0.29 & 0.30 / 0.27 & 0.16\\ \hline Swing & 0.22 / 0.21 & 0.22 / 0.22 & 0.22 / 0.23 & 0.24 / 0.26 & 0.23 / 0.23 & 0.15\\ \hline Tango & 0.28 / 0.29 & 0.39 / 0.37 & 0.34 / 0.38 & 0.31 / 0.33 & 0.33 / 0.34 & 0.17\\ \hline Waltz & 0.52 / 0.51 & 0.35 / 0.35 & 0.38 / 0.31 & 0.48 / 0.41 & 0.43 / 0.40 & 0.15\\ \hline Average & 0.28 / 0.27 & 0.25 / 0.24 & 0.27 / 0.25 & 0.27 / 0.25 & 0.26 / 0.25 & 0.13\\ \hline Overall & 0.28 / 0.27 & 0.27 / 0.26 & 0.27 / 0.26 & 0.28 / 0.26 & 0.28 / 0.26 & 0.14\\ \hline \end{tabular} \end{center} \end{table*} \section{Discussion} \label{sec:discussion} Our proposed model successfully captured semantics for music and dance, as evidenced by the quantitative evaluation results, which are validated by statistical significance testing, for both instance- and class-level scenarios. Instance-level evaluation confirms that our proposed model is able to generalize the cross-modal features which connect both modalities. This means the model effectively learned how people can move according to the sound of music, as well as how music can sound according to the movement of human bodies. 
Class-level evaluation further strengthens this conclusion by showing the same effect from a style-based perspective, i.e., the model learned how people can move according to the music style of a song, as well as how music can sound according to the dance style of the movement of human bodies. This result is particularly interesting because the design of both the model and experiments does not explicitly address style, that is, there is no style-based supervision. Since semantic labels are inferred by humans based on semiotic aspects, this implies that some of the latent semiotic aspects learned by our model are also relevant for these semantic labels, i.e., these aspects are semantically rich. Therefore, modeling semiotic correlations between audio and dance effectively uncovers semantic aspects.

The results show a link between musical meaning and kinetic meaning, providing further evidence for embodied cognition semantics in music. This is because embodied semantics ultimately holds that meaning in music is grounded in motor and somatosensory concepts, i.e., movement, physical forces, and physical tension. By observing that dance, a body expression proxy for how those concepts correlate to the musical experience, is semiotically correlated to music artifacts, we show that music semantics is kinetically and biologically grounded. Furthermore, our quantitative results also demonstrate an effective technique for cross-modal retrieval between music audio and dance video, providing the basis for an automatic music video creation tool. This basis consists of a model that can recommend the song that best fits a particular dance video and the dance video that best fits a particular song. The class-level evaluation also validates the whole ranking of results, which means that the model can actually recommend several songs or videos that best fit the dual modality.

\section{Conclusions and future work} \label{sec:conclusions} We proposed a computational approach to model music embodied semantics via dance proxies, capable of recommending music audio for dance video and vice-versa. Quantitative evaluation shows this model to be effective for this cross-modal retrieval task and further validates claims about music semantics being defined by embodied cognition. Future work includes: correlating audio with 3D motion capture data instead of dance videos, in order to verify whether important spatial information is lost in 2D representations; incorporating Laban movement analysis features and other audio features, in order to have fine-grained control over which aspects of both music and movement are examined; testing the learned semantic spaces in transfer learning settings; and exploring the use of generative models (such as \acp{GAN}) to generate and visualize human skeleton dance videos for a given audio input. \bibliographystyle{model5-names}
\section{Introduction} It has been anticipated that matter at high baryon number density (or equivalently large baryon chemical potential) and/or high temperature reveals various interesting properties, such as quark-gluon plasma (QGP), superfluidity of neutrons, superconductivity of protons, Bose-Einstein condensation of mesons, and superconductivity of quarks, which is called color superconductivity (CSC) \cite{CSC}. It is believed that those states of matter can be described by Quantum Chromodynamics (QCD), i.e., the dynamics of quarks (the constituents of hadrons) and gluons (the mediators of the interaction between quarks). It is one of the major goals in hadron physics to comprehend what kinds of matter exist at different temperatures and chemical potentials. This is nothing but the question of the phase diagram of QCD \cite{review}. Elucidating the QCD phase diagram is also important from experimental and observational points of view \cite{YHM}. QCD at high temperature is intimately connected with physics in the early universe right after the Big Bang, while QCD at high baryon density (and relatively small temperature) is related to the physics of compact stars such as neutron stars, hypothetical quark stars and even black holes. Quite recently, compact stars have become remarkable objects since their binary systems are widely recognized as sources of gravitational wave signals \cite{LIGO}.

A neutron star is supposed to have an onion-like structure with various kinds of phases (for a review of neutron stars, see \cite{NGK, Shapiro}). This means that there exist interfaces between different phases in the neutron star interior. One of the authors previously studied the particle scattering happening at such interfaces \cite{ST} and found an interesting phenomenon called the Andreev reflection, which was originally discovered in the context of condensed matter physics \cite{AR}. The Andreev reflection is a peculiar reflection at the metal-superconductor interface: an incident electron in the metal hits the interface, and then a hole, which carries information about the superconductor, is reflected. This is as if, when we throw a ball against a wall, the reflected ball is not the original one but a different one carrying information from behind the wall. This phenomenon is intuitively interpreted as follows. When the incident electron hits the interface, a part of its wave function penetrates into the superconductor. If the incident electron energy is lower than the superconducting gap, the incident electron itself is not allowed to penetrate, but it can pass through in the form of a Cooper pair. Then, the incident electron accompanies another electron below the Fermi level. Consequently, a hole is reflected.

The purpose of this paper is to extend our previous study \cite{ST} to the case of a hadron/color superconductor (CSC) interface. In section 2, we briefly review our previous study. In section 3, we study the Andreev reflection at the hadron/CSC interface in a systematic way. Section 4 is devoted to a summary and discussions.
\section{Andreev reflection in color superconductor} The effective Hamiltonian describing the quark interaction with the diquark condensate (i.e., quark Cooper pairing) can be written at the mean-field level \cite{ST}: \begin{eqnarray} H&=& \int d^3 x \left [ \sum_{a, i}\psi^{i\dagger}_a(-i\vec{\alpha}\cdot \nabla-\mu)\psi^i_a +\sum_{a,b,i,j}\Delta^{ij}_{ab}\left (\psi^{iT}_aC\gamma_5 \psi^j_b \right )+{\rm h.c} \right ], \label{2-1} \end{eqnarray} where $\psi^i_a$ is the Dirac spinor with $a, b$ and $i, j$ being color and flavor indices, respectively. $C$ is the charge conjugation matrix and $\mu$ the quark chemical potential. $\Delta^{ij}_{ab}$ is the gap matrix. For 2-flavor color superconductivity (2SC), it becomes \begin{equation} \Delta^{ij}_{ab}=\tilde{\Delta}\epsilon^{ij}\epsilon_{abB}, \label{2-2} \end{equation} where $i, j=u{\rm (up)},d{\rm (down)}$ and $a, b=R {\rm (red)},G {\rm (green)}$. The third direction in color space is chosen as blue (denoted by $B$) here. For the 3-flavor case, on the other hand, the gap matrix takes the form \begin{equation} \Delta^{ij}_{ab}=\Delta\epsilon^{ijK}\epsilon_{abK}=\Delta(\delta^i_a \delta^j_b-\delta^i_b \delta^j_a), \label{2-3} \end{equation} where $i, j=u{\rm (up)},d{\rm (down)}, s{\rm (strange)}$ and $a, b=R {\rm (red)},G {\rm (green)}, B{\rm (blue)}$. This is the color-flavor-locked (CFL) condensate \cite{CFL}. The general form of the gap matrix can be written as follows: \begin{eqnarray} \Delta^{ij}_{ab} = \begin{pmatrix} 0 & \Delta_{ud} & \Delta_{us} & & & & & & \\ \Delta_{ud} & 0 & \Delta_{ds} & & & & & & \\ \Delta_{us} & \Delta_{ds} & 0 & & & & & & \\ & & & 0 & -\Delta_{ud}& & & & \\ & & & -\Delta_{ud} & 0 & & & & \\ & & & & & 0 & -\Delta_{us} & & \\ & & & & & -\Delta_{us} & 0 & & \\ & & & & & & & 0 & -\Delta_{ds} \\ & & & & & & & -\Delta_{ds} & 0 \\ \end{pmatrix} \label{2-4} \end{eqnarray} in the basis \begin{eqnarray} \left( u_{R}, \ d_{G}, \ s_{B}, \ d_{R}, \ u_{G}, \ s_{R}, \ u_{B}, \ s_{G},\ d_{B} \right). \label{2-5} \end{eqnarray} From this, one finds that, for instance, a red up quark ($u_R$) is Andreev-reflected as a green down hole ($d_G^H$) or a blue strange hole ($s_B^H$). The Andreev reflection of quarks is summarized in Table 1. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline incident quark & reflected hole \\ \hline \hline $u_{R}$ & $d_{G}^{H}$ or $s_{B}^{H}$ \\ \hline $u_{G}$ & $d_{R}^{H}$ \\ \hline $u_{B}$ & $s_R^{H}$ \\ \hline $d_{R}$ & $u_{G}^{H}$ \\ \hline $d_{G}$ & $u_{R}^{H}$ or $s_{B}^{H}$ \\ \hline $d_{B}$ & $s_G^{H}$ \\ \hline $s_{R}$ & $u_{B}^{H}$ \\ \hline $s_{G}$ & $d_{B}^{H}$ \\ \hline $s_{B}$ & $u_{R}^{H}$ or $d_{G}^{H}$ \\ \hline \end{tabular} \caption{Andreev reflection of quarks} \end{table} In the 2SC phase, \begin{eqnarray} &\Delta_{us}& = \Delta_{ds} = 0, \ \Delta_{ud} = \tilde{\Delta}, \label{2-6} \end{eqnarray} while in the CFL phase, \begin{eqnarray} &\Delta_{us}& = \Delta_{ds} = \Delta_{ud} = \Delta. \label{2-7} \end{eqnarray} The equations of motion obtained from (\ref{2-1}) are the so-called Bogoliubov--de Gennes (BdG) equations \begin{eqnarray} \begin{pmatrix} -i\vec{\alpha}\cdot \nabla - \mu & \Delta C\gamma_5 \\ \Delta C\gamma_5 & i\vec{\alpha}\cdot \nabla +\mu \\ \end{pmatrix} \begin{pmatrix} \psi_p \\ \psi_h \\ \end{pmatrix} = E \begin{pmatrix} \psi_p \\ \psi_h \\ \end{pmatrix}. \label{2-8} \end{eqnarray} The BdG equations with the vanishing gap can be interpreted as those in the free quark (FQ) phase, while those with the non-vanishing gap as those in the color superconducting (CSC) phase.
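The reflection patterns in Table 1 can be read off mechanically from the gap matrix (\ref{2-4}): an incident quark can be Andreev-reflected as the hole of any partner with which it has a nonzero gap entry. The following small Python sketch (our own illustrative construction, not part of the original analysis) enumerates these partners in the 2SC and CFL cases:

\begin{verbatim}
import numpy as np

quarks = ["uR", "dG", "sB", "dR", "uG", "sR", "uB", "sG", "dB"]  # basis (2-5)

def gap_matrix(d_ud, d_us, d_ds):
    # Gap matrix (2-4) in the basis above.
    D = np.zeros((9, 9))
    D[0, 1] = D[1, 0] = d_ud   # (uR, dG) pairing
    D[0, 2] = D[2, 0] = d_us   # (uR, sB)
    D[1, 2] = D[2, 1] = d_ds   # (dG, sB)
    D[3, 4] = D[4, 3] = -d_ud  # (dR, uG)
    D[5, 6] = D[6, 5] = -d_us  # (sR, uB)
    D[7, 8] = D[8, 7] = -d_ds  # (sG, dB)
    return D

def reflected_holes(incident, D):
    # Partners with a nonzero gap entry: possible Andreev-reflected holes.
    i = quarks.index(incident)
    return [quarks[j] + "^H" for j in range(9) if D[i, j] != 0.0]

print(reflected_holes("uR", gap_matrix(1, 0, 0)))  # 2SC: ['dG^H']
print(reflected_holes("uR", gap_matrix(1, 1, 1)))  # CFL: ['dG^H', 'sB^H']
\end{verbatim}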
Suppose here that there is a sharp interface between the FQ/CSC phases. Then the scattering at the interface of a single quark coming from the FQ phase can be modeled as follows: \\ \\ (1) The interface is perpendicular to, say, the $x$ axis and is located at $x=0$. \\ (2) The gap is only a function of $x$ and takes the step function form $\Delta(x)=\Delta\Theta(x)$.\\ (3) Region $x<0$ describes the FQ phase while region $x>0$ the CSC phase.\\ (4) The general solution of the BdG equations in each phase is obtained.\\ (5) Boundary conditions to match the solutions of the two phases at $x=0$ are imposed.\\ \\ As was stated in the previous section, our main interest in this paper is to extend the above study of single-quark scattering to the multi-quark case, namely, the Andreev reflection between the hadronic and the CSC phases. \section{ Andreev reflection at Hadron/CSC interface} \label{sec:backreaction} In the previous section, we reviewed the scattering problem of a single quark at the interface between the free quark and color superconducting phases. In this section, let us consider the same issue at the interface between the hadronic and CSC phases. The question is how the single-quark scattering problem is extended to the multi-quark one. This is, in general, a very hard problem, and so far we have no clear answer. So let us make the following assumption: quarks in the hadronic phase are not interacting with each other, but they are always in color-singlet states. This is the constituent quark picture. Hadrons here are defined as color-singlet superpositions of constituent quarks. So if a hadron hits the hadron/CSC interface, each quark inside the hadron is Andreev-reflected, and only the color-singlet combinations of the reflected quarks survive. In the case of the hadron/2SC interface, the Andreev reflection of an $s$ quark from the hadronic side does not occur because it cannot make any Cooper pairing in the 2SC phase. In the case of the hadron/CFL interface, on the other hand, an $s$ quark can make a Cooper pairing with other quarks so that the Andreev reflection does occur. Moreover, a blue quark of any flavor cannot make any Cooper pairing in the 2SC phase. With this in mind, we shall describe the Andreev reflection for different cases below. \subsection{Andreev reflection of mesons} \subsubsection{Andreev reflection of mesons without $s$ quarks} First of all, let us discuss the Andreev reflection of mesons without $s$ quarks. As such an example, we have a charged pion, $\pi^{+}$, which consists of a $u$ quark and an anti-$d$ ($\bar{d}$) quark. A meson is always a color-singlet superposition of red($R$)-anti-red($\bar{R}$), green($G$)-anti-green($\bar{G}$) and blue($B$)-anti-blue($\bar{B}$). So we define here a ``meson'' having a color component $a (=R, G, B)$ as follows: \begin{eqnarray} M_{a} \equiv i_{a}\bar{j}_{a}, \qquad {\rm (no \ sum)} \end{eqnarray} where $i$ and $j$ are flavor indices. Then, from the gap matrix structure (\ref{2-4}) and Table 1, one easily finds that $\pi^{+}_{R}$ ($\pi^{+}_{G}$) is Andreev-reflected as a hole of $\pi^{-}_{G}$ ($\pi^{-}_{R}$), respectively: \begin{eqnarray} \pi^{+}_{R} = u_{R}\bar{d}_{R} \ \to \ (d_{G}\bar{u}_{G})^{H} \equiv (\pi^{-}_{G})^{H}, \\ \pi^{+}_{G} = u_{G}\bar{d}_{G} \ \to \ (d_{R}\bar{u}_{R})^{H} \equiv (\pi^{-}_{R})^{H}. \end{eqnarray} $\pi^{+}_{B}$, on the other hand, is not Andreev-reflected but reflected as $\pi^{+}_{B}$ itself. So the result is that $\pi^{+}$ is partially Andreev-reflected and partially ordinarily reflected.
Then the question is how to interpret this. To this end, we remark that a hole is a sort of charge conjugate of a particle and could be regarded as an anti-particle. Relying on this idea, we suppose that $(\pi^{-}_{G})^{H}$ corresponds to $\pi^{+}_{G}$, so that $\pi^{+}$ is consequently reflected as $\pi^{+}$ at the hadron/2SC interface. In the case of the hadron/CFL interface, since $u_{B}$ and $\bar{d}_{B}$ also join the Cooper pairing, we expect $\pi^{+}_{B}$ to be Andreev-reflected as well. However, this is not the case because the color-singlet state is not realized: \begin{eqnarray} \pi^{+}_{B} = u_{B} \bar{d}_{B} \ \to \ s_{R}\bar{s}_{G} = \times \end{eqnarray} Here $\times$ shows that the Andreev reflection does not occur. To summarize, mesons without $s$ quarks are reflected as the original ones in both the 2SC and CFL cases. See Table 2. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline incident particle & incident(color) & reflected(color) & reflected particle \\ \hline \hline & $\pi^{+}_{R}$ & $(\pi^{-}_{G})^{H}$ & \\ \cline{2-3} $\pi^{+}$ & $\pi^{+}_{G}$ & $(\pi^{-}_{R})^{H}$ & $\pi^{+}$ \\ \cline{2-3} & $\pi^{+}_{B}$ & $\times$ & \\ \cline{2-3}\hline & $\pi^{-}_{R}$ & $(\pi^{+}_{G})^{H}$ & \\ \cline{2-3} $\pi^{-}$ & $\pi^{-}_{G}$ & $(\pi^{+}_{R})^{H}$ & $\pi^{-}$ \\ \cline{2-3} & $\pi^{-}_{B}$ & $\times$ & \\ \cline{2-3}\hline & $\pi^{0}_{R}$ & $(\pi^{0}_{G})^{H}$ & \\ \cline{2-3} $\pi^{0}$ & $\pi^{0}_{G}$ & $(\pi^{0}_{R})^{H}$ & $\pi^{0}$ \\ \cline{2-3} & $\pi^{0}_{B}$ & $\times$ & \\ \cline{2-3}\hline \end{tabular} \caption{Andreev reflection of pions} \end{table} \subsubsection{Andreev reflection of mesons with $s$ quarks} Next, let us discuss the Andreev reflection of mesons with $s$ quarks. As such an example, we consider a charged K meson, $K^{+}=u\bar{s}$. In the case of the hadron/2SC interface, the Andreev reflection does not occur for any color component and therefore $K^{+}$ is reflected as the original one. In the case of the hadron/CFL interface, we see the following results: \begin{eqnarray} K^{+}_{R} = u_{R}\bar{s}_{R} \ \to \ (s_{B} \bar{u}_{B})^{H} = (K^{-}_{B})^{H} \\ K^{+}_{B} = u_{B}\bar{s}_{B} \ \to \ (s_{R} \bar{u}_{R})^{H} = (K^{-}_{R})^{H} \end{eqnarray} For the green component of $K^{+}$, i.e., $K^{+}_{G}$, on the other hand, the Andreev reflection of each constituent quark does occur, unlike in the 2SC case, but the reflected pair cannot form a color singlet. Consequently, $K^{+}$ is reflected as $K^{+}$. Lastly, we consider the $\eta$ meson as an incident particle in both the 2SC and CFL cases. We find that the Andreev reflection for the incident $\eta$ meson does not occur for any color component. The results of this section are summarized in Table 3.
\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline incident particle & incident(color) & reflected(color) & reflected particle \\ \hline \hline & $K^{+}_{R}$ & $(K^{-}_{B})^{H}$ & \\ \cline{2-3} $K^{+}$ & $K^{+}_{G}$ & $\times$ & $K^{+}$ \\ \cline{2-3} & $K^{+}_{B}$ & $(K^{-}_{R})^{H}$ & \\ \cline{2-3}\hline & $K^{-}_{R}$ & $(K^{+}_{B})^{H}$ &\\ \cline{2-3} $K^{-}$ & $K^{-}_{G}$ & $\times$ & $K^{-}$ \\ \cline{2-3} & $K^{-}_{B}$ & $(K^{+}_{R})^{H}$ & \\ \cline{2-3}\hline & $K^{0}_{R}$ & $\times$ & \\ \cline{2-3} $K^{0}$ & $K^{0}_{G}$ & $(\bar{K}^{0}_{B})^{H}$ & $K^{0}$ \\ \cline{2-3} & $K^{0}_{B}$ & $(\bar{K}^{0}_{G})^{H}$ & \\ \cline{2-3}\hline & $\bar{K}^{0}_{R}$ & $\times$ & \\ \cline{2-3} $\bar{K}^{0}$ & $\bar{K}^{0}_{G}$ & $(K^{0}_{B})^{H}$ & $\bar{K}^{0}$ \\ \cline{2-3} & $\bar{K}^{0}_{B}$ & $(K^{0}_{G})^{H}$ & \\ \cline{2-3}\hline \end{tabular} \caption{Andreev reflection of K mesons} \end{table} \subsection{Andreev reflection of baryons} Let us move on to the Andreev reflection of baryons. Since any baryon includes one blue quark, the Andreev reflection of baryons does not occur at the hadron/2SC interface, as described in the previous subsection. So we concentrate on the case of the hadron/CFL interface below. Let us first consider a neutron, $n=udd$, where two of the three quarks have the same flavor. Then a neutron consists of a certain sum of six color-singlet combinations: \begin{eqnarray} u_{R}d_{G}d_{B}, \quad u_{R}d_{B}d_{G}, \quad u_{G}d_{R}d_{B}, \quad u_{G}d_{B}d_{R}, \quad u_{B}d_{R}d_{G}, \quad u_{B}d_{G}d_{R} \nonumber \end{eqnarray} From the gap matrix structure (\ref{2-4}), one obtains the following results: \begin{eqnarray} u_{R}d_{G}d_{B} \to (s_{B}u_{R}s_{G})^H &=& (\Xi^{0})^{H},\\ u_{R}d_{B}d_{G} \to (s_{B}s_{G}u_{R})^H &=& (\Xi^{0})^{H}, \\ u_{G}d_{R}d_{B} \to (d_{R} s_{G}u_{G})^H &=& \ \times, \\ u_{G}d_{B}d_{R} \to (d_{R} s_{G}u_{G})^H &=& \ \times, \\ u_{B}d_{R}d_{G} \to (s_{R}u_{G}s_{B})^H &=& (\Xi^{0})^{H}, \\ u_{B}d_{G}d_{R} \to (s_{R}s_{B}u_{G})^H &=& (\Xi^{0})^{H}. \end{eqnarray} As previously, $\times$ shows that the Andreev reflection does not occur. Therefore, in this case, an incident neutron is reflected as a hole of $\Xi^0$ as well as a neutron, with a ratio of two to one. In Table 4, the results for baryons with two different flavors are summarized. Note here that, unlike the case of mesons, a hole is not interpreted as an anti-particle. Also, we are neglecting the effects of finite quark masses. \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline incident particle & constitution & reflected particles \\ \hline \hline $p$ & $uud$ & $p$, \ $(\Xi^{-})^{H}$ \\ \cline{2-2} \hline $n$ & $udd$ & $n$, \ $(\Xi^{0})^{H}$ \\ \cline{2-2} \hline $\Delta^{+}$ & $uud$ & $\Delta^{+}$, \ $(\Xi^{-})^{H}$ \\ \cline{2-2} \hline $\Delta^{0}$ & $udd$ & $\Delta^{0}$, \ $(\Xi^{0})^{H}$ \\ \hline $\Sigma^{+}$ & $uus$ & $\Sigma^{+}$, \ $(\Sigma^{-})^{H}$ \\ \hline $\Sigma^{-}$ & $dds$ & $\Sigma^{-}$, \ $(\Sigma^{+})^{H}$ \\ \hline $\Xi^{0}$ & $uss$ & $\Xi^{0}$, \ $(\Delta^{0})^{H}$ \ or \ $(n)^{H}$ \\ \hline $\Xi^{-}$ & $dss$ & $\Xi^{-}$, \ $(\Delta^{+})^{H}$ \ or \ $(p)^{H}$ \\ \hline \end{tabular} \caption{Baryons with two different flavors} \end{table} Next, let us consider baryons with three different flavors such as $\Lambda=uds$.
In this case, \begin{eqnarray} u_{R}d_{G}s_{B} &\to& uds = (\Lambda)^{H}, \\ u_{B}d_{R}s_{G} &\to& s_{R}u_{G}d_{B} = (\Lambda)^{H}, \\ u_{G}d_{B}s_{R} &\to& d_{R}s_{G}u_{B} = (\Lambda)^{H}, \\ u_{R}d_{B}s_{G} &\to& \ * \ s_{G}d_{B} = \ \times, \\ u_{G}d_{R}s_{B} &\to& d_{R}u_{G} \ * \ = \ \times, \\ u_{B}d_{G}s_{R} &\to& s_{R} * \ u_{B} = \ \times. \end{eqnarray} The reflected particles are $\Lambda$ itself and its own hole $(\Lambda)^{H}$. Here $*$ shows that there are two possible reflections due to the gap matrix structure, but in either case we cannot achieve a color-singlet combination. Table 5 summarizes the results. \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline incident particle & constitution & reflected particles \\ \hline \hline $\Lambda$ & $uds$ & $\Lambda$, \ $(\Lambda)^{H}$ \\ \cline{2-2} \hline $\Sigma^{0}$ & $uds$ & $\Sigma^{0}$, \ $(\Sigma^{0})^{H}$ \\ \cline{2-2} \hline \end{tabular} \caption{Baryons with three different flavors} \end{table} Lastly, let us comment on the case of baryons with one flavor such as $\Delta^{++}=uuu$. In this case, it easily turns out that we cannot make any color-singlet combination. Therefore, the Andreev reflection does not occur.

\section{Summary and Discussions} In this paper, we were interested in the Andreev reflection at the interface between the hadronic and color superconducting phases. Hadrons were defined as superpositions of constituent quarks, each of which is described by the BdG equations. The reflected quarks (holes, indeed) must form color-singlet states. Based on this observation, systematic studies were performed for the meson and baryon cases, respectively. We then obtained some peculiar patterns of reflected hadrons.

Here are some future perspectives. In this study, we have not performed any concrete computations of quantities such as reflection and transmission probabilities as well as probability currents, which were calculated before \cite{ST}. These will be necessary when we try to apply the results obtained here to the physics of neutron stars. Besides, we have not taken into account the influence of magnetic fields on the BdG equations in this paper. This is also worth considering. Furthermore, in the original paper by Andreev \cite{AR}, the BdG equations provided a nice interpretation for excitations in different phases (electrons and holes in the conductor, quasi-particles in the superconductor). Whether such an interpretation works, however, is not guaranteed in the case of hadron/CSC interfaces. This is because the system we considered here is relativistic, so that creation and annihilation of particles are possible. This suggests that we have to consider our problem at the field-theoretical level. Then the previous work in which one of the authors participated might give some insight \cite{HTYB}. This will be reported elsewhere in the future.

\section*{Acknowledgments} The authors thank the Yukawa Institute for Theoretical Physics at Kyoto University. Discussions during the YITP workshop YITP-W-19-08 on "Thermal Quantum Field Theory and Their Applications" were useful to complete this work. They are also grateful to M. Sadzikowski for his careful reading of this manuscript and useful comments. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0}
\section{Introduction} \label{sec:introduction} Transfer learning \citep{pan2009survey} is a prominent strategy to address a machine learning task of interest using information and parameters already learned and/or available for a related task. Such designs significantly aid training of overparameterized models like deep neural networks \citep[e.g.,][]{bengio2012deep,shin2016deep,long2017deep}, which are inherently challenging due to the vast number of parameters compared to the number of training data examples. There are various ways to integrate the previously-learned information from the source task in the learning process of the target task; often this is done by taking subsets of parameters (e.g., layers in neural networks) learned for the source task and plugging them into the target task model as parameter subsets that can be set fixed, finely tuned, or serve as non-random initialization for a thorough learning process. Obviously, transfer learning is useful only if the source and target tasks are sufficiently related with respect to the transfer mechanism utilized \citep[e.g.,][]{rosenstein2005transfer,zamir2018taskonomy,kornblith2019better}. Moreover, finding a successful transfer learning setting for deep neural networks was shown by \cite{raghu2019transfusion} to be a delicate engineering task. The importance of transfer learning in contemporary practice should motivate fundamental understanding of its main aspects via \textit{analytical} frameworks that may consider linear structures \citep[e.g.,][]{lampinen2018analytic}.

In general, the impressive success of overparameterized architectures for supervised learning has raised fundamental questions on the classical role of the bias-variance tradeoff that guided the traditional designs towards seemingly-optimal underparameterized models \citep{breiman1983many}. Recent empirical studies \citep{spigler2018jamming,geiger2019scaling,belkin2019reconciling} have demonstrated the phenomenon that overparameterized supervised learning corresponds to a generalization error curve with a \textit{double descent} trend (with respect to the number of parameters in the learned model). This double descent shape means that the generalization error peaks when the learned model starts to interpolate the training data (i.e., to achieve zero training error), but then the error continuously decreases as the overparameterization increases, often arriving at a global minimum that outperforms the best underparameterized solution. This phenomenon has been studied theoretically from the linear regression perspective in an extensive series of papers, e.g., by \citet{belkin2019two,hastie2019surprises,xu2019number,mei2019generalization,bartlett2020benign,muthukumar2020harmless}. The next stage is to provide corresponding fundamental understanding of learning problems beyond a single fully-supervised regression problem (see, e.g., the study by \citet{dar2020subspace} on overparameterized linear subspace fitting in unsupervised and semi-supervised settings).

In this paper we study the fundamentals of the natural meeting point between {\em overparameterized models} and the \textit{transfer learning} concept. Our analytical framework is based on the least squares solutions to two related linear regression problems: the first is a source task whose solution has been found independently, and the second is a target task that is addressed using the solution already available for the source task.
Specifically, the target task is carried out while keeping a subset of its parameters fixed to values transferred from the source task solution. Accordingly, the target task includes three types of parameters: free to-be-optimized parameters, transferred parameters set fixed to values from the source task, and parameters fixed to zeros (which in our case correspond to the elimination of input features). The mixture of the parameter types defines the parameterization level (i.e., the relation between the number of free parameters and the number of examples given) and the transfer-learning level (i.e., the portion of transferred parameters among the solution layout). We conduct a non-asymptotic statistical analysis of the generalization errors in this transfer learning structure. Clearly, since the source task is solved independently, its generalization error follows a regular (one-dimensional) double descent shape with respect to the number of examples and free parameters available in the source task. Hence, our main contribution and interest are in the characterization of the generalization error of the target task that is carried out using the transfer learning approach described above. We show that the generalization error of the target task follows a double descent trend that depends on the double descent shape of the source task and on transfer learning factors such as the number of parameters transferred and the correlation between the source and target tasks. We also examine the generalization error of the target task as a function of two quantities: the number of free parameters in the source task and the number of free parameters in the target task. This interpretation presents the generalization error of the target task as having a \textit{two-dimensional double descent} trend that clarifies the fundamental factors affecting the performance of the overall transfer learning approach. We also show how the generalization error of the target task is affected by the \textit{specific set} of transferred parameters and its delicate interplay with the forms of the true solution and the source-target task relation. By that, we provide an analytical theory for the fragile nature of successful transfer learning designs.

This paper is organized as follows. In Section \ref{sec:Transfer Learning in the Linear Regression Case: Problem Definition} we define the transfer learning architecture examined in this paper. In Sections \ref{sec:Main Analytic Results - On Average}-\ref{sec:Main Analytic Results - Specific} we present the analytical and empirical results that characterize the generalization errors of the target task, and outline the cases where transfer of parameters is beneficial. Note that Section \ref{sec:Main Analytic Results - On Average} studies the on-average generalization error in a simplified setting where transferred parameters are chosen uniformly at random, whereas Section \ref{sec:Main Analytic Results - Specific} examines the generalization error induced by transfer of a single, specific set of parameters. Section \ref{sec:Conclusion} concludes the paper. The Appendices include all of the proofs and mathematical developments as well as additional details and results for the empirical part of the paper.
\section{Transfer Learning between Linear Regression Tasks: Problem Definition} \label{sec:Transfer Learning in the Linear Regression Case: Problem Definition} \subsection{Source Task: Data Model and Solution Form} We start with the \textit{source} data model, where a $d$-dimensional Gaussian input vector $\vec{z}\sim \mathcal{N}\left(\vec{0},\mtx{I}_d\right)$ is connected to a response value $v\in\mathbb{R}$ via the noisy linear model \begin{equation} \label{eq:source data model} v = \vec{z}^T \vecgreek{\theta} + \xi, \end{equation} where $\xi\sim\mathcal{N}\left(0,\sigma_{\xi}^2\right)$ is a Gaussian noise component independent of $\vec{z}$, $\sigma_{\xi}>0$, and $\vecgreek{\theta}\in\mathbb{R}^d$ is an unknown vector. The data user is unfamiliar with the distribution of $\left(\vec{z},v\right)$, but gets a dataset of $\widetilde{n}$ independent and identically distributed (i.i.d.) draws of $\left(\vec{z},v\right)$ pairs denoted as ${\widetilde{\mathcal{D}}\triangleq\Big\{ { \left({\vec{z}^{(i)},v^{(i)}}\right) }\Big\}_{i=1}^{\widetilde{n}}}$. The $\widetilde{n}$ data samples can be rearranged as ${\mtx{Z}\triangleq \lbrack {\vec{z}^{(1)}, \dots, \vec{z}^{(\widetilde{n})}} \rbrack^{T}}$ and $\vec{v}\triangleq \lbrack{ v^{(1)}, \dots, v^{(\widetilde{n})} } \rbrack^{T}$ that satisfy the relation ${\vec{v} = \mtx{Z} \vecgreek{\theta} + \vecgreek{\xi}}$ where ${\vecgreek{\xi}\triangleq \lbrack{ {\xi}^{(1)}, \dots, {\xi}^{(\widetilde{n})} } \rbrack^{T}}$ is an unknown noise vector whose $i^{\rm th}$ component ${\xi}^{(i)}$ participates in the relation ${v^{(i)} = \vec{z}^{{(i)},T} \vecgreek{\theta} + \xi^{(i)}}$ underlying the $i^{\rm th}$ data sample.

The \textit{source} task is defined for a new (out of sample) data pair $\left( { \vec{z}^{(\rm test)}, v^{(\rm test)} } \right)$ drawn from the distribution induced by (\ref{eq:source data model}) independently of the $\widetilde{n}$ examples in $\widetilde{\mathcal{D}}$. For a given $\vec{z}^{(\rm test)}$, the source task is to estimate the response value $v^{(\rm test)}$ by the value $\widehat{v}$ that minimizes the corresponding out-of-sample squared error (i.e., the generalization error of the \textit{source} task) \begin{equation} \label{eq:out of sample error - source data class} \widetilde{\mathcal{E}}_{\rm out} \triangleq \expectation{ \left( \widehat{v} - v^{(\rm test)} \right)^2 } = \sigma_{\xi}^2 + \expectation{ \left \Vert { \widehat{\vecgreek{\theta}} - \vecgreek{\theta} } \right \Vert _2^2 } \end{equation} where the second equality stems from the data model in (\ref{eq:source data model}) and the corresponding linear form of ${\widehat{v} = \vec{z}^{({\rm test}),T}\widehat{\vecgreek{\theta}}}$ where $\widehat{\vecgreek{\theta}}$ estimates $\vecgreek{\theta}$ based on $\widetilde{\mathcal{D}}$. To address the source task based on the $\widetilde{n}$ examples, one should choose the number of free parameters in the estimate $\widehat{\vecgreek{\theta}}\in \mathbb{R}^{d} $. Consider a predetermined layout where $\widetilde{p}$ out of the $d$ components of $\widehat{\vecgreek{\theta}}$ are free to be optimized, whereas the remaining $d-\widetilde{p}$ components are constrained to zero values.
The coordinates of the free parameters are specified in the set ${\mathcal{S}\triangleq\lbrace { s_1, \dots, s_{\widetilde{p}} }\rbrace}$ where ${1 \le s_1 < \dots < s_{\widetilde{p}} \le d}$ and the complementary set ${\mathcal{S}^{\rm c} \triangleq \lbrace{1,\dots,d}\rbrace \setminus \mathcal{S}}$ contains the coordinates constrained to be zero valued. We define the $\rvert{\mathcal{S}}\lvert\times d$ matrix $\mtx{Q}_{\mathcal{S}}$ as the linear operator that extracts from a $d$-dimensional vector its $\rvert{\mathcal{S}}\lvert$-dimensional subvector of components residing at the coordinates specified in $\mathcal{S}$. Specifically, the values of the $\left (k, s_k \right) $ components ($k=1,\dots,\rvert{\mathcal{S}}\lvert$) of $\mtx{Q}_{\mathcal{S}}$ are ones and the other components of $\mtx{Q}_{\mathcal{S}}$ are zeros. The definition given here for $\mtx{Q}_{\mathcal{S}}$ can be adapted also to other sets of coordinates (e.g., $\mtx{Q}_{\mathcal{S}^{\rm c}}$ for $\mathcal{S}^{\rm c}$) as denoted by the subscript of $\mtx{Q}$. We now turn to formulate the \textit{source} task using the linear regression form of \begin{align} \label{eq:constrained linear regression - source data class} \widehat{\vecgreek{\theta}} = \argmin_{\vec{r}\in\mathbb{R}^{d}} \left \Vert \vec{v} - \mtx{Z}\vec{r} \right \Vert _2^2 ~~~\text{subject to}~~\mtx{Q}_{\mathcal{S}^{\rm c}} \vec{r} = \vec{0} \end{align} that its min-norm solution (see details in Appendix \ref{appendix:subsec:Mathematical Development of Estimate of Theta}) is \begin{equation} \label{eq:constrained linear regression - solution - source data class} {\widehat{\vecgreek{\theta}}=\mtx{Q}_{\mathcal{S}}^T \mtx{Z}_{\mathcal{S}}^{+} \vec{v}} \end{equation} where $\mtx{Z}_{\mathcal{S}}^{+}$ is the pseudoinverse of $\mtx{Z}_{\mathcal{S}}\triangleq \mtx{Z} \mtx{Q}_{\mathcal{S}}^{T}$. Note that $\mtx{Z}_{\mathcal{S}}$ is an $\widetilde{n} \times \widetilde{p}$ matrix whose $i^{\rm th}$ row is formed by the $\widetilde{p}$ components of $\vec{z}^{(i)}$ specified by the coordinates in $\mathcal{S}$, namely, only $\widetilde{p}$ out of the $d$ features of the input data vectors are utilized. Moreover, $\widehat{\vecgreek{\theta}}$ is a $d$-dimensional vector that may have nonzero values only in the $\widetilde{p}$ coordinates specified in $\mathcal{S}$ (this can be easily observed by noting that for an arbitrary ${\vec{w}\in\mathbb{R}^{\rvert{\mathcal{S}}\lvert}}$, the vector ${\vec{u}=\mtx{Q}_{\mathcal{S}}^T\vec{w}}$ is a $d$-dimensional vector whose components satisfy ${u_{s_k}=w_k}$ for ${k=1,...,\rvert{\mathcal{S}}\lvert}$ and ${u_j=0}$ for $j\notin \mathcal{S}$). While the specific optimization form in (\ref{eq:constrained linear regression - source data class}) was not explicit in previous studies of non-asymptotic settings \citep[e.g.,][]{breiman1983many,belkin2019two}, the solution in (\ref{eq:constrained linear regression - solution - source data class}) coincides with theirs and, thus, the formulation of the generalization error of our \textit{source} task (which is a linear regression problem that, by itself, does not have any transfer learning aspect) is available from \citet{breiman1983many,belkin2019two} and provided in Appendix \ref{appendix:subsec:Double Descent Formulation for Source Task} in our notations for completeness of presentation.
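As an illustration, the min-norm solution in (\ref{eq:constrained linear regression - solution - source data class}) can be computed directly via the pseudoinverse. The following NumPy sketch (an illustrative construction of ours, with hypothetical function and variable names) implements the source-task estimate for a given coordinate set $\mathcal{S}$:

\begin{verbatim}
import numpy as np

def source_estimate(Z, v, S, d):
    # Min-norm solution of the constrained source-task regression:
    # coordinates in S are free, all remaining coordinates are zero.
    Z_S = Z[:, S]                             # Z Q_S^T: keep only the S features
    theta_hat = np.zeros(d)
    theta_hat[S] = np.linalg.pinv(Z_S) @ v    # Q_S^T Z_S^+ v
    return theta_hat
\end{verbatim}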
\subsection{Target Task: Data Model and Solution using Transfer Learning} \label{subsec: Target Task} A second data class, which is our main interest, is modeled by $\left(\vec{x},y\right)\in\mathbb{R}^{d}\times\mathbb{R}$ that satisfy \begin{equation} \label{eq:target data model} y = \vec{x}^T \vecgreek{\beta} + \epsilon \end{equation} where ${\vec{x}\sim \mathcal{N}\left(\vec{0},\mtx{I}_d\right)}$ is a Gaussian input vector including $d$ features, ${\epsilon\sim\mathcal{N}\left(0,\sigma_{\epsilon}^2\right)}$ is a Gaussian noise component independent of $\vec{x}$, $\sigma_{\epsilon}>0$, and ${\vecgreek{\beta}\in\mathbb{R}^d}$ is an unknown vector related to the $\vecgreek{\theta}$ from (\ref{eq:source data model}) via \begin{equation} \label{eq:theta-beta relation} \vecgreek{\theta} = \mtx{H}\vecgreek{\beta} + \vecgreek{\eta} \end{equation} where ${\mtx{H}\in\mathbb{R}^{d\times d}}$ is a deterministic matrix and ${\vecgreek{\eta}\sim\mathcal{N}\left(\vec{0},\sigma_{\eta}^2\mtx{I}_d\right)}$ is a Gaussian noise vector with ${\sigma_{\eta}\ge 0}$. Here $\vecgreek{\eta}$, $\vec{x}$, $\epsilon$, $\vec{z}$ and $\xi$ are independent. The data user does not know the distribution of $\left(\vec{x},y\right)$ but receives a small dataset of $n$ i.i.d.\ draws of ${\left(\vec{x},y\right)}$ pairs denoted as ${\mathcal{D}\triangleq\Big\lbrace { \left(\vec{x}^{(i)},y^{(i)}\right) }\Big\rbrace_{i=1}^{n}}$. The $n$ data samples can be organized in a $n\times d$ matrix of input variables ${\mtx{X}\triangleq \lbrack {\vec{x}^{(1)}, \dots, \vec{x}^{(n)}} \rbrack^{T}}$ and a $n\times 1$ vector of responses ${\vec{y}\triangleq \lbrack{ y^{(1)}, \dots, y^{(n)} } \rbrack^{T}}$ that together satisfy the relation ${\vec{y} = \mtx{X} \vecgreek{\beta} + \vecgreek{\epsilon}}$ where ${\vecgreek{\epsilon}\triangleq \lbrack{ {\epsilon}^{(1)}, \dots, {\epsilon}^{(n)} } \rbrack^{T}}$ is an unknown noise vector whose $i^{\rm th}$ component ${\epsilon}^{(i)}$ is involved in the connection ${y^{(i)} = \vec{x}^{{(i)},T} \vecgreek{\beta} + \epsilon^{(i)}}$ underlying the $i^{\rm th}$ example pair. The \textit{target} task considers a new (out of sample) data pair ${\left( { \vec{x}^{(\rm test)}, y^{(\rm test)} } \right)}$ drawn from the model in (\ref{eq:target data model}) independently of the training examples in $\mathcal{D}$. Given $\vec{x}^{(\rm test)}$, the goal is to establish an estimate $\widehat{y}$ of the response value ${y^{(\rm test)}}$ such that the out-of-sample squared error, i.e., the generalization error of the \textit{target} task, \begin{equation} \label{eq:out of sample error - target data class - beta form} \mathcal{E}_{\rm out} \triangleq \expectation{ \left( \widehat{y} - y^{(\rm test)} \right)^2 } = \sigma_{\epsilon}^2 + \expectation{ \left \Vert { \widehat{\vecgreek{\beta}} - \vecgreek{\beta} } \right \Vert _2^2 } \end{equation} is minimized, where ${\widehat{y} = \vec{x}^{({\rm test}),T}\widehat{\vecgreek{\beta}}}$, and the second equality stems from the data model in (\ref{eq:target data model}). The target task is addressed via linear regression that seeks an estimate ${\widehat{\vecgreek{\beta}}\in\mathbb{R}^{d}}$ with a layout including three disjoint sets of coordinates ${\mathcal{F}, \mathcal{T}, \mathcal{Z}}$ that satisfy ${\mathcal{F}\cup \mathcal{T} \cup \mathcal{Z} = \{{1,\dots,d}\}}$ and correspond to three types of parameters: \begin{itemize} \item $p$ parameters are \textit{free} to be optimized and their coordinates are specified in $\mathcal{F}$.
\item $t$ parameters are \textit{transferred} from the co-located coordinates of the estimate $\widehat{\vecgreek{\theta}}$ already formed for the source task. Only the free parameters of the \textit{source} task are relevant for transfer to the target task and, therefore, $\mathcal{T}\subset\mathcal{S}$ and $t\in\{0,\dots,\widetilde{p}\}$. The transferred parameters are taken as is from $\widehat{\vecgreek{\theta}}$ and set fixed in the corresponding coordinates of $\widehat{\vecgreek{\beta}}$, i.e., for $k\in\mathcal{T}$, ${ \widehat{{\beta}}_{k} = \widehat{{\theta}}_{k} }$ where $\widehat{{\beta}}_{k}$ and $\widehat{{\theta}}_{k}$ are the $k^{\rm th}$ components of $\widehat{\vecgreek{\beta}}$ and $\widehat{\vecgreek{\theta}}$, respectively. \item $\ell$ parameters are set to \textit{zeros}. Their coordinates are included in ${\mathcal{Z}}$; this effectively corresponds to ignoring the input features at the same coordinates. \end{itemize} Clearly, the layout should satisfy $p+t+\ell = d$. Then, the constrained linear regression problem for the target task is formulated as \begin{align} \label{eq:constrained linear regression - target task} &\widehat{\vecgreek{\beta}} = \argmin_{\vec{b}\in\mathbb{R}^{d}} \left \Vert \vec{y} - \mtx{X}\vec{b} \right \Vert _2^2 \\ \nonumber &\mathmakebox[5em][l]{\text{subject to}}\mtx{Q}_{\mathcal{T}} \vec{b} = \mtx{Q}_{\mathcal{T}}\widehat{\vecgreek{\theta}} \\ \nonumber &\qquad\qquad\quad\mtx{Q}_{\mathcal{Z}} \vec{b} = \vec{0} \end{align} where $\mtx{Q}_{\mathcal{T}}$ and $\mtx{Q}_{\mathcal{Z}}$ are the linear operators extracting the subvectors corresponding to the coordinates in ${\mathcal{T}}$ and ${\mathcal{Z}}$, respectively, from $d$-dimensional vectors. Here $\widehat{\vecgreek{\theta}}\in \mathbb{R}^{d}$ is the \textit{precomputed} estimate for the source task and is considered a constant vector for the purpose of the target task. The examined transfer learning structure includes a single computation of the source task (\ref{eq:constrained linear regression - source data class}), followed by a single computation of the target task (\ref{eq:constrained linear regression - target task}) that produces the eventual estimate of interest $\widehat{\vecgreek{\beta}}$ using the given $\widehat{\vecgreek{\theta}}$. The min-norm solution of the target task in (\ref{eq:constrained linear regression - target task}) is (see details in Appendix \ref{appendix:subsec:Mathematical Development of Estimate of Beta}) \begin{equation} \label{eq:constrained linear regression - solution - target task} {\widehat{\vecgreek{\beta}}=\mtx{Q}_{\mathcal{F}}^T \mtx{X}_{\mathcal{F}}^{+} \left( \vec{y} - \mtx{X}_{\mathcal{T}} \widehat{\vecgreek{\theta}}_{\mathcal{T}} \right)} + \mtx{Q}_{\mathcal{T}}^T\widehat{\vecgreek{\theta}}_{\mathcal{T}} \end{equation} where $\widehat{\vecgreek{\theta}}_{\mathcal{T}} \triangleq \mtx{Q}_{\mathcal{T}}\widehat{\vecgreek{\theta}}$, $\mtx{X}_{\mathcal{T}}\triangleq \mtx{X} \mtx{Q}_{\mathcal{T}}^T$, and $\mtx{X}_{\mathcal{F}}^{+}$ is the pseudoinverse of $\mtx{X}_{\mathcal{F}}\triangleq \mtx{X} \mtx{Q}_{\mathcal{F}}^T$.
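The solution (\ref{eq:constrained linear regression - solution - target task}) likewise transcribes into a few lines of NumPy; the sketch below is illustrative (it reuses the hypothetical \texttt{solve\_source} helper from above, with 0-based index arrays).
\begin{verbatim}
def solve_target(X, y, theta_hat, F, T):
    # Min-norm target solution: the free coordinates F are fit by
    # least squares on the residual left after subtracting the
    # contribution of the transferred coordinates T; all other
    # coordinates stay zero.
    d = X.shape[1]
    theta_T = theta_hat[T]           # transferred values, kept fixed
    resid = y - X[:, T] @ theta_T    # y - X_T theta_hat_T
    beta_hat = np.zeros(d)
    beta_hat[F] = np.linalg.pinv(X[:, F]) @ resid  # Q_F^T X_F^+ (...)
    beta_hat[T] = theta_T            # copy the transferred parameters
    return beta_hat
\end{verbatim}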
Note that the desired layout is indeed implemented by the $\widehat{\vecgreek{\beta}}$ form in (\ref{eq:constrained linear regression - solution - target task}): the components corresponding to $\mathcal{Z}$ are zeros, the components corresponding to $\mathcal{T}$ are taken as is from $\widehat{\vecgreek{\theta}}$, and only the $p$ coordinates specified in $\mathcal{F}$ are adjusted for the purpose of minimizing the in-sample error in the optimization cost of (\ref{eq:constrained linear regression - target task}) while considering the transferred parameters. In this paper we study the generalization ability of overparameterized solutions (i.e., when $p>n$) to the \textit{target} task formulated in (\ref{eq:constrained linear regression - target task})--(\ref{eq:constrained linear regression - solution - target task}). \section{Transfer of Random Sets of Parameters} \label{sec:Main Analytic Results - On Average} To analytically study the generalization error of the target task we consider, in this section, the \textit{overall layout of coordinate subsets} ${\mathcal{L}\triangleq \lbrace{ \mathcal{S}, \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$ as a random structure that lets us formulate the expected value (with respect to $\mathcal{L}$) of the generalization error of interest. The simplified settings of this section provide useful insights towards Section \ref{sec:Main Analytic Results - Specific} where we analyze transfer of a specific set of parameters and formulate the generalization error for a specific layout ${\mathcal{L}}$ (i.e., there is no expectation over $\mathcal{L}$ in the formulations in Section \ref{sec:Main Analytic Results - Specific}). \begin{definition} \label{definition:coordinate subset layout - uniformly distributed} A coordinate subset layout ${\mathcal{L} = \lbrace{ \mathcal{S}, \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$ that is ${\lbrace{ \widetilde{p}, p, t }\rbrace}$-uniformly distributed, for ${\widetilde{p}\in \lbrace{1,\dots,d}\rbrace}$ and ${\left( {p,t}\right)\in \left\{{0,\dots,d}\right\}\times \left\{{0,\dots,\widetilde{p}}\right\}}$ such that ${p+t\le d}$, satisfies: $\mathcal{S}$ is uniformly chosen at random from all the subsets of $\widetilde{p}$ unique coordinates of $\{{1,\dots,d}\}$. Given $\mathcal{S}$, the target-task coordinate layout ${\lbrace{ \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$ is uniformly chosen at random from all the layouts where $\mathcal{F}$, $\mathcal{T}$, and $\mathcal{Z}$ are three disjoint sets of coordinates that satisfy ${\mathcal{F}\cup \mathcal{T} \cup \mathcal{Z} = \{{1,\dots,d}\}}$ such that $\lvert{\mathcal{F}}\rvert = p$, $\lvert{\mathcal{T}}\rvert = t$ and $\mathcal{T}\subset\mathcal{S}$, and $\lvert{\mathcal{Z}}\rvert = d-p-t$. \end{definition} Recall the relation between the two tasks as provided in (\ref{eq:theta-beta relation}) and let us denote ${\vecgreek{\beta}^{(\mtx{H})} \triangleq \mtx{H} \vecgreek{\beta}}$. Assume that ${\vecgreek{\beta}\ne\vec{0}}$. The following definitions emphasize crucial aspects in the examined transfer learning framework. The \textit{source task energy} is $ { \kappa \triangleq \expectationwrt{\Ltwonorm{\vecgreek{\theta}}}{\vecgreek{\eta}}= { \Ltwonorm{ \vecgreek{\beta}^{(\mtx{H})} } + d \sigma_{\eta}^2 }}$. Assume that ${\vecgreek{\theta}}$ does not deterministically degenerate to the zero vector; hence, ${\kappa\ne 0}$.
The \textit{normalized task correlation} between the two tasks is $ { \rho \triangleq \frac{ \langle { \vecgreek{\beta}^{(\mtx{H})}, \vecgreek{\beta} } \rangle }{ \kappa} } $. Let us characterize the expected out-of-sample error of the target task with respect to transfer of uniformly-distributed sets of parameters. \begin{theorem} \label{theorem:out of sample error - target task} Let ${\mathcal{L} = \lbrace{ \mathcal{S}, \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$ be a coordinate subset layout that is ${\lbrace{ \widetilde{p}, p, t }\rbrace}$-uniformly distributed. Then, the expected out-of-sample error of the target task has the form of \begin{align} \label{eq:out of sample error - target task - theorem - general decomposition form} &\expectationwrt{ \mathcal{E}_{\rm out} }{\mathcal{L}} = \begin{cases} \mathmakebox[24em][l]{\frac{n-1}{n-p-1}\left( (1 - \frac{p}{d})\Ltwonorm{\vecgreek{\beta}} + \sigma_{\epsilon}^2 + t \cdot \Delta\mathcal{E}_{\rm transfer}\right)} \text{for } p \le n-2, \\ \mathmakebox[24em][l]{\infty} \text{for } n-1 \le p \le n+1, \\ \mathmakebox[24em][l]{\frac{p-1}{p-n-1} \left({ (1 - \frac{p}{d}) \Ltwonorm{\vecgreek{\beta}}+ \sigma_{\epsilon}^2 + t \cdot \Delta\mathcal{E}_{\rm transfer} } \right) + \frac{ p - n}{d} \Ltwonorm{\vecgreek{\beta}} } \text{for } p \ge n+2, \end{cases} \end{align} where \begin{align} \label{eq:out of sample error - target task - theorem - transfer term - src error form} &\Delta\mathcal{E}_{\rm transfer} = { \frac{1}{\widetilde{p}} \cdot\left({ \expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}} - \sigma_{\xi}^2 - \kappa }\right) -2\frac{\kappa}{d}\left({\rho - 1}\right)\times{ \begin{cases} 1 & \text{for } \widetilde{p} \le \widetilde{n}, \\ \frac{\widetilde{n}}{\widetilde{p}} & \text{for } \widetilde{p} > \widetilde{n} \end{cases} } } \end{align} is the expected error difference introduced by each constrained parameter that is transferred from the source task instead of being set to zero. Recall that $\widetilde{\mathcal{E}}_{\rm out}$ and ${\mathcal{E}}_{\rm out}$ are the out-of-sample errors of the source and target tasks, respectively. \end{theorem} The last theorem is proved using non-asymptotic properties of Wishart matrices (see Appendix \ref{appendix:sec:Proof of Theorem 3.1}). Negative values of $\Delta\mathcal{E}_{\rm transfer}$ imply beneficial transfer learning and this occurs, for example, when the \textit{task correlation} $\rho$ is positive and sufficiently large with respect to the \textit{generalization error level in the source task} $\expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}}$ that should be sufficiently low (see Corollary \ref{corollary:benefits from parameter transfer - conditions for task correlation} below). Note that the out-of-sample error formulation in (\ref{eq:out of sample error - target task - theorem - general decomposition form}) depends on the parameterization level of the \textit{target} task (i.e., the $p,n$ pair), whereas (\ref{eq:out of sample error - target task - theorem - transfer term - src error form}) shows that the error difference $\Delta\mathcal{E}_{\rm transfer}$ depends on the expected generalization error of the \textit{source} task $\expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}}$. 
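As a reading aid, the case structure of (\ref{eq:out of sample error - target task - theorem - general decomposition form}) transcribes directly into code; the sketch below uses our own illustrative names and takes $\Delta\mathcal{E}_{\rm transfer}$ as an input (its explicit expression, derived in the corollary below, transcribes in the same manner).
\begin{verbatim}
def expected_target_error(n, d, p, t, beta_sq, sigma_eps2, d_transfer):
    # E_L[E_out] from the theorem; beta_sq = ||beta||_2^2 and
    # d_transfer = Delta E_transfer (supplied separately).
    core = (1 - p / d) * beta_sq + sigma_eps2 + t * d_transfer
    if p <= n - 2:
        return (n - 1) / (n - p - 1) * core
    if p >= n + 2:
        return (p - 1) / (p - n - 1) * core + (p - n) / d * beta_sq
    return np.inf  # the interpolation peak: n-1 <= p <= n+1
\end{verbatim}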
Using the explicit expression (see details in Appendix \ref{appendix:subsec:Double Descent Formulation for Source Task}) \begin{align} \label{appendix:eq:out of sample error - source task - expectation over S and eta} \expectationwrt{\widetilde{\mathcal{E}}_{\rm out} }{\mathcal{S},\vecgreek{\eta}} = \begin{cases} \mathmakebox[18em][l]{\frac{\widetilde{n}-1}{\widetilde{n}-\widetilde{p}-1} \left( \left({1 - \frac{\widetilde{p}}{d}}\right)\kappa + \sigma_{\xi}^2\right)} \text{for } \widetilde{p} \le \widetilde{n}-2, \\ \mathmakebox[18em][l]{\infty} \text{for } \widetilde{n}-1 \le \widetilde{p} \le \widetilde{n}+1, \\ \mathmakebox[18em][l]{\frac{\widetilde{p}-1}{\widetilde{p}-\widetilde{n}-1} \left( \left({1 - \frac{\widetilde{p}}{d}}\right)\kappa + \sigma_{\xi}^2\right) + \frac{\widetilde{p} - \widetilde{n}}{d} \kappa } \text{for } \widetilde{p} \ge \widetilde{n}+2, \end{cases} \end{align} we obtain the following formula for $\Delta\mathcal{E}_{\rm transfer}$, which makes explicit its dependency on the \textit{parameterization level} of the \textit{source} task (i.e., the $\widetilde{p},\widetilde{n}$ pair). See proof in Appendix \ref{appendix:subsec:Corollary 3 - Proof Outline}. \begin{corollary} \label{corollary:out of sample error - target task - transfer error term - detailed} The expected error difference term $\Delta\mathcal{E}_{\rm transfer}$ from (\ref{eq:out of sample error - target task - theorem - transfer term - src error form}) can be explicitly written as \begin{align} \label{eq:out of sample error - target task - theorem - transfer term - detailed} &\Delta\mathcal{E}_{\rm transfer} = \frac{\kappa}{d} \times \begin{cases} 1 - 2\rho + \frac{d-\widetilde{p}+d\cdot\kappa^{-1}\cdot \sigma_{\xi}^2}{\widetilde{n} - \widetilde{p} - 1} & \text{for } \widetilde{p} \le \widetilde{n}-2, \\ \infty& \text{for } \widetilde{n}-1 \le \widetilde{p} \le \widetilde{n}+1, \\ \frac{\widetilde{n}}{\widetilde{p}} \left( 1 - 2\rho + \frac{d-\widetilde{p}+d\cdot\kappa^{-1}\cdot \sigma_{\xi}^2}{\widetilde{p} - \widetilde{n} - 1} \right) & \text{for } \widetilde{p} \ge \widetilde{n}+2. \end{cases} \end{align} \end{corollary} Figure \ref{fig:target_generalization_errors_vs_p} presents the curves of $\expectationwrt{ \mathcal{E}_{\rm out} }{\mathcal{L}}$ with respect to the number of free parameters $p$ in the target task, while the source task has $\widetilde{p}=d$ free parameters. In Fig.~\ref{fig:target_generalization_errors_vs_p}, the solid-line curves correspond to analytical values induced by Theorem \ref{theorem:out of sample error - target task} and Corollary \ref{corollary:out of sample error - target task - transfer error term - detailed}, and the respective empirically computed values are denoted by circles (all the presented results are for $d=80$, ${n=20}$, $\widetilde{n}=50$, $\| \vecgreek{\beta} \|_2^2 = d$, and $\sigma_{\xi}^2 = 0.025\cdot d$; see additional details in Appendix \ref{appendix:sec:Empirical Results for Section 3 Additional Details and Demonstrations}). The number of free parameters $p$ is upper bounded by $d-t$, which gets smaller for a larger number of transferred parameters $t$ (see, in Fig.~\ref{fig:target_generalization_errors_vs_p}, the earlier stopping of the curves when $t$ is larger). Observe that the generalization error peaks at $p=n$ and then decreases as $p$ grows in the overparameterized range of $p>n+1$.
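Continuing the illustrative transcription, the explicit cases in (\ref{eq:out of sample error - target task - theorem - transfer term - detailed}) read as follows in the same NumPy sketch style (again, illustrative naming only).
\begin{verbatim}
def delta_transfer(n_src, p_src, d, kappa, rho, sigma_xi2):
    # Explicit Delta E_transfer; n_src and p_src are the source
    # sample size and free-parameter count (n~, p~).
    gap = d - p_src + d * sigma_xi2 / kappa
    if p_src <= n_src - 2:
        return kappa / d * (1 - 2 * rho + gap / (n_src - p_src - 1))
    if p_src >= n_src + 2:
        return (kappa / d) * (n_src / p_src) \
               * (1 - 2 * rho + gap / (p_src - n_src - 1))
    return np.inf
\end{verbatim}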
This peaking-then-decreasing behavior is a \textit{double descent} phenomenon, but without the first descent in the underparameterized range (this was also the case in settings examined by \citet{belkin2019two,dar2020subspace}). We can interpret the results in Figure \ref{fig:target_generalization_errors_vs_p} as examples of important transfer learning settings, where each subfigure considers a different task relation with a different pair of noise level $\sigma_{\eta}^2$ and operator $\mtx{H}$. Fig.~\ref{fig:target_generalization_errors_vs_p_eta_0_p_tilde_is_d} corresponds to transfer learning between two \textit{identical tasks}; therefore, transfer learning is \textit{beneficial} in the sense that for a given $p\notin\{n-1,n,n+1\}$ the error decreases as $t$ increases (i.e., as more parameters are transferred instead of being omitted). Figs.~\ref{fig:target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d},\ref{fig:H_average3_target_generalization_errors_vs_p_eta_0_p_tilde_80},\ref{fig:H_average15_target_generalization_errors_vs_p_eta_0_p_tilde_80},\ref{fig:H_average27_target_generalization_errors_vs_p_eta_0_p_tilde_80} correspond to transfer learning between two \textit{related tasks} (although not identical); hence, transfer learning is still \textit{beneficial, but less} than in the former case of identical tasks. Figs.~\ref{fig:target_generalization_errors_vs_p_eta_1_p_tilde_is_d},\ref{fig:H_average80_target_generalization_errors_vs_p_eta_0_p_tilde_80} correspond to transfer learning between two \textit{unrelated tasks} (although not extremely far); hence, transfer learning is \textit{useless}, but not harmful (i.e., for a given $p$, the number of transferred parameters $t$ does not affect the out-of-sample error). Fig.~\ref{fig:target_generalization_errors_vs_p_eta_2_p_tilde_is_d} corresponds to transfer learning between two \textit{very different tasks} and, accordingly, transfer learning \textit{degrades} the generalization performance (namely, for a given $p$, transferring more parameters increases the out-of-sample error). \begin{figure}[t] \floatconts {fig:target_generalization_errors_vs_p} {\caption{The expected generalization error of the target task, $\expectationwrt{ \mathcal{E}_{\rm out} }{\mathcal{L}}$, with respect to the number of free parameters (in the target task). The analytical values, induced from Theorem \ref{theorem:out of sample error - target task}, are presented using solid-line curves, and the respective empirical results obtained from averaging over 250 experiments are denoted by circle markers. Each subfigure considers a different case of the source-target task relation (\ref{eq:theta-beta relation}) with a different pair of $\sigma_{\eta}^2$ and $\mtx{H}$. The second row of subfigures corresponds to $\mtx{H}$ operators that perform local averaging; each of subfigures (e)-(h) is w.r.t.~a different local averaging neighborhood size.
Each curve color refers to a different number of transferred parameters.}} {% \subfigure[$\mtx{H}=\mtx{I}_d$]{\label{fig:target_generalization_errors_vs_p_eta_0_p_tilde_is_d}\includegraphics[width=0.24\textwidth]{figures/target_generalization_errors_vs_p_eta_0_p_tilde_80.eps}} \subfigure[$\mtx{H}=\mtx{I}_d$]{\label{fig:target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}% \includegraphics[width=0.24\textwidth]{figures/target_generalization_errors_vs_p_eta_0_5_p_tilde_80.eps}} \subfigure[$\mtx{H}=\mtx{I}_d$]{\label{fig:target_generalization_errors_vs_p_eta_1_p_tilde_is_d}% \includegraphics[width=0.24\textwidth]{figures/target_generalization_errors_vs_p_eta_1_p_tilde_80.eps}} \subfigure[$\mtx{H}=\mtx{I}_d$]{\label{fig:target_generalization_errors_vs_p_eta_2_p_tilde_is_d}% \includegraphics[width=0.24\textwidth]{figures/target_generalization_errors_vs_p_eta_2_p_tilde_80.eps}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 3}]{\label{fig:H_average3_target_generalization_errors_vs_p_eta_0_p_tilde_80}\includegraphics[width=0.24\textwidth]{figures/H_average3_target_generalization_errors_vs_p_eta_0_p_tilde_80.eps}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 15}]{\label{fig:H_average15_target_generalization_errors_vs_p_eta_0_p_tilde_80}% \includegraphics[width=0.24\textwidth]{figures/H_average15_target_generalization_errors_vs_p_eta_0_p_tilde_80.eps}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 27}]{\label{fig:H_average27_target_generalization_errors_vs_p_eta_0_p_tilde_80}% \includegraphics[width=0.24\textwidth]{figures/H_average27_target_generalization_errors_vs_p_eta_0_p_tilde_80.eps}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}overall\hspace{.4em}averaging neighborhood size 80}]{\label{fig:H_average80_target_generalization_errors_vs_p_eta_0_p_tilde_80}% \includegraphics[width=0.24\textwidth]{figures/H_average80_target_generalization_errors_vs_p_eta_0_p_tilde_80.eps}} } \end{figure} By considering the generalization error formula from Theorem \ref{theorem:out of sample error - target task} as a function of $\widetilde{p}$ and $p$ (i.e., the number of free parameters in the source and target tasks, respectively) we obtain a \textit{two-dimensional double descent} behavior as presented in Fig.~\ref{fig:target_generalization_errors_p_vs_p_tilde_2D_planes} and its extended version Fig.~\ref{appendix:fig:target_generalization_errors_p_vs_p_tilde_2D_planes}, where each subfigure is for a different pair of $t$ and $\sigma_{\eta}^2$. The results show a double descent trend along the $p$ axis (with a peak at $p=n$) and also, when parameter transfer is applied (i.e., $t>0$), a double descent trend along the $\widetilde{p}$ axis (with a peak at $\widetilde{p}=\widetilde{n}$). Our solution structure implies that $\widetilde{p}\in\{t,\dots,d\}$ and $p\in\{0,\dots,d-t\}$; hence, a larger number of transferred parameters $t$ eliminates a larger portion of the underparameterized range of the source task and also eliminates a larger portion of the overparameterized range of the target task (see in Fig.~\ref{appendix:fig:target_generalization_errors_p_vs_p_tilde_2D_planes} the white eliminated regions that grow with $t$). When $t$ is high, the wide elimination of portions from the $(\widetilde{p},p)$-plane hinders the complete form of the two-dimensional double descent phenomenon (see, e.g., Fig.~\ref{fig:analytical_target_generalization_errors_eta_0.5_t48}).
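The empirical values (circle markers) in these figures can be reproduced in the spirit of the following Monte Carlo sketch, which assumes the source data model ${v = \vec{z}^T\vecgreek{\theta} + \xi}$ with ${\vec{z}\sim\mathcal{N}(\vec{0},\mtx{I}_d)}$ and reuses the illustrative helpers defined earlier; it is a sketch under these assumptions, not the exact experimental code.
\begin{verbatim}
rng = np.random.default_rng(0)

def sample_layout(d, p_tilde, p, t):
    # A {p_tilde, p, t}-uniformly distributed layout (Definition 1):
    # T uniform within S; F uniform over the coordinates outside T;
    # Z collects the remaining coordinates.
    S = rng.choice(d, size=p_tilde, replace=False)
    T = rng.choice(S, size=t, replace=False)
    F = rng.choice(np.setdiff1d(np.arange(d), T), size=p, replace=False)
    Z = np.setdiff1d(np.arange(d), np.union1d(F, T))
    return S, F, T, Z

def empirical_error(d, n, n_src, p_tilde, p, t, H, beta,
                    sigma_eps, sigma_xi, sigma_eta, trials=250):
    # Monte Carlo estimate of E_L[E_out]: draw a fresh layout and
    # fresh source/target datasets, solve both tasks, and average
    # E_out = sigma_eps^2 + ||beta_hat - beta||^2 over the trials.
    errs = []
    for _ in range(trials):
        S, F, T, _ = sample_layout(d, p_tilde, p, t)
        theta = H @ beta + sigma_eta * rng.standard_normal(d)
        Z = rng.standard_normal((n_src, d))
        v = Z @ theta + sigma_xi * rng.standard_normal(n_src)
        theta_hat = solve_source(Z, v, S)
        X = rng.standard_normal((n, d))
        y = X @ beta + sigma_eps * rng.standard_normal(n)
        beta_hat = solve_target(X, y, theta_hat, F, T)
        errs.append(sigma_eps**2 + np.sum((beta_hat - beta)**2))
    return np.mean(errs)
\end{verbatim}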
Conceptually, we can observe a \textit{tradeoff between transfer learning and overparameterized learning}: an increased transfer of parameters limits the level of overparameterization applicable in the target task and, in turn, this limits the overall potential gains from transfer learning. Yet, when the source task is \textit{sufficiently related} to the target task (see, e.g., Figs.~\ref{fig:target_generalization_errors_vs_p_eta_0_p_tilde_is_d}, \ref{fig:target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}), the parameter transfer compensates, at least partially, for an insufficient number of free parameters (in the target task). The last claim is evident in Figures \ref{fig:target_generalization_errors_vs_p_eta_0_p_tilde_is_d}, \ref{fig:target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d} where, for $p>n+1$, there is a range of generalization error values that is achievable by several settings of $(p,t)$ pairs (i.e., specific error levels can be attained by curves of different colors in the same subfigure). E.g., in Fig.~\ref{fig:target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d} the error achieved by $p=60$ free parameters and no parameter transfer can also be achieved using $p=48$ free parameters and $t=32$ parameters transferred from the source task. In Appendix \ref{appendix:sec:Special Cases of Theorem 3.1} we elaborate on two special cases that are induced by setting $t=0$ or $p=0$ in the result of Theorem \ref{theorem:out of sample error - target task}. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=0.24\textwidth]{figures/analytical_target_generalization_errors_eta_0_5_t0.eps}\label{fig:analytical_target_generalization_errors_eta_0.5_t0}} \subfigure[]{\includegraphics[width=0.24\textwidth]{figures/analytical_target_generalization_errors_eta_0_5_t16.eps} \label{fig:analytical_target_generalization_errors_eta_0.5_t16}} \subfigure[]{\includegraphics[width=0.24\textwidth]{figures/analytical_target_generalization_errors_eta_0_5_t32.eps} \label{fig:analytical_target_generalization_errors_eta_0.5_t32}} \subfigure[]{\includegraphics[width=0.24\textwidth]{figures/analytical_target_generalization_errors_eta_0_5_t48.eps} \label{fig:analytical_target_generalization_errors_eta_0.5_t48}} \caption{Analytical evaluation of the expected generalization error of the target task, $\expectationwrt{ \mathcal{E}_{\rm out} }{\mathcal{L}}$, with respect to the number of free parameters $\widetilde{p}$ and $p$ (in the source and target tasks, respectively). Each subfigure considers a different number of transferred parameters $t$. The white regions correspond to $\left(\widetilde{p},p\right)$ settings eliminated by the value of $t$ in the specific subfigure. The yellow-colored areas correspond to values greater than or equal to 800. All subfigures are for $\sigma_{\eta}^2 = 0.5$ and $\mtx{H}=\mtx{I}_d$. See Fig.~\ref{appendix:fig:target_generalization_errors_p_vs_p_tilde_2D_planes} for settings with different values of $\sigma_{\eta}^2$. See Fig.~\ref{appendix:fig:empirical_target_generalization_errors_p_vs_p_tilde_2D_planes} for the corresponding empirical evaluation.} \label{fig:target_generalization_errors_p_vs_p_tilde_2D_planes} \end{center} \vspace*{-5mm} \end{figure} Let us return to the general case of Theorem \ref{theorem:out of sample error - target task} and examine the expected generalization error for a given number of free parameters $p$ in the \textit{target} task.
We now formulate an analytical condition on the correlation $\rho$ between the two tasks that is required for parameter transfer to be useful. \begin{corollary} \label{corollary:benefits from parameter transfer - conditions for task correlation} The term $\Delta\mathcal{E}_{\rm transfer}$, which quantifies the expected error difference due to each parameter being transferred instead of being set to zero, satisfies ${\Delta\mathcal{E}_{\rm transfer} < 0}$ (i.e., \textbf{parameter transfer is beneficial} for ${p\notin\{n-1,n,n+1\}}$) if the \textit{correlation} between the two tasks is \textbf{sufficiently high} such that \begin{equation} \label{eq:benefits from parameter transfer - conditions for task correlation} \rho > 1 + \frac{d}{2 \widetilde{p}} \left({ \frac{\expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}} - \sigma_{\xi}^2}{\kappa} - 1 }\right) \times { \left({ \begin{cases} 1 & \text{for } \widetilde{p} \le \widetilde{n}, \\ \frac{\widetilde{p}}{\widetilde{n}} & \text{for } \widetilde{p} > \widetilde{n}. \end{cases} }\right) } \end{equation} Otherwise, $\Delta\mathcal{E}_{\rm transfer} \ge 0$ (i.e., parameter transfer is not beneficial). \end{corollary} The last corollary is simply proved by using the formula for ${\Delta\mathcal{E}_{\rm transfer}}$ from (\ref{eq:out of sample error - target task - theorem - transfer term - src error form}) and reorganizing the respective inequality ${\Delta\mathcal{E}_{\rm transfer} < 0}$. The last result also emphasizes that the transfer learning performance depends on the interplay between the quality of the solution of the source task (reflected by $\expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}}$) and the correlation between the two tasks. A source task that generalizes well is important for transfer learning to induce good generalization in the target task. The number of free parameters $\widetilde{p}$ is a prominent factor that determines the \textit{source} task generalization ability. Accordingly, in Appendix \ref{appendix:subsec:Proof of Theorem 3.1} we use the detailed formulation of $\expectationwrt{\widetilde{\mathcal{E}}_{\rm out}}{\mathcal{S},\vecgreek{\eta}}$ to translate the condition (\ref{eq:benefits from parameter transfer - conditions for task correlation}) to explicitly reflect the interplay between $\widetilde{p}$ and $\rho$. The detailed formulation is provided in Corollary \ref{corollary:benefits from parameter transfer - conditions for p_tilde} in Appendix \ref{appendix:subsec:Proof of Theorem 3.1}, and its main lesson is that \textit{parameter transfer is beneficial} for ${p\notin\{n-1,n,n+1\}}$ if the \textit{source} task is \textit{sufficiently overparameterized} or sufficiently underparameterized with respect to the correlation between the tasks. This result is in accordance with the shapes of the generalization error curves in our settings, which indeed show improved generalization performance in the two extreme cases of under- and overparameterization.
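The threshold in (\ref{eq:benefits from parameter transfer - conditions for task correlation}) is easy to evaluate numerically; a sketch with our own illustrative naming:
\begin{verbatim}
def transfer_is_beneficial(rho, kappa, src_err, sigma_xi2,
                           d, p_tilde, n_src):
    # True iff rho exceeds the corollary's right-hand side, i.e.,
    # Delta E_transfer < 0 (for p outside {n-1, n, n+1});
    # src_err is the expected source generalization error.
    scale = 1.0 if p_tilde <= n_src else p_tilde / n_src
    rhs = 1 + (d / (2 * p_tilde)) \
            * ((src_err - sigma_xi2) / kappa - 1) * scale
    return rho > rhs
\end{verbatim}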
The results in Fig.~\ref{fig:transfer_learning_usefulness_plane - H averaging} show that there are fewer settings of beneficial transfer learning when the source and target tasks are less related, which is demonstrated in Fig.~\ref{fig:transfer_learning_usefulness_plane - H averaging} by higher noise levels $\sigma_{\eta}^2$ and/or when $\mtx{H}$ is an averaging operator over a larger neighborhood (e.g.,~Fig.~\ref{fig:transfer_learning_usefulness_plane__H_average59_analytic}), or in the case where $\mtx{H}$ is a discrete derivative operator (Fig.~\ref{fig:transfer_learning_usefulness_plane__H_derivative_analytic}). The analytical thresholds for useful transfer learning (Corollaries \ref{corollary:benefits from parameter transfer - conditions for task correlation},\ref{corollary:benefits from parameter transfer - conditions for p_tilde}) are demonstrated by the black lines in Fig.~\ref{fig:transfer_learning_usefulness_plane - H averaging}. Our analytical thresholds excellently match the regions where the \textit{empirical} settings yield useful parameter transfer (i.e., where $\Delta\mathcal{E}_{\rm transfer}<0$ is empirically satisfied). Additional analytical and empirical results, as well as details on the empirical computation of $\Delta\mathcal{E}_{\rm transfer}$, are available in Appendix \ref{appendix:subsec:Details on empirical evaluation of transfer learning error difference term}. \begin{figure}[t] \begin{center} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 3}]{\includegraphics[width=0.24\textwidth]{figures/transfer_learning_usefulness_plane__H_average3_analytic.eps}\label{fig:transfer_learning_usefulness_plane__H_average3_analytic}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 15}]{\includegraphics[width=0.24\textwidth]{figures/transfer_learning_usefulness_plane__H_average15_analytic.eps} \label{fig:transfer_learning_usefulness_plane__H_average15_analytic}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}local\hspace{.4em}averaging neighborhood size 59}]{\includegraphics[width=0.24\textwidth]{figures/transfer_learning_usefulness_plane__H_average59_analytic.eps} \label{fig:transfer_learning_usefulness_plane__H_average59_analytic}} \subfigure[{\small$\mtx{H}$:\hspace{.4em}discrete\hspace{.4em}derivative}]{\includegraphics[width=0.24\textwidth]{figures/transfer_learning_usefulness_plane__H_derivative_analytic.eps} \label{fig:transfer_learning_usefulness_plane__H_derivative_analytic}} \caption{The analytical values of $\Delta\mathcal{E}_{\rm transfer}$ defined in Theorem \ref{theorem:out of sample error - target task} (namely, the expected error difference due to transfer of a parameter from the source to the target task) as a function of $\widetilde{p}$ and $\sigma_{\eta}^2$. The positive and negative values of $\Delta\mathcal{E}_{\rm transfer}$ appear in color scales of red and blue, respectively. The regions of negative values (shown in shades of blue) correspond to beneficial transfer of parameters. The positive values were truncated at the value of 2 for clarity of visualization. The solid black lines (in all subfigures) denote the analytical thresholds for useful transfer learning as implied by Corollary \ref{corollary:benefits from parameter transfer - conditions for p_tilde}. Each subfigure corresponds to a different task relation model induced by the definitions of $\mtx{H}$ as: \textit{(a)-(c)} local averaging operators with different neighborhood sizes, \textit{(d)} discrete derivative.
For all the subfigures, $d=80$, $\widetilde{n}=50$, $\sigma_{\xi}^2 = 0.025\cdot d$, and $\| \vecgreek{\beta} \|_2^2 = d$, where the components of $\vecgreek{\beta}$ have a piecewise-constant form (see Fig.~\ref{appendix:fig:piecewise_constant_beta_structure}). See corresponding empirical results in Fig.~\ref{appendix:fig:transfer_learning_usefulness_plane - H is averaging}.} \label{fig:transfer_learning_usefulness_plane - H averaging} \end{center} \vspace*{-5mm} \end{figure} \section{Transfer of a Specific Set of Parameters} \label{sec:Main Analytic Results - Specific} Equipped with the fundamental analytical understanding of the key principles formulated in the previous section for the case of a uniformly-distributed coordinate layout, we now proceed to the analysis of settings that consider a \textit{specific non-random} layout of coordinates ${\mathcal{L} = \lbrace{ \mathcal{S}, \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$. First, we define the following quantities with respect to the \textit{specific} coordinate layout used: The \textit{energy of the transferred parameters} is ${\kappa_{\mathcal{T}}\triangleq\Ltwonorm{ \vecgreek{\beta}^{(\mtx{H})}_{\mathcal{T}}} + t \sigma_{\eta}^2 }$. The \textit{normalized task correlation in $\mathcal{T}$} between the two tasks is defined as $ { \rho_{\mathcal{T}} \triangleq \frac{ \langle { \vecgreek{\beta}^{(\mtx{H})}_{\mathcal{T}}, \vecgreek{\beta}_{\mathcal{T}} } \rangle }{ \kappa_{\mathcal{T}} } } $ for $t>0$. The following theorem formulates the generalization error of the target task that is solved using a specific set of transferred parameters indicated by the coordinates in $\mathcal{T}$. See proof in Appendix \ref{appendix:sec:Transfer of Specific Sets of Parameters: Proof of Theorem 4.1}. \begin{theorem} \label{theorem:out of sample error - target task - specific layout} Let ${\mathcal{L} = \lbrace{ \mathcal{S}, \mathcal{F}, \mathcal{T}, \mathcal{Z} }\rbrace}$ be a \textbf{specific, non-random} coordinate subset layout. Then, the out-of-sample error of the target task has the form of \begin{align} &\mathcal{E}_{\rm out}^{(\mathcal{L})} = \nonumber \begin{cases} \mathmakebox[24em][l]{\frac{n-1}{n-p-1}\left( \Ltwonorm{\vecgreek{\beta}_{\mathcal{F}^{c}}} + \sigma_{\epsilon}^{2} + \Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})}\right) } \text{for } p \le n-2, \\ \mathmakebox[24em][l]{\infty} \text{for } n-1 \le p \le n+1, \\ \mathmakebox[24em][l]{\frac{p-1}{p-n-1} \left( \Ltwonorm{\vecgreek{\beta}_{\mathcal{F}^{c}}} + \sigma_{\epsilon}^{2} + \Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})}\right) + \left({1-\frac{n}{p}}\right)\Ltwonorm{\vecgreek{\beta}_{\mathcal{F}}}}\text{for } p \ge n+2, \end{cases} \end{align} where \begin{align} &\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})} = \expectation{\Ltwonorm{\widehat{\vecgreek{\theta}}_{\mathcal{T}} - \vecgreek{\theta}_{\mathcal{T}} } } - \kappa_{\mathcal{T}} \times \left({ 1 + 2\left({\rho_{\mathcal{T}} -1 }\right) \times \left({\begin{cases} \mathmakebox[2em][l]{1} \text{for } \widetilde{p} \le \widetilde{n}, \\ \mathmakebox[2em][l]{ \frac{\widetilde{n}}{\widetilde{p}} } \text{for } \widetilde{p} > \widetilde{n} \end{cases} }\right)}\right) \label{eq:out of sample error - target task - theorem - transfer term - general form - specific layout} \end{align} for $t>0$, and $\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})} = 0$ for $t=0$.
Here $\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})}$ is the error difference introduced by transferring from the source task the parameters specified in $\mathcal{T}$ instead of setting them to zero. \end{theorem} The formulation of the error difference term in (\ref{eq:out of sample error - target task - theorem - transfer term - general form - specific layout}) demonstrates the interplay between the generalization performance in the source task (reflected by $\expectation{\Ltwonorm{\widehat{\vecgreek{\theta}}_{\mathcal{T}} - \vecgreek{\theta}_{\mathcal{T}} }}$) and the correlation between the source and target tasks (reflected by $\rho_{\mathcal{T}}$) --- however, unlike Theorem \ref{theorem:out of sample error - target task}, the current case is affected by the source generalization ability and the task relation \textit{only in the subset of transferred parameters} $\mathcal{T}$. The following corollary explicitly formulates the error difference term due to transfer learning (see proof in Appendix \ref{appendix:subsec:Proof of Corollary 6}). \begin{corollary} \label{corollary:out of sample error - target task - specific layout - detailed - specific layout} The error difference term $\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})}$ from (\ref{eq:out of sample error - target task - theorem - transfer term - general form - specific layout}) can be written as \begin{align} &\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})} = \kappa_{\mathcal{T}} \times \begin{cases} \mathmakebox[20em][l]{1 - 2\rho_{\mathcal{T}} + t\frac{ \zeta_{\mathcal{S}^{c}} + \sigma_{\xi}^2 }{\left({\widetilde{n} - \widetilde{p} - 1}\right) \kappa_{\mathcal{T}}}} \text{for } 1\le\widetilde{p} \le \widetilde{n}-2, \\ \mathmakebox[20em][l]{\infty} \text{for } \widetilde{n}-1 \le \widetilde{p} \le \widetilde{n}+1, \\ \mathmakebox[20em][l]{\frac{\widetilde{n}}{\widetilde{p}} \left( \frac{ \left({\widetilde{p}^2-\widetilde{n}\widetilde{p}}\right)\psi_{\mathcal{T}} + \widetilde{n}\widetilde{p} - 1 }{\widetilde{p}^2 - 1} - 2\rho_{\mathcal{T}} + t\frac{ \zeta_{\mathcal{S}^{c}} + \sigma_{\xi}^2 }{\left({\widetilde{p} - \widetilde{n} - 1}\right)\kappa_{\mathcal{T}}} \right)} \text{for } \widetilde{p} \ge \widetilde{n}+2. \end{cases}\nonumber \label{eq:out of sample error - target task - theorem - transfer term - detailed - specific layout} \end{align} Here we used the following definitions. The \textit{energy of the zeroed parameters in the source task} is ${\zeta_{\mathcal{S}^{c}}\triangleq\Ltwonorm{ \vecgreek{\beta}^{(\mtx{H})}_{\mathcal{S}^{c}}} + (d-\widetilde{p}) \sigma_{\eta}^2 }$. The \textit{possibly-transferred to actually-transferred energy ratio of the source task} is defined as ${\psi_{\mathcal{T}}\triangleq \frac{t}{\widetilde{p}}\cdot\frac{\Ltwonorm{ \vecgreek{\beta}^{(\mtx{H})}_{\mathcal{S}} } + \widetilde{p}\sigma_{\eta}^2 }{\kappa_{\mathcal{T}}}}$ for $t>0$ and $\widetilde{p}>0$. We can also interpret ${\psi_{\mathcal{T}}^{-1}}$ as the \textit{utilization} of the transfer of $t$ out of the $\widetilde{p}$ parameters of the source task. \end{corollary} The formulation of the error difference due to transfer learning in Corollary \ref{corollary:out of sample error - target task - specific layout - detailed - specific layout} shows that the benefits from transfer learning increase with a greater positive value of the \textit{task correlation in the transferred coordinates} $\rho_{\mathcal{T}}$.
Moreover, \textit{higher utilization} ${\psi_{\mathcal{T}}^{-1}}$ promotes benefits from the transfer learning process. Figures \ref{fig:error_curves_for_specific_layouts__main_text_sample},\ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_linear_beta},\ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_sparse_beta} show the curves of $\mathcal{E}_{\rm out}^{(\mathcal{L})}$ for specific coordinate layouts $\mathcal{L}$ that evolve with respect to the number of free parameters $p$ in the target task. The excellent fit of the analytical results to the empirical values is evident. The effect of the specific coordinate layout utilized is clearly visible in the less-smooth curves (compared to the on-average results for random coordinate layouts in Fig.~\ref{fig:target_generalization_errors_vs_p}). We examine two different cases for the true $\vecgreek{\beta}$ (a linearly-increasing (Fig.~\ref{appendix:fig:linear_beta_graph}) and a sparse (Fig.~\ref{appendix:fig:sparse_beta_graph}) layout of values, with the same norm); the resulting error curves differ significantly despite the use of the same sequential construction of the coordinate layouts with respect to $p$ (e.g., compare Figs.~\ref{fig:specific_sparse_beta__H_is_I__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d} and \ref{fig:specific_linear_beta__H_is_I__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}). The linear operator $\mtx{H}$ in the task relation model greatly affects the generalization error curves, as is evident from comparing our results for different types of $\mtx{H}$: an identity, local averaging (with neighborhood size 11), and discrete derivative operators (e.g., compare subfigures within the first row of Fig.~\ref{fig:error_curves_for_specific_layouts__main_text_sample}; see also the complete set of results in Figs.~\ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_linear_beta}-\ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_sparse_beta} in Appendix \ref{appendix:sec:Transfer of Specific Sets of Parameters: Additional Analytical and Empirical Evaluations of Generalization Error Curves}). The results clearly show that the interplay among the structures of $\mtx{H}$, $\vecgreek{\beta}$, and the coordinate layout significantly affects the generalization performance.
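For a concrete layout, the specific-layout quantities used throughout this section can be computed directly from $\vecgreek{\beta}$, $\mtx{H}$, and the index sets; an illustrative NumPy sketch (0-based index arrays, names of our own choosing):
\begin{verbatim}
def layout_quantities(beta, H, T, S, sigma_eta2):
    # kappa_T and rho_T (defined for t > 0), and the energy ratio
    # psi_T from the corollary above.
    bH = H @ beta                        # beta^(H) = H beta
    t, p_tilde = len(T), len(S)
    kappa_T = np.sum(bH[T]**2) + t * sigma_eta2
    rho_T = bH[T] @ beta[T] / kappa_T
    psi_T = (t / p_tilde) \
            * (np.sum(bH[S]**2) + p_tilde * sigma_eta2) / kappa_T
    return kappa_T, rho_T, psi_T
\end{verbatim}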
\begin{figure}[t] \begin{center} {\subfigure[]{\includegraphics[width=0.24\textwidth]{figures/linear_beta_graph.eps}\label{appendix:fig:linear_beta_graph}} \subfigure[Linear-shape $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_linear_beta__H_is_I__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps} \label{fig:specific_linear_beta__H_is_I__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \subfigure[Linear-shape $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_linear_beta__H_local_average__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps} \label{fig:specific_linear_beta__H_local_average__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \subfigure[Linear-shape $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_linear_beta__H_is_derivative__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps} \label{fig:specific_linear_beta__H_is_derivative__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \\ \subfigure[]{\includegraphics[width=0.24\textwidth]{figures/sparse_beta_graph.eps} \label{appendix:fig:sparse_beta_graph}}} \subfigure[Sparse $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_sparse_beta__H_is_I__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps}\label{fig:specific_sparse_beta__H_is_I__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \subfigure[Sparse $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_sparse_beta__H_local_average__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps} \label{fig:specific_sparse_beta__H_local_average__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \subfigure[Sparse $\vecgreek{\beta}$]{\includegraphics[width=0.24\textwidth]{figures/specific_sparse_beta__H_is_derivative__target_generalization_errors_vs_p_eta_0_5_p_tilde_is_d.eps} \label{fig:specific_sparse_beta__H_is_derivative__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d}} \caption{Analytical (solid lines) and empirical (circle markers) values of $\mathcal{E}_{\rm out}^{(\mathcal{L})}$ for specific, non-random coordinate layouts. All subfigures use the same sequential evolution of $\mathcal{L}$ with $p$. Here $\sigma_{\eta}^2 = 0.5$. See Figures \ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_linear_beta}-\ref{appendix:fig:target_generalization_errors_vs_p__specific_extended_for_sparse_beta} for the complete set of results. } \label{fig:error_curves_for_specific_layouts__main_text_sample} \end{center} \vspace*{-10mm} \end{figure} Theorem \ref{theorem:out of sample error - target task - specific layout} yields the next corollary, which is a direct consequence of (\ref{eq:out of sample error - target task - theorem - transfer term - general form - specific layout}). \begin{corollary} \label{corollary:benefits from parameter transfer - specific layout} Let $\mathcal{S}$ be given. 
Then, the parameter transfer induced by a specific ${\mathcal{T}\subset\mathcal{S}}$ is \textbf{beneficial} for ${p\notin\{n-1,n,n+1\}}$ if $\Delta\mathcal{E}_{\rm transfer}^{(\mathcal{T},\mathcal{S})} < 0$, which implies that the \textit{correlation in the subset $\mathcal{T}$} between the two tasks should be \textbf{sufficiently high} such that \begin{equation} \label{eq:benefits from parameter transfer - conditions for task correlation - specific layout} \rho_{\mathcal{T}} > 1 + \frac{1}{2} \left({ \frac{1}{\kappa_{\mathcal{T}}}{\expectation{\Ltwonorm{\widehat{\vecgreek{\theta}}_{\mathcal{T}} - \vecgreek{\theta}_{\mathcal{T}} } }} - 1 }\right) \times { \left({ \begin{cases} 1 & \text{for } \widetilde{p} \le \widetilde{n}, \\ \frac{\widetilde{p}}{\widetilde{n}} & \text{for } \widetilde{p} > \widetilde{n}. \end{cases} }\right) } \end{equation} Otherwise, this specific parameter transfer is not beneficial compared with zeroing the parameters (i.e., omitting the input features) corresponding to $\mathcal{T}$. \end{corollary} The last corollary extends Corollary \ref{corollary:benefits from parameter transfer - conditions for task correlation} by emphasizing that transferring a specific set of parameters that generalizes well on the respective part of the source task can relax the demand on the task correlation level that is required for successful transfer learning. Our results also show that a specific set $\mathcal{T}$ of $t$ transferred parameters can be the best setting for a given set $\mathcal{F}$ of $p$ free parameters but not necessarily for an extended set $\mathcal{F}'\supset\mathcal{F}$ of $p'>p$ free parameters (e.g., Fig.~\ref{fig:specific_sparse_beta__H_local_average__target_generalization_errors_vs_p_eta_0.5_p_tilde_is_d} where the orange and red colored curves do not consistently maintain their relative vertical order in the overparameterized range of solutions). Our results exemplify that transfer learning settings are fragile and that finding a successful setting is a delicate engineering task. Hence, our theory qualitatively explains similar practical behavior in deep neural networks \citep{raghu2019transfusion}. \section{Conclusions} \label{sec:Conclusion} In this work we have established an analytical framework for the fundamental study of transfer learning in conjunction with overparameterized models. We used least squares solutions to linear regression problems to shed light on the generalization performance induced for a target task that is addressed using parameters transferred from an already completed source task. We formulated the generalization error of the target task and presented its two-dimensional double descent shape as a function of the number of free parameters individually available in the source and target tasks. Our results demonstrate an inherent tradeoff between overparameterized learning and transfer learning, namely, a more extensive transfer of parameters limits the maximal degree of overparameterization in the target task and its potential benefits --- nevertheless, in proper settings (e.g., when the source and target tasks are sufficiently related) transfer learning can be a substitute for increased overparameterization. We characterized the conditions for a beneficial transfer of parameters and demonstrated its high sensitivity to the delicate interaction among crucial aspects such as the source-target task relation, the specific choice of transferred parameters, and the form of the true solution.
We believe that our work opens a new research direction for the fundamental understanding of the generalization ability of transfer learning designs. Future work may study the theory and practice of additional transfer learning layouts such as fine-tuning of the transferred parameters, inclusion of explicit regularization together with transfer learning, and well-specified settings where the task relation model is known and utilized in the actual learning process.
\section{Introduction} Online recruitment platforms, e.g., LinkedIn, make it easy for companies to post jobs and for job seekers to submit resumes. In recent years, the number of both job posts and resumes submitted to online recruitment platforms has been growing rapidly. For example, in the U.S., over 3 million jobs are posted on LinkedIn every month~\cite{linkedinreport}. Traditionally, resumes submitted for each job are reviewed manually by the recruiter to decide whether to offer the candidate a job interview. However, manual reviewing is too slow and expensive to handle the overwhelming volume of new job posts and resumes on online platforms. It is essential to design effective algorithms to do job-resume matching automatically. This problem is called person-job fit. Multiple approaches have been proposed for person-job fit. Earlier solutions consider person-job fit as a recommendation problem and apply collaborative filtering (CF) algorithms~\cite{iscid2014,DBLP:conf/asunam/DiabyVL13,DBLP:conf/www/LuHG13}. However, CF algorithms ignore the content of the job post and the resume, e.g., the working experience of the candidate and the job requirements. In contrast, when we do manual reviewing, we read the resume to understand the candidate (e.g., the skills and experience); we read the job post to understand the requirements; then we make a decision, i.e., whether the candidate should be offered an interview. We can see that the content of the resume and job post plays a key role in person-job fit. It is thus vital to extract effective representations of the content. Recently, deep learning models have largely improved the performance of natural language processing tasks, including semantic matching~\cite{DBLP:conf/cikm/HuangHGDAH13,DBLP:conf/www/Mitra0C17} and question answering. Deep-learning-based methods~\cite{DBLP:conf/cikm/LeHSZ0019,DBLP:conf/kdd/YanLSZZ019,DBLP:conf/sigir/QinZXZJCX18,bian-etal-2019-domain} have consequently been introduced for person-job fit, focusing on learning effective representations of the free text of the job post and resume. The learned representations are then compared to generate a matching score. However, they only process the text paragraphs, including the working experience and job requirements, and fail to comprehend other (semi-) structured fields like the education, skills, etc. This is partly because deep learning models are typically applied to natural language sentences instead of (semi-) structured fields. As a result, valuable information from these fields is left unexploited. \begin{figure}[h] \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{fig/JobsPerCand} \caption{} \label{fig:jobspercand} \end{subfigure} \hfill \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{fig/ResumesPerJob} \caption{} \label{fig:resumesperjob} \end{subfigure} \caption{Frequency of the number of jobs applied for per candidate (left) and resumes received per job (right).} \label{fig:history} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.98\textwidth]{fig/problem} \caption{Illustration of candidates, job posts and resumes.} \label{fig:prob} \end{figure*} Moreover, most existing deep-learning-based solutions ignore the historical applications of the candidate and the job post. It is common for a candidate to apply for multiple jobs and for a job to receive multiple resumes, as shown in Figure~\ref{fig:history}. The numbers are derived from our experiment dataset.
Specifically, about 36\% of candidates have applied for more than one job and about 88\% of jobs have received more than one resume. The history data could provide extra information about the candidate and the job. Specifically, sometimes the job description is not written carefully or comprehensively, e.g., it may miss some requirements or leave the preference between two skills (Python versus C++) unclear; sometimes the recruiter's requirements or expectations may change dynamically, e.g., increase if the resumes received so far are of very high quality. In such cases, the history data, including the accepted and rejected resumes of a job, could help infer the recruiter's implicit intentions not elaborated in the job description. In addition, deep learning models are typically difficult to interpret~\cite{DBLP:journals/corr/Lipton16a} due to complex internal transformations, although a lot of attention has been paid to this issue~\cite{8099837,DBLP:conf/iccv/SelvarajuCDVPB17,DBLP:conf/icml/KohL17}. As a result, deep-learning-based person-job fit solutions face the interpretation problem. Yet, in real deployment, it is necessary to explain why a candidate is accepted or rejected for a given job post. To address these issues, we propose a feature fusion solution in this paper. First, we propose a semantic entity extraction step to extract semantic entities, including the university and working years, from the whole resume and job post. All extracted entities are then concatenated into a vector, which captures the high-level semantics of the content and is easy to interpret. The semantic vector is further transformed through an adapted DeepFM model to learn the correlations among the entities. We also apply a convolutional neural network (CNN) over the text fields in the resume (resp. job post) following existing works. The outputs from DeepFM and CNN are fused via concatenation to produce a feature vector representing the explicit intention of a resume (resp. job post). Second, to exploit the history information, we propose a new encoding scheme for the job-resume pair from an application. All the historical applications, including both the accepted and rejected cases, of a candidate (resp. a job post) are processed by an LSTM model to learn the implicit intention. Last, we do a late fusion of the representations for the explicit and implicit intentions to represent the job and candidate comprehensively. Extensive experimental evaluations over real data show that our solution outperforms existing methods. We also conduct ablation studies to verify the effectiveness of each component of our solution. In addition, case studies are presented to demonstrate the contribution of semantic entities to model interpretation. Our solution has been partially deployed for one year. Experience on improving efficiency and reducing cost is shared at the end of this paper. Our contributions include \begin{enumerate} \item We propose a method to learn the representation for the explicit intention of the candidate and recruiter by processing the whole content of the resume and job post.
\end{enumerate} \section{Related Works} With the proliferation of online recruitment services, various recruitment analysis tasks have been studied, including career transition (a.k.a. talent flow) analysis~\cite{DBLP:conf/kdd/ChengXCACG13,DBLP:conf/www/WangZPB13,DBLP:journals/tkde/XuYYXZ19}, job (resume) recommendation, and person-job fit. We shall review the recent papers on the last two tasks in more detail. \subsection{Job and Resume Recommendation} Existing online recruitment platforms like LinkedIn typically have user information including the demographic profile and working experience. They can also record the actions of the user on each job post, e.g., clicking and browsing time. Existing recommendation algorithms such as collaborative filtering have been adapted~\cite{1579569, DBLP:conf/www/LuHG13, 10.1145/2492517.2500266} to recommend jobs to users or recommend resumes to recruiters based on this information together with user-user relationships (e.g., friends or following). The RecSys Challenge 2016~\cite{10.1145/2987538} is about the job recommendation problem. Our person-job fit problem differs from the job (resp. resume) recommendation problem in two aspects. First, job (resp. resume) recommendation tries to predict and rank the jobs (resp. resumes) based on the users' (resp. recruiters') preferences; in person-job fit, given a candidate-job pair, we already know that the candidate has applied for (i.e., shown interest in) the given job; the task is to predict whether the recruiter will offer him/her an interview or not. Second, in person-job fit, there is no social networking data or action data; instead, only resumes and job posts are available. \subsection{Person-Job Fit}\label{sec:related-person-job} Generally, good person-job fit algorithms should follow the manual review process, which checks the resume of the candidate and the job requirements to decide whether they match. Understanding the semantics of the resume and job post is vital. Therefore, this task is highly related to natural language processing, especially text representation learning. Deep neural networks such as convolutional neural networks~\cite{kim-2014-convolutional} (CNN), recurrent neural networks~\cite{DBLP:conf/nips/SutskeverVL14} (RNN), attention models~\cite{DBLP:journals/corr/BahdanauCB14,DBLP:conf/nips/VaswaniSPUJGKP17} and BERT models~\cite{DBLP:conf/naacl/DevlinCLT19}, have made breakthroughs in representation learning for text~\cite{DBLP:journals/corr/Goldberg15c}, including word, sentence and paragraph representation. Significant improvements are observed in search ranking where the similarity of the query and candidate documents is computed based on the learned representations~\cite{DBLP:conf/cikm/HuangHGDAH13,DBLP:conf/kdd/ShanHJWYM16}. These models can be adopted in our algorithm for learning representations of free text in resumes and job posts. The most relevant works are \cite{acmtmis,DBLP:conf/cikm/RamanathIPHGOWK18,DBLP:conf/sigir/QinZXZJCX18,DBLP:conf/kdd/YanLSZZ019}. They all apply deep neural networks to learn representations for the resumes and jobs. Zhu et al.~\cite{acmtmis} feed the embedding of each word in the resume and the job post into two CNN models respectively to extract their representations. Cosine similarity is then applied to the extracted representations to calculate the matching score.
Qin et al.\cite{DBLP:conf/sigir/QinZXZJCX18} use RNN models with attention hierarchically to learn word-level, single ability-aware and multiple ability-aware representations of the job post and resume respectively. The multiple ability-aware representations of the job post and the resume are combined and then fed into a binary classification sub-network. Yan et al. \cite{DBLP:conf/kdd/YanLSZZ019} analyze the interview history of the candidate and the job post to infer their preferences. Memory networks are adopted to embed the preference in the representations of the candidate and the job post. Le et al.\cite{DBLP:conf/cikm/LeHSZ0019} define the intention of the candidate and the recruiter according to the actions including submitting a resume, accepting a resume and rejecting a resume. Next, they train the representations of the job post and the resume to predict the intention rates and matching score simultaneously. Bian et al.~\cite{bian-etal-2019-domain} learn the representations for jobs from some popular categories and then transfer the representations for jobs in other categories using domain adaptation algorithms. Our method uses deep learning models to learn the job and resume representations as well. However, different from existing methods, which learn representations from only the free text in the resume and job post, our method additionally learns representations of entities extracted from the whole resume and job post. The extracted entities are human-readable, which helps explain the matching results. Our method also exploits the actions, as in \cite{DBLP:conf/cikm/LeHSZ0019}, to infer the preference (or intention) of candidates and recruiters; nonetheless, we learn the representations by accumulating all actions of each candidate and each recruiter instead of a single action. \section{Methodology} \subsection{Problem Definition}\label{sec:problem} We denote the candidate set as $C=\{c_i\}_{i=1}^m$ and the job post set as $P=\{p_j\}_{j=1}^n$. A set of historical applications (i.e., the training dataset) denoted as $H=\{(c_i, r_{ik}, p_j, t_{ik})\}$ is given, where $r_{ik}$ is the resume of candidate $c_i$ submitted to the job post $p_j$; $k$ indicates that this is the $k$-th resume (i.e., $k$-th application) submitted by $c_i$ \footnote{In this paper, we process the resume submitted in every application individually, although a candidate may submit the same resume for different jobs.}; $t_{ik}\in\{0, 1\}$ is the truth label for the matching result (1 for accept and 0 for reject). Figure~\ref{fig:prob} illustrates the interactions between the candidates and job posts, where each edge represents a resume of a candidate submitted for a job. The same candidate may submit different resumes for different jobs. The person-job fit task is to predict the matching result for a new application. It is possible that the candidate and the job post of this new application never appear in $H$. In the rest of this paper, we use $r, c, p, t$ respectively for the resume, candidate, job post and truth label from an application when the context is clear. \subsection{Overview} Our overall idea is to learn an effective representation (i.e., feature vector) of the candidate and the job post, denoted as $f(c)$ and $g(p)$ respectively.
The matching score is then calculated as $\sigma(f(c)^T g(p))$, i.e., the inner product of the feature vectors normalized by the sigmoid function. For the feature vector of the candidate (resp. job post), we do a late feature fusion, i.e., concatenation, of the feature vectors learned from two aspects. First, we learn a feature vector of the content of the job post to represent the recruiter's explicit intention, i.e., the job requirements, denoted as $g_E(p)$ (resp. the resume $f_E(r)$ for the candidate's explicit skills). Second, we learn a feature vector to capture the implicit intention of the recruiter, denoted as $g_I(p)$, based on the previous applications he/she rejected and accepted. Similarly, we learn a feature vector to capture the implicit capabilities of the candidate, denoted as $f_I(c)$, based on the applications he/she submitted, including accepted and rejected ones. Two neural network models are trained separately for the explicit and implicit features using the binary cross-entropy loss over the training tuples $H$,
\begin{eqnarray}
L_E(c, r, p, t) &=& -t\log\sigma(f_E(r)^T g_E(p)) - (1-t)\log\left(1-\sigma(f_E(r)^T g_E(p))\right) \label{eq:loss_e}\\
L_I(c, r, p, t) &=& -t\log\sigma(f_I(c)^T g_I(p)) - (1-t)\log\left(1-\sigma(f_I(c)^T g_I(p))\right) \label{eq:loss_i}
\end{eqnarray}
By fusing, i.e., concatenating, the feature vectors from the two models, we get $f(c) = [f_E(r); f_I(c)]$ and $g(p)=[g_E(p); g_I(p)]$, where $r$ is the resume of the candidate $c$ submitted for job post $p$. A threshold for the matching score $\sigma(f(c)^T g(p))$ is tuned using a validation dataset for making the final decision, i.e., accept or reject.
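For concreteness, a minimal PyTorch sketch of this scoring and loss computation is given below; the function and tensor names are ours and the batch handling is an illustrative assumption, not the exact implementation.
\begin{verbatim}
import torch

def matching_score(f_c, g_p):
    # f_c = [f_E(r); f_I(c)] and g_p = [g_E(p); g_I(p)]:
    # fused feature vectors of shape (batch, d_E + d_I)
    return torch.sigmoid((f_c * g_p).sum(dim=-1))

def bce_loss(feat_a, feat_b, t):
    # binary cross-entropy on the sigmoid of the inner product,
    # as in the two loss equations above; t holds the 0/1 labels
    logits = (feat_a * feat_b).sum(dim=-1)
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, t.float())
\end{verbatim}
The same \texttt{bce\_loss} is used twice: once with $(f_E(r), g_E(p))$ to train the explicit model and once with $(f_I(c), g_I(p))$ to train the implicit model.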
\subsection{Learning Explicit Features}\label{sec:explicit} In this subsection, we learn feature vectors directly for the resume and the job post from one application. The learned features represent the explicit information described in the resume and job post. Both the job post and resume have many fields, as illustrated in Figure~\ref{fig:prob}. For example, there are (semi-)structured fields (in blue text) such as education and skills in the resume, and job title and skill requirements in the job post. There are also free text fields (in gray text) such as working experience in the resume and job description in the job post. Most existing works only learn features from the free text fields using deep learning techniques; that is to say, they skip the other fields, which in fact may contain valuable information. To address this issue, we propose to extract semantic entities from the whole resume (and job post) using machine learning models and rules. The semantic entities are then embedded and fed into an adapted DeepFM~\cite{DBLP:conf/ijcai/GuoTYLH17} to learn the correlations among the features, whose output feature vector is concatenated with the feature vector generated by a convolutional neural network over the free text fields. Figure~\ref{fig:explicit} shows the model structure. In this way, the final output feature vectors represent the resume and job post explicitly and comprehensively. Next, we shall introduce the algorithm to learn the features for resumes. The same algorithm is applied for job posts except for some hyper-parameters, which are stated specifically.
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig/explicit} \caption{Model structure for learning explicit features.} \label{fig:explicit} \end{figure}
\textbf{Semantic Entities} Given a resume, we preprocess it to parse the content into a JSON-format data structure, including education, working experience, skills, etc. For different fields, we apply different techniques to extract the semantic entities. First, regular expressions and rules are applied to extract simple entities such as age, gender, location, etc. Some entities have different mentions (e.g., abbreviations) that are not easy to recognize using rules. Second, neural network classifiers are trained to extract these complex entities, including university, major, position of the previous job, etc. For instance, we feed the text from the education field as the input to a model that predicts the university index from a predefined university list. The BERT model~\cite{DBLP:conf/naacl/DevlinCLT19} is fine-tuned for this task. Third, the remaining entities are derived based on domain knowledge\footnote{We created a knowledge base for this purpose. The knowledge base construction is out of the scope of this paper; therefore, we skip it.} and the entities from the previous two steps. For example, we get the tier (e.g., top 50) of a candidate's university by checking the university (from the second step) against a university ranking list, e.g., the QS ranking or USNews. After the three steps, we get a list of $s$ entities, among which some are categorical values, e.g., the university index, and some are real values, e.g., the duration of a candidate's previous job. By converting every categorical entity into a one-hot representation and standardizing the real-value entities, we get a sparse vector $\mathbf{x}$ of length $d_x$, which is fed into the adapted DeepFM model as shown in Figure~\ref{fig:explicit} (left). Different from the original DeepFM model that generates a single scalar value as the output of the FM Block, we produce a feature vector consisting of two parts: 1) $squeeze(\mathbf{w} * \mathbf{x})$, which multiplies the sparse vector with a weight vector $\mathbf{w}\in R^{d_x}$ (to be trained) element-wise and then removes the empty entries. Since there are $s$ entities in each resume, the output vector from $squeeze()$ has $s$ elements. This vector captures the first-order features of the entities. 2) $\sum_{i\neq j}\mathbf{V}_i * \mathbf{V}_j \mathbf{x}_i \mathbf{x}_j$, where $\mathbf{V}_i\in R^{d_{fm}}$ is the dense embedding vector (to be trained), as shown in the shaded area, of the $i$-th element in $\mathbf{x}$, and $*$ denotes the element-wise product. The summation result is a vector of length $d_{fm}$. This vector represents the second-order features of the semantic entities. For the Linear Blocks in Figure~\ref{fig:explicit}, we follow the structure in the original DeepFM model. The input is a dense vector concatenated from all the embedding vectors of the $s$ entities. Finally, we concatenate the feature vectors from the FM Block and Linear Blocks as the output from the DeepFM side.
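The adapted FM Block can be summarized by the following sketch (PyTorch; representing the sparse vector by the indices and values of its $s$ active entities is our assumption). It uses the standard FM identity $\sum_{i\neq j}\mathbf{V}_i * \mathbf{V}_j x_i x_j=(\sum_i \mathbf{V}_i x_i)^{2}-\sum_i (\mathbf{V}_i x_i)^{2}$, where the squares are element-wise.
\begin{verbatim}
import torch

class FMBlock(torch.nn.Module):
    # Sketch of the adapted FM Block: outputs a feature vector
    # (s first-order terms + d_fm second-order terms), not a scalar.
    def __init__(self, d_x, d_fm):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(d_x) * 0.01)
        self.V = torch.nn.Embedding(d_x, d_fm)

    def forward(self, idx, val):
        # idx: (batch, s) indices of the s active entities in x
        # val: (batch, s) their values (1 for one-hot slots,
        #      standardized values for real-valued entities)
        first_order = self.w[idx] * val             # squeeze(w * x)
        vx = self.V(idx) * val.unsqueeze(-1)        # V_i x_i
        second_order = (vx.sum(dim=1).pow(2)
                        - vx.pow(2).sum(dim=1))     # (batch, d_fm)
        return torch.cat([first_order, second_order], dim=1)
\end{verbatim}
With $s=264$ entities per resume and $d_{fm}=7$ (the values used in our experiments), the FM Block outputs a vector of length 271, which is then concatenated with the output of the Linear Blocks.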
\textbf{Free Text} We follow \cite{acmtmis} to extract explicit features from the free text fields. The model is a convolutional neural network (CNN) with two 2D convolution blocks and a linear layer. Each sentence from the free text fields is truncated or padded into a fixed length. A maximum number of sentences is also set, so that a fixed number of sentences is obtained via truncation and padding (at the sentence level). Therefore, the input to the CNN model is a matrix (one row per sentence) for each resume. The words are embedded through a dense embedding layer and the result (a 3D tensor) is fed into the convolution blocks. \textbf{Training} The output feature vectors from the DeepFM model and the CNN model are concatenated and fed into a linear layer to produce the explicit feature vector of length $d_E$, i.e., $f_E(r)$ and $g_E(p)$ for a resume and a job post respectively. The whole model in Figure~\ref{fig:explicit} is trained end-to-end using back-propagation and the mini-batch stochastic gradient descent algorithm (SGD) with the loss function defined in Equation~\ref{eq:loss_e}. \subsection{Learning Implicit Features}\label{sec:implicit} \textbf{Job Post} A job post is written to describe the intention of the job recruiter; however, the text may not capture the intention comprehensively. For example, the recruiters may carelessly omit some skills, or dynamically change their expectation if the received resumes are of very high (or low) quality. Nonetheless, previously accepted and rejected resumes can help infer these implicit intentions. Therefore, we exploit the application history to learn the implicit features for each job post. Given a job post $p_j$ from an application, we extract all applications to $p_j$ before this one from $H$ as $H(p_j)=\{(c_i, r_{ik}, p_j, t_{ik})\}$. An LSTM model\footnote{We leave the testing of other advanced RNN models, e.g., Bi-directional LSTM, as future work.} is applied to learn the features from $H(p_j)$, as shown in Figure~\ref{fig:implicit} (left side). Specifically, we propose a novel input encoding scheme for each tuple in $H(p_j)$. For each $(c_i, r_{ik}, p_j, t_{ik})\in H(p_j)$, we concatenate the explicit feature vectors as $[f_E(r_{ik})$; $onehot(t_{ik})$; $g_E(p_j)]$ $\in R^{2d_E+2}$, where $onehot(t_{ik})$ is the one-hot representation of the result, i.e., $01$ for rejecting or $10$ for accepting. By feeding in the explicit features of the resumes and the current job post as well as the decision (i.e., accept or reject), we expect the LSTM model to learn the features representing the implicit intention of $p_j$. These encoded vectors are fed into the LSTM model in the order in which they were reviewed by the recruiter. The last hidden state vector of the LSTM model is transformed by a linear layer with $d_I$ neurons, whose output is assigned to $g_I(p_j)$. \textbf{Candidate} Since a candidate may change the resume over time and submit different versions for different job posts, it is more reasonable to learn the implicit intention of the candidate instead of every resume. The same model structure is applied to learn the implicit features for each candidate, as shown in Figure~\ref{fig:implicit} (right side). Note that each job $p_j$ applied to by candidate $c_i$ is encoded into the input to the LSTM as $[f_E(r_{ik});$ $onehot(t_{ik});$ $g_E(p_j)]\in R^{2d_E+2}$, where $r_{ik}$ is the resume submitted for $p_j$. The last hidden vector of the LSTM model is transformed by a linear layer with $d_I$ neurons to generate $f_I(c_i)$.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig/implicit} \caption{Model structure for learning implicit features.} \label{fig:implicit} \end{figure}
\textbf{Training} The whole model in Figure~\ref{fig:implicit} is trained end-to-end by back-propagation and SGD with Equation~\ref{eq:loss_i} as the loss function. For each training tuple (i.e., application) $(c_i, r_{ik}, p_j, t_{ik})\in H$, we collect all previously applied jobs for $c_i$ and all previously received resumes for $p_j$ to construct the inputs for the two LSTM models respectively; the construction is sketched below.
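As a minimal sketch (PyTorch; the helper name, the dummy data, and the use of \texttt{nn.LSTM} are illustrative assumptions), the job-post side encoding and feature extraction look as follows, with $d_E=128$ and $d_I=64$ as in our experiments.
\begin{verbatim}
import torch

def encode_history(f_E_resumes, labels, g_E_post):
    # f_E_resumes: (T, d_E) explicit features of the T reviewed resumes
    # labels:      (T,) decisions, 1 = accept, 0 = reject
    # g_E_post:    (d_E,) explicit feature of the current job post
    onehot = torch.nn.functional.one_hot(labels.long(), 2).float()
    post = g_E_post.expand(f_E_resumes.size(0), -1)
    # one row per application: [f_E(r); onehot(t); g_E(p)]
    return torch.cat([f_E_resumes, onehot, post], dim=1)  # (T, 2*d_E+2)

d_E, d_I, T = 128, 64, 5
lstm = torch.nn.LSTM(input_size=2 * d_E + 2, hidden_size=64,
                     batch_first=True)
proj = torch.nn.Linear(64, d_I)

# dummy history of T previously reviewed applications, in review order
seq = encode_history(torch.randn(T, d_E),
                     torch.tensor([1, 0, 0, 1, 0]),
                     torch.randn(d_E)).unsqueeze(0)  # add batch dim
_, (h_n, _) = lstm(seq)
g_I = proj(h_n[-1])  # implicit feature g_I(p_j), length d_I
\end{verbatim}
The candidate side is symmetric, with each previously applied job encoded as $[f_E(r_{ik}); onehot(t_{ik}); g_E(p_j)]$.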
To facilitate fast training, we truncate or pad the inputs to a fixed length of 20 (left) and 5 (right) respectively. These numbers are derived with reference to the average number of resumes per job and the average number of jobs per candidate. If padding is applied, the one-hot vector for the decision is $00$ (for unknown). For the case where there is no historical data for $p_j$ (or $c_i$), all the input resumes are generated via padding. Note that such cases make up a small portion of all tuples, as they happen only when the job post or the candidate is new to the system. Although the output then does not carry any extra (intention) information, we still concatenate it with the explicit features so that all job posts and candidates have feature vectors of the same length, i.e., $d_E+d_I$. With that, we can compute a matching score between any job post and candidate. \section{Experiment} \subsection{Experiment setup} \textbf{Dataset} Both resumes and job posts are sensitive data. As far as we know, there is no public data available for the person-job fit problem. Existing papers use private data provided by commercial recruitment platforms. In our experiment, we use the real data collected by our recruitment platform from January 2019 to October 2019. In total, there are about 13K (K=1000) job posts, 580K candidates and 1.3 million resumes. Each application is a data sample (i.e., a tuple defined in Section~\ref{sec:problem}). We order the samples according to the time when the recruiter reviews the application, which is in accordance with the input order of the LSTM models. The last 100K samples are used for testing and the second last 100K samples are used for validation; the rest (about 1.1 million) are used for training. \textbf{Baselines} We compare our feature fusion solution, called PJFFF (Person-Job Fit based on Feature Fusion), with the following baseline methods, including traditional machine learning models, a collaborative filtering algorithm and three deep-learning-based methods. More details of the last three methods are given in Section~\ref{sec:related-person-job}. \begin{itemize} \item Logistic Regression (LR), Factorization Machine (FM), Multi-layer Perceptron (MLP) and LightGBM~\cite{DBLP:conf/nips/KeMFWCMYL17}. For each resume and job post pair (from one application), we feed the sparse vector $\mathbf{x}$ from Section~\ref{sec:explicit} as the input to these models. \item LightGCN~\cite{he2020lightgcn} proposes a graph convolutional neural network to implement collaborative filtering. The embedding vectors of the candidate nodes and job nodes are learned and compared (via inner product) for recommendation. \item PJFNN~\cite{acmtmis} uses two CNN models to learn the features of the free text in the resume and job post respectively. The structure is the same as that for learning the explicit features of the free text in Figure~\ref{fig:explicit} (right side). \item APJFNN~\cite{DBLP:conf/sigir/QinZXZJCX18} applies attention modelling recursively to learn ability-aware representations for the resume and job post. \item JRMPM~\cite{DBLP:conf/kdd/YanLSZZ019} also exploits the application history. It uses a memory network to embed the preference of the candidate and the job recruiter into their representations. \end{itemize} LR, FM, MLP and our model are trained using PyTorch; we use the open-source implementations of LightGBM and LightGCN. The code for PJFNN, APJFNN and JRMPM was kindly shared by the authors of \cite{bian-etal-2019-domain,DBLP:conf/kdd/YanLSZZ019}.
\textbf{Evaluation Metrics} Following previous works~\cite{DBLP:conf/kdd/YanLSZZ019,DBLP:conf/sigir/QinZXZJCX18}, we evaluate the accuracy, AUC, precision, recall and F1 of each method. We tune the threshold for generating the decision (accept or reject) over the validation dataset to maximize the F1 metric. The same threshold is applied for computing the accuracy metric. According to the business requirement, we tune another threshold to fix the recall at 0.8 and then compute precision@recall=0.8. \textbf{Hyper-parameter Setting} In our experiment, we extract $s=264$ entities for each resume and $s=57$ entities for each job post. The length of the sparse feature vector $d_x$ is about 37K for a resume and 1.6K for a job post. For the adapted DeepFM model in Figure~\ref{fig:explicit}, we set the embedding vector size to $d_{fm}=7$ and the size of the last linear layer to $d_E=128$. The CNN model in Figure~\ref{fig:explicit} has the same structure as that in \cite{acmtmis}. The default size of the hidden state vector and the linear layer in the LSTM models is set to 64. A hyper-parameter sensitivity analysis for these is conducted in Section~\ref{sec:hyper}. To train the models in Figure~\ref{fig:explicit} and Figure~\ref{fig:implicit}, we use mini-batch Adam~\cite{kingma2017method} as the optimizer with batch size 512. The learning rate is set to 0.005 and 0.001 for training the models in Section~\ref{sec:explicit} and Section~\ref{sec:implicit} respectively. The weight decay (coefficient of L2 regularization) is $10^{-5}$ and $10^{-4}$ for the two models respectively. \subsection{Performance Comparison} The performance metrics of all methods are listed in Table~\ref{tab:perf}. We can see that our solution PJFFF outperforms all baseline methods across all evaluation metrics by a large margin. PJFNN, APJFNN and JRMPM are the methods most relevant to our solution. However, they fail to read and process the non-free-text fields in resumes and job posts, which may actually contain valuable information capturing the explicit intention of the candidates and recruiters. Moreover, PJFNN and APJFNN ignore the historical applications of each candidate and each job post. In contrast, JRMPM and PJFFF both utilize the historical information to infer the implicit intention, and therefore achieve better performance. LightGBM is the second best method; it even outperforms the recently proposed deep-learning-based methods, namely PJFNN, APJFNN and JRMPM. In fact, LR, FM and MLP are also comparable to these deep-learning-based methods. We notice that in the APJFNN~\cite{DBLP:conf/sigir/QinZXZJCX18} and JRMPM~\cite{DBLP:conf/kdd/YanLSZZ019} papers, these simple methods perform much worse than APJFNN and JRMPM. We think the major reason is that we are using different input feature vectors. In the APJFNN and JRMPM papers, the embeddings of all words from the free text fields of a resume and a job post are averaged respectively and used as the inputs to GBDT. In our experiment, we feed the sparse entity feature vector into these models. The result shows that our entity feature vector is effective in representing the resume and job post. LightGCN has the worst performance. Although it has been shown to be more effective than other collaborative filtering approaches for recommendation problems, LightGCN is not a good solution for person-job fit. We think this is mainly because LightGCN does not use the content of the resume and job post at all.
It learns the embedding vectors of each candidate and each job post merely based on the interactions among the candidate nodes and job post nodes in the graph. Moreover, some nodes have no connections to others. In other words, for new candidates and job posts, LightGCN has the cold-start issue, which is a common challenge for collaborative filtering methods. For our method, although we cannot get any implicit intention information for such cases, we still have the explicit features.
\begin{table}[] \centering \caption{Performance comparison for person-job fit.} \begin{tabular}{|l|l|l|l|l|} \hline Method& AUC & Accuracy &Precision & F1 \\ &(\%) &(\%) & @Recall=0.8 (\%)&(\%) \\ \hline LR & 88.6 & 87.6 & 48.5 & 50.2 \\ FM & 90.0 & 88.7 & 51.2 & 60.9 \\ MLP & 91.2 & 89.4 & 55.9 & 64.9 \\ LightGBM& 91.7 & 90.2 & 56.9 & 67.6\\ LightGCN & 64.1 & 87.5 & 12.7 & 36.5\\ \hline PJFNN & 89.6 & 89.3 & 46.0 & 64.2 \\ APJFNN & 90.4& 90.8& 48.7& 68.0 \\ JRMPM & 91.3 & 88.7 & 51.8 & 66.7\\ \hline PJFFF & \textbf{95.3}& \textbf{92.9}& \textbf{73.3}& \textbf{77.1}\\ \hline \end{tabular} \label{tab:perf} \end{table}
\subsection{Ablation Study} Our PJFFF fuses two sets of features, namely the explicit and implicit features, where the explicit features are generated by the adapted DeepFM and the CNN. Consequently, PJFFF has multiple variations obtained by combining these components differently. In this section, we evaluate four variations to study the contribution of each component. \begin{enumerate} \item $f_E\&g_E$ (entity): This variation uses the adapted DeepFM model only to learn the explicit features, i.e., the left part of Figure~\ref{fig:explicit}. The free text data is ignored. \item $f_E\&g_E$ (both): This variation learns the explicit features using both DeepFM and the CNN, i.e., Figure~\ref{fig:explicit}. No implicit features are learned. \item $f\&g$ (entity): This variation fuses the explicit features generated by DeepFM and the implicit features, without using the free text data. \item $f\&g$ (both): This is the complete PJFFF model, which fuses the implicit features and the explicit features over the whole resume and job post. \end{enumerate}
\begin{table}[] \centering \caption{Ablation study of our PJFFF.} \begin{tabular}{|l|l|l|l|l|} \hline Method& AUC & Accuracy &Precision & F1 \\ & (\%)& (\%)& @Recall=0.8(\%)&(\%) \\ \hline $f_E \& g_E$ (entity) & 91.4 & 89.6 & 56.4 & 65.7\\ $f_E \& g_E$ (both) & 94.2 & 91.0& 65.6& 73.1\\ $f \& g$ (entity) & 93.1 & 91.5 & 63.3 & 71.3\\ $f \& g$ (both) & 95.3 & 92.9& 73.3 & 77.1 \\ \hline \end{tabular} \label{tab:ablation} \end{table}
The ablation study results are listed in Table~\ref{tab:ablation}. First, the $f_E\&g_E$ (entity) variation has the worst performance. Nevertheless, although it only learns the explicit features using DeepFM over the semantic entities, its performance is comparable to that of the baseline methods in Table~\ref{tab:perf}. The result confirms that the semantic entities carry valuable information. Second, comparing $f_E\&g_E$ (both) with $f_E\&g_E$ (entity), we see a significant improvement, which is due to the contribution of the CNN model over the free text fields. A similar improvement is observed when comparing $f \& g$ (entity) and $f \& g$ (both). Third, when we compare the rows for $f_E\&g_E$ (entity) and $f \& g$ (entity) (resp. $f_E\&g_E$ (both) and $f \& g$ (both)), the performance is improved, indicating that the implicit features make positive contributions.
To conclude, all components involved in PJFFF contribute positively to the final performance. By combining them together, we get the best model. \subsection{Hyper-parameter Sensitivity Study}\label{sec:hyper} In this section, we study the sensitivity of two hyper-parameters of PJFFF pertaining to the model structure, namely the hidden state size of the LSTM layer and the size of the last linear layer in Figure~\ref{fig:implicit}. Other model structure hyper-parameters are mainly set following existing papers, and the training hyper-parameters are tuned over the validation dataset. Figure~\ref{fig:sensitivity} shows the results when we vary these sizes. We can see that the performance metrics lie almost on horizontal lines, indicating that PJFFF is not sensitive to the two hyper-parameters.
\begin{figure}[h] \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{fig/hidsize} \caption{Size of the linear layer.} \label{fig:hidsize} \end{subfigure} \hfill \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{fig/linearsize} \caption{Size of the hidden state.} \label{fig:lsize} \end{subfigure} \caption{Sensitivity analysis of hyper-parameters.} \label{fig:sensitivity} \end{figure}
\begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{fig/case} \caption{From top to bottom: Job post, accepted resume and rejected resume.} \label{fig:case} \end{figure*}
\subsection{Case Study} Case studies are conducted to inspect and verify our solution from another perspective. Figure~\ref{fig:case} shows two application cases for the same job. The left side illustrates the job post and the resumes\footnote{We reconstruct the job post and resumes from the parsed JSON files and present a subset of the fields for better illustration.}. The right side displays some extracted semantic entities. The matching scores generated for Resume 1 and Resume 2 are 0.78 and 0.04 respectively. We can see that our solution is able to rank the accepted resume and the rejected resume correctly. To understand why the two resumes get different scores, we can refer to the semantic entities, which clearly indicate the requirements of this job as well as the experience and capabilities of the candidates. We can see that the entities of Resume 1 match those of the job post well. For example, the candidate worked on algorithm development in two jobs, and he/she has the deep learning and AI skills required by the job post. For Resume 2, however, the semantic entities are less relevant to those of the job post. To conclude, these semantic entities are informative for interpreting the results generated by our method. For example, human resource (HR) managers who do not have expert knowledge of the position can refer to the semantic entities to understand the model's decision. \subsection{Online Deployment} Our solution is developed in two stages, where the first stage learns the explicit features using the semantic entities as the input. This part has been deployed for online serving for one year. Some special optimization and engineering work was done to improve the efficiency and reduce the cost of large-scale deployment. Specifically, considering that GPUs are expensive, we deploy our method on CPU machines. To improve the efficiency, we re-implemented all pre-processing and semantic entity extraction steps in the Go language, which reduces the inference time per sample by 50\%.
We also applied model distillation~\cite{DBLP:journals/corr/abs-1903-12136} to replace the BERT model for extracting the complex semantic entities. After that, the inference time is reduced by another 50\%, to 0.2-0.3 seconds per sample. The second stage adds the remaining components introduced in this paper. These additional components increase the time complexity; we are optimizing their efficiency and will deploy them later. \section{Conclusion} In this paper, we have introduced a feature-fusion-based method, called PJFFF, for the person-job fit problem. Our method differs from existing solutions in two aspects. First, instead of purely relying on deep learning models to process the free text fields in a resume and job post, PJFFF extracts semantic entities from the whole resume and job post in order to learn comprehensive explicit features. Second, we propose a new scheme to encode the historical application data of each candidate and each job post, which is fed into LSTM models to learn the implicit intentions of candidates and recruiters respectively. Finally, the two sets of features are fused together via concatenation to produce an effective representation. The experimental study confirms the superiority of our method over existing methods and verifies the contributions of each component of PJFFF. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{intro} Gust fronts are a typical phenomenon associated with convective outflows from thunderstorms that are growing in a dry environment \citep{droegemeier1985three,mueller1987dynamics,weckwerth1992initiation}. The evaporative cooling of rain and the associated downdrafts initiate the cold pool, which introduces cold and moist air into the dry and sub-saturated boundary layer. As the moist air interfaces with the dry surrounding air, the density differences between the two air masses create the gust front. The speed of the wind gusts may attain values of more than 15 m s$^{-1}$, associated with temperature drops of more than 6 to 10 K due to the intrusion of the cold air (see \citet{hutson2017using} and references therein). The gust front propagates as the cold pool expands, and sometimes secondary gust fronts are also generated by the storm outflows \citep{doviak1984atmospheric}. There can also be multiple gust fronts from other storms in the neighbourhood. Gust fronts play an important role in the formation of new convection and the organization of convection \citep{kingsmill1995convection}. The understanding of gust fronts and their interaction with the surface is important for various applications, such as nowcasting, pollution dispersion, and the assessment of wind damage to structures. Convective outflows and gust fronts can influence the surface fluxes and turbulence in the boundary layer \citep{oliveira2020planetary}. The signatures of gust fronts are imprinted on the surface measurements in the form of a sudden increase in the wind speed followed by a sharp drop in the temperature. This typical gust front feature bears a remarkable resemblance to the temperature micro-fronts, commonly observed and extensively studied in the context of a nocturnal boundary layer over the mid latitudes \citep{blumen1999analysis,prabha2007low,mahrt2019microfronts}. However, to the best of our knowledge, there are no systematic investigations available about the effect of wind gusts on the turbulent characteristics of temperature fluctuations in the surface layer of a tropical convective boundary layer. The surface layer is a generalization of the inertial layer of unstratified wall-bounded flows that includes the effect of buoyancy, where the effects of surface roughness are no longer important and the modulations by the outer eddies are not too strong \citep{barenblatt1996scaling,davidson2015turbulence}. In a convective surface layer flow, the temperature fluctuations behave like an active scalar, as they preferentially couple with the velocity fluctuations in the vertical direction to transport heat and drive the flow \citep{tennekes1972first}. Along with that, the turbulent fluctuations in temperature also share characteristics akin to those in laboratory Rayleigh-B\'{e}nard convection experiments \citep{adrian1986turbulent,balachandar1991probability,wang2019turbulent}. The large-eddy simulations of \citet{khanna1998three} and the aircraft experiments of \citet{williams1992composite,williams1993interactions} demonstrate that cellular convective structures similar to those found in the laboratory experiments also exist in a convective surface layer. These studies illustrate that the positive temperature fluctuations aggregate along the cell edges, whereas the negative fluctuations reside within the cell center. Apart from that, in a convective surface layer the temperature fluctuations are also unevenly distributed due to their non-Gaussian nature in the local free-convection limit.
This uneven distribution is caused by the intermittent bursting of warm plumes rising from the ground, interspersed with relatively more frequent quiescent cold plumes bringing well-mixed air from aloft \citep{chu1996probability,liu2011probability,garai2013interaction,lyu2018high}. As a consequence, the temperature fluctuations in a convective surface layer display intermittent characteristics. Intermittency is defined as a property of a signal which is quiescent for much of the time and occasionally bursts into life, with unexpectedly high values more common than in a Gaussian signal \citep{davidson2015turbulence}. In general, there are two aspects which characterize an intermittent turbulent signal \citep{bershadskii2004clusterization,sreenivasan2006clustering,cava2019submeso}. The first of these is related to an uneven distribution of quiescent versus energetic states in space or time, caused by the aggregation or clustering of the turbulent structures. The second is related to the amplitude variability associated with the signal. However, in a convective surface layer, systematic investigations of the intermittent characteristics of temperature fluctuations that separate the clustering and amplitude variability effects are quite rare. In fact, there are only two available studies, by \citet{cava2009effects} and \citet{cava2012role}, where such an analysis has been carried out. By employing a telegraphic approximation (TA) of the temperature signal, they showed that the temperature fluctuations displayed significant clustering due to the presence of the coherent ramp-cliff patterns. Nevertheless, they also discovered that the clustering effect in temperature was significantly dependent on the surface heterogeneity, such that the temperature was more clustered within a canopy sublayer rather than over bare soil. They attributed this difference to the hypothesis that large-scale structures such as ramps are less distorted in canopy sublayers when compared to their atmospheric counterparts. Therefore, it is prudent to ask, for a convective surface layer: \begin{enumerate} \item How does the presence of a gust front modify the aggregation (clustering) properties of the turbulent structures related to the temperature fluctuations? \item Does this modification occur across all the turbulent scales or is there any particular threshold beyond which the effect ceases to exist? \end{enumerate} These are the primary research questions which motivate this study. In our course of investigation, we employ novel statistical methods \citep{bershadskii2004clusterization,poggi2009flume,chowdhuri2020persistence} to carry out an in-depth analysis of the turbulent structures of temperature variations associated with a gust front event which occurred during the afternoon of 22-September-2018. This day has been chosen for our analysis because of the availability of simultaneous and co-located observations from a Doppler weather radar and from a micrometeorological tower. Together, these measurements present a unique opportunity to scrutinize the spatial morphology of the wind gust as it approached the location of the tower and to dissect the associated effects on the turbulent temperature characteristics. The present article is organized into three sections. In Sect. \ref{Data}, we describe the dataset; in Sect. \ref{results} we present and discuss the results; and lastly in Sect.
\ref{concl} we conclude our findings and present future research directions. \section{Dataset description} \label{Data} To investigate the turbulent characteristics, we have used the micrometeorological dataset from a 50-m instrumented tower erected over a non-irrigated grassland in Solapur, India (17.6$^{\circ}$ N, 75.9$^{\circ}$ E, 510 m above mean sea level). This dataset was collected during the Integrated Ground Observational Campaign (IGOC), as a part of the fourth phase (2018-2019) of the Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX). The fourth phase of CAIPEEX was conducted over the arid rain-shadow region of the Western Ghats with a dense observational network. Further details about the CAIPEEX program and its objectives can be found in \citet{kulkarni2012cloud} and \citet{prabha2011microphysics}.
\begin{figure*}[h] \centering \resizebox{.6\textwidth}{0.8\textwidth}{\includegraphics {Figure_1.jpg}} \vspace{3mm} \caption{A partial view of the 50-m instrumented tower, looking towards the South. The booms where the instruments are mounted face towards the West. The fetch area in the South-West sector of the tower includes scattered trees and bushes, representing a typical site.} \label{fig:1} \end{figure*}
Figure \ref{fig:1} shows a partial view of the 50-m tower, along with its surroundings. The terrain was relatively flat at the tower location, populated with grassland and scattered trees and bushes in the South-West sector, with an average roughness height of about 10 cm in the southerly direction. Note that a small one-storeyed building was located on the North side (not seen in Fig. \ref{fig:1}) of the tower, at a distance of approximately 100 m. For this reason, the data were only used when the wind direction was from the South-West sector. The tower was instrumented with sonic anemometers (Gill Windmaster-Pro) at four levels, corresponding to heights above the ground of $z=$ 4, 8, 20, and 40 m. The data from these four sonic anemometers were sampled continuously at 10-Hz frequency and divided into half-hour intervals. The time-synchronization among the four levels was ensured with the help of GPS clocks supplied by the manufacturer. Apart from that, five all-in-one weather sensors (Gill MaxiMet GMX600) were also installed on the tower at $z=$ 2, 6, 10, 30, and 49 m. These sensors measured the horizontal wind speed and direction, temperature, relative humidity, pressure, and rainfall. The data from the all-in-one weather sensors were logged into a CR3000 data-logger (Campbell Scientific Inc.) at both 1-min and 30-min intervals. Apart from the tower observations, a secondary dataset from a C-band (5.625 GHz) polarimetric Doppler weather radar has also been used in this study to establish the spatial features associated with the passage of the wind-gust event. The radar was positioned on the roof of a four-storeyed building, situated at a distance of around 1.5 km towards the North from the location of the tower. It had a range of 250 km, and we used an image in the Plan Position Indicator (PPI) scan at an elevation angle of 1.5$^{\circ}$ to describe the gust front event. The gust front signature could be identified thanks to the resolution of the radar in the near range of 50 km, as the scans were configured with a 1 $\mu$s pulse-width corresponding to a 150-m range resolution. We used the radar reflectivity, which is proportional to the sixth power of the droplet diameter, within a radius of 50 km around Solapur.
A reflectivity of 40 dBZ corresponds to a rain rate of 10 mm hr$^{-1}$, identifying the heavy rain echoes. Our specific interest was to investigate the formation and propagation of the gust front in association with any deep convective clouds over the region. \section{Results and discussion} \label{results} We begin by discussing the general meteorological characteristics associated with the gust front, as detected from the Doppler C-band radar. Subsequently, we present the results related to its effect on the surface measurements, as obtained from the sensors on the 50-m instrumented micrometeorological tower. Later, we show other relevant results from persistence and clustering analyses to indicate any difference in the statistical properties of the turbulent structures as the gust front moved past the tower location. Plausible physical interpretations are also provided during the course of our investigation. \subsection{Meteorological characteristics of the gust front} \label{gust}
\begin{figure}[h] \centering \hspace*{-0.6in} \includegraphics[width=1.3\textwidth]{Figure_2.jpg} \caption{Radar reflectivities are shown as dBZ contours, obtained from a Doppler C-band radar at the lowest elevation within 50 km of Solapur, during the passage of a gust front on 22-September-2018. The colour bar on the right hand side of the panels shows the reflectivity values in dBZ. The dotted lines indicate the observed gust front and the `+' symbols specify the location of the 50-m micrometeorological tower. At the top left corner of each panel, the local time of the snapshot is shown.} \label{fig:2} \end{figure}
Figure \ref{fig:2} shows the reflectivity from the C-band Doppler radar for a 50-km range at an elevation angle of 1.5$^{\circ}$, corresponding to six different PPI scans at six different local times on 22-September-2018 (GMT+05:30). Note that each of the plots shown in Fig. \ref{fig:2} points towards the North. From the first panel of Fig. \ref{fig:2} (12:47 PM), we can notice the presence of a few isolated convective cells in the South-West sector, identified as the contours of reflectivity greater than 30 dBZ. Additionally, several shallow non-precipitating cumulus clouds with reflectivity below 20 dBZ are also noted in and around the study area. However, as time progressed (12:47 PM to 14:35 PM, first to fourth panels), these isolated convective cells formed wide clusters of deep-convective clouds accompanied by heavy precipitation (contour values of more than 40 dBZ). Since the precipitation occurred over a wide area, it cooled the surroundings and resulted in clear, cloud-free areas around the clusters. This interpretation is supported by the observations, where the majority of the areas around the deep convective clusters in the South-West sector were mostly cloud-free (see the third and fourth panels). These clear areas were thus associated with the cold air which was pushing forward in the boundary layer and advancing towards the tower location (shown with a `+' sign in Fig. \ref{fig:2}). The boundaries between the clear and cloudy areas are designated with dotted lines in Fig. \ref{fig:2} (see the second, third, and fourth panels), which indicate the interface between the cold and the warm air. Apart from that, a convergence area can also be noted over the East and North-East sector of the tower location around 14:35 PM (see the fourth panel in Fig. \ref{fig:2}).
Nevertheless, at the later times of 15:42 and 16:40 PM, the deep-convective clusters were no longer present in the proximity of the tower location (see the fifth and sixth panels in Fig. \ref{fig:2}). In summary, we interpret these observations from Fig. \ref{fig:2} as evidence that the gust front, which originated from the outflows of several deep-convective clouds, approached the tower location starting approximately around 14:00 PM. After the passage of the gust front, clear conditions prevailed over the tower location. Given the uniqueness of this event, it presents an opportunity to investigate its associated effects on the surface measurements, with a special focus on the consequent changes in the turbulent characteristics. We present results related to these aspects in the subsequent sections. \subsubsection{Surface measurements from all-in-one weather sensors} \label{all-in-one}
\begin{figure}[h] \centering \hspace*{-1in} \includegraphics[width=1.3\textwidth]{Figure_3.jpg} \caption{The 24-hr time series of the 1-min data from the five all-in-one weather sensors ($z=$ 2, 6, 10, 30, and 49 m) are shown for the variables: (a) horizontal wind speed (${U}$), (b) wind direction from the North (${\theta}$), and (c) dry-bulb temperature ($T$), corresponding to 22-September-2018. The grey shaded regions in all the panels indicate the period between 14:00-16:10 PM (local time), where a sudden decrease in the dry-bulb temperature is observed, with a near-constant wind direction from the South-West sector ($180^{\circ}<\theta<270^{\circ}$). The legend in panel (c) describes the different lines.} \label{fig:3} \end{figure}
Figure \ref{fig:3} shows the 24-hr time series of the 1-min data from the five all-in-one weather sensors ($z=$ 2, 6, 10, 30, and 49 m) corresponding to the horizontal wind speed (${U}$), the wind direction from the North (${\theta}$), and the dry-bulb temperature ($T$). By analysing the radar images in Fig. \ref{fig:2}, we inferred that a gust front approached the tower location starting from around 14:00 PM. To investigate its effect on the surface measurements, a grey shaded area is shown in Fig. \ref{fig:3} corresponding to the period between 14:00-16:10 PM (local time). One may notice from Figs. \ref{fig:3}a and c that, during this period, there was an increase in the horizontal wind speed approximately beyond 15:00 PM, with a sudden drop of around 4$^{\circ}$C in the dry-bulb temperature. Remarkably, this drop in $T$ happened across all five heights on the tower, from $z=$ 2 m to $z=$ 49 m. Before the occurrence of this event, however, the temperature increased with the time of day, as would be expected in a canonical convective surface layer as a result of surface heating. On the other hand, the wind direction stayed approximately constant within the grey shaded region (14:00-16:10 PM), with values between $180^{\circ}<\theta<270^{\circ}$, indicating that the wind approached the tower from the South-West sector. This observation is consistent with the radar images, where we detected that the movement of the gust front was from the South-West sector. Noticeably, beyond 16:10 PM the surge in the horizontal wind speed subsided, with values decreasing to about 4 m s$^{-1}$ at all five heights. This decreasing trend in $U$ lasted up to 19:00 PM, after which another surge was observed in the wind speed.
In a nutshell, the above results imply that the gust front which passed the tower location impacted the horizontal wind speed and dry-bulb temperature at all five heights, continuing almost up to 16:10 PM. Moreover, it was fortuitous that the wind direction during this period stayed within the South-West sector, clear of any obstacles in the incoming flow (see Sect. \ref{Data}). This allowed us to investigate the associated effect on the characteristics of wind speed and temperature, as obtained from the high-frequency sonic anemometer measurements. \subsubsection{High-frequency measurements from the sonic anemometers} \label{sonic_meas}
\begin{figure}[h] \centering \hspace*{-1in} \includegraphics[width=1.3\textwidth]{Figure_4.jpg} \caption{The 10-Hz time series of the horizontal wind speed are shown in panel (a) from the four sonic anemometers ($z=$ 4, 8, 20, and 40 m) corresponding to the period 14:00-16:10 PM. The bottom panels (b and c) show the respective probability density functions (PDF's) of the horizontal wind speed and acceleration during this period. The legend representing the colours associated with each height is shown in panel (a).} \label{fig:4} \end{figure}
Figure \ref{fig:4}a shows the time series of the horizontal wind speed, as obtained from the 10-Hz sonic anemometer measurements at four different heights, during the period 14:00-16:10 PM. In general, the high-frequency time series of $U$ presented in Fig. \ref{fig:4}a show similar characteristics as in Fig. \ref{fig:3}a, with a sudden increase being visible at times beyond 15:00 PM. Nevertheless, given the fine temporal resolution, the turbulent characteristics of $U$, such as the probability density functions (PDF's), can also be obtained from these measurements. Figures \ref{fig:4}b and c show the PDF's of $U$ and of their temporal increments $\Delta U$ (acceleration), corresponding to the said period for all four heights. A systematic shift towards larger values is observed in the peak positions of the $U$ PDF's as the observation height increases. Apart from that, there is also an indication of a secondary peak in the $U$ PDF's for the highest measurement level, i.e. at $z=$ 40 m. On the contrary, for the increment PDF's in Fig. \ref{fig:4}c, no such variation with height is observed. Note that the acceleration statistics are related to the small-scale (short-lived) eddies, given their association with the longitudinal gradient and hence with dissipation \citep{chu1996probability}. Therefore, the height-invariance of the acceleration PDF's observed in Fig. \ref{fig:4}c implies that the variations of the wind speed in the surface layer during the gust front were mainly dominated by the long-lived coherent eddies. Next we present results for the temperature variations.
\begin{figure}[h] \centering \hspace*{-1in} \includegraphics[width=1.3\textwidth]{Figure_5.jpg} \caption{Same as in Fig. \ref{fig:4}, but for the sonic temperatures ($T_{S}$), shown for $z=$ (a) 4 m, (b) 8 m, (c) 20 m, and (d) 40 m. Similar to Fig. \ref{fig:3}c, a decrease in $T_{S}$ is visible between 15:00-15:10 PM at all the heights. The thick black lines in all the panels indicate a Fourier-filtered low-frequency signal (threshold frequency set at 0.01 Hz) used to remove the trend from the time series to calculate the turbulent fluctuations.
The grey shaded regions designate the two respective periods between 14:00-15:00 PM and 15:10-16:10 PM, which occurred before and after the drop in $T_{S}$.} \label{fig:5} \end{figure}
Figure \ref{fig:5} shows the high-frequency time series of the sonic temperatures ($T_{S}$) at four different heights, between the period 14:00-16:10 PM. The first thing we notice from Fig. \ref{fig:5} is the sharp drop in $T_{S}$ between 15:00-15:10 PM. Physically, we interpret this as follows: the passing of the gust front resulted in a strong flow of cold air into the surface layer, which caused the drastic reduction in the temperature. It is remarkable that such a reduction in temperature occurred simultaneously at all four measurement heights, from $z=$ 4 to 40 m. Additionally, this drop in the temperature created an interface which separated two different regimes. For their better identification, these two regimes are shaded in grey in Fig. \ref{fig:5}. One can notice that in the first regime (14:00-15:00 PM) the $T_{S}$ values exhibited significant temporal variation, whereas in the second regime (15:10-16:10 PM) the variation in $T_{S}$ was extremely weak. Apart from that, in the period between 14:00-15:00 PM, the $T_{S}$ time series displayed the classical ramp-cliff patterns commonly observed in convective surface layers. This was in sharp contrast with the period between 15:10-16:10 PM, where such signatures were completely absent. Undoubtedly, this type of phenomenon presents an interesting case to investigate the respective turbulent characteristics of the temperature variations associated with the periods before and after the drop in the temperature. In fact, to gain a deeper perspective, one could ask \emph{whether there is any systematic difference between the structural properties of the turbulent temperature fluctuations, related to their size distributions and temporal organization}. However, since this is a non-trivial occurrence, the computation of the turbulent temperature fluctuations is difficult, given the variation in the mean state itself. Therefore, in order to compute the turbulent fluctuations, the apparent trend in the $T_{S}$ time series needs to be removed. To accomplish that, we computed the Fourier spectrum of the $T_{S}$ time series for the whole period (14:00-16:10 PM). We noted that, for frequencies below 0.01 Hz, the Fourier amplitudes displayed a clear linear trend. From \citet{kaimal1994atmospheric}, we know that such a trend is associated with low-frequency oscillations in a turbulent time series, which need to be removed in order to compute the fluctuations. We thus applied a Fourier filter where the contributions from the frequencies below 0.01 Hz were removed, and then the inverse transformation was applied to retrieve the filtered time series. The thick black lines shown in all four panels of Fig. \ref{fig:5} indicate this Fourier-filtered low-frequency variation. To compute the turbulent fluctuations in temperature ($T^{\prime}$), this trend was removed from the $T_{S}$ series.
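Operationally, this detrending step amounts to the following sketch (NumPy; the function name is ours, and treating the cutoff as strictly below 0.01 Hz is an assumption), using the 10-Hz sampling rate of the sonic data.
\begin{verbatim}
import numpy as np

def detrend_fourier(T_s, fs=10.0, f_cut=0.01):
    # High-pass Fourier filter: remove all contributions at
    # frequencies below f_cut (Hz), including the mean, and
    # return the fluctuations T' together with the trend.
    T_hat = np.fft.rfft(T_s)
    freqs = np.fft.rfftfreq(T_s.size, d=1.0 / fs)
    trend = np.fft.irfft(np.where(freqs < f_cut, T_hat, 0.0),
                         n=T_s.size)
    return T_s - trend, trend
\end{verbatim}
The returned trend corresponds to the thick black lines in Fig. \ref{fig:5}, and $T^{\prime}$ is used in all subsequent analyses.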
We next investigate the respective structural properties of the turbulent $T^{\prime}$ signals for the periods between 14:00-15:00 PM and 15:10-16:10 PM. \subsection{The turbulent structures of temperature variations} \label{temp_var} We commence our description of the structural properties of turbulence associated with the two periods (shown as the grey shaded regions of Fig. \ref{fig:5}) by discussing the differences in the PDF's of the temperature fluctuations and their temporal increments. This is because the tails of the PDF's of any turbulent fluctuations are influenced by the coherent eddy motions, whereas the tails of the increment PDF's are governed by the small-scale eddy motions \citep{chu1996probability,pouransari2015statistical}. \subsubsection{PDF's of temperature fluctuations and increments} \label{temp_pdfs}
\begin{figure}[h] \centering \hspace*{-1.3in} \includegraphics[width=1.4\textwidth]{Figure_6.jpg} \caption{The PDF's associated with the turbulent temperature fluctuations ($T^{\prime}/\sigma_{T}$) and their increments ($\Delta T^{\prime}/\sigma_{\Delta T}$) are shown separately for the two periods designated in Fig. \ref{fig:5}. The top panels (a) and (b) show the respective PDF's corresponding to the fluctuations and their increments during the period 14:00-15:00 PM. Similar information is presented in the bottom panels (c) and (d), but for the period 15:10-16:10 PM. Note that the fluctuations and increments are normalized by their respective standard deviations, computed over the entire period 14:00-16:10 PM, after removal of the low-frequency trend from the sonic temperature signals. The legend representing the colours associated with each height is shown in panel (a).} \label{fig:6} \end{figure}
Figure \ref{fig:6} shows the PDF's of the temperature fluctuations ($T^{\prime}$) and their increments ($\Delta T^{\prime}$) for the periods 14:00-15:00 PM and 15:10-16:10 PM (see Fig. \ref{fig:5}). Note that the fluctuations and the increments in Fig. \ref{fig:6} are normalized by their respective standard deviations ($\sigma_{T}$ and $\sigma_{\Delta T}$) computed over the whole 14:00-16:10 PM period, after the removal of the low-frequency trend. From Figs. \ref{fig:6}a and c, we notice that the $T^{\prime}$ PDF's for both periods are approximately symmetric with respect to 0 and do not reveal any substantial difference with height. Nevertheless, the $T^{\prime}$ PDF's for the period 14:00-15:00 PM exhibit a significantly broader range of fluctuations compared to the period 15:10-16:10 PM. This observation is consistent with our visual inspection of Fig. \ref{fig:5}, where the temperature signals at all the heights are apparently more jagged in the first period as opposed to the next. Apart from that, from Figs. \ref{fig:6}b and d we also observe a substantial difference in the increment PDF's between the two periods. For all four heights, the increment PDF's in Fig. \ref{fig:6}b display a heavy left tail associated with the large negative values and an extruded peak at the smaller values. This feature is remarkably consistent with \citet{chu1996probability}, who observed similar characteristics in an unstable surface layer. They explained this heavy left tail as a consequence of the abundance of ramp-cliff patterns present in the temperature signal during convective conditions. However, in Fig. \ref{fig:6}d no such evidence of a heavy tail could be found in the increment PDF's corresponding to all four heights. On the contrary, the increment PDF's in Fig. \ref{fig:6}d look strikingly similar to the PDF's of the fluctuations themselves in Fig. \ref{fig:6}c.
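For reference, the normalization and binning behind Fig. \ref{fig:6} can be reproduced along the following lines (NumPy; the bin count, the unit-lag increment, and the synthetic stand-in series are illustrative assumptions).
\begin{verbatim}
import numpy as np

def normalized_pdf(x, sigma, bins=100):
    # empirical PDF of x normalized by a reference standard deviation
    hist, edges = np.histogram(x / sigma, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

rng = np.random.default_rng(0)
T_p = rng.standard_normal(36000)  # stand-in for one period of T' (10 Hz x 1 hr)
dT = np.diff(T_p)                 # temporal increments Delta T' (one-sample lag)
sigma_T, sigma_dT = T_p.std(), dT.std()  # in the text: std over 14:00-16:10 PM
c_f, p_f = normalized_pdf(T_p, sigma_T)  # PDF of T'/sigma_T
c_i, p_i = normalized_pdf(dT, sigma_dT)  # PDF of Delta T'/sigma_{Delta T}
\end{verbatim}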
To summarize, the aforementioned results imply that during the period 14:00-15:00 PM the statistical characteristics of the large- and small-scale turbulent structures are considerably different, given the clear discrepancy in the shapes of the fluctuation and increment PDF's. On the other hand, for the period 15:10-16:10 PM, the fluctuation and increment PDF's display a very similar character, with almost no change being observed in their shapes. This points to a statistical equivalence between the large- and small-scale structures when the temperature fluctuations are weak and remain quiescent. Therefore, one may ask \emph{how this difference in the statistical attributes between the energetic and the quiescent periods is related to the size distributions and clustering tendencies of the turbulent structures}. Note that from the above-discussed PDF's no information can be obtained about the time or length scales (sizes) of the associated turbulent structures. This is because, while performing the binning exercise to construct the PDF's, we mask any dependence on time. Nonetheless, in a turbulent signal the positive and negative fluctuations occur with a range of different time scales as they tend to exit and re-enter their respective states \citep{kalmar2019complexity,chowdhuri2019revisiting}. To extract that additional information, we turn our attention to persistence analysis. \subsubsection{Persistence analysis of temperature structures} \label{persistence} Persistence is defined as the probability that the local value of a fluctuating field does not change its sign for a certain amount of time \citep{majumdar1999persistence}. Put differently, the concept of persistence is equivalent to the distributions of the inter-arrival times between the successive zero-crossings of a stochastic signal \citep{chamecki2013persistence}. The zero-crossings in a stochastic signal are identified by using a telegraphic approximation (TA) of its fluctuations ($x^{\prime}$), expressed as,
\begin{equation}
(x^{\prime})_{\rm TA}=\frac{1}{2}\left[\frac{x^{\prime}(t)}{|x^{\prime}(t)|}+1\right],
\label{TA}
\end{equation}
and locating the points where this TA series changes its value from 0 (off state) to 1 (on state) or vice-versa. For laboratory boundary layer flows, \citet{sreenivasan1983zero} and \citet{kailasnath1993zero} interpreted the distributions of the persistence time scales as the size distributions of the turbulent structures, after converting those to spatial length scales by employing Taylor's frozen turbulence hypothesis. For further details on the application of persistence in turbulent flows, the readers may consult \citet{chowdhuri2020persistence}.
\begin{figure}[h] \centering \hspace*{-1.3in} \includegraphics[width=1.4\textwidth]{Figure_7.jpg} \caption{The persistence PDF's of the turbulent temperature fluctuations ($T^{\prime}$) are plotted separately for the same two periods as shown in Fig. \ref{fig:5}. The persistence time scales ($t_{p}$) are normalized in two different ways: either they are converted to a streamwise size $(t_{p}\overline{u})$ ($\overline{u}$ is the mean horizontal wind speed) and normalized by $z$ (see panels (a) and (b)), or they are normalized directly by the integral scale $\Gamma_{T}$ associated with temperature (see panels (c) and (d)). With the second normalization, a good collapse is observed in the persistence PDF's for all the heights, with a clear power-law signature. The associated power-law functions are shown in the legends of panels (c) and (d).
The legend representing the colours associated with each height is shown in panel (a). The dashed black lines with solid arrows in panels (c) and (d) indicate the position $t_{p}/\Gamma_{T}=$ 1.} \label{fig:7} \end{figure} Figure \ref{fig:7} shows the persistence PDF's or, equivalently, the size distributions of the turbulent structures associated with the temperature fluctuations, corresponding to the same two periods as indicated in Fig. \ref{fig:5}. Note that a log-log representation is chosen to display the PDF's, so that any power-law function appears as a straight line in such plots. To compute these PDF's, we follow the same methodology as detailed in \citet{chowdhuri2020persistence}. Figures \ref{fig:7}a and b show the size distributions for the two periods, where the persistence time ($t_{p}$) is converted to a streamwise length scale ($t_{p}\overline{u}$, where $\overline{u}$ is the mean horizontal wind speed as obtained from Fig. \ref{fig:4}) and normalized by the height $z$ above the surface. This normalization is chosen under the assumption that in a convective surface layer the turbulent structures are self-similar with height \citep{kader1991spectra}. Despite this, one may notice that normalizing the persistence length scales with $z$ does not collapse the size distributions, as there is a clear separation among the PDF's at different heights (Figs. \ref{fig:7}a and b). To investigate this further, in Figs. \ref{fig:7}c and d the same information is presented, but with the $t_{p}$ values normalized by the integral time scales of temperature fluctuations. We use this normalization because \citet{chowdhuri2020persistence} demonstrated that the statistical characteristics of the persistence PDF's are related to the temporal coherence of the time series, expressed by its integral scale. These integral time scales ($\Gamma_{T}$) are computed separately for the two periods, from the first zero-crossings of the auto-correlation functions of the temperature fluctuations \citep{kaimal1994atmospheric,katul1997energy,li2012mean}. In Fig. \ref{fig:s1} of the supplementary material, we provide the respective auto-correlation functions and the integral scales for these two periods and their vertical variation corresponding to different measurement levels. We note that, by applying this particular normalization with the integral length scale, the size distributions of the turbulent structures collapse for all the heights, following a power-law distribution. The exponents of these power-laws were computed by performing a linear regression on the log-log plots. We obtained the best-fit values ($R^{2}>0.96$) for the exponents (slopes of the straight lines) as $2.1$ and $2.5$, respectively, for the two corresponding periods (Figs. \ref{fig:7}c and d). The difference in these exponents between the two periods remains unexplained at present. Notwithstanding the above limitation, it is intriguing to note that the power-law feature of the persistence PDF's in Figs. \ref{fig:7}c and d extends to time scales even larger than the integral scale. This observation is in disagreement with the results from canonical convective surface layers. \citet{cava2012role} and \citet{chowdhuri2020persistence} have demonstrated that in a statistically stationary convective surface layer, for a wide range of stability conditions (spanning from highly-convective to near-neutral), the persistence PDF's display a power-law distribution for time scales smaller than the integral scales.
They attributed this to the self-similar Richardson cascading mechanism, commonly observed in a well-developed turbulent flow. In addition, they also noticed that at scales larger than the integral scales there was an exponential drop in the persistence PDF's, a hallmark of a Poisson-type process. \citet{cava2012role} interpreted this phenomenon as a consequence of the random deformation of the large coherent structures, giving rise to several sub-structures with independent arrival times. However, for the present context such an exponential drop is not visible in Figs. \ref{fig:7}c and d. Since power-laws are synonymous with scale-invariance \citep{newman2005power,verma2006universal}, this implies that the entire size distributions of the turbulent structures for both the periods (14:00-15:00 PM and 15:10-16:10 PM) are scale-free in nature. This indicates that the passing of the gust front initiated a scale-free response which governed the turbulent characteristics of the temperature fluctuations, generating a self-similar size distribution of the associated structures. It is worth mentioning that this type of situation has qualitative similarities with the self-organized criticality (SOC) observed in the sandpile model of \citet{bak1988self}, famously known as the BTW experiment. \citet{lewis2010cause} and \citet{accikalin2017concept} have specified that SOC occurs in complex systems which operate at a point near the critical state, where a small external perturbation can create avalanches having a power-law size distribution. To draw such a heuristic analogy with the present case, we expect that the deep-convective cells whose outflows generated the gust front acted as an external stimulus which disturbed the surroundings beyond the tipping point and created a scale-free response. Over the course of time, this response propagated to the surface layer of the convective boundary layer and generated structures having self-similar size distributions. Despite sounding promising, at present the aforementioned connection with SOC is a hypothesis, and future research in this direction is required to corroborate it further. Be that as it may, we next investigate whether these self-similar temperature structures displayed any clustering tendency related to their temporal organization during these two periods. \subsubsection{Clustering properties of temperature structures} \label{clustering} While the persistence analysis describes the size distributions of the turbulent structures, the clustering or aggregation property is related to the temporal organization pattern of these structures. From a statistical point of view, a time series is clustered if the on and off states of the corresponding TA series (see Eq. \ref{TA}) are distributed unevenly in time \citep{cava2009effects,poggi2009flume,cava2012role,cava2019submeso}. \begin{figure}[h] \centering \hspace*{-1.6in} \includegraphics[width=1.6\textwidth]{Figure_8.jpg} \caption{The RMS values of the zero-crossing density fluctuations $\Big[{\overline{{{\delta n}_{T^{\prime}}(\tau)}^{2}}}^{1/2}\Big]$ are shown for the $T^{\prime}$ signals, plotted against the normalized lags $\tau/\Gamma_{T}$, during the periods (a) 14:00-15:00 PM and (b) 15:10-16:10 PM. The clustering exponents are computed from the power-law fits to the RMS zero-crossing density fluctuations at different lags, as shown in the legends of panels (a) and (b).
The grey lines in both the panels denote the power-law exponent of $0.5$, corresponding to a white-noise signal which displays no clustering effect. The dashed black lines with solid arrows indicate the position $\tau/\Gamma_{T}=$ 1.} \label{fig:8} \end{figure} To quantify such behaviour, a clustering exponent ($\alpha$) is computed such that, \begin{equation} {\overline{{{\delta n}(\tau)}^{2}}}^{1/2} \propto \tau^{-\alpha}, \label{CE} \end{equation} where $\tau$ are the time lags and ${\overline{{{\delta n}(\tau)}^{2}}}^{1/2}$ is the root-mean-square (RMS) of the zero-crossing density fluctuations, defined as, \begin{equation} {\delta n}(\tau)=n_{\tau}(t)-\overline{n_{\tau}(t)}, \label{CE_1} \end{equation} where $n_{\tau}(t)$ represents the zero-crossing densities at each $\tau$. A step-by-step implementation of the method to compute the clustering exponents is provided by \citet{poggi2009flume}, which we have followed in this study. Additionally, \citet{sreenivasan2006clustering} showed that for a white-noise signal, which displays no clustering tendency, the clustering exponent (see Eq. \ref{CE}) equals 0.5. Thus, if $\alpha<0.5$, it is an indication that the turbulent structures have a tendency to aggregate or cluster \citep{poggi2009flume}. Figure \ref{fig:8} shows the RMS values of the zero-crossing density fluctuations for the $T^{\prime}$ signals during the two periods (14:00-15:00 PM and 15:10-16:10 PM), corresponding to all the four heights. Note that the time lags ($\tau$) shown in Fig. \ref{fig:8} are normalized by the integral scales of temperature ($\Gamma_{T}$) associated with the two periods. The clustering exponents ($\alpha$) are computed by fitting power-laws to these RMS values at different lags, as shown in Fig. \ref{fig:8}. Similar to Fig. \ref{fig:7}, a log-log representation is chosen so that the power-laws appear as straight lines. Moreover, for comparison purposes, the grey lines in both panels indicate the clustering exponent of 0.5 related to a white-noise signal. From Fig. \ref{fig:8}a, we note that for the period 14:00-15:00 PM, the clustering exponents of the $T^{\prime}$ signals were significantly different from 0.5 ($\alpha<0.5$) at all the four heights. Apart from that, a clustering tendency was observed for the turbulent structures at sizes greater than the integral scales. Nevertheless, for the lower two levels ($z=$ 4 and 8 m), a slight break in the slopes of the clustering exponents was visible at scales approximately equal to the integral scales. However, as the heights increased ($z=$ 20 and 40 m), this difference almost disappeared and a substantial drop was noted in the clustering exponents ($\alpha \approx$ 0.2) compared to the lower two levels ($\alpha \approx$ 0.3). On the other hand, for the period 15:10-16:10 PM, the clustering exponents corresponding to all the four heights were approximately equal to each other, as can be seen from Fig. \ref{fig:8}b ($\alpha \approx$ 0.3). This indicates that, even though the turbulent structures are self-similar for both the periods, their clustering properties with height were significantly different. Despite such discrepancy with Fig. \ref{fig:8}a, a similar clustering tendency was observed in Fig. \ref{fig:8}b at the scales larger than the integral scales, with $\alpha$ values being substantially smaller than 0.5. This observation is clearly at variance with the case presented by \citet{cava2012role}.
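For reproducibility, the following is a minimal sketch (not the authors' code) of the persistence and clustering diagnostics of Eqs. \ref{TA} and \ref{CE}; the authoritative step-by-step recipe is that of \citet{poggi2009flume}. The input file name, the assumed 20-Hz sampling rate, the lag grid, and the window-based estimate of the zero-crossing density are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the persistence and clustering analyses
# (illustrative, not the authors' code).
import numpy as np

def telegraphic(x):
    """Telegraphic approximation of Eq. (TA): 1 if x > 0, else 0."""
    return (x > 0).astype(int)

def persistence_times(x, fs):
    """Inter-arrival times (s) between successive zero-crossings."""
    crossings = np.flatnonzero(np.diff(telegraphic(x)) != 0)
    return np.diff(crossings) / fs   # histogram these for Fig. 7

def clustering_exponent(x, lags):
    """Exponent alpha of Eq. (CE) from RMS density fluctuations."""
    cross = np.abs(np.diff(telegraphic(x)))  # 1 at each crossing
    rms = []
    for lag in lags:                 # lag = window length (samples)
        n_win = len(cross) // lag
        dens = cross[:n_win * lag].reshape(n_win, lag).sum(1) / lag
        rms.append(np.sqrt(np.mean((dens - dens.mean()) ** 2)))
    # slope of log(RMS) vs log(lag) is -alpha; white noise: 0.5
    return -np.polyfit(np.log(lags), np.log(rms), 1)[0]

fs = 20.0                            # assumed sampling rate (Hz)
Tp = np.load("temperature_fluctuations.npy")  # hypothetical input
tp = persistence_times(Tp, fs)
lags = np.unique(np.logspace(1, 4, 20).astype(int))
lags = lags[lags < len(Tp) // 8]     # keep enough windows per lag
alpha = clustering_exponent(Tp, lags)
print(f"clustering exponent alpha = {alpha:.2f}")
\end{verbatim}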
\citet{cava2012role} found that, in a canonical convective surface layer, at scales larger than the integral scales, the turbulent structures displayed no clustering tendency as the exponents ($\alpha$) approached 0.5. Therefore, an important question arises: \emph{is such disparity related to the self-similar nature of the turbulent temperature structures extending over all the scales of motion as a result of a scale-free response associated with the gust front?} A plausible hypothesis could be that the passage of the gust front in a convective surface layer generates a cascading effect which permeates across all the scales and modulates the aggregation properties of the turbulent structures even at sizes larger than the integral scale. However, further confirmation of this scale-modulation effect is beyond the scope of the present article and is reserved for our future endeavours. We present our conclusions in the next section. \section{Conclusion} \label{concl} In this paper, we present a case study utilising the simultaneous observations from a C-band Doppler weather radar and an instrumented micrometeorological tower with multi-level measurements, to delineate the influence of a gust front on the surface layer turbulence in a tropical convective boundary layer. We find that, for this particular case, a gust front detected from the Doppler weather radar passed over the location of the 50-m micrometeorological tower. This gust front originated from the outflows of several deep-convective clouds, located within a 50-km radius of the tower location. Due to the intrusion of the cold air associated with the gust front, a drop in the temperature was noted at heights within the surface layer. We investigated the consequent effects of this on the turbulent temperature characteristics and the following results emerged: \begin{enumerate} \item Due to the passage of a gust front, a sudden drop of $\approx 4^{\circ}$C was noted in the temperature at heights within the surface layer. Additionally, it was found that this drop in the temperature created an interface which separated two different regimes. In one regime, the temperature fluctuations were large and energetic, whereas in the other regime they were weak and quiescent. \item By investigating the structural properties of the turbulent temperature fluctuations associated with these two regimes, we discovered that the size distributions of the turbulent structures for both of these regimes displayed a clear power-law signature. Since power-laws are synonymous with scale-invariance, this indicated that the passing of the gust front initiated a scale-free response which governed the turbulent characteristics of the temperature fluctuations. To explain this, we provide a heuristic analogy with the self-organized criticality (SOC) observed in complex systems. \item Despite the self-similar nature of the turbulent structures, their aggregation or clustering properties differed between these two regimes. For the regime corresponding to large temperature fluctuations, the turbulent structures were significantly clustered, and their clustering properties changed with height. However, for the second regime, where the temperature fluctuations were weak, there was a comparatively weaker tendency to cluster, with no discernible change being observed with height.
\end{enumerate} In summary, going back to the two research questions raised in the introduction regarding 1) the modification of the clustering due to the presence of a gust front and 2) the scale-related effect, we show that there is a definite clustering tendency for the turbulent temperature structures at scales larger than the integral scales. This finding is at odds with a canonical convective surface layer, where the turbulent structures display no clustering tendency at scales larger than the integral scales. A plausible reason for this oddity is that, in our case, with the presence of a gust front, the self-similar nature of the turbulent temperature structures extends to scales even larger than the integral scales. Last but not least, in general the clustering or aggregation properties are related to small-scale phenomena, which cause anomalous scaling in the inertial subrange of the turbulence spectrum. In the present study, we observe that, during the existence of a gust front, the turbulent temperature structures exhibit clustering at scales larger than the integral scales. Since the integral scales are associated with energy-containing motions, this preliminary evidence suggests a scale-modulation effect where the small scales influence the larger scales. Therefore, for our future research on the effect of gust fronts on convective surface layer turbulence, it would be worth asking, \emph{Do the vertical velocity fluctuations also display similar characteristics and what are the associated implications for turbulent flux modelling?} \section*{Data availability} On reasonable request, the datasets analysed during the current study can be made available to interested researchers by contacting Thara V Prabha (\href{mailto:thara@tropmet.res.in}{thara@tropmet.res.in}). The computer codes needed to reproduce the figures are available by contacting the corresponding author Subharthi Chowdhuri at \href{mailto:subharthi.cat@tropmet.res.in}{subharthi.cat@tropmet.res.in}. \section*{Conflict of Interest} The authors declare that they have no conflict of interest. \section*{Author contributions} The authors Subharthi Chowdhuri and Thara V Prabha conceptualized the study. The data collection was performed by Subharthi Chowdhuri, Kiran Todekar, Anand K Karipot, and Palani Murugavel. All the analyses for the paper were carried out by Subharthi Chowdhuri. The first draft of the manuscript was written by Subharthi Chowdhuri and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. \section*{Acknowledgements} The Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) is conducted by the Indian Institute of Tropical Meteorology, which is an autonomous institute fully funded by the Ministry of Earth Sciences, Government of India. The authors are grateful to several colleagues who contributed to the success of the CAIPEEX project. The authors also acknowledge the local support and hospitality provided by N. B. Navale Sinhgad College of Engineering (NBNSCOE), Kegaon-Solapur, during the experiment. The author Subharthi Chowdhuri expresses his gratitude to Dr. Tirtha Banerjee and to Dr. Tam\'{a}s Kalm\'{a}r-Nagy for many fruitful discussions on the concepts of persistence, zero-crossing densities, and the SOC phenomenon. \bibliographystyle{apalike}
\section{Introduction and main results} There has been recent interest in Schr\"odinger-type operators of the form \begin{align} \label{eq:defhlambda} H_{\lambda} = T(-i\nabla)-\lambda V \quad \text{in}\ L^2(\mathbb{R}^d)\,, \end{align} where the kinetic energy $T(\xi)$ vanishes on a submanifold of codimension one, $V$ is a real-valued potential, and $\lambda>0$ is a coupling constant. We are interested in the weak coupling limit $\lambda\to 0$ for potentials that decay slowly in some $L^p$ sense to be made precise. Operators of this type appear in many areas of mathematical physics \cite{MR0404846,MR1671985,MR772044,MR1970614,MR2303586,MR2466689,MR2250814,PhysRevB.77.184517,MR2365659,MR2450161,MR2410898,MR2643024,hoang2016quantitative,Gontier_2019}. The goal of \cite{MR2643024} was to generalize the results and techniques of \cite{MR2365659} and \cite{PhysRevB.77.184517} to a large class of kinetic energies. Our goal, complementary to \cite{MR2643024}, is to relax the conditions on the potential. To keep technicalities to a minimum, we state our result for $T(-i\nabla)=|\Delta+1|$. This was one of the main motivations to study operators of the form \eqref{eq:defhlambda}, due to their role in the BCS theory of superconductivity \cite{MR2365659,MR2410898}. As in previous works \cite{MR2365659,MR2643024} a key role is played by an operator $\mathcal{V}_S$ on the unit sphere $S\subset\mathbb{R}^d$, whose convolution kernel is given by the Fourier transform of $V$. The potentials we consider here need not be in $L^1(\mathbb{R}^d)$, but $\mathcal{V}_S$ may be defined as a norm limit of a regularized version (see Section \ref{Section def V_S} for details). The potential $V$ is assumed to belong to the amalgamated space $\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, where the first space measures global (average) decay and the second measures local regularity (see \eqref{potential class def.}). We note that $L^{\frac{d+1}{2}}\cup L^{\frac d2}\subseteq \ell^{\frac{d+1}{2}}L^{\frac d2}$ by Jensen's inequality. \begin{theorem} \label{thm. asymptotics} Let $d\geq 3$ and $H_{\lambda} =|\Delta+1|-\lambda V$. If $V\in\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, then for every eigenvalue $a_S^j>0$ of $\mathcal{V}_S$ in \eqref{eq:lswbs2}, counting multiplicity, and every $\lambda>0$, there is an eigenvalue $-e_j(\lambda)<0$ of $H_{\lambda}$ with weak coupling limit \begin{align} \label{eq. weak coupling limit} e_j(\lambda) = \exp\left(-\frac{1}{\lambda a_S^j}(1+o(1))\right) \quad \text{as}\ \lambda \to 0. \end{align} \end{theorem} For simplicity we stated the result for $d\geq 3$, but it easily transpires from the proof that it also holds in $d=2$ for $V\in\ell^{\frac{d+1}{2}}L^{1+\epsilon}$ for arbitrary $\epsilon>0$. All other possible negative eigenvalues (not corresponding to $\mathcal{V}_S$) satisfy $e_j(\lambda)\leq \exp(-c/\lambda^2)$. The statement in \cite{MR2643024} about the convergence of eigenfunctions also holds for the potentials considered here. Since the proofs are completely analogous, we will not discuss them. In previous works \cite{MR2365659,MR2643024} it was assumed that $V\in L^{1}(\mathbb{R}^d)\cap L^{\frac{d}{2}}(\mathbb{R}^d)$. Our main contribution is to remove the $L^1$ assumption, allowing for potentials with slower decay. The main new idea is to use the Tomas--Stein theorem (see Subsection \ref{subsection def. V_s} and \eqref{eq:tsorg}, \eqref{eq:tsexplicit}).
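Although the proof is deferred to Section \ref{proofmainresult}, the shape of \eqref{eq. weak coupling limit} can already be guessed at this point: by the Birman--Schwinger principle (recalled in \eqref{eq:bsprinc} below), $-e$ is an eigenvalue of $H_\lambda$ if and only if $1/\lambda$ is an eigenvalue of the associated Birman--Schwinger operator, and we will show that the relevant eigenvalues of the latter behave like $a_S^j\ln(1/e)$ to leading order. Solving \begin{align*} \frac{1}{\lambda} = a_S^j \ln(1/e_j(\lambda))\,(1+o(1)) \end{align*} for $e_j(\lambda)$ then gives precisely \eqref{eq. weak coupling limit}.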
In view of the sharpness of the Tomas--Stein theorem, our result is optimal in the sense that the exponent $(d+1)/2$ in our class of admissible potentials cannot be increased unless one imposes further (symmetry) restrictions on $V$; see also the discussion below. Moreover, the use of amalgamated spaces allows us to relax the global regularity to the local condition $V\in L_{{\rm loc}}^{\frac{d}{2}}(\mathbb{R}^d)$, which just suffices to guarantee that $H_{\lambda}$ is self-adjoint. The idea of applying the Tomas--Stein theorem and related results such as \cite{MR894584} to problems of mathematical physics is not new; see, e.g., \cite{MR2219246} and \cite{MR2820160}. The validity of the Tomas--Stein theorem crucially depends on the curvature of the underlying manifold. A slight modification of our proof (see, e.g., \cite{MR3608659,MR3942229}) shows that the result of Theorem \ref{thm. asymptotics} continues to hold for general Schr\"odinger-type operators (with a suitable modification of the local regularity assumption) of the form \eqref{eq:defhlambda} as long as the Fermi surface $S=\{\xi\in\mathbb{R}^d:\ T(\xi)=0\}$ is smooth and has everywhere non-vanishing Gaussian curvature. For example, if $T$ is elliptic at infinity of order $2d/(d+1)\leq s<d$, then the assumption on the potential becomes $V\in\ell^{\frac{d+1}{2}}L^{\frac{d}{s}}$. This is outlined in Theorem \ref{asymptoticsgen} and improves \cite[Theorem 2.1]{MR2643024}. The moment-type condition on the potential in that theorem is unnecessary, regardless of whether the kinetic energy is radial or not. A straightforward generalization to the case where $S$ has at least $k$ non-vanishing principal curvatures can be obtained from the results of \cite{MR620265,MR3942229}. In that case the global decay assumption has to be strengthened to $V\in\ell^{\frac{k+2}{2}}L^{\frac{d}{s}}$. Sharp restriction theorems for surfaces with degenerate curvature are available in the three-dimensional case \cite{MR3524103}. Based on the results of \cite{MR1479544,MR3713021,Vega1992}, if the potential $V$ is radial, one might be able to relax the assumption in Theorem \ref{thm. asymptotics} to $V\in\ell^{d}L^{\frac{d}{2}}$. This naive belief is supported by the discussion in Appendix \ref{a:mt}, see especially Theorem \ref{asymptoticsradial} where we generalize Theorem \ref{thm. asymptotics} to spherically symmetric potentials with almost $L^d$ decay. For long-range potentials the weak coupling limit \eqref{eq. weak coupling limit} does not hold in general. Gontier, Hainzl, and Lewin \cite{Gontier_2019} showed $\exp(-C_1/\sqrt{\lambda})\leq e_1(\lambda)\leq\exp(-C_2/\sqrt{\lambda})$ for the Coulomb potential $V=|x|^{-1}$ in $d=3$. The key estimate \eqref{Key estimate} is a consequence of the Tomas--Stein theorem. The remainder of the proof is standard first-order perturbation theory that is done in exactly the same way as in \cite{MR2365659,MR2643024}. In a similar manner -- again following \cite{PhysRevB.77.184517,MR2643024} -- we will carry out higher-order perturbation theory in Subsection \ref{ss:higherorders} and show how one may in principle obtain any order in the asymptotic expansion of $e_j(\lambda)$ at the cost of restricting the class of admissible potentials. For instance, our methods allow us to derive the second order for $V\in L^{\frac{d+1}{2}-\epsilon}$ and some $\epsilon\in(0,1/2]$.
Furthermore, we will give an alternative proof for the existence of eigenvalues of $H_\lambda$ based on Riesz projections in Subsection \ref{Section alternative proof}. This approach allows us to handle complex-valued potentials\footnote{In this case, a transformation of statements about non-self-adjoint operators into those about a self-adjoint operator as in the proof of Theorem \ref{thm. asymptotics} seems impossible.} on the same footing as real-valued ones. The former play a role, e.g., in the theory of resonances, but are also of independent interest. We use the following notation: for two non-negative numbers $a,b$ the statement $a\lesssim b$ means that $a\leq C b$ for some universal constant $C$. If the estimate depends on a parameter $\tau$, we indicate this by writing $a\lesssim_{\tau} b$. The dependence on the dimension $d$ is always suppressed. We will assume throughout the article that the (asymptotic) scales $e$ and $\lambda$ are positive, sufficiently small, and that $\lambda\ln(1/e)$ remains uniformly bounded from above and below. The symbol $o(1)$ stands for a constant that tends to zero as $\lambda$ (or equivalently $e$) tends to zero. We set $\langle\nabla\rangle=(\mathbf{1}-\Delta)^{1/2}$. \section{Preliminaries} \label{s:prelims} \subsection{Potential class} Let $\{Q_s\}_{s\in\mathbb{Z}^d}$ be a collection of axis-parallel unit cubes such that $\mathbb{R}^d=\bigcup_{s}Q_s$. We then define the norm \begin{align} \label{potential class def.} \|V\|_{\ell^{\frac{d+1}{2}} L^{\frac{d}{2}}} := \left[\sum_s\|V\|_{L^{\frac{d}{2}}(Q_s)}^{\frac{d+1}{2}}\right]^{\frac{2}{d+1}}. \end{align} The exponent $(d+1)/2$ is natural (cf. \cite{MR2038194,MR2252331}) in view of the Tomas--Stein theorem. This is the assertion that the restrictions to $S$ of the Fourier transforms of $L^p(\mathbb{R}^d)$ functions indeed belong to $L^2(S)$ whenever $p\in[1,\kappa]$, where $\kappa=2(d+1)/(d+3)$ denotes the ``Tomas--Stein exponent''. We discuss this theorem and a certain extension thereof in more detail in the next subsection. Observe that $1/\kappa-1/\kappa'=2/(d+1)$. The following lemma is a straightforward generalization of \cite[Lemma 6.1]{MR2219246}. \begin{lemma}\label{lemma Ionescu--Schlag} Let $s\geq2d/(d+1)$ and $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{s}}$. Then \begin{align} \||V|^{1/2}\langle\nabla\rangle^{-\left(\frac{s}{2}-\frac{d}{d+1}\right)}\phi\|_{L^2} \lesssim \|V\|^{1/2}_{\ell^{\frac{d+1}{2}}L^{\frac{d}{s}}}\|\phi\|_{L^{\kappa'}}. \end{align} \end{lemma} \begin{proof} We abbreviate $\alpha=s/2-d/(d+1)\geq0$ and first note that, by duality, the assertion is equivalent to $$ \|\langle\nabla\rangle^{-\alpha}|V|^{1/2}\phi\|_{L^\kappa} \lesssim \|V\|^{1/2}_{\ell^{\frac{d+1}{2}}L^{\frac{d}{s}}}\|\phi\|_{L^2}\,. $$ If $\alpha=0$, the claim follows from H\"older's inequality, $d/s=(d+1)/2$ in this case, and $\ell^p L^p= L^p$ for all $p\in[1,\infty]$. On the other hand, if $\alpha\geq d$, we use the fact that $\langle\nabla\rangle^{-\gamma}$ is $L^p$ bounded for all $p\in(1,\infty)$ and $\gamma\geq0$ (by the H\"ormander--Mihlin multiplier theorem, cf. \cite[Theorem 6.2.7]{MR3243734}). Thus we shall show $$ \||V|^{1/2}\langle\nabla\rangle^{-\alpha}\phi\|_{L^2} \lesssim \|V\|^{1/2}_{\ell^{\frac{d+1}{2}}L^{\frac{d}{s}}}\|\phi\|_{L^{\kappa'}} $$ for $\alpha=s/2-d/(d+1)$ with $s\geq2d/(d+1)$ such that $\alpha\in(0,d)$. Let $\{Q_s\}_{s\in\mathbb{Z}^d}$ be the above family of axis-parallel unit cubes tiling $\mathbb{R}^d$, i.e., for $s\in\mathbb{Z}^d$ let $Q_s=\{x\in\mathbb{R}^d:\, \max_{j=1,...,d}|x_j-s_j|\leq1/2\}$.
Next, recall that for $\alpha\in(0,d)$, we have for any $N\in\mathbb{N}_0$ \begin{align*} |\langle\nabla\rangle^{-\alpha}\phi(x)| \lesssim_{\alpha,N} |\phi|\ast W_\alpha(x) \end{align*} where \begin{align} \label{eq:defwalpha} W_\alpha(x) = |x|^{-(d-\alpha)} \mathbf{1}_{\{|x|\leq1\}} + |x|^{-N}\mathbf{1}_{\{|x|\geq1\}}\,. \end{align} (For a proof of these facts, see, e.g., \cite[p. 132]{Stein1970}.) Abbreviating further $q_0=d/s$, we obtain \begin{align*} \||V|^{1/2}\langle\nabla\rangle^{-\alpha}\phi\|_{L^2}^2 & \lesssim_{\alpha,N} \sum_{s\in\mathbb{Z}^d} \int_{Q_s}|V(x)| [(|\phi|\ast W_\alpha)(x)]^2\,dx\\ & \leq \sum_{s\in\mathbb{Z}^d} \|V\|_{L^{q_0}(Q_s)}\cdot \||\phi|\ast W_\alpha\|_{L^{2q_0'}(Q_s)}^2\\ & \leq \sum_{s\in\mathbb{Z}^d} \|V\|_{L^{q_0}(Q_s)}\left[\sum_{s'\in\mathbb{Z}^d}\|(\mathbf{1}_{Q_{s'}}|\phi|)\ast W_\alpha\|_{L^{2q_0'}(Q_s)}\right]^2\\ & \lesssim \sum_{s\in\mathbb{Z}^d} \|V\|_{L^{q_0}(Q_s)}\left[\sum_{s'\in\mathbb{Z}^d}\|\phi\|_{L^{\kappa'}(Q_{s'})}(1+|s-s'|)^{-N}\right]^2\\ & \lesssim_N \left[\sum_{s\in\mathbb{Z}^d}\|V\|_{L^{q_0}(Q_s)}^{(d+1)/2}\right]^{2/(d+1)}\|\phi\|_{L^{\kappa'}}^2 \end{align*} where we used H\"older's inequality in the second line, the Hardy--Littlewood--Sobolev inequality in the penultimate line, and H\"older's and Young's inequality in the last line. This concludes the proof. \end{proof} \subsection{Definition of $\mathcal{V}_S$} \label{Section def V_S}\label{subsection def. V_s} As observed in \cite{MR1970614}, the weak coupling limit of $e_j(\lambda)$ is determined by the behavior of the potential on the zero energy surface of the kinetic energy, i.e., on the unit sphere $S$. We denote the Lebesgue measure on $S$ by $\mathrm{d}\omega$. For $V\in L^1(\mathbb{R}^d)$ we consider the self-adjoint operator $\mathcal{V}_S:L^2(S)\to L^2(S)$, defined by \begin{align} \label{eq:lswbs} (\mathcal{V}_Su)(\xi) = \int_S \widehat{V}(\xi-\eta) u(\eta)\,\mathrm{d}\omega(\eta), \quad u\in L^2(S), \end{align} see, e.g., \cite[Formula (2.2)]{MR2365659}. Here we have absorbed the prefactors in the definition of the Fourier transform, i.e., we use the convention \begin{align*} \widehat{V}(\xi)=\int_{\mathbb{R}^d}{\rm e}^{-2\pi i x\cdot\xi}V(x)\mathrm{d} x. \end{align*} Our definition of $\mathcal{V}_S$ differs from that of \cite{MR2365659,MR2643024} by a factor of $2$; this is reflected in the formula \eqref{eq. weak coupling limit}. Since $V\in L^1(\mathbb{R}^d)$, its Fourier transform is a bounded continuous function by the Riemann--Lebesgue lemma and is therefore defined pointwise. The Tomas--Stein theorem allows us to extend the definition of $\mathcal{V}_S$ to a larger potential class. To this end we observe that the operator in \eqref{eq:lswbs} can be written as \begin{align} \label{eq:lswbs2} \mathcal{V}_S = \mathcal{F}_{S}V\mathcal{F}_{S}^*\,, \end{align} where $\mathcal{F}_{S}:\mathcal{S}(\mathbb{R}^d)\to L^2(S)$, $\phi\mapsto\widehat{\phi}|_S$ is the Fourier restriction operator (here $\mathcal{S}$ is the Schwartz space on $\mathbb{R}^d$). Its adjoint, the Fourier extension operator $\mathcal{F}_S^*:L^2(S)\to \mathcal{S}'(\mathbb{R}^d)$, is given by \begin{align} (\mathcal{F}_S^* u)(x) = \int_S u(\xi) \me{2\pi ix\cdot\xi} \,\mathrm{d}\omega(\xi)\,. \end{align} A fundamental question in harmonic analysis is to find optimal sufficient conditions for $\kappa$ such that $\mathcal{F}_S$ is an $L^{\kappa}\to L^q$ bounded operator. By the Hausdorff--Young inequality, the case $\kappa=1$ is trivial.
On the other hand, the Knapp example (see, e.g., \cite[p. 387-388]{Stein1993}) and the decay of the Fourier transform of the surface measure \cite{Herz1962} show that $\kappa<2d/(d+1)$ and $(d+1)/\kappa'\leq(d-1)/q$ are necessary conditions. The content of the Tomas--Stein theorem (unpublished, but see, e.g., Stein \cite[Theorem 3]{Stein1986} and Tomas \cite{Tomas1975}) is that, for $q=2$, these conditions are indeed also sufficient. Concretely, the estimate \begin{align} \label{eq:tsorg} \|\mathcal{F}_S\phi\|_{L^2(S)} \lesssim \|\phi\|_{L^p(\mathbb{R}^d)} \,, \quad p\in[1,\kappa]\,, \quad \kappa=2(d+1)/(d+3) \end{align} holds for all $d\geq2$, whenever $S$ is a smooth and compact hypersurface with everywhere non-zero Gaussian curvature. In particular, this estimate is applicable to the Fermi surfaces that we consider later in Subsection \ref{generalkinen}. Moreover, by H\"older's inequality it follows that $|V|^{1/2}\mathcal{F}_S^*$ is an $L^2(S)\to L^2(\mathbb{R}^d)$ bounded operator, whenever $V\in L^q(\mathbb{R}^d)$ and $q\in[1,(d+1)/2]$. In this case, $\mathcal{V}_S$ is of course $L^2(S)$ bounded as well with \begin{align} \label{eq:tsexplicit} \|\mathcal{V}_S\| \lesssim \|V\|_{L^{q}}\,, \quad q\in[1,(d+1)/2]\,. \end{align} In the following, we will often refer to this estimate as the Tomas--Stein theorem. Recently, Frank and Sabin \cite[Theorem 2]{MR3730931} extended \eqref{eq:tsexplicit} and showed \begin{align} \label{eq:tsfsexplicitgen} \|W_1\mathcal{F}_S^*\mathcal{F}_SW_2\|_{\mathfrak{S}^{\frac{(d-1)q}{d-q}}} \lesssim_{q} \|W_1\|_{L^{2q}}\|W_2\|_{L^{2q}} \,, \quad W_1,W_2\in L^{2q}(\mathbb{R}^d)\,,\ q\in[1,(d+1)/2] \end{align} where $\mathfrak{S}^q(L^2)$ denotes the $q$-th Schatten space over $L^2$. Observe that the Schatten exponent is monotonically increasing in $q$. In particular, taking $q=(d+1)/2$, $W_1=|V|^{1/2}$, and $W_2=V^{1/2}$ where $V^{1/2}=|V|^{1/2}\sgn(V)$ with $\sgn(V(x))=1$ whenever $V(x)=0$, shows that $\mathcal{V}_S$ belongs to $\mathfrak{S}^{d+1}(L^2(S))$ (note that $(d-1)q/(d-q)=d+1$ for $q=(d+1)/2$) with \begin{align} \label{eq:tsfsexplicit} \|\mathcal{V}_S\|_{\mathfrak{S}^{d+1}} \lesssim \|V\|_{L^{(d+1)/2}}\,. \end{align} We will now extend the definition of \eqref{eq:lswbs2} to incorporate potentials in the larger class $\ell^{(d+1)/2}L^{d/2}$ that appears in our main result. \begin{proposition} \label{proposition def. V_Sw} Let $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$. Then \eqref{eq:lswbs2} defines a bounded operator on $L^2(S)$. Moreover, if $(V_n)_n$ is a sequence of Schwartz functions converging to $V$ in $\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$ and $\mathcal{V}_S^{(n)}$ are the corresponding operators in \eqref{eq:lswbs}, then $\mathcal{V}_S$ is the norm limit of the $\mathcal{V}_S^{(n)}$. \end{proposition} \begin{proof} We first assume that $V\in L^{\frac{d+1}{2}}(\mathbb{R}^d)$. It follows from the above discussion that $\mathcal{V}_S$ is the norm limit of the $\mathcal{V}_S^{(n)}$. To extend the definition to all $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, we prove \begin{align} \label{enhanced Tomas--Stein} \|\mathcal{F}_{S}V\mathcal{F}_{S}^*\| \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}. \end{align} To this end we use the following observation. For $u\in L^2(S)$ and $\xi\in S$ we write \begin{align} \label{eq:trick} (\mathcal{V}_S^{(n)}u)(\xi) = \int_S (\widehat{V_n}\phi)(\xi-\eta) u(\eta)\,\mathrm{d} \omega(\eta), \end{align} where $\phi$ is a bump function that equals $1$ in $B(0,2)$. This has the same effect as replacing $V_n$ by $\phi^{\vee}*V_n$.
(Here, $\phi^\vee(x):=\int_{\mathbb{R}^d}\me{2\pi ix\cdot\xi}\phi(\xi)\,\mathrm{d}\xi$ denotes the inverse Fourier transform.) Since \eqref{enhanced Tomas--Stein} is equivalent to the bound \begin{align} \label{enhanced Tomas--Stein TT*} \|\sqrt{|V|}\mathcal{F}_{S}^*\mathcal{F}_{S}\sqrt{V}\| \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}, \end{align} where $V^{1/2}=|V|^{1/2}\sgn(V)$ and $\sgn(V)$ is a unitary multiplication operator, we may assume without loss of generality that $V\geq 0$. Passing to a subsequence, we may also assume that $(V_n)_n$ converges to $V$ almost everywhere. By Fatou's lemma, for any $u\in L^2(S)$, \begin{align} \label{eq:defvsaux} \begin{split} \langle \mathcal{F}_{S}^*u,V\mathcal{F}_{S}^*u\rangle & \leq \liminf_{n\to\infty}\langle \mathcal{F}_{S}^*u,V_n\mathcal{F}_{S}^*u\rangle \leq \liminf_{n\to\infty}\|(\phi^{\vee}\ast V_n)(\mathcal{F}_S^*u)\|_{L^{\kappa}}\|u\|_2\\ & \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}\|u\|_2^2\,, \end{split} \end{align} where the penultimate inequality follows from the Tomas--Stein theorem \eqref{eq:tsexplicit} and the last inequality from the bound \begin{align} \label{smoothing bound} \|(\phi^{\vee}\ast V)(\mathcal{F}_S^*u)\|_{L^\kappa} \leq \|\phi^\vee\ast V\|_{L^{\frac{d+1}{2}}}\|\mathcal{F}_S^*u\|_{L^{\kappa'}} \lesssim_{\phi} \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}\|u\|_{L^2} \end{align} whose proof is similar to that of Lemma \ref{lemma Ionescu--Schlag} since the convolution kernel of $\phi^\vee$ is a Schwartz function, i.e., in particular $|\phi^\vee(x)|\lesssim_N(1+|x|)^{-N}$ for any $N\in\mathbb{N}$. More precisely, for the same family $\{Q_s\}_{s\in\mathbb{Z}^d}$ of axis-parallel unit cubes tiling $\mathbb{R}^d$ that we used in the proof of Lemma \ref{lemma Ionescu--Schlag}, we have for any $N>0$, \begin{align} \label{eq:pfsmoothingbound} \begin{split} \|\phi^\vee\ast V\|_{L^{\frac{d+1}{2}}}^{\frac{d+1}{2}} & = \|\sum_{s}\mathbf{1}_{Q_s}(\phi^\vee\ast V)\|_{L^{\frac{d+1}{2}}}^{\frac{d+1}{2}} = \sum_s \|\phi^\vee\ast (\sum_{s'}V\mathbf{1}_{Q_{s'}})\|_{L^{\frac{d+1}{2}}(Q_s)}^{\frac{d+1}{2}}\\ & \leq \sum_s\left[\sum_{s'}\|\phi^\vee\ast(V\mathbf{1}_{Q_{s'}})\|_{L^{\frac{d+1}{2}}(Q_s)}\right]^{\frac{d+1}{2}}\\ & \lesssim_N \sum_s\left[\sum_{s'}(1+|s-s'|)^{-N}\|V\|_{L^{\frac d2}(Q_{s'})}\right]^{\frac{d+1}{2}} \lesssim_N \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}^{\frac{d+1}{2}} \end{split} \end{align} where we used Young's inequality in the last two estimates. This concludes the proof. \end{proof} \subsection{Compactness of $\mathcal{V}_S$} We show that $\mathcal{V}_S$ belongs to a certain Schatten space $\mathfrak{S}^{p}(L^2(S))$ and is thus a compact operator. In particular, the spectrum of $\mathcal{V}_S$ is compact and countable with accumulation point $0$. The nonzero elements are eigenvalues of finite multiplicity. That $0$ is in the spectrum follows from the fact that $L^2(S)$ is infinite-dimensional. \begin{lemma} \label{lemma compactenss of V_S} Let $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$. Then $\mathcal{V}_S\in \mathfrak{S}^{d+1}(L^2(S))$ and \begin{align*} \|\mathcal{V}_S\|_{\mathfrak{S}^{d+1}} \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}. \end{align*} \end{lemma} \begin{proof} We recycle the proof of Proposition \ref{proposition def. V_Sw} and suppose $V\geq0$ without loss of generality again. We apply the Tomas--Stein theorem \eqref{eq:tsfsexplicit} for trace ideals with $V$ replaced by $\phi^\vee\ast V$ where $\phi$ is the same bump function as in that proof.
Note that, by \eqref{eq:trick}, this replacement does not affect the value of $\|\mathcal{V}_S\|_{\mathfrak{S}^{d+1}}$ since the eigenvalues remain the same. Thus, by \eqref{eq:pfsmoothingbound}, $\|\mathcal{V}_S\|_{\mathfrak{S}^{d+1}}\lesssim \|\phi^\vee\ast V\|_{L^{(d+1)/2}} \lesssim \|V\|_{\ell^{(d+1)/2}L^{d/2}}$. \end{proof} \subsection{Birman--Schwinger operator} As in \cite{MR2365659,MR2643024}, our proof is based on the well-known Birman--Schwinger principle. This is the assertion that, if \begin{align} \label{BS(e)} BS(e):=\sqrt{|V|}(T+e)^{-1}\sqrt{V} \end{align} with $e>0$, then \begin{align} \label{eq:bsprinc} -e\in \mathrm{spec}\left(H_\lambda\right)\iff \frac{1}{\lambda}\in \mathrm{spec}\left(BS(e)\right). \end{align} Here $\sqrt{V}:=\sgn(V)\sqrt{|V|}$ and $T=|\Delta+1|$. Thus, \eqref{eq. weak coupling limit} would follow from \begin{align} \label{To show} \ln(1/e)a_S^j(1+o(1))\in \mathrm{spec}(BS(e)) \end{align} for every eigenvalue $a_S^j>0$ of $\mathcal{V}_S$. We note that since $V$ and the symbol of $(T+e)^{-1}$ both vanish at infinity, $BS(e)$ is a compact operator, see, e.g., \cite[Chapter 4]{MR2154153}. Moreover, we have the following operator norm bound. \begin{lemma} \label{lemma bound BS(e)} Let $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$. Then \begin{align*} \|BS(e)\|\lesssim \ln(1/e)\|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}} \end{align*} for all\, $e\in (0,1/2)$. \end{lemma} \begin{proof} The proof follows from \eqref{BShigh bound} and \eqref{BSlow bound} below. \end{proof} \section{Proof of Theorem \ref{thm. asymptotics}} \label{proofmainresult} \subsection{Outline of the proof} We briefly sketch the strategy of the proof of \eqref{To show}. We first split the Birman--Schwinger operator into a sum of high and low energy pieces \begin{align*} BS(e)=BS^{\rm low}(e)+BS^{\rm high}(e). \end{align*} More precisely, we fix $\chi\in C_c^{\infty}(\mathbb{R}^d)$ such that $0\leq \chi\leq 1$ and $\chi\equiv 1$ on the unit ball. We also fix $0<\tau<1$ and set \begin{align*} BS^{\rm low}(e)=\sqrt{|V|}\chi(T/\tau)(T+e)^{-1}\sqrt{V}. \end{align*} As we will see in \eqref{BShigh bound}, the high energy piece is harmless. The low energy piece is split further into a singular and a regular part, \begin{align*} BS^{\rm low}(e)=BS^{\rm low}_{\rm sing}(e)+BS^{\rm low}_{\rm reg}(e), \end{align*} where the singular part is defined as \begin{align} \label{BSlowsing def.} BS^{\rm low}_{\rm sing}(e) = \ln\left(1+\tau/e\right)\sqrt{|V|}\mathcal{F}_{S}^*\mathcal{F}_{S}\sqrt{V}. \end{align} Note that $\sqrt{|V|}\mathcal{F}_{S}^*\mathcal{F}_{S}\sqrt{V}$ is isospectral to $\mathcal{V}_S$. As already mentioned in the introduction and the previous section, Theorem \ref{thm. asymptotics} would follow from standard perturbation theory if we could show the key bound \begin{align} \lambda\|BS^{\rm low}_{\rm reg}(e)\|&=o(1)\label{Key estimate} \end{align} for $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, as long as $\lambda\ln(1/e)$ remains uniformly bounded from above and below. \subsection{Bound for $BS^{\rm high}(e)$} Here we prove that \begin{align} \label{BShigh bound} \|BS^{\rm high}(e)\|\lesssim_{\tau} \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}} \end{align} \begin{proof} By a trivial $L^2$-bound we have \begin{align} \label{trivial L2 bound BShigh} \|BS^{\rm high}(e)\| \lesssim_{\tau} \||V|^{1/2}\langle\nabla\rangle^{-1}\|^2\,. 
\end{align} The $TT^*$ version of Lemma \ref{lemma Ionescu--Schlag} for $s=2$, \begin{align*} \|\langle\nabla\rangle^{-\frac{1}{d+1}}V\langle\nabla\rangle^{-\frac{1}{d+1}}\phi\|_{L^{\kappa}} \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}\|\phi\|_{L^{\kappa'}}, \end{align*} together with Sobolev embedding $H^{\frac{d}{d+1}}(\mathbb{R}^d)\subset L^{\kappa'}(\mathbb{R}^d)$ yields \begin{align*} \|\langle\nabla\rangle^{-1}V\langle\nabla\rangle^{-1}\phi\|_{L^{2}} \lesssim \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}\|\phi\|_{L^2}. \end{align*} Combining the last inequality with \eqref{trivial L2 bound BShigh} yields the claim. \end{proof} \subsection{Bound for $BS^{\rm low}(e)$} The Fermi surface of $T$ at energy $t\in (0,\tau]$ consists of two connected components $S_t^{\pm}=(1\pm t)^{1/2}S$. The spectral measure $E_{T}$ of $T$ is given by \begin{align} \label{spectral measure 2} \mathrm{d} E_{T}(t) = \sum_{\pm}\mathcal{F}_{S_t^{\pm}}^*\mathcal{F}_{S_t^{\pm}}\frac{\mathrm{d} t}{2\sqrt{1\pm t}} \end{align} in the sense of Schwartz kernels, see, e.g., \cite[Chapter XIV]{MR705278}. By the spectral theorem, \eqref{spectral measure 2} implies that \begin{align} \label{BSlow spectral measure rep.} BS^{\rm low}(e) = \sum_{\pm}\int_0^{\tau}\frac{\sqrt{|V|}\mathcal{F}_{S_t^{\pm}}^*\mathcal{F}_{S_t^{\pm}}\sqrt{V}}{t+e}\,\frac{\mathrm{d} t}{2\sqrt{1\pm t}}. \end{align} Together with the proof of Lemma \ref{lemma compactenss of V_S} this yields \begin{align} \label{BSlow bound} \|BS^{\rm low}(e)\|_{\mathfrak{S}^{d+1}} \lesssim_\tau \ln(1/e) \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}. \end{align} \subsection{Proof of the key bound \eqref{Key estimate}}\label{pfkeybound} From \eqref{BSlow spectral measure rep.} and the definition of $BS^{\rm low}_{\rm sing}(e)$ (see \eqref{BSlowsing def.}) we infer that \begin{align} \label{BSlowreg} BS^{\rm low}_{\rm reg}(e) = \sum_{\pm}\int_0^{\tau}\frac{\sqrt{|V|}(\mathcal{F}_{S_t^{\pm}}^*\mathcal{F}_{S_t^{\pm}}-\sqrt{1\pm t}\,\mathcal{F}^*_{S}\mathcal{F}_{S})\sqrt{V}}{t+e}\,\frac{\mathrm{d} t}{2\sqrt{1\pm t}}. \end{align} If $V$ were a strictly positive Schwartz function, then by the Sobolev trace theorem, the map $t\mapsto\sqrt{V}\mathcal{F}_{S_t^{\pm}}^*\mathcal{F}_{S_t^{\pm}}\sqrt{V}$ would be Lipschitz continuous in operator norm, see, e.g.,\ \cite[Chapter 1, Proposition 6.1]{MR2598115}, \cite[Theorem IX.40]{MR0493420}. Hence, we would obtain a stronger bound than \eqref{Key estimate} in this case. Using \eqref{spectral measure 2} and observing that \begin{align*} \mathcal{F}^*_{\mu S}\mathcal{F}_{\mu S}(x,y) = \mu^{d-1}\int_S{\rm e}^{2\pi i\mu(x-y)\cdot\xi}\mathrm{d}\omega(\xi) \end{align*} for $\mu>0$, it is not hard to see that Lipschitz continuity even holds in the Hilbert--Schmidt norm. Since $\mathfrak{S}^2\subseteq\mathfrak{S}^{d+1}$ we conclude that, if $V$ were Schwartz, we would get \begin{align} \label{Key estimate Schatten norm} \lambda\|BS^{\rm low}_{\rm reg}(e)\|_{\mathfrak{S}^{d+1}}=o(1). \end{align} We now prove that \eqref{Key estimate Schatten norm} (and hence also \eqref{Key estimate}) holds for the potentials considered in Theorem \ref{thm. asymptotics}. \begin{lemma} \label{proofkeybound} If $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, then \eqref{Key estimate Schatten norm} holds as $\lambda\to 0$ and $\lambda\ln(1/e)$ remains bounded. \end{lemma} \begin{proof} Without loss of generality we may again assume $V\geq 0$. Let $V_{n}^{1/2}$ be strictly positive Schwartz functions converging to $V^{1/2}$ in $\ell^{d+1}L^{d}$.
We use that the bound \eqref{enhanced Tomas--Stein TT*} holds locally uniformly in $t$ and can be upgraded to a Schatten bound as in Lemma \ref{lemma compactenss of V_S}. That is, for fixed $\tau$, we have the bound \begin{align} \label{eq:tslocallyuniform} \sup_{t\in [0,\tau]}\|\sqrt{V}\mathcal{F}_{S_t}^*\mathcal{F}_{S_t}\sqrt{V}\|_{\mathfrak{S}^{d+1}} \lesssim_\tau \|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}. \end{align} Since we have already proved \eqref{Key estimate Schatten norm} for such $V_n$, we may thus estimate \begin{align*} \lambda\|BS^{\rm low}_{\rm reg}(e)\|_{\mathfrak{S}^{d+1}} \lesssim_\tau \lambda\ln(1/e) \|\sqrt{V}-\sqrt{V_n}\|_{\ell^{d+1}L^{d}}\|\sqrt{V}\|_{\ell^{d+1}L^{d}} + o(1)\,. \end{align*} Since $\lambda\ln(1/e)$ is bounded, \eqref{Key estimate Schatten norm} follows upon letting $n\to\infty$. \end{proof} \section{Further results} The purpose of this section is fourfold. First, we outline how our main result, Theorem \ref{thm. asymptotics}, can be generalized to treat operators whose kinetic energy vanishes on other smooth, curved surfaces. Second, we provide an alternative proof (to that of \cite{MR2365659,MR2643024}), based on Riesz projections, that weakly coupled bound states of $H_{\lambda}=|\Delta+1|-\lambda V$ actually exist, provided $\mathcal{V}_S$ has at least one positive eigenvalue. This follows from standard perturbation theory \cite[Sections IV.3.4-5]{Ka}, but the argument is robust enough to handle complex-valued potentials. In fact, we do not know how the arguments in \cite{MR2365659,MR2643024} could be adapted to treat such potentials, as the Birman--Schwinger operator can no longer be transformed into a self-adjoint operator. Third, we give two examples of (real-valued) potential classes for which the operator $\mathcal{V}_S$ does have at least one positive eigenvalue. In both examples the potentials are neither assumed to be integrable nor positive. Fourth, we derive the second order in the asymptotic expansion of $e_j(\lambda)$ in Theorem \ref{secondorder} for $V\in L^{\frac{d+1}{2}-\epsilon}$ and $\epsilon\in(0,1/2]$. \subsection{Generalization to other kinetic energies} \label{generalkinen} As the Tomas--Stein theorem holds for arbitrary compact, smooth, curved surfaces (cf. \cite[Theorem 3]{Stein1986} and \cite[Theorem 2]{MR3730931}), it is not surprising that Theorem \ref{thm. asymptotics} continues to hold for more general symbols $T(\xi)$. In what follows, we assume that $T(\xi)$ satisfies the geometric and analytic assumptions stated in \cite{MR2643024} -- that we recall in a moment -- and a certain curvature assumption. First, we assume that $T(\xi)$ attains its minimum, which we set to zero for convenience, on a manifold \begin{align} S = \{\xi\in\mathbb{R}^d:T(\xi)=0\} \end{align} of codimension one. Next, we assume that $S$ consists of finitely many connected and compact components and that there exists a $\delta>0$ and a compact neighborhood $\Omega\subseteq\mathbb{R}^d$ of $S$ with the property that the distance of any point in $S$ to the complement of $\Omega$ is at least $\delta$. We now make some analytic assumptions on the symbol $T(\xi)$. We assume that \begin{enumerate} \item there exists a measurable, locally bounded function $P\in C^\infty(\Omega)$ such that $T(\xi)=|P(\xi)|$, \item $|\nabla P(\xi)|>0$ for all $\xi\in\Omega$, and \item there exist constants $C_1,C_2>0$ and $s\in[2d/(d+1),d)$ such that $T(\xi)\geq C_1|\xi|^s+C_2$ for $\xi\in\mathbb{R}^d\setminus\Omega$.
\end{enumerate} Since $S$ is the zero set of the function $P\in C^\infty(\Omega)$ and $\nabla P\neq0$, it is a compact $C^\infty$ submanifold of codimension one. Finally, we also assume that $S$ has everywhere non-zero Gaussian curvature\footnote{The precise definition of Gaussian curvature can be found, e.g., in \cite[p. 321-322]{Stein1986}.}. Note that this assumption was not needed in \cite{MR2643024}. Next, we redefine the singular part of the Birman--Schwinger operator \eqref{eq:lswbs}, namely \begin{align} \label{eq:lswbsgen} (\mathcal{V}_Su)(\xi) = \int_S \widehat{V}(\xi-\eta) u(\eta)\,\mathrm{d}\sigma_S(\eta)\,, \quad u\in L^2(S,\mathrm{d}\sigma_S)\,. \end{align} Here, $\mathrm{d}\sigma_S(\xi):=|\nabla P(\xi)|^{-1}\mathrm{d}\omega(\xi)$, where $\mathrm{d}\omega$ denotes the Euclidean (Lebesgue) surface measure on $S$. In particular, the elementary volume $\mathrm{d}\xi$ in $\mathbb{R}^d$ satisfies $\mathrm{d}\xi=\mathrm{d} r\,\mathrm{d}\sigma_S(\xi)$ where $\mathrm{d} r$ is the Lebesgue measure on $\mathbb{R}$. In what follows, we abbreviate the notation and write $L^2(S)$ instead of $L^2(S,\mathrm{d}\sigma_S)$. The new definition \eqref{eq:lswbsgen} of $\mathcal{V}_S$ no longer differs from that of \cite{MR2365659,MR2643024} by a factor of $2$. As before, \eqref{eq:lswbsgen} can be written as \begin{align} \label{eq:lswbs2gen} \mathcal{V}_S = \mathcal{F}_{S}V\mathcal{F}_{S}^*\,, \end{align} where $\mathcal{F}_{S}:\mathcal{S}(\mathbb{R}^d)\to L^2(S)$, $\phi\mapsto\widehat{\phi}|_S$ is the Fourier restriction operator and its adjoint, the Fourier extension operator $\mathcal{F}_S^*:L^2(S)\to \mathcal{S}'(\mathbb{R}^d)$, is now given by \begin{align} (\mathcal{F}_S^* u)(x) = \int_S u(\xi) \me{2\pi ix\cdot\xi} \,\mathrm{d}\sigma_S(\xi)\,. \end{align} Recall that the Tomas--Stein theorem asserts that $\mathcal{F}_S$ is an $L^p(\mathbb{R}^d)\to L^2(S)$ bounded operator for all $p\in[1,\kappa]$. In particular, the extension to trace ideals \cite[Theorem 2]{MR3730931} continues to hold, i.e., $\|\mathcal{V}_S\|_{\mathfrak{S}^{d+1}}\lesssim \|V\|_{L^{(d+1)/2}}$. By Sobolev embedding and $s<d$, the operator $T(-i\nabla)-\lambda V$ can be meaningfully defined if $V\in L^{d/s}(\mathbb{R}^d)$. By the assumption $s\geq 2d/(d+1)$, we have $(d+1)/2\geq d/s$. \medskip We will now outline the necessary changes in the proof of Theorem \ref{thm. asymptotics} for $T$ as above and $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{s}}$. First, the corresponding analogs of Proposition \ref{proposition def. V_Sw} and Lemma \ref{lemma compactenss of V_S} follow immediately from the Tomas--Stein theorem that we just discussed, and the analog of \eqref{eq:pfsmoothingbound} (using $(d+1)/2\geq d/s$). Next, the splitting of $BS(e)$ is the same as in Section \ref{proofmainresult}. There we have the analogous bound \eqref{BShigh bound}, i.e., $\|BS^{\mathrm{high}}(e)\|\lesssim_\tau\|V\|_{\ell^{(d+1)/2}L^{d/s}}$, by the same arguments as in that proof (cf. Lemma \ref{lemma Ionescu--Schlag}). Next, the Fermi surface of $T$ at energy $t\in(0,\tau]$ again consists of two connected components $S_t^\pm$. Using the above definition of $\mathcal{F}_{S_t^\pm}$, we observe that the spectral measure $E_T$ of $T$ is now given by \begin{align*} \mathrm{d} E_T(t) = \sum_\pm \mathcal{F}_{S_t^\pm}^*\mathcal{F}_{S_t^\pm}\, \mathrm{d} t\,.
\end{align*} Thus, by the spectral theorem, \begin{align*} BS^{\mathrm{low}}(e) = \sum_\pm\int_0^\tau \frac{\sqrt{|V|}\mathcal{F}^*_{S_t^\pm}\mathcal{F}_{S_t^\pm}\sqrt{V}}{t+e}\,\mathrm{d} t \end{align*} and by the proof of the analog of Lemma \ref{lemma compactenss of V_S} (i.e., the Tomas--Stein theorem and the analog of \eqref{eq:pfsmoothingbound}), we again obtain $\|BS^{\mathrm{low}}(e)\|_{\mathfrak{S}^{d+1}}\lesssim_\tau \ln(1/e)\|V\|_{\ell^{(d+1)/2}L^{d/s}}$. Thus, we are left to prove the analog of the key bound \eqref{Key estimate}. But this just follows from the proof of Lemma \ref{proofkeybound} and the fact that the Tomas--Stein estimate \eqref{eq:tslocallyuniform} is valid locally uniformly in $t$ for the surfaces $S_t$ that we discuss here. In turn, by \cite[Theorem 1.1]{MR2831841}, this is a consequence of the following assertion. \begin{proposition} \label{tsuniform} Assume $T(\xi)$ satisfies the assumptions stated at the beginning of this section. Then for fixed $\tau>0$, one has $\sup_{t\in[0,\tau]}|(\mathrm{d}\sigma_{S_t^\pm})^\vee(x)|\lesssim_\tau(1+|x|)^{-\frac{d-1}{2}}$. \end{proposition} \begin{proof} For $t=0$ this estimate is well known; see, e.g., \cite[Theorem 1]{Stein1986}. Now let $t\in(0,\tau]$. First note that $S_t=S_t^+\cup S_t^-$ where $S_t^{+},S_t^-$ lie outside and inside $S$, respectively. In the following we treat $S_t^+$ and abuse notation by writing $S_t\equiv S_t^+$. The arguments for $S_t^-$ are completely analogous. We will now express $\mathrm{d}\sigma_{S_t}$ in terms of $\mathrm{d}\sigma_S$. To that end we follow \cite[Chapter 2, Section 1]{MR2598115}. Let $\psi(t):S\to S_t$ be the diffeomorphism\footnote{Its construction is carried out in \cite[p. 112-113]{MR2598115} and actually requires only $P\in C^2$. However, we need the smoothness of $P$ to obtain the claimed decay of $(\mathrm{d}\sigma_{S_t})^\vee$ by means of a stationary phase argument.} defined by the formula \begin{align*} \psi(t)\zeta = \xi(t)\,, \quad \zeta\in S \end{align*} where $\xi(t)$ solves the differential equation \begin{align*} \begin{cases} \frac{\mathrm{d}\xi(t)}{\mathrm{d}t} = j(\xi(t))\\ \xi(0)=\zeta\in S \end{cases} \end{align*} with \begin{align*} j(\xi) := \frac{\nabla P(\xi)}{|\nabla P(\xi)|^2} \in C^\infty(P^{-1}[0,t])\,, \end{align*} i.e., $j(\xi(t))$ is the vector field generating the flow $\xi(t)$ along the normals of $S_t$. Next, \begin{align*} \tau(t,\xi) = \frac{\mathrm{d}\sigma_{S_t}(\psi(t)\xi)}{\mathrm{d}\sigma_S(\xi)}\,, \quad \xi\in S \end{align*} is the Radon--Nikod\'ym derivative of the preimage of the measure $\mathrm{d}\sigma_{S_t}$ under the mapping $\psi(t)$ with respect to the measure $\mathrm{d}\sigma_S$. By \cite[Chapter 2, Lemma 1.9]{MR2598115} it is given by $$ \tau(t,\xi) = \exp\left(\int_0^t (\Div j)(\psi(\mu)\xi)\,\mathrm{d}\mu\right)\,, \quad \xi\in S\,. $$ Thus, we have \begin{align} \label{eq:diffftmeas} \begin{split} (\mathrm{d}\sigma_{S_t})^\vee(x) & = \int_S \mathrm{d}\sigma_S(\xi)\,\me{2\pi ix\cdot\psi(t)\xi}\exp\left(\int_0^t \Div j(\psi(\mu)\xi)\,\mathrm{d}\mu\right)\\ & \equiv \int_S \mathrm{d}\sigma_S(\xi)\,\me{2\pi ix\cdot\xi} F_{t,x}(\xi) \end{split} \end{align} with $$ F_{t,x}(\xi) := \me{2\pi ix\cdot(\psi(t)\xi-\xi)}\exp\left(\int_0^t \Div j(\psi(\mu)\xi)\,\mathrm{d}\mu\right) $$ which depends smoothly on $\xi$. Thus, we are left to show that the absolute value of the right side of \eqref{eq:diffftmeas} is bounded by $C_\tau(1+|x|)^{-(d-1)/2}$ for all $t\in(0,\tau]$.
Decomposing $F_{t,x}(\xi)$ on $S$ smoothly into (sufficiently small) compactly supported functions, say $\{F_{t,x}(\xi)\chi_\kappa(\xi)\}_{\kappa=1}^K$ for a finite, smooth partition of unity $\{\chi_\kappa\}_{\kappa=1}^K$ subordinate to $S$, shows that, for every $x\in\mathbb{R}^d\setminus\{0\}$, there is at most one point $\overline\xi(x)\in S$ with a normal pointing in the direction of $x$. Then, by the stationary phase method, Hlawka \cite{Hlawka1950} and Herz \cite{Herz1962} (see also Stein \cite[p. 360]{Stein1993}) already showed that the leading order in the asymptotic expansion (as $|x|\to\infty$) of \eqref{eq:diffftmeas} with the cut-off amplitude $F_{t,x}\chi_\kappa$ is given by $$ |x|^{-(d-1)/2} F_{t,x}(\overline{\xi}(x))\chi_\kappa(\overline{\xi}(x)) |K(\overline{\xi}(x))|^{-1/2}\me{i\pi n/4+2\pi i x\cdot\overline{\xi}(x)}\,. $$ Here, $|K(\xi)|$ is the absolute value of the Gaussian curvature of $S$ at $\xi\in S$ which is, by assumption, strictly positive, and $n$ is the excess of the number of positive curvatures over the number of negative curvatures in the direction $x$. But since $|F_{t,x}(\xi)|\lesssim_\tau1$ for all $t\in(0,\tau]$, $x\in\mathbb{R}^d$, and $\xi\in S$, this concludes the proof. \end{proof} We summarize the findings of this subsection as follows. \begin{theorem} \label{asymptoticsgen} Let $d\geq2$, $s\in[2d/(d+1),d)$, and assume $T(\xi)$ satisfies the assumptions stated at the beginning of this subsection. If $V\in\ell^{\frac{d+1}{2}}L^{\frac{d}{s}}$, then for every eigenvalue $a_S^j>0$ of $\mathcal{V}_S$ in \eqref{eq:lswbs2gen}, counting multiplicity, and every $\lambda>0$, there is an eigenvalue $-e_j(\lambda)$ of $T(-i\nabla)-\lambda V$ with weak coupling limit \begin{align*} e_j(\lambda) = \exp\left(-\frac{1}{2\lambda a_S^j}(1+o(1))\right) \quad \text{as}\ \lambda\to0\,. \end{align*} \end{theorem} \subsection{Alternative proof and complex-valued potentials} \label{Section alternative proof} We first consider the case where $V$ is real-valued and then indicate how to modify the proof in the complex-valued case. For simplicity, we even assume $V\geq 0$ so that the Birman--Schwinger operator is automatically self-adjoint. The case where $V$ does not have a sign could also be treated by the methods of \cite{MR2365659}, but here it follows from the general case considered later. For $V\geq 0$ we have, by self-adjointness, \begin{align}\label{bound for inverse of BS} \|(BS^{\rm low}_{\rm sing}(e)-z)^{-1}\|\leq 1/\min_j|z-z_j(e)|, \end{align} where $z_j(e)=\ln\left(1+\tau/e\right)a_S^j$ are the eigenvalues of $BS^{\rm low}_{\rm sing}(e)$. Fixing an integer $i$ and a range for $e$ such that $\lambda\ln(1/e)$ is bounded by an absolute constant from above and below, it follows that if $\gamma$ is a circle of radius $c\ln(1/e)$ around the eigenvalue $z_i(e)$, with $c$ a sufficiently small positive number, then there are no other eigenvalues in the interior of $\gamma$, and \begin{align*} \max_{z\in\gamma}\|(BS^{\rm low}_{\rm sing}(e_i(\lambda))-z)^{-1}\| \leq 1/(c\ln(1/e)). \end{align*} Hence, by \eqref{BShigh bound} and \eqref{Key estimate}, if we set $C(z)=(BS^{\rm low}_{\rm sing}(e)-z)^{-1}(BS(e)-BS^{\rm low}_{\rm sing}(e))$, then \begin{align}\label{<1!} r^{-1} := \max_{z\in\gamma}\|C(z)\|\leq c^{-1} o(1), \end{align} and this is $<1$ for $\lambda$ small enough.
It follows from a Neumann series argument that $\gamma$ is contained in the resolvent set of the family $T(\kappa)=BS^{\rm low}_{\rm sing}(e)+\kappa(BS(e)-BS^{\rm low}_{\rm sing}(e))$ for $|\kappa|<r$ and that $(T(\kappa)-z)^{-1}$ is continuous in $|\kappa|<r$, $z\in\gamma$. This implies that the Riesz projection \begin{align*} P(\kappa)=-\frac{1}{2\pi i}\oint_{\gamma}(T(\kappa)-z)^{-1}\mathrm{d} z \end{align*} has constant rank for $|\kappa|<r$. In particular, $\rk P(0)=\rk P(1)$, which means that $BS^{\rm low}_{\rm sing}(e)$ and $BS(e)$ have the same number of eigenvalues in the interior of $\gamma$. Hence $BS(e)$ has exactly one (real) eigenvalue $w_i(e)$ at a distance $\leq c\ln(1/e)$ from $z_i(e)$. Since $c$ can be chosen arbitrarily small, it follows that $w_i(e)=z_i(e)(1+o(1))$. By the Birman--Schwinger principle this implies \eqref{eq. weak coupling limit}. We now drop the assumption that $V$ is real-valued. By inspection of the proof, it is evident that Lemma \ref{lemma bound BS(e)} and \eqref{Key estimate Schatten norm} continue to hold for complex-valued $V$ and $e$ if $\ln(1/e)$ is replaced by its absolute value. We assume here that $e\in \mathbb{C}\setminus(-\infty,0]$ and take the branch of the logarithm that agrees with the real logarithm on the positive real line. We also replace our standing assumption by requiring that $|e|,\lambda>0$ are sufficiently small and $\lambda|\ln(e)|$ remains uniformly bounded from above and below. The additional difficulty in the present case is that the bound for the inverse \eqref{bound for inverse of BS} fails in general. We use the following replacement, which is a consequence of \cite[Theorem 4.1]{MR2047381}, \begin{align*} \|(BS^{\rm low}_{\rm sing}(e)-z)^{-1}\| \leq \frac{1}{d(e;z)}\exp\left(a\,\frac{\|BS^{\rm low}_{\rm sing}(e)\|_{\mathfrak{S}^{d+1}}^{d+1}}{d(e;z)^{d+1}}+b\right), \end{align*} where $d(e;z)=\dist(z,\mathrm{spec}(BS^{\rm low}_{\rm sing}(e)))$ and $a,b>0$. Note that $\|BS^{\rm low}_{\rm sing}(e)\|_{\mathfrak{S}^{d+1}}\lesssim |\ln(1/e)|\|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}$ by Lemma \ref{lemma compactenss of V_S}. Thus, for a similar circle $\gamma$ of radius $c|\ln(1/e)|$ around $z_i(e)$, we find that \eqref{<1!} holds with an additional factor of $\exp(a/c^{d+1}+b)$ on the right, and hence we conclude $\rk P(0)=\rk P(1)$ as before. \subsection{Existence of positive eigenvalues of $\mathcal{V}_S$} It is well known that operators of the form \eqref{eq:defhlambda} have at least one negative eigenvalue if either $V\in L^1(\mathbb{R}^d)$ and $\int V>0$, or if $V\geq 0$ and $V$ does not vanish almost everywhere \cite{MR1970614,MR2365659,MR2643024,hoang2016quantitative}. In the latter case there are even infinitely many negative eigenvalues \cite[Corollary 2.2]{MR2643024}. By Theorem \ref{thm. asymptotics}, $H_\lambda$ has at least as many negative eigenvalues as $-\mathcal{V}_S$. We will therefore restrict our attention to this operator. By a slight modification of the following two examples (where the trial state is an approximation of the identity in Fourier space to a thickened sphere), this result may also be obtained without reference to Theorem \ref{thm. asymptotics}. Since $\mathcal{F}_S^*\phi=(\phi\mathrm{d}\omega)^{\vee}$ it follows from \eqref{eq:lswbs2} that \begin{align} \label{phiV_Sphi} \langle \phi,\mathcal{V}_S\phi\rangle = \int_{\mathbb{R}^d}V(x)|(\phi\mathrm{d}\omega)^{\vee}(x)|^2\mathrm{d} x,\quad \phi\in L^2(S). \end{align} If $\phi$ is a radial function, then so is $(\phi\mathrm{d}\omega)^{\vee}$.
In particular, for $\phi\equiv 1$ we get \begin{align*} \langle \phi,\mathcal{V}_S\phi\rangle = \int_0^\infty \mathrm{d} r\, r^{d-1}|(\mathrm{d}\omega)^{\vee}(r)|^2\left(\int_S V(r\omega)\mathrm{d}\omega\right). \end{align*} Standard stationary phase computations show that $(\mathrm{d}\omega)^{\vee}(r)=\mathcal{O}((1+r)^{-(d-1)/2})$ and that it oscillates on the unit scale; in fact, it is proportional to the Bessel function $J_{\frac{d-2}{2}}$, see, e.g., \cite[Appendix B.5]{MR3243734}. The integral is convergent if the spherical average of $V$ is in $L^1(\mathbb{R}_+,\min\{r^{d-1},1\}\mathrm{d} r)$. This condition is satisfied, e.g., if $V$ is short range, $|V(x)|\lesssim (1+|x|)^{-1-\epsilon}$ for some $\epsilon>0$. If the integral is positive, then $\mathcal{V}_S$ has a positive eigenvalue. For the second example we take $\phi$ as a normalized bump function adapted to a spherical cap of diameter $R^{-1/2}$ with $R>1$; this is called a Knapp example in the context of Fourier restriction theory. Then $(\phi\mathrm{d}\omega)^{\vee}$ will be a Schwartz function concentrated on a tube $T=T_{R}$ of length $R$ and radius $R^{1/2}$, centered at the origin. More precisely, let \begin{align} \label{normalized bump adapted to cap} \phi(\xi) = R^{\frac{d-1}{4}}\widehat{\chi}(R(\xi_1-1),R^{1/2}\xi') \end{align} where $\xi_1=\sqrt{1-|\xi'|^2}$ and $\widehat{\chi}$ is a bump function. We write $\xi=(\xi_1,\xi')\in \mathbb{R}\times\mathbb{R}^{d-1}$ and similarly for $x$ here. We may choose $\chi\geq 0$ such that $\chi\geq \mathbf{1}_{B(0,1)}$. Indeed, if $g$ is an even bump function, then we can take $\widehat{\chi}(\xi)=A^dB(g*g)(A\xi)$ for some $A>1,B>0$. Then the $L^2(S)$-norm of $\phi$ is bounded from above and below uniformly in $R$ and \begin{align*} (\phi\mathrm{d}\omega)^{\vee}(x)=R^{-\frac{d-1}{4}}e^{2\pi i x_1}\chi_{T}(x), \end{align*} where $\chi_T$ is a Schwartz function concentrated on \begin{align} \label{tube} T=\{x\in\mathbb{R}^d:\,|x_1|\leq R,|x'|\leq R^{1/2}\}, \end{align} i.e., a tube pointing in the $x_1$ direction. We can also take linear combinations of the wave packets \eqref{normalized bump adapted to cap} to obtain real-valued trial functions. Indeed, choosing $\chi$ symmetric and setting $\psi(\xi)=[\phi(\xi_1,\xi')+\phi(-\xi_1,\xi')]/2$, we get \begin{align*} (\psi\mathrm{d}\omega)^{\vee}(x) = R^{-\frac{d-1}{4}}\cos(2\pi x_1)\chi_{T}(x), \end{align*} with a slightly different $\chi_T$. Without loss of generality we may assume that $\chi_{T}(x)\geq 1$ for $x\in T$. By \eqref{phiV_Sphi}, if $V\in L^1_{\rm loc}(\mathbb{R}^d)$ and of tempered growth, then \begin{align*} \langle \psi,\mathcal{V}_S\psi\rangle = R^{-\frac{d-1}{2}}\int_{\mathbb{R}^d} V(x)\cos^2(2\pi x_1)|\chi_{T}(x)|^2\mathrm{d} x. \end{align*} In particular, this holds for $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, which we assume from now on. By H\"older and the rapid decay of $\chi_T$ away from $T$, we have that, for any $M,N>1$, \begin{align*} \Big|\int_{\mathbb{R}^d\setminus MT}V(x)|\chi_{T}(x)|^2\mathrm{d} x\Big| \leq \|\mathbf{1}_{\mathbb{R}^d\setminus MT}\chi_T^2\|_{\ell^{\frac{d+1}{d-1}}L^{\frac{d}{d-2}}}\|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}} \lesssim_{N}M^{-N}R^{\frac{d-1}{2}}\|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}. \end{align*} It follows that for $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$, \begin{align} \label{second example} \langle \psi,\mathcal{V}_S\psi\rangle \geq R^{-\frac{d-1}{2}}\int_{MT} V(x)\cos^2(2\pi x_1)|\chi_T(x)|^2\mathrm{d} x-C_{N}M^{-N}\|V\|_{\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}}.
\end{align} If the first term on the right is positive and bounded from below by, say, a fixed power of $M^{-1}$, then this expression is positive for large $R$. As a concrete example, consider the potential \begin{align*} V(x) = \frac{\cos(4\pi x_1)}{(1+|x_1|+|x'|^2)^{1+\epsilon}}, \end{align*} with $\epsilon>0$ (see also \cite{MR2024415,MR3713021,arXiv:1709.06989} for related examples). A straightforward calculation shows that $V\in \ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$. Since the average of $\cos^2(2\pi x_1)\cos(4\pi x_1)$ over a full period of $\cos(4\pi x_1)$ is always $\gtrsim 1$ and $|\chi_T|^2$ is approximately constant on the unit scale, with $\geq 1$ on $T$, a computation shows that the first term on the right side of \eqref{second example} is bounded from below by $MR^{-\epsilon}$. Taking $M=R^{\epsilon}$ yields positivity of the whole expression for sufficiently large $R$. Therefore, $-\mathcal{V}_S$, and hence $H_{\lambda}$, has a negative eigenvalue. This example has a straightforward generalization to more than one eigenvalue. Let $(\kappa_j)_{j=1}^K$ be mutually disjoint spherical caps of diameter $R^{-1/2}$ and let $\phi_j$ be normalized bump functions adapted to $\kappa_j$, similar to \eqref{normalized bump adapted to cap}. Note that $K\lesssim R^{\frac{d-1}{2}}$ since the caps are disjoint. If the condition following \eqref{second example} is satisfied for all tubes $T_j$ corresponding to the caps $\kappa_j$ (these are dual to the caps and centered at the origin), then the expression \eqref{second example} is positive (for large $R$) for every $\phi_j$. Since the $\phi_j$ are orthogonal (by Plancherel), it follows that $\mathcal{V}_S$ has at least $K$ positive eigenvalues. \subsection{Higher orders in the eigenvalue asymptotics} \label{ss:higherorders} Hainzl and Seiringer carried out the higher order asymptotic expansion of the eigenvalues $e_j(\lambda)$ in \cite[Formula (16)]{PhysRevB.77.184517} and \cite[Theorem 2.7]{MR2643024} under the assumption that $V$ has an $L^1$ tail. As in Theorems \ref{thm. asymptotics} and \ref{asymptoticsgen}, the purpose of this section is to show that their findings in fact hold for potentials decaying substantially more slowly. For the sake of simplicity and concreteness, we again only consider $T=|\Delta+1|$ here. Let $BS_\mathrm{reg}(e)=BS(e)-BS_\mathrm{sing}^\mathrm{low}(e)$ and recall that if $1+\lambda BS_\mathrm{reg}(e)$ is invertible, then the Birman--Schwinger principle \eqref{eq:bsprinc} asserts that $H_\lambda$ has a negative eigenvalue $-e$ if and only if the operator \begin{align} \label{eq:bsprinciplerewritten2} \frac{\lambda}{1+\lambda BS_\mathrm{reg}(e)} BS_\mathrm{sing}^\mathrm{low}(e) \end{align} has an eigenvalue $-1$. The following is a simple but useful observation which follows from a Neumann series argument and the fact that \eqref{eq:bsprinciplerewritten2} is isospectral to \begin{align*} \ln(1+\tau/e)\mathcal{F}_S V^{1/2} \frac{\lambda}{1+\lambda BS_\mathrm{reg}(e)}|V|^{1/2}\mathcal{F}_S^*\,. \end{align*} \begin{lemma} \label{neumann} Let $e,\lambda>0$ and suppose $V$ is real-valued and such that \begin{align} \label{eq:assumbssmall} \lambda\|BS_\mathrm{reg}(e)\|<1\,. \end{align} Then $H_\lambda$ has an eigenvalue $-e$ if and only if \begin{align} \label{eq:neumann} \lambda\ln(1+\tau/e)\mathcal{F}_S V^{1/2} \left(\sum_{n\geq0}(-1)^n(\lambda BS_\mathrm{reg}(e))^n\right) |V|^{1/2}\mathcal{F}_S^* \end{align} has an eigenvalue $-1$.
\end{lemma} Recall that assumption \eqref{eq:assumbssmall} is satisfied, for sufficiently small $\lambda$, for $V\in\ell^{\frac{d+1}{2}}L^{\frac{d}{2}}$ (cf. \eqref{BShigh bound} and Lemma \ref{proofkeybound}), i.e., in particular for $V\in L^{\frac{d+1}{2}-\epsilon}$ with $\epsilon\in(0,1/2]$. In fact, combining Lemma \ref{proofkeybound} for $BS_\mathrm{reg}^\mathrm{low}(e)$ and the Seiler--Simon inequality (cf. \cite[Theorem 4.1]{MR2154153}) for $BS^\mathrm{high}(e)$ shows\footnote{Using Cwikel's inequality \cite[Theorem 4.2]{MR2154153}, one obtains $\|BS_\mathrm{reg}(e)\|_{\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{(d-1)/2+\epsilon},\infty}}=o_V(\ln(1/e))$ for $\epsilon=1/2$.} \begin{align} \label{eq:bsreglpschatten} \begin{split} \|BS_\mathrm{reg}(e)\|_{\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{(d-1)/2+\epsilon}}} & \leq \|BS_\mathrm{reg}^\mathrm{low}(e)\|_{\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{(d-1)/2+\epsilon}}} + \|BS^\mathrm{high}(e)\|_{\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{(d-1)/2+\epsilon}}}\\ & = o_V(\ln(1/e))\,, \quad V\in L^{\frac{d+1}{2}-\epsilon}\,, \quad \epsilon\in(0,1/2)\,. \end{split} \end{align} We will now use \eqref{eq:neumann} to compute the eigenvalue asymptotics of $e_j(\lambda)$ to second order. To that end, we define \begin{align} \begin{split} \mathcal{W}_S(e) &:= \mathcal{F}_S V^{1/2}BS_\mathrm{reg}(e) |V|^{1/2}\mathcal{F}_S^* \end{split} \end{align} which is, modulo the $-\lambda^2\ln(1+\tau/e)$ prefactor, just the second summand in \eqref{eq:neumann}. Note that due to the additional operators $\mathcal{F}_S V^{1/2}$ on the left and $|V|^{1/2}\mathcal{F}_S^*$ on the right of $BS_\mathrm{reg}(e)$, estimate \eqref{eq:tsfsexplicitgen}, and $\lambda\|BS_\mathrm{reg}(e)\|=o_V(1)$, we infer \begin{align} \label{eq:wsschatten} \|\mathcal{W}_S(e)\|_{\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{(d-1)/2+\epsilon}}} = o_V(\ln(1/e))\,, \quad V\in L^{\frac{d+1}{2}-\epsilon}\,, \quad \epsilon\in(0,1/2]\,. \end{align} We will momentarily show the existence of $\mathcal{W}_S(0)$ and the limit $\lim_{e\searrow0}\mathcal{W}_S(e)=\mathcal{W}_S(0)$ in operator norm for $V\in L^{\frac{d+1}{2}-\epsilon}$. Let $b_S^j(\lambda)<0$ denote the negative eigenvalues of \begin{align} \label{eq:limitingopsecondorder} \mathcal{B}_S(\lambda) := \mathcal{V}_S - \lambda\mathcal{W}_S(0) \quad \text{on}\ L^2(S) \end{align} and recall that $\mathcal{V}_S\in\mathfrak{S}^{\frac{(d-1)((d+1)/2-\epsilon)}{d-(d+1)/2+\epsilon}}$ if $V\in L^{\frac{d+1}{2}-\epsilon}$ by \eqref{eq:tsfsexplicitgen}. This and \eqref{eq:wsschatten} show that $\mathcal{B}_S(\lambda)$ is a compact operator as well. Note that, by the definition of $BS_\mathrm{reg}(e)$, the operator $\mathcal{B}_S(\lambda)$ has at least one negative eigenvalue if $\mathcal{V}_S$ has a zero-eigenvalue. The asymptotic expansion of $e_j(\lambda)$ to second order then reads as follows. \begin{theorem} \label{secondorder} Let $d\geq3$ and $V\in L^{\frac{d+1}{2}-\epsilon}$ for some $\epsilon\in(0,1/2]$. If $\lim_{\lambda\searrow0}b_S^j(\lambda)<0$ then $H_\lambda$ has, for small $\lambda$, a corresponding negative eigenvalue $-e_j(\lambda)<0$ that satisfies \begin{align} \lim_{\lambda\to0}\left(\ln(1+1/e_j(\lambda))+\frac{1}{\lambda b_S^j(\lambda)}\right) = 0\,. \end{align} \end{theorem} The proof of Theorem \ref{secondorder} relies on the fact that $|V|^{1/2}(\mathcal{F}_{S_t^\pm}^*\mathcal{F}_{S_t^\pm}-\sqrt{1\pm t}\mathcal{F}_S^*\mathcal{F}_S)V^{1/2}$ is H\"older continuous in $t$, as an operator in $\mathcal{B}(L^2(\mathbb{R}^d),L^2(\mathbb{R}^d))$, for $t\leq\tau\in(0,1)$.
We already saw in Subsection \ref{pfkeybound} that this is true for $V\in\mathcal{S}(\mathbb{R}^d)$ (or more generally for $V$ satisfying $|V(x)|\lesssim(1+|x|)^{-1-\epsilon}$) because the trace map in the Sobolev trace theorem is H\"older continuous in $t$. The following proposition, whose proof is deferred to Appendix \ref{a:tsholdercont}, yields H\"older continuity of the (non-endpoint) Tomas--Stein theorem. \begin{proposition} \label{holdercontts} Let $0<\tau<1$, $1\leq p<\frac{2(d+1)}{d+3}$, $1/q=1/p-1/p'$, i.e., $1\leq q<(d+1)/2$, and $0<\alpha<\min\{(d+1)/2-q,q\}$. Then \begin{align} \label{eq:holdercont} \sup_{t\in(0,\tau)}\|\mathcal{F}_{S_t^\pm}^*\mathcal{F}_{S_t^\pm}-\sqrt{1\pm t}\mathcal{F}_S^*\mathcal{F}_S\|_{L^{p}\to L^{p'}} \lesssim_{\alpha,q,\tau} t^{\alpha/q}\,. \end{align} \end{proposition} \begin{proof}[Proof of Theorem \ref{secondorder}] Recall that $V\in L^{\frac{d+1}{2}-\epsilon}$ satisfies the assumption of Lemma \ref{neumann}. Thus, $H_\lambda$ has an eigenvalue $-e_j(\lambda)<0$ if and only if \begin{align} \begin{split} \lambda\ln(1+\tau/e_j(\lambda)) & [\mathcal{B}_S(\lambda) + \lambda(\mathcal{W}_S(0)-\mathcal{W}_S(e_j(\lambda)))\\ & \quad + \mathcal{F}_SV^{1/2}\left(\sum_{n\geq2}(-1)^n(\lambda BS_\mathrm{reg}(e_j(\lambda)))^n\right)|V|^{1/2}\mathcal{F}_S^*] \end{split} \end{align} has an eigenvalue $-1$. Thus, our claim is established once we show $\lim_{e\to0}\mathcal{W}_S(e)=\mathcal{W}_S(0)$ in the operator norm topology. In turn, by the definition of $\mathcal{W}_S(e)$, this follows once we show the existence of $\lim_{e\to0}BS_\mathrm{reg}(e)=BS_\mathrm{reg}(0)$, since $|V|^{1/2}\mathcal{F}_S^*$ and $\mathcal{F}_SV^{1/2}$ are bounded by the Tomas--Stein theorem \eqref{eq:tsexplicit}. We decompose $BS_\mathrm{reg}(e)=BS^\mathrm{high}(e)+BS_\mathrm{reg}^\mathrm{low}(e)$ and observe that $BS^\mathrm{high}(e)\to BS^\mathrm{high}(0)$ (e.g., by Plancherel and dominated convergence). On the other hand, Proposition \ref{holdercontts} shows that the difference \begin{align*} & BS_\mathrm{reg}^\mathrm{low}(e) - BS_\mathrm{reg}^\mathrm{low}(0)\\ & \quad = \sum_{\pm}\int_0^{\tau} \left(\sqrt{|V|}(\mathcal{F}_{S_t^{\pm}}^*\mathcal{F}_{S_t^{\pm}}-\sqrt{1\pm t}\,\mathcal{F}^*_{S}\mathcal{F}_{S})\sqrt{V}\right)\left(\frac{1}{t+e}-\frac{1}{t}\right)\,\frac{\mathrm{d} t}{2\sqrt{1\pm t}} \end{align*} vanishes in operator norm as $e\to0$. This concludes the proof. \end{proof}
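As an aside (our own illustration, not part of the proofs above), the locally uniform decay in Proposition \ref{tsuniform} can be checked numerically in the model case $d=3$, $P(\xi)=|\xi|^2-1$, where $S_t^+$ is the sphere of radius $\sqrt{1+t}$ and $(\mathrm{d}\sigma_{S_t^+})^\vee(x)$ has the closed form $2\sqrt{1+t}\,\sin(2\pi\sqrt{1+t}\,|x|)/|x|$. The sketch below compares direct quadrature of the oscillatory integral with the claimed $(1+|x|)^{-(d-1)/2}$ bound, uniformly over $t\in[0,\tau]$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# d = 3, P(xi) = |xi|^2 - 1: S_t^+ is the sphere of radius rho = sqrt(1+t).
# Direct quadrature of the inverse Fourier transform of its surface measure,
# reduced by rotational symmetry to a one-dimensional integral over
# u = cos(theta).
def ft_sphere_measure(r, rho):
    # the imaginary part vanishes by symmetry, so integrate the cosine only
    val, _ = quad(lambda u: np.cos(2*np.pi*r*rho*u), -1.0, 1.0, limit=200)
    return 2*np.pi*rho**2*val   # equals 2*rho*sin(2*pi*rho*r)/r

tau = 0.5
for r in (1.0, 5.0, 25.0, 125.0):
    sup_t = max(abs(ft_sphere_measure(r, np.sqrt(1.0 + t)))
                for t in np.linspace(0.0, tau, 21))
    # (1+r)^{(d-1)/2} = 1+r here; the last column stays bounded in r and t
    print(r, sup_t, sup_t*(1.0 + r))
\end{verbatim}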
\section{Introduction}\label{sec:introduction} The Kepler satellite revealed that planets are ubiquitous in the Milky Way \citep{coughlin2016,thompson2018}. However, most of the planets detected by Kepler lie too far away from us to enable measuring their masses and characterising their atmospheres. To obviate this problem, the TESS mission \citep{ricker2015} and, in the near future, the PLATO mission \citep{rauer2014} look for transiting planets orbiting bright, nearby stars. The first planet detected by TESS, $\pi$\,Men\,c, demonstrates the success of this strategy \citep{gandolfi2018,huang2018}. $\pi$\,Men\,c\ orbits a bright ($V$\,=\,5.65\,mag), nearby (d\,$\approx$\,18.3\,pc) G0\,V star previously known to host a long-period sub-stellar companion ($\pi$\,Men\,b). Thanks to the stellar brightness, both planetary mass and radius have been measured with high precision, yielding a mass of 4.52$\pm$0.81\,\ensuremath{M_{\oplus}}\ and a radius of 2.06$\pm$0.03\,\ensuremath{R_{\oplus}}\ \citep{gandolfi2018}. These measurements indicate that $\pi$\,Men\,c\ is a super-Earth with a bulk density of about 2.8\,g\,cm$^{-3}$, suggesting that the planet may host a significant atmosphere, possibly water-rich, which would become hydrogen-dominated in the upper layers following the dissociation of water \citep{garcia2020}. Preliminary one-dimensional (1D) hydrodynamic simulations \citep{kubyshkina2018} of a supposedly hydrogen-dominated atmosphere showed that such an atmosphere may be subject to strong escape of the order of 1.2$\times$10$^{10}$\,g\,s$^{-1}$, corresponding to roughly 1\% of the planetary mass per Gyr \citep{gandolfi2018}. Such a strong escape together with a hydrogen-dominated atmosphere would imply the presence of an extended gaseous envelope that could be directly probed by Ly$\alpha$ transit observations \citep[e.g.,][]{vidal2003}. \citet{garcia2020} presented the results of one Ly$\alpha$ transit observation carried out with the STIS spectrograph on board HST. The data led to a clear detection of the stellar Ly$\alpha$ line, but not of any planetary absorption. They obtained 1$\sigma$ upper limits for the planet-to-star radius ratio at the wavelengths covered by the Ly$\alpha$ line ($R_{\rm p,Ly\alpha}$/$R_{\rm star}$) of 0.13 and 0.12 in the [$-$215,$-$91]\,km\,s$^{-1}$ and [+57,+180]\,km\,s$^{-1}$ velocity ranges, respectively. Furthermore, the detection and reconstruction of the stellar Ly$\alpha$ line enabled them to constrain the stellar high-energy (X-ray\,+\,EUV; 5--912\,\AA; hereafter XUV) flux to 1350\,erg\,cm$^{-2}$\,s$^{-1}$ at the position of $\pi$\,Men\,c\ (i.e., about 6\,erg\,cm$^{-2}$\,s$^{-1}$ at 1\,AU, which is close to the solar value). This estimate also considered the results of \citet{france2018} obtained from an HST/COS far-ultraviolet spectrum and those of \citet{king2019} obtained from archival X-ray observations. In particular, \citet{france2018} and \citet{king2019} derived stellar XUV fluxes at the distance of the planet of 1060\,erg\,cm$^{-2}$\,s$^{-1}$ and 1810\,erg\,cm$^{-2}$\,s$^{-1}$, which are within a factor of $\approx$1.3 of that given by \citet{garcia2020}. In addition to presenting the HST observations, \citet{garcia2020} showed the results of 1D hydrodynamic simulations of the planetary upper atmosphere, accounting for hydrogen and oxygen (photo)chemistry, aiming at reproducing the lack of planetary Ly$\alpha$ absorption.
They showed that the mass-loss rate is somewhat sensitive to the atmospheric bulk composition, but that a water-dominated atmosphere could comply with the Ly$\alpha$ non-detection: the presence of a large amount of oxygen would reduce the extension of the atmosphere, which would also turn from mostly neutral to mostly ionised at a low enough altitude for neutral hydrogen to be undetectable at Ly$\alpha$ during transit. \citet{vidotto2020} presented 1D hydrodynamic simulations of the stellar--planetary wind interaction, proposing that the non-detection of hydrogen absorption during transit may be due to the confinement of the planetary atmosphere below the sonic point by the stellar wind. \begin{figure*}[ht!] \centering \includegraphics[width=7.cm]{./Figure1A.jpg} \hspace{1.5cm} \includegraphics[width=7.cm]{./Figure1B.jpg} \vspace{0.2cm} \includegraphics[width=7.cm]{./Figure1C.jpg} \hspace{1.5cm} \includegraphics[width=7.cm]{./Figure1D.jpg} \caption{Top left: proton density distribution in the ecliptic plane across the whole simulated domain computed with a stellar XUV flux at 1\,AU of 6\,erg\,cm$^{-2}$\,s$^{-1}$, a stellar mass-loss rate of 10$^{11}$\,g\,s$^{-1}$ (i.e., weak SW), and SW terminal velocity of 400\,km\,s$^{-1}$ (i.e., velocity of 250\,km\,s$^{-1}$ and temperature of 0.65\,MK at the planetary orbit), leading to a SW density at the planetary orbit of 3$\times$10$^2$\,cm$^{-3}$. The axes are scaled in units of planetary radii and the planet sits at the center of the coordinate reference frame, while the star lies at $X$\,=\,762. Top right: same as top-left, but for a stellar mass-loss rate of 2$\times$10$^{12}$\,g\,s$^{-1}$ (i.e., moderate SW), leading to a SW density at the planetary orbit of 6$\times$10$^3$\,cm$^{-3}$. Bottom left: same as top-right, but for the density distribution of ENAs, zooming in on the position of the planet. Bottom right: same as bottom-left, but for the electron temperature in the $X-Z$ plane. In all panels, the lines with arrows indicate streamlines of the corresponding fluids.} \label{fig:3dmaps} \end{figure*} We present here full three-dimensional (3D) hydrodynamic modelling of the atmosphere of $\pi$\,Men\,c, assumed to be composed of hydrogen and helium, and of its interaction with the stellar wind. Although the model does not take into account the range of (photo)chemistry that would be needed to simulate an atmosphere containing possibly large amounts of elements heavier than helium \citep{garcia2020}, it overcomes the limitations of the 1D assumption imposed by \citet{garcia2020} and \citet{vidotto2020}. The simulations cover two values of the stellar wind velocity and a wide range of stellar XUV emission and mass-loss rate values. With this multiparameter study, we aim at identifying the physical conditions reproducing the non-detection of the planetary Ly$\alpha$ absorption. Section~\ref{sec:model} gives a brief presentation of the modelling framework, while Sect.~\ref{sec:result} presents the results. In Sect.~\ref{sec:discussion}, we discuss the results and draw the conclusions. \section{The theoretical framework}\label{sec:model} We employ the 3D multi-fluid hydrodynamic model of \citet{ildar2018,ildar2020} and \citet{khodachenko2019}, which is an upgraded version of the two-dimensional one presented by \citet{khodachenko2015,khodachenko2017} and \citet{ildar2016}. We present here the most relevant features of the model.
The code solves numerically the hydrodynamic continuity, momentum, and energy equations for all species in the simulated multi-component flow. In this work, we consider the atmosphere to be composed of hydrogen and helium, with a helium mixing ratio of He/H\,=\,0.1 (approximately the solar He/H abundance ratio), and account for H, H$^+$, H$_2$, H$_2^+$, H$_3^+$, He, and He$^+$. Including H$_3^+$ is important, because it influences the planetary mass loss by up to 30\% \citep[see also][]{garcia2020}. The energetic neutral atoms (ENAs) generated by charge exchange between H (of the planetary wind; hereafter PW) and H$^+$ (of the stellar wind; hereafter SW) are calculated as an independent fluid, because their velocity and temperature are significantly different from those of the neutral hydrogen atoms of planetary origin. The photochemical reactions of the H-He plasma are described in \citet{khodachenko2015} and \citet{ildar2016}. Photoionisation of both hydrogen and helium results in strong heating by the produced photoelectrons, which is the driver of the hydrodynamic outflow of the planetary atmosphere. The model derives the corresponding heating term by integrating the stellar XUV spectrum of \citet{garcia2020}. As shown, for example, by \citet{khodachenko2019}, the heating term is computed as \begin{equation} \begin{multlined} \label{eq:heating} W_{\rm XUV} = \frac{1}{N_{\rm tot}}(\gamma_{\rm a}-1)\,\,\times\,\,n_{\rm a} [\langle(h\nu - E_{\rm ion})\sigma_{\rm XUV}F_{\rm XUV}\rangle\,\, \\ -\,\,n_{\rm e} \nu_{\rm Te} (E_{21}\sigma_{21} + E_{\rm ion}\sigma_{\rm ion})]\,, \end{multlined} \end{equation} where $N_{\rm tot}$ is the total density of all particles, including electrons, $\gamma_{\rm a}$ is the adiabatic specific heat ratio that we take to be 5/3, $n_{\rm a}$ is the density of hydrogen atoms, $h\nu$ is the photon energy, $E_{\rm ion}$ is the hydrogen ionisation energy of 13.6\,eV, $\sigma_{\rm XUV}$ is the wavelength dependent ionisation cross section to XUV radiation, $F_{\rm XUV}$ is the XUV stellar flux at the planetary orbital distance, $n_{\rm e}$ is the electron density, $\nu_{\rm Te}$ is the thermal velocity of electrons, $E_{21}$ is the $n$\,=\,1 level to $n$\,=\,2 level hydrogen excitation energy, and $\sigma_{21}$ and $\sigma_{\rm ion}$ are the hydrogen excitation and ionisation cross sections by electron impact, respectively. Under these conditions, the photoionisation time of unshielded hydrogen atoms at the planetary orbit is about 4 hours (an order-of-magnitude sketch of this estimate is given below). The model equations are solved in a non-inertial spherical reference frame fixed at the planetary center and rotating at the same rate as the planet orbits around the star, so that the planet faces the star always with the same side. The $X$ axis connects the star and the planet, while the $Y$ axis is perpendicular to the $X$ axis and lies in the ecliptic plane. The polar axis $Z$ is perpendicular to the ecliptic plane and completes the so-called tidally locked spherical reference frame. In this frame, we properly account for the non-inertial terms, namely the generalised gravity potential and the Coriolis force. Since the gas beyond the planetary exobase is mostly ionised, the species in the simulation domain are collisional through the action of the Coulomb force, justifying the hydrodynamic approach \citep[e.g.,][]{debrecht2020,vidotto2020}.
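As a rough plausibility check of the photoionisation time quoted above, one can estimate the ionisation rate of an unshielded H atom monochromatically. The following sketch is our own illustration, not part of the model: the representative photon energies are assumptions, whereas the model integrates the actual XUV spectrum, so only the order of magnitude is meaningful.
\begin{verbatim}
# Order-of-magnitude check (ours) of the ~4 h photoionisation time of
# unshielded H at the orbit of pi Men c, using a crude monochromatic
# approximation with assumed representative photon energies.
F_xuv = 1350.0             # erg cm^-2 s^-1, XUV flux at the planetary orbit
sigma0 = 6.3e-18           # cm^2, H photoionisation cross section at 13.6 eV
erg_per_eV = 1.602e-12

for E_eV in (20.0, 30.0, 40.0):
    sigma = sigma0*(13.6/E_eV)**3          # sigma roughly scales as nu^-3
    rate = sigma*F_xuv/(E_eV*erg_per_eV)   # photoionisations per atom, s^-1
    print(E_eV, 1.0/rate/3600.0)           # timescale in hours: ~3 to ~50 h
\end{verbatim}
The soft end of this range is consistent with the value quoted above; the spread illustrates the strong sensitivity of the timescale to the spectral shape, which is why the model integrates the full spectrum.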
Furthermore, the model also takes into account radiation pressure, which we find to be mostly negligible compared to the other forces \citep[e.g.,][]{murray2009,khodachenko2017,khodachenko2019,debrecht2020}. We tested this by computing models with a stellar Ly$\alpha$ emission two and ten times larger than the baseline of 5.6\,erg\,s$^{-1}$\,cm$^{-2}$ at 1\,AU. We found that only the model with the highest Ly$\alpha$ flux affects the planetary Ly$\alpha$ absorption profile, and only at velocities between $-$100 and 0\,km\,s$^{-1}$, which are anyway too low to be detectable by the observations and are contaminated by interstellar medium (ISM) absorption and geocoronal emission \citep{garcia2020}. The fluid velocity at the planetary surface is taken to be zero. Furthermore, to keep the number of grid points in the model small enough to be manageable by the numerical code, the radial mesh is highly non-uniform, with the grid step increasing linearly from the planetary surface. This allows us to resolve the highly stratified upper atmosphere of the planet, where the required grid step is as small as $\Delta$$r$\,=$R_{\rm p}$/400, where $r$ is the radial distance from the center of the planet and $R_{\rm p}$ the planetary radius. As initial state, we take a fully neutral atmosphere in barometric equilibrium composed of H$_2$ and He. For all simulations, we considered the system parameters of \citet{gandolfi2018} that we reproduce in Table~\ref{tab:system_parameters}. At the inner boundary of the simulation domain, at $r$\,=\,$R_{\rm p}$, we set a temperature and a pressure of 1000\,K and 0.05\,bar, respectively. The former is close to the planetary equilibrium temperature obtained considering zero albedo (see Table~\ref{tab:system_parameters}), while the latter is the pressure at which the planetary atmosphere is optically thick to photons with a wavelength longer than 10\,\AA\ (i.e., the blue edge of the XUV range). \begin{table}[h!] \caption{Adopted system parameters of $\pi$\,Men\ and $\pi$\,Men\,c\ from \citet{gandolfi2018}.} \label{tab:system_parameters} \begin{center} \begin{tabular}{l|c} \hline \hline Parameter & Value \\ \hline Stellar mass, $M_{\rm s}$ [$M_{\odot}$] & 1.02 \\ Stellar radius, $R_{\rm s}$ [$R_{\odot}$] & 1.10 \\ Semi-major axis of planet c, $a$ [AU] & 0.06702 \\ Planetary mass, $M_{\rm p}$ [$M_{\rm Earth}$] & 4.52 \\ Planetary radius, $R_{\rm p}$ [$R_{\rm Earth}$] & 2.06 \\ Planetary equilibrium temperature, T$_{\rm eq}$, [K] & 1147 \\ \hline \end{tabular} \end{center} \end{table} The model also incorporates self-consistently the flow of the SW plasma, which we consider to be composed of protons, in the way described by \citet{khodachenko2019} and \citet{ildar2020}, enabling one to model the whole system. Thus, the simulation domain has one further boundary at the base of the stellar corona. At distances from the stellar center shorter than 20 stellar radii, the SW is accelerated by an empirical heating term, derived from an analytical 1D polytropic Parker-like model \citep{keppens1999}, that we compute as \begin{equation} \label{eq:heating_star} W_{\rm SW} = (\gamma_{\rm a} - \gamma_{\rm p})\,\,\times\,\,T_{\rm p}(r_{\rm s})\,\,\times\,\,\mathrm{div}\,V_{\rm p}(r_{\rm s})\,.
\end{equation} In Eq.~\ref{eq:heating_star}, $\gamma_{\rm a}$ is the adiabatic specific heat ratio that we take to be 5/3 in order to correctly model shocks and $r_{\rm s}$ is the distance from the center of the star, while $\gamma_{\rm p}$, $T_{\rm p}$, and $V_{\rm p}$ are respectively the polytropic index, temperature, and velocity obtained from the polytropic solution. The simulated stellar wind is isotropic in space, stationary in time, and in good agreement with the Parker analytical solution. We further employ the simulations to compute the absorption at the position of the Ly$\alpha$ line following \citet{ildar2018}. This procedure has already been successfully employed to generate synthetic observations for the hot Jupiter HD\,209458\,b \citep{khodachenko2017,ildar2018,ildar2020} and the warm Neptune GJ\,436\,b \citep{khodachenko2019}, providing a physically reasonable and self-consistent interpretation of Ly$\alpha$ transit observations, as well as of transit observations of resonance lines of minor species (e.g., C, O, Si). \section{Results}\label{sec:result} The velocity of the PW driven by XUV heating reaches moderate values of about 10\,km\,s$^{-1}$, which are far too low to produce Doppler-shifted absorption beyond the wavelengths strongly contaminated by ISM absorption and geocoronal emission. Therefore, absorption in the wings of the Ly$\alpha$ line during transit can be caused only by ENAs generated during the interaction of the PW with the SW. $\pi$\,Men is an $\approx$5\,Gyr old solar-type star \citep{gandolfi2018}, which is therefore likely to have solar-like wind properties. To explore how strong the Ly$\alpha$ absorption might be under different physical conditions, we simulate the PW-SW interaction considering two winds typical of solar-like plasma, namely a fast and a slow wind of terminal velocities $V_{\rm sw,\infty}$\,=\,800\,km\,s$^{-1}$ and 400\,km\,s$^{-1}$, respectively, further varying the stellar mass-loss rate, hence the SW density, by about an order of magnitude. Another parameter directly affecting the Ly$\alpha$ absorption is the stellar XUV emission, which we also vary by about an order of magnitude. Figure~\ref{fig:3dmaps} presents simulation results for two cases. The first case is for a very weak SW with a density ten times smaller than solar. The stellar mass-loss rate in this case is 10$^{11}$\,g\,s$^{-1}$. Under these conditions, the SW, which is still subsonic at the planetary orbital separation, diverts the planetary outflow only far from the planet, at an $X$\,=\,$r$/$R_{\rm p}$ value of $\approx$200--300. Therefore, the SW-PW interaction occurs in a low-density region of the planetary atmosphere, leading to a small ENA production. The second case is for a moderate SW, namely with a stellar mass-loss rate of 2$\times$10$^{12}$\,g\,s$^{-1}$ \citep[the average solar mass-loss rate is about 2.5$\times$10$^{12}$\,g\,s$^{-1}$; e.g.,][]{phillips1995}. This second case is significantly different from the previous one, because the SW pressure is enough to stop the planetary outflow close to the planet, at $X$ $\approx$20--30, redirecting the escaping planetary material towards the tail and generating a large amount of ENAs. Because of its supersonic nature, this interaction generates a bow shock, as evidenced by the temperature distribution (Fig.~\ref{fig:3dmaps}). The ENAs, which produce significant absorption at high velocities in the blue wing of the Ly$\alpha$ line, are generated inside the bow shock region.
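To connect the stellar mass-loss rates used here with the SW densities quoted in the caption of Fig.~\ref{fig:3dmaps}, one can use mass-flux conservation, $n = \dot{M}/(4\pi a^2 m_{\rm p} v)$. The following sketch is our own cross-check, not an output of the model; with the local wind speed of 250\,km\,s$^{-1}$ quoted in the caption, it reproduces the quoted densities to within a factor of $\approx$2, the residual difference presumably reflecting the detailed wind solution near the orbit.
\begin{verbatim}
import numpy as np

# Cross-check (ours): SW proton density at the planetary orbit from
# mass-flux conservation, n = Mdot/(4*pi*a^2*m_p*v).
m_p = 1.6726e-24              # g, proton mass
a = 0.06702*1.496e13          # cm, orbital distance of pi Men c
v = 250e5                     # cm/s, SW speed at the orbit (Fig. 1 caption)

for mdot in (1e11, 2e12):     # g/s, weak and moderate stellar mass-loss rates
    n = mdot/(4.0*np.pi*a**2*m_p*v)
    print(mdot, round(n))     # ~1.9e2 and ~3.8e3 cm^-3, vs 3e2 and 6e3 quoted
\end{verbatim}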
Figure~\ref{fig:details} presents the details of the simulated distribution of species computed for the moderate SW (i.e., top-right and bottom panels of Fig.~\ref{fig:3dmaps}) along the star-planet line. It shows that the simulation domain is split by the shock into two regions, one dominated by the escaping planetary material and one dominated by SW material. The supersonic SW remains undisturbed until the bow shock region, where it experiences a sharp deceleration, compression, and heating. The region dominated by planetary material, instead, is characterised by a relatively low temperature and velocity, though before the ionopause the PW is supersonic. At the ionopause, the normal component of the proton velocity goes to zero. However, the neutral PW particles penetrate into the shock region, where they are rapidly ionised by the hot electrons and XUV radiation. In the shocked region, adjacent to the ionopause, the energetic SW protons charge exchange with planetary atoms producing ENAs. For this simulation, we obtained a planetary mass-loss rate of about 2$\times$10$^{10}$\,g\,s$^{-1}$, which is in good agreement with previous estimates based on 1D simulations \citep{gandolfi2018,garcia2020}; we obtain roughly the same planetary atmospheric mass-loss rate (within a few \%) for all conducted simulations. We remark that, besides the difference in the geometry of the simulations, the mass-loss rates of \citet{gandolfi2018} were obtained accounting only for H and those of \citet{garcia2020} for H and O, while our simulations consider H and He. \begin{figure} \centering \vspace{-0.3cm} \includegraphics[width=\hsize,clip]{./Figure2.png}\vspace{-0.5cm} \caption{Distribution profiles of major species along the planet-star line obtained in the simulation with the moderate SW (i.e., top-right and bottom panels of Fig.~\ref{fig:3dmaps}). The left axis is for the density (in log[cm$^{-3}$]) of H$_2$ (orange line), H (red line), H$^+$ (blue line), and ENAs (green line). The right axis is for the proton temperature (in 10$^4$\,K; black solid line) and velocity (in km\,s$^{-1}$; dotted line). The vertical dashed lines indicate the approximate positions of the ionopause and of the bow shock. The planet lies at $X$\,=\,0, while the star is located to the right.} \label{fig:details} \end{figure} The result that models with different assumptions and geometries lead to similar planetary mass-loss rates is found also for other planets \citep[see, for example, Table 2 of][]{kubyshkina2018b}. This may be because the physical properties (mainly density and velocity) of the gas at the sonic point are robust against the typical assumptions taken in the different codes, though detailed comparisons of the model outputs would be necessary to confirm this. However, the most interesting result is possibly the similarity of mass-loss rates computed by 1D and 3D models, where the former are integrated across the whole planet (i.e., 1D mass-loss rates multiplied by 4$\pi$). Indeed, although one might expect that mass loss would be strongly reduced on the night side, the large size of the upper atmosphere of close-in planets and the redistribution of heat across it lead to similar conditions throughout the majority of the upper atmosphere. Figure~\ref{fig:details} enables one to gather a rough estimate of the absorption at the position of the Ly$\alpha$ line. 
The integral of the density of ENAs along the $X$ axis (i.e., the ENA column density; hereafter NL) is equal to NL\,=\,4.5$\times$10$^4$\,cm$^{-3}$\,$R_{\rm p}$\,=\,0.6$\times$10$^{14}$\,cm$^{-2}$, while the resonant absorption cross-section of the Ly$\alpha$ line is $\sigma$\,=\,6$\times$10$^{-14}$\,cm$^2$ \citep{khodachenko2017}. Assuming that ENAs form a shell of radius $R_{\rm abs,ENAs}$\,$\sim$\,30\,R$_{\rm p}$\,$\sim$\,0.5\,$R_{\rm star}$ around the planet (see Fig.~\ref{fig:3dmaps}), the Ly$\alpha$ absorption caused by ENAs is 0.5$^2$\,NL\,$\sigma$\,$\approx$\,0.9, meaning that the effective size of the planet at Ly$\alpha$ wavelengths, $R_{\rm p,Ly\alpha}$/$R_{\rm star}$, is $\approx$\,1.0 (this arithmetic is reproduced in the short numerical sketch below). Figure~\ref{fig:absorption} shows the actual absorption profile obtained from the simulation computed considering the weak and moderate SW (i.e., top panels of Fig.~\ref{fig:3dmaps}), and the distribution of the wavelength-integrated absorption depths (i.e., $1-{\rm e}^{-{\rm NL}\,\sigma}$) across the stellar disk obtained from the two simulations. The absorption profiles have been computed accounting for all neutral hydrogen particles, taking into account their individual velocities and temperatures. In the weak SW case, the absorption takes place mostly in the [$-$30,50]\,km\,s$^{-1}$ velocity range, where any planetary absorption signature is unobservable, because of contamination by ISM absorption and geocoronal airglow emission. In contrast, in the moderate SW case, the absorption is largest at higher negative velocities and comes mostly from the shocked region, as shown by the bottom panel of Fig.~\ref{fig:absorption}, which presents the distribution of the high-velocity absorption, hence that due to ENAs, across the stellar disk. \begin{figure} \centering \vspace{-0.3cm} \includegraphics[width=\hsize]{./Figure3A.png}\vspace{-0.5cm} \includegraphics[width=6.cm]{./Figure3B.jpg} \caption{Top: Out-of-transit (black circles) and in-transit (red circles) observed Ly$\alpha$ profiles \citep[from][]{garcia2020}. The dashed line shows the synthetic in-transit Ly$\alpha$ profile obtained from the simulation computed under the weak SW conditions (i.e., top-left panel in Fig.~\ref{fig:3dmaps}), while the solid line is for the simulation computed with the moderate SW (i.e., top-right and bottom panels of Fig.~\ref{fig:3dmaps}). Of the two simulations represented in this figure, only the weak SW case is consistent with the HST non-detection. Bottom: distribution of the Ly$\alpha$ absorption along the line of sight averaged over the blue wing of the line, in the [$-$215,$-$91]\,km\,s$^{-1}$ velocity range, as seen by a remote observer at mid-transit and considering the moderate SW. The absorption ranges between 0 and 1, where 0 means no absorption, while 1 means full absorption. The black circle at the outer boundary indicates the star and the white horizontal line shows the planetary orbital path accounting for the impact parameter (the planet moves from left to right). At mid-transit, the planet is located at the intersection of the horizontal white solid line and the vertical dashed black line.} \label{fig:absorption} \end{figure} Finally, we performed a systematic study of the Ly$\alpha$ planetary absorption depth as a function of the stellar XUV flux, ranging between 3 and 20\,erg\,s$^{-1}$\,cm$^{-2}$ at 1\,AU, and of the stellar mass-loss rate, ranging between 2$\times$10$^{11}$ and 4$\times$10$^{12}$\,g\,s$^{-1}$. For this grid of models, the SW terminal velocity is kept constant at the slow SW value, namely 400\,km\,s$^{-1}$.
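For reference, the back-of-the-envelope estimate given before Fig.~\ref{fig:absorption} can be reproduced with the numbers quoted in the text. This sketch is our own illustration; note that ${\rm NL}\,\sigma>1$, so the simple product is an order-of-magnitude figure rather than a proper radiative-transfer result (the full calculation is what is shown in Fig.~\ref{fig:absorption}).
\begin{verbatim}
import numpy as np

# Reproduction (ours) of the rough Lya absorption estimate in the text.
R_p = 2.06*6.371e8            # cm, planetary radius
NL = 4.5e4*R_p                # cm^-2, ENA column density (~0.6e14)
sigma = 6e-14                 # cm^2, resonant Lya absorption cross section
f_disk = 0.5**2               # ENA shell of ~0.5 R_star covering the disk

print(NL)                                # ~5.9e13 cm^-2
print(f_disk*NL*sigma)                   # ~0.9, the crude product in the text
print(f_disk*(1.0 - np.exp(-NL*sigma)))  # saturated version, ~0.24
\end{verbatim}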
We ran an additional, smaller grid considering the fast SW, namely 800\,km\,s$^{-1}$, and stellar mass-loss rates ranging between 10$^{11}$ and 7$\times$10$^{11}$\,g\,s$^{-1}$. We remind the reader that the estimated XUV flux of $\pi$\,Men at 1\,AU is 6\,erg\,s$^{-1}$\,cm$^{-2}$, while the average solar mass-loss rate is 2.5$\times$10$^{12}$\,g\,s$^{-1}$, hence comparable to the strongest SW we considered (i.e., that with the highest stellar mass-loss rate). Figure~\ref{fig:summary} summarises the results of the systematic analysis. It presents the absorption in terms of $R_{\rm p,Ly\alpha}$/$R_{\rm star}$ integrated over the blue wing of the Ly$\alpha$ line in the [$-$215,$-$91]\,km\,s$^{-1}$ velocity range, for which \citet{garcia2020} obtained 1$\sigma$ and 3$\sigma$ upper limits of $R_{\rm p,Ly\alpha}$/$R_{\rm star}$\,=\,0.13 and 0.24, respectively. We do not consider the red wing of the Ly$\alpha$ line in the analysis, because none of the computed models led to absorption redwards of the region contaminated by ISM absorption and geocoronal emission, in agreement with the observations. We first analyse the results obtained from the larger grid computed considering the slow SW. Figure~\ref{fig:summary} indicates that all simulations in the upper-left quadrant (i.e., lower XUV flux and larger stellar mass-loss rate) show strong Ly$\alpha$ absorption, contrary to the observations. For each considered XUV flux value, there is a narrow band of stellar wind densities, following roughly the lower of the two green solid lines in Fig.~\ref{fig:summary}, across which the planetary absorption decreases significantly. This is because a stronger SW (i.e., larger stellar mass-loss rate) will stop the expanding PW closer to the planet (see e.g. Fig.~\ref{fig:3dmaps}), where the ENA production is efficient. In contrast, a weaker SW (i.e., smaller stellar mass-loss rate) will result in inefficient ENA production and therefore the planetary absorption in the blue wing of Ly$\alpha$ comes only from natural line broadening, which amounts to $R_{\rm p,Ly\alpha}$/$R_{\rm star}$\,$\approx$\,0.04. Despite this small value, the planetary plasmasphere is as large as $\sim$10\,R$_{\rm p}$, because the SW is not strong enough to confine the planetary atmosphere. \begin{figure} \centering \includegraphics[width=9cm]{./Figure4.png} \caption{Planetary Ly$\alpha$ absorption in $R_{\rm p,Ly\alpha}$/$R_{\rm star}$ integrated in the [$-$215,$-$91]\,km\,s$^{-1}$ velocity range as a function of stellar XUV flux (at 1\,AU; x-axis) and input stellar mass-loss rate (left y-axis). Red circles indicate the results obtained with a SW temperature and terminal velocity of 0.64\,MK and 400\,km\,s$^{-1}$, respectively. Blue circles are for a SW temperature and terminal velocity of 1.2\,MK and 800\,km\,s$^{-1}$, respectively. The size of the red circles is proportional to the planetary absorption, whose value is given beside each circle. For comparison with observations, the green solid lines show the approximate position of the 0.13 and 0.24 absorption levels with respect to the results given by the red circles. The gray curve gives the planetary mass-loss rate (right y-axis) obtained from the simulations as a function of stellar XUV flux in the case of the slow SW.
The planetary mass-loss rate is very weakly dependent on the stellar mass-loss rate (see Sect.~\ref{sec:discussion}).} \label{fig:summary} \end{figure} Figure~\ref{fig:summary} shows that, in comparison to the slow SW case (i.e., SW terminal velocity of 400\,$\rm km\,s^{-1}$), the fast SW (i.e., SW terminal velocity of 800\,$\rm km\,s^{-1}$), even with the same stellar mass-loss rate, produces significantly higher Ly$\alpha$ absorption. This is because the pressure of the fast SW is higher, confining the planetary atmosphere inside a bow shock closer to the planet. This implies that the fast SW interacts with a denser PW, leading to a larger ENA production that generates stronger absorption in the blue wing of the Ly$\alpha$ line (see bottom panel of Fig.~\ref{fig:absorption}). \section{Discussion and conclusion}\label{sec:discussion} We ran 3D hydrodynamic simulations of the interaction between the expanding upper atmosphere of $\pi$\,Men\,c\ and the wind of its host star, considering a planetary atmosphere composed of H and He. We ran simulations assuming two distinct values of the SW terminal velocity (400 and 800\,km\,s$^{-1}$) and a range of stellar XUV fluxes and mass-loss rates. We find that, assuming a slow SW and for stellar XUV fluxes close to those estimated for $\pi$\,Men, the non-detection of Ly$\alpha$ absorption during transit can be reproduced by considering stellar winds significantly weaker than the average solar wind, with a density more than 6 times smaller. Reproducing the Ly$\alpha$ non-detection employing a faster SW would require an even lower SW density. We find that with a solar-like SW, fitting the Ly$\alpha$ non-detection would require an improbably high stellar XUV flux, namely about 4 times that estimated for $\pi$\,Men, while the highest XUV estimate is just $\approx$1.3 times larger \citep{king2019} and uncertainties on the reconstructed XUV fluxes based on Ly$\alpha$ measurements are typically of the order of 30\% \citep{linsky2014}. This is because a higher XUV flux more rapidly ionises hydrogen, increasing the upper atmospheric heating and expansion, pushing the interaction region with the SW farther away from the planet. Furthermore, at such high stellar XUV fluxes the planet would have lost almost half of its current mass within the estimated age of the system, even without accounting for the fact that the star was more active in the past. On the basis of the estimated stellar XUV flux, accounting for its uncertainty, and of evolutionary considerations, we conclude that a high stellar XUV emission is unlikely to be the cause of the non-detection of planetary Ly$\alpha$ absorption. This result is driven by the fact that Ly$\alpha$ planetary atmospheric absorption at the velocity probed by the observations can be caused only by ENAs, which become more abundant with increasing SW velocity and/or density. Therefore, similarly to \citet{vidotto2020}, we find that a stronger (i.e., faster and/or denser) SW compresses the planetary atmosphere on the side facing the star, reducing its size. However, a stronger SW penetrates deeper into the neutral part of the expanding planetary atmosphere, increasing the density of ENAs and thus the Ly$\alpha$ absorption at the velocities probed by the observations. Nevertheless, the set of simulations described in Sect.~\ref{sec:result} does not reproduce the case assumed by \citet{vidotto2020} in which the SW is so strong that it compresses the planetary atmosphere below the sonic point.
Therefore, we ran a further simulation considering the slow SW (i.e., 400\,km\,s$^{-1}$), a stellar XUV flux at 1\,AU of 6\,erg\,cm$^{-2}$\,s$^{-1}$, and a stellar mass-loss rate of 1.7$\times$10$^{13}$\,g\,s$^{-1}$, corresponding to a SW density at the planetary orbit of 5$\times$10$^4$\,cm$^{-3}$, closely corresponding to Model A in \citet{vidotto2020}. In agreement with \citet{vidotto2020}, we find that the bow shock and ionopause lie much closer to the planet, namely at 16.5 and 10.5\,$R_{\rm p}$, respectively. We also find that the planetary flow on the day side reaches its maximum speed of 4.9\,km\,s$^{-1}$ at a distance of 6\,$R_{\rm p}$, while the sound speed is 14.4\,km\,s$^{-1}$, implying a subsonic flow. However, we also find that the planetary mass-loss rate is just 2.5\% smaller than what we obtained from the other simulations, therefore confirming that a subsonic PW does not necessarily entail a smaller mass-loss rate \citep{garcia2007}. In one-dimensional models, this is the result of partial compensation in density and velocity changes when the prescribed downstream pressure is increased. Indeed, higher pressures slow down the flow, which responds by increasing the temperature and in turn the density (through a larger scale height). Because the mass-loss rate scales with the product of density and velocity, their individual changes tend to cancel out. There is a limit to this, as is for instance expected if the planetary wind temperatures become very high and the plasma loses a significant amount of energy through radiation. This picture is qualitatively consistent with the parametric study conducted by \citet{christie2016}, who report a decrease in the mass-loss rate when the planetary wind becomes weaker and easier to confine by the stellar wind (see their Fig. 7). The quantitative differences with \citet{christie2016} probably arise from the different treatment of the equation of state of the gas. It is unclear if their prescription of a polytropic equation of state, which results in a roughly isothermal planetary wind, can capture the compensation effects described above. Furthermore, we find that the absorption in the blue wing of Ly$\alpha$ is still very large, namely $R_{\rm p,Ly\alpha}$/$R_{\rm star}$\,$\approx$\,0.64, which is in disagreement with the observations. \citet{vidotto2020} could not have reached this result, because of the 1D assumption and, more importantly, the lack of ENAs in their model. Therefore, although a very strong SW is indeed able to slightly reduce the planetary mass-loss rate, it would also lead to a stronger, rather than weaker, absorption signature in the blue wing of the Ly$\alpha$ line profile. Since estimates indicate that the wind strength of $\pi$\,Men is close to solar \citep[or even stronger;][]{vidotto2020}, our simulations clearly suggest that it is very unlikely that $\pi$\,Men\,c\ hosts an atmosphere dominated by hydrogen and helium, in agreement with the considerations of \citet{gandolfi2018} and \citet{garcia2020}. Therefore, we argue that, despite the rather low bulk planetary density, $\pi$\,Men\,c's atmosphere cannot be strongly hydrogen-dominated and should thus contain a non-negligible amount of heavier elements, as suggested by \citet{garcia2020}. Future observations should focus on looking for elements, such as He, C, and O, to shed more light on the planetary atmospheric composition.
Furthermore, additional observations are needed to check that the available measurements were not taken under atypical and rare stellar conditions, namely either a very low SW density or a high XUV flux. \begin{acknowledgements} I.S., M.Kh., and M.R. received support by the RSF project 18-12-00080 in the frame of which the numerical modeling, key for this study, has been developed. Parallel computing has been performed at the Computation Center of Novosibirsk State University, the SB RAS Siberian Supercomputer Center, the Joint Supercomputer Center of RAS, and the Supercomputing Center of the Lomonosov Moscow State University. I.S. and M.Kh. also acknowledge the RFBR project 20-02-00520. M.Kh. acknowledges support from the projects I2939-N27 and S11606-N16 of the Austrian Science Fund (FWF). M.Kh. acknowledges grant number 075-15-2019-1875 from the government of the Russian Federation under the project called ``Study of stars with exoplanets''. We thank the anonymous referee for the useful comments that helped improve the manuscript. \end{acknowledgements}
\section{Introduction} There has been a continued interest in the theory of fuzzy measures ever since Sugeno \cite{sug} defined this measure, which has the monotonicity property instead of additivity. This notion has found many applications in the theory of fuzzy sets defined by Zadeh \cite{zadeh}. Later on, several other measures were defined and studied which are non-additive, each with its own advantages. We refer to \cite{cho}, \cite{ich}, \cite{mas}, \cite{weber1}, \cite{weber2} and references therein for such considerations. In \cite{pap2} (see also \cite{pap1}, \cite{pap3}, \cite{pap4}, \cite{pap5}, \cite{pap6}, \cite{flores}), Pap initiated the so-called $\op$-decomposable measures based on the pseudo algebraic operations, i.e., pseudo-addition $\op$ and pseudo-multiplication $\ot$. Consequently, in this framework, an integral and a derivative have been defined. Since the pseudo operations involve a generator function, usually denoted by $g$, the corresponding notions are also referred to as $g$-addition, $g$-multiplication, $g$-derivative, $g$-integral etc. In the present paper, we make an attempt to contribute to $g$-calculus by deriving several classical inequalities in this framework, namely, Young's inequality, H\"older's inequality, Minkowski's inequality and the Hermite-Hadamard inequality. Let us point out that for the case $1<p<\infty$, the $g$-analogues of H\"older's and Minkowski's inequalities were derived in \cite{agahi} by using the corresponding classical inequalities. In our case, we derive these inequalities by using the $g$-Young inequality, which we first prove in this paper. Moreover, we also establish certain other variants of H\"older's inequality, and we cover the case $p<1,\, p\ne 0$ as well. The paper is organized as follows. In Section 2, we collect the entire ``pseudo-machinery'' that is required throughout the paper. Here, apart from the known notions and concepts, we define the pseudo-logarithmic function. We also define the pseudo-absolute value, which helps us establish the inequalities in a wider domain. Section 3 starts with Young's inequality, followed by H\"older's inequality and its variants, and finally we prove Minkowski's inequality in this section. In Section 4, we define pseudo-convexity and in this context prove the Hermite-Hadamard inequality. As a special case, the $g$-analogue of the geometric-logarithmic-arithmetic mean inequality is obtained. Also in this section, we prove a refined version of the Hermite-Hadamard inequality. Finally, in Section 5, we summarize the work done in this paper and make some suggestions for future work. \section{The Pseudo Setting} In this section, we collect some basic algebraic operations, elementary functions, the derivative and the integral in the framework of pseudo algebra. Most of these notions are already known; however, we define the absolute value and the logarithm formally here. \subsection{The algebraic operations} Let $\op$ denote the pseudo-addition, a function $\op:\R\times\R\to\R$ which is commutative, nondecreasing in each component, associative and has a zero element. We shall assume that $\op$ is a strict pseudo-addition, which means that $\op$ is strictly increasing in each component and continuous. By Aczel's Theorem \cite{ling}, for each strict pseudo-addition, there exists a monotone function $g$ (called the generator for $\op$), $g:\R\to\R^+$, such that \begin{equation*} x\op y = \gi\left(g(x) + g(y)\right).
\end{equation*} Similarly, we denote by $\ot$ the pseudo-multiplication, which is a function $\ot:\R\times\R\to\R$ that is commutative, nondecreasing in each component, associative and has a unit element. The operation $\ot$ is defined by \begin{equation*} x\ot y = \gi\left(g(x) \cdot g(y)\right). \end{equation*} From now on, the set $\R$ equipped with the pseudo operations $\op$ and $\ot$ with the corresponding generator $g$ will be denoted by $\Rg$. The zero and the unit elements of $\Rg$ will be denoted, respectively, by $0_g$ and $1_g$. In \cite{bc}, the authors considered that the generator function $g:\R_g\to\R$ is strictly monotone (either strictly increasing or strictly decreasing), onto, $g(0_g)=0$, $g'(x)\ne 0$, for all $x$, $g\in C^2$ and $\gi\in C^2$. Using this map, the following operations are well defined: \begin{equation*} x\om y = \gi\left(g(x) - g(y)\right), \quad x\oti y=\gi\left( \frac{g(x)}{g(y)}\right),\,\,{\rm provided}\,\, y\ne 0_g. \end{equation*} The order relation in $\Rg$, denoted by $\le_g$, satisfies the following: \begin{equation*} x\le_g y \iff x\om y \le 0_g. \end{equation*} If $x\le_g y$, we can also write it as $y\ge_g x$. If $x\le_g y$ and $x\ne y$, we shall write it as $x<_gy$ or, equivalently, $y>_gx$. In order to make $\Rg$ a linear space over the field $\R$, we define the pseudo-scalar product: \begin{equation*} n\odot x = \gi(ng(x)), \quad x\in \Rg,\,\, n\in \R. \end{equation*} It was pointed out in \cite{bc} that the operations $\odot$ and $\ot$ are different. There was a need to define the scalar product $\odot$ since the compatibility condition $1\odot x=x$ is not satisfied by $\ot$. \begin{remark} $(\Rg,\op,\ot,\le_g)$ is an ordered and complete algebra. \end{remark} \subsection{Differentiation and integration} The pseudo-derivative, more commonly called the $g$-derivative, of a suitable function $f:[a,b]\subseteq\R\to\R_g$ is defined by \begin{equation*} D^\op f(x):=\frac{d^\op f(x)}{dx}=\gi\left( (g\circ f)'(x) \right). \end{equation*} In \cite{mar} (see also \cite{bc}), a more general $g$-derivative was defined as \begin{equation*} D_g f(x):= \frac{d_g f(x)}{dx}=\lim_{h\to 0}\left[ f(x\op h)\om f(x) \right]\oti h \end{equation*} and it was shown that \begin{equation*} D_g f(x)=\gi\Big((g\circ f)'(x)/g'(x) \Big). \end{equation*} The pseudo-integral or the $g$-integral of a suitable function $f:[a,b]\subseteq \R\to\Rg$ is defined by \begin{equation}\label{e2.1} \int^\op_{[a,b]}f(t)\ot dt=\gi\left( \int_a^b (g\circ f)(x)\,dx \right). \end{equation} Similar to the notion of the $g$-derivative, in \cite{mar}, a more general $g$-integral was defined which is given by \begin{equation*} \int^g_{[a,b]}f(t)\ot dt=\gi\left( \int_a^b (g\circ f)(x)g'(x)\,dx \right). \end{equation*} \begin{remark} The $g$-derivative $D^\op$ and the $g$-integral $\displaystyle\int^\op_{[a,b]}$ can be obtained as special cases of, respectively, $D_g$ and $\displaystyle\int^g_{[a,b]}$. Since the justification requires notions which are beyond the scope of this paper, we refer to \cite{bc}, \cite{mar} for details. Unless specified otherwise, in this paper, the $g$-derivative and $g$-integral will be used as represented by $D_g$ and $\displaystyle\int^g_{[a,b]}$, respectively.
\end{remark} Some of the properties of the $g$-integral are mentioned below: \begin{itemize} \item [(a)] $\displaystyle \int^g_{[a,b]}(f\op h)\ot dt=\int^g_{[a,b]}f\ot dt \op \int^g_{[a,b]} h\ot dt$ \item [(b)] $\displaystyle \int^g_{[a,b]}(\lambda\ot f)\ot dt=\lambda\ot\int^g_{[a,b]}f\ot dt $ \item [(c)] $\displaystyle \int^g_{[a,b]}(\lambda\od f)\ot dt=\lambda\od\int^g_{[a,b]}f\ot dt $ \item [(d)] $\displaystyle f\le_g h\Rightarrow \int^g_{[a,b]}f\ot dt \le_g \int^g_{[a,b]} h\ot dt$ \end{itemize} \subsection{The exponent} Define the set \begin{equation*} \Rg^+=\{x\in\Rg : 0_g\le_g x\}. \end{equation*} In view of the operation $\ot$, for $x\in\Rg$ and $n\in\mathbb N$, we define \begin{equation*} x\pwr n:=\underbrace{x\ot x\ot\hdots \ot x}_{n-{\rm times}}=\gi(g^n(x)). \end{equation*} This notion of exponent can be extended to a general $p\in (0,\infty)$; it is defined (see \cite{agahi}, \cite{ich}) for all $x\in \Rg^+$ as \begin{equation*} x\pwr p=\gi(g^p(x)). \end{equation*} It can further be generalized to cover negative powers as well: for $p\in (0,\infty)$, we define \begin{equation*} x\pwr {-p}=1_g\oti x\pwr p, \quad x\in \Rg^+. \end{equation*} For $p=0$, we define $x\pwr 0=1_g$. It is easy to check that for $p,q\in (0,\infty)$, $x,y\in\Rg^+$ and $\alpha\in\R$, the following laws of exponents hold: \begin{enumerate} \item [(i)] $x\pwr p \ot x\pwr q = x\pwr {p+q}$ \item [(ii)] $\left( x\pwr p \right)\pwr q=x\pwr {pq}$ \item [(iii)] $(x\ot y) \pwr p =x\pwr p \ot y\pwr p$ \item [(iv)] $(x\oti y) \pwr p =x\pwr p \oti y\pwr p$ \item [(v)] $(\alpha\od x) \pwr p =\alpha^ p \od x\pwr p$ \end{enumerate} \subsection{Absolute value} For $x\in\Rg$, we define its $g$-absolute value as follows: \begin{equation*} |x|_g:= \begin{cases} x,&{\rm if}\, 0_g\le_g x\\ -x,& {\rm if}\, x<_g 0_g. \end{cases} \end{equation*} It can be seen that \begin{equation*} |x|_g=\gi\left(|g(x)| \right). \end{equation*} Note that $$ |x\op y|_g\le_g |x|_g \op |y|_g $$ if $g$ is increasing and the inequality is reversed if $g$ is decreasing. \subsection{Exponential and logarithm functions} The $g$-exponential function for $x\in\Rg$, as defined in \cite{bc}, is given by \begin{equation*} E\pwr x=\gi(e^{g(x)}), \end{equation*} where $e^{g(x)}$ is the standard exponential function. It is natural to define the $g$-logarithm function by \begin{equation*} \Ln x=\gi\left(\ln{g(x)}\right), \end{equation*} where $\ln g(x)$ is the standard logarithm function. \begin{remark} Unlike in the standard case, we may have that $E\pwr x<_g 0_g$. In fact, if the generator $g$ is monotonically decreasing, then for any $x\in \Rg$, $E\pwr x\le_g 0_g$, since $\gi$ is also monotonically decreasing. This suggests that in the pseudo case, the logarithm can be defined for ``negative'' numbers, which in fact is true if, again, the generator $g$ is decreasing. \end{remark} Through the following proposition, we provide several properties of $E\pwr x$ and $\Ln x$, the proofs of which can be worked out easily. \begin{pro}\label{pr2.4} The following hold: \begin{enumerate}[label=\rm(\roman*)] \item $D_g(E\pwr x)=E\pwr x$ \item $\displaystyle\int^g E\pwr x \ot dx =E\pwr x$ \item $E\pwr x\ot E\pwr y=E\pwr{x\op y}$ \item $E\pwr {\Ln x}=x$ \item $\Ln E\pwr x=x$ \item $\Ln (x\ot y)=\Ln x\op \Ln y$ \end{enumerate} \end{pro} \section {Inequalities} In this section, we shall prove the $g$-analogues of Young's, H\"older's and Minkowski's inequalities. Here and throughout, for any $p\in\R,\,p\ne 0$, $p'$ will denote the conjugate index to $p$, i.e., ${1\over p} + {1\over {p'}}=1$.
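Although the paper is purely analytic, the pseudo-machinery above is easy to experiment with numerically. The following short Python sketch is an illustration only, under the assumed concrete generator $g(x)=e^x$ (strictly increasing, with $\gi=\ln$, $0_g=-\infty$ and $1_g=0$); it implements the operations defined in this section and spot-checks the laws of exponents, some identities of Proposition \ref{pr2.4}, and the $g$-Young inequality \eqref{e3.1} proved below.
\begin{verbatim}
import math
import random

# Illustrative generator (an assumption of this sketch): g(x) = exp(x),
# strictly increasing, with g^{-1} = log, 0_g = -infinity and 1_g = 0.
g, gi = math.exp, math.log

def oplus(x, y):  return gi(g(x) + g(y))      # pseudo-addition
def otimes(x, y): return gi(g(x) * g(y))      # pseudo-multiplication
def odiv(x, y):   return gi(g(x) / g(y))      # pseudo-division
def power(x, p):  return gi(g(x) ** p)        # x^(p)
def E(x):         return gi(math.exp(g(x)))   # g-exponential
def Ln(x):        return gi(math.log(g(x)))   # g-logarithm (needs x > 0 for this g)

def close(u, v):  return math.isclose(u, v, rel_tol=1e-9, abs_tol=1e-9)

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(0.1, 3.0), random.uniform(0.1, 3.0)
    p, q = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    # laws of exponents (i)-(iii)
    assert close(otimes(power(x, p), power(x, q)), power(x, p + q))
    assert close(power(power(x, p), q), power(x, p * q))
    assert close(power(otimes(x, y), p), otimes(power(x, p), power(y, p)))
    # Proposition 2.4, items (iii)-(vi)
    assert close(otimes(E(x), E(y)), E(oplus(x, y)))
    assert close(E(Ln(x)), x) and close(Ln(E(x)), x)
    assert close(Ln(otimes(x, y)), oplus(Ln(x), Ln(y)))
    # g-Young inequality (3.1): for increasing g, <=_g is the usual order
    P = random.uniform(1.01, 10.0)
    PP = P / (P - 1)  # conjugate index
    lhs = otimes(x, y)
    rhs = oplus(odiv(power(x, P), gi(P)), odiv(power(y, PP), gi(PP)))
    assert lhs <= rhs + 1e-9
\end{verbatim}
Replacing the generator changes all of the operations consistently, which is the essence of the $g$-calculus; any other strictly monotone smooth generator can be substituted in the same sketch.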
\subsection {Young's inequality} The classical Young's inequality asserts that for $1<p<\infty$ and $a,b>0$, it holds that $$ ab \le \frac{a^p}{p} + \frac{b^{p'}}{p'}, $$ whereas the inequality is reversed if $p<1,\,p\ne 0$. We prove the $g$-analogue of this inequality below: \begin{theorem}\label{t-y1} Let $1<p<\infty$. \begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing, then for all $a,b\in \Rg^+$, the following Young's type inequality holds: \begin{equation}\label{e3.1} a\ot b \le_g \left( a\pwr p \oti\gi(p) \right) \op \left( b\pwr {p'} \oti\gi(p') \right). \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then for all $a,b\in \Rg^+$, the inequality \eqref{e3.1} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof} (a) In view of Proposition \ref{pr2.4}, we have \begin{align}\label{e3.2} a\ot b &=E^{\Ln (a\ot b)}\nonumber\\ &=E^{(\Ln a\op \Ln b)}. \end{align} Note that for any $1<p<\infty$ \begin{align*} \Ln a\pwr p &=\gi\left( \ln g^p(a) \right)\\ &=\gi(p\ln g(a))\\ &=\gi\Big(g(\gi(p))\cdot g(\gi(\ln g(a))) \Big)\\ &=\gi(p)\ot \gi(\ln g(a))\\ &=\gi(p)\ot \Ln a, \end{align*} which gives that \begin{equation*} \Ln a\pwr p \oti \gi(p)=\Ln a. \end{equation*} Thus \begin{align}\label{e3.4} \Ln a &= \Ln a\pwr p \oti \gi(p)\nonumber\\ &=\gi\Big(\ln g^p(a)\Big) \oti \gi(p)\nonumber\\ &=\gi\left(\frac{\ln g^p(a)}{p}\right). \end{align} Similarly, since $1<p'<\infty$, we have \begin{equation}\label{e3.5} \Ln b = \gi\left(\frac{\ln g^{p'}(b)}{p'}\right). \end{equation} By using \eqref{e3.4} and \eqref{e3.5} in \eqref{e3.2}, applying the weighted arithmetic-geometric mean inequality $e^{u/p+v/p'}\le {1\over p}e^{u} + {1\over p'}e^{v}$ and the fact that $\gi$ is increasing (since $g$ is so), we get \begin{align} a\ot b&=E^{\left(\gi\big(\frac{\ln g^p(a)}{p}\big)\op \gi\big(\frac{\ln g^{p'}(b)}{p'}\big) \right) }\nonumber\\ &=E^{\left(\gi {\left({1\over p}\ln g^p(a) + {1\over p'}\ln g^{p'}(b) \right)} \right) }\nonumber\\ &=\gi \left( e^{\left({1\over p}\ln g^p(a) + {1\over p'}\ln g^{p'}(b) \right)} \right)\nonumber\\ &\le_g \gi {\left({1\over p}e^{\ln g^p(a)} + {1\over p'}e^{\ln g^{p'}(b)} \right)}\label{e3.6}\\ &=\gi {\left({1\over p}{g^p(a)} + {1\over p'}{g^{p'}(b)} \right)}=:A.\label{e3.7} \end{align} Further, we find that $$ a\pwr p \oti\gi(p)=\gi(g^p(a)) \oti\gi(p) = \gi\left(\frac{g^p(a)}{p}\right) $$ and similarly $$ b\pwr {p'} \oti\gi(p')=\gi(g^{p'}(b)) \oti\gi(p') = \gi\left(\frac{g^{p'}(b)}{p'}\right), $$ so that \begin{equation}\label{e3.8} a\pwr p \oti\gi(p) \op b\pwr {p'} \oti\gi(p') = \gi\left(\frac{g^p(a)}{p} + \frac{g^{p'}(b)}{p'} \right)=:B. \end{equation} The assertion now follows in view of \eqref{e3.7} and \eqref{e3.8}, since $A\om B= 0_g$, i.e., $A=B$. \medskip (b) Since the generator $g$ is decreasing and consequently $\gi$ is so, the inequality \eqref{e3.6} gets reversed and the assertion follows. \end{proof} Below we prove the $g$-Young's inequality for the case $p<1,\,p\ne 0$. \begin{theorem}\label{t-y2} Let $p<1,\,p\ne 0$. \begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing, then for all $a,b\in \Rg^+$, the following Young's type inequality holds: \begin{equation}\label{e-y1} a\ot b \ge_g \left( a\pwr p \oti\gi(p) \right) \op \left( b\pwr {p'} \oti\gi(p') \right). \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then for all $a,b\in \Rg^+$, the inequality \eqref{e-y1} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof} (a) Without any loss of generality, we assume that $0<p<1$ so that $p'<0$, for otherwise we can interchange the roles of $p$ and $p'$. Take $r=1/p$ and $s=-p'/p$.
Then $1<r,s<\infty$ and $1/r + 1/s = 1$. Set $$ x=(a\ot b)\pwr p\quad {\rm and}\quad y=b\pwr {-p}. $$ We apply the inequality \eqref{e3.1} to $x,y$ with the exponents $r,s$ and obtain \begin{equation}\label{e-y3} x\ot y \le_g \left( x\pwr r \oti\gi(r) \right) \op \left( y\pwr {s} \oti\gi(s) \right). \end{equation} Now, it can be calculated that $$ x\ot y=a\pwr p,\quad x\pwr r \oti\gi(r)=(a\ot b)\oti \gi\Big(\frac{1}{p}\Big),\quad y\pwr {s} \oti\gi(s) = b\pwr {p'}\oti \gi \Big(\frac{-p'}{p}\Big), $$ which on substituting in \eqref{e-y3} and rearranging the terms gives the result. \medskip (b) This can be obtained using similar arguments and applying Theorem \ref{t-y1}(b). \end{proof} \subsection{H\"older's inequality} Let $1<p<\infty$. We denote by $L_p(\R)$ the Lebesgue space which consists of all measurable functions defined on $\R$ such that $$ \|f\|_{p,\R}=\left(\int_\R |f(x)|^p\, dx\right)^{1/p}<\infty. $$ By a weight function, we mean a function which is measurable, positive and finite almost everywhere (a.e.) on $\R$. For a weight function $w$, we denote by $L_{p,w}(\R)$ the weighted Lebesgue space which consists of all measurable functions defined on $\R$ such that $$ \|f\|_{p,w,\R}=\left(\int_\R |f(x)|^p w(x)\, dx\right)^{1/p}<\infty. $$ The spaces $L_p(\R)$ and $L_{p,w}(\R)$ are both Banach spaces. Let $f:\R\to \Rg$ be a measurable function and $p\in\R,\, p\ne 0$. We define \begin{equation}\label{l1} [f]_{p,\Rg}^g:=\left(\int_{\Rg}^g |f(x)|_g\pwr p\ot dx\right)\pwr {{1\over p}}\quad {\rm and}\quad [f]_{p,g,\R}:=\left(\int_{\R} |f(x)|^pg'(x)\,dx\right)^{{1\over p}}. \end{equation} \begin{remark}\label{r-3.3} The expression $[f]_{p,g,\R}$ in \eqref{l1} is not a norm of some weighted Lebesgue space unless $1<p<\infty$ and $g'$ qualifies to be a weight function, which is the case if, e.g., $g$ is an increasing function and consequently $g'>0$. In such a case, $[f]_{p,g,\R}$ enjoys all the properties of a norm. \end{remark} It is of independent interest to observe that the expressions $[f]_{p,\R_g}^g$ and $[f]_{p,g,\R}$ are connected. Precisely, we prove the following: \begin{pro}\label{t-l1} Let $f:\R\to \Rg$ be a measurable function and $p\in\R,\, p\ne 0$. Then $$ [f]_{p,\Rg}^g = \gi\Big([g\circ f]_{p,g,\R} \Big). $$ \end{pro} \begin{proof} Let $p>0$. We have $$ |f(x)|_g\pwr p = \Big(\gi(|g(f(x))|)\Big)\pwr p = \gi\Big(|g(f(x))|^p\Big), $$ so that $$ \int_{\Rg}^g |f(x)|_g\pwr p\ot dx = \gi\left(\int_\R |g(f(x))|^p g'(x)\, dx\right), $$ which gives that $$ [f]_{p,\Rg}^g = \gi\left(\left(\int_\R |g(f(x))|^p g'(x)\, dx\right)^{1/p} \right) $$ and we are done in this case. Similar arguments can be employed to prove the assertion for $p<0$. \end{proof} An element $x\in\R_g$ is said to be finite, written $x<_g\infty$, if there exists $c\in\R_g$ such that $x<_g c$. We prove the following H\"older's type inequality: \begin{theorem}\label{t-h1} Let $1<p<\infty$. \begin{enumerate} \item [\rm (a)] Let the generator $g$ be increasing and $f,h:\R\to \Rg$ be measurable functions such that $[f]_{p,\Rg}^g<_g\infty$ and $[h]_{p',\Rg}^g<_g\infty$. Then $[f\ot h]^g_{1,\Rg}<_g\infty$ and the following H\"older's type inequality holds: \begin{equation}\label{e3.9} [f\ot h]^g_{1,\Rg}\le_g [f]^g_{p,\Rg}\ot [h]^g_{p',\Rg}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e3.9} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof} (a) We shall be using Theorem \ref{t-y1} for appropriate $a,b\in\Rg^+$.
Choose \begin{equation}\label{e3.10} a=|f(x)|_g \oti [f]^g_{p,\Rg} \end{equation} and \begin{equation}\label{e3.11} b=|h(x)|_g \oti [h]^g_{p',\Rg}. \end{equation} We have \begin{equation}\label{e3.12} \int_{\Rg}^g a\ot b \ot dx = \left( \int_{\Rg}^g |f(x)\ot h(x)|_g\ot dx \right) \oti \left( [f]^g_{p,\Rg}\ot [h]^g_{p',\Rg} \right). \end{equation} Further, it can be seen that \begin{equation*} a\pwr p = |f(x)|_g\pwr p \oti \Big([f]^g_{p,\Rg}\Big)\pwr p \end{equation*} so that \begin{equation*} {a\pwr p}\oti {\gi(p)} =\left\{ |f(x)|_g\pwr p \oti \Big([f]^g_{p,\Rg}\Big)\pwr p \right\} \oti {\gi(p)}, \end{equation*} which on $g$-integrating gives \begin{align*} \int_{\Rg}^g {a\pwr p}\oti {\gi(p)} \ot dx &= \left\{\left(\int_{\Rg}^g |f(x)|_g\pwr p \ot dx \right)\oti \Big([f]^g_{p,\Rg}\Big)\pwr p \right\} \oti {\gi(p)}\\ &=1_g\oti \gi(p)\\ &=\gi\Big(\frac{g(1_g)}{p}\Big). \end{align*} Similarly, one can obtain that \begin{equation*} \int_{\Rg}^g {b\pwr {p'}}\oti {\gi(p')} \ot dx = \gi\Big(\frac{g(1_g)}{p'}\Big), \end{equation*} which together with the last equation gives \begin{align} \label{e3.13} \left(\int_{\Rg}^g {a\pwr p}\oti {\gi(p)} \ot dx\right) \op \left(\int_{\Rg}^g {b\pwr {p'}}\oti {\gi(p')} \ot dx\right) &= \gi\Big(\frac{g(1_g)}{p}\Big) \op \gi\Big(\frac{g(1_g)}{p'}\Big)\nonumber\\ &=\gi\Big(\frac{g(1_g)}{p}+ \frac{g(1_g)}{p'}\Big)\nonumber\\ &=\gi(g(1_g))\nonumber\\ &=1_g. \end{align} Now, the inequality \eqref{e3.9} follows in view of \eqref{e3.12} and \eqref{e3.13} if we take $a$ and $b$ given, respectively, by \eqref{e3.10} and \eqref{e3.11} in Theorem \ref{t-y1} and take the $g$-integral on both sides of the resulting inequality. (b) This follows since the inequality \eqref{e3.1} gets reversed for decreasing $g$. \end{proof} A generalized version of Theorem \ref{t-h1} is the following: \begin{theorem}\label{t-h2} Let $1<p,q,r<\infty$ be such that ${1\over p}+{1\over q} = {1\over r}$. \begin{enumerate} \item [\rm (a)] Let the generator $g$ be increasing and $f,h:\R\to \Rg$ be measurable functions such that $[f]_{p,\Rg}^g<_g\infty$ and $[h]_{q,\Rg}^g<_g\infty$. Then $[f\ot h]^g_{r,\Rg}<_g\infty$ and the following H\"older's type inequality holds: \begin{equation}\label{e3.14} [f\ot h]^g_{r,\Rg}\le_g [f]^g_{p,\Rg}\ot [h]^g_{q,\Rg}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e3.14} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof} (a) Write $P=p/r$, $Q=q/r$ so that ${1\over P}+{1\over Q} = 1$. Applying the inequality \eqref{e3.9} to the functions $F:=f\pwr r,\,H:=h\pwr r$ with the exponents $P,Q$, we obtain \begin{equation}\label{e3.15} [F\ot H]^g_{1,\Rg}\le_g [F]^g_{P,\Rg}\ot [H]^g_{Q,\Rg}. \end{equation} It can be calculated that \begin{align*} [F]^g_{P,\Rg} &= \left( [f]^g_{p,\Rg} \right)\pwr{r},\\ [H]^g_{Q,\Rg} &= \left( [h]^g_{q,\Rg} \right)\pwr{r} \end{align*} and \begin{equation*} [F\ot H]^g_{1,\Rg}=\left( [f\ot h]^g_{r,\Rg}\right)\pwr{r}, \end{equation*} using which in \eqref{e3.15} and adjusting the exponents, \eqref{e3.14} follows. \medskip (b) This is an immediate consequence of the fact that $g$ is decreasing. \end{proof} As a consequence of Theorem \ref{t-h2}, we prove the following: \begin{theorem} Let $1<p,q,r<\infty$ be such that ${t\over p}+{{1-t}\over q} = {1\over r}$, where $0<t<1$.
\begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing and $f:\R\to \Rg$ is a measurable function such that $$ [f]^g_{p,\Rg}<_g \infty \quad {\rm and}\quad [f]^g_{q,\Rg}<_g \infty, $$ then $[f]^g_{r,\Rg}<_g \infty$ and the following inequality holds: \begin{equation}\label{e3.16} [f]^g_{r,\Rg}\le_g \Big([f]^g_{p,\Rg}\Big)\pwr t \ot \Big([f]^g_{q,\Rg}\Big)\pwr {1-t}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e3.16} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof} (a) Applying Theorem \ref{t-h2} to the functions $f\pwr t,\,f\pwr{1-t}$ with the exponents $p\over t$ and $q\over{1-t}$, we obtain that $$ [f\pwr t\ot f\pwr{1-t}]^g_{r,\Rg}\le_g [f\pwr t]^g_{p/t,\Rg}\ot [f\pwr {1-t}]^g_{q/(1-t),\Rg}. $$ Now, since it can be worked out that $$ [f\pwr t]^g_{p/t,\Rg}=\Big([f]^g_{p,\Rg}\Big)\pwr t $$ and $$ [f\pwr {1-t}]^g_{q/(1-t),\Rg}=\Big([f]^g_{q,\Rg}\Big)\pwr {1-t}, $$ the assertion follows. \medskip (b) This is an immediate consequence of the fact that $g$ is decreasing. \end{proof} Theorem \ref{t-h1} provides the $g$-H\"older's inequality for $1<p<\infty$. Here, we used the $g$-Young's inequality for the same range of $p$, given in Theorem \ref{t-y1}. Along the same lines, by using Theorem \ref{t-y2}, the $g$-H\"older's inequality for $p<1,\, p\ne 0$ can be obtained. We only state the theorem below: \begin{theorem}\label{t-rh} Let $0<p<1$. \begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing, then for all measurable functions $f,h:\R\to \Rg$, the following inequality holds: \begin{equation}\label{e3.17} [f\ot h]^g_{1,\Rg}\ge_g [f]^g_{p,\Rg}\ot [h]^g_{p',\Rg}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e3.17} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{remark} Theorem \ref{t-h1} is a generalization of the $g$-H\"older's inequality proved in \cite{agahi} for the case $1<p<\infty$. There the authors used the integral $\displaystyle\int^{\op}_{[a,b]}$ as defined in \eqref{e2.1}, which is a special case of the integral $\displaystyle\int^g_{[a,b]}$ used in this paper. Moreover, our proof is based on the $g$-Young's inequality and we have covered the case $p<1,\,p\ne 0$ as well. The other variants of the $g$-H\"older's inequality that we prove in this subsection are also new. \end{remark} \subsection{Minkowski's inequality} \begin{theorem}\label{t-m1} Let $1<p<\infty$. \begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing, then for all measurable functions $f,h:\R\to \Rg$, the following Minkowski's type inequality holds: \begin{equation}\label{e-m1} [f\op h]^g_{p,\Rg}\le_g [f]^g_{p,\Rg}\op [h]^g_{p,\Rg}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e-m1} holds in the reverse direction. \end{enumerate} \end{theorem} \begin{proof}(a) Since $g$ is increasing, we have \begin{equation}\label{e-m2} |f\op h|_g\le_g |f|_g \op |h|_g \end{equation} and consequently \begin{align}\label{e-m3} \int_{\Rg}^g |f\op h|_g\pwr p\ot dx &=\int_{\Rg}^g |f\op h|_g\pwr {p-1} \ot |f\op h|_g\ot dx\nonumber\\ &\le_g \int_{\Rg}^g |f\op h|_g\pwr {p-1} \ot |f|_g\ot dx \op \int_{\Rg}^g |f\op h|_g\pwr {p-1} \ot |h|_g\ot dx\nonumber\\ &=: I_1\op I_2. \end{align} We apply H\"older's inequality \eqref{e3.9} and obtain (note that $(p-1)p'=p$) \begin{align*} I_1 &\le_g [(f\op h)\pwr {p-1}]^g_{p',\Rg}\ot [f]^g_{p,\Rg}\\ &=\left([f\op h]^g_{p,\Rg}\right)\pwr{p/p'}\ot [f]^g_{p,\Rg}.
\end{align*} Similarly \begin{align*} I_2 &\le_g\left([f\op h]^g_{p,\Rg}\right)\pwr{p/p'}\ot [h]^g_{p,\Rg}, \end{align*} so that \eqref{e-m3} gives $$ \int_{\Rg}^g |f\op h|_g\pwr p\ot dx\le_g \left([f]^g_{p,\Rg} \op [h]^g_{p,\Rg} \right) \ot \left([f\op h]^g_{p,\Rg}\right)\pwr{p/p'}. $$ Now, adjusting the powers of the factor $[f\op h]^g_{p,\Rg}$ (note that $p-p/p'=1$), the assertion follows. \medskip (b) This follows immediately since the inequality \eqref{e-m2} and H\"older's inequality both hold in the reverse direction in this case. \end{proof} The case of the Minkowski inequality when $p<1,\, p\ne 0$ can be discussed similarly. We only state the result below: \begin{theorem}\label{t-m2} Let $p<1,\, p\ne 0$. \begin{enumerate} \item [\rm (a)] If the generator $g$ is increasing, then for all measurable functions $f,h:\R\to \Rg$, the following Minkowski's type inequality holds: \begin{equation}\label{e-m4} [f\op h]^g_{p,\Rg}\ge_g [f]^g_{p,\Rg}\op [h]^g_{p,\Rg}. \end{equation} \item [\rm (b)] If the generator $g$ is decreasing, then the inequality \eqref{e-m4} holds in the reverse direction. \end{enumerate} \end{theorem} \section{Hermite-Hadamard Inequality} In this section, we shall consider the special case of the integral $\displaystyle\int^g_{[a,b]} f(x)\ot dx$, i.e., $\displaystyle\int^\op_{[a,b]} f(x)\ot dx$ given by \eqref{e2.1}. The classical Hermite-Hadamard inequality asserts that if $f:[a,b]\to \R$ is a convex function, then the following holds: $$ f\left({a+b\over 2}\right) \le { 1\over{b-a}} \int_a^b f(x)\, dx \le \frac{f(a)+f(b)}{2}, $$ and the inequalities are reversed if $f$ is concave. The aim of this section is to derive a $g$-analogue of this inequality. First, we define the following: \begin{definition} A function $f:[a,b]\to \Rg$ is said to be pseudo-convex on $[a,b]$ if for all $x,y\in[a,b]$ and all $0\le\lambda\le 1$ $$ f(\l x + (1-\l)y)\le_g \l\od f(x)\op (1-\l)\od f(y). $$ \end{definition} We shall denote $$ \sigma :=\l b + (1-\l) a\quad {\rm and}\quad \delta:=\l a + (1-\l)b. $$ \begin{theorem}\label{t-hh1} Let $f:[a,b]\to \Rg$ be a pseudo-convex function. Then the following Hermite-Hadamard inequality holds: \begin{equation}\label{e-hh1} f\left({a+b\over 2}\right) \le_g \left( 1\over{b-a}\right)\od \int^\op_{[a,b]} f(x)\ot dx \le_g {1\over 2}\od \big(f(a)\op f(b)\big). \end{equation} \end{theorem} \begin{proof} Since $f$ is pseudo-convex, we have \begin{align*} f\left(\sigma + \delta \over 2\right) &\le_g {1\over 2}\od (f(\sigma) \op f(\delta))\\ &\le_g {1\over 2}\od \Big( \l\od f(b)\op (1-\l)\od f(a)\op \l\od f(a)\op (1-\l)\od f(b)\Big)\\ &= {1\over 2}\od \big(f(a)\op f(b)\big). \end{align*} Thus, since $\sigma + \delta = a+b$, it follows that \begin{equation}\label{e-hh2} f\left({a+b\over 2}\right) \le_g {1\over 2}\od (f(\sigma) \op f(\delta)) \le_g {1\over 2}\od \big(f(a)\op f(b)\big). \end{equation} Now, making the variable substitution $\l b + (1-\l) a = x$, we find that \begin{align*} \int^\op_{[0,1]} f(\sigma)\ot d\l&= \int^\op_{[0,1]} f(\l b + (1-\l) a)\ot d\l\\ &= \gi\left( \int_0^1 (g\circ f)(\l b + (1-\l) a) \,d\l \right)\\ &=\gi\left({1\over{b-a}} \int_a^b (g\circ f)(x) \,dx\right)\\ &=\gi\left({1\over{b-a}}g\left(\gi\left( \int_a^b (g\circ f)(x) \,dx\right)\right)\right)\\ &={1\over{b-a}}\od \gi\left( \int_a^b (g\circ f)(x) \,dx\right)\\ &={1\over{b-a}}\od \int^\op_{[a,b]} f(x)\ot dx. \end{align*} Consequently, taking the $g$-integral throughout \eqref{e-hh2} with respect to $\l$ over $[0,1]$, the inequality \eqref{e-hh1} follows.
\end{proof} \begin{corollary} Let $u,v >_g 0_g,\, u\ne v$ and let $g$ be increasing. Then the following inequalities hold: \begin{equation}\label{e-cor1} (u\ot v)\pwr{1/2} \le_g \frac{1}{g(\Ln u)-g(\Ln v)}\od (u\om v) \le_g {1\over 2}\od (u\op v). \end{equation} \end{corollary} \begin{proof} Clearly, for an increasing function $g$, $f(x)=E^{\gi(x)}$ is pseudo-convex. After some calculations, it can be worked out that by taking $f(x)=E^{\gi(x)}$ in Theorem \ref{t-hh1}, the inequalities \eqref{e-hh1} become \begin{equation}\label{e-cor2} \left(E^{\gi(a)}\ot E^{\gi(b)} \right) \pwr{1/2} \le_g \frac{1}{b-a}\od \left(E^{\gi(b)}\om E^{\gi(a)} \right)\le_g \frac{1}{2}\od \left(E^{\gi(a)}\op E^{\gi(b)} \right). \end{equation} Now, put $$ E^{\gi(a)} = u\quad {\rm and}\quad E^{\gi(b)} = v, $$ so that $$ a=g(\Ln u)\quad {\rm and}\quad b=g(\Ln v). $$ The inequalities \eqref{e-cor1} now follow with these transformations. \end{proof} \begin{remark} The inequalities \eqref{e-cor1} are the $g$-analogue of the standard geometric-logarithmic-arithmetic mean inequality $$ \sqrt{uv}\le \frac{u-v}{\ln u-\ln v}\le \frac{u+v}{2},\quad u,v>0,\, u\ne v, $$ which can be obtained by taking the generator $g$ as the identity function in \eqref{e-cor1}. \end{remark} A refinement of the Hermite-Hadamard inequality has recently been given in (\cite{far}, Theorem 1.1). We prove below its $g$-analogue, which is a refinement of the inequality \eqref{e-hh1}. \begin{theorem}\label{t-hh5} Let $f:[a,b]\to \Rg$ be a pseudo-convex function. Then for all $\l\in[0,1]$, the following inequalities hold: \begin{equation}\label{e-hh3} f\left({a+b\over 2}\right)\le_g \ell(\l) \le_g \left( 1\over{b-a}\right)\od \int^\op_{[a,b]} f(x)\ot dx\le_g L(\l) \le_g {1\over 2}\od \big(f(a)\op f(b)\big), \end{equation} where $$ \ell(\l):=\l\od f\left({\l b + (2-\l) a\over 2} \right) \op (1-\l)\od f\left({(1+\l) b + (1-\l) a\over 2} \right) $$ and $$ L(\l):={1\over 2}\od\Big( f(\l b + (1-\l)a) \op \l\od f(a) \op (1-\l)\od f(b)\Big). $$ \end{theorem} \begin{proof} Recall $\sigma=\l b + (1-\l)a$. Clearly $a\le\sigma\le b$. We apply the inequality \eqref{e-hh1} on the interval $[a,\sigma]$, with $\l\ne 0$, and get \begin{equation}\label{e-hh4} f\left({a+\sigma\over 2}\right) \le_g \left( 1\over{\sigma-a}\right)\od \int^\op_{[a,\sigma]} f(x)\ot dx \le_g {1\over 2}\od \big(f(a)\op f(\sigma)\big). \end{equation} Similarly, applying again \eqref{e-hh1} on the interval $[\sigma,b]$ with $\l\ne 1$, we get \begin{equation}\label{e-hh5} f\left({\sigma+b\over 2}\right) \le_g \left( 1\over{b-\sigma}\right)\od \int^\op_{[\sigma,b]} f(x)\ot dx \le_g {1\over 2}\od \big(f(\sigma)\op f(b)\big). \end{equation} Clearly \begin{equation}\label{e-hh6} \l\od f\left({a+\sigma\over 2}\right) \op (1-\l)\od f\left({\sigma+b\over 2}\right) = \ell(\l).
\end{equation} Also, we find that \begin{align} &\l\od \left( 1\over{\sigma-a}\right)\od \int^\op_{[a,\sigma]} f(x)\ot dx \op (1-\l)\od \left( 1\over{b-\sigma}\right)\od \int^\op_{[\sigma,b]} f(x)\ot dx\nonumber\\ &\quad = {1\over{b-a}}\od \left(\int^\op_{[a,\sigma]} f(x)\ot dx \op \int^\op_{[\sigma,b]} f(x)\ot dx \right)\nonumber\\ &\quad = {1\over{b-a}}\od \int^\op_{[a,b]} f(x)\ot dx\label{e-hh7} \end{align} and \begin{align} &\l\od{1\over 2}\od (f(a)\op f(\sigma)) \op (1-\l)\od{1\over 2}\od \big( f(\sigma)\op f(b)\big)\nonumber\\ &\quad ={1\over 2}\od \Big(\l\od f(a)\op f(\sigma) \op (1-\l)\od f(b) \Big)\nonumber\\ &\quad =L(\l).\label{e-hh8} \end{align} Now, taking the $\od$ product of \eqref{e-hh4} with $\l$ and of \eqref{e-hh5} with $1-\l$ and using \eqref{e-hh6}, \eqref{e-hh7}, \eqref{e-hh8}, we obtain \begin{equation}\label{e-hh9} \ell(\l)\le_g {1\over{b-a}}\od \int^\op_{[a,b]} f(x)\ot dx\le_g L(\l). \end{equation} Writing ${a+b\over 2}$ as $$ {a+b\over 2} ={\l\over 2}\big(\l b+(2-\l)a\big) + \Big(\frac{1-\l}{2}\Big)\big((1+\l)b + (1-\l)a\big) $$ and using the pseudo-convexity, we get \begin{equation}\label{e-hh10} f\left({a+b\over 2}\right)\le_g \ell(\l), \end{equation} and applying the pseudo-convexity to the term $f(\sigma)$ in $L(\l)$, we get \begin{equation}\label{e-hh11} L(\l)\le_g {1\over 2}\od \big(f(a)\op f(b)\big). \end{equation} Now, the inequalities \eqref{e-hh3} follow from \eqref{e-hh10} and \eqref{e-hh11}. \end{proof} \begin{remark} Similar to pseudo-convexity, one can define pseudo-concavity: a function $f:[a,b]\to \Rg$ is said to be pseudo-concave on $[a,b]$ if for all $x,y\in[a,b]$ and all $0\le\lambda\le 1$ $$ f(\l x + (1-\l)y)\ge_g \l\od f(x)\op (1-\l)\od f(y). $$ Theorems \ref{t-hh1} and \ref{t-hh5} can be formulated and proved with pseudo-convexity replaced by pseudo-concavity and the inequalities reversed. \end{remark} \section{Concluding Remarks} In this paper, the classical inequalities, namely, Young's, H\"older's, Minkowski's and Hermite-Hadamard inequalities, have been derived in the framework of $g$-calculus. The entire range of the index $p\in\R,\,p\ne 0$ has been covered. \begin{remark} In \cite{mes} (see also \cite{mar1}), Mesiar introduced a generalized $g$-integral $$ \int_{[a,b]}^{g,h}f(x)\ot dx = \gi\left( \int_a^b (g\circ f)(x)h(x)\,dx \right), $$ where $h$ is a non-negative integrable real function, and the corresponding $g$-derivative $$ D_{g,h} f(x)=\gi\Big((g\circ f)'(x)/h(x) \Big). $$ The results of Section 3 can easily be formulated and proved in terms of the above integral. \end{remark} \begin{remark} It would be of interest to know whether the inequalities in Section 4 could be obtained for the generalized integrals $\displaystyle\int_{[a,b]}^{g}$ or $\displaystyle\int_{[a,b]}^{g,h}$ instead of $\displaystyle\int_{[a,b]}^\op$. \end{remark}
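As a closing illustration, and under the same assumed generator $g(x)=e^x$ as in the sketch of Section 2, the $g$-analogue \eqref{e-cor1} of the geometric-logarithmic-arithmetic mean inequality can be spot-checked numerically as follows; this is again only a sketch, not part of the formal development.
\begin{verbatim}
import math
import random

# Same assumed generator as before: g(x) = exp(x), gi = log (increasing).
g, gi = math.exp, math.log

def oplus(x, y):  return gi(g(x) + g(y))
def ominus(x, y): return gi(g(x) - g(y))     # requires g(x) > g(y)
def otimes(x, y): return gi(g(x) * g(y))
def odot(n, x):   return gi(n * g(x))        # pseudo-scalar product
def power(x, p):  return gi(g(x) ** p)
def Ln(x):        return gi(math.log(g(x)))  # requires x > 0 for this g

random.seed(2)
for _ in range(10000):
    u, v = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    if abs(u - v) < 1e-6:
        continue
    if v > u:                 # order so that u (-) v is defined; (e-cor1)
        u, v = v, u           # is symmetric under swapping u and v
    gm = power(otimes(u, v), 0.5)                          # pseudo geometric mean
    lm = odot(1.0 / (g(Ln(u)) - g(Ln(v))), ominus(u, v))   # pseudo logarithmic mean
    am = odot(0.5, oplus(u, v))                            # pseudo arithmetic mean
    assert gm <= lm + 1e-9 and lm <= am + 1e-9
\end{verbatim}
With this generator the three quantities reduce to $(u+v)/2$, $\ln\big((e^u-e^v)/(u-v)\big)$ and $\ln\big((e^u+e^v)/2\big)$, so the check is equivalent to the classical geometric-logarithmic-arithmetic mean inequality applied to $e^u$ and $e^v$.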
\section{Introduction} \label{sec:intro} The possibility of lensing of GRBs was originally discussed in the context of testing whether GRBs are located at cosmological distances \citep{Paczynski1986,Mao1992}. With the cosmological origin firmly established, we are now approaching sample sizes where the probability of observing a lensed GRB is no longer negligible. In this paper we focus on the case of macrolensing, i.e.~strong lensing producing image separations of order arcseconds \citep{Treu2010}. One of the motivations for searching for lensed GRBs is that the magnification effect means that GRBs located at high redshift can be studied in detail. Depending on the properties and data quality of any lensed GRBs identified, they may also provide information about dark matter distributions and constraints on the Hubble constant (e.g., \citealt{Oguri2019}). Macrolensing of a GRB is expected to manifest as a GRB recurring with the same light curve and spectrum as a previous GRB, but with a different flux and a small positional offset. The time interval between the GRBs may range from days to years. A challenge in identifying such lenses is that the angular separations between the GRB images are typically smaller than the localization uncertainties of current gamma-ray detectors, which range from arcminutes for {\it Swift}~BAT (improving to arcseconds when the burst is subsequently detected by XRT, \citealt{Burrows2005}) to several degrees or more for {\it Fermi}~GBM \citep{Goldstein2019}. While lenses with large separations could in principle be resolved with {\it Swift}, the relatively small field of view of BAT means that the probability of detecting a lensed pair within this sample is low \citep{Li2014}. A more promising approach to identifying lensed GRBs is to rely on the assumption that every GRB has a unique light curve and spectral evolution. An interesting precedent for this is set by the gravitationally lensed blazar B0218+357, which displays double images separated by $\sim 0\farcs{3}$ \citep{Odea1992}. While {\it Fermi}~LAT does not resolve the images, gamma-ray flares have been observed to repeat with a time-delay of 11.5 days in the combined light curve \citep{Cheung2014}. The main differences between this example and the expectations for lensed GRBs are the longer duration of the emission and the larger emission region of the blazar, with the latter having implications for the effects of so-called microlensing, discussed below in the case of GRBs. Previous searches for macrolensed GRBs have all yielded null results. This includes searches of $\sim 2100$ GRBs observed by {\it BATSE} \citep{Li2014}, $\sim 2300$ GRBs observed by {\it Konus-Wind} \citep{Hurley:2019km}, as well as two smaller samples of GRBs observed in the first years of {\it Fermi}~GBM \citep{Veres2009,2011AIPC.1358...17D}. These searches were all based on comparisons of light curves and time-averaged spectra of GRBs with overlapping positional uncertainty regions. In this paper we present a search for lensed GRBs among $\sim 2700$ GRBs observed by {\it Fermi}~GBM during 11 years of operations. The fourteen detectors that comprise {\it Fermi}~GBM continuously observe the entire unobscured sky and offer a high sensitivity over the 8~keV -- 40~MeV energy range \citep{2009ApJ...702..791M}, providing good conditions for identifying lenses. We also extend the methodology compared to previous works by considering time-resolved spectra and assessing the similarities of the most promising candidates using simulations.
It is possible that one or both GRBs in a lens pair are affected by additional lensing by smaller objects in the lens galaxy, the impact of which depends on the nature of those objects. Microlensing by stars leads to a smearing of light curves on millisecond time scales \citep{Williams1997}, while so-called millilensing by compact globular clusters or massive black holes ($M\gtrsim 10^6~{\rm M_{\odot}}$) leads to emission episodes within a GRB repeating with a time delay longer than seconds \citep{Nemiroff2001}. Lensing by intermediate mass black holes would lead to smearing/echoes on time scales between these extremes \citep{Ji2018}. Confirming that differences between a pair of lensed GRBs are due to any type of small-scale lensing would require redshift information, which is only available for $\sim 5\%$ of the sample. We therefore do not consider these effects in our search, although we note that lens candidates are unlikely to be rejected due to microlensing given our choice of light curve binning. In summary, we confine ourselves to considering macrolensing, while making minimal assumptions about the lenses. This means that we consider well-separated light curves and that we set no a priori upper limit to the possible time delays. Throughout this paper, we will use ``lensing'' to refer to this kind of macrolensing. Below we first describe the data used for the analysis in Section~\ref{sec:data}, describe the selection of lens candidates in Section~\ref{sec:methods} and analyze the final candidates in Section~\ref{sec:FinalLensCands}. We present and discuss the results in Section~\ref{sec:Discussion}, and provide a summary in Section~\ref{sec:summary}. \section{Data} \label{sec:data} We base the search for lens candidates on data from \textit{Fermi} GBM. Specifically, we use time-tagged event (TTE) data, which contain the arrival times of individual photons with a precision of 2 $\mu$s, as well as information regarding in which of the 128 energy channels each photon registered. When performing time series analysis, we consider data from the NaI detectors, whereas we use data from both the NaI and BGO detectors for the spectral analysis, as described in Sections~\ref{sec:lightcurves} and~\ref{sec:timeResolvedSpectra}, respectively. We obtain information about the localizations from an online compilation\footnote{\url{https://icecube.wisc.edu/~grbweb_public/index.html}} that also includes localizations from telescopes other than \textit{Fermi}. We downloaded all GRBs detected before 2020-01-09 from the online \textit{Fermi} GBM catalog using \textit{3ML} \citep{2015arXiv150708343V}. The resulting sample contains 2712 GRBs, which corresponds to over 3.6 million unique pairs. Although it is in principle possible to also include data from other telescopes, this would complicate the analysis significantly, since different instruments operate in different energy intervals and at different efficiencies. \section{Selection of lens candidates} \label{sec:methods} In this Section we describe the methods used to search for pairs of GRBs that are consistent with a gravitational lensing scenario and a common physical origin. As mentioned in Section~\ref{sec:intro}, macrolensing will yield well-separated identical light curves with identical spectra. However, in practice, we do not expect the light curves or spectra to be identical due to observational uncertainties.
These include the Poisson nature of the detector counts, the angles between the detectors and the source, varying backgrounds, as well as different flux levels due to the lensing. Additionally, most GRBs observed by GBM have poor localization and no redshift measurements. We have adapted our methods to take these observational uncertainties into consideration. We begin by making cuts to the sample to remove burst pairs that are clearly not consistent with the lensing scenario (\ref{sec:samples}). We proceed by comparing the light curves (\ref{sec:lightcurves}) and finally consider the time-resolved spectra (\ref{sec:timeResolvedSpectra}) of each GRB pair. \subsection{Initial cuts} \label{sec:samples} We consider the position, relative duration, and spectral information of each burst pair to remove obviously non-lensed pairs from the sample. Since a full analysis of 3.6 million pairs can become computationally expensive for parts of the analysis, we only perform the subsequent analysis on pairs that passed the previous cuts. In addition to the sample consisting of lens candidates, we also construct a reference sample of GRB pairs that we know are not lens pairs. We refer to the samples as the lens-candidate sample (L) and the non-lensed sample (NL). \textbf{Position:} This is the only variable we can use to completely rule out a lensing scenario. In order to make a cut based on position, we attribute a circular uncertainty region to each burst based on the statistical and systematic uncertainties of the localization. We set the radius of the combined uncertainty region to $\sigma_{\text{tot}} = \sqrt{(2\sigma_{\text{stat}})^2 + \sigma_{\text{syst}}^2}$, where $\sigma_{\text{stat}}$ is the 68\% confidence level statistical uncertainty and $\sigma_{\text{syst}}$ is the systematic uncertainty. The original localization algorithm of GBM had large systematic uncertainties (up to $14^{\circ}$, \citealt{Connaughton:2015gp}) and we therefore conservatively set $\sigma_{\text{syst}} = 14^{\circ}$. For GRBs localized by other instruments (mainly the {\it Neil Gehrels Swift Observatory}), we set $\sigma_{\text{syst}} = 0^{\circ}$, since the systematic uncertainties are comparatively small. Although the GBM localization has been improved, both by the BALROG algorithm \citep{Burgess:2018get} and the updated GBM algorithm \citep{Goldstein2019}, we conservatively use the original localization uncertainties at this stage. We compare the localization of every GRB with all previously observed GRBs. If the circular uncertainty regions with radius $\sigma_{\text{tot}}$ overlap, we place the pair in the L sample. If the uncertainty regions are separated by more than $10^{\circ}$, we place them in the NL sample. The extra separation gives us further confidence that the NL sample contains no lenses. We note that GBM uncertainty regions are typically not circular, often being better described by ellipses. Given the conservative nature of these cuts, we expect the use of circular uncertainty regions to have a negligible impact on the final results. We consider more accurate, non-circular uncertainty regions in Section~\ref{sec:timeResolvedSpectra}, where we also generate new response files for GRBs that have better localizations from other instruments. \textbf{Duration:} The lensing is not expected to significantly change the duration of a GRB.
However, the observed duration may be affected by several uncertainties, including a low signal-to-noise ratio (SNR), background properties, which may also further impact the SNR, as well as the satellite position. In the latter case the observation may start late or stop early due to occultation by the Earth or entry into the South Atlantic Anomaly. To account for these effects, we impose the conservative requirement that $T_{90}$ should differ by less than a factor of $5$ in lens candidate pairs. \textbf{Spectra:} The last hard cut is based on spectra. We use two common empirical functions to assess the spectra; the Band function and a cutoff power law (the Comptonized model in the {\it Fermi}~GBM spectral catalog, \citealt{2016ApJS..223...28N}). Although these functions may not capture all spectral features, similar spectra will yield similar fits. Further, we only consider the model parameters pertaining to the low-energy slope and peak energy of the spectrum, since these tend to be better constrained than the high-energy slopes. GRB pairs where the $2\sigma$ confidence intervals of both parameters overlap in the time-integrated or peak-flux spectra for at least one of the models are kept in the L sample. For this part of the analysis we use the spectral parameters available through the online \textit{Fermi} spectral catalog\footnote{Available at \url{https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/bursts/}.} \citep{Gruber:2014cx,vonKienlin:2014dt,2016ApJS..223...28N}. Only bursts with entries in the spectral catalog are eliminated this way (these entries were missing for most bursts from the second half of 2018 and onward at the time of our analysis). This is a conservative cut that helps reduce the number of pairs that we need to compare further. \textbf{Final sample:} The three cuts are performed in order of increasing computational requirement, i.e.,~$T_{90}$, spectral parameters, and position. Naturally, the order of the cuts does not matter for the final L sample. In this order, the three cuts remove $\sim 1.8\cdot 10^{6}$, $\sim 6\cdot 10^{5}$, and $\sim 1.2\cdot 10^{6}$ pairs from the L sample, respectively. This leaves about $1.2 \cdot 10^{5}$ GRB pairs in the L sample to investigate further. We summarize the cuts in Table~\ref{tab:cuts}. For comparison, the NL sample contains about $1.1 \cdot 10^{6}$ pairs. This sample has a different cut on position, as described above, but the same cuts on duration and spectra as the L sample. The cuts described above are all conservative. We have tested multiple variations of the cuts, with different limits on the localization, spectral parameters and relative $T_{90}$, and find that we obtain largely the same candidates after the light curve comparison in Section~\ref{sec:lightcurves}. We stress that there are no cuts based on the relative flux of the GRB pairs. In simple spherically symmetric lens models, the later GRB is expected to be fainter (e.g., \citealt{Mao1992}), but this does not hold for more realistic scenarios. Finally, we note that there are 96 pairs in the initial sample with redshifts that agree to within $5\%$ of each other, but that none of these pairs survive the hard cuts. \subsection{Cross correlation of light curves} \label{sec:lightcurves} The next step of the analysis is to assess the similarity of light curves for the GRB pairs that remain after the initial cuts. We produce light curves from the TTE data using two different bin sizes, $0.5$ and $0.05$~s, and perform the analysis for both.
The smaller bin size is particularly useful for short GRBs, where a larger bin size will yield featureless light curves. We construct light curves over the longest possible time interval for each GRB, something which is mainly limited by the available detector response, and also subtract the background. The background is determined by fitting first-order polynomials in \textit{3ML}. We sum the background-subtracted light curves from the brightest NaI detectors for each burst (usually three detectors), as listed in the \textit{Fermi}~GBM spectral catalog. Using multiple detectors helps compensate for different observing conditions for single detectors. We do not include the BGO detectors since they make very little difference for the time series analysis and because information on which BGO detector to use is not available for the most recent GRBs. In order to assess the similarity between two light curves, we consider the cross correlation (CC). We use the implementation of the CC in one dimension for discrete, real-valued functions $a[i]$ and $b[i]$, normalized such that the maximum value of the CC is unity: \begin{align*} CC[n] = \sum _{i=0}^{N-1}\frac{a[i]\,b[i+n]}{N \cdot \sigma_\text{a}\sigma_\text{b}}. \end{align*} Here, $a$ and $b$ represent the binned light curves, $n$ is the relative displacement of $a$ and $b$, and $\sigma_\text{a}$ and $\sigma_\text{b}$ are the standard deviations of $a$ and $b$, respectively. We have also assumed that both $a$ and $b$ are of length $N$. This is important because the relative lengths of the light curves could otherwise impact the maximum value of the resulting CC. For each burst pair we therefore trim the longer light curve such that it is of the same length as the shorter light curve, while still containing the relevant emission episode as defined by the $T_{90}$. For each pair we use the maximum of the CC (henceforth denoted CC$_{\text{max}}$) as the measure of the similarity between the light curves. It is obvious that CC$_{\text{max}}$ will tend to increase with bin size and that one should not compare values of CC$_{\text{max}}$ constructed from light curves with different binning. We also find that using background-subtracted light curves is important for getting reliable values of CC$_{\text{max}}$. When calculating the CC for non-background-subtracted light curves, we find a large number of pairs that have artificially high CC$_{\text{max}}$ due to similarly varying backgrounds. We therefore base all our analysis and results on background-subtracted light curves. In order to assess the results we also performed a simulation study. The simulations are described in appendix~\ref{appendix:simulations}. In Figure~\ref{fig:CCdistr_L_SL} we show the CC$_{\text{max}}$ distributions of the L sample for both time binnings together with the corresponding simulated distributions. The simulated lensing scenario is simplified and does not consider any specific lens model, major variations in observing conditions or biases in the L sample (see appendix~\ref{appendix:simulations} and Section~\ref{sec:Discussion} for further discussion). However, it does provide an estimate of the range of CC$_{\text{max}}$ values that are compatible with lensing. \begin{figure}[b] \centering \includegraphics[width=0.42\textwidth]{CC_distr_L_SL_05.pdf} \includegraphics[width=0.42\textwidth]{CC_distr_L_SL_005.pdf} \caption{Distribution of CC$_{\text{max}}$ for the L sample (blue) and the simulations (orange).
The results for time bins of $0.5$ and $0.05$~s are shown in the top and bottom panels, respectively. The histograms for the simulations are constructed from 5500 light curve pairs (see appendix~\ref{appendix:simulations} for further details about the simulations).} \label{fig:CCdistr_L_SL} \end{figure} The distribution of CC$_{\text{max}}$ from the simulations suggests that higher values of CC$_{\text{max}}$ are increasingly indicative of lensing, as expected. However, even if we assume that the simulations include the most important variations present in lensed GRB light curves, there is no clear value of CC$_{\text{max}}$ to use as a cutoff. In addition, while the simulations show that low values of CC$_{\text{max}}$ can be compatible with lensing, the lack of redshift measurements means that very similar light curves are still needed in order to identify convincing lens candidates. For these reasons, we simply select the burst pairs with the 250 highest values of CC$_{\text{max}}$ for both time bins for further analysis. Due to overlap, this results in 315 unique burst pairs from the L sample that we analyze below. \subsection{Time-resolved spectral analysis} \label{sec:timeResolvedSpectra} The final step is a time-resolved spectral analysis. However, we first refine the sample further. A subset of \textit{Fermi}~GBM bursts has improved localization information available in the online catalog, consisting of sky map probability distributions that account for both the statistical and systematic uncertainties \citep{Connaughton:2015gp}. These non-circular regions provide more accurate estimates of the localization uncertainties than the $\sigma_{\rm tot}$ used in Section~\ref{sec:samples}. We inspect the updated localizations, plots of the model spectra, as well as the light curves of all 315 burst pairs, and manually select the 22 most promising candidates for time-resolved spectral analysis. By performing a time-resolved spectral analysis, we obtain significantly better estimates of spectral parameters than what is available through the GBM catalog. There are four GRBs in the sample selected for time-resolved spectral analysis that have reported positions from {\it Swift} (GRB~120119A, GRB~161004B, GRB~180703A and GRB~190720A). For these GRBs, we generate new response files for the improved positions using the GBM response generator.\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/gbm/DOCUMENTATION.html}} We find that the new response files have a small impact on the spectral analysis, with the best-fit parameters being consistent within 1$\sigma$. We perform a Bayesian analysis of the time-resolved spectra with a cutoff power law using \textit{3ML}. We use a flat prior for the spectral index, $\alpha \sim \mathcal{U}(-10,10)$, and priors that are flat in logarithmic space for the cutoff energy and normalization, $x_\mathrm{c} \sim \log\mathcal{U}(1,1000)$~keV and $K \sim \log\mathcal{U}(10^{-30},10^{3})~\mathrm{keV}^{-1} \mathrm{s}^{-1} \mathrm{cm}^{-2}$, respectively. Further, we sample the posterior using the built-in python implementation \citep{2014A&A...564A.125B} of MultiNest \citep{10.1111/j.1365-2966.2009.14548.x}, using 500 samples. For a few spectra we use different limits on the priors and a different number of samples in order to achieve convergence. Since the trigger times may vary somewhat relative to the overall light curve shapes, we align the bursts by shifting the starting time of the second GRB.
The alignment was done by eye, but it essentially retrieves the optimal time lag found by the CC analysis. The shift allows us to use the same time bins for the spectral analysis of both GRBs. We define these time bins by running the Bayesian blocks algorithm \citep{2013ApJ...764..167S} on the first GRB. Background spectra for all time intervals were created as described in Section~\ref{sec:lightcurves}. We finally compare the resulting posteriors in each time bin. In the ideal lens scenario we expect these to overlap to a large degree in all bins that have a significant signal. The time-resolved analysis allows us to eliminate all the remaining candidates. In the next Section we present the most interesting cases from this analysis. \section{Analysis of final candidates} \label{sec:FinalLensCands} In Table~\ref{tab:cuts} we summarize the number of GRB pairs present in the L sample after each cut, starting from the number of unique pairs constructed from 2712 GRBs. Following the steps described in Sections~\ref{sec:samples}-\ref{sec:timeResolvedSpectra}, we are left with no convincing candidates for gravitationally lensed GRBs. For illustrative purposes we present the three most interesting candidates below. These are GRB~100515A-GRB~130206B, GRB~140430B-GRB~161220B, and GRB~160718A-GRB~170606A. We also use the last pair as a case study to investigate the effects of observational uncertainties. The sky positions for all three pairs are shown in Figure~\ref{fig:positions}, while Figures~\ref{fig:100515-130206}-\ref{fig:160718-170606} show the light curves and evolution of spectral parameters. \begin{deluxetable}{ll} \tablenum{1} \tablecaption{Number of GRB pairs in the L sample after cuts. The first three are hard cuts, described in Section~\ref{sec:samples}, while the last three are soft cuts, described in Sections~\ref{sec:lightcurves}, \ref{sec:timeResolvedSpectra} and \ref{sec:FinalLensCands}.} \label{tab:cuts} \tablewidth{0pt} \tablehead{ Type of cut & GRB pairs remaining } \startdata No cut (initial sample) & $3~676~116$\\ $T_{90}$ & $1~901~542$ \\ Time-integrated spectra & $1~292~816$\\ Position & $116~335$ \\ \hline CC$_{\text{max}}$ & 315 \\ Refined localization & \\ \& manual selection & 22 \\ Time-resolved spectra & 0 \\ \enddata \end{deluxetable} \begin{figure*} \centering \includegraphics[width=0.30\textwidth]{bn100515467_bn130206482_positions.pdf} \includegraphics[width=0.30\textwidth]{bn140430716_bn161220605_positions.pdf} \includegraphics[width=0.30\textwidth]{bn160718975_bn170606968_positions.pdf} \caption{Sky localizations for GRB~100515A-GRB~130206B, GRB~140430B-GRB~161220B, and GRB~160718A-GRB~170606A. The lines indicate the 1, 2, and 3$\sigma$ uncertainty regions, respectively. For GRB~100515A and GRB~130206B, the uncertainty regions are calculated as described in Section~\ref{sec:samples}. Note that the statistical errors are small for these two GRBs, which means that the three confidence levels of each burst overlap almost completely and that the uncertainty is dominated by the systematic uncertainty. For the other four GRBs, \textit{Fermi}~GBM supplies uncertainty regions that include both the statistical and systematic uncertainties. These regions happen to be approximately circular for these GRBs.} \label{fig:positions} \end{figure*} GRB~100515A-GRB~130206B has a CC$_{\text{max}}$ of $0.89$ and $0.74$ for the $0.5$ and $0.05$~s bins, respectively.
However, Figure~\ref{fig:100515-130206} shows that the light curves look significantly different by eye, particularly at $2$ -- $5$~s. Furthermore, the spectra differ significantly at the peak of the light curve. This is despite the fact that these bursts passed the time-averaged spectral cuts described in Section~\ref{sec:samples}. Finally, as can be seen in Figure~\ref{fig:positions}, the localization uncertainty regions overlap only marginally, and only because we have used conservative estimates of the systematic uncertainty of the GBM localization. Neither of these bursts has improved localization available. This pair is not considered to constitute a lensing event. GRB~140430B-GRB~161220B, seen in Figure~\ref{fig:140430-161206}, shows good agreement in the main emission episode, both in terms of light curves and overlapping spectral parameters. However, this pair is rejected on the basis of GRB~161220B having an additional peak in the light curve $\sim 20$~s after the main episode, which is not present in GRB~140430B (Figure~\ref{fig:140430-161206}, second panel). There is a possibility that the second peak in GRB~161220B is the result of millilensing, where lensing by, e.g., massive black holes ($M\gtrsim 10^6~{\rm M_{\odot}}$) leads to repeating emission episodes within the GRB \citep{Nemiroff2001}. However, fitting the time-integrated spectra of the two light curve peaks, we find that the posteriors have almost no overlap at the $95$~\% level, with the second peak having a softer spectrum. Furthermore, this burst pair exhibits simple light curve shapes with few time bins to analyze, making it a less compelling case than an overlap between more complex light curves. The pair GRB~160718A-GRB~170606A has CC$_{\text{max}}$ values of $0.91$ and $0.63$, and shows good overlap of the spectra in three out of four bins. However, the second time bin in Figure~\ref{fig:160718-170606} shows a small, but significant, discrepancy between the posteriors, as well as small differences between the light curves. Below, we investigate whether these differences may be caused by observational uncertainties and/or the flux difference expected from a lensing scenario. We consider the CC$_{\text{max}}$ and visual appearance of the light curves in Section~\ref{sec:crossCorrelation}, and the spectral properties in Section~\ref{sec:spectralAnalysis}. We find that the differences between the light curves and the observed values of CC$_{\text{max}}$ are in fact consistent with lensing, but that the spectral differences are not. However, even if there were better spectral overlap in the second time bin, the light curves of GRB~160718A-GRB~170606A are rather featureless and the number of analyzed time bins few, which makes this a weakly compelling case at best. Finally, although the localization cannot be used to confidently rule out a lensing scenario, there is essentially no overlap of the 2$\sigma$ contours (Figure~\ref{fig:positions}, right panel), further reducing the likelihood of lensing. We thus conclude that GRB~160718A-GRB~170606A is unlikely to be an example of gravitationally lensed GRBs.
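The comparisons above, and the simulations examined further in Section~\ref{sec:crossCorrelation}, rest on the normalized cross correlation defined in Section~\ref{sec:lightcurves}. The following Python snippet is a minimal, self-contained sketch of that comparison applied to synthetic light curves; it is an illustration of the definition, not our actual analysis code, and in particular the trimming here simply cuts both light curves to a common length rather than using the $T_{90}$-based trimming described in Section~\ref{sec:lightcurves}.
\begin{verbatim}
import numpy as np

def cc_max(a, b):
    # Maximum of the normalized cross correlation of two background-subtracted,
    # binned light curves (equal bin size assumed), following Section 3.2:
    # CC[n] = sum_i a[i] b[i+n] / (N sigma_a sigma_b).
    n_bins = min(len(a), len(b))          # simplified trimming to equal length N
    a = np.asarray(a[:n_bins], dtype=float)
    b = np.asarray(b[:n_bins], dtype=float)
    cc = np.correlate(a, b, mode="full") / (n_bins * a.std() * b.std())
    return cc.max(), cc.argmax() - (n_bins - 1)   # CC_max and the lag in bins

# Synthetic example: a fainter, delayed "image" of a two-pulse light curve,
# with Poisson noise and a different (subtracted) background level.
rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0, 0.5)                              # 0.5 s bins
pulse = 200 * np.exp(-(t - 8) ** 2 / 4) + 120 * np.exp(-(t - 14) ** 2 / 9)
lc1 = rng.poisson(pulse + 50) - 50                         # counts minus background
lc2 = rng.poisson(0.6 * np.roll(pulse, 6) + 80) - 80       # magnification ratio 0.6
print(cc_max(lc1, lc2))   # CC_max close to unity, lag of about -6 bins
\end{verbatim}
For two genuinely lensed images the light curves agree up to noise and an overall flux ratio, so CC$_{\text{max}}$ stays high despite the rescaling; the simulations described in appendix~\ref{appendix:simulations} quantify this more carefully.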
\begin{figure*} \centering \includegraphics[width=0.40\textwidth]{LC_bn100515467_bn130206482_dt005.pdf} \includegraphics[width=0.40\textwidth]{v6_bn100515467_bn130206482_bin2_600.pdf} \includegraphics[width=0.40\textwidth]{v6_bn100515467_bn130206482_bin3_600.pdf} \includegraphics[width=0.40\textwidth]{v6_bn100515467_bn130206482_bin4_600.pdf} \includegraphics[width=0.40\textwidth]{v6_bn100515467_bn130206482_bin5_600.pdf} \includegraphics[width=0.40\textwidth]{v6_bn100515467_bn130206482_bin6_600.pdf} \caption{Light curves of GRB~100515A and GRB~130206B together with the posterior distributions from fitting the time-resolved spectra with a cutoff power law. The red lines in the light curves indicate the edges of the time bins used for the spectral analysis. The dark and light shaded regions represent the 68 and 95 \% credible regions, respectively. } \label{fig:100515-130206} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.40\textwidth]{LC_bn140430716_bn161220605_dt005.pdf} \includegraphics[width=0.40\textwidth]{LC_bn140430716_bn161220605_dt005_vExpanded.pdf} \includegraphics[width=0.40\textwidth]{v6_bn140430716_bn161220605_bin1_450.pdf} \includegraphics[width=0.40\textwidth]{v6_bn140430716_bn161220605_bin2_450.pdf} \includegraphics[width=0.40\textwidth]{v6_bn140430716_bn161220605_bin3_450.pdf} \includegraphics[width=0.40\textwidth]{v6_bn140430716_bn161220605_bin4_450.pdf} \caption{Same as Figure~\ref{fig:100515-130206}, but for GRB~140430B and GRB~161220B. Note that GRB~161220B has an additional peak in the light curve at $\sim 30$~s (top right panel), which casts doubt on these bursts as a lensed pair.} \label{fig:140430-161206} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.40\textwidth]{LC_bn160718975_bn170606968_dt005.pdf} \includegraphics[width=0.40\textwidth]{v6_bn160718975_bn170606968_bin2_500.pdf} \includegraphics[width=0.40\textwidth]{v6_bn160718975_bn170606968_bin3_500.pdf} \includegraphics[width=0.40\textwidth]{v6_bn160718975_bn170606968_bin4_500.pdf} \includegraphics[width=0.40\textwidth]{v6_bn160718975_bn170606968_bin5_500.pdf} \caption{Same as Figure~\ref{fig:100515-130206}, but for GRB~160718A and GRB~170606A. Note the discrepancy between the posteriors in the second time bin.} \label{fig:160718-170606} \end{figure*} \subsection{Impact of observational uncertainties on light curves and cross correlations} \label{sec:crossCorrelation} As discussed in Section~\ref{sec:lightcurves}, the value of CC$_{\text{max}}$ is in itself not an adequate measure of the probability of lensing. However, for a given burst pair we can use simulations to quantify how likely the observed CC value is under a specific lensing scenario. This technique can be used to significantly improve the power of CC$_{\text{max}}$ as a tool to identify or reject lensed GRB pairs. To illustrate this, we consider the case of GRB~160718A-GRB~170606A, which is the most promising candidate described above. We simulate light curves from GRB~170606A (on the basis of it being the brighter one), using the methods described in appendix~\ref{appendix:simulations}, but with the background and signal flux levels set to those of GRB~160718A. The value of CC$_{\text{max}}$ is then calculated between each simulated light curve and the original light curve. In Figure~\ref{fig:CCsim170606}, we present the resulting CC$_{\text{max}}$ distributions for the two time bins together with the observed values of CC$_{\text{max}}$. 
The observed CC$_{\text{max}}$ from the $0.05$~s bins is fully consistent with the simulated distribution, while there is some tension in the case of the $0.5$~s time bins. Considering the idealized nature of the simulations, these results suggest that we cannot reject the lensing hypothesis for this burst pair. In order to assess the visual appearance of the light curves, it is instructive to consider a specific example of simulated light curves. In Figure~\ref{fig:LCsim170606}, we show the light curve of GRB~170606A together with one of the light curves simulated as described above. From this figure it is clear that the discrepancies between the observed light curves for GRB~160718A and GRB~170606A, seen in the first panel of Figure~\ref{fig:160718-170606}, are not sufficient to rule out a lensing scenario. We thus conclude that CC$_{\text{max}}$ and visual inspection of the light curves suggest that this pair is consistent with a lensing scenario.

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{CC_distr_Obs_SL_170606_160718.pdf} \caption{Distributions of CC$_{\text{max}}$ values expected from an ideal lensing scenario of GRB~170606A, assuming a flux change to the level of GRB~160718A. Results for the $0.5$ and $0.05$~s bins are shown in blue and orange, respectively. The dashed red lines show the observed values of CC$_{\text{max}}$ between GRB~170606A and GRB~160718A.} \label{fig:CCsim170606} \end{figure}

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{LC_170606968_lensSim_sScale06_bScale11.pdf} \caption{Example of a background-subtracted light curve simulated from GRB~170606A, but with a flux level corresponding to that of GRB~160718A (blue), plotted together with the observed light curve of GRB~170606A (orange). Note the similarity with the top left panel of Figure~\ref{fig:160718-170606}. } \label{fig:LCsim170606} \end{figure}

\subsection{Impact of observational uncertainties on the spectral analysis}\label{sec:spectralAnalysis} Although gravitational lensing will not distort spectra, there are other effects that may yield differences in the observed spectra for two lensed GRBs. These include different observing conditions, such as varying background and angle of incidence to the detectors. Additionally, a change in flux level may affect the spectral fits. Here we investigate to what degree the observed spectral parameters are affected by changes in the flux level, background and response matrix. Specifically, we consider the candidate pair GRB~160718A-GRB~170606A, which has a significant discrepancy in the posteriors in the second time bin (see Figure~\ref{fig:160718-170606}). The relative number of photons in the second time bin for this burst pair is $\sim 0.67$, with GRB~170606A being brighter. We start by simulating spectra for this time bin from a set of model parameters drawn from the posterior for a cutoff power law conditioned on the observed data of GRB~170606A. We then fit the simulated spectra with a cutoff power law. To simulate the lensed spectrum, we draw a new set of parameter values from the original posterior, but adjust the model normalization by a factor $0.67$. In order to account for the observing conditions of GRB~160718A, we also use the response matrix and background from this burst for the simulations. We then sample the posterior for a cutoff power law conditioned on these simulated data as well. These procedures are repeated 100 times. Figure~\ref{fig:fakes} shows the results of these simulations.
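As an illustration of the first step of this procedure, the sketch below (in the same hedged spirit as above, with all names, the pivot energy, and the sample layout being assumptions rather than parts of our actual pipeline) shows the cutoff power-law photon model and the posterior-draw rescaling; the forward folding through the response and background of GRB~160718A and the subsequent refit are performed by the spectral-fitting software and are omitted here.

\begin{verbatim}
import numpy as np

def cutoff_powerlaw(energy_kev, norm, index, e_cut_kev, e_piv=100.0):
    # Cutoff power-law photon model,
    #   N(E) = K * (E / E_piv)**index * exp(-E / E_cut),
    # in photons / (cm^2 s keV); the pivot energy E_piv is an
    # illustrative choice.
    return norm * (energy_kev / e_piv) ** index * np.exp(-energy_kev / e_cut_kev)

def draw_lensed_parameters(posterior_samples, flux_ratio, rng):
    # Draw one (norm, index, E_cut) triple from the posterior samples of
    # the brighter burst and rescale the normalization to mimic the flux
    # level of the fainter image (flux_ratio = 0.67 for this pair).
    norm, index, e_cut = posterior_samples[rng.integers(len(posterior_samples))]
    return flux_ratio * norm, index, e_cut

# Each of the 100 iterations then (i) draws parameters as above,
# (ii) folds the model through the response and background of the other
# burst to obtain a simulated count spectrum, and (iii) refits it.
\end{verbatim}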
It is clear that the changes in flux, response and background do not resolve the tension between the fits. This is not surprising, since the effect of changing the flux should be an increased spread of the posterior, as can be seen in the figure. The fact that the changes in response and background make little difference is also expected given a well-calibrated instrument and adequate background treatment in the analysis. To reconcile the observations of GRB~160718A-GRB~170606A with a common physical origin would require us to invoke some more exotic lensing scenario that can result in the observed differences. The probability of observing spectral evolution as similar as in GRB~160718A-GRB~170606A among non-lensed GRBs can be estimated from the NL sample. While a time-resolved analysis of the full NL sample is beyond the scope of this work, we have performed a time-resolved analysis of 14 burst pairs from the NL sample. These were selected similarly to the 22 pairs from the L sample in Section~\ref{sec:timeResolvedSpectra}. From these 14 burst pairs we find that GRB~141028A-GRB~190604A have significant overlap in the majority of time bins. This indicates that overlap of the posteriors is not so rare as to constitute a smoking-gun signal for lensing. Thus, as already noted above, we conclude that it is unlikely that GRB~160718A and GRB~170606A are gravitationally lensed GRBs of a common origin.

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{joint_posteriors_with_fakes_n350.pdf} \caption{Corner plot showing posteriors of a cutoff power law conditioned on data from bin 2 of GRB~160718A (blue) and GRB~170606A (orange) (cf. Figure~\ref{fig:160718-170606}). In green we show a collection of 100 posteriors for data simulated from the best-fit model of GRB~170606A. Each point represents the maximum posterior point of a single chain. The red points show the corresponding results for data simulated from the best-fit model of GRB~170606A, but using the flux level, background and response of GRB~160718A. } \label{fig:fakes} \end{figure}

\section{Results and discussion} \label{sec:Discussion} Our search for pairs of gravitationally lensed GRBs in a sample of 2712 GRBs observed by {\it Fermi}~GBM did not reveal any convincing candidates. Below, we compare these results with previous studies. We also discuss the implications of the null results, future prospects and similarities within the GRB population. All previous searches for macrolensed GRBs have also yielded null results \citep{Veres2009,2011AIPC.1358...17D,Hurley:2019km,Li2014}. These previous works have considered different data sets and used partly different methods to search for lenses. The most comprehensive search was performed by \cite{Hurley:2019km}, who analyzed $\sim 2300$ long GRBs detected by {\it Konus-Wind}. The authors calculated the CC$_{\text{max}}$ for all pairs of GRBs and then compared the sky positions and time-averaged spectra for the pairs in the top $0.25\%$ of the CC$_{\text{max}}$ distribution. In the study by \cite{Li2014}, the large number of GRBs observed by {\it BATSE} was exploited to search for lenses. The procedure adopted in this case was to select pairs with angular separations $< 4^{\circ}$ and overlapping time-averaged spectral parameters in the {\it BATSE} 5B spectral catalog \citep{Goldstein2013}, and finally compare the light curves of those pairs by eye.
Both \cite{Veres2009} and \cite{2011AIPC.1358...17D} present small, exploratory studies of early data from {\it Fermi}~GBM, where they use the CC to select promising candidates and then compare the time-averaged spectra. \cite{Veres2009} assess the CC through manual inspection, while \cite{2011AIPC.1358...17D} impose constraints on the symmetry of the CC function and its behavior as a function of the temporal resolution of the light curves. It is worth noting that \cite{Veres2009}, \cite{2011AIPC.1358...17D} and \cite{Li2014} all require that the later GRB should be fainter. We have not imposed this constraint since it originates from simple spherically symmetric lens models. Our methods also differ from previous works in that we consider time-resolved spectra and assess the similarities of the most promising pairs using simulations. This approach offers a powerful way to eliminate candidates.

There are four GRB pairs in our initial sample that have been identified as interesting by previous works. Even though most of these were ultimately rejected in the previous studies, it is worth noting where they landed in our analysis. GRB~080730B-GRB~090730A and GRB~081216A-GRB~090429D were identified as interesting candidates in \cite{Veres2009}. In our work they pass the time-averaged spectral and localization cuts, but have low ranks in the CC$_{\text{max}}$ distribution and are not analyzed further. They were also ultimately rejected in \cite{Veres2009}. GRB~090516C-GRB~090514A, also identified in \cite{Veres2009}, did not pass the time-averaged spectral cut. This burst pair was not rejected by means of spectral information in \cite{Veres2009} due to the lack of an available detector response matrix at the time. \cite{2011AIPC.1358...17D} point to GRB~080804A-GRB~081109A as a possible lensed pair, but this pair fails our time-averaged spectral cut and would also be rejected based on the positions available from {\it Swift}.

The discovery of lensed GRBs would be important because the excellent time resolution of GRB detectors offers good prospects for modeling lenses and constraining cosmological parameters. In addition, the fact that we find no convincing lens candidates could in principle be used to place constraints on the properties of lenses and the GRB population. However, this is not possible at present since the null result does not rule out the presence of lenses in the sample. Indeed, our analysis has shown that even low values of CC$_{\text{max}}$ are compatible with lensing, and that pairs of GRBs that are known not to be lensed can have very similar spectra and light curves. While the NL sample and the light curve simulations have been very useful for guiding the analysis, they cannot be used to quantify the probability of false positives or negatives. In addition to the simplified nature of the simulations, the main complicating factor is that the L sample is biased towards lower SNR. This bias arises because GRBs with low SNR tend to have larger uncertainties on position and spectral parameters, making them more likely to pass the selection criteria. For comparison, the mean SNR for the L sample is about 50 \% lower than that of the NL sample. This bias is not accounted for in the simulated sample, making quantitative comparison to the L sample difficult. The main challenges with identifying lensed GRBs from current observational data are the large localization uncertainties and lack of redshift measurements.
GRB observations from {\it Swift} are superior to {\it Fermi}~GBM in these respects, but {\it Swift} also has a significantly smaller field of view, which makes the probability of observing a pair of lensed GRBs very low \citep{Li2014}. While the identification of large numbers of lensed GRBs will most likely have to await future missions, the discovery of a lensed pair in the growing samples from current telescopes remains a possibility. As we have shown, it is important to consider both light curves and time-resolved spectra, and to assess properties like CC$_{\text{max}}$ using simulations. It is clear that the use of a hard cut on CC$_{\text{max}}$ is the main uncertainty in our search for lenses, with Figure~\ref{fig:CCdistr_L_SL} demonstrating that there is a high probability for lensed pairs to be excluded in this step. By contrast, investigations of our simulated sample suggest that most lensed pairs would pass the initial hard cuts on duration and time-averaged spectra (Section~\ref{sec:samples}), which is expected since these cuts are very conservative. Additionally, lensed pairs that are selected based on a high CC$_{\text{max}}$ are also expected to pass the final time-resolved spectral analysis (Section~\ref{sec:timeResolvedSpectra}). This has been investigated using simulations similar to those described in Section~\ref{sec:spectralAnalysis}. However, we caution that simulations with a realistic lens model (e.g., considering larger variations in flux between the two GRBs in a lensed pair) may give different results. A possible way to improve the use of the CC in future studies is to carry out extensive simulations to assess the CC$_{\text{max}}$ for each pair of GRBs, using methods similar to those in Section~\ref{sec:crossCorrelation}. This may lead to the identification of promising candidates that are missed when only selecting based on high values of CC$_{\text{max}}$.

Finally, we note that the search for lensed GRBs has provided information regarding the diversity of GRBs. It is notable that our most promising candidates were single-pulsed GRBs, as were most of the 315 GRBs that we investigated based on their high values of CC$_{\text{max}}$. Our results show that some of these GRBs also have very similar spectra. This suggests the existence of a relatively simple physical scenario producing the emission, as well as similarities between the progenitors. Further examination of the properties of these GRBs may help shed light on the nature of GRB progenitors and the origin of the prompt emission.

\section{Summary and conclusions} \label{sec:summary} We have searched for gravitationally lensed pairs of GRBs, specifically considering the case of macrolensing, in 11 years of \textit{Fermi}~GBM data. The sample consists of about 3.6 million unique pairs. We begin by eliminating burst pairs that are incompatible with a common physical origin based on sky localization, relative duration and time-averaged spectral information available from the \textit{Fermi}~GBM catalog. We then use the CC to investigate the similarity of light curves, and finally analyze the time-resolved spectra of the most promising pairs. We find no convincing cases of gravitationally lensed GRBs. The most similar pairs have single-peaked smooth light curves with relatively few time bins for the spectral analysis. This is best explained by similarities within the GRB population rather than lensing. We stress that this study does not rule out the existence of gravitationally lensed GRBs in the sample.
By simulating light curves, we show that the CC$_{\text{max}}$ distribution compatible with lensed GRBs is broad. This means that a high CC$_{\text{max}}$ alone is not an adequate measure by which to identify lenses. Similarly, a low value of CC$_{\text{max}}$ does not necessarily reject a lensing scenario. We conclude that null results of studies that rely mainly on the value of CC$_{\text{max}}$ from binned light curves (which includes previously mentioned studies on this topic; \citealt{Hurley:2019km,Li2014,Veres2009,2011AIPC.1358...17D}) cannot be used to make reliable inferences about the lens populations. Constraints on, e.g., dark matter distributions derived from such null results are therefore unreliable. To refine the search for lens candidates, it is important to also consider spectral information. Although time-averaged spectral properties are sufficient to rule out many pairs, we find that a time-resolved spectral analysis is a powerful tool to further eliminate candidates. However, as for CC$_{\text{max}}$, it is clear that a similar spectral evolution on its own does not provide sufficient evidence for lensing. This is evident from the fact that we identify GRB pairs that are known not to be lensed, but which still exhibit similar spectral evolution. We conclude that the identification of lensed GRBs requires a comparison of both light curves and time-resolved spectra, and that the significance of the similarities/differences must be assessed with simulations. With these techniques it is possible that convincing lens candidates can be identified in the growing samples of GRBs from current missions. Ultimately, a much larger fraction of well-localized GRBs with redshift measurements is needed to identify lensed GRBs with a high degree of confidence. \acknowledgments This work was supported by the Knut \& Alice Wallenberg Foundation. \vspace{5mm} \facilities{{\it Fermi} (GBM)} \software{Scipy \citep{2019arXiv190710121V}, Astropy \citep{2013A&A...558A..33A}, 3ML \citep{2015arXiv150708343V}, Seaborn \citep{michael_waskom_2017_883859}, IPython \citep{PER-GRA:2007}}
\section{Notations}
\indent The letter $p$ with or without subscript will always denote prime numbers. We denote by $(m,n)$ the greatest common divisor of $m$ and $n$. Moreover, $e(t)=\exp(2\pi it)$. As usual, $\varphi(d)$ is Euler's function, $r(d)$ is the number of solutions of the equation $d=m_1^2+m_2^2$ in integers $m_j$, $\chi(d)$ is the non-principal character modulo 4 and $L(s,\chi)$ is the corresponding Dirichlet's $L$ -- function. We shall use the convention that a congruence $m\equiv n\,\pmod {d}$ will be written as $m\equiv n\,(d)$. We denote by $\lfloor t\rfloor$, $\lceil t\rceil$ and $\{t\}$ respectively the floor function, the ceiling function and the fractional part function of $t$. Let $\lambda_1,\lambda_2,\lambda_3$ be non-zero real numbers, not all of the same sign, such that $\lambda_1/\lambda_2$ is irrational. Then there are infinitely many different convergents $a_0/q_0$ to its continued fraction, with
\begin{equation}\label{lambda12a0q0}
\bigg|\frac{\lambda_1}{\lambda_2} - \frac{a_0}{q_0}\bigg|<\frac{1}{q_0^2}\,,\quad (a_0, q_0) = 1\,,\quad a_0\neq0
\end{equation}
and $q_0$ arbitrarily large. Denote
\begin{align}
\label{X}
&q_0^2=\frac{X}{(\log X)^{22}}\,;\\
\label{D}
&D=\frac{X^{1/2}}{(\log X)^{52}}\,;\\
\label{Delta}
&\Delta=\frac{(\log X)^{23}}{X}\,;\\
\label{theta0}
&\theta_0=\frac{1}{2}-\frac{1}{4}e\log2=0.0289...;\\
\label{varepsilon}
&\varepsilon=\frac{(\log\log X)^7}{(\log X)^{\theta_0}}\,;\\
\label{H}
&H=\frac{\log^2X}{\varepsilon}\,;\\
\label{SldalphaX}
&S_{l,d;J}(\alpha, X)=\sum\limits_{p\in J\atop{p\equiv l\, (d)}} e(\alpha p)\log p \,,\quad J\subset(\lambda_0X,X]\,, \quad 0<\lambda_{0}<1\,;\\
\label{SalphaX}
&S(\alpha, X)=S_{1,1;(\lambda_0X,X]}(\alpha, X)\,;\\
\label{IJalphaX}
&I_J(\alpha,X)=\int\limits_Je(\alpha y)\,dy\,;\\
\label{IalphaX}
&I(\alpha,X)=I_{(\lambda_0X,X]}(\alpha, X)\,;\\
\label{Exqa}
&E(x,q,a)=\sum_{p\le x\atop{p\equiv a\,(q)}} \, \log p-\frac{x}{\varphi(q)}\,.
\end{align}
\section{Introduction and statement of the result}
\indent In 1960 Linnik \cite{Linnik} proved that there exist infinitely many prime numbers of the form $p=x^2 + y^2 +1$, where $x$ and $y$ are integers. More precisely, he proved the asymptotic formula
\begin{equation*}
\sum_{p\leq X}r(p-1)=\pi\prod_{p>2}\bigg(1+\frac{\chi(p)}{p(p-1)}\bigg)\frac{X}{\log X}+ \mathcal{O}\bigg(\frac{X(\log\log X)^7}{(\log X)^{1+\theta_0}}\bigg)\,,
\end{equation*}
where $\theta_0$ is defined by \eqref{theta0}. Seven years later Baker \cite{Baker} showed that whenever $\lambda_1,\lambda_2,\lambda_3$ are non-zero real numbers, not all of the same sign, $\lambda_1/\lambda_2$ is irrational and $\eta$ is real, there are infinitely many prime triples $p_1,\,p_2,\,p_3$ such that
\begin{equation}\label{Inequality1}
|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\xi\,,
\end{equation}
where $\xi=(\log \max p_j)^{-A}$ and $A>0$ is an arbitrarily large constant. Later the right-hand side of \eqref{Inequality1} was sharpened several times, and the best result up to now belongs to K. Matom\"{a}ki \cite{Matomaki} with $\xi=(\max p_j)^{-2/9+\delta}$, $\delta>0$. After Matom\"{a}ki's result, inequality \eqref{Inequality1} was solved with prime numbers of a special form. Let $P_l$ denote a number with at most $l$ prime factors. The author and Todorova \cite{Dimitrov4}, and the author \cite{Dimitrov2}, proved that \eqref{Inequality1} has a solution in primes $p_i$ such that $p_i+2=P_l$,\;$i=1,\,2,\,3$.
Very recently the author \cite{Dimitrov3} showed that \eqref{Inequality1} has a solution in Piatetski-Shapiro primes $p_1,\,p_2,\,p_3$ of type $\gamma\in(37/38, 1)$. In this paper we continue to solve inequality \eqref{Inequality1} with prime numbers of a special type. More precisely, we shall prove the solvability of \eqref{Inequality1} with Linnik primes. Thus we establish the following theorem.
\begin{theorem}\label{Theorem} Suppose that $\lambda_1,\lambda_2,\lambda_3$ are non-zero real numbers, not all of the same sign, $\lambda_1/\lambda_2$ is irrational and $\eta$ is real. Then there exist infinitely many triples of primes $p_1,\,p_2,\,p_3$ for which
\begin{equation*}
|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\frac{(\log\log \max p_j)^7}{(\log \max p_j)^{\theta_0}}
\end{equation*}
and such that $p_3=x^2 + y^2 +1$. Here $\theta_0$ is defined by \eqref{theta0}.
\end{theorem}
\vspace{1mm}
In addition, we have the following challenge.
\begin{conjecture} Let $\varepsilon>0$ be a small constant. Suppose that $\lambda_1,\lambda_2,\lambda_3$ are non-zero real numbers, not all of the same sign, $\lambda_1/\lambda_2$ is irrational and $\eta$ is real. Then there exist infinitely many triples of primes $p_1,\,p_2,\,p_3$ for which
\begin{equation*}
|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon
\end{equation*}
and such that $p_1=x_1^2 + y_1^2 +1$,\, $p_2=x_2^2 + y_2^2 +1$,\, $p_3=x_3^2 + y_3^2 +1$.
\end{conjecture}
The author wishes success to all young researchers in attacking this hard hypothesis.
\section{Preliminary lemmas}
\indent
\begin{lemma}\label{Fourier} Let $\varepsilon>0$ and $k\in \mathbb{N}$. There exists a function $\theta(y)$ which is $k$ times continuously differentiable and such that
\begin{align*}
&\theta(y)=1\quad\quad\quad\mbox{for }\quad\quad|y|\leq 3\varepsilon/4\,;\\
&0<\theta(y)<1\quad\mbox{for}\quad3\varepsilon/4 <|y|< \varepsilon\,;\\
&\theta(y)=0\quad\quad\quad\mbox{for}\quad\quad|y|\geq \varepsilon\,,
\end{align*}
and its Fourier transform
\begin{equation*}
\Theta(x)=\int\limits_{-\infty}^{\infty}\theta(y)e(-xy)dy
\end{equation*}
satisfies the inequality
\begin{equation*}
|\Theta(x)|\leq\min\bigg(\frac{7\varepsilon}{4},\frac{1}{\pi|x|},\frac{1}{\pi |x|} \bigg(\frac{k}{2\pi |x|\varepsilon/8}\bigg)^k\bigg)\,.
\end{equation*}
\end{lemma}
\begin{proof} See (\cite{Shapiro}). \end{proof}
\begin{lemma}\label{SIasympt} Let $|\alpha|\leq\Delta$. Then for the sum denoted by \eqref{SalphaX} and the integral denoted by \eqref{IalphaX} the asymptotic formula
\begin{equation*}
S(\alpha,X)=I(\alpha,X)+\mathcal{O}\left(\frac{X}{e^{(\log X)^{1/5}}}\right)
\end{equation*}
holds.
\end{lemma}
\begin{proof} Arguing as in (\cite{Tolev}, Lemma 14) we establish Lemma \ref{SIasympt}. \end{proof}
\begin{lemma}\label{Bomb-Vin}(Bombieri -- Vinogradov) For any $C>0$ the following inequality
\begin{equation*}
\sum\limits_{q\le X^{\frac{1}{2}}/(\log X)^{C+5}} \max\limits_{y\le X}\max\limits_{(a,\,q)=1} \big|E (y,\,q,\,a)\big|\ll \frac{X}{(\log X)^{C}}
\end{equation*}
holds.
\end{lemma}
\begin{proof} See (\cite{Davenport}, Ch.28). \end{proof}
\begin{lemma}\label{Expsumest} Suppose that $\alpha \in \mathbb{R}$,\, $a \in \mathbb{Z}$,\, $q\in \mathbb{N}$,\, $\big|\alpha-\frac{a}{q}\big|\leq\frac{1}{q^2}$\,, $(a, q)=1$. Let
\begin{equation*}
\Sigma(\alpha,X)=\sum\limits_{p\le X}e(\alpha p)\log p\,.
\end{equation*}
Then
\begin{equation*}
\Sigma(\alpha, X)\ll \Big(Xq^{-1/2}+X^{4/5}+X^{1/2}q^{1/2}\Big)\log^4X\,.
\end{equation*}
\end{lemma}
\begin{proof} See (\cite{Iwaniec-Kowalski}, Theorem 13.6).
\end{proof}
\begin{lemma}\label{Halberstam-Richert} Let $k\in\mathbb N$; $l,a,b\in\mathbb Z$ and $ab\neq0$. Let $x$ and $y$ be real numbers satisfying
\begin{equation*}
k<y\leq x\,.
\end{equation*}
Then
\begin{equation*}
\#\{p\,:x-y<p\leq x,\,p\equiv l\,(k),\,ap+b=p'\}\ll\prod\limits_{p\mid kab}\bigg(1-\frac{1}{p}\bigg)^{-1}\frac{y}{\varphi(k)\log^2(y/k)}\,.
\end{equation*}
\end{lemma}
\begin{proof} See (\cite{Halberstam}, Ch.2, Corollary 2.4.1). \end{proof}
The next two lemmas are due to C. Hooley.
\begin{lemma}\label{Hooley1} For any constant $\omega>0$ we have
\begin{equation*}
\sum\limits_{p\leq X}\bigg|\sum\limits_{d|p-1\atop{\sqrt{X}(\log X)^{-\omega}<d<\sqrt{X}(\log X)^{\omega}}} \chi(d)\bigg|^2\ll \frac{X(\log\log X)^7}{\log X}\,,
\end{equation*}
where the constant in the Vinogradov symbol depends on $\omega>0$.
\end{lemma}
\begin{lemma}\label{Hooley2} Suppose that $\omega>0$ is a constant and let $\mathcal{F}_\omega(X)$ be the number of primes $p\leq X$ such that $p-1$ has a divisor in the interval $\big(\sqrt{X}(\log X)^{-\omega}, \sqrt{X}(\log X)^\omega\big)$. Then
\begin{equation*}
\mathcal{F}_\omega(X)\ll\frac{X(\log\log X)^3}{(\log X)^{1+2\theta_0}}\,,
\end{equation*}
where $\theta_0$ is defined by \eqref{theta0} and the constant in the Vinogradov symbol depends only on $\omega>0$.
\end{lemma}
The proofs of very similar results are available in (\cite{Hooley}, Ch.5).
\section{Outline of the proof}
\indent Consider the sum
\begin{equation}\label{Gamma}
\Gamma(X)= \sum\limits_{\lambda_0X<p_1,p_2,p_3\leq X\atop{|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon}} r(p_3-1)\log p_1\log p_2\log p_3\,.
\end{equation}
Any non-trivial lower bound of $\Gamma(X)$ implies solvability of $|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon$ in primes such that $p_3=x^2 + y^2 +1$. We have
\begin{equation}\label{GammaGamma0}
\Gamma(X)\geq\Gamma_0(X)\,,
\end{equation}
where
\begin{equation}\label{Gamma0}
\Gamma_0(X)=\sum\limits_{\lambda_0X<p_1,p_2,p_3\leq X}r(p_3-1) \theta(\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)\log p_1 \log p_2\log p_3\,.
\end{equation}
Using \eqref{Gamma0} and the well-known identity $r(n)=4\sum_{d|n}\chi(d)$ we write
\begin{equation} \label{Gamma0decomp}
\Gamma_0(X)=4\big(\Gamma_1(X)+\Gamma_2(X)+\Gamma_3(X)\big),
\end{equation}
where
\begin{align}
\label{Gamma1}
&\Gamma_1(X)=\sum\limits_{\lambda_0X<p_1,p_2,p_3\leq X} \left(\sum\limits_{d|p_3-1\atop{d\leq D}}\chi(d)\right) \theta(\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)\log p_1\log p_2\log p_3\,,\\
\label{Gamma2}
&\Gamma_2(X)=\sum\limits_{\lambda_0X<p_1,p_2,p_3\leq X} \left(\sum\limits_{d|p_3-1\atop{D<d<X/D}}\chi(d)\right) \theta(\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)\log p_1\log p_2\log p_3\,,\\
\label{Gamma3}
&\Gamma_3(X)=\sum\limits_{\lambda_0X<p_1,p_2,p_3\leq X} \left(\sum\limits_{d|p_3-1\atop{d\geq X/D}}\chi(d)\right) \theta(\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)\log p_1\log p_2\log p_3\,.
\end{align}
In order to estimate $\Gamma_1(X)$ and $\Gamma_3(X)$ we have to consider the sum
\begin{equation} \label{Ild}
I_{l,d;J}(X)=\sum\limits_{\lambda_0X<p_1,p_2\leq X\atop{p_3\equiv l\,(d) \atop{p_3\in J}}}\theta(\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)\log p_1\log p_2\log p_3\,,
\end{equation}
where $d$ and $l$ are coprime natural numbers, and $J\subset(\lambda_0X,X]$ is a subinterval. If $J=(\lambda_0X,X]$ then we write for simplicity $I_{l,d}(X)$.
Using the inverse Fourier transform for the function $\theta(x)$ we get
\begin{align*}
I_{l,d;J}(X)&=\sum\limits_{\lambda_0X<p_1,p_2\leq X\atop{p_3\equiv l\,(d)\atop{p_3\in J}}}\log p_1\log p_2\log p_3 \int\limits_{-\infty}^{\infty}\Theta(t)e\big((\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta)t\big)\,dt\\
&=\int\limits_{-\infty}^{\infty}\Theta(t)S(\lambda_1t,X)S(\lambda_2t,X)S_{l,d;J}(\lambda_3t,X)e(\eta t)\,dt\,.
\end{align*}
We decompose $I_{l,d;J}(X)$ as follows
\begin{equation}\label{Ilddecomp}
I_{l,d;J}(X)=I^{(1)}_{l,d;J}(X)+I^{(2)}_{l,d;J}(X)+I^{(3)}_{l,d;J}(X)\,,
\end{equation}
where
\begin{align}
\label{Ild1}
&I^{(1)}_{l,d;J}(X)=\int\limits_{|t|<\Delta}\Theta(t)S(\lambda_1t,X)S(\lambda_2t,X)S_{l,d;J}(\lambda_3t,X)e(\eta t)\,dt\,,\\
\label{Ild2}
&I^{(2)}_{l,d;J}(X)=\int\limits_{\Delta\leq|t|\leq H}\Theta(t)S(\lambda_1t,X)S(\lambda_2t,X)S_{l,d;J}(\lambda_3t,X)e(\eta t)\,dt\,,\\
\label{Ild3}
&I^{(3)}_{l,d;J}(X)=\int\limits_{|t|>H}\Theta(t)S(\lambda_1t,X)S(\lambda_2t,X)S_{l,d;J}(\lambda_3t,X)e(\eta t)\,dt\,.
\end{align}
We shall estimate $I^{(1)}_{l,d;J}(X)$, $I^{(3)}_{l,d;J}(X)$, $\Gamma_3(X),\,\Gamma_2(X)$ and $\Gamma_1(X)$, respectively, in Sections \ref{SectionIld1}, \ref{SectionIld3}, \ref{SectionGamma3}, \ref{SectionGamma2} and \ref{SectionGamma1}. In Section~\ref{Sectionfinal} we shall complete the proof of Theorem \ref{Theorem}.
\section{Asymptotic formula for $\mathbf{I^{(1)}_{l,d;J}(X)}$}\label{SectionIld1}
\indent Denote
\begin{align}
\label{S1}
&S_1=S(\lambda_1t, X)\,,\\
\label{S2}
&S_2=S(\lambda_2t, X)\,, \\
\label{S3}
&S_3=S_{l,d;J}(\lambda_3t,X)\,,\\
\label{I1}
&I_1=I(\lambda_1t, X)\,,\\
\label{I2}
&I_2=I(\lambda_2t, X)\,, \\
\label{I3}
&I_3=\frac{1}{\varphi(d)}I_J(\lambda_3t, X)\,.
\end{align}
We use the identity
\begin{equation}\label{Identity}
S_1S_2S_3=I_1I_2I_3+(S_1-I_1)I_2I_3+S_1(S_2-I_2)I_3+S_1S_2(S_3-I_3)\,.
\end{equation}
From \eqref{Delta}, \eqref{SldalphaX}, \eqref{IJalphaX}, \eqref{Exqa}, \eqref{S3}, \eqref{I3} and Abel's summation formula it follows that
\begin{equation}\label{S3I3}
S_3=I_3+\mathcal{O}\bigg(\Delta X\max\limits_{y\in(\lambda_{0}X,X]}\big|E(y, d, l)\big|\bigg)\,.
\end{equation}
Now using \eqref{SalphaX} -- \eqref{IalphaX}, \eqref{S1} -- \eqref{S3I3}, Lemma \ref{SIasympt} and the trivial estimates
\begin{equation*}
S_1, S_2, I_2\ll X \,, \quad I_3\ll \frac{X}{\varphi(d)}
\end{equation*}
we get
\begin{equation}\label{S123I123}
S_1S_2S_3-I_1I_2I_3\ll X^3\Bigg(\frac{1}{\varphi(d)e^{(\log X)^{1/5}}} +\Delta\max\limits_{y\in(\lambda_{0}X,X]}\big|E(y,d,l)\big|\Bigg)\,.
\end{equation}
Put
\begin{equation}\label{PhiX}
\Phi(X)=\frac{1}{\varphi(d)}\int\limits_{|t|<\Delta}\Theta(t)I(\lambda_1t,X)I(\lambda_2t,X)I_J(\lambda_3t,X)e(\eta t)\,dt\,.
\end{equation}
Taking into account \eqref{Ild1}, \eqref{S123I123}, \eqref{PhiX} and Lemma \ref{Fourier} we find
\begin{equation}\label{Ild1-PhiX}
I^{(1)}_{l,d;J}(X)-\Phi(X)\ll \varepsilon\Delta X^3\Bigg(\frac{1}{\varphi(d)e^{(\log X)^{1/5}}} +\Delta\max\limits_{y\in(\lambda_{0}X,X]}\big|E(y,d,l)\big|\Bigg)\,.
\end{equation}
On the other hand, for the integral defined by \eqref{PhiX} we write
\begin{equation}\label{JXest1}
\Phi(X)=\frac{1}{\varphi(d)}B_J(X)+\Omega\,,
\end{equation}
where
\begin{equation*}
B_J(X)=\int\limits_J\int\limits_{\lambda_0X}^{X}\int\limits_{\lambda_0X}^{X} \theta(\lambda_1y_1+\lambda_2y_2+\lambda_3y_3+\eta)\,dy_1\,dy_2\,dy_3
\end{equation*}
and
\begin{equation}\label{Omega}
\Omega\ll\frac{1}{\varphi(d)}\int\limits_{\Delta}^{\infty }|\Theta(t)| |I(\lambda_1t,X)I(\lambda_2t,X)I_J(\lambda_3t,X)|\,dt\,.
\end{equation}
By \eqref{IJalphaX} and \eqref{IalphaX} we get
\begin{equation}\label{IalphaXest}
I_J(\alpha,X)\ll\frac{1}{|\alpha|}\,, \quad I(\alpha,X)\ll\frac{1}{|\alpha|}\,.
\end{equation}
Using \eqref{Omega}, \eqref{IalphaXest} and Lemma \ref{Fourier} we deduce
\begin{equation}\label{Omegaest}
\Omega\ll\frac{\varepsilon}{\varphi(d)}\int\limits_{\Delta}^{\infty}\frac{dt}{t^3}\ll\frac{\varepsilon}{\varphi(d)\Delta^2}\,.
\end{equation}
Bearing in mind \eqref{Delta}, \eqref{Ild1-PhiX}, \eqref{JXest1} and \eqref{Omegaest} we find
\begin{equation}\label{Ild1est}
I^{(1)}_{l,d;J}(X)=\frac{1}{\varphi(d)}B_J(X) +\mathcal{O}\bigg(\varepsilon\Delta^2 X^3\max\limits_{y\in(\lambda_{0}X,X]}\big|E(y,d,l)\big|\bigg) +\mathcal{O}\bigg(\frac{\varepsilon}{\varphi(d)\Delta^2}\bigg)\,.
\end{equation}
\section{Upper bound of $\mathbf{I^{(3)}_{l,d;J}(X)}$}\label{SectionIld3}
\indent By \eqref{SldalphaX}, \eqref{SalphaX}, \eqref{Ild3} and Lemma \ref{Fourier} it follows that
\begin{equation}\label{Ild3est1}
I^{(3)}_{l,d;J}(X)\ll \frac{X^3\log X}{d}\int\limits_{H}^{\infty}\frac{1}{t}\bigg(\frac{k}{2\pi t\varepsilon/8}\bigg)^k \,dt =\frac{X^3\log X}{dk}\bigg(\frac{4k}{\pi\varepsilon H}\bigg)^k\,.
\end{equation}
Choosing $k=\lfloor\log X\rfloor$, from \eqref{H} and \eqref{Ild3est1} we obtain
\begin{equation}\label{Ild3est2}
I^{(3)}_{l,d;J}(X)\ll\frac{1}{d}\,.
\end{equation}
\section{Upper bound of $\mathbf{\Gamma_3(X)}$}\label{SectionGamma3}
\indent Consider the sum $\Gamma_3(X)$.\\
Since
\begin{equation*}
\sum\limits_{d|p_3-1\atop{d\geq X/D}}\chi(d)=\sum\limits_{m|p_3-1\atop{m\leq (p_3-1)D/X}} \chi\bigg(\frac{p_3-1}{m}\bigg) =\sum\limits_{j=\pm1}\chi(j)\sum\limits_{m|p_3-1\atop{m\leq (p_3-1)D/X \atop{\frac{p_3-1}{m}\equiv j\;(\textmd{mod}\,4)}}}1
\end{equation*}
then from \eqref{Gamma3} and \eqref{Ild} it follows that
\begin{equation*}
\Gamma_3(X)=\sum\limits_{m<D\atop{2|m}}\sum\limits_{j=\pm1}\chi(j)I_{1+jm,4m;J_m}(X)\,,
\end{equation*}
where $J_m=\big(\max\{1+mX/D,\lambda_0X\},X\big]$. The last formula and \eqref{Ilddecomp} yield
\begin{equation}\label{Gamma3decomp}
\Gamma_3(X)=\Gamma_3^{(1)}(X)+\Gamma_3^{(2)}(X)+\Gamma_3^{(3)}(X)\,,
\end{equation}
where
\begin{equation}\label{Gamma3i}
\Gamma_3^{(i)}(X)=\sum\limits_{m<D\atop{2|m}}\sum\limits_{j=\pm1}\chi(j) I_{1+jm,4m;J_m}^{(i)}(X)\,,\;\; i=1,\,2,\,3.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_3^{(1)}(X)}$}
\indent First we consider $\Gamma_3^{(1)}(X)$. From \eqref{Ild1est} and \eqref{Gamma3i} we deduce
\begin{align}\label{Gamma31}
\Gamma_3^{(1)}(X)=\Gamma^*&+\mathcal{O}\Big(\varepsilon\Delta^2 X^3\Sigma_1\Big) +\mathcal{O}\bigg(\frac{\varepsilon}{\Delta^2}\Sigma_2\bigg)\,,
\end{align}
where
\begin{align}
\label{Gamma*}
&\Gamma^*=B_J(X)\sum\limits_{m<D\atop{2|m}}\frac{1}{\varphi(4m)}\sum\limits_{j=\pm1}\chi(j)\,,\\
\label{Sigma1}
&\Sigma_1=\sum\limits_{m<D\atop{2|m}}\max\limits_{y\in(\lambda_{0}X,X]}\big|E(y,4m,1+jm)\big|\,,\\
\label{Sigma2}
&\Sigma_2=\sum\limits_{m<D}\frac{1}{\varphi(4m)}\,.
\end{align}
From the properties of $\chi$ we have that
\begin{equation}\label{Gamma*est}
\Gamma^*=0\,.
\end{equation}
By \eqref{D}, \eqref{Sigma1} and Lemma \ref{Bomb-Vin} we get
\begin{equation}\label{Sigma1est}
\Sigma_1\ll\frac{X}{(\log X)^{47}}\,.
\end{equation}
It is well known that
\begin{equation}\label{Sigma2est}
\Sigma_2\ll \log X\,.
\end{equation}
Bearing in mind \eqref{Delta}, \eqref{Gamma31}, \eqref{Gamma*est}, \eqref{Sigma1est} and \eqref{Sigma2est} we obtain
\begin{equation}\label{Gamma31est}
\Gamma_3^{(1)}(X)\ll\frac{\varepsilon X^2}{\log X}\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_3^{(2)}(X)}$}
\indent Next we consider $\Gamma_3^{(2)}(X)$. From \eqref{Ild2} and \eqref{Gamma3i} we have
\begin{equation}\label{Gamma32}
\Gamma_3^{(2)}(X)=\int\limits_{\Delta\leq|t|\leq H}\Theta(t) S(\lambda_1t,X)S(\lambda_2t ,X)K(\lambda_3 t, X)e(\eta t)\,dt\,,
\end{equation}
where
\begin{equation}\label{Klambda3X}
K(\lambda_3t, X)=\sum\limits_{m<D\atop{2|m}}\sum\limits_{j=\pm1}\chi(j)S_{1+jm,4m;J_m}(\lambda_3t,X)\,.
\end{equation}
Suppose that
\begin{equation}\label{alphaaq}
\bigg|\alpha -\frac{a}{q}\bigg|\leq\frac{1}{q^2}\,,\quad (a, q)=1
\end{equation}
with
\begin{equation}\label{Intq}
q\in\left[(\log X)^{22},\,\frac{X}{(\log X)^{22}}\right]\,.
\end{equation}
Then \eqref{SalphaX}, \eqref{alphaaq}, \eqref{Intq} and Lemma \ref{Expsumest} give us
\begin{equation}\label{Salphaest}
S(\alpha,\,X)\ll \frac{X}{(\log X)^7}\,.
\end{equation}
Let
\begin{equation}\label{mathfrakS}
\mathfrak{S}(t,X)=\min\left\{\left|S(\lambda_{1}t,\,X)\right|,\left|S(\lambda_2 t,\,X)\right|\right\}\,.
\end{equation}
Using \eqref{lambda12a0q0}, \eqref{Salphaest}, \eqref{mathfrakS} and working similarly to (\cite{Dimitrov4}, Lemma 6) we establish that there exists a sequence of real numbers $X_1,\,X_2,\ldots \to \infty$ such that
\begin{equation}\label{mathfrakSest}
\mathfrak{S}(t, X_j)\ll \frac{X_j}{(\log X_j)^7}\,,\;\; j=1,2,\dots\,.
\end{equation}
Using \eqref{Gamma32}, \eqref{mathfrakS}, \eqref{mathfrakSest} and Lemma \ref{Fourier} we obtain
\begin{align}\label{Gamma32est1}
\Gamma_3^{(2)}(X_j)&\ll\varepsilon\int\limits_{\Delta\leq|t|\leq H}\mathfrak{S}(t, X_j) \Big(\big|S(\lambda_1 t, X_j)K(\lambda_3 t, X_j)\big| +\big|S(\lambda_2 t, X_j)K(\lambda_3 t, X_j)\big|\Big)\,dt\nonumber\\
&\ll\varepsilon\int\limits_{\Delta\leq|t|\leq H}\mathfrak{S}(t, X_j) \Big(\big|S(\lambda_1 t, X_j)\big|^2 +\big|S(\lambda_2 t, X_j)\big|^2+\big|K(\lambda_3 t, X_j)\big|^2\Big)\,dt\nonumber\\
&\ll\varepsilon \frac{X_j}{(\log X_j)^7}\big(T_1+T_2+T_3\big)\,,
\end{align}
where
\begin{align}
\label{Tk}
&T_k=\int\limits_{\Delta}^H\big|S(\lambda_k t, X_j)\big|^2\,dt\,,\; k=1,2,\\
\label{T3}
&T_3=\int\limits_{\Delta}^H\big|K(\lambda_3 t, X_j)\big|^2\,dt\,.
\end{align}
From \eqref{Delta}, \eqref{H}, \eqref{SalphaX} and \eqref{Tk}, after straightforward computations we get
\begin{equation}\label{Tkest}
T_k\ll HX_j\log X_j\,,\; k=1,2\,.
\end{equation}
Taking into account \eqref{Delta}, \eqref{H}, \eqref{Klambda3X}, \eqref{T3} and proceeding as in (\cite{Dimitrov1}, p. 14) we find
\begin{equation}\label{T3est}
T_3\ll HX_j\log^3X_j\,.
\end{equation}
By \eqref{varepsilon}, \eqref{H}, \eqref{Tkest} and \eqref{T3est}, and noting that $\varepsilon H=\log^2X_j$, we deduce
\begin{equation}\label{Gamma32est2}
\Gamma_3^{(2)}(X_j)\ll \varepsilon\frac{X_j}{(\log X_j)^7}\cdot\frac{X_j\log^5X_j}{\varepsilon} =\frac{X^2_j}{(\log X_j)^2}\ll\frac{\varepsilon X_j^2}{\log X_j}\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_3^{(3)}(X)}$}
\indent From \eqref{Ild3est2} and \eqref{Gamma3i} we have
\begin{equation}\label{Gamma33est}
\Gamma_3^{(3)}(X)\ll\sum\limits_{m<D}\frac{1}{m}\ll \log X\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_3(X)}$}
\indent Summarizing \eqref{Gamma3decomp}, \eqref{Gamma31est}, \eqref{Gamma32est2} and \eqref{Gamma33est} we get
\begin{equation}\label{Gamm3est}
\Gamma_3(X_j)\ll\frac{\varepsilon X_j^2}{\log X_j}\,.
\end{equation}
\section{Upper bound of $\mathbf{\Gamma_2(X)}$}\label{SectionGamma2}
\indent Consider the sum $\Gamma_2(X)$. We denote by $\mathcal{F}(X)$ the set of all primes $\lambda_0X<p\leq X$ such that $p-1$ has a divisor belonging to the interval $(D,X/D)$. The inequality $xy\leq x^2+y^2$ and \eqref{Gamma2} yield
\begin{align*}
\Gamma_2(X)^2&\ll(\log X)^6\sum\limits_{\lambda_0X<p_1,...,p_6\leq X \atop{|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon \atop{|\lambda_1p_4+\lambda_2p_5+\lambda_3p_6+\eta|<\varepsilon}}} \left|\sum\limits_{d|p_3-1\atop{D<d<X/D}}\chi(d)\right| \left|\sum\limits_{t|p_6-1\atop{D<t<X/D}}\chi(t)\right|\\
&\ll(\log X)^6\sum\limits_{\lambda_0X<p_1,...,p_6\leq X \atop{|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon \atop{|\lambda_1p_4+\lambda_2p_5+\lambda_3p_6+\eta|<\varepsilon \atop{p_6\in\mathcal{F}(X)}}}}\left|\sum\limits_{d|p_3-1 \atop{D<d<X/D}}\chi(d)\right|^2\,.
\end{align*}
The summands in the last sum for which $p_3=p_6$ can be estimated by $\mathcal{O}\big(X^{3+\varepsilon}\big)$.\\
Therefore
\begin{equation}\label{Gamma2est1}
\Gamma_2(X)^2\ll(\log X)^6\Sigma_0+X^{3+\varepsilon}\,,
\end{equation}
where
\begin{equation}\label{Sigma0}
\Sigma_0=\sum\limits_{\lambda_0X<p_3\leq X}\left|\sum\limits_{d|p_3-1 \atop{D<d<X/D}}\chi(d)\right|^2\sum\limits_{\lambda_0X<p_6\leq X\atop{p_6\in\mathcal{F}(X) \atop{p_6\neq p_3}}}\sum\limits_{\lambda_0X<p_1,p_2,p_4,p_5\leq X \atop{|\lambda_1p_1+\lambda_2p_2+\lambda_3p_3+\eta|<\varepsilon \atop{|\lambda_1p_4+\lambda_2p_5+\lambda_3p_6+\eta|<\varepsilon}}}1\,.
\end{equation}
Since $\lambda_1,\lambda_2,\lambda_3$ are not all of the same sign, without loss of generality we can assume that $\lambda_1>0,\,\lambda_2>0$ and $\lambda_3<0$. Now let us consider the set
\begin{equation}\label{SetPsi}
\Psi(X)=\{\langle p_1,p_2\rangle\;:\; |\lambda_1p_1+\lambda_2p_2+D|<\varepsilon, \;\;\lambda_0X<p_1,p_2\leq X,\;\;D\asymp X\}\,.
\end{equation}
We shall find an upper bound for the cardinality of $\Psi(X)$. Using
\begin{equation}\label{binaryinequality}
|\lambda_1p_1+\lambda_2p_2+D|<\varepsilon
\end{equation}
we write
\begin{equation}\label{p1p2D}
\left|\frac{\lambda_1}{\lambda_2}p_1+p_2+\frac{D}{\lambda_2}\right|<\frac{\varepsilon}{\lambda_2}\,.
\end{equation}
Since $\lambda_2$ is fixed, for sufficiently large $X$ we have that $\varepsilon/\lambda_2$ is sufficiently small. Therefore \eqref{p1p2D} implies
\textbf{Case 1.}
\begin{equation*}
\left\lfloor\frac{\lambda_1}{\lambda_2}p_1+p_2\right\rfloor=\left\lfloor-\frac{D}{\lambda_2}\right\rfloor
\end{equation*}
or
\textbf{Case 2.}
\begin{equation*}
\left\lceil\frac{\lambda_1}{\lambda_2}p_1+p_2\right\rceil=\left\lceil-\frac{D}{\lambda_2}\right\rceil
\end{equation*}
or
\textbf{Case 3.}
\begin{equation*}
\left\lfloor\frac{\lambda_1}{\lambda_2}p_1+p_2\right\rfloor=\left\lceil-\frac{D}{\lambda_2}\right\rceil
\end{equation*}
or
\textbf{Case 4.}
\begin{equation*}
\left\lceil\frac{\lambda_1}{\lambda_2}p_1+p_2\right\rceil=\left\lfloor-\frac{D}{\lambda_2}\right\rfloor\,.
\end{equation*}
We shall consider only Case 1; Cases 2, 3 and 4 are treated similarly.
From Case 1 we have
\begin{equation*}
\left\lfloor\frac{\lambda_1}{\lambda_2}p_1\right\rfloor+p_2=\left\lfloor-\frac{D}{\lambda_2}\right\rfloor
\end{equation*}
thus
\begin{equation*}
\left\lfloor\left(\left\lfloor\frac{\lambda_1}{\lambda_2}\right\rfloor +\left\{\frac{\lambda_1}{\lambda_2}\right\}\right)p_1\right\rfloor +p_2=\left\lfloor-\frac{D}{\lambda_2}\right\rfloor
\end{equation*}
and therefore
\begin{equation}\label{Equality1}
\left\lfloor\frac{\lambda_1}{\lambda_2}\right\rfloor p_1 +p_2=\left\lfloor-\frac{D}{\lambda_2}\right\rfloor -\left\lfloor\left\{\frac{\lambda_1}{\lambda_2}\right\} p_1 \right\rfloor\,.
\end{equation}
Bearing in mind the definition \eqref{SetPsi} we deduce that there exist constants $C_1>0$ and $C_2>0$ such that
\begin{equation*}
C_1X\leq\left\lfloor-\frac{D}{\lambda_2}\right\rfloor -\left\lfloor\left\{\frac{\lambda_1}{\lambda_2}\right\} p_1 \right\rfloor\leq C_2X\,.
\end{equation*}
Consequently, there exists a constant $C\in[C_1, C_2]$ such that
\begin{equation}\label{Equality2}
\left\lfloor-\frac{D}{\lambda_2}\right\rfloor -\left\lfloor\left\{\frac{\lambda_1}{\lambda_2}\right\} p_1 \right\rfloor= CX\,.
\end{equation}
The equalities \eqref{Equality1} and \eqref{Equality2} give us
\begin{equation}\label{Equations}
\left\lfloor\frac{\lambda_1}{\lambda_2}\right\rfloor p_1+p_2=CX\,,
\end{equation}
for some constant $C\in[C_1, C_2]$. We have established that the number of solutions of inequality \eqref{binaryinequality} does not exceed the total number of solutions of all equations of the form \eqref{Equations}. According to Lemma \ref{Halberstam-Richert}, for any fixed $C\in[C_1, C_2]$ participating in \eqref{Equations} we have
\begin{equation}\label{p1p2CX}
\#\{\langle p_1,p_2\rangle\;:\; \left\lfloor\lambda_1/\lambda_2\right\rfloor p_1+p_2=CX, \;\;\lambda_0X<p_1,p_2\leq X\}\ll\frac{X\log\log X}{\log^2X}\,.
\end{equation}
Taking into account that $C\leq C_2$, from \eqref{SetPsi}, \eqref{binaryinequality} and \eqref{p1p2CX} we find
\begin{equation}\label{Psiest}
\#\Psi(X)\ll\frac{X\log\log X}{\log^2X}\,.
\end{equation}
The estimates \eqref{Sigma0} and \eqref{Psiest} yield
\begin{equation}\label{Sigma0est}
\Sigma_0\ll\frac{X^2}{\log^4X}(\log\log X)^2\Sigma^\prime\Sigma^{\prime\prime}\,,
\end{equation}
where
\begin{equation*}
\Sigma^\prime=\sum\limits_{\lambda_0X<p\leq X}\left|\sum\limits_{d|p-1 \atop{D<d<X/D}}\chi(d)\right|^2\,,\quad \Sigma^{\prime\prime}= \sum\limits_{\lambda_0X<p\leq X\atop{p\in\mathcal{F}(X)}}1\,.
\end{equation*}
Applying Lemma \ref{Hooley1} we obtain
\begin{equation}\label{Sigma'est}
\Sigma^\prime\ll\frac{X(\log\log X)^7}{\log X}\,.
\end{equation}
Using Lemma \ref{Hooley2} we get
\begin{equation}\label{Sigma''est}
\Sigma^{\prime\prime}\ll\frac{X(\log\log X)^3}{(\log X)^{1+2\theta_0}}\,,
\end{equation}
where $\theta_0$ is defined by \eqref{theta0}. We are now in a good position to estimate the sum $\Gamma_2(X)$. From \eqref{Gamma2est1}, \eqref{Sigma0est} -- \eqref{Sigma''est} it follows that
\begin{equation}\label{Gamma2est2}
\Gamma_2(X)\ll\frac{ X^2(\log\log X)^6}{(\log X)^{\theta_0}}=\frac{\varepsilon X^2}{\log\log X}\,.
\end{equation}
\section{Lower bound for $\mathbf{\Gamma_1(X)}$}\label{SectionGamma1}
\indent Consider the sum $\Gamma_1(X)$. From \eqref{Gamma1}, \eqref{Ild} and \eqref{Ilddecomp} we deduce
\begin{equation}\label{Gamma1decomp}
\Gamma_1(X)=\Gamma_1^{(1)}(X)+\Gamma_1^{(2)}(X)+\Gamma_1^{(3)}(X)\,,
\end{equation}
where
\begin{equation}\label{Gamma1i}
\Gamma_1^{(i)}(X)=\sum\limits_{d\leq D}\chi(d)I_{1,d}^{(i)}(X)\,,\;\; i=1,\,2,\,3.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_1^{(1)}(X)}$}
\indent First we consider $\Gamma_1^{(1)}(X)$. Using formula \eqref{Ild1est} for $J=(\lambda_0X,X]$, \eqref{Gamma1i} and treating the remainder term in the same way as for $\Gamma_3^{(1)}(X)$ we find
\begin{equation} \label{Gamma11est1}
\Gamma_1^{(1)}(X)=B(X)\sum\limits_{d\leq D}\frac{\chi(d)}{\varphi(d)} +\mathcal{O}\bigg(\frac{\varepsilon X^2}{\log X}\bigg)\,,
\end{equation}
where
\begin{equation*}
B(X)=\int\limits_{\lambda_0X}^{X}\int\limits_{\lambda_0X}^{X}\int\limits_{\lambda_0X}^{X} \theta(\lambda_1y_1+\lambda_2y_2+\lambda_3y_3+\eta)\,dy_1\,dy_2\,dy_3\,.
\end{equation*}
According to (\cite{Dimitrov4}, Lemma 4) we have
\begin{equation}\label{Best}
B(X)\gg\varepsilon X^2\,.
\end{equation}
Denote
\begin{equation} \label{Sigmaf}
\Sigma=\sum\limits_{d\leq D}f(d)\,,\quad f(d)=\frac{\chi(d)}{\varphi(d)}\,.
\end{equation}
We have
\begin{equation}\label{fdest}
f(d)\ll d^{-1}\log\log(10d)
\end{equation}
with an absolute constant in the Vinogradov symbol. Hence the corresponding Dirichlet series
\begin{equation*}
F(s)=\sum\limits_{d=1}^\infty\frac{f(d)}{d^s}
\end{equation*}
is absolutely convergent in $Re(s)>0$. On the other hand, $f(d)$ is multiplicative with respect to $d$, and applying Euler's identity we obtain
\begin{equation}\label{FT}
F(s)=\prod\limits_pT(p,s)\,,\quad T(p,s)=1+\sum\limits_{l=1}^\infty f(p^l)p^{-ls}\,.
\end{equation}
By \eqref{Sigmaf} and \eqref{FT} we establish that
\begin{equation*}
T(p,s)=\left(1-\frac{\chi(p)}{p^{s+1}}\right)^{-1}\left(1+\frac{\chi(p)}{p^{s+1}(p-1)}\right)\,.
\end{equation*}
Hence we find
\begin{equation}\label{Fs}
F(s)=L(s+1,\chi)\mathcal{N}(s)\,,
\end{equation}
where $L(s+1,\chi)$ is the Dirichlet series corresponding to the character $\chi$ and
\begin{equation}\label{Ns}
\mathcal{N}(s)=\prod\limits_p \left(1+\frac{\chi(p)}{p^{s+1}(p-1)}\right)\,.
\end{equation}
From the properties of the $L$ -- functions it follows that $F(s)$ has an analytic continuation to $Re(s)>-1$. It is well known that
\begin{equation}\label{Lsest}
L(s+1,\chi)\ll1+\left|Im(s)\right|^{1/6}\quad \mbox{for}\quad Re(s)\geq-\frac{1}{2}\,.
\end{equation}
Moreover
\begin{equation}\label{Nsest}
\mathcal{N}(s)\ll1\,.
\end{equation}
By \eqref{Fs}, \eqref{Lsest} and \eqref{Nsest} we deduce
\begin{equation}\label{Fsest}
F(s)\ll X^{1/6}\quad \mbox{for}\quad Re(s)\geq-\frac{1}{2}\,,\quad |Im(s)|\leq X\,.
\end{equation}
Using \eqref{Sigmaf}, \eqref{fdest} and Perron's formula given in Tenenbaum (\cite{Tenenbaum}, Chapter II.2) we obtain
\begin{equation}\label{SigmaPeron}
\Sigma=\frac{1}{2\pi i}\int\limits_{\varkappa- iX}^{\varkappa+iX}F(s)\frac{D^s}{s}ds +\mathcal{O}\left(\sum\limits_{t=1}^\infty\frac{D^\varkappa\log\log(10t)}{t^{1+\varkappa} \left(1+X\left|\log\frac{D}{t}\right|\right)}\right)\,,
\end{equation}
where $\varkappa=1/10$. It is easy to see that the error term above is $\mathcal{O}\Big(X^{-1/20}\Big)$. Applying the residue theorem we see that the main term in \eqref{SigmaPeron} is equal to
\begin{equation*}
F(0)+\frac{1}{2\pi i}\left(\int\limits_{1/10-iX}^{-1/2-i X}+ \int\limits_{-1/2-iX}^{-1/2+i X}+\int\limits_{-1/2+i X}^{1/10+i X}\right)F(s)\frac{D^s}{s}ds\,.
\end{equation*}
From \eqref{Fsest} it follows that the contribution from the above integrals is $\mathcal{O}\Big(X^{-1/20}\Big)$.\\
Hence
\begin{equation}\label{Sigmaest}
\Sigma=F(0)+\mathcal{O}\Big(X^{-1/20}\Big)\,.
\end{equation}
Using \eqref{Fs} we get
\begin{equation}\label{F0}
F(0)=\frac{\pi}{4}\mathcal{N}(0)\,.
\end{equation}
Bearing in mind \eqref{Gamma11est1}, \eqref{Sigmaf}, \eqref{Ns}, \eqref{Sigmaest} and \eqref{F0} we find a new expression for $\Gamma_1^{(1)}(X)$
\begin{equation}\label{Gamma11est2}
\Gamma_1^{(1)}(X)=\frac{\pi}{4}\prod\limits_p \left(1+\frac{\chi(p)}{p(p-1)}\right) B(X) +\mathcal{O}\bigg(\frac{\varepsilon X^2}{\log X}\bigg)+\mathcal{O}\Big(B(X)X^{-1/20}\Big)\,.
\end{equation}
Now \eqref{Best} and \eqref{Gamma11est2} yield
\begin{equation}\label{Gamma11est3}
\Gamma_1^{(1)}(X)\gg\varepsilon X^2\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_1^{(2)}(X)}$}
\indent Arguing as in the estimation of $\Gamma_3^{(2)}(X)$ we get
\begin{equation} \label{Gamma12est}
\Gamma_1^{(2)}(X)\ll\frac{\varepsilon X^2}{\log X}\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_1^{(3)}(X)}$}
\indent From \eqref{Ild3est2} and \eqref{Gamma1i} we have
\begin{equation}\label{Gamma13est}
\Gamma_1^{(3)}(X)\ll\sum\limits_{d\leq D}\frac{1}{d}\ll \log X\,.
\end{equation}
\subsection{Estimation of $\mathbf{\Gamma_1(X)}$}
\indent Summarizing \eqref{Gamma1decomp}, \eqref{Gamma11est3}, \eqref{Gamma12est} and \eqref{Gamma13est} we deduce
\begin{equation} \label{Gamma1est}
\Gamma_1(X)\gg\varepsilon X^2\,.
\end{equation}
\section{Proof of the Theorem}\label{Sectionfinal}
\indent Taking into account \eqref{varepsilon}, \eqref{GammaGamma0}, \eqref{Gamma0decomp}, \eqref{Gamm3est}, \eqref{Gamma2est2} and \eqref{Gamma1est} we obtain
\begin{equation*}
\Gamma(X_j)\gg\varepsilon X_j^2=\frac{X_j^2(\log\log X_j)^7}{(\log X_j)^{\theta_0}}\,.
\end{equation*}
The last lower bound implies
\begin{equation}\label{Lowerbound}
\Gamma(X_j) \rightarrow\infty \quad \mbox{ as } \quad X_j\rightarrow\infty\,.
\end{equation}
Bearing in mind \eqref{Gamma} and \eqref{Lowerbound} we establish Theorem \ref{Theorem}.
\section{Introduction}
Recently, code-switching speech recognition (CSSR) has drawn increased attention in the automatic speech recognition (ASR) community\cite{yilmaz2018cs,guo2018cs,zeng2018e2e,Shan2019cs, khassanov2019e2e}. Here, code-switching refers to the linguistic phenomenon in which one speaks different languages within an utterance or between consecutive utterances. Intuitively, we can build a CSSR system simply by merging data of the two languages. However, such a CSSR system can rarely produce optimal recognition results. This is because there are no within-utterance code-switching samples, and the resulting ASR system frequently fails to recognize code-switching utterances\cite{khassanov2019e2e,toshniwal2018multilingual}. In reality, we usually have very limited code-switching training data but much more monolingual data~\cite{khassanov2019e2e}. As a result, we treat CSSR as a low-resource ASR problem~\cite{das2015tl,Dalmia2018low-res-asr,Tung2020low-res-asr}. The question is how to fully exploit those ``unlimited'' monolingual data to boost the CSSR performance.

In this paper, we report our efforts in terms of data selection\footnote{Here, data selection simply means how to reasonably exploit monolingual data for a CSSR system with a specific code-switching pattern.}~\cite{itoh2012data-selection,wei2014data-selection-lvcsr,wei2014data-selection-unsup} for an English-Mandarin code-switching speech recognition contest sponsored by DataTang\footnote{Datatang: https://www.datatang.ai/} in China. The contest drew more than 70 participants worldwide. The organizer released two training data sets, namely, 200 hours of English-Mandarin code-switching data and 500 hours of Mandarin data, of which about 15 hours are similar code-switching data. Additionally, 960 hours of LibriSpeech English data~\cite{chen2015prn-lex,albert2018e2e-libri} are allowed to be used. There are three tracks, and one of them is for the hybrid DNN-HMM based ASR system competition. Besides the above-mentioned three training data sets, a trigram language model is given as well, and the only flexibility is that participants are allowed to use their own lexicon to build their ASR systems. In this work, all the ASR results are based on the DNN-HMM system~\cite{povey2018tdnnf}.

The motivation of this work lies in the observation that we end up with worse results when using all three data sets together. This motivates us to study how to exploit the two monolingual data sets reasonably. To begin with, we add Mandarin data incrementally, and we found consistent performance improvement on the Mandarin part, with an insignificant performance drop on the English part. Nevertheless, we found that there is a limit point beyond which more data degrades the results. We conduct the same experiments with the LibriSpeech English data; however, we observe a consistent performance drop. We conjecture this is due to the data mismatch problem. To look further into the problem, we categorize the test utterances into different subsets according to how many English words each utterance contains, and then analyze the performance change on each subset. Moreover, we are interested in a condition where no code-switching data is available at all. As a result, one has to utilize monolingual data to train a code-switching system. Particularly, we are interested in how much effectiveness can be achieved by merging pure monolingual data, compared with the case where code-switching data is available.
We remove all the code-switching data from the 500-hour Mandarin data set, and incrementally merge the LibriSpeech English data into it to train a CSSR system. To our surprise, using more English data is consistently beneficial to the performance on both languages. We notice that this is one of the significant differences between the hybrid DNN-HMM ASR system and the end-to-end one~\cite{zhou2018comparison-e2e,karita2019comparative-e2e}. When trained on merged monolingual data, end-to-end CSSR systems have been observed to fail on utterances containing code-switching words~\cite{khassanov2019e2e,toshniwal2018multilingual}.

The paper is organized as follows. Section~\ref{sec:dd} describes the overall data for the experiments that follow. Section~\ref{sec:e-setup} reports the experimental setup. Section~\ref{sec:baseline} describes the development of our baseline system. Section~\ref{sec:add-mono} analyzes the effectiveness of using more Mandarin monolingual data, as well as more English data. Section~\ref{sec:pure-mono} reports the effectiveness of using monolingual data to train a CSSR system. Finally, we conclude in Section~\ref{sec:con}.

\section{Data description} \label{sec:dd} All data sets released by the organizer are reported in Table~\ref{tab:dataset}. Each participant has three training data sets and three test data sets, namely two \textit{dev} sets and one evaluation set that is released after the contest. The code-switching data are Mandarin-dominated. Figure~\ref{fig:cs-hist} reveals the English word distribution in the four code-switching data sets in Table~\ref{tab:dataset}, namely T1 and E1 to E3. As is shown in Figure~\ref{fig:cs-hist}, the majority of utterances contain only a single English word. Besides, the English word distributions are very similar between the training data T1 and the \textit{dev} set E1, while the cases of E2 and E3 are more alike.

\begin{table}[th] \caption{Data sets and length description} \label{tab:dataset} \centering \begin{tabular}{c c c} \toprule \multicolumn{1}{c}{\textbf{Notation}} & \multicolumn{1}{c}{\textbf{Data\ Set}} & \multicolumn{1}{c}{\textbf{Hours}}\\ \midrule $T1$ & $\text{CS200}$ & $200$~~~\\ $T2$ & $\text{Man500}$ & $500$~~~\\ $T3$ & $\text{Libri}$ & $960$~~~\\ \midrule $E1$ & $\text{dev1}$ & $20$~~~\\ $E2$ & $\text{dev2}$ & $20$~~~\\ $E3$ & $\text{eval}$ & $20$~~~\\ \bottomrule \end{tabular} \end{table}

\begin{figure}[th] \centering \captionsetup{justification=centering} \includegraphics[width=6cm]{images/cs-histogram.png} \caption[]{English word distribution of the code-switching \\data sets, including T1, E1 to E3 in Table~\ref{tab:dataset}.} \label{fig:cs-hist} \end{figure}

\section{Experimental setup}\label{sec:e-setup} All experiments are conducted with Kaldi\footnote{https://kaldi-asr.org/}. The acoustic models are trained with the Lattice-free Maximum Mutual Information (LF-MMI) criterion~\cite{povey2016lf-mmi} over the Factorized Time-delay Neural Network (TDNNf)~\cite{povey2018tdnnf}. The acoustic feature is the concatenation of 40-dimensional MFCC features and 100-dimensional i-vectors~\cite{saon2013ivector,peddinti2015tdnn}. The TDNNf is made up of 15 layers, and each layer is decomposed as 1536x256, 256x1536, where 256 is the dimension of the bottleneck layer. Besides, the activation function is the Rectified Linear Unit (ReLU)~\cite{dahl2013relu}. To train the TDNNf, data augmentation is employed~\cite{ko2015da}. The vocabulary size is $\sim$200k, of which $\sim$121k are Mandarin words and $\sim$79k are English words.
We use a language-dependent phone set, and there are 210 Mandarin initials and finals~\cite{guo2018cs}, as well as 42 English phones. As mentioned, the language model is released by the organizer; it is a trigram model that includes $\sim$813k Mandarin words and $\sim$116k English words.

\section{Baseline} \label{sec:baseline} We report our efforts on building the baseline system. Since the data and language models are fixed, our efforts are mainly focused on choosing an appropriate lexicon. Initially, we use our in-house lexicon. As there is a mismatch between the language models and the lexicon in Chinese word segmentation, we decompose all out-of-vocabulary Chinese words into characters, yielding our initial lexicon, denoted as L0 here. After submitting our evaluation results, we realized there were many entries with incorrect pronunciations for the Chinese part in L0. We conducted manual checking and updated L0 to L1. Simultaneously, we also attempt to use a grapheme lexicon for English words, and this is inspired by \cite{duc2019grapheme}. Table~\ref{tab:baseline-lex} reports our baseline results in terms of Mixed Error Rate (MER) using only the \textit{CS200} code-switching training data. By MER, we mean a token error rate that is a mixture of the Chinese character error rate and the English word error rate.

\begin{table}[th] \caption{MER (\%) with different efforts on the recognition lexicons; the systems are trained with \textit{CS200} code-switching training data} \label{tab:baseline-lex} \centering \begin{tabular}{c c c c} \toprule \multicolumn{1}{c}{\textbf{Dictionary}} & \multicolumn{3}{c}{\textbf{MER (\%)}}\\ $ $ & dev1 & dev2 & eval \\ \midrule L0 & $7.43$ & $6.81$ & $7.51$~~~\\ L1 & $7.07$ & $6.28$ & $7.08$~~~\\ Grapheme & $7.11$ & $6.40$ & $7.20$~~~\\ \bottomrule \end{tabular} \end{table}

From Table~\ref{tab:baseline-lex}, the lexicon after manual checking gives the best results. Therefore, we use L1 for the remaining experiments in this paper. Table~\ref{tab:baseline-overall} reports experimental results using various data combination recipes.

\begin{table}[th] \caption{MER (\%) results by combining various training data sets to train the ASR system} \label{tab:baseline-overall} \centering \begin{tabular}{l c c c} \toprule \multicolumn{1}{c}{\textbf{Data}} & \multicolumn{3}{c}{\textbf{MER (\%)}}\\ $ $ & dev1 & dev2 & eval \\ \midrule CS200 & $7.07$ & $6.28$ & $7.08$\\ +CS15 & $6.93$ & $6.14$ & $6.87$\\ +Man500 & $\textbf{6.87}$ & $\textbf{5.91}$ & $\textbf{6.63}$\\ +Libri & $7.58$ & $6.67$ & $7.50$\\ \bottomrule \end{tabular} \end{table}

Table~\ref{tab:baseline-overall} suggests that employing more Mandarin data works, yielding a significant MER reduction; however, employing more \textit{Libri} English data degrades the results. These results suggest that data selection matters, and it is worthwhile to look into the details.

\section{Analysis of adding monolingual data} \label{sec:add-mono} \subsection{Mandarin data} \label{sub:add-mandarin} To begin with, we fix the \textit{CS200} code-switching data, incrementally increasing monolingual Mandarin data from the \textit{Man500}. We are interested to see how the recognition results will change for each individual language, as well as for the two languages combined. Figure~\ref{fig:mandarin-data} plots the change of the recognition results versus the amount of Mandarin data added. We notice that we select the data as follows.
\section{Analysis of adding monolingual data}
\label{sec:add-mono}
\subsection{Mandarin data}
\label{sub:add-mandarin}
To begin with, we fix the \textit{CS200} code-switching data and incrementally increase the amount of monolingual Mandarin data drawn from \textit{Man500}. We are interested in how the recognition results change for each individual language, as well as for the two languages combined. Figure~\ref{fig:mandarin-data} plots the change of the recognition results versus the incremental increase of Mandarin data usage. The data are selected as follows: we first ensure each selection covers all speakers, then determine how many utterances are to be selected, and finalize the subset by random selection (a code sketch is given below).

\begin{figure}[th]
\centering
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_mer.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_cn.png}
\caption[]{}
\end{subfigure}
\vskip 0cm
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_en.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_eval.png}
\caption[]{}
\end{subfigure}
\caption[]{Recognition results change versus Mandarin data selection from the \textit{Man500} data set. (a) Overall MER (\%) change on the three data sets \textit{dev1}, \textit{dev2}, and \textit{eval}. (b) Character error rate (CER) (\%) for the Mandarin part in the three test sets. (c) English WER (\%) in the three test sets. (d) \textit{eval} MER (\%) comparison between the systems with different training sets.}
\label{fig:mandarin-data}
\end{figure}

From Figure~\ref{fig:mandarin-data}, our observations can be briefly summarized as follows. First, employing more Mandarin data generally produces improved results on the Mandarin part, as shown in Figure~\ref{fig:mandarin-data}(b). Secondly, more Mandarin data does not significantly affect the English recognition results, as indicated in Figure~\ref{fig:mandarin-data}(c). Thirdly, more Mandarin data beyond a certain point can hurt the overall performance, as can be seen in Figure~\ref{fig:mandarin-data}(d); the best performance on the \textit{eval} test set is achieved when about 400 hours of Mandarin data are employed.

We attribute the improvement to the close similarity between \textit{Man500} and the Mandarin part of the \textit{CS200} code-switching data; indeed, both are read speech released by the organizer. We also think there is a balance point, beyond which more monolingual Mandarin data can divert the model from its code-switching capability. However, these conjectures need further support from more detailed analysis. To see how the added Mandarin monolingual data affects the results for the different categories of utterances shown in Figure~\ref{fig:cs-hist}, we draw Figure~\ref{fig:mandarin-data-detail}, revealing the details. From the figure, we can see that the more English words an utterance contains, the less performance gain is obtained from additional Mandarin monolingual data. This is particularly true for Figure~\ref{fig:mandarin-data-detail}(c)-(d), where utterances contain no fewer than 3 English words and the performance does not improve stably. From Figure~\ref{fig:mandarin-data-detail}(d), using more Mandarin data actually degrades the results on the \textit{eval} test set.
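For completeness, the data-selection protocol mentioned earlier in this subsection can be sketched as follows (a minimal illustration; the identifiers are ours, not taken from our actual scripts):

\begin{verbatim}
import random
from collections import defaultdict

def select_subset(utts, target_n, seed=0):
    """utts: list of (utt_id, speaker_id) pairs; returns a set of
    utt_ids covering every speaker, topped up at random."""
    rng = random.Random(seed)
    by_spk = defaultdict(list)
    for utt, spk in utts:
        by_spk[spk].append(utt)
    # Step 1: every speaker contributes at least one utterance.
    chosen = {rng.choice(v) for v in by_spk.values()}
    # Step 2: fill the remaining budget by random selection.
    rest = [u for u, _ in utts if u not in chosen]
    rng.shuffle(rest)
    chosen.update(rest[:max(0, target_n - len(chosen))])
    return chosen
\end{verbatim}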
\begin{figure}[th]
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_eng1.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_eng2.png}
\caption[]{}
\end{subfigure}
\vskip 0cm
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_eng3.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_man500_eng4.png}
\caption[]{}
\end{subfigure}
\caption[]{MER (\%) change versus Mandarin data selection from the \textit{Man500} data set for 4 categories of utterances as shown in Figure~\ref{fig:cs-hist}: (a) utterances with a single English word; (b) utterances with 2 English words; (c) utterances with 3 English words; (d) utterances with no fewer than 4 English words.}
\label{fig:mandarin-data-detail}
\end{figure}

Combining what is shown in Figures~\ref{fig:mandarin-data} and \ref{fig:mandarin-data-detail}, the recognition performance gains come mainly from the utterances containing a single English word. From this perspective, the data-matching conjecture is supported. However, the added Mandarin data provides limited or even negative help for those utterances containing more English words. We achieve the overall performance improvement shown in Figure~\ref{fig:mandarin-data} only because single-English-word utterances dominate, as shown in Figure~\ref{fig:cs-hist}.

\subsection{English data}\label{sub:engglish-data}
In this section, we are interested in how the English-Mandarin code-switching ASR system is affected by using more English monolingual data. Here, the English data is \textit{LibriSpeech}, as shown in Table~\ref{tab:dataset}. It is worth mentioning that we do this on top of the ASR system trained with \textit{CS200} plus \textit{Man500}, instead of directly combining \textit{CS200} and \textit{LibriSpeech}. Figure~\ref{fig:librispeech} plots the recognition performance change versus incrementally introducing \textit{LibriSpeech} data. Figure~\ref{fig:librispeech} clearly reveals recognition performance degradation as more English data are merged, on both the Mandarin and English parts of the code-switching utterances. Intuitively, we attribute this to a data mismatch problem. That is, \textit{LibriSpeech} is mismatched with the English spoken by Mainland Chinese speakers, and such a mismatch can be enlarged when single English words are embedded in Chinese word sequences. To verify this conjecture, we again need further details.

\begin{figure}[th]
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_MER.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_CN.png}
\caption[]{}
\end{subfigure}
\vskip 0cm
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_EN.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_eval.png}
\caption[]{}
\end{subfigure}
\caption[]{Results change versus incrementally increasing English data from \textit{LibriSpeech}.
(a) Overall MER (\%) change on the three test sets; (b)\&(c) performance changes for the Mandarin part (CER, \%) and the English part (WER, \%), respectively; (d) \textit{eval} MER (\%) comparison against the system that employs no English data from \textit{LibriSpeech} at all.}\label{fig:librispeech}
\end{figure}

Figure~\ref{fig:libri-detail} shows the detailed performance changes for the 4 groups of utterances defined in Figure~\ref{fig:cs-hist}. From what Figure~\ref{fig:libri-detail} shows, we cannot attribute the performance degradation to data mismatch alone. For Mandarin-dominated utterances, as seen in Figure~\ref{fig:libri-detail}(a)-(b), introducing English data does not help, and more English data degrades the results. However, for utterances less dominated by Mandarin words, as shown in Figure~\ref{fig:libri-detail}(c)-(d), where the utterances contain 3 or more English words, using more English data does not necessarily lead to a performance drop. This suggests that data mismatch is one reason and the code-switching pattern is another: different code-switching patterns are affected differently by monolingual data usage. However, we would need English data from Mainland Chinese speakers to draw a solid conclusion.

\begin{figure}[th]
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_eng1.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_eng2.png}
\caption[]{}
\end{subfigure}
\vskip 0cm
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_eng3.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/ds_libri_eng4.png}
\caption[]{}
\end{subfigure}
\caption[]{MER (\%) change versus English data selection from \textit{LibriSpeech} for 4 groups of utterances as shown in Figure~\ref{fig:cs-hist}: (a) utterances with a single English word; (b) utterances with 2 English words; (c) utterances with 3 English words; (d) utterances with no fewer than 4 English words.}\label{fig:libri-detail}
\end{figure}

\section{Pure monolingual data}
\label{sec:pure-mono}
A code-switching ASR system is much more flexible than a monolingual one. However, real code-switching data is a low-resource commodity and hard to access. One natural consideration is to build a code-switching ASR system by merging related monolingual data. This has been extensively studied under the End-to-end (E2E) ASR framework~\cite{duc2019grapheme}. Unfortunately, a state-of-the-art E2E ASR system almost completely fails to recognize utterances mixing words from different languages (within-utterance code-switching)~\cite{duc2019grapheme} when it is trained with monolingual data only. This is because the E2E system has never seen utterances containing code-switching word sequences during training, and thus the decoder fails to predict such sequences. Here, we examine whether such a disastrous case also happens under the HMM-DNN ASR framework. Specifically, we first make a subset of the \textit{Man500} set by removing all code-switching utterances from it. We then fix this Mandarin subset and incrementally merge the English data from \textit{LibriSpeech}. Figure~\ref{fig:monolingual} plots the recognition results of a code-switching ASR system trained with monolingual data only.
\begin{figure}[t]
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/3_mer.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/1_man_cer.png}
\caption[]{}
\end{subfigure}
\vskip 0cm
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/2_en_wer.png}
\caption[]{}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.214\textwidth}
\centering
\includegraphics[width=\textwidth]{images/4_eval_mer.png}
\caption[]{}
\end{subfigure}
\caption[]{\small Result analysis of the code-switching system built by merging monolingual data. (a) MER (\%) on the 3 test sets; (b) CER (\%) for the Mandarin part; (c) WER (\%) for the English part; (d) \textit{eval} MER (\%) comparison between the two systems with different ``code-switching'' training data.}\label{fig:monolingual}
\end{figure}

From Figure~\ref{fig:monolingual}, we see consistent performance improvements from merging the two monolingual data sets. Looking into Figure~\ref{fig:monolingual}(b), the CER drop for the Mandarin part suggests that the coexistence of the individual languages is critical in the code-switching scenario: improving one language can benefit the recognition performance of the other. However, there is a balance point, and it should be related to the specific code-switching pattern, as shown in Section~\ref{sec:add-mono}. Looking into Figure~\ref{fig:monolingual}(c), we see sharp WER drops for the English part. However, the mismatch conjecture of Section~\ref{sub:engglish-data} is observable here as well: the WER stops dropping at above 50\%, which is much higher than what is shown in Figure~\ref{fig:mandarin-data}(c), even though the overall WER decrease has not fully saturated. Finally, in Figure~\ref{fig:monolingual}(d), we compare the performance of the two systems on the \textit{eval} set: one trained with monolingual data (though the English part is not matched), and the other trained with the \textit{CS200} code-switching data. The gap is still remarkable, and this should be further studied under a fair comparison scenario.

\section{Conclusion}
\label{sec:con}
In this work, we conducted a thorough study of code-switching system performance under diversified data selections. First, we analyzed the code-switching pattern: the data sets are Mandarin-dominated, for both training and testing. We also found that monolingual data helps, but data matching is crucial. Besides, there is a balance point for how much monolingual data should be employed, which depends on the specific code-switching environment. Finally, we analyzed the performance of a code-switching system assuming no real code-switching data is available, under the HMM-DNN modeling framework. We found that the HMM-DNN ASR system still performs well for within-utterance code-switching recognition. Our observation is completely different from what has been observed under the End-to-end ASR framework.

\section{Acknowledgements}
\label{sec:ack}
The computational work for this paper is partially performed on the resources of the National Supercomputing Centre (NSCC), Singapore (https://www.nscc.sg).

\bibliographystyle{IEEEtran}
\section{Introduction} Since COBE's discovery \citep{Mather1990} that the spectrum of the cosmic microwave background (CMB) is a near perfect blackbody, all succeeding measurements of this relic signal (including those reported in refs.~\citealt{Hinshaw2003,Planck2014}) have been interpreted self-consistently in terms of a model in which the radiation was thermalized within one year of the big bang. Diffusing through a gradually thinning, scattering-dominated medium, these photons eventually streamed freely once the protons and electrons in the cosmic fluid combined to form neutral hydrogen and helium, a process (not so accurately) referred to as `recombination.' The actual origin of the CMB was not always so evident, however, and serious consideration had been given to the possibility that it was produced by dust in the early Universe, injected into the interstellar medium (ISM) by Pop III stars \citep{Rees1978,Rowan1979,Wright1982}. An additional attraction of this scenario was the likelihood that the photons rethermalized by dust were themselves emitted by the same stars, thereby closing the loop on a potentially elegant, self-consistent physical picture. But the dust model for the CMB very quickly gave ground to recombination for several telling reasons. Two of them, in particular, relied heavily on each other and suggested quite emphatically that the surface of last scattering (LSS) had to lie at a redshift $z_{\rm cmb}\sim 1080$. First, there was the inference of a characteristic scale in the CMB's power spectrum \citep{Spergel2003} which, when identified as an acoustic horizon (see below), implied that radiation must have decoupled from the baryonic fluid no more than $\sim 380,000$ yrs after the big bang (placing it at the aforementioned $z_{\rm cmb}\sim 1080$). Second, one could reasonably assume that the CMB propagated more or less freely after this time, so that its temperature scaled as $T(z)\propto (1+z)$. Assuming that the radiation and matter were in thermal equilibrium prior to the LSS, one could then use the Saha equation to estimate the temperature $T_{\rm cmb}$---and hence the redshift---at which the free electron fraction dropped to $50\%$, signaling the time during which the baryonic fluid transitioned from ionized plasma to neutral gas. Recombination would have occurred at $T_{\rm cmb}\sim 3,000$ K and, given a measured CMB temperature today of $\sim 2.728$ K, this would imply a redshift $z_{\rm cmb}\sim 1,100$, nicely consistent with the interpretation of the acoustic scale. In contrast, emission dominated by dust at $T_{\rm cmb}\lesssim 50$ K would have placed the redshift $z_{\rm cmb}$ at no more than $\sim 20$, creating a significant conflict with the acoustic-scale interpretation of the peaks in the CMB power spectrum. And in parallel with such arguments for recombination, there was also growing concern that Pop III starlight scattered by the stars' own ejected dust faced seemingly insurmountable difficulties accounting for the observed CMB spectrum (see, e.g., Li 2003). Today, there is very little doubt that the CMB must have formed via recombination at $z\sim 1080$ in the context of $\Lambda$CDM. A dust scenario would produce too many inconsistencies with the age-redshift relation and the Pop III star formation rate, among many other observables. 
As the precision and breadth of the measurements continued to improve, however, the basic recombination picture for the CMB's origin has not remained as clear as one might have hoped two decades ago---not because of problems with the CMB itself but, rather, because of the tension this interpretation creates with other kinds of cosmological observations. For example, from the analysis of the CMB observed with {\it Planck} (Planck Collaboration 2014), one infers a value of the Hubble constant ($H_0=67.6\pm0.9$ km s$^{-1}$ Mpc$^{-1}$) lower than is typically measured locally, and a higher value for the matter fluctuation amplitude ($\sigma_8$) than is derived from Sunyaev-Zeldovich data. Quite tellingly, none of the extensions to the six-parameter standard $\Lambda$CDM model explored by the Planck team was able to resolve these inconsistencies. As we shall see below, comparable tension now exists also between the baryon acoustic oscillation (BAO) scale inferred from the galaxy and quasar distributions at $z\sim 0.5-2.34$ and the aforementioned acoustic length seen in the CMB, weakening the argument for an LSS at $z_{\rm cmb}\sim 1080$.

Over the past decade, the standard model's inability to resolve such tensions, along with several inexplicable coincidences, has led to the development of an alternative Friedmann-Robertson-Walker cosmology known as the $R_{\rm h}=ct$ universe \citep{Melia2007,Melia2016,Melia2017b,MeliaAbdelqader2009,MeliaShevchuk2012}. During this time, the predictions of $R_{\rm h}=ct$ have been compared with those of $\Lambda$CDM using over 23 different kinds of data, outperforming the standard model in every case (see, e.g., Table I in Melia 2017a). We are therefore motivated to consider how the origin of the CMB might be interpreted in this alternative cosmology. Ironically, we shall find that---if the BAO and acoustic scales are the same---the redshift of the LSS in this model had to be $\sim 16$, remarkably close to what would have been required in the original dust model. We shall also find that this redshift sits right within the period of Pop III star formation, prior to the epoch of reionization ($5\lesssim z \lesssim 15$), a likely time during which dust would have been injected into the ISM. And quite interestingly, we shall also determine that if this model is correct, knowledge of $H_0$ and $z_{\rm cmb}$ by themselves is sufficient to argue that the CMB temperature today should be $\sim 3$ K, very close to the actual value, suggesting that the Hubble constant and the baryon to photon ratio are not independent, free parameters.

Our goal in this paper is therefore not to critique the basic recombination picture in $\Lambda$CDM which, as noted earlier, matches the data remarkably well but, rather, to demonstrate how the (now dated) dust model for the origin of the CMB may still be viable, albeit in the context of $R_{\rm h}=ct$. The growing tension between the predictions of the standard model and the ever-improving observations \citep{MeliaGenova2018} could certainly benefit from a reconsideration of a dust origin for the CMB. But our principal motivation for reanalyzing this mechanism is that, while recombination does not work for $R_{\rm h}=ct$, the dust model is unavoidable. It is our primary goal to examine how and why this association emerges naturally in this cosmology.
The analysis in this paper will show that, while dust reprocessing of radiation emitted by the same first-generation stars was part of the original proposal, our improved understanding of star formation during the Pop III era precludes this possibility. Instead, the background radiation would have originated between the big bang and decoupling, similarly to the situation in $\Lambda$CDM, but would have been reprocessed by dust prior to reionization in the context of $R_{\rm h}=ct$. A critical difference between these models is that the anisotropies in the observed CMB field would therefore correspond to large-scale structure at $z\sim 16$ in $R_{\rm h}=ct$, instead of $z\sim 1080$ in the standard picture. There are, of course, several definitive tests one may carry out to distinguish between these two scenarios, and we shall consider them in our analysis, described in detail in \S~VI. In this section, we shall also describe several potential shortcomings of a dusty origin for the CMB versus the current recombination picture, and we shall see how these are removed in the context of $R_{\rm h}=ct$, though this would not be possible in $\Lambda$CDM.

We begin in \S~II with a brief status report on the $R_{\rm h}=ct$ model, and point to the various publications where its predictions have been tested against the data. In \S~III, we discuss some relevant observational issues pertaining to the CMB, including the interpretation of the acoustic horizon as the characteristic length extracted from its power spectrum. In \S~IV we describe the BAO scale and compare it to the acoustic horizon in \S~V. In this section, we also discuss why the LSS had to be at $z_{\rm cmb}\sim 16$ if these two scales are equal. In \S~VI we describe how the CMB could have originated from dust opacity in this model, and we end with an assessment of our results in \S\S~VII and VIII.

\section{The $R_{\rm h}=ct$ Model}

The $R_{\rm h}=ct$ universe has been described extensively in the literature and its predictions have been tested against many observations at high and low redshifts. This cosmology has much in common with $\Lambda$CDM, but includes an additional ingredient motivated by several theoretical and observational arguments \citep{Melia2007,Melia2016,Melia2017b,MeliaAbdelqader2009,MeliaShevchuk2012}. Like $\Lambda$CDM, it also adopts the equation of state $p=w\rho$, with $p=p_{\rm m}+p_{\rm r}+p_{\rm de}$ and $\rho=\rho_{\rm m}+\rho_{\rm r}+\rho_{\rm de}$, but goes one step further by specifying that $w=(\rho_{\rm r}/3+w_{\rm de}\rho_{\rm de})/\rho=-1/3$ at all times. In spite of the fact that this prescription appears to be very different from the equation of state in $\Lambda$CDM, where $w=(\rho_{\rm r}/3-\rho_\Lambda)/\rho$, nature is in fact telling us that if we ignore the constraint $w=-1/3$ and instead proceed to optimize the parameters in $\Lambda$CDM by fitting the data, the resultant value of $w$, averaged over a Hubble time, is actually $-1/3$ within the measurement errors. Thus, although $w=(\rho_{\rm r}/3-\rho_\Lambda)/\rho$ in $\Lambda$CDM cannot be equal to $-1/3$ from one moment to the next, its value averaged over the age of the Universe is equal to what it would have been in $R_{\rm h}=ct$ all along. This result does not prove that $\Lambda$CDM is incomplete, but nonetheless suggests that the inclusion of the additional constraint $w=-1/3$ might render its predictions closer to the data.
By now, one-on-one comparisons between $\Lambda$CDM and $R_{\rm h}=ct$ have been carried out for a broad range of observations, from the angular correlation function of the CMB \citep{MeliaGenova2018,Melia2014b} and high-$z$ quasars \citep{Melia2013a,Melia2014c} in the early Universe, to gamma-ray bursts \citep{Wei2013} and cosmic chronometers \citep{MeliaMaier2013} at intermediate redshifts and, most recently, to the relatively nearby Type Ia SNe \citep{Wei2015}. The application of model selection tools to these tests indicates that the likelihood of $R_{\rm h}=ct$ being `closer to the correct model' is typically $\sim 90\%$ compared to only $\sim 10\%$ for $\Lambda$CDM. And most recently, the Alcock-Pacz\'ynski test using BAO measurements has been shown to favour $R_{\rm h}=ct$ over $\Lambda$CDM at high redshifts \citep{MeliaLopez2017}. There is therefore ample reason to consider the viability of the $R_{\rm h}=ct$ Universe, and to see how one might interpret the formation of the CMB in this model. This is one of several remaining critical tests facing the $R_{\rm h}=ct$ universe. We recently demonstrated that, while the angular correlation function of the CMB as measured with the latest {\it Planck} release \citep{Planck2016a} remains in tension with the predictions of $\Lambda$CDM, it is consistent with $R_{\rm h}=ct$ \citep{MeliaGenova2018}. It is still not clear, however, whether the power spectrum itself may be fully explained in this model. This paper is an important step in that direction. A second issue is whether big bang nucleosynthesis is consistent with the constant expansion rate required in this cosmology. It has been known for several decades that a linear expansion with the physical conditions in the early $\Lambda$CDM universe simply doesn't work because the radiation temperature and densities don't scale properly with redshift \citep{Kaplinghat2000,Sethi2005}. In $R_{\rm h}=ct$, however, the total equation of state is the zero active mass condition $\rho+3p=0$, in terms of the total energy density and pressure, so the various constituents in the cosmic fluid evolve differently than those in the standard model. The situation is closer to the so-called Dirac-Milne universe \citep{Benoit2012}, which also has linear expansion, so the outlook is more promising. In reality, the standard model has not yet completely solved big bang nucleosynthesis. The yields are generally consistent with the observed abundances for $^4$He, $^3$He, and $D$, but $^7$Li is over-produced by a significant amount \citep{Cyburt2008}. This problem will go away with $R_{\rm h}=ct$ nucleosynthesis, which is a two-step process, first through the thermal and homogeneous production of $^4$He and $^7$Li, and then via the production of $D$ and $^3$He. Previous work, e.g., by \cite{Benoit2012}, suggests that the timeline in this model is greatly different from that in $\Lambda$CDM. Whereas all of the burning must take place before neutrons decay in the latter, nucleosynthesis is a much slower process in the former, with a neutron pool sustained via weak interactions. The burning rate is much lower, but its duration is significantly longer, so the $^4$He is produced over a hundred million years instead of only 15 minutes. According to these earlier simulations, the Lithium anomaly largely disappears because the physical conditions during the nuclear burning are far less extreme than in $\Lambda$CDM. 
This work is well outside the scope of the present paper, of course, but we highlight it here as one of the principal remaining problems to address with this new cosmology.

The various measures of distance and time in the $R_{\rm h}=ct$ universe take on very simple forms, with very few parameters \citep{Melia2007,MeliaShevchuk2012,MeliaMaier2013}. In some applications, there are no parameters at all, making the analysis very straightforward, and the results relatively unambiguous. For example, the Hubble parameter is $H(z) = H_0(1+z)$, and the age is $t=1/H$, so the age-redshift relationship is
\begin{equation}
t(z) = {1\over H_0(1+z)}\;.
\end{equation}
And since $a(t)=(t/t_0)$ in this cosmology, we also have
\begin{equation}
(1+z)={a(t_0)\over a(t)}={t_0\over t}\;.
\end{equation}
Given the constraint on density and pressure alluded to above, it is not difficult to show how the energy density of the various constituents must evolve with redshift in this cosmology. Putting
\begin{equation}
\rho=\rho_{\rm r}+\rho_{\rm m}+\rho_{\rm de}\;,
\end{equation}
and
\begin{equation}
p=-\rho/3=w_{\rm de}\rho_{\rm de}+\rho_{\rm r}/3\;,
\end{equation}
we immediately see that
\begin{equation}
\rho_{\rm r}=-3w_{\rm de}\rho_{\rm de}-\rho\;,
\end{equation}
under the assumption that $p_{\rm r}=\rho_{\rm r}/3$ and $p_{\rm m}\approx 0$. Throughout the cosmic evolution,
\begin{equation}
\rho(t)=\rho_{\rm c}\,a(t)^{-2}\;,
\end{equation}
where $\rho_{\rm c}\equiv 3c^2H_0^2/8\pi\,G$ is the critical density and $a(t_0)=1$ in a flat universe.

\begin{figure}[h]
\vskip 0.2in
\includegraphics[width=1.0\linewidth]{f1.eps}
\caption{Schematic diagram illustrating a possible evolution of the various constituents $\rho_i$---dark energy (de), radiation (r) and matter (m)---in $R_{\rm h}=ct$, as a function of cosmic time. The conditions today imply that $w_{\rm de}=-0.5$, which then fixes $\rho_{\rm r}/\rho=0.2$ and $\rho_{\rm de}/\rho=0.8$ at $z\gg 1$, while $\rho_{\rm m}/\rho=1/3$ and $\rho_{\rm de}/\rho=2/3$ for $z\sim 0$. Radiation is dominant over matter in the region $t< t_{\rm r}$, while matter dominates over radiation for $t>t_{\rm m}$.}
\end{figure}

Equation~(5) constrains the radiation energy density in terms of dark energy and $\rho$ at any epoch. At low redshifts, however, we also know that the CMB temperature ($T_0\approx 2.728$ K) translates into a normalized radiation energy density $\Omega_{\rm r}\approx 5\times 10^{-5}$, which is negligible compared to matter and dark energy. Throughout this paper, the mass fractions $\Omega_{\rm m}\equiv \rho_{\rm m}/\rho_{\rm c}$, $\Omega_{\rm r}\equiv \rho_{\rm r}/\rho_{\rm c}$, and $\Omega_{\rm de}\equiv \rho_{\rm de}/\rho_{\rm c}$, are defined in terms of the current matter ($\rho_{\rm m}$), radiation ($\rho_{\rm r}$), and dark energy ($\rho_{\rm de}$) densities, and the critical density $\rho_{\rm c}$. Therefore, $w_{\rm de}$ must be $\sim -1/2$ in order to produce a partitioning of the constituents in line with what we see in the local Universe. With this value,
\begin{equation}
\Omega_{\rm de}= -{1\over 3w_{\rm de}}= {2\over 3}\;,
\end{equation}
while
\begin{equation}
\Omega_{\rm m}= {1+3w_{\rm de}\over 3w_{\rm de}}= {1\over 3}
\end{equation}
where, of course, $\Omega_{\rm m}=\Omega_{\rm b}+\Omega_{\rm d}$, representing both baryonic and dark matter \citep{MeliaFatuzzo2016}.
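As a quick consistency check of Equations~(7) and (8): with $\Omega_{\rm r}$ negligible today and flatness, the zero active mass condition reduces to
\[
w_{\rm de}\,\Omega_{\rm de}=-{1\over 3}\quad\Longrightarrow\quad
\Omega_{\rm de}={2\over 3}\,,\qquad
\Omega_{\rm m}=1-\Omega_{\rm de}={1\over 3}\qquad (w_{\rm de}=-1/2)\,,
\]
matching the partitioning quoted above.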
At the other extreme, when $z\gg 1$, it is reasonable to hypothesize that $\rho$ is dominated by radiation and dark energy\footnote{In the context of $R_{\rm h}=ct$, we know that radiation alone cannot sustain an equation of state $p=-\rho/3$, so dark energy is a necessary ingredient.}, so that $\rho\approx \rho_{\rm r}+\rho_{\rm de}$. In that case, one would have
\begin{equation}
\rho_{\rm de}\approx {2\over 1-3w_{\rm de}}\rho_{\rm c}(1+z)^2\quad (z\gg 1)\;,
\end{equation}
and
\begin{equation}
\rho_{\rm r}\approx {3w_{\rm de}+1\over 3w_{\rm de}-1}\rho_{\rm c}(1+z)^2\quad (z\gg 1)\;,
\end{equation}
implying a relative partitioning of $\rho_{\rm de}=0.8\rho$ and $\rho_{\rm r}=0.2\rho$ (if $w_{\rm de}$ continues to be constant at $-1/2$ towards higher redshifts). In other words, the zero active mass condition $\rho+3p=0$ would be consistent with a gradual transition of the equilibrium representation of the various constituents from the very early universe, in which $\rho_{\rm de}/\rho=0.8$, to the present, where $\rho_{\rm de}/\rho=2/3$. And during this evolution, the radiation energy density that is dominant at $z\gg 1$, with $\rho_{\rm r}/\rho=0.2$, would eventually have given way to matter with $\rho_{\rm m}/\rho=1/3$ at later times ($z\sim 0$). This evolution is shown schematically in figure~1. As we shall see shortly, the physical properties of the medium at the LSS---presumably falling between $t_{\rm r}$ and $t_{\rm m}$---provide a valuable datum in between these two extreme limits (i.e., $t_{\rm r}\lesssim t \lesssim t_{\rm m}$).

\begin{figure}[h]
\vskip 0.2in
\includegraphics[width=1.0\linewidth]{f2.eps}
\caption{The CMB temperature $T_0$ today, as a function of the fractional representation of dark energy, $\varpi\equiv \rho_{\rm de}/\rho$, at the redshift $z_{\rm cmb}$ of the last scattering surface. Also shown is the fractional representation of matter, $\rho_{\rm m}/\rho$, at $z_{\rm cmb}\approx 16$. The Universe is completely dominated by dark energy and radiation ($\varpi=0.8$) at $z\gg 1$, and by dark energy and matter ($\varpi=2/3$) at low redshifts. Quite remarkably, $T_0\lesssim 5$ K for all values of $\varpi$, but matches the specific measured temperature $2.728$ K when $\varpi=0.677$, at which point one also finds a matter representation $\rho_{\rm m}/\rho=0.308$.}
\end{figure}

Let us now define the ratio $\varpi\equiv \rho_{\rm de}/\rho$. On the basis of the two arguments we have just made, we expect that $0.8\ge\varpi\ge 2/3$ throughout the history of the Universe. Solving Equations~(3) and (4) with $w_{\rm de}=-1/2$, we therefore see that, at any redshift,
\begin{equation}
\rho_{\rm r}=\left({3\over 2}\varpi-1\right)(1+z)^2\rho_{\rm c}\;,
\end{equation}
while
\begin{equation}
\rho_{\rm m}=\left(2-{5\over 2}\varpi\right)(1+z)^2\rho_{\rm c}\;.
\end{equation}
Of course, the fact that $\rho_{\rm r}$ is constrained by the expression in Equation~(10) at large redshift means that the radiation is coupled to dark energy in ways yet to be determined through the development of new physics beyond the standard model. Nonetheless, for specificity, we will also assume that the radiation is always a blackbody, both at high and low redshifts, though with one important difference---that the relic photons are freely streaming below the redshift $z_{\rm cmb}$ at the last scattering surface, corresponding to a time $t_{\rm r}<t_{\rm cmb}<t_{\rm m}$ in figure~1, at which the radiation effectively `decouples' from the other constituents.
Therefore \begin{equation} T(z) = T_0(1+z)\quad (z\lesssim z_{\rm cmb})\;. \end{equation} At very high redshifts, however, $T$ is given explicitly by the redshift dependence of $\rho_{\rm r}$. We still do not know precisely where the radiation decouples from matter and dark energy, and begins to stream freely according to the expression in Equation~(13) but, as we shall see below, our results are not strongly dependent on this transition redshift, principally because $\varpi$ is so narrowly constrained to the range $(2/3,0.8)$. Thus, for simplicity, we shall assume that for $z>z_{\rm cmb}$ we may put\footnote{In this expression, we have adopted the {\it Planck} optimized value of the Hubble constant, $H_0=67.6\pm0.9$ km s$^{-1}$ Mpc$^{-1}$ (Planck Collaboration 2014). To be fair, this is the value measured in the context of $\Lambda$CDM, and while a re-analysis of the {\it Planck} data in the context of $R_{\rm h}=ct$ will produce a somewhat different result for $H_0$, the differences are likely to be too small to affect the discussion in this paper.} \begin{equation} T(z) \approx 31.8\;{\rm K}\;(3\varpi/2-1)^{1/4}(1+z)^{1/2} \quad (z\gtrsim z_{\rm cmb})\;. \end{equation} Even before considering the consequences of identifying the BAO scale as the acoustic horizon, which we do in the next section, we can already estimate the location of the LSS by setting Equation~(13) equal to (14), which yields \begin{equation} T_0\approx 8.53\,\left(\varpi(z_{\rm cmb})-{2\over 3}\right)^{1/4}\;{\rm K}\;. \end{equation} Remembering that $0.8\ge\varpi\ge2/3$ everywhere, we therefore see that $T_0$ in this model must be $\lesssim 5$ K, no matter where the LSS is located. This is quite a remarkable result because the only input used to reach this conclusion is the value of $H_0$, unlike the situation with $\Lambda$CDM, in which one must assume both a value of $H_0$ and optimize the baryon to photon fraction in the early Universe to ensure a value of $T_0$ in this range. Figure~2 illustrates how $T_0$ today changes with $\varpi$ if we assume $z_{\rm cmb}=16$ (see below). We see that $\varpi(z_{\rm cmb})$ must then be $\approx 0.677$ when we fix $T_0=2.728$ K, which is consistent with $t_{\rm cmb}$ being closer to $t_{\rm m}$ than $t_{\rm r}$ in figure~1. Indeed, we find from Equations~(11) and (14) that, at $z=z_{\rm cmb}$, $\rho_{\rm r}/\rho \sim 0.016$ and $\rho_{\rm m}/\rho\sim 0.308$. We shall consider the more specific constraints imposed by the CMB acoustic horizon and the BAO peak measurements shortly, but for now we have already demonstrated a very powerful property of the $R_{\rm h}=ct$ universe---that $H_0$ and the baryon to photon ratio are not independent of each other. And clearly, while $z_{\rm cmb}\sim 1080$ in $\Lambda$CDM, the LSS must occur at a much lower redshift ($z_{\rm cmb}\lesssim 30$) in this model. The temperature calculated from Equations~(13) and (14) is compared to that of the standard model in figure~3. This figure also indicates the location of $z_{\rm cmb}$ based on the argument in the previous paragraph, which will be bolstered shortly with constraints from the acoustic and BAO scales. Thus, while $T\sim 3,000$ K at $z\sim 1080$ in $\Lambda$CDM, so that hydrogen `recombination' may be relevant to the CMB in this model, the temperature is too low at $z_{\rm cmb}< 30$ for this mechanism to be responsible for liberating the relic photons in $R_{\rm h}=ct$. 
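The values quoted above ($\varpi\approx 0.677$, $\rho_{\rm r}/\rho\sim 0.016$ and $\rho_{\rm m}/\rho\sim 0.31$) are straightforward to verify numerically. The following minimal Python snippet (a sketch, assuming $z_{\rm cmb}=16$ and the 31.8~K prefactor of Equation~14, which encodes $H_0=67.6$ km s$^{-1}$ Mpc$^{-1}$) inverts the matching of Equations~(13) and (14) for $\varpi$:

\begin{verbatim}
import numpy as np

z_cmb = 16.0      # assumed redshift of the last scattering surface
T0_obs = 2.728    # measured CMB temperature today (K)

# Equating Eqs. (13) and (14) at z_cmb (this is Eq. 15 before
# rounding): T0 = [31.8 K / sqrt(1+z_cmb)] * (3*varpi/2 - 1)**0.25
prefactor = 31.8 / np.sqrt(1.0 + z_cmb)        # ~7.71 K
varpi = (1.0 + (T0_obs / prefactor)**4) / 1.5  # invert for varpi

# Eqs. (11)-(12): fractional representations at z_cmb
print(round(varpi, 3))              # 0.677
print(round(1.5*varpi - 1.0, 3))    # rho_r/rho ~ 0.016
print(round(2.0 - 2.5*varpi, 3))    # rho_m/rho ~ 0.31
\end{verbatim}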
\begin{figure}[h] \vskip 0.2in \includegraphics[width=1.0\linewidth]{f3.eps} \caption{The CMB temperature in $R_{\rm h}=ct$ (solid) compared to its counterpart in $\Lambda$CDM (dashed). The location of the LSS at $z_{\rm cmb}\sim 16$ in the former model is based on several observational arguments (see text). By comparison, $z_{\rm cmb}\sim 1080$ in the standard model.} \end{figure} In this regard, our reconsideration of dust's contribution to the formation of the CMB deviates from the original proposal \citep{Rees1978}, in that the radiation being rethermalized at $z\sim z_{\rm cmb}$ in this picture need not all have been emitted by Pop III stars. Indeed, given that $\rho_{\rm r}\approx 0.2\rho$ for $z\gg 1$, these photons were more likely produced during the intervening period between the big bang and decoupling prior to the reprocessing by dust at $z\sim 16$. The implied coupling between radiation and the rest of the cosmic fluid at high redshifts requires physics beyond the standard model, which acted to maintain the $\sim 0.2\rho$ fraction until decoupling, after which the radiation streamed freely---except at $z\sim 15-20$, where it would have attained thermal equilibrium with the dust. An important caveat with this procedure is that we are ignoring the possible role played by other relativistic species, whose presence would affect the redshift dependence of the temperature $T$. Certainly the early presence of energetic neutrinos may have affected structure formation in $\Lambda$CDM. But given that we know very little about extensions to the standard model, we shall for simplicity assume that such particles will not qualitatively impact $T(z)$, though recognize that this assumption may have to be modified, or supplanted, when more is known. This caveat notwithstanding, the dust in this picture would have had no influence on the value of $\rho_{\rm r}$, but simply reprocessed all components (if more than one) in the radiation field into the single, blackbody CMB we see today. We can say with a fair degree of certainty, however, that---as in the standard model---the background radiation field would not have been significantly influenced by Pop III star formation. We shall demonstrate in \S~VI.2 below that more recent work has shown that the halo abundance was probably orders of magnitude smaller than previously thought \citep{Johnson2013}, greatly reducing the likely contribution ($\lesssim 0.5\%$) of Pop III stars to the overall radiative content of the Universe at that time. Thus, the original proposal by Rees \citep{Rees1978} and others would not work because Pop III stars could not supply more than this small fraction of the photons that were thermalized by the dust they ejected into the interstellar medium. In \S~VI below, we will consider three of the most important diagnostics regarding whether or not the CMB and its fluctuations (at a level of 1 part per 100,000) were produced at recombination, or much later by dust emission at the transition from Pop III to Pop II stars (i.e., $z_{\rm cmb}< 30$). An equally important feature of the microwave temperature is its isotropy across the sky. Inflation ensures isotropy in $\Lambda$CDM, but what about $R_{\rm h}=ct$? This question is related to the broader horizon problem, which necessitated the creation of an inflationary paradigm in the first place. It turns out, however, that the horizon problem is an issue only for cosmologies that have a decelerated expansion at early times. 
For a constant or accelerated expansion, as we have in $R_{\rm h}=ct$, all parts of the observable universe today have been in equilibrium from the earliest moments \citep{Melia2013b}. Thus, not only has everything in the observable ($R_{\rm h}=ct$) universe been homogeneous from the beginning, it has also been distributed isotropically as well. This includes the energy density $\rho$ and its fluctuations, the Pop III stars that formed from them under the action of self-gravity, and the dust they expelled into the interstellar medium prior to the formation of large-scale structure. And since the radiative energy density $\rho_{\rm r}$ and its temperature (see Eq.~11) were also distributed homogeneously and isotropically prior to rethermalization by dust, the eventual CMB produced at $z_{\rm cmb}< 30$, and its tiny fluctuations, would therefore now also be isotropic across the sky. In other words, an isotropic CMB cannot be used to distinguish between $R_{\rm h}=ct$ and the inflationary $\Lambda$CDM.

\section{The Acoustic Scale}

CMB experiments, most recently with {\it Planck} \citep{Planck2014}, have identified a scale $r_{\rm s}$ in both the temperature and polarization power spectrum, with a measured angular size $\theta_{\rm s}=(0.596724\pm 0.00038)^\circ$ on the LSS. If this is an acoustic horizon, the CMB fluctuations have a characteristic size $\theta_{\rm f}\approx 2\theta_{\rm s}$, since the sound wave produced by the dark-matter condensation presumably expanded as a spherical shell and what we see on the LSS is a cross section of this structure, extending across twice the acoustic horizon. Since the multipole number is defined as $l_{\rm s}=2\pi/\theta_{\rm f}$, one has $l_{\rm s}=\pi/\theta_{\rm s}$, which produces the well-known location (at $\sim 300$) of the first peak with an acoustic angular size $\theta_{\rm s}\sim 0.6^\circ$. Actually, there are several additional physical effects one must take into account in order to arrive at the true measured value of $l^{TT}_m$ for the first peak. These include the decay of the gravitational potential and contributions from the Doppler shift of the oscillating fluid, all of which introduce a phase shift $\phi_m$ in the spectrum \citep{Doran2002,Page2003}. The general relation for all peaks and troughs is $l^{TT}_m=l_{\rm s}(m-\phi_m)$. Thus, since $\phi_m$ is typically $\sim 25\%$, the measured location of the first peak ends up at $l^{TT}_1\sim 220$.

The acoustic scale in any cosmological model depends critically on when matter and radiation decoupled ($t_{\rm dec}$) and how the sound speed $c_{\rm s}$ evolved with redshift prior to that time. In $\Lambda$CDM, the decoupling was completed at recombination. But this need not be the case in every model. As we shall see, the radiation may have decoupled from matter earlier than the time at which the observed CMB was produced if, as in the case of $R_{\rm h}=ct$, rethermalization of the photons by dust occurred at $z<30$. For the rest of this paper, we therefore make a distinction between $t_{\rm dec}$ and $t_{\rm cmb}$. Since Hydrogen was the dominant element by number, the transition from an optically thick to thin medium is thought to have occurred when the number of ambient H-ionizing photons dropped sufficiently for Hydrogen to recombine \citep{Peebles1970,Hu1995,White1994}. The actual estimate of the rate at which neutral Hydrogen formed also depends on other factors, however, since the $13.6$ eV photons couldn't really `escape' from the fluid.
Instead, the process that took photons out of the loop was the $2s\rightarrow 1s$ transition, which proceeds via 2-photon emission to conserve angular momentum. So neutral Hydrogen did not form instantly; the epoch of recombination is thought to have coincided with the fraction $x$ of electrons to baryons dropping below $50\%$. But because the baryon to photon ratio is believed to have been very small (of order $10^{-9}$ in some models), the H-ionizing photons did not have to come from the center of the Planck distribution. There were enough ionizing photons in the Wien tail to ionize all of the Hydrogen atoms. This disparity in number means that the value of the radiation temperature at decoupling is poorly constrained, in the sense that $x$ would have depended on the baryon to photon ratio as well as temperature. But since the dependence of $x$ on the baryon density $\rho_{\rm b}$ was relatively small compared to its strong exponential dependence on temperature, any model change in $\rho_{\rm b}$ could easily have been offset by a very tiny change in temperature. So $z_{\rm dec}$ is nearly independent of the global cosmological parameters, and is determined principally by the {\it choice} of $r_{\rm s}$, which is typically calculated according to
\begin{equation}
r_{\rm s}\equiv \int_0^{t_{\rm dec}} c_{\rm s}(t^{\prime})[1+z(t^{\prime})]\,dt^\prime\;,
\end{equation}
from which one then infers a proper distance $R_{\rm s}(z_{\rm dec})=r_{\rm s}/(1+z_{\rm dec})$ traveled by the sound wave by the redshift of decoupling.

For a careful determination of $z_{\rm dec}$, one therefore needs to know how the sound speed $c_{\rm s}$ evolves with time. For a relativistic fluid, $c_{\rm s}=c/\sqrt{3}$, but the early universe contained matter as well as radiation, and dark energy in the context of $R_{\rm h}=ct$. And though the strong coupling between photons, electrons and baryons allows us to treat the plasma as a single fluid for dynamical purposes during this era \citep{Peebles1970}, the contribution of baryons to the equation of state alters the dependence of $c_{\rm s}$ on redshift, albeit by a modest amount. For example, a careful treatment of this quantity in the context of $\Lambda$CDM takes into account its evolution with time, showing that differences amounting to a factor $\sim 1.3$ could lead to a reduction in sound speed. Quantitatively, such effects are typically rendered through the expression
\begin{equation}
c_{\rm s}={c\over\sqrt{3(1+3\rho_{\rm b}/4\rho_{\rm r})}}
\end{equation}
\citep{White1994}. Obviously, $c_{\rm s}$ reduces to $c/\sqrt{3}$ when $\rho_{\rm b}/\rho_{\rm r}\rightarrow 0$, as expected.

The situation in $R_{\rm h}=ct$ is somewhat more complicated, primarily because $\rho$ contains dark energy throughout the cosmic expansion. From \S~II, we expect that $\rho_{\rm r}/\rho_{\rm m}$ is a decreasing function of $t$. In addition, $\rho_{\rm r}$ is itself always a small fraction of $\rho$, but in order to maintain the constant equation of state $p=-\rho/3$, it is reasonable to expect that all three constituents remain coupled during the acoustically important epoch, i.e., in the region $t\lesssim t_{\rm m}$ in figure~1. Therefore,
\begin{equation}
c_{\rm s}^2=c^2\left[{1\over 3}{\partial\rho_{\rm r}\over\partial\rho}+
{\partial p_{\rm de}\over\partial\rho_{\rm de}}{\partial \rho_{\rm de}\over
\partial\rho}\right]\;,
\end{equation}
under the assumption that $p_{\rm m}\approx 0$ at all times. We already know that $\partial\rho_{\rm r}/\partial\rho\le 0.2$.
Thus, depending on the sound speed of dark energy, the overall sound speed in the cosmic fluid, $c_{\rm s}$, may or may not be much smaller than $c/\sqrt{3}$ in the early $R_{\rm h}=ct$ universe. We can estimate its value quantitatively by assuming for simplicity that \begin{equation} c_{\rm s}(t) = c_{\rm s}(t_*)\left({t_*\over t}\right)^\beta\;, \end{equation} where $t_*$ is the time at which the acoustic wave is produced and the index $\beta$ is positive in order to reflect the decreasing importance of radiation with time. The acoustic radius in such a model would therefore be given by the expression \begin{equation} r_{\rm s}^{R_{\rm h}=ct}(t_{\rm dec})=c_{\rm s}(t_*)\,t_0\,t_*^\beta\int_{t_*}^{t_{\rm dec}} {dt^\prime\over (t^\prime)^{1+\beta}}\;. \end{equation} Thus, as long as $t_{\rm dec}\gg t_*$, \begin{equation} r_{\rm s}^{R_{\rm h}=ct}={c_{\rm s}(t_*)\,t_0\over\beta} ={R_{\rm h}(t_0)\over\beta}\left({c_{\rm s}(t_*)\over c}\right)\;, \end{equation} so that \begin{equation} \left({c_{\rm s}(t_*)\over c}\right)=\beta\left({r_{\rm s}^{R_{\rm h}=ct}\over R_{\rm h}(t_0)}\right)\;. \end{equation} We shall return to this after we discuss the BAO scale in the next section. Before doing so, however, it is worthwhile reiterating an important difference between the acoustic scale in $\Lambda$CDM and that in $R_{\rm h}=ct$. The consensus today is that, in the standard model, the temperature of the baryon-photon fluid remained high enough all the way to $t_{\rm cmb}$ for the plasma to be at least partially ionized, allowing a strong coupling between the baryons and the radiation. As such, the comoving acoustic horizon $r_{\rm s}$ in Equation~(16) is calculated assuming that sound waves propagated continuously from $t\sim 0$ to $t_{\rm dec}\sim t_{\rm cmb}$. As one may see from Equation~(21), however, there are several reasons why the analogous quantity $r_{\rm s}^{R_{\rm h}=ct}$ in $R_{\rm h}=ct$ may need to be calculated with a truncated integral that does not extend all the way to $t_{\rm cmb}$. The principal argument for this is that the kinetic temperature of the medium may have dropped below the ionization level prior to the time at which the observed CMB was produced, which would effectively decouple the baryons from the photons. This would certainly occur if rethermalization of the primordial radiation field by dust happened at $z<30$, well after decoupling. Nonetheless, none of the analysis carried out in this paper is affected by this. All we need to assume is that the acoustic horizon at the last scattering surface remained constant thereafter, including at the redshift where the BAO peaks are observed. To be clear, the physical scale $R_{\rm s}(t)=a(t)r_{\rm s}$ of the BAO peaks is larger than that at $z_{\rm cmb}$, but this change is due solely to the effects of expansion, arising from the expansion factor $a(t)$, not to a continued change in the comoving scale $r_{\rm s}$. Thus, our imprecise knowledge of the scale factor $r_{\rm s}^{R_{\rm h}=ct}$ in $R_{\rm h}=ct$ is not going to be an impediment to the analysis we shall be carrying out in this paper. \section{The BAO Scale} In tandem with the scale $\theta_{\rm s}$ seen by {\it Planck} and its predecessors, a peak has also been seen in the correlation function of galaxies and the Ly-$\alpha$ forest (see, e.g., \citealt{MeliaLopez2017}, and references cited therein). 
Nonlinear effects in the matter density field are still mild at the scale where BAO would emerge, so systematic effects are probably small and can be modeled with low-order perturbation theory \citep{Meiksin1999,Seo2005,Jeong2006,Crocce2006,Eisenstein2007b,Nishimichi2007,Matsubara2008,Padmanabhan2009,Taruya2009,Seo2010}. Thus, the peak seen with large galaxy surveys can also be interpreted in terms of the acoustic scale. To be clear, we will be making the standard assumption that once the acoustic horizon has been reached at decoupling, this scale remains fixed thereafter in the comoving frame. The BAO proper scale, however, is not the same as the acoustic proper scale in the CMB. Although these lengths are assumed to be identical in the comoving frame, the horizon scale continues to expand along with the rest of the Universe, according to the expansion factor $a(t)$. As such, the physical BAO scale is actually much bigger than the CMB acoustic length, with a difference that depends critically on the cosmological model. As we shall see, this is the reason the recombination picture does not work in $R_{\rm h}=ct$, because equating these two scales in this model implies a redshift for the CMB much smaller than 1080. In the past several years, the use of reconstruction techniques \citep{Eisenstein2007a,Padmanabhan2012} that enhance the quality of the galaxy two-point correlation function and the more precise determination of the Ly-$\alpha$ and quasar auto- and cross-correlation functions, has resulted in the measurement of BAO peak positions to better than $\sim 4\%$ accuracy. The three most significant of these are: a) the measurement of the BAO peak position in the anisotropic distribution of SDSS-III/BOSS DR12 galaxies \citep{Alam2016} at the two independent/non-overlapping bins with $\langle z\rangle=0.38$ and $\langle z\rangle=0.61$, using a technique of reconstruction to improve the signal/noise ratio. Since this technique affects the position of the BAO peak only negligibly, the measured parameters are independent of any cosmological model; and b) the self-correlation of the BAO peak in the Ly-$\alpha$ forest in the SDSS-III/BOSS DR11 data \citep{Delubac2015} at $\langle z\rangle=2.34$, in addition to the cross-correlation of the BAO peak of QSOs and the Ly-$\alpha$ forest in the same survey \citep{Font2014}. In their analysis of these recent measurements, \citet{Alam2016} traced the evolution of the BAO scale separately over nearby redshift bins centered at 0.38, 0.51 and 0.61 (the $z=0.51$ measurement is included for this discussion, though its bin overlaps with both of the other two), and then in conjunction with the Ly-$\alpha$ forest measurement at $z=2.34$ \citep{Delubac2015}. As was the case in \cite{MeliaLopez2017}, these authors opted not to include other BAO measurements, notably those based on photometric clustering and from the WiggleZ survey \citep{Blake2011}, whose larger errors restrict their usefulness in improving the result. Older applications of the galaxy two-point correlation function to measure a BAO length were limited by the need to disentangle the acoustic length in redshift space from redshift space distortions arising from internal gravitational effects \citep{Lopez2014}. To do this, however, one invariably had to either assume prior parameter values or pre-assume a particular model to determine the degree of contamination, resulting in errors typically of order $20-30\%$.
Even so, several inconsistencies were noted between theory and observations at various levels of statistical significance. For example, based on the BAO interpretation of a peak at $z=0.54$, the implied angular diameter distance was found to be $1.4\sigma$ higher than what is expected in the concordance $\Lambda$CDM model \citep{Seo2010}. When combined with the other BAO measurements from SDSS DR7 spectroscopic surveys \citep{Percival2010} and WiggleZ \citep{Blake2011}, there appeared to be a tendency of cosmic distances measured using BAO to be noticeably larger than those predicted by the concordance $\Lambda$CDM model. The more recent measurements using several innovative reconstruction techniques have enhanced the quality of the galaxy two-point correlation function and the quasar and Ly-$\alpha$ auto- and cross-correlation functions. Unfortunately, in spite of this improved accuracy, the comparison with model predictions depends on how one chooses the data. When the Ly-$\alpha$ measurement at $z=2.34$ is excluded, \cite{Alam2016} find that the BOSS measurements are fully consistent with the {\it Planck} $\Lambda$CDM model results, with only one minor level of tension having to do with the inferred growth rate $f\sigma_8$, for which the BOSS BAO measurements require a bulk shift of $\sim 6\%$ relative to {\it Planck} $\Lambda$CDM. In all other respects, the standard model predictions from {\it Planck} fit the BAO-based distance observables at these three redshift bins typically within $1\sigma$. On the other hand, \cite{Alam2016} also find that when the Ly-$\alpha$ measurement at $z=2.34$ is included with the three lower redshift BOSS measurements, the combined data deviate from the concordance model predictions at a $2-2.5\sigma$ level. This result has been discussed extensively in the literature \citep{Delubac2015,Font2014,Sahni2014,Aubourg2015}, and is consistent with our previous analysis using a similar data set to carry out an Alcock-Paczy\'nski (AP) test of various cosmological models \citep{MeliaLopez2017}. The AP test, based on the combined BOSS and Ly-$\alpha$ measurements (see Table 1 below), shows that the observations are discrepant at a statistical significance of $\gtrsim 2.3\sigma$ with respect to the predictions of a flat $\Lambda$CDM cosmological model with the best-fit {\it Planck} parameters \citep{MeliaLopez2017}. More so than any other observation of the acoustic scale to date, the tension between the measurement at $\langle z\rangle=2.34$ and theory is problematic because the observed ratio $d_A/d_H=1.229\pm 0.11$ is obtained independently of any pre-assumed model, in terms of the angular-diameter distance $d_A(z)$ and Hubble radius $d_H(z)\equiv c/H(z)$. The bottom line is that BAO measurements may or may not be in tension with {\it Planck} $\Lambda$CDM, largely dependent on which measurements one chooses for the analysis. Certainly, the BAO measurement based on the Ly-$\alpha$ forest requires different techniques than those used with the galaxy samples, and no doubt is affected by systematics possibly different from those associated with the latter. For instance, \cite{Delubac2015} worry about possible observational biases when examining the Ly-$\alpha$ forest. What is clear up to this point is that, given the rather small range in BOSS redshifts (essentially $0.38<z< 0.61$) one may adequately fit the distance observables with either {\it Planck} $\Lambda$CDM or $R_{\rm h}=ct$. 
The factor separating these two models is primarily the inclusion of the Ly-$\alpha$ measurement at $z=2.34$ which, however, is a different kind of observation and may be problematic for various reasons. Table~1 lists the three measurements used to carry out the Alcock-Paczy\'nski test in order to establish whether or not the BAO scale $r_{\rm BAO}$ is a true `standard ruler' \citep{MeliaLopez2017}. The ratio
\begin{equation}
{\cal D}(z)\equiv d_A(z)/d_H(z)
\end{equation}
(e.g., from the flux-correlation function of the Ly-$\alpha$ forest of high-redshift quasars \citep{Delubac2015}) is independent of both $H_0$ and the presumed acoustic scale $r_{\rm BAO}$, thereby providing a very clean test of the cosmology itself. In $\Lambda$CDM, $d_A$ depends on several parameters, including the scaled densities $\Omega_{\rm m}$, $\Omega_{\rm r}$, and $\Omega_{\rm de}$. Assuming zero spatial curvature, so that $\Omega_{\rm m}+\Omega_{\rm r}+\Omega_{\rm de}=1$, the angular-diameter distance at redshift $z$ is given by the expression
\begin{eqnarray}
d^{{\Lambda}\rm CDM}_A(z)&=&{c\over H_0}{1\over (1+z)}\int_{0}^{z} \left[\Omega_{\rm m}(1+u)^3+\right.\nonumber\\
&\null&\left.\hskip-0.5in\Omega_{\rm r}(1+u)^4 +\Omega_{\rm de} (1+u)^{3(1+w_{\rm de})}\right]^{-1/2}\,du\,,
\end{eqnarray}
where $p_{\rm de}=w_{\rm de}\rho_{\rm de}$ defines the dark-energy equation of state. Thus, since $\rho_{\rm r}$ is known from the CMB temperature $T_0=2.728$~K today, the essential free parameters in flat $\Lambda$CDM are $H_0$, $\Omega_{\rm m}$ and $w_{\rm de}$, though the scaled baryon density $\Omega_{\rm b}\equiv \rho_{\rm b}/\rho_{\rm c}$ also enters through the sound speed (Eq.~17). The other quantity in Equation~(23) is the Hubble distance,
\begin{eqnarray}
d^{{\Lambda}\rm CDM}_{\rm H}(z) &\equiv&{c\over H(z)}\nonumber \\
&=& {c\over H_0}\left[\Omega_{\rm m}(1+z)^3+\Omega_{\rm r}(1+z)^4\right.\nonumber\\
&\null&\left.\qquad+\Omega_{\rm de} (1+z)^{3(1+w_{\rm de})}\right]^{-1/2}.
\end{eqnarray}
In the $R_{\rm h}=ct$ Universe, the angular-diameter distance is simply
\begin{equation}
d^{R_{\rm h}=ct}_A(z)=\frac{c}{H_{0}}\frac{1}{(1+z)}\ln(1+z)\;,
\end{equation}
while the Hubble distance is
\begin{equation}
d^{R_{\rm h}=ct}_{\rm H}(z)={c\over H_0}{1\over(1+z)}\;.
\end{equation}
In this cosmology, one therefore has the simple, elegant expression
\begin{equation}
{\cal D}_{R_{\rm h}=ct}(z)=\ln(1+z)\;,
\end{equation}
which is {\it completely free of any parameters}. For $\Lambda$CDM with flatness as a prior, ${\cal D}_{\Lambda\rm CDM}$ relies entirely on the variables $\Omega_{\rm m}$ and $w_{\rm de}$. This clear distinction between ${\cal D}_{\Lambda\rm CDM}(z)$ and ${\cal D}_{R_{\rm h}=ct}(z)$ can therefore be used to test these competing models in a one-on-one comparison, free of the ambiguities often attached to data tainted with nuisance parameters. Unlike those cases, the measured ratio ${\cal D}_{\rm obs}$ is completely independent of the model being examined. In \cite{MeliaLopez2017}, we used the Alcock-Paczy\'nski test to compare these model-independent data to the predictions of $\Lambda$CDM and $R_{\rm h}=ct$, and showed that the standard model is disfavoured by these measurements at a significance greater than $\sim 2.3\sigma$, while the probability of $R_{\rm h}=ct$ being consistent with these observations is much closer to 1.
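Because ${\cal D}_{\rm obs}$ is model-independent, the comparison reduces to evaluating the two expressions for ${\cal D}(z)$ above. As an illustration only (this sketch is ours and not part of the original analysis), the following Python snippet computes both predictions at the three redshifts of Table~1, using the concordance parameters quoted below ($\Omega_{\rm m}=0.31$, $w_{\rm de}=-1$) and an assumed, essentially negligible radiation density $\Omega_{\rm r}\approx 9\times 10^{-5}$:
\begin{verbatim}
# Minimal sketch: D(z) = d_A/d_H for flat LCDM vs. the parameter-free
# R_h = ct prediction ln(1+z). Omega_r ~ 9e-5 is an assumed (negligible)
# radiation density; all other values follow the text.
import numpy as np

Om, Or, w = 0.31, 9.0e-5, -1.0
Ode = 1.0 - Om - Or                 # flatness prior

def E(z):
    # dimensionless Hubble rate H(z)/H0 in flat LCDM
    return np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + Ode*(1+z)**(3*(1+w)))

def D_lcdm(z, n=100000):
    # d_A/d_H; the factors of c/H0 cancel in the ratio
    u = np.linspace(0.0, z, n)
    return E(z) * np.trapz(1.0/E(u), u) / (1.0 + z)

for z, Dobs, err in [(0.38, 0.286, 0.025),
                     (0.61, 0.436, 0.052),
                     (2.34, 1.229, 0.110)]:
    print(f"z={z}: observed {Dobs} +/- {err}, "
          f"LCDM {D_lcdm(z):.3f}, R_h=ct {np.log(1.0+z):.3f}")
\end{verbatim}
The individual numbers illustrate the behaviour discussed above, although the full statistical comparison requires the joint likelihood over all three measurements \citep{MeliaLopez2017}.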
\begin{table*} \vskip 0.2in \center {\footnotesize \centerline{{\bf Table 1.} Inferred BAO scale, $r_{\rm BAO}$, from the most recent high-precision measurements} \begin{tabular}{cccccc} \\ \hline\hline $z$\qquad&\qquad{${\cal D}_{\rm obs}(z)$}\qquad&\qquad$\theta_{\rm BAO}$\qquad& \qquad{$r_{\rm BAO}^{\Lambda{\rm CDM}}$}\qquad &\qquad$r_{\rm BAO}^{R_{\rm h}=ct}$ &\qquad{\rm Reference}\qquad\\ \qquad&&\qquad{(deg)}\qquad&\qquad (Mpc)\qquad& \qquad{(Mpc)}\qquad &\qquad\qquad\\ \hline 0.38\qquad & \qquad $0.286\pm0.025$ & \qquad $5.60\pm 0.12$ & \qquad $158.6\pm3.4$ & \qquad $130.3\pm2.8$ & \qquad \cite{Alam2016} \\ 0.61\qquad & \qquad $0.436\pm0.052$ & \qquad $3.67\pm 0.08$ & \qquad $153.7\pm3.4$ & \qquad $126.3\pm2.8$ & \qquad \cite{Alam2016} \\ 2.34\qquad & \qquad $1.229\pm0.110$ & \qquad $1.57\pm 0.05$ & \qquad $149.7\pm4.8$ & \qquad $136.8\pm4.4$ & \qquad \cite{Delubac2015} \\ \hline {\rm Average}& & & \qquad $154.0\pm3.6$ & \qquad $131.1\pm4.3$ & \\ \hline\hline \end{tabular} } \vskip 0.3in \end{table*} The inclusion of the BAO measurement at $z=2.34$ creates tension with the $\Lambda$CDM interpretation of the acoustic scale, which is eliminated in $R_{\rm h}=ct$, lending some support to the idea that the BAO and CMB acoustic scales should be related in this model. For the application in this paper, we must adopt a particular value of $H_0$ to use these high-precision data to extract a comoving BAO scale. For $\Lambda$CDM, we adopt the concordance parameter values $\Omega_{\rm m}=0.31$, $H_0=67.6$ km s$^{-1}$ Mpc$^{-1}$, $w_{\rm de}=-1$, and $\Omega_{\rm b}=0.022/h^2$ and, to keep the comparison as simple as possible, we here assume the same value of $H_0$ for the $R_{\rm h}=ct$ cosmology. From the data in Table~1, we see that the scale $r_{\rm BAO}$ may be used as a standard ruler over a significant redshift range ($0\le z\le 2.34$) in both models, though the actual value of $r_{\rm BAO}$ is different if the same Hubble constant is assumed in either case. Based solely on this outcome, the interpretation of $r_{\rm BAO}$ as an acoustic scale could be valid in $R_{\rm h}=ct$, perhaps more so than in $\Lambda$CDM. \section{Adopting the Acoustic Horizon as a Standard Ruler} Let us now assume that the BAO and CMB acoustic scales are equal. In the $R_{\rm h}=ct$ universe, we therefore have \begin{equation} \ln(1+z_{\rm cmb})={r_{\rm BAO}^{R_{\rm h}=ct}\over R_{\rm h}(t_0)\,\theta_{\rm s}}\;, \end{equation} so that \begin{equation} z_{\rm cmb}=16.05^{+2.4}_{-2.0}\;, \end{equation} which corresponds to a cosmic time $t_{\rm cmb}\approx 849$ Myr. This redshift at last scattering in $R_{\rm h}=ct$ is quite different from the corresponding value ($\sim 1080$) in $\Lambda$CDM, so is there any confirming evidence to suggest that this is reasonable? There is indeed another type of observation supporting this inferred redshift. The value quoted in Equation~(30) is a good match to the $z_{\rm cmb}$ measured using an entirely different analysis of the CMB spectrum, which we now describe. It has been known for almost two decades that the lack of large-angle correlations in the temperature fluctuations observed in the CMB is in conflict with predictions of inflationary $\Lambda$CDM. Probabilities ($\lesssim 0.24\%$) for the missing correlations disfavour inflation at better than $3\sigma$ \citep{Copi2015}. 
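Equations~(29) and (30) are straightforward to check numerically. The short sketch below (ours, for illustration only) assumes $\theta_{\rm s}\approx 0.0104$ rad for the acoustic angular scale in the CMB and uses the Table~1 average $r_{\rm BAO}^{R_{\rm h}=ct}=131.1\pm4.3$ Mpc with $H_0=67.6$ km s$^{-1}$ Mpc$^{-1}$; it recovers $z_{\rm cmb}\approx 16$ and $t_{\rm cmb}\approx 850$ Myr to within the quoted uncertainties:
\begin{verbatim}
# Back-of-the-envelope check of ln(1+z_cmb) = r_BAO / (R_h(t0) theta_s)
# in R_h = ct. theta_s ~ 0.0104 rad is an assumed value for the CMB
# acoustic angular scale; r_BAO and H0 follow Table 1 and the text.
import numpy as np

c, H0 = 299792.458, 67.6            # km/s and km/s/Mpc
Rh0 = c / H0                        # gravitational radius today, Mpc
theta_s = 0.0104                    # rad (assumed)
r_bao, dr = 131.1, 4.3              # Mpc (Table 1 average, R_h = ct)
t0_Myr = 977.8e3 / H0               # 1/H0 expressed in Myr

for r in (r_bao - dr, r_bao, r_bao + dr):
    z = np.exp(r / (Rh0 * theta_s)) - 1.0
    print(f"r_BAO = {r:5.1f} Mpc -> z_cmb = {z:5.2f}, "
          f"t_cmb = t0/(1+z) ~ {t0_Myr/(1.0+z):4.0f} Myr")
\end{verbatim}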
Recently, we \citep{MeliaGenova2018} used the latest {\it Planck} data release \citep{Planck2014} to demonstrate that the absence of large-angle correlations is best explained with the introduction of a non-zero minimum wavenumber $k_{\rm min}$ for the fluctuation power spectrum $P(k)$. This is an important discriminant among different cosmological models because inflation would have stretched all fluctuations beyond the horizon, producing a $P(k)$ with $k_{\rm min}=0$ and, therefore, strong correlations at all angles. A non-zero $k_{\rm min}$ would signal the presence of a maximum fluctuation wavelength at decoupling, thereby favouring non-inflationary models, such as $R_{\rm h}=ct$, which instead produce a fluctuation spectrum with wavelengths no bigger than the gravitational (or Hubble) radius \citep{MeliaGenova2018}.

It is beyond the scope of the present paper to discuss in detail how the cutoff $k_{\rm min}$ impacts the role of inflation within the standard model, but it may be helpful to place this measurement in a more meaningful context by summarizing the key issue (see \citealt{Liu2020} for a more in-depth discussion). Slow-roll inflation in the standard model is viewed as the critical mechanism that can simultaneously solve the horizon problem and generate a near scale-free fluctuation spectrum, $P(k)$. It is readily recognized that these two processes are intimately connected via the initiation of the inflationary phase, which in turn also determines its duration. The identification of a cutoff $k_{\rm min}$ in $P(k)$ tightly constrains the time at which inflation could have started, requiring the often-used small parameter $\epsilon$ \citep{Liddle1994} to be $\gtrsim 0.9$ throughout the phase of inflationary expansion in order to produce sufficient dilation to fix the horizon problem. Such high values of $\epsilon$, however, predict extremely red spectral indices, in conflict with the measured, nearly scale-free spectrum, which typically requires $\epsilon\ll 1$. Extensions to the basic picture have been suggested by several workers \citep{Destri2008,Scacco2015,Santos2018,Handley2014,Ramirez2012,Remmen2014}, most often by adding a kinetic-dominated or radiation-dominated phase preceding the slow-roll expansion. But none of the approaches suggested thus far has been able to simultaneously preserve a near scale-free spectrum and produce enough expansion to overcome the horizon problem. It appears that the existence of $k_{\rm min}$ requires a modification and/or a replacement of the basic inflationary picture \citep{Liu2020}.

In the $R_{\rm h}=ct$ cosmology, on the other hand, fluctuation modes never cross back and forth across the Hubble horizon, since the mode size and the Hubble radius grow at the same rate as the Universe expands. Thus, $k_{\rm min}$ corresponds to the first mode emerging out of the Planck domain into the semi-classical Universe \citep{Melia2019}. The scalar field required for this has an exponential potential, but it is not inflationary, and it satisfies the zero active mass condition, $\rho_\phi+3p_\phi=0$, just like the rest of the Universe during its expansion history. The amplitude of the temperature anisotropies observed in the CMB requires the quantum fluctuations in $\phi$ to have classicalized at $\sim 3.5\times 10^{15}$ GeV, suggesting an interesting physical connection to the energy scale in grand unified theories.
Indeed, such scalar-field potentials have been studied in the context of Kaluza-Klein cosmologies, string theory and supergravity (see, e.g., \citealt{Halliwell1987}). In terms of the variable
\begin{equation}
u_{\rm min}\equiv k_{\rm min}\,c\Delta\tau_{\rm cmb}\;,
\end{equation}
where $c\Delta\tau_{\rm cmb}$ is the comoving radius of the last scattering surface written in terms of the conformal time difference between $t_0$ and $t_{\rm cmb}$, the recent analysis of the CMB anisotropies \citep{MeliaGenova2018} shows that the angular-correlation function anomaly disappears completely for $u_{\rm min}=4.34\pm 0.50$, a result that argues against the basic slow-roll inflationary paradigm for the origin and growth of perturbations in the early Universe, as we have just discussed. Since inflation implies $u_{\rm min}=0$, the standard inflationary cosmology in its present form is disfavoured by this result at better than $8\sigma$, a remarkable conclusion if the introduction of $k_{\rm min}$ in the power spectrum turns out to be correct.

For obvious reasons, this outcome is highly relevant to the interpretation of an acoustic scale because it provides a completely independent measurement of $z_{\rm cmb}$. At large angles, corresponding to multipoles $\ell\lesssim 30$, the dominant physical process producing the anisotropies is the Sachs-Wolfe effect \citep{Sachs1967}, representing metric perturbations due to scalar fluctuations in the matter field. This effect translates inhomogeneities of the metric fluctuation amplitude on the last scattering surface into anisotropies observed in the temperature today. From the definition of $u_{\rm min}$, it is trivial to see that the maximum angular size of the Sachs-Wolfe fluctuations is
\begin{equation}
\theta_{\rm max}={2\pi\over u_{\rm min}}\;.
\end{equation}
In the $R_{\rm h}=ct$ Universe, quantum fluctuations begin to form at the Planck scale with a maximum wavelength
\begin{equation}
\lambda_{\rm max}=\eta\,2\pi R_{\rm h}(z_{\rm cmb})\;,
\end{equation}
where $\eta$ is a multiplicative factor $\sim O(1)$ \citep{MeliaGenova2018}. Therefore,
\begin{equation}
\ln(1+z_{\rm cmb})=\eta u_{\rm min}\;.
\end{equation}
For example, if $\eta\sim 2/3$, then $z_{\rm cmb} = 17.05^{+8}_{-5}$. This is a rather significant result because it provides independent support for our estimate of $z_{\rm cmb}$ based on the observed BAO scale in $R_{\rm h}=ct$. Incidentally, aside from the evidence provided against basic, slow-roll inflation by the non-zero value of $k_{\rm min}$, the emergence of $\theta_{\rm max}$, and its implied value of $z_{\rm cmb}$, also introduces significant tension with the inferred location of the last scattering surface in $\Lambda$CDM based on the first acoustic peak of the CMB power spectrum. But an extended discussion concerning this new result is beyond the scope of the present paper, whose principal goal is an examination of the possible origin of the CMB in the $R_{\rm h}=ct$ model.

Returning now to Equation~(22), we see that identifying the BAO scale as the acoustic horizon gives
\begin{equation}
{c_{\rm s}(t_*)\over c/\sqrt{3}}\approx {\beta\over 20}\;.
\end{equation}
As we have seen, part of the reduction of $c_{\rm s}$ below its relativistic value in $R_{\rm h}=ct$ is due to the fact that $\rho_{\rm r}$ is only $0.2\rho$ in the early Universe. But that still leaves about a factor of $4$ unaccounted for in Equation~(18).
Perhaps this is indirect evidence that radiation and dark energy are coupled strongly during the acoustically active period and that the sound speed of dark energy cannot be ignored. But without new physics beyond the standard model, from which such properties would be derived, there is little more one can say without additional speculation.

\section{Dust vs Recombination in $R_{\rm h}=ct$}
The physical attributes of the LSS that we have just described in the $R_{\rm h}=ct$ universe echo some of the theoretical ideas explored decades ago, though these were abandoned in favour of a recombination scenario at $z_{\rm cmb}\sim 1080$. Before attempting to rescue the dust origin for the CMB, it is essential to scrutinize whether such a proposal makes sense globally, in terms of what we know today. In general terms, there are at least three observational signatures that may be used to distinguish between recombination and dust opacity as the origin of the CMB, and we consider each in turn. In addition, there are several other potential difficulties that simply could not be overcome in $\Lambda$CDM, providing a strong argument {\sl against} the dust model in standard cosmology, though they are removed quite easily in the context of $R_{\rm h}=ct$, so that a dust origin for the CMB is virtually unavoidable in this alternative cosmology. We shall summarize these issues and how they are resolved in $R_{\rm h}=ct$ at the end of this section.

\subsection{Recombination Lines}
The first of these signatures is quite obvious and rests on the expectation that recombination lines ought to be present at some level in the CMB's spectrum if the current picture is correct, whereas all such lines would have been completely wiped out by dust rethermalization. The expectation of seeing recombination lines from $z_{\rm cmb}$ is so clear-cut that extensive simulations have already been carried out for this process in the context of $\Lambda$CDM \citep{Rubino-Martin2006,Rubino-Martin2008}. The effect of recombination line emission on the angular power spectrum of the CMB is expected to be quite small, of order $\sim 0.1\mu{\rm K}$--$0.3\mu{\rm K}$, but may be separated from other effects due to their peculiar frequency and angular dependence. Narrow-band spectral observations with the improved sensitivities of future experiments may therefore measure such deviations if the CMB was produced by recombination.

\subsection{The CMB Spectrum}
A second signature has to do with the CMB's radiation spectrum itself. Clearly, the opacity in a plasma composed primarily of hydrogen and helium ions and their electrons is dominated by Thomson scattering, which does not alter the spectral shape produced at large optical depths as the CMB photons diffuse through the photosphere. There is, however, the issue of how much dilution of the blackbody distribution occurs in a scattering medium, which does not alter the `colour' temperature of the radiation, but reduces its intensity below that of a true Planck function. We will not be addressing this specific question here because our primary focus is dust opacity, which has an alternative set of issues, including the fact that the efficiency of dust absorption is frequency-dependent \citep{Wright1982}. To address this point, and its impact on the shape of the CMB's radiation spectrum, let us begin by assuming a density $n_{\rm d}(\Omega,t)$ of thermalizers with a temperature $T_{\rm d}(\Omega,t)$ at time $t$ and in the direction $\Omega\equiv (\theta,\phi)$.
The efficiency of absorption $Q_{\rm abs}$ (expressed here in units of comoving distance per unit time) of the thermalizers depends on several factors, including geometry, frequency, composition and orientation. Then, assuming Kirchhoff's law with isotropic emission by each radiating surface along the line-of-sight, and recalling that the invariant intensity scales as $\nu^{-3}$, we may write the intensity observed at frequency $\nu_0$ in the direction $\Omega$ as
\begin{eqnarray}
I(\nu_0,\Omega)&=&\langle\sigma\rangle {2h\nu_0^3\over c^2}\int_0^{t_0}\, dV(t)\;n_{\rm d}(\Omega,t)\times\nonumber\\
&\null&\hskip-0.6in{\langle Q_{\rm abs}(\nu[\nu_0,t])\rangle\over d_L(t)^2} P(\nu[\nu_0,t],T_{\rm d}[\Omega,t])\,e^{-\tau(\nu_0,\Omega,t)}\,,
\end{eqnarray}
where $\langle\sigma\rangle$ is the average cross section of the thermalizers, $\langle Q_{\rm abs}\rangle$ is an average over the randomly oriented thermalizers in the field of unpolarized radiation, $d_L$ is the luminosity distance, $dV$ is the comoving volume element, and
\begin{equation}
P(\nu,T)\equiv {1\over \exp(h\nu/kT)-1}
\end{equation}
is the Planck partition function, so that
\begin{equation}
B(\nu,T)\equiv {2h\nu^3\over c^2}P(\nu,T)
\end{equation}
is the blackbody intensity. In addition, the quantity
\begin{equation}
\tau(\nu_0,\Omega,t)=\langle\sigma\rangle \int_t^{t_0} dt\; \langle Q_{\rm abs}(\nu[\nu_0,t])\rangle\,n_{\rm d}(\Omega,t)
\end{equation}
is the optical depth due to the thermalizers along the line-of-sight between time $t$ and $t_0$. Let us further assume a scaling law
\begin{equation}
n_{\rm d}(\Omega,t)=n_{\rm d}(\Omega,0)(1+z)^\epsilon\;.
\end{equation}
Expressing these integrals in terms of redshift $z$, we therefore have
\begin{eqnarray}
I(\nu_0,\Omega)&=&\tau_0(\Omega){2h\nu_0^3\over c^2}\int_0^\infty\,dz^\prime\,{(1+z^\prime)^{\epsilon-1} \over c E(z^\prime)}\times\nonumber\\
&\null&\hskip-0.92in \langle Q_{\rm abs}(\nu_0[1+z^\prime])\rangle P(\nu_0[1+z^\prime],T_{\rm d}[\Omega,z^\prime])\,e^{-\tau(\nu_0,\Omega,z^\prime)}\,,
\end{eqnarray}
and
\begin{equation}
\tau(\nu_0,\Omega,z)=\tau_0(\Omega)\, \int_0^z dz^\prime\; {(1+z^\prime)^{\epsilon-1}\over c E(z^\prime)}\,\langle Q_{\rm abs}(\nu_0[1+z^\prime])\rangle\;,
\end{equation}
where
\begin{equation}
\tau_0(\Omega)\equiv {c\over H_0}\,\langle\sigma\rangle\,n_{\rm d}(\Omega,0)\;,
\end{equation}
and
\begin{equation}
E(z)\equiv {H(z)\over H_0}\;.
\end{equation}
Noting that
\begin{eqnarray}
{d\over dz}e^{-\tau(\nu_0,\Omega,z)}&=&-\tau_0(\Omega){(1+z)^{\epsilon-1}\over c E(z)}\langle Q_{\rm abs}(\nu_0[1+z])\rangle\nonumber\\
&\null&\times e^{-\tau(\nu_0,\Omega,z)}\;,
\end{eqnarray}
we can see from Equation~(41) that
\begin{eqnarray}
I(\nu_0,\Omega)&=&-{2h\nu_0^3\over c^2}\int_0^\infty\,dz^\prime\, P(\nu_0[1+z^\prime],T_{\rm d}[\Omega,z^\prime])\nonumber\\
&\null& \times {d\over dz^\prime}e^{-\tau(\nu_0,\Omega,z^\prime)}\;,
\end{eqnarray}
and therefore, integrating by parts, we find that
\begin{eqnarray}
I(\nu_0,\Omega)&=&B(\nu_0,T_{\rm d}[0])+{2h\nu_0^3\over c^2}\int_0^\infty\,dz^\prime\, e^{-\tau(\nu_0,\Omega,z^\prime)}\nonumber\\
&\null&\times{d\over dz^\prime} P(\nu_0[1+z^\prime],T_{\rm d}[\Omega,z^\prime])\;.
\end{eqnarray}
We see that the intensity of the CMB measured at Earth may deviate from that of a true blackbody, but only if the second term on the right-hand side of this equation is significant.
Notice, however, that regardless of how the optical depth $\tau(\nu_0,\Omega,z)$ varies with $\nu_0$, there is strictly zero deviation from a true Planckian shape for $T_{\rm d}(z)\propto (1+z)$, which one may readily recognize from Equation~(37). If the dust and the radiation it rethermalizes near the photosphere (at the LSS) are in equilibrium (see the discussion below concerning what is required to sustain this equilibrium), $T_{\rm d}$ is expected to follow the evolution of the photon temperature (Equation~13) and, coupled with the fact that $\nu\propto (1+z)$ in all cases, we see that $P(\nu,T)$ is then independent of redshift. Therefore, $(d/dz^\prime)P=0$ in Equation~(47), leaving $I(\nu_0,\Omega)=B(\nu_0,T_{\rm d}[0])$ at all frequencies \citep{Rowan1979}. The key issue is therefore not whether the dust opacity is frequency-dependent but, rather, whether the dust reaches local thermal equilibrium with the radiation. The answer to this question is yes, as long as enough dust particles are generated to produce optical depths $\tau(\nu_0,\Omega,z)\gg 1$ at $z\sim z_{\rm cmb}$.

Though framed in the context of $\Lambda$CDM, the early work on this topic already established the fact that a medium could be rendered optically thick with dust alone, even if the latter constituted a mere percent-level density compared to those of the other constituents in the cosmic fluid \citep{Rees1978,Rowan1979,Wright1982,Rana1981,Hawkins1988}. In the context of $R_{\rm h}=ct$, we may estimate whether or not this holds true as follows. Extremely metal-poor stars have been detected, e.g., in the Galactic bulge \citep{Howes2015}, possibly revealing a remnant trace of the Pop III stars formed prior to $z\sim 15$. These data support the conventional picture of an extremely low metal abundance in the ISM prior to Pop III stellar nucleosynthesis. We do not yet have a tight constraint on the metallicity between Pop III and Pop II star formation, but let us parametrize its value relative to solar abundance as $f_{\rm Z}$. We shall argue in the next subsection that the dust was created prior to $z\sim 16$ and then destroyed by Pop II supernovae at the start of the epoch of reionization (i.e., $z\sim 15$). Assuming a Hubble constant $H_0=67.7$ km s$^{-1}$ Mpc$^{-1}$ and a baryon fraction $\Omega_{\rm b}\sim 0.04$ \citep{Planck2016a}, it is straightforward to estimate the local mass density of metals at $z=16$, $\rho_{\rm s}(z=16)\sim 4\times 10^{-29}f_{\rm Z}$ g cm$^{-3}$. Therefore, for a bulk density of $\sim 2$ g cm$^{-3}$ of silicate grains, and a grain radius $r_{\rm s}\sim 0.1$ micron, the dust number density would have been $n_{\rm s}(z=16)\sim 5\times 10^{-15}f_{\rm Z}$ cm$^{-3}$. At $z=16$, the CMB spectrum ranged from $\lambda_{\rm min}\sim 0.003$ cm to $\lambda_{\rm max}\sim 0.02$ cm, for which the dust absorption efficiency was $Q(\lambda_{\rm min})\sim 0.02$ and $Q(\lambda_{\rm max})\sim 0.003$ \citep{Draine2011}. The photon mean free path $\langle l_\gamma\rangle$ due to dust absorption is therefore estimated to lie between the limits $\sim 3\times 10^{25} f_{\rm Z}^{-1}$ cm and $\sim 2\times 10^{26} f_{\rm Z}^{-1}$ cm. By comparison, the gravitational (or Hubble) radius at that redshift was $R_{\rm h}(z=16)\sim 10^{27}$ cm. Thus, every photon in the CMB would have been absorbed by dust prior to $z\sim 16$ as long as $f_{\rm Z}\gtrsim 0.2$, i.e., about $20\%$ of the solar value, which is not at all unreasonable.
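These order-of-magnitude numbers are easy to reproduce. The following sketch (ours, not part of the original estimate) recomputes the grain number density and photon mean free path from the inputs just quoted:
\begin{verbatim}
# Sketch of the dust-opacity estimate: silicate grains with r_s ~ 0.1
# micron and bulk density ~2 g/cm^3, embedded in a metal mass density
# rho_s(z=16) ~ 4e-29 f_Z g/cm^3. All inputs follow the text.
import numpy as np

r_s = 1.0e-5                              # grain radius, cm
m_grain = (4.0/3.0)*np.pi*r_s**3 * 2.0    # grain mass, ~8e-15 g
n_s = 4.0e-29 / m_grain                   # number density per unit f_Z
sigma = np.pi * r_s**2                    # geometric cross section, cm^2
R_h = 1.0e27                              # Hubble radius at z=16, cm

for name, Q in [("lambda_min", 0.02), ("lambda_max", 0.003)]:
    mfp = 1.0 / (n_s * sigma * Q)         # mean free path times f_Z, cm
    print(f"{name}: l_gamma ~ {mfp:.1e}/f_Z cm, "
          f"opaque for f_Z > {mfp/R_h:.2f}")
\end{verbatim}
The condition $\langle l_\gamma\rangle < R_{\rm h}(z=16)$ at the least absorptive wavelength then reproduces the requirement $f_{\rm Z}\gtrsim 0.2$.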
Correspondingly, the dust temperature must remain in equilibrium with the CMB radiation field (see Eq.~43). There are two important factors guiding this process. The first is based on the average heating $H(T)$ and cooling $K(T_{\rm d})$ rates for a given dust particle, while the second is due to the fact that each absorption of a photon produces a quantum change in the dust particle's temperature that may be strongly dependent on its size \citep{Weingartner2001,Draine2001}. In the cosmological context, the dust is heated by an isotropic radiation field with an angle-averaged intensity $J_\lambda=B(\lambda,T)$ (see Eq.~38), where $T(z=16)\approx 46$ K, unlike our local neighborhood, where the primary heating agent is UV light. Thus, a typical dust particle is heated at a rate $H(T)=4\pi r_{\rm s}^2 \int_0^\infty d\lambda\,\pi B(\lambda,T)Q(\lambda)$, in terms of the previously defined absorption efficiency $Q(\lambda)$. According to Kirchhoff's law, its emissivity is proportional to $B(\lambda,T_{\rm d})Q(\lambda)$, and so its cooling rate may be similarly written $K(T_{\rm d})=4\pi r_{\rm s}^2 \int_0^\infty d\lambda\,\pi B(\lambda,T_{\rm d})Q(\lambda)$. These integrals are identical except when $T_{\rm d}\not=T$.

To gauge how long it would take for the dust to reach equilibrium with the CMB radiation field if these temperatures were not equal, consider the temperature evolution equation $C(T_{\rm d}) \;dT_{\rm d}/dt=H(T)-K(T_{\rm d})$, where $C(T_{\rm d})$ is the heat capacity. At $T_{\rm d}\sim 46$ K, $C\sim 0.2\, k_{\rm B}N_{\rm s}$ \citep{Draine2001}, where $k_{\rm B}$ is Boltzmann's constant and $N_{\rm s}$ is the number of molecules in the dust grain. For a $\sim 0.1\mu$m sized particle, $N_{\rm s}\sim 3\times 10^8$ \citep{Weingartner2001}, so putting $\langle Q(\lambda)\rangle\sim 0.012$, one finds that $dT_{\rm d}/dt\sim 10^{-7}(T^4-T_{\rm d}^4)$ K s$^{-1}$. Thus, assuming that either $H(T)$ or $K(T_{\rm d})$ is dominant, we infer that it would take about $50$ seconds for the dust to reach equilibrium at $T= T_{\rm d}\sim 46$ K. It is therefore reasonable to assume that the dust was thermalized with the radiation at $z\sim 16$.

The second issue is more constraining. Upon absorbing a photon with wavelength $\lambda$, a dust grain containing $N_{\rm s}$ molecules undergoes a change in temperature $\Delta T_{\rm d}=hc/[\lambda\,C(T_{\rm d})]\sim 7.2\;(\lambda\,N_{\rm s})^{-1}$ K (with $\lambda$ in cm). For the larger grains (i.e., $r_{\rm s}\sim 0.1-0.3$ $\mu$m), with $N_{\rm s}\sim 3\times 10^8-10^{10}$, this is a minuscule fraction ($\sim 10^{-9}-10^{-8}$) of the equilibrium temperature $T_{\rm d}=46$ K throughout the wavelength range $\lambda\sim 0.003-0.02$ cm, so the smooth evolution in $T_{\rm d}$ described in the previous paragraphs seems perfectly attuned to the physics at $z\sim 16$. Smaller grains have less heat capacity and a reduced radiating area, however, so the absorption of photons can lead to temperature spikes \citep{Draine2001}. At $r_{\rm s}\sim 0.003$ $\mu$m, we have $N_{\rm s}\sim 1.4\times 10^4$, so $\Delta T_{\rm d}/T_{\rm d} \sim 6\times 10^{-4}-4\times 10^{-3}$. Evidently, the assumption of a smooth evolution in $T_{\rm d}$ starts to break down for grains smaller than this, since they proceed through stochastic heating via absorption and cooling between the spikes.
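The relaxation implied by this equation can be integrated explicitly. The minimal sketch below (ours; the initial grain temperature is an arbitrary assumption) confirms that a $\sim 0.1\,\mu$m grain settles to $T_{\rm d}\simeq T$ within a few minutes at most, utterly negligible on any cosmological timescale:
\begin{verbatim}
# Euler integration of dT_d/dt ~ 1e-7 (T^4 - T_d^4) K/s for a ~0.1 micron
# grain bathed in the CMB at T ~ 46 K (z ~ 16). The starting temperature
# of 10 K is an arbitrary assumption.
T, k = 46.0, 1.0e-7          # K and K^-3 s^-1
Td, t, dt = 10.0, 0.0, 0.01  # K, s, s

while T - Td > 0.5:          # integrate until within 0.5 K of equilibrium
    Td += k * (T**4 - Td**4) * dt
    t += dt
print(f"T_d = {Td:.1f} K reached after ~{t:.0f} s")
\end{verbatim}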
The dust model required for consistency with the observed spectrum of the CMB therefore consists of silicates with sizes $\sim 0.003-0.3$ $\mu$m, or even larger, though for sizes $\gtrsim 0.3$ $\mu$m, we would then violate our previous estimate of $n_{\rm s}(z=16)$ and the satisfactory result that $f_{\rm Z}\sim 0.2$. As modeled here, the dust is optically thick at all relevant frequencies. Once the dust is destroyed, however, the principal contributor to the optical depth affecting the CMB spectrum is Thomson scattering within the ionized medium across the epoch of reionization. At least for this process, one would not expect a discernible difference between the dust and recombination models because the structure of the reionization region is essentially the same in both cases. The observations constrain when reionization began and ended, and the physics responsible for this process is essentially independent of the background cosmology. Certainly, there are percentage differences arising from the respective age-redshift relationships, which affect the variation in baryonic density with time, but a detailed calculation (Melia \& Fatuzzo 2016) has already shown that the optical depth through this region would be consistent with the value (i.e., $\tau\sim 0.066$) measured by {\it Planck} \citep{Planck2018} in both cases.

Finally, let us quantitatively confirm our earlier statement concerning the negligible impact of Pop III stars on the overall background radiation field. Much more massive ($500\;M_\odot\gtrsim M\gtrsim 21\;M_\odot$) than stars formed today \citep{Bromm2004,Glover2004}, Pop III stars emitted copious high-energy radiation that ionized the halos within which they formed \citep{Johnson2007}. Following their brief ($\sim 10^6-10^7$ yr) lives, a large fraction of these stars \citep{Heger2003} exploded as SNe, ejecting the first heavy elements into the interstellar medium \citep{Whalen2008}. Given the dust size and required number (see above), we estimate that roughly $9\times 10^{44}$ g Mpc$^{-3}$ (co-moving volume) of dust material needed to be injected into the interstellar medium during the principal epoch ($20\gtrsim z\gtrsim 15$) of Pop III star formation. The ultimate fate of the Pop III stars depended on their mass prior to the SN explosion. For a mass $M\lesssim 40\;M_\odot$, roughly $20\%$ of the mass was ejected into the interstellar medium as metals, leaving a compact remnant behind. For $M\gtrsim 140\;M_\odot$, the explosion was much more powerful, dispersing as much as $\sim 50\%$ of the mass \citep{Heger2002}. For the sake of illustration, let us adopt a typical mass $M\sim 100\;M_\odot$, with a typical ejection fraction of $30\%$ (between these two limits). In the $R_{\rm h}=ct$ universe, $1+z=1/(tH_0)$, from which we estimate an interval of time $\Delta t \sim 200$ Myr between $z=15$ and $20$. Thus, $\sim 1.5\times 10^{10}$ Mpc$^{-3}$ Pop III stars must have exploded as SNe to provide the required dust. Prior to exploding, however, these Pop III stars also injected a copious amount of radiation into the ambient medium. A typical Pop III star with mass $M\sim 100\;M_\odot$ was a blackbody emitter with radius $R_*=3.9\,R_\odot$ and surface effective temperature $T_*=10^5$ K, so its bolometric luminosity would have been $\sim 4\times 10^{39}$ erg s$^{-1}$. Thus, the total energy density radiated by these stars during their lives would have been $U_{III}\sim 4\times 10^{63}$ erg Mpc$^{-3}$.
By comparison, the CMB energy density at $z\sim 16$ was $U_{\rm cmb}\sim 8\times 10^{65}$ erg Mpc$^{-3}$. Evidently, $U_{III}/U_{\rm cmb}\sim 0.5\%$, a negligible fraction. In terms of the photon number, this ratio would have been even smaller, given that the average energy of a photon radiated by the stars was much higher than that of the CMB.

A somewhat related issue is the nature of the cosmic infrared background (CIB), and whether it may be related in some way to a dusty origin for the CMB. Most of the CIB is believed to have been produced by extragalactic dust at $z\sim 2$ \citep{Planck2011}. The mechanisms for producing the CMB and the CIB in this model are, however, quite different. The CMB in this picture was produced by saturated dust absorption and emission at $16\gtrsim z\gtrsim 14$, with all of the CMB photons having been absorbed prior to $z\sim 14$. The dust producing the CIB at $z\sim 2$ was presumably heated by stars and quasars near that redshift, thereby producing an infrared signal with a different temperature profile. The CIB and CMB would have been created under very different physical conditions, with the high-$z$ component in thermal equilibrium with the dust, and the lower-$z$ component produced by dust heated by higher-frequency radiation. As we showed earlier, dust heating by Pop II and III stars at $16\gtrsim z\gtrsim 14$ was insignificant compared to the CMB. The reverse situation appears to have materialized at $z\sim 2$.

\subsection{Frequency-dependent Power Spectrum}
The third crucial signature that may distinguish between dust and recombination has to do with anisotropies in the temperature distribution across the sky and how they vary among surveys conducted at different frequencies. In simple terms, one does not expect photospheric depth effects to determine the observed distribution of fluctuations in the case of Thomson scattering because the optical depth is independent of frequency. Thus, maps made at different frequencies should reveal exactly the same pattern of anisotropies, since all of the relic photons are freed from essentially the same LSS. An important caveat, however, is that this simplified recombination picture in the standard model may be ignoring an effect, due to Rayleigh scattering by neutral hydrogen, that could itself produce a percentage-level dependence of the power spectrum on frequency, as we shall discuss later in this section.

The assumption of a frequency-independent power spectrum would almost certainly not be valid in the case of dust if its opacity also depends on frequency. Although photospheric depth effects might not significantly change the shape and size of the larger fluctuations from one map to another, they might alter the observed pattern of anisotropies on the smaller scales if the angular-diameter distance between the LSS's at two different frequencies is comparable to the proper size of the fluctuations themselves. These differences would, at some level, produce variations in the CMB power spectrum compiled at different frequencies. A detailed analysis of the dependence of the CMB power spectrum on frequency was reported recently by the {\it Planck} collaboration \citep{Planck2016a}, following an initial assessment of such effects based on the WMAP first-year release in \cite{Hinshaw2003} (see, e.g., their fig.~2). {\it Planck} maps at different frequencies constrain the underlying CMB differently, and cross-correlating them is quite challenging, in part due to the changing foreground conditions with frequency.
The {\it Planck} analysis has shown that residuals in the half-mission TT power spectra clearly do vary from one cross power spectrum to the next, sampling a frequency range $70-217$ GHz, though this could be due to several effects, including foreground systematics, as well as possible intrinsic variations in the location of the LSS. One may also gauge the dependence of the multipole power coefficients on frequency by varying the maximum multipole number $\ell_{\rm max}$ included in the analysis, from $\sim 900$ to several thousand, thereby probing a possibly greater variation in the observed anisotropies on small scales compared to the larger ones. This particular test produces shifts in the mean values of the optimized cosmological parameters by up to $\sim 1\sigma$, in ways that cannot always be related easily to non-cosmological factors. In addition, the cross power spectrum at lower frequencies ($\lesssim 100$ GHz) shows variations in the amplitude $D_{\ell}$ of up to $\sim 4\sigma$ compared to measurements at higher frequencies. Overall, {\it Planck} finds a multipole power varying by an amount $\Delta D_\ell$ (increasing with multipole number $\ell$ over the frequency range $\sim 70-200$ GHz) anywhere from $\sim 40\;\mu$K$^2$ at $\ell\sim 400$, to $\sim 100\;\mu$K$^2$ at $\ell\gtrsim 800$. Thus, with $D_\ell\sim 2000\;\mu$K$^2$ over this range, one infers a maximum possible variation of the power spectrum---as a result of frequency-induced changes in the location of the LSS---of $\sim 2\%$ at $\ell\sim 400$, increasing to $\sim 5\%$ for $\ell\gtrsim 800$. Thus, in order for a dust origin of the CMB to be consistent with current limits, the angular-diameter distance to the LSS cannot vary with frequency so much that it causes unacceptably large variations in the inferred angular size of the acoustic horizon.

Earlier, we estimated that $z_{\rm cmb}\sim 16$ in the $R_{\rm h}=ct$ universe. This redshift is interesting for several reasons, one of them being that it coincides almost exactly with the beginning of the epoch of reionization at $z\sim 15$. It is tempting to view this as more than a mere coincidence, in the sense that the ramp-up in physical activity producing a rapid increase of the UV emissivity around that time would not only have reionized the hydrogen and helium, but also destroyed the dust. So a viable scenario in this picture would have the medium becoming optically thick with dust by $z\sim 16$, then rapidly thinning out due to the destruction of the dust grains by $z\sim 15$. Any variation in the location of the LSS would then be limited to the range of angular-diameter distances between $z\sim 14-15$ and $16$. We can easily estimate the impact this would have on the inferred angular size $\theta_{\rm s}$. Assuming the medium was optically thick at $z_{\rm cmb}$ and that it became mostly transparent by $z=z_{\rm cmb}-\Delta z$, one can easily show from Equation~(29) that the change in $\theta_{\rm s}$ would be
\begin{equation}
\Delta\theta_{\rm s}={r_{\rm BAO}^{R_{\rm h}=ct}\over R_{\rm h}(t_0)} \left[{1\over\ln(1+z_{\rm cmb}-\Delta z)}-{1\over\ln(1+z_{\rm cmb})}\right]\;.
\end{equation}
Table~2 summarizes some critical data extracted from this relation. Given the relatively weak dependence of $d_A^{R_{\rm h}=ct}(z)$ on $z$ at these redshifts, the apparent angular size of the acoustic horizon changes very slowly.
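A quick evaluation of Equation~(48) illustrates this weak dependence; the sketch below (ours, for illustration) uses the Table~1 average $r_{\rm BAO}^{R_{\rm h}=ct}$ and the parameters adopted earlier, and returns numbers close to those listed in Table~2:
\begin{verbatim}
# Shift in the apparent acoustic scale if the LSS moved from z_cmb ~ 16.05
# to z_cmb - dz (Equation 48). r_BAO and H0 follow Table 1 and the text.
import numpy as np

c, H0 = 299792.458, 67.6                  # km/s and km/s/Mpc
Rh0 = c / H0                              # Mpc
r_bao, z_cmb = 131.1, 16.05               # Mpc and assumed LSS redshift
t0_Myr = 977.8e3 / H0                     # 1/H0 in Myr
theta_s = r_bao / (Rh0 * np.log(1.0 + z_cmb))   # rad

for dz in (1.0, 2.0):
    dth = (r_bao/Rh0) * (1.0/np.log(1.0 + z_cmb - dz)
                         - 1.0/np.log(1.0 + z_cmb))
    dt = t0_Myr/(1.0 + z_cmb - dz) - t0_Myr/(1.0 + z_cmb)
    print(f"dz={dz:.0f}: dtheta_s={np.degrees(dth):.3f} deg "
          f"({100.0*dth/theta_s:.1f}% of theta_s), dt ~ {dt:.0f} Myr")
\end{verbatim}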
Consequently, even if it took the Universe $50-100$ Myr to become transparent and initiate the epoch of reionization, the impact on the inferred CMB power spectrum appears to be no more than a few percent, consistent with current observational limits.

\begin{table*}
\vskip 0.2in
\center
\centerline{{\bf Table 2.} Dust photospheric depth at the LSS}
\begin{tabular}{cccc}
\hline\hline
$\Delta z$\qquad\qquad&{$\Delta\theta_{\rm s}$}\qquad&\qquad Percentage\qquad& \qquad{$\Delta t$}\qquad \\
\qquad&{(deg)}\qquad&\qquad of $\theta_{\rm s}$& \qquad {(Myr)}\qquad \\
\hline
1\qquad\qquad & 0.013 &\qquad 2.2$\%$ & \qquad 53 \\
2\qquad\qquad & 0.025 &\qquad 4.2$\%$ & \qquad 100 \\
\hline\hline
\end{tabular}
\end{table*}

Some support for this idea may be found in our current understanding of how dust is formed and destroyed in the ISM. Though some differences distinguish nucleosynthesis and mass ejection in Pop III stars from the analogous processes occurring during subsequent star formation, two factors pertaining to the life-cycle of dust were no doubt the same: (1) that dust principally formed within the ejecta of evolved stars; and (2) that it was then destroyed, much more rapidly than it was formed, by supernova-generated shock waves. These essential facts have been known since the earliest observation of shock-induced dust destruction over half a century ago \citep{Routly1952,Cowie1978,Seab1983,Welty2002}, creating a severe constraint on how much dust can possibly be present near young, star-forming regions. The early-type stars among them are the strongest UV emitters; they also happen to be the ones that evolve most rapidly, on a time scale of only $10-20$ Myr, and then end their lives as supernovae. The shocks they produce in the ISM result in the complete destruction of all grains on a time scale $\lesssim 100$ Myr \citep{Jones1994,Jones1996}. When this time scale is compared to the results shown in Table~2, the idea that the Universe transitioned from being optically thick with dust at $z\sim 16$ to optically thin by $z\sim 14-15$ becomes quite compelling. There are several links in this chain, however, and maybe the correlations we have found are just coincidences. But at face value, there is an elegant synthesis of basic, well-understood astrophysical principles that work together to provide a self-consistent picture of how the cosmic fluid might have become optically thick by $z\sim 16$ due to dust production in Pop III stars, followed by an even more rapid phase of Pop II star formation and death. The earliest of these stars would have completely destroyed the dust with their supernova-induced shocks in a mere $\sim 100$ Myr, liberating the CMB relic photons and initiating the epoch of reionization by $z\sim 14-15$.

To complete the discussion concerning whether or not an observed frequency shift in the power spectrum can distinguish between the recombination and dust models for the CMB using future high-precision measurements, however, one must also consider the impact of Rayleigh scattering by neutral hydrogen, which may itself introduce some frequency dependence in the observed anisotropic structure. This effect is due to the classical scattering of long-wavelength photons by the HI dipole, which has an asymptotic $\nu^4$ dependence on frequency.
Since the transition from a fully ionized plasma to neutral hydrogen and helium is not sudden at recombination, the higher frequencies of the observed CMB anisotropies should be Rayleigh scattered by the fractional density of HI atoms that builds while recombination proceeds (see, e.g., \citealt{Takahara1991,Yu2001,Lewis2013,Alipour2015}). But though this effect can strengthen considerably with increasing frequency, the blackbody spectrum also falls rapidly, so there are very few photons at the frequencies where Rayleigh scattering would be most impactful. The above-referenced studies have shown that the Rayleigh signal is most likely to be observable over a range of frequencies $200$ GHz $\lesssim \nu\lesssim 800$ GHz, producing a $\lesssim 1\%$ reduction in anisotropy (for both the temperature and E-polarization) at $353$ GHz. Nevertheless, the frequency-dependent dust photospheric depth we have been discussing in this section may still be distinguishable from the Rayleigh signal because it is expected to produce $\lesssim 4\%$ variations in the power spectrum even at frequencies below $\sim 200$ GHz, where the latter is not observable. As noted earlier, the percentage-level variations suggested by the latest {\it Planck} observations are seen in the frequency range $\sim 70$ GHz $-200$ GHz, where the Rayleigh distortions would be $\ll 1\%$.

\subsection{E-mode and B-mode Polarization}
The three aspects we have just considered---the detection of recombination lines, the CMB spectrum, and its possible frequency dependence in the dust model---will feature prominently in upcoming comparative tests between the recombination and dust scenarios. But there are several other factors we must consider, including what the detection (or non-detection) of E-mode and B-mode polarization can tell us about the medium in which the CMB is produced. The linear-polarization pattern can be geometrically decomposed into two rotational invariants, the E (gradient) mode and B (curl) mode \citep{Kamionkowski1997,Zaldarriaga1997}. In the standard model, E-mode polarization is produced by Thomson scattering of partially anisotropic radiation associated with the same scalar density fluctuations that produce the temperature hot spots. These are longitudinal compression modes with density enhancements aligned perpendicular to the direction of propagation, and they therefore result in a polarization pattern with zero curl. Tensor (or gravitational-wave) modes, on the other hand, alter the frequency of the background anisotropic radiation along diagonals to the propagation vector as they cross the LSS, and the subsequent Thomson scattering therefore produces a polarization pattern with a non-zero curl. The detection of B-mode polarization is therefore an important signature of tensor fluctuations associated with a quantized scalar (possibly inflaton) field in the early Universe.

As reported by \cite{Planck2018}, the foreground polarized intensity produced by dust in the Milky Way is several orders of magnitude larger than that seen (or expected) in the CMB. Aspherical dust particles align with an ambient magnetic field and produce both E-mode and B-mode polarization. But the relative power in these two components is a complicated function of the underlying physical conditions, notably the strength of the magnetic field {\bf B} and its structure (i.e., turbulent versus smooth), and its energy density relative to the plasma density.
Many expected to see a randomly oriented foreground polarization map with equal powers in the E-modes and B-modes \citep{Caldwell2017}. Instead, the {\it Planck} data reveal a surprising E/B asymmetry of a factor $\sim 2$ \citep{Planck2018}. Equally important, {\it Planck} also reveals a positive TE correlation in the dust emission, to which we shall return shortly. Once the foreground polarization was subtracted, however, the remaining signal contained only an E-mode pattern and no B-mode that one could attribute to the CMB. In further analysis, the CMB peaks were stacked, revealing a characteristic ringing pattern in temperature associated with the first acoustic peak (on sub-degree scales), and a high signal-to-noise pattern in the E-mode stack (see, e.g., their fig.~20). This correlation between the temperature and E-mode anisotropies observed by {\it Planck} is therefore consistent with the standard picture (see above), supporting the view that the CMB must have been created by recombination in the context of $\Lambda$CDM.

But neither the absence of a B-mode in the foreground-subtracted signal, nor the TE correlation, can yet rule out a dust origin for the CMB in the alternative scenario we are considering in this paper. The observations are not yet precise enough, nor is the theoretical basis for dust polarization sufficiently well established, for us to say for sure whether B-mode polarization is, or should be, present in the foreground-subtracted CMB map. There are two requirements for dust to emit polarized light: (1) non-sphericity of the dust grains, allowing them to spin about an axis perpendicular to their semi-major axis; and (2) an organized magnetic field to maintain alignment of the spin axes. We do not know whether the earliest dust grains produced by Population III stellar ejecta were aspherical, but our experience with other dust environments suggests this is quite likely. Insofar as the magnetic fields are concerned, our current measurements suggest that---if they exist---intergalactic magnetic fields are probably weaker than those found within galaxies, where $|{\bf B}_{\rm G}|$ is typically $3-4\,\mu$G \citep{Grasso2001}, but are certainly not ruled out. Observations of Abell clusters imply field amplitudes $|{\bf B}_{\rm ICM}|\sim 1-10\,\mu$G but, beyond that, no firm measurements have yet been made. High-resolution measurements of the rotation measure in high-redshift quasars hint at the presence of weak magnetic fields in the early Universe. For example, radio observations of the quasar 3C191 at $z=1.945$ \citep{Kronberg1994} are consistent with $|{\bf B}_{\rm IGM}|\sim 0.4-4\,\mu$G. For the Universe as a whole, some interesting limits may be derived using the ionization fraction in the cosmic fluid and reasonable assumptions concerning the magnetic coherence length. If one adopts the largest reversal scale ($\sim 1$ Mpc) seen in galaxy clusters, one concludes that $|{\bf B}_{\rm IGM}|\lesssim 10^{-9}$ G (see \citealt{Kronberg1994,Grasso2001}, and references cited therein). These fields could be as small as $\sim 10^{-11}$ G, however, if their coherence length is much larger. Several other arguments add some support to the view that the primordial $|{\bf B}_{\rm IGM}|$ could have fallen within this range. Specifically, the galactic dynamo origin for ${\bf B}_{\rm G}$ is not widely accepted. The main alternative is to assume that the galactic field ${\bf B}_{\rm G}$ resulted directly from a primordial field compressed adiabatically when the protogalactic cloud collapsed.
This would imply a primordial field strength $\sim 10^{-10}$ G at $z>5$, when galaxies were forming, consistent with the observational limits derived from the rotation measures of high-redshift objects \citep{Grasso2001}. We simply do not know yet what the magnetic-field strength would have been during the epoch of Pop II and III star formation and evolution. It is quite possible, e.g., that the magnetic field could have been even stronger than $|{\bf B}_{\rm IGM}|$ within the halos where the Pop III stars ejected most of their dust. Of course, such criteria impact whether or not the dust grains could have been aligned. Some proposed mechanisms for this process rely on the strength of {\bf B}, but others---such as mechanical alignment \citep{Dolginov1976,Lazarian1994,Roberge1995,Hoang2012} and radiative alignment \citep{Dolginov1976,Draine1996,Draine1997,Weingartner2003,Lazarian2007}---are not so sensitive. At this stage, it is safe to say that our experience with dust-grain alignment and polarized emission in our local neighborhood may be insufficient to fully appreciate the analogous process occurring during Pop III stellar evolution at $z\sim 16$.

But though we have never seen polarized dust emission from the intergalactic medium, there are several good reasons to suspect that the dust origin for the CMB described in this paper could nonetheless account for the polarization constraints already available today. First, the dust producing the CMB would presumably have been destroyed prior to $z\sim 14$, so the absence of polarized dust emission from the IGM at $z<14$ is not an indication that it lacks a magnetic field (see above). Second, theoretical work on better understanding the characteristics of dust emission has begun in earnest, mostly in response to these {\it Planck} observations. We know for a broad range of physical conditions that the dust polarization fraction is typically $\sim 6-10\%$ (see, e.g., \citealt{Draine2009}), not unlike the $\sim 10\%$ fraction measured in the CMB \citep{Planck2018}. Third, we now know that the E-mode and B-mode powers depend on several detailed properties of the dust profile and the background magnetic field (see, e.g., \citealt{Caldwell2017,Kritsuk2018,Kim2019}). In fact, it has long been known that an alignment between the density structures and the magnetic fields generates more E-mode power than B-mode \citep{Zaldarriaga2001}. In other words, the E/B asymmetry depends quite sensitively on the randomness of this alignment, such that a higher degree of randomness produces less E/B asymmetry. Thus, a highly organized {\bf B} within the halos where the Pop III star dust was expelled would have produced a large E/B asymmetry. In their analysis, \cite{Caldwell2017} considered this dependence in the context of magnetized fluctuations decomposed into slow, fast, and Alfv\'en magnetohydrodynamic waves, and showed that E/B could range anywhere from $\sim 2$ (as observed by {\it Planck} in the Milky Way) to as much as $\sim 20$ when the medium is characterized by weak fields and fast magnetosonic waves---the conditions one would have expected for the dust environment at $z\sim 16$ (see their figure 3 for a summary of these results). Therefore, the current non-detection of B-mode polarization in the foreground-subtracted CMB signal cannot yet be used to rule out the dust scenario described in this paper.
Ironically, a future detection of B-mode polarization could be used to constrain either inflationary models in the context of $\Lambda$CDM, or the underlying physical conditions in a magnetized dusty environment at $z\sim 16$, if the scenario developed in this paper continues to be viable. Finally, {\it Planck} \citep{Planck2018} has confirmed the existence of a TE correlation in the foreground dust emission (see above), suggesting that the overlap seen in the temperature and E-mode stacks of the foreground-subtracted CMB signal could have been due either to Thomson scattering in the recombination scenario, or to polarized dust emission at $z\sim 16$.

\subsection{Other Potential Shortcomings of the Dust Model}
Lensing of the CMB has been measured with very high precision, and appears to be consistent with the transfer of radiation over a comoving distance extending from $z\sim 1080$ to $0$ (for an early review, see \citealt{Lewis2006}). The latest {\it Planck} data \citep{Planck2016b} would therefore not support a CMB originating at $z\sim 16$ in the context of $\Lambda$CDM. But structure formation happened differently in $R_{\rm h}=ct$, and measures of distance deviate sufficiently from one model to the next that weak-lensing calculations need to be carefully redone. We do not yet have a complete simulation of the fluctuation growth in this model over the entire cosmic history, though some initial steps have been taken \citep{Melia2017a,Yennapureddy2018}. Insofar as lensing is concerned, there are several key factors that one may use to qualitatively assess how the lensing effects in $R_{\rm h}=ct$ would differ from those in $\Lambda$CDM. Although the LSS redshift is different in the two models, and the time-redshift relationship varies by factors of up to $\sim 2$, what matters most critically in determining the lensing effects are: (1) the comoving distance to the LSS; (2) the potential well sizes; and (3) the background pattern of anisotropies at the LSS. Together with estimates of the BAO scale (see Table~1 above), the initial calculations completed thus far for the formation of structure in $R_{\rm h}=ct$ inform us that the typical potential well size in this model is about $265$ Mpc (compared with $\sim 300$ Mpc in $\Lambda$CDM), while the comoving distance between $z\sim 16$ and $0$ is $\sim 12,200$ Mpc. Thus, the number of potential wells traversed by the radiation from where the CMB originates to $z=0$ is approximately $12,200/265\sim 46$. As it turns out, this is almost exactly the same number as in the standard model from $z\sim 1080$ to $0$ \citep{Lewis2006}. The deflection angle due to weak lensing from $z\sim 16$ to $0$ in $R_{\rm h}=ct$ is therefore expected to be quite similar to that from $z\sim 1080$ to $0$ in $\Lambda$CDM. We may estimate it by assuming that the potentials are uncorrelated, so that the total deflection angle should be $\sim 10^{-4}\sqrt{46}$ radians, where $\sim 10^{-4}$ radians is the approximate deflection due to a single well. Thus, the overall deflection angle is about $2$ arcmin in both models. The actual calculation of the remapping of the CMB temperature due to weak lensing is much more complicated than this, of course, but the fact that the scales are so similar suggests that the observed lensing features probably do not rule out a dust origin for the CMB in $R_{\rm h}=ct$.
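The arithmetic behind this estimate is summarized in the short sketch below (ours, for illustration only); the well size, path length and single-well deflection are the values quoted above:
\begin{verbatim}
# Random-walk estimate of the total weak-lensing deflection in R_h = ct:
# N uncorrelated potential wells of size L along a comoving path D, each
# deflecting the CMB photons by ~1e-4 rad. Inputs follow the text.
import numpy as np

L = 265.0          # typical potential well size, Mpc
D = 12200.0        # comoving distance from z ~ 16 to 0, Mpc
alpha1 = 1.0e-4    # approximate deflection per well, rad

N = D / L
alpha = alpha1 * np.sqrt(N)
print(f"N ~ {N:.0f} wells -> total deflection ~ "
      f"{np.degrees(alpha)*60.0:.1f} arcmin")
\end{verbatim}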
Finally, there would appear to be a problem growing the fluctuation amplitude of $\sim 10^{-5}$ from $z\sim 16$ to $0$ in the standard model, given that there is barely enough time to do so starting from $z\sim 1080$. While this is true in $\Lambda$CDM, the time-redshift relationship in $R_{\rm h}=ct$ is sufficiently different to compensate for the shorter redshift range. Again, we do not yet have a complete history of the fluctuation growth in this model, but the growth equation differs from that in $\Lambda$CDM, primarily because the background metric is not the same \citep{Melia2017a}. The principal issue, though, is the timeline $t(z) = t_0/(1+z)$. It is easy to see that the time elapsed from $z\sim 17$ to today is about $13.6$ Gyr. By comparison, the time elapsed since $z\sim 1080$ in $\Lambda$CDM is about $13.7$ Gyr. One would not claim that this difference is sufficient to create a problem for $R_{\rm h}=ct$, especially since the two growth equations are not the same.

\section{Discussion}
Our principal goal in this paper has been to demonstrate how the zero active mass condition (i.e., $\rho+3p=0$) underlying the $R_{\rm h}=ct$ cosmology guides the evolution in $\rho_{\rm r}$, $\rho_{\rm m}$, $\rho_{\rm de}$ and $T(z)$, particularly at early times when the CMB was produced. Together with additional constraints from the measured values of $\theta_{\rm s}$ and $r_{\rm BAO}$, and the adoption of the acoustic horizon as a standard ruler, we have concluded that $z_{\rm cmb}$ in this model must be much smaller than its corresponding value in $\Lambda$CDM, eliminating the possibility that the `recombination' of protons and electrons could have liberated the CMB relic photons in this model. There is less flexibility in finding an alternative mechanism for producing the CMB in this picture than one might think, however, because the physical attributes of the LSS and the measured values of $H_0$ and $T_0$ point back quite robustly to the dust model proposed several decades ago. Ironically, many of the features in this model that were resoundingly rejected in the context of $\Lambda$CDM become fully self-consistent with each other and the data when viewed with $R_{\rm h}=ct$ as the background cosmology. The fact that the creation of the CMB at $z_{\rm cmb}\sim 16$ coincides very well with the onset of the epoch of reionization at $z\sim 15$ is a strong point in its favour, because the astrophysics of this process is well understood in the local Universe, from which one expects a correlation between the rapid increase in UV emissivity and the rapid destruction of dust grains in star-forming regions.

Fortunately, the observational differences between the recombination and dust scenarios should be quite distinguishable with the improved sensitivities of future experiments, thus allowing us to definitively rule out one or the other of these mechanisms in the near future. This result may come either (i) from the detection of recombination lines at $z\sim 1080$, which without any doubt would rule out dust and very strongly affirm the recombination model in $\Lambda$CDM, or (ii) from the confirmation of a robust frequency dependence of the CMB power spectrum, with $\sim 5\%$ variations arising from the displacement of the LSS from $z\sim 16$ to $z\sim 15$ (or lower) across the sampled frequency range. In the meantime, there is much to do on the theoretical front.
Our initial investigation into how acoustic waves might have evolved in the early $R_{\rm h}=ct$ universe, eventually producing the multi-peak structure in the temperature spectrum of the CMB and, later, also the characteristic BAO distance scale in the distribution of galaxies and the Ly-$\alpha$ forest, has resulted in a self-consistent picture for the redshift dependence of the components in the cosmic fluid. By no means should this study yet be viewed as compelling, however, given that the physics of fluctuation growth throughout this period is very complex and dependent on many assumptions, some reasonably justified, others subject to further scrutiny. The corresponding picture in the standard model has undergone several decades of development, based on a combination of simple arguments---such as the use of Equation~(16) to estimate the acoustic horizon in the comoving frame---and much more elaborate semi-analytic and full numerical simulations to follow the various epochs of halo growth as the dominant contributions to the cosmic fluid transitioned from radiation to coupled baryon-radiation components, and finally to matter. In addition, one must introduce some reasonable distinction between the fluctuations themselves and the smooth background. For example, a principal concern with the modeling of growth across the epoch of recombination is the delayed condensation of baryons relative to dark matter \citep{Yoshida2003,Naoz2006}. Structure formation in the early Universe begins with the gravitational amplification of small seed fluctuations, a process believed to form dark-matter halos. Subsequent hydrodynamic processes allow the baryonic gas to fall into these potential wells, undergoing shock heating and radiative cooling along the way. Several different studies have indicated that a substantial difference may therefore exist in the distribution of baryons and dark matter at decoupling (for some of the pioneering work on this topic, see \citealt{Hu1995,Ma1995,Seljak1996,Yamamoto2001,Singh2002}). After recombination, when the baryons were no longer coupled to the radiation, gravitational infall caused the baryon density fluctuations to catch up to the dark matter anisotropies, though perturbation modes below the Jeans length were presumably delayed as a result of the initial oscillations. It is quite clear from this brief overview that the development of an acoustic scale, and its subsequent evolution throughout the formation of large-scale structure, not only depends on rather complex physics, but must also probably vary between different cosmological models. {\it For example, a principal difference between the recombination and dust scenarios is that decoupling in the latter would have occurred well before the liberation of the CMB relic photons, which means that the emergence of an acoustic horizon would be mostly hidden from view by the large dust opacity at smaller redshifts.} The only features that would have survived across $z_{\rm cmb}$ are $\theta_{\rm s}$ and the scale $r_{\rm BAO}$, but note that both of these quantities would have been set well before $t_{\rm cmb}$, creating some observational ambiguity about the value of $z_{\rm dec}$, which equals $z_{\rm cmb}$ in $\Lambda$CDM, but not in $R_{\rm h}=ct$. This paper represents merely the first step, basically the use of various measurements to estimate the physical conditions prior to $z_{\rm cmb}$.
And though the picture is self-consistent thus far, some of the essential elements may change, perhaps considerably, once realistic simulations are carried out. Nonetheless, the empirical estimate shown in Equation~(35) is quite robust, because regardless of how and when the baryonic structure started to form, this average sound speed is required by the assumed equality of the measured acoustic radius $r_{\rm s}$ and the BAO scale $r_{\rm BAO}$. This estimate therefore includes effects such as oscillations in the coupled baryon-radiation fluid and the subsequent baryonic catch-up. In other words, we do not actually need to know the specifics of the medium through which the waves propagated to obtain this number because, at this level, it is derived from the observations. As we look forward to further developments in this analysis, there are several clues and indicators that are already quite evident. The self-consistent picture emerging in this paper requires a continued coupling between the various components in the (cosmic) background fluid, as one may infer directly from figure~1. In the early universe, the background radiation, dark energy and matter would necessarily have been coupled. Much of this requires new physics, but this situation is hardly new or unique. It is difficult to avoid such a conclusion in any cosmological model. Even $\Lambda$CDM has several such requirements that are yet to be resolved. Consider that we have no idea what the inflaton field is. Yet without it, $\Lambda$CDM cannot resolve the horizon problem. We also have little idea of how baryonic and dark matter were generated initially. Certainly, matter was not present at the big bang, nor during the inflationary phase. Dark energy remains a big mystery, particularly if it is really a cosmological constant, given that its density is many orders of magnitude smaller than quantum field theory requires for the vacuum. All of these issues await a possible resolution in physics beyond the standard model. The situation is somewhat different with the fluctuations themselves. Recent work with this model \citep{Melia2017a} suggests that, in spite of this coupling, dark energy remained a smooth background, and did not participate in the fluctuation growth. It is therefore reasonable to expect that the sequence of dark matter condensation followed by baryonic catch-up (required in $\Lambda$CDM) carries over in an analogous fashion to $R_{\rm h}=ct$. But there are several important differences, one of them being that neither radiation nor matter could apparently have represented more than $\sim 20-30\%$ of the total energy density at any given time. This would almost certainly have slowed down the rate of growth in the early universe, but there would have been ample time to accommodate this difference, given that $t_{\rm cmb}$ in this model is $\sim 849$ Myr, compared to $\sim 380,000$ yr in $\Lambda$CDM. Aside from the smaller fractional energy density representation of matter and radiation, there is the additional difference compared to $\Lambda$CDM brought about by the implied evolution of $\rho_{\rm m}$, $\rho_{\rm r}$ and $\rho_{\rm de}$ (see figure~1) consistent with the zero active mass condition. The latter leads to a growth rate equation for the fluctuations lacking a gravitational growth term to first order \citep{Melia2017a}. This feature has actually been quite successful in accounting for the observed growth rate at $z\lesssim 2$, matching the value of $f\sigma_8$ inferred at redshift zero quite well.
By comparison, the corresponding equation in $\Lambda$CDM predicts a curvature in this rate as a function of redshift that is not supported by the observations \citep{Melia2017a}. This growth characteristic in $R_{\rm h}=ct$ also applies to the early universe, adding to our expectation that the gravitationally-induced growth of fluctuations was slower in this model compared to $\Lambda$CDM.

\section{Conclusion}

The acoustic scale associated with the propagation of sound waves prior to recombination has become one of the most useful measurements in cosmology, providing a standard ruler for the optimization of several key parameters in models such as $\Lambda$CDM. The standard model, however, does not fit the measured BAO scale very well. And a more recent analysis of the CMB angular correlation function provides some evidence against basic, slow-roll inflation, making it more difficult to understand how the horizon problem may be avoided in $\Lambda$CDM. Given the success of the alternative cosmology known as $R_{\rm h}=ct$ in accounting for a diverse set of observational data, we have therefore sought to better understand how the origin of the CMB could be interpreted in this model. We have found that the characteristic length ($\sim 131\pm 4.3$ Mpc) inferred from large-scale structure may be interpreted as a BAO scale in $R_{\rm h}=ct$, as long as $z_{\rm cmb}\sim 16$, which would mean that the location of the LSS would essentially coincide with the onset of the epoch of reionization. This picture is consistent with the evolutionary requirements of the zero active mass condition and with our understanding of the life cycle of dust in star-forming regions. Of course, much work remains to be done. The results look promising thus far, suggesting that finding a more complete solution, incorporating the necessary physics to account for the growth of fluctuations up to $z_{\rm dec}$ and their continued evolution towards $z_{\rm cmb}$, is fully warranted. This effort is currently underway and the outcome will be reported elsewhere. On the observational front, the recombination and dust models for the origin of the CMB should be readily distinguishable with upcoming, higher-sensitivity instruments, which should either detect recombination lines at $z\sim 1080$, or establish a robust variation with frequency of the CMB power spectrum, at the level of $\sim 2-5\%$, due to the displacement of the LSS from $z\sim 16$ to $z\sim 14-15$ across the sampled frequency range.

\acknowledgments I am grateful to the anonymous referee for several helpful suggestions to improve the presentation in the manuscript. I am also very happy to acknowledge helpful discussions with Daniel Eisenstein and Anthony Challinor regarding the acoustic scale, and with Martin Rees, Jos\'e Alberto Rubino-Martin, Ned Wright and Craig Hogan for insights concerning the last-scattering surface. I thank Amherst College for its support through a John Woodruff Simpson Lectureship, and Purple Mountain Observatory in Nanjing, China, for its hospitality while part of this work was being carried out. This work was partially supported by grant 2012T1J0011 from The Chinese Academy of Sciences Visiting Professorships for Senior International Scientists, and grant GDJ20120491013 from the Chinese State Administration of Foreign Experts Affairs.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Message Passing Network}
\label{app:message-passing}
At message passing step $t$, each bond $(u, v) \in \mathcal{E}$ is associated with two messages $\mathbf{m}_{uv}^{(t)}$ and $\mathbf{m}_{vu}^{(t)}$. Message $\mathbf{m}_{uv}^{(t)}$ is updated using: \begin{align} \mathbf{m}_{uv}^{(t+1)} & = \mathrm{GRU} \left(\mathbf{x}_u, \mathbf{x}_{uv}, \{\mathbf{m}_{wu}^{(t)}\}_{w \in N(u) \setminus v}\right) \end{align} where $\mathrm{GRU}$ denotes the Gated Recurrent Unit, adapted for message passing \cite{jin2018junction}: \begin{align} \mathbf{s}_{uv} & = \sum_{k \in N(u) \setminus v}{\mathbf{m}_{ku}^{(t)}} \\ \mathbf{z}_{uv} & = \mathrm{\sigma} \left({\mathbf{W_z}} \left[\mathbf{x}_u, \mathbf{x}_{uv}, \mathbf{s}_{uv} \right] + b_z \right) \\ \mathbf{r}_{ku} & = \mathrm{\sigma} \left(\mathbf{W_r} \left[\mathbf{x}_u, \mathbf{x}_{uv}, \mathbf{m}_{ku}^{(t)}\right] + b_r\right) \\ \mathbf{\tilde{r}}_{uv} & = \sum_{k \in N(u) \setminus v} {\mathbf{r}_{ku} \odot \mathbf{m}_{ku}^{(t)}} \\ \mathbf{\tilde{m}}_{uv} & = \mathrm{\tanh} \left(\mathbf{W} \left[\mathbf{x}_u, \mathbf{x}_{uv}\right] + \mathbf{U}\mathbf{\tilde{r}}_{uv} + b \right) \\ \mathbf{m}_{uv}^{(t+1)} & = \left(1 - \mathbf{z}_{uv}\right) \odot \mathbf{s}_{uv} + \mathbf{z}_{uv} \odot \mathbf{\tilde{m}}_{uv} \end{align} After $T$ steps of iteration, we aggregate the messages with a neural network $g(\cdot)$ to derive the representation for each atom: \begin{align} \mathbf{c}_u & = \mathrm{g} \left(\mathbf{x}_{u}, \sum_{k \in N(u)}{\mathbf{m}_{ku}^{(T)}} \right) \end{align}
\section{Multiple Edit Prediction}
\label{app:multiple_pred}
We propose an autoregressive model for multiple edit prediction that allows us to represent arbitrary-length edit sets. The model makes no assumption on the connectivity of the reaction centers or the electron flow topology, addressing the drawbacks mentioned in \cite{bradshaw2018a, jin2017predicting}. Each edit step $t$ uses the intermediate graph $\mathcal{G}_{s}^{(t)}$ as input, obtained by applying the edits predicted before step $t$ to $\mathcal{G}_p$. Atom and bond labels are now indexed by the edit step, and a new termination symbol $y_d^{(t)}$ is introduced such that $\sum_{(u, v), k} {y_{uvk}^{(t)}} + \sum_{u}{y_u^{(t)}} + y_d^{(t)} = 1$. The number of atoms remains unchanged during edit prediction, allowing us to associate a hidden state $\mathbf{h}_u^{(t)}$ with every atom $u$. Given representations $\mathbf{c}_u^{(t)}$ returned by the $\mathrm{MPN}(\cdot)$ for $\mathcal{G}_{s}^{(t)}$, we update the atom hidden states as \begin{align} \label{eqn:multi-edit-update} \mathbf{h}_u^{(t)} & = \mathrm{\tau} \left(\mathbf{W_{h}h}_u^{(t-1)} + \mathbf{W_{c}c}_u^{(t)} + b \right) \end{align} The bond hidden state $\mathbf{h}_{uv}^{(t)} = (\mathbf{h}_u^{(t)} \hspace{2pt}|| \hspace{2pt} \mathbf{h}_v^{(t)})$ is defined similarly to the single-edit case. We also compute the termination score using a molecule hidden state $\mathbf{h}_m^{(t)} = \sum_{u \in \mathcal{G}_s^{(t)}}{\mathbf{h}_u^{(t)}}$.
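As a concrete reading of the GRU update defined in Appendix~\ref{app:message-passing}, the following PyTorch sketch computes one update of a single directed message $\mathbf{m}_{uv}$. It is unbatched for readability, the class and dimension names are ours, and it is not the released implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class GRUMessageUpdate(nn.Module):
    """One step of the GRU message update defined above (illustrative)."""

    def __init__(self, atom_dim, bond_dim, hidden_dim):
        super().__init__()
        in_dim = atom_dim + bond_dim
        self.W_z = nn.Linear(in_dim + hidden_dim, hidden_dim)  # update gate
        self.W_r = nn.Linear(in_dim + hidden_dim, hidden_dim)  # reset gates
        self.W = nn.Linear(in_dim, hidden_dim)
        self.U = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x_u, x_uv, incoming):
        # incoming: (K, H) tensor stacking m_ku for k in N(u) \ v
        s_uv = incoming.sum(dim=0)             # sum of inbound messages
        feats = torch.cat([x_u, x_uv])         # [x_u, x_uv]
        z_uv = torch.sigmoid(self.W_z(torch.cat([feats, s_uv])))
        f_rep = feats.expand(incoming.size(0), -1)
        r = torch.sigmoid(self.W_r(torch.cat([f_rep, incoming], dim=1)))
        r_tilde = (r * incoming).sum(dim=0)    # reset-gated message sum
        m_tilde = torch.tanh(self.W(feats) + self.U(r_tilde))
        return (1.0 - z_uv) * s_uv + z_uv * m_tilde   # new m_uv
\end{verbatim}
In practice, all directed messages are updated in parallel, and the update is iterated $T$ times before the per-atom aggregation $g(\cdot)$.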
The edit logits are predicted by passing these hidden states through corresponding neural networks: \begin{align} \label{eqn:multi-edit-score} s_{uvk}^{(t)} &= \mathbf{u_k}^T \mathrm{\tau}\left(\mathbf{W_kh}_{uv}^{(t)} + b_k \right)\\ s_u^{(t)} &= \mathbf{u_a}^T \mathrm{\tau} \left(\mathbf{W_ah}_u^{(t)} + b_a \right) \\ s_d^{(t)} &= \mathbf{u_d}^T \mathrm{\tau} \left(\mathbf{W_dh}_m^{(t)} + b_d \right) \end{align} \paragraph{Training} Training minimizes the cross-entropy loss over possible edits, aggregated over edit steps \begin{equation} \mathcal{L}_e (\mathcal{T}_e) = -\sum_{(\mathcal{G}_p, E) \in \mathcal{T}_e}\sum_{t=1}^{|E|}\left({\sum_{((u, v), k) \in E[t]}{y_{uvk}^{(t)} \mathrm{\log}(s_{uvk}^{(t)})} + \sum_{u \in E[t]}{y_u^{(t)} \mathrm{\log}(s_u^{(t)})} + y_d^{(t)} \mathrm{\log}(s_d^{(t)})}\right) \end{equation} Training utilizes \textit{teacher-forcing} so that the model makes predictions given correct histories.
\section{Leaving Group Attachment}
\label{app:leaving_group_attach}
Atoms in the leaving groups that attach to synthons are marked during vocabulary construction. The number of such atoms is used to divide leaving groups into single- and multiple-attachment categories. The single-attachment leaving groups are further divided into single- and double-bond attachments depending on the {\em valency} of the attaching atom. By default, for leaving groups in the multiple-attachment category, single bonds are added between the attaching atoms on the synthon and the leaving group. For multiple-attachment leaving groups with a combination of single and double bonds, the attachment is hardcoded. A single edit can result in a maximum of two attaching atoms for the synthon(s). For the case where the model predicts a leaving group with a single attachment and the predicted edit results in a synthon with two attaching atoms, we attach to the first atom. For the opposite case, where we have multiple attaching atoms on the leaving group and a single attaching atom for the synthon, the atoms on the leaving group are attached through their respective bonds. The former case represents incorrect model predictions, and is not observed as ground truth.
\section{Experimental Details}
\label{app:experimental_details}
Our model is implemented in PyTorch \citep{pytorch2019}. We also use the open-source software RDKit \citep{rdkit2017} to process molecules for our training set, to attach leaving groups to synthons, and to generate reactant SMILES. \subsection{Input Features} \label{app:input_feat} \paragraph{Atom Features} We use the following atom features: \begin{itemize} \item One-hot encoding of the atom symbol (65) \item One-hot encoding of the degree of the atom (10) \item Explicit valency of the atom (6) \item Implicit valency of the atom (6) \item Whether the atom is part of an aromatic ring (1) \end{itemize} \paragraph{Bond Features} We use the following bond features: \begin{itemize} \item One-hot encoding of bond type (4) \item Whether the bond is conjugated (1) \item Whether the bond is part of a ring (1) \end{itemize} \subsection{Retrosynthesis Benchmarks} \label{app:retro} The hidden layer dimension of the GRU-based message passing network is set to 300. We run $T = 10$ iterations of message passing in the encoder. All models are trained with the Adam optimizer and an initial learning rate of 0.001. Gradients are clipped to have a maximum norm of 20.0. \paragraph{Edit Prediction} The global dependency module comprises three convolutional layers with 600, 300 and 150 filters, respectively, and a kernel size of 5.
The atom and bond edit scoring networks have a hidden layer dimension of 300. The model is trained for 100 epochs, and the learning rate is reduced by a factor of 0.9 whenever the accuracy of predicted edits on the validation set plateaus. The edit prediction model has 2M parameters. \paragraph{Synthon Completion} The embedding dimension of leaving groups is set to 200, and graph representations are projected to the embedding dimension with a learnable projection matrix. The classifier over leaving groups also has a hidden layer dimension of 300, and a dropout probability of 0.2. The synthon completion model has 0.8M parameters. \paragraph{Shared Encoder Model} Trainable parameters of the associated edit prediction and synthon completion modules have the same dimensions as above. We set $\lambda_e$ to 1.0 and $\lambda_s$ to 2.0. The model is trained for 100 epochs, and the learning rate is reduced by a factor of 0.9 whenever the accuracy of predicted edits and leaving groups on the validation set plateaus. The shared encoder model has 2.3M parameters. \subsection{Rare Reactions} \label{app:rare_reactions} To prepare the rare reaction subset of USPTO-50k, we first extract templates from the USPTO-50k training set using the template extraction algorithm in \citep{coley2017prediction}. For this work, we assume a rare template is one which occurs in the training set \emph{at most} 10 times. Applying this definition to the extracted templates results in a rare reaction dataset with 3889 training, 504 development and 512 test reactions.
\section{Conclusion}
Previous methods for single-step retrosynthesis either restrict prediction to a template set, are insensitive to molecular graph structure, or generate molecules from scratch. We address these shortcomings by introducing a graph-based template-free model inspired by a chemist's workflow. Given a target molecule, we first identify synthetic building blocks (\emph{synthons}), which are then realized into valid reactants, thus avoiding molecule generation from scratch. Our model outperforms previous methods by significant margins on the benchmark dataset and on a rare reaction subset of the same dataset. Future work aims to extend the model to realize a single reactant from multiple synthons, and to develop pretraining strategies specific to reaction chemistry to improve rare reaction performance.
\section{Experiments}
\label{sec:results}
Evaluating retrosynthesis models is challenging, as multiple sets of reactants can be generated from the same product through a combination of different edits and/or leaving groups. To deal with this, previous works \citep{coley2017computer, dai2019retrosynthesis} evaluate the ability of the model to recover the retrosynthetic strategies recorded in the dataset. However, this evaluation does not directly measure the generalization to infrequent reactions. \citet{chen2020learning} further evaluate on a rare reaction subset of the standard dataset, and note that model performance drops by almost 25\% on such reactions compared to the overall performance. Our evaluation quantifies two scenarios: \begin{enumerate*}[label=(\roman*.)] \item overall performance on the standard dataset, and \item generalization ability via rare reaction performance \end{enumerate*}. We also evaluate the performance of the edit prediction and synthon completion modules to gain more insight into the working of our model.
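Since both scenarios ultimately reduce to comparing canonical SMILES of suggested and recorded reactants (see the Evaluation paragraph below), a minimal RDKit-based sketch of this comparison is given here; the helper names are ours, and this is not necessarily the exact script behind the reported numbers:
\begin{verbatim}
from rdkit import Chem

def canonicalize(smiles):
    """Canonical SMILES with atom-map numbers stripped (atom-mapping
    is excluded from the comparison); returns None if parsing fails.
    MolToSmiles keeps stereochemistry by default."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    for atom in mol.GetAtoms():
        atom.SetAtomMapNum(0)
    # Sort '.'-separated fragments so reactant ordering cannot matter.
    return '.'.join(sorted(Chem.MolToSmiles(mol).split('.')))

def top_n_accuracy(ranked_predictions, ground_truth, n):
    """ranked_predictions: one ranked list of reactant SMILES
    per test product."""
    hits = sum(
        any(canonicalize(cand) == canonicalize(truth)
            for cand in ranked[:n])
        for ranked, truth in zip(ranked_predictions, ground_truth))
    return hits / len(ground_truth)
\end{verbatim}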
\paragraph{Data} We use the benchmark dataset USPTO-50k \citep{schneider2016s} for all our experiments. The dataset contains $50,000$ atom-mapped reactions across 10 reaction classes. Following prior work \citep{coley2017computer, dai2019retrosynthesis}, we divide the dataset randomly into an 80:10:10 split for training, validation and testing. For evaluating generalization, we use a subset of rare reactions from the same dataset, consisting of 512 reactions. Details on the construction of this subset can be found in Appendix~\ref{app:rare_reactions}. \paragraph{Evaluation} We use the top-$n$ accuracy ($n = 1, 3, 5, 10, 50$) as our evaluation metric, defined as the fraction of examples where the recorded reactants are suggested by the model with rank $\leq n$. Following prior work \citep{coley2017computer, zheng2019predicting, dai2019retrosynthesis}, we compute the accuracy by comparing the canonical SMILES of predicted reactants to the ground truth. Atom-mapping is excluded from this comparison, but stereochemistry, which describes the relative orientation of atoms in the molecule, is retained. The evaluation is carried out for both settings, with the reaction class known or unknown. \paragraph{Baselines} For evaluating overall performance, we compare \textsc{GraphRetro} to six baselines---three template-based and three template-free models. For evaluating rare reaction performance, we compare only against \textsc{GLN}, which is the state-of-the-art method. The baseline methods include: \begin{itemize}[wide=0pt, leftmargin=15pt, label=] \item \emph{Template-Based}: \textsc{Retrosim} \citep{coley2017computer} ranks templates for a given target molecule by computing molecular similarities to precedent reactions. \textsc{NeuralSym} \citep{segler2017neural} trains a model to rank templates given a target molecule. The state-of-the-art method \textsc{GLN} \citep{dai2019retrosynthesis} models the joint distribution of templates and reactants in a hierarchical fashion using logic variables. \item \emph{Template-Free}: \textsc{SCROP} \citep{zheng2019predicting} and \textsc{LV-Transformer} \citep{chen2020learning} use the Transformer architecture \citep{vaswani2017attention} to output reactant SMILES given a product SMILES. To improve the validity of its suggestions, \textsc{SCROP} includes a second Transformer that functions as a syntax corrector. \textsc{LV-Transformer} uses a latent variable mixture model to improve the diversity of suggestions. \textsc{G2Gs} \citep{shi2020graph} is a contemporary graph-based retrosynthesis prediction approach that first identifies synthons and then expands them into valid reactants using a variational graph translation module. \end{itemize} \subsection{Overall Performance} \label{subsec:retro_benchmark} \paragraph{Results} As shown in Table~\ref{tab:overall}, when the reaction class is unknown, \textsc{GraphRetro}'s \emph{shared} and \emph{separate} configurations outperform \textsc{GLN} by 11.7\% and 11.3\% in top-$1$ accuracy, achieving state-of-the-art performance. Similar improvements are achieved for larger $n$, with \textasciitilde84\% of the true precursors in the top-$5$ choices. Barring \textsc{GLN}, \textsc{GraphRetro}'s top-$5$ and top-$10$ accuracies are comparable to the top-$50$ accuracies of template-based methods, especially in the unknown reaction class setting. When the reaction class is known, \textsc{Retrosim} and \textsc{GLN} restrict their template sets to those corresponding to the reaction class, thus improving performance.
Both of our model configurations outperform the other methods up to $n = 5$.
\begin{table}[t] \caption[Overall Performance]{\textbf{Overall Performance}\footnotemark. (sh) and (se) denote \emph{shared} and \emph{separate} training.} \label{tab:overall} \vspace{5pt} \centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lcccccccccccc} \toprule \multirow{3}{*}{\textbf{Model}} & & \multicolumn{11}{c}{\textbf{Top-$n$ Accuracy (\%)}} \\ \cmidrule{3-13} & & \multicolumn{5}{c}{\textbf{Reaction class known}} & & \multicolumn{5}{c}{\textbf{Reaction class unknown}}\\ \cmidrule{3-7}\cmidrule{9-13} & $n=$ & {1} & {3} & {5} & {10} & {50} & & {1} & {3} & {5} & {10} & {50} \\ \midrule \multicolumn{13}{l}{\textbf{Template-Based}} \\ \midrule \textsc{Retrosim} \citep{coley2017computer} && 52.9 & 73.8 & 81.2 & 88.1 & 92.9 & & 37.3 & 54.7 & 63.3 & 74.1 & 85.3 \\ \textsc{NeuralSym} \citep{segler2017neural} && 55.3 & 76.0 & 81.4 & 85.1 & 86.9 & & 44.4 & 65.3 & 72.4 & 78.9 & 83.1 \\ \textsc{GLN} \citep{dai2019retrosynthesis} && 64.2 & 79.1 & 85.2 & \cellcolor{blue!15}90.0 & \cellcolor{blue!15}93.2 & & 52.5 & 69.0 & 75.6 & 83.7 & \cellcolor{blue!15}92.4\\ \midrule \multicolumn{13}{l}{\textbf{Template-Free}} \\ \midrule \textsc{SCROP} \citep{zheng2019predicting} && 59.0 & 74.8 & 78.1 & 81.1 & - & & 43.7 & 60.0 & 65.2 & 68.7 & -\\ \textsc{LV-Transformer} \citep{chen2020learning} && - & - & - & - & - & & 40.5 & 65.1 & 72.8 & 79.4 & - \\ \textsc{G2Gs} \citep{shi2020graph} && 61.0 & 81.3 & 86.0 & 88.7 & - && 48.9 & 67.6 & 72.5 & 75.5 & -\\ $\textsc{GraphRetro}$ (sh) && 67.2 & 81.7 & 84.6 & 87.0 & 87.2 & & \cellcolor{blue!15}64.2 & 78.6 & 81.4 & 83.1 & 84.1 \\ $\textsc{GraphRetro}$ (se) && \cellcolor{blue!15}67.8 & \cellcolor{blue!15}82.7 & \cellcolor{blue!15}85.3 & 87.0 & 87.9 & & 63.8 & \cellcolor{blue!15}80.5 & \cellcolor{blue!15}84.1 & \cellcolor{blue!15}85.9 & 87.2 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table}
\paragraph{Parameter Sharing} The benefits of sharing the encoder are indicated by the comparable performances of the \emph{shared} and \emph{separate} configurations. In the \emph{shared} configuration, the synthon completion module is trained only on the subset of leaving groups that correspond to single-edit examples. In the \emph{separate} configuration, the synthon completion module is trained on all leaving groups. For reference, the \emph{separate} configuration with the synthon completion module trained on the same subset as the \emph{shared} configuration achieves a 62.1\% top-1 accuracy in the unknown reaction class setting. \subsection{Rare Reactions} \label{subsec:rare-reactions} From Table~\ref{tab:rare_reactions}, we note that the performance of all methods drops significantly on rare reactions. Despite the low template frequency, \textsc{GLN}'s performance can be explained by its hierarchical model design, which first ranks precomputed reaction centers, allowing the model to focus on relevant parts of the molecule. Instead of precomputing centers, our model learns to identify the correct edit (and consequently the reaction centers), thus improving the top-$1$ accuracy by 4\% over \textsc{GLN}. \footnotetext{Results for \textsc{NeuralSym} are taken from \citet{dai2019retrosynthesis}; for the other baselines, we use their reported results.} \begin{table}[H] \caption{\textbf{Rare Reaction Performance}.
(sh) and (se) denote \emph{shared} and \emph{separate} configurations.} \label{tab:rare_reactions} \vspace{5pt} \centering \begin{tabular}{lccccc} \toprule \multirow{2}{*}{\textbf{Model}} & \multicolumn{5}{c}{\textbf{Top-$n$ Accuracy (\%)}} \\\cmidrule{2-6} & {1} & {3} & {5} & {10} & {50} \\ \midrule \textsc{GLN} & 27.5 & 35.7 & 39.3 & 46.1 & \cellcolor{blue!15}55.9\\ ${\textsc{GraphRetro}}$ (sh) & \cellcolor{blue!15}31.5 & \cellcolor{blue!15}40.2 & 43.6 & 45.3 & 48.6\\ $\textsc{GraphRetro}$ (se) & 30.1 & \cellcolor{blue!15}40.2 & \cellcolor{blue!15}45.1 & \cellcolor{blue!15}48.2 & 52.2\\ \bottomrule \end{tabular} \end{table} \subsection{Individual Module Performance} \label{subsec:ablation_studies} To gain more insight into the working of \textsc{GraphRetro}, we evaluate the top-$n$ accuracy ($n = 1, 2, 3, 5$) of the edit prediction and synthon completion modules, with results shown in Table~\ref{tab:ind_perform}. \paragraph{Edit Prediction} For the edit prediction module, we compare the true edits to the top-$n$ edits predicted by the model. When the reaction class is known, edits are predicted with a top-5 accuracy of 94\%, close to the theoretical upper bound of 95.1\% (the percentage of single-edit examples). When the reaction class is unknown, the model achieves a top-1 accuracy of almost 90\%. Identifying the correct edit is necessary for generating the true reactants. \paragraph{Synthon Completion} For the synthon completion module, we first apply the true edits to obtain synthons, and compare the true leaving groups to the top-$n$ leaving groups predicted by the model. The synthon completion module is able to identify \textasciitilde97\% (close to its upper bound of 99.7\%) of the true leaving groups in its top-$5$ choices. \begin{table}[H] \caption{\textbf{Performance Study} of edit prediction and synthon completion modules} \label{tab:ind_perform} \vspace{5pt} \centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lcccccccccc} \toprule \multirow{2}{*}{\textbf{Setting}} & & \multicolumn{9}{c}{\textbf{Top-$n$ Accuracy (\%)}} \\ \cmidrule{3-11} & & \multicolumn{4}{c}{\textbf{Reaction class known}} & & \multicolumn{4}{c}{\textbf{Reaction class unknown}}\\ \cmidrule{3-6}\cmidrule{8-11} &$n = $ &{1} & {2} & {3} & {5} & & {1} & {2} & {3} & {5}\\ \midrule Edit Prediction && 91.4 & 93.1 & 93.6 & 94.0 && 89.8 & 92.1 & 92.6 & 93.2 \\ Synthon Completion && 77.1 & 89.2 & 93.6 & 96.9 && 73.9 & 87.0 & 92.6 & 96.6\\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \raggedbottom \subsection{Example Predictions} \label{subsec:example_predictions} In Figure~\ref{fig:example-predictions}, we visualize the model predictions and the ground truth for three cases. Figure~\ref{fig:example-predictions}a shows an example where the model identifies both the edits and leaving groups correctly. In Figure~\ref{fig:example-predictions}b, the correct edit is identified but the predicted leaving groups are incorrect. We hypothesize that this is because, in the training set, leaving groups attaching to the carbonyl carbon (C=O) are small (e.g. -OH, -NH$_2$, halides). The true leaving group in this example, however, is large. The model is unable to reason about this and predicts the small leaving group -I. In Figure~\ref{fig:example-predictions}c, the model identifies the edit, and consequently the leaving group, incorrectly. This highlights a limitation of our model: if the edit is predicted incorrectly, the model cannot suggest the true precursors.
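To make the first of these steps concrete, a bond-breaking edit can be applied with standard RDKit operations. The sketch below (the helper name is ours) covers only bond deletion, ignoring the bond-type and hydrogen-count edits that a full implementation must also handle:
\begin{verbatim}
from rdkit import Chem

def synthons_from_bond_edit(product_smiles, u, v):
    """Break the (u, v) bond of a product and return synthon SMILES.

    FragmentOnBonds caps the broken bond with dummy atoms ('*'),
    which play the role of the marked attachment points used
    during synthon completion."""
    mol = Chem.MolFromSmiles(product_smiles)
    bond = mol.GetBondBetweenAtoms(u, v)
    fragmented = Chem.FragmentOnBonds(mol, [bond.GetIdx()],
                                      addDummies=True)
    return [Chem.MolToSmiles(frag)
            for frag in Chem.GetMolFrags(fragmented, asMols=True)]
\end{verbatim}
Each dummy atom marks where a predicted leaving group would subsequently be attached, mirroring the special attachment symbol used during vocabulary construction.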
\begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{figures/discussion/examples.pdf} \caption[Example predictions]{\textbf{Example Predictions}. The true edit and incorrect edit (if any) are highlighted in green and red respectively. The true and predicted leaving groups are highlighted in blue. \textbf{a}. Correctly predicted example by the model. \textbf{b}. Correctly predicted edit but incorrectly predicted leaving groups. \textbf{c}. Incorrectly predicted edit and leaving group.} \label{fig:example-predictions} \end{figure}
\section*{Broader Impact}
\label{sec:broad-impact}
Our research advances template-free models for computer-aided retrosynthesis by incorporating a chemist's workflow into the design. In addition to achieving improved accuracy, our formulation has the potential to make retrosynthetic workflows more flexible and interactive by allowing users to supply their own edits and bypass the edit prediction module during inference. These tools are intended to reduce the time investment required to physically produce computationally-designed molecules (e.g. drug candidates) and make the process of molecular discovery cheaper and faster. Long-term, they may enable the automation of routine chemistry using common organic transformations, which would encourage a reallocation of human chemist creativity to synthesis tasks for which models cannot make accurate predictions (e.g. the synthesis of more complex molecules, stereoselectivity, and the discovery of new reaction types).
\subsection{Edit Prediction}
\label{subsec:edit-pred}
For a given retrosynthesis pair $R = (\mathcal{G}_p, \mathcal{G}_r)$, we predict an edit score only for existing bonds and atoms, instead of every atom pair as in \citep{coley2019graph, jin2017predicting}. This choice is motivated by the low frequency (\textasciitilde0.1\%) of new bond formations in the training set examples. Coupled with the sparsity of molecular graphs\footnote{$M \sim N$, for a molecule with $N$ atoms and $M$ bonds}, this reduces the prediction complexity from $O(N^2)$ to $O(N)$ for a product with $N$ atoms. Our edit prediction model has variants tailored to single and multiple edit prediction. Since 95\% of the training set consists of single-edit examples, the remainder of this section describes the setup for single edit prediction. A detailed description of our multiple edit prediction model can be found in Appendix~\ref{app:multiple_pred}. Each bond $(u, v)$ in $\mathcal{G}_p$ is associated with a label $y_{uvk} \in \{0, 1\}$ indicating whether its bond type $k$ has changed from the products to the reactants. Each atom $u$ is associated with a label $y_u \in \{0, 1\}$ indicating a change in hydrogen count. We predict edit scores using representations learnt by a graph encoder. \paragraph{Graph Encoder} To obtain atom representations, we use a variant of the {\em message passing network} (MPN) described in \citep{gilmer2017neural}. Each atom $u$ has a feature vector $\mathbf{x}_u$ indicating its atom type, degree and other properties. Each bond $(u, v)$ has a feature vector $\mathbf{x}_{uv}$ indicating its aromaticity, bond type and ring membership. For simplicity, we denote the encoding process by $\mathrm{MPN}(\cdot)$ and describe architectural details in Appendix~\ref{app:message-passing}.
The MPN computes atom representations $\{\mathbf{c}_u | u \in \mathcal{G}\}$ via \begin{equation} \label{eqn:mpn} \{\mathbf{c}_u\} = \mathrm{MPN}(\mathcal{G}, \{\mathbf{x}_u\}, \{\mathbf{x}_{uv}\}_{v \in \mathcal{N}(u)}), \end{equation} where $\mathcal{N}(u)$ denotes the neighbors of atom $u$. The graph representation $\mathbf{c}_{\mathcal{G}}$ is an aggregation of atom representations, i.e. $\mathbf{c}_{\mathcal{G}} = \sum_{u \in \mathcal{V}}{\mathbf{c}_u}$. When $\mathcal{G}$ has connected components $\{\mathcal{G}_i\}$, we get a set of graph representations $\{\mathbf{c}_{\mathcal{G}_i}\}$. For a bond $(u, v)$, we define its representation $\mathbf{c}_{uv} = (\mathbf{c}_u \hspace{2pt}|| \hspace{2pt} \mathbf{c}_v)$ as the concatenation of atom representations $\mathbf{c}_u$ and $\mathbf{c}_v$, where $||$ refers to concatenation. Using these representations to directly predict edit scores constrains predictions to the neighborhood the messages were aggregated from. We include global dependencies in the prediction input by using convolutional layers, which have been used successfully to extract globally occurring features using locally operating filters \citep{krizhevsky2012imagenet, conv}. We apply $P$ layers of convolutions to atom and bond representations to obtain embeddings $\mathbf{c}_u^P$ and $\mathbf{c}_{uv}^P$. These representations are then used to predict atom and bond edit scores using corresponding neural networks, \begin{align} \label{eqn:edit-atom-score} s_u & = \mathbf{u_a}^T \mathrm{\tau}(\mathbf{W_ac}_u^P + b) \\ \label{eqn:edit-bond-score} s_{uvk} & = \mathbf{u_k}^T \mathrm{\tau}(\mathbf{W_kc}_{uv}^P + b_k), \end{align} where $\mathrm{\tau}(\cdot)$ is the ReLU activation function. \paragraph{Training} \looseness -1 We train by minimizing the cross-entropy loss over possible bond and atom edits \begin{equation} \mathcal{L}_e = -\sum_{(\mathcal{G}_p, \hspace{1pt} E)} \left({\sum_{((u, v), k) \in E}{y_{uvk} \mathrm{\log}(s_{uvk})} + \sum_{u \in E}{y_u \mathrm{\log}(s_u)}}\right). \end{equation} The cross-entropy loss enforces the model to learn a distribution over possible edits instead of reasoning about each edit independently, as with the binary cross-entropy loss used in \citep{jin2017predicting, coley2019graph}.
\section{Model Design}\label{sec:graph-retro}
Our approach leverages the property that the graph topology is largely unaltered from products to reactants. To achieve this, we first derive from the product suitable building blocks called {\em synthons}, and then complete them into valid reactants by adding specific functionalities called {\em leaving groups}. These derivations, called {\em edits}, are characterized by modifications to bonds or hydrogen counts on atoms. We first train a neural network to predict a score for possible edits (Section~\ref{subsec:edit-pred}). The edit with the highest score is then applied to the product to obtain synthons. Since the number of unique leaving groups is small, we model leaving group selection as a classification problem over a precomputed vocabulary (Section~\ref{subsec:syn-compl}). To produce candidate reactants, we attach the predicted leaving group to the corresponding synthon through chemically constrained rules. The overall process is outlined in Figure~\ref{fig:overview}. Before describing the two modules, we introduce relevant preliminaries that set the background for the remainder of the paper.
\paragraph{Retrosynthesis Prediction} A retrosynthesis pair $R$ is described by a pair of molecular graphs $(\mathcal{G}_p, \mathcal{G}_r)$, where $\mathcal{G}_p$ are the products and $\mathcal{G}_r$ the reactants. A molecular graph is described as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with atoms $\mathcal{V}$ as nodes and bonds $\mathcal{E}$ as edges. Prior work has focused on the single product case, while reactants can have multiple connected components, i.e. $\mathcal{G}_r = \{\mathcal{G}_{r_c}\}_{c=1}^{C}$. Retrosynthesis pairs are \emph{atom-mapped} so that each product atom has a unique corresponding reactant atom. The retrosynthesis task, then, is to infer $\{\mathcal{G}_{r_c}\}_{c=1}^{C}$ given $\mathcal{G}_{p}$. \paragraph{Edits} Edits consist of \begin{enumerate*}[label=(\roman*.)] \item atom pairs $\{(a_i, a_j)\}$ where the bond type changes from products to reactants, and \item atoms $\{a_i\}$ where the number of hydrogens attached to the atom changes from products to reactants \end{enumerate*}. We denote the set of edits by $E$. Since retrosynthesis pairs in the training set are atom-mapped, edits can be automatically identified by comparing the atoms and atom pairs in the product to their corresponding reactant counterparts. \paragraph{Synthons and Leaving Groups} Applying the edits $E$ to the product $\mathcal{G}_p$ results in incomplete molecules called \emph{synthons}. Synthons are analogous to rationales or building blocks, which are expanded into valid reactants by adding specific functionalities called \emph{leaving groups} that are responsible for their reactivity. We denote synthons by $\mathcal{G}_s$ and leaving groups by $\mathcal{G}_l$. We further assume that synthons and leaving groups have the same number of connected components as the reactants, i.e. $\mathcal{G}_s = \{\mathcal{G}_{s_c}\}_{c=1}^{C}$ and $\mathcal{G}_l = \{\mathcal{G}_{l_c}\}_{c=1}^{C}$. This assumption holds for 99.97\% of the reactions in the training set. Formally, our model generates reactants by first predicting the set of edits $E$ that transform $\mathcal{G}_p$ into $\mathcal{G}_s$, followed by predicting a leaving group $\mathcal{G}_{l_c}$ to attach to each synthon $\mathcal{G}_{s_c}$. The model is defined as \begin{equation} \label{eqn:overview} P(\mathcal{G}_r | \mathcal{G}_p) = \sum_{E,\mathcal{G}_l} P(E | \mathcal{G}_p) P(\mathcal{G}_l |\mathcal{G}_p, \mathcal{G}_s), \end{equation} where $\mathcal{G}_s, \mathcal{G}_r$ are deterministic given $E$, $\mathcal{G}_l$, and $\mathcal{G}_p$. \subsection{Synthon Completion} \label{subsec:syn-compl} Synthons are completed into valid reactants by adding specific functionalities called {\em leaving groups}. This involves two complementary tasks: \begin{enumerate*}[label=(\roman*.)] \item selecting the appropriate leaving group, and \item attaching the leaving group to the synthon \end{enumerate*}. As ground truth leaving groups are not directly provided, we extract the leaving groups and construct a vocabulary $\mathcal{X}$ of unique leaving groups during preprocessing. The vocabulary has a limited size ($|\mathcal{X}| = 170$ for a standard dataset with $50,000$ examples). We thus formulate leaving group selection as a classification problem over $\mathcal{X}$. \paragraph{Vocabulary Construction} Before constructing the vocabulary, we align the connected components of the synthon and reactant graphs by comparing atom-mapping overlaps.
Using aligned pairs $\mathcal{G}_{s_c} = \left(\mathcal{V}_{s_c}, \mathcal{E}_{s_c}\right)$ and $\mathcal{G}_{r_c} = \left(\mathcal{V}_{r_c}, \mathcal{E}_{r_c}\right)$ as input, the leaving group vocabulary $\mathcal{X}$ is constructed by extracting subgraphs $\mathcal{G}_{l_c} = \left(\mathcal{V}_{l_c}, \mathcal{E}_{l_c}\right)$ such that $\mathcal{V}_{l_c} = \mathcal{V}_{r_c} \setminus \mathcal{V}_{s_c}$. Atoms $\{a_i\}$ in the leaving groups that attach to synthons are marked with a special symbol. We also add three tokens to $\mathcal{X}$, namely $\text{START}$, which indicates the start of synthon completion, $\text{END}$, which indicates that there is no leaving group to add, and $\text{PAD}$, which is used to handle variable numbers of synthon components in a minibatch. \paragraph{Leaving Group Selection} Treating each $x_i \in \mathcal{X}$ as a molecular subgraph, we learn representations $\mathbf{e}_{x_i}$ using the $\mathrm{MPN}(\cdot)$. We also use the same $\mathrm{MPN}(\cdot)$ to learn the product graph representation $\mathbf{c}_{\mathcal{G}_p}$ and synthon representations $\{\mathbf{c}_{\mathcal{G}_{s_c}}\}_{c=1}^{C}$, where $C$ is the number of connected components. For each step $c \leq C$, we compute leaving group probabilities by combining the product representation $\mathbf{c}_{\mathcal{G}_p}$, the synthon component representation $\mathbf{c}_{\mathcal{G}_{s_c}}$ and the representation $\mathbf{e}_{l_{c-1}}$ of the leaving group selected in the previous step, via a single-layer neural network and a $\mathrm{softmax}$ function \begin{equation}\label{eqn:ind-model} \hat{q}_{l_c} = \mathrm{softmax}\left(\mathbf{U} \mathrm{\tau}\left(\mathbf{W_1} \mathbf{c}_{\mathcal{G}_p} + \mathbf{W_2} \mathbf{c}_{\mathcal{G}_{s_c}} + \mathbf{W_3} \mathbf{e}_{l_{c-1}}\right)\right), \end{equation} where $\hat{q}_{l_c}$ is the distribution learnt over $\mathcal{X}$. Using the representation of the previous leaving group $\mathbf{e}_{l_{c-1}}$ allows the model to understand {\em combinations} of leaving groups that generate the desired product from the reactants. We also include the product representation $\mathbf{c}_{\mathcal{G}_p}$, as the synthon graphs are derived from the product graph. \paragraph{Training} For step $c$, given the one-hot encoding of the true leaving group $q_{l_c}$, we minimize the cross-entropy loss \begin{equation} \mathcal{L}_s = \sum_{c=1}^{C}{\mathcal{L}(\hat{q}_{l_c}, q_{l_c})}. \end{equation} Training utilizes teacher-forcing \citep{williams1989learning} so that the model makes predictions given correct histories. During inference, at every step, we use the representation of the leaving group with the highest predicted probability from the previous step. \paragraph{Leaving Group Attachment} Attaching leaving groups to synthons is a deterministic process and is not learnt during training. The task involves identifying the types of bonds to add between the attaching atoms in the leaving group (marked during vocabulary construction) and the atom(s) participating in the edit. These bonds can be inferred by applying the {\em valency} constraint, which determines the maximum number of neighbors for each atom. Given synthons and leaving groups, the attachment process has 100\% accuracy. The detailed procedure is described in Appendix~\ref{app:leaving_group_attach}. \subsection{Overall Training and Inference} The two modules can either be trained separately (referred to as \emph{separate}) or jointly by sharing the encoder (\emph{shared}).
Sharing the encoder between the edit prediction and synthon completion modules allows us to train the model end-to-end. The shared training minimizes the loss $\mathcal{L} = \lambda_e \mathcal{L}_e + \lambda_s \mathcal{L}_s$, where $\lambda_e$ and $\lambda_s$ weigh the influence of each term on the final loss. Inference is performed using beam search with a log-likelihood scoring function. For a beam width $n$, we select the $n$ edits with the highest scores and apply them to the product to obtain $n$ synthons, where each synthon can consist of multiple connected components. The synthons form the nodes for beam search. Each node maintains a cumulative score by aggregating the log-likelihoods of the edit and the predicted leaving groups. Leaving group inference starts with a connected component for each synthon, and selects the $n$ leaving groups with the highest log-likelihoods. From the $n^2$ possibilities, we select the $n$ nodes with the highest cumulative scores. This process is repeated until all nodes have a leaving group predicted for each synthon component.
\section{Introduction}
\label{sec:intro}
{\em Retrosynthesis prediction}, first formalized by E.J. Corey \citep{corey1991logic}, is a fundamental problem in organic synthesis that attempts to identify a series of chemical transformations for synthesizing a target molecule. In the single-step formulation, the task is to identify a set of reactant molecules given a target. The problem is challenging as the space of possible transformations is vast, and requires the skill of experienced chemists. Since the 1960s, retrosynthesis prediction has seen assistance from modern computing techniques \citep{corey1969computer}, with a recent surge in machine learning methods \citep{chen2020learning, coley2017computer, dai2019retrosynthesis, zheng2019predicting}. Existing machine learning methods for retrosynthesis prediction fall into template-based \citep{coley2017computer, dai2019retrosynthesis, segler2017neural} and template-free approaches \citep{chen2020learning, zheng2019predicting}. Template-based methods match the target molecule against a large set of templates, which are molecular subgraph patterns that highlight changes during a chemical reaction. Despite their interpretability, these methods suffer from poor generalization to new and rare reactions. Template-free methods bypass templates by learning a direct mapping from the SMILES representations \citep{weininger1988smiles} of the product to reactants. Despite their greater generalization potential, these methods generate reactant SMILES character by character, failing to utilize the largely conserved substructures in a chemical reaction. A chemical reaction satisfies two \emph{fundamental} properties: \begin{enumerate*}[label=(\roman*.)] \item the product atoms are always a subset of the reactant atoms\footnote{ignoring impurities}, and \item the molecular graph topology is largely unaltered from products to reactants \end{enumerate*}. For example, in the standard retrosynthesis dataset, only 6.3\% of the atoms in the product undergo any change in connectivity. Our approach is motivated by the hypothesis that utilizing subgraphs within the product to generate reactants can significantly improve the performance and generalization ability of retrosynthesis models.
Our template-free approach called \textsc{GraphRetro} generates reactants in two stages: \begin{enumerate*}[label=(\roman*.)] \item deriving intermediate molecules called synthons from the product molecule, and \item expanding synthons into reactants by adding specific functionalities called leaving groups \end{enumerate*}. For deriving synthons, we utilize the rarity of new bond formations to predict a score for existing bonds and atoms instead of each atom pair. This reduces the prediction complexity from $O(N^2)$ to $O(N)$. Furthermore, we incorporate global dependencies into our prediction input by using convolutional layers. For completing synthons into reactants, we select leaving groups from a precomputed vocabulary. The vocabulary is constructed during preprocessing by extracting subgraphs that differ between a synthon and the corresponding reactant, and has a 99.7\% coverage on the test set. We evaluate \textsc{GraphRetro} on the benchmark USPTO-50k dataset and a subset of the same dataset that consists of rare reactions. On the USPTO-50k dataset, \textsc{GraphRetro} achieves 64.2\% top-1 accuracy without the knowledge of reaction class, outperforming the state-of-the-art method by a margin of 11.7\%. On the rare reaction subset, \textsc{GraphRetro} achieves 31.5\% top-1 accuracy, with a 4\% margin over the state-of-the-art method.
\begin{figure}[t] \centering \includegraphics[width=1.02\textwidth]{figures/introduction/overview.pdf} \caption[Proposed retrosynthesis workflow]{\textbf{Overview of Our Approach}. \textbf{a}. \textbf{Edit Prediction}. We train a model to learn a distribution over possible graph edits. In this case, the correct edit corresponds to breaking the bond marked in red. Applying this edit produces two synthons. \textbf{b}. \textbf{Synthon Completion}. Another model is trained to pick candidate leaving groups (blue) for each synthon from a discrete vocabulary, which are then attached to produce the final reactants.} \label{fig:overview} \end{figure}
\section{Experiments}
\label{sec:experiments}
\paragraph{Data} We evaluate the \textsc{GraphRetro} model on the widely used benchmark dataset USPTO-50k \citep{schneider2016s}, which contains 50k atom-mapped reactions across 10 reaction classes. Following \citep{liu2017retrosynthetic, coley2017prediction}, we partition the dataset into an 80\%/10\%/10\% train/development/test split. For the transfer learning experiments, we extract a rare reaction subset of USPTO-50k (see Appendix~\ref{app:rare_reactions}), with \textasciitilde3900 training reactions and \textasciitilde500 reactions each in the development and test sets. \paragraph{Baseline Models} We compare \textsc{GraphRetro} to five baselines. \textsc{LV-Transformer} \citep{chen2020learning} extends the Transformer architecture with a latent variable to improve the diversity of suggested reactants. \textsc{SCROP} \citep{zheng2019predicting} uses an additional Transformer to correct the syntax of candidates generated from the first one. \textsc{Retrosim} \citep{coley2017computer} uses molecular similarities to precedent reactions for template ranking. \textsc{NeuralSym} \citep{segler2017neural} learns a conditional distribution over templates given a molecule. The state-of-the-art method, \textsc{GLN} \citep{dai2019retrosynthesis}, models the joint distribution of templates and reactants using logic variables. Results of \textsc{NeuralSym} are taken from \citet{dai2019retrosynthesis}.
\paragraph{Evaluation Metrics} Similar to prior work \citep{dai2019retrosynthesis, liu2017retrosynthetic}, we use the top-$n$ exact match accuracy as our evaluation metric, for $n = 1, 3, 5, 10, 20 \text{ and } 50$. The accuracy is computed by matching canonical SMILES strings of predicted reactants with those of the ground truth reactants. For the benchmarks, we evaluate the top-$n$ accuracy with and without the knowledge of reaction class. For the former, the reaction class is added as an additional one-hot vector to the atom features. For transfer learning, we evaluate the top-$n$ accuracy for $n = 1, 3, 5 \text{ and } 10$, for the setting where the reaction class is unknown. \subsection{Overall Performance} \label{subsec:pred-perform} As shown in Table~\ref{tab:overall-wip}, when the reaction class is unknown, our \emph{shared} and \emph{separate} configurations outperform \textsc{GLN} by 11.6\% and 11.2\% in top-1 accuracy, respectively. Similar improvements are achieved for larger $n$, with \textasciitilde85\% of the true precursors in the top-5 choices. Barring \textsc{GLN}, \textsc{GraphRetro}'s top-5 and top-10 accuracies are higher than the top-50 accuracy of the other baseline models. When the reaction class is known, \textsc{Retrosim} and \textsc{GLN} restrict their template sets to those corresponding to the reaction class, thus improving performance. Both of our model configurations outperform the other models up to $n = 5$.
\begin{table}[t] \caption{Top-$n$ exact match accuracy. (S) denotes Single-Edit and (M) denotes Multiple Edit} \label{tab:overall-wip} \vspace{5pt} \centering \begin{tabular}{lcccccc} \toprule \multirow{2}{*}{\textbf{Model}} & \multicolumn{6}{c}{\textbf{Top-$n$ Accuracy (\%)}} \\\cline{2-7} & {1} & {3} & {5} & {10} & {20} & {50} \\ \midrule \multicolumn{7}{c}{Reaction class known} \\ \midrule \textsc{Retrosim} \citep{coley2017computer}& 52.9 & 73.8 & 81.2 & 88.1 & 91.8 & 92.9 \\ \textsc{NeuralSym} \citep{segler2017neural}& 55.3 & 76 & 81.4 & 85.1 & 86.5 & 86.9 \\ \textsc{GLN} \citep{dai2019retrosynthesis} & 63.2 & 77.5 & 83.4 & \cellcolor{blue!15}89.1 & \cellcolor{blue!15}92.1 & \cellcolor{blue!15}93.2 \\ \textsc{SCROP} \citep{zheng2019predicting}& 59 & 74.8 & 78.1 & 81.1 & - & - \\ $\mathrm{\textsc{GraphRetro}}^*$ (\emph{shared}) & 67.2 & 82.5 & 85.7 & 87.2 & 87.5 & 87.2\\ $\mathrm{\textsc{GraphRetro}}^*$ (\emph{separate}) & \cellcolor{blue!15}67.7 & \cellcolor{blue!15}83.7 & \cellcolor{blue!15}86.6 & 87.9 & 88.1 & 87.9 \\ \midrule \multicolumn{7}{c}{Reaction class unknown} \\ \midrule \textsc{Retrosim} \citep{coley2017computer} & 37.3 & 54.7 & 63.3 & 74.1 & 82 & 85.3 \\ \textsc{NeuralSym} \citep{segler2017neural} & 44.4 & 65.3 & 72.4 & 78.9 & 82.2 & 83.1 \\ \textsc{GLN} \citep{dai2019retrosynthesis} & 52.6 & 68 & 75.1 & 83.1 & \cellcolor{blue!15}88.5 & \cellcolor{blue!15}92.1 \\ \textsc{SCROP} \citep{zheng2019predicting} & 43.7 & 60.0 & 65.2 & 68.7 & - & - \\ \textsc{LV-Transformer} \citep{chen2020learning} & 40.5 & 65.1 & 72.8 & 79.4 & - & - \\ $\mathrm{\textsc{GraphRetro}(S)}^*$ (\emph{shared}) & \cellcolor{blue!15}64.2 & 80 & 83 & 84.6 & 84.8 & 84.1 \\ $\mathrm{\textsc{GraphRetro}(S)}^*$ (\emph{separate}) & 63.8 & \cellcolor{blue!15}80.9 & \cellcolor{blue!15}84.5 & \cellcolor{blue!15}86.4 & 87.1 & 87.2 \\ \bottomrule \end{tabular} \end{table}
\subsection{Example Predictions (WIP)} \label{subsec:example-predictions} Figure~\ref{fig:correct-example} shows an example where the edit and leaving groups are identified correctly. In Figure~\ref{fig:incorrect-example}, the correct edit is identified but the predicted leaving groups are incorrect. In the training set, leaving groups attached to the carbonyl carbon (C=O) are small (e.g. -OH, -NH$_2$, halides). The true leaving group in this example, however, is large. \textsc{GraphRetro} is unable to reason about this and predicts the small leaving group -I. \begin{figure}[H] \centering \begin{subfigure}{0.5\columnwidth} \centering \includegraphics[width=0.75\linewidth]{figures/discussion/correct-nei-2.pdf} \label{fig:neia} \end{subfigure}% \begin{subfigure}{0.5\columnwidth} \centering \includegraphics[width=0.75\linewidth]{figures/discussion/correct-nei-1.pdf} \label{fig:neib} \end{subfigure} \caption[Example of successful edit predictions after using local neighborhood information]{Example of successful edit predictions (\emph{green}) after incorporating local neighborhood interactions. Incorrect edits predicted previously are shown in \emph{red}. The model identifies a multi-ring aromatic system (\emph{left}) and bond polarities (\emph{right}).} \label{fig:Edit prediction with NA} \end{figure} \begin{figure}[] \begin{subfigure}{\columnwidth} \centering \includegraphics[width=\linewidth]{figures/discussion/correct-example.pdf} \caption{Successful Prediction} \label{fig:correct-example} \end{subfigure} \begin{subfigure}{\columnwidth} \centering \includegraphics[width=0.9\linewidth]{figures/discussion/incorrect-example.pdf} \caption{Unsuccessful Prediction} \label{fig:incorrect-example} \end{subfigure} \caption[Example predictions]{Example predictions. The true edit and incorrect edit (if any) are highlighted in green and red respectively. The true and predicted leaving groups are highlighted in yellow.} \label{fig:example-predictions} \end{figure} \section{Introduction} \label{sec:intro} {\em Retrosynthesis prediction}, first formalized by E.J.
Corey \citep{corey1991logic}, is a fundamental problem in organic synthesis that attempts to identify a series of chemical transformations for synthesizing a target molecule. In the single-step formulation, the task is to identify a set of reactant molecules given a target. Beyond simple reactions, many practical tasks involving complex organic molecules are difficult even for expert chemists. As a result, substantial experimental exploration is needed to compensate for the deficiencies of analytical approaches. This has motivated interest in computer-assisted retrosynthesis~\citep{corey1969computer}, with a recent surge in machine learning methods \citep{chen2020learning, coley2017computer, dai2019retrosynthesis, zheng2019predicting}. On the computational side, the key challenge is exploring the combinatorial space of reactions that can yield the target molecule. Existing machine learning methods for retrosynthesis prediction fall into template-based \citep{coley2017computer, dai2019retrosynthesis, segler2017neural} and template-free approaches \citep{chen2020learning, zheng2019predicting}. Template-based methods match the target molecule against a large set of templates, which are molecular subgraph patterns that highlight changes during a chemical reaction. Despite their interpretability, these methods suffer from poor generalization to new and rare reactions. Template-free methods bypass templates by learning a direct mapping from the SMILES representation \citep{weininger1988smiles} of the product to that of the reactants. Despite their greater generalization potential, these methods generate reactant SMILES character by character, which increases generation complexity and in turn hurts their performance. In this paper, we propose a retrosynthesis model that provides the generalization capacity of template-free models without resorting to full generation. This is achieved by learning to maximally reuse and recombine large fragments from the target molecule. This idea is grounded in two fundamental properties of chemical reactions, independent of their complexity level: \begin{enumerate*}[label=(\roman*.)] \item the product atoms are always a subset of the reactant atoms\footnote{ignoring impurities}, and \item the molecular graph topology is largely unaltered from products to reactants \end{enumerate*}. For example, in the standard retrosynthesis dataset, only 6.3\% of the atoms in the product undergo any change in connectivity. Operating at the level of these preserved subgraphs greatly reduces the complexity of reactant generation, leading to improved empirical performance. Our template-free approach, called \textsc{GraphRetro}, generates reactants in two stages: \begin{enumerate*}[label=(\roman*.)] \item deriving intermediate molecules called synthons from the product molecule, and \item expanding synthons into reactants by adding specific functionalities called leaving groups \end{enumerate*}. For deriving synthons, we utilize the rarity of new bond formations to predict a score for existing bonds and atoms instead of each atom pair. This reduces the prediction complexity from $O(N^2)$ to $O(N)$. Furthermore, we incorporate global dependencies into our prediction input by using convolutional layers. For completing synthons into reactants, we select leaving groups from a precomputed vocabulary. The vocabulary is constructed during preprocessing by extracting subgraphs that differ between a synthon and the corresponding reactant, and has a 99.7\% coverage on the test set.
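The vocabulary construction amounts to a graph difference between each reactant and its synthon. The following is a minimal sketch under the assumption that reactant and synthon SMILES carry shared atom-map numbers; the function names are ours, and a full implementation would additionally record the attachment points needed to reattach each group.
\begin{verbatim}
from collections import Counter
from rdkit import Chem

def leaving_group(reactant_smiles, synthon_smiles):
    # Atoms absent from the synthon (by atom-map number) form the leaving group.
    reactant = Chem.MolFromSmiles(reactant_smiles)
    synthon = Chem.MolFromSmiles(synthon_smiles)
    synthon_maps = {a.GetAtomMapNum() for a in synthon.GetAtoms()}
    lg_atoms = [a.GetIdx() for a in reactant.GetAtoms()
                if a.GetAtomMapNum() not in synthon_maps]
    if not lg_atoms:
        return None  # synthon already equals the reactant
    return Chem.MolFragmentToSmiles(reactant, atomsToUse=lg_atoms)

def build_vocabulary(pairs):
    # pairs: iterable of (reactant_smiles, synthon_smiles) from the training set
    counts = Counter(leaving_group(r, s) for r, s in pairs)
    counts.pop(None, None)
    return counts
\end{verbatim}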
We evaluate \textsc{GraphRetro} on the benchmark USPTO-50k dataset and a subset of the same dataset that consists of rare reactions. On the USPTO-50k dataset, \textsc{GraphRetro} achieves 64.2\% top-1 accuracy without the knowledge of reaction class, outperforming the state-of-the-art method by a margin of 11.6\%. On the rare reaction subset, \textsc{GraphRetro} achieves 31.5\% top-1 accuracy, with a 4\% margin over the state-of-the-art method. \begin{figure}[t] \centering \includegraphics[width=1.02\textwidth]{figures/introduction/overview.pdf} \caption[Proposed retrosynthesis workflow]{\textbf{Overview of Our Approach}. \textbf{a}. \textbf{Edit Prediction}. We train a model to learn a distribution over possible graph edits. In this case, the correct edit corresponds to breaking the bond marked in red. Applying this edit produces two synthons. \textbf{b}. \textbf{Synthon Completion}. Another model is trained to pick candidate leaving groups (blue) for each synthon from a discrete vocabulary, which are then attached to produce the final reactants.} \label{fig:overview} \end{figure} \section{Rare Reactions} \label{sec:rare_reactions} Natural product synthesis presents a unique challenge for retrosynthesis through a combination of low-resource learning, complex target molecules and rare or unseen chemical transformations. Pretraining and transfer learning present attractive options for training deep models for this task. As datasets for natural product synthesis are not available, we construct a rare reaction subset from our training set (details in Appendix~\ref{app:transfer_learning}) to study transfer learning. We evaluate two approaches: \subsection{Direct Transfer} \label{subsec:pretrain_model} This is akin to the general transfer learning setup where the model is first trained on a larger dataset and then finetuned on the smaller dataset. We apply this strategy to both the \emph{shared} and \emph{separate} configurations of \textsc{GraphRetro}. \subsection{Contrastive Learning of Reactions (RCL)} \label{subsec:pretrain_encoder} We propose a contrastive learning strategy to pretrain the encoder so that it learns representations of products and reactants reflective of their reaction chemistry. Our method is based on the premise that the product, synthons and reactants of a given reaction lie closer to each other in the reaction space than arbitrary combinations from different reactions. The components of our setup include: \textbf{Graph encoder} $f(\cdot)$ that computes a graph representation given a graph $\mathcal{G}$. The setup is agnostic of the choice of graph encoder. We use the message passing network (MPN) as our encoder. \textbf{Reaction space projection} $g(\cdot)$ that maps the graph representation to the reaction space. We use an MLP with one hidden layer to obtain $\mathbf{h}_\mathcal{G} = \mathbf{W}_2 \, \tau\left(\mathbf{W}_1\mathbf{c}_{\mathcal{G}} + \mathbf{b}_1\right)$, where $\mathbf{c}_{\mathcal{G}}$ is the encoder representation of $\mathcal{G}$ and $\tau(\cdot)$ a nonlinear activation. This projection network is necessary as the ``closeness'' between products, synthons and reactants is enforced in the reaction space, not in the representation space. The contrastive loss, thus, is applied to the corresponding projections of product, synthon and reactant graph representations. For every reaction $R_c$, we define two contrastive prediction tasks: \begin{itemize} \item Given $\mathcal{G}_p$ and a set $\{\mathcal{G}_{p'}\}$ which contains a connected component $\mathcal{G}_{s_{c}}$, the \emph{product-synthon} prediction task aims to identify $\mathcal{G}_{s_{c}}$ in $\{\mathcal{G}_{{p}'}\}$.
This is repeated for every component $c \leq C$. \item Given a set $\{\mathcal{G}_{s'}\}$ and a positive pair of examples $(\mathcal{G}_{s_{c}}, \mathcal{G}_{r_{c}})$, the \emph{synthon-reactant} prediction task aims to identify $\mathcal{G}_{r_{c}}$ in $\{\mathcal{G}_{s'}\}$ given $\mathcal{G}_{s_{c}}$. \end{itemize} We sample a random minibatch of $N$ reactions, and use the other $N-1$ reactions as negative examples. Given vectors $\mathbf{h}_i$, $\mathbf{h}_j$, we define the similarity $s_{ij}$ as their dot product $s_{ij}= \mathbf{h}_i^{T} \mathbf{h}_j$. The similarity function accommodates both unnormalized and normalized vectors. \paragraph{Training} The model is trained to minimize the following loss function: \begin{align} \mathcal{L}\left(\mathcal{T}\right) & = \sum_{\mathcal{B} \in \mathcal{T}} \sum_{k \in \{p, s\}} \sum_{(\mathcal{G}_k, \{\mathcal{G}_{{k}'}\}) \in \mathcal{B}} \mathcal{L}^{(k, k')} \\ \mathcal{L}^{(k, k')} & = -\sum_{\mathcal{G}_j \in \{\mathcal{G}_{{k}'}\}} \log \frac{\exp(s_{kj} / \theta)}{\sum_{j'} \exp(s_{kj'} / \theta)} \end{align} where $\mathcal{B}$ indicates the minibatch, $\theta$ denotes the temperature parameter and $(\mathcal{G}_k, \{\mathcal{G}_{{k}'}\})$ denote the positive--negative example pairs constructed for the two contrastive prediction tasks. \section{Related Work} \label{sec:related-work} \paragraph{Retrosynthesis Prediction} Existing machine learning methods for retrosynthesis prediction can be divided into template-based and template-free approaches. Templates are either hand-crafted by experts \citep{hartenfeller2011collection, szymkuc2016computer}, or extracted algorithmically from large databases \citep{coley2017prediction, law_route_2009}. Exhaustively applying large template sets is expensive due to the involved subgraph matching procedure. Template-based methods therefore utilize different ways of prioritizing templates, by either learning a conditional distribution over the template set \citep{segler2017neural}, ranking templates based on molecular similarities to precedent reactions \citep{coley2017computer} or directly modelling the joint distribution of templates and reactants using logic variables \citep{dai2019retrosynthesis}. Despite their interpretability, these methods fail to generalize outside their rule set. Template-free methods \citep{liu2017retrosynthetic, zheng2019predicting, chen2020learning} bypass templates by learning a direct transformation from products to reactants. Current methods leverage architectures from neural machine translation and use a string-based representation of molecules called SMILES \citep{weininger1988smiles}. Linearizing molecules as strings does not utilize the inherently rich chemical structure. In addition, the reactant SMILES are generated from scratch, character by character. Attempts have been made to improve validity by adding a syntax corrector \citep{zheng2019predicting} and to improve the diversity of suggestions with a mixture model \citep{chen2020learning}, but the performance remains worse than that of \citep{dai2019retrosynthesis} on the standard retrosynthesis dataset. \paragraph{Reaction Center Identification} Our work most closely relates to models that predict reaction outcomes based on the reaction center~\citep{coley2019graph, jin2017predicting}. This center covers a small number of participating atoms involved in the reaction.
By learning to rank atom pairs based on their likelihood to be in the reaction center, these models can predict reaction outcomes. In contemporary work, \citep{shi2020graph} directly applied this idea to graph-based retrosynthesis. However, its performance falls short of the state of the art. The task of identifying the reaction center is related to the step of deriving the synthons in our formulation. Our work departs from \citep{coley2019graph, jin2017predicting} as we utilize the property that new bond formations occur rarely (\textasciitilde0.1\%) from products to synthons, allowing us to predict a score only for existing bonds and atoms and reduce the prediction complexity from $O(N^2)$ to $O(N)$. We further introduce global dependencies into the prediction input by using convolutional layers. This novel architecture yields an over 11\% improvement over state-of-the-art approaches. \paragraph{Utilizing Substructures} Substructures have been utilized in various tasks, from sentence generation by fusing phrases to molecule generation and optimization \citep{jin2018junction, jin2020multiobjective}. Our work is closely related to \citep{jin2020multiobjective}, which uses precomputed substructures as building blocks for property-conditioned molecule generation. However, instead of being precomputed, synthons, the analogous building blocks for reactants, are learnt implicitly during training.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} \label{sec:intro} The process of deep-inelastic scattering (DIS) of leptons off a nucleon target provides important information on the nucleon structure and the parton content. Therefore, it plays a central role in the determination of the parton distribution functions (PDFs), especially for the proton PDFs~\cite{Accardi:2016ndt}. At large values of Bjorken $x$ the DIS data constrain the valence quark distributions, while at small $x$ they are sensitive to the sea-quark and gluon distributions. In addition, the DIS cross sections at small $x$ contain substantial contributions from charm and bottom quarks. The virtuality $Q^2$ of the exchanged gauge boson is the other important kinematic variable in DIS. It offers a wide range of scales to probe, for instance, in electron-proton scattering the parton dynamics inside the proton. Depending on the value of $Q^2$, different theoretical descriptions of DIS within Quantum Chromodynamics (QCD) may be applied. This concerns in particular the number $n_f$ of active quark flavors and the treatment of the heavy quarks, such as charm and bottom. At low scales, when $Q^2$ is of the order of the heavy-quark mass squared $m_h^2$, one typically works with $n_f = 3$ massless quark flavors. Then, the proton structure function is composed only out of light-quark PDFs for up, down and strange and of the gluon PDF. Massive quarks appear in the final state only or contribute as purely virtual corrections. At higher scales, for $Q^2 \gg m_c^2, m_b^2$ compared to the charm and bottom quark masses squared, additional dynamical degrees of freedom lead to theories with $n_f = 4$ or $5$ effectively light flavors, depending on whether charm is considered massless, or even both, charm and bottom. The massive renormalization group equations rule these dynamics and provide the corresponding scale evolution, linking to $n_f = 3$ massless quarks at very low virtualities. In general, the transition for the flavor dependence of the strong coupling $\alpha_s$, i.e., $\alpha_s(n_f) \to \alpha_s(n_f+1)$, is achieved at some matching scale $\mu_0$ with the decoupling relations~\cite{Appelquist:1974tg}, which can be implemented perturbatively in QCD. These decoupling relations introduce a logarithmic dependence on the heavy-quark masses $m_c$, $m_b$. In a similar manner, this is realized for the PDF $f_i$ of a parton $i$ with the help of suitable heavy-quark operator matrix elements (OMEs)~\cite{Buza:1996wv,Bierenbaum:2009mv}, which, in the perturbative expansion, also depend logarithmically on the heavy-quark masses. The transition $f_i(n_f) \to f_i(n_f+1)$, again at a matching scale $\mu_0$, also implies the introduction of new heavy-quark PDFs for charm or bottom when they become effectively light flavors and can then be considered as effective dynamical degrees of freedom inside the proton. For a given fixed value of $n_f$, and having decoupled the heavy quarks in an appropriate manner, one may define the fixed-flavor number (FFN) scheme. In the FFN scheme used in the ABMP16 global fit of proton PDFs~\cite{Alekhin:2017kpj}, only light quarks and gluons are considered in the initial state, while heavy quarks appear in the final state as a result of the hard scattering of the incoming massless partons. Existing data on heavy-quark DIS production are well described by the FFN scheme with $n_f = 3$, see Refs.~\cite{Accardi:2016ndt,H1:2018flt}.
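To make the flavor-threshold bookkeeping above concrete, the following is a minimal one-loop sketch of $\alpha_s(\mu^2)$ with continuous matching at $\mu_0=m_c$ and $\mu_0=m_b$. It is an illustration only, not the higher-order decoupling used in the fits discussed here: top-quark decoupling is ignored, and the starting value $\alpha_s(M_Z^2)=0.118$ and the threshold masses are placeholder inputs.
\begin{verbatim}
import numpy as np

def beta0(nf):
    # One-loop QCD beta coefficient in the convention a_s = alpha_s / (4 pi),
    # where d a_s / d ln(mu^2) = -beta0 a_s^2.
    return 11.0 - 2.0 * nf / 3.0

def alpha_s(mu2, alpha_mz=0.118, mz2=91.1876**2, mc=1.4, mb=4.4):
    # One-loop alpha_s(mu^2): evolve from alpha_s(M_Z^2), switching nf at the
    # thresholds mu0 = m_b, m_c; at one loop the coupling is continuous there.
    thresholds = [(mb**2, 5), (mc**2, 4)]  # (threshold scale^2, nf above it)
    a, scale2, nf = alpha_mz / (4.0 * np.pi), mz2, 5
    for thr2, nf_above in thresholds:
        if mu2 >= thr2:
            break
        # run down to the threshold with nf_above flavors, then decouple one
        a = a / (1.0 + a * beta0(nf_above) * np.log(thr2 / scale2))
        scale2, nf = thr2, nf_above - 1
    a = a / (1.0 + a * beta0(nf) * np.log(mu2 / scale2))
    return 4.0 * np.pi * a
\end{verbatim}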
However, many PDF fits, like those of CT18~\cite{Hou:2019efy}, MMHT14~\cite{Harland-Lang:2014zoa} and NNPDF3.1~\cite{Ball:2017nwa}, employ various versions of the so-called variable-flavor-number (VFN) scheme. In the VFN scheme the quark flavors charm and bottom are considered also in the initial state from a certain mass scale onward and are dealt with as partonic components in the proton. As a consequence, the original distributions $f_i(n_f)$ are mapped into the distributions $f_i(n_f+1)$ at a chosen scale $\mu_0$, cf.~\cite{Buza:1996wv}. In addition, the VFN scheme effectively performs a resummation of logarithms in the ratio $Q^2/m_c^2$ (or $Q^2/m_b^2$) through the parton evolution equations for the charm (or bottom) PDF~\cite{Shifman:1977yb}, although the corresponding logarithms are not necessarily large for realistic kinematics. The difference in the modeling of the heavy-quark contribution, i.e., the choice of the FFN or the VFN scheme, has an impact on the PDFs obtained in global fits~\cite{Accardi:2016ndt,Thorne:2014toa}. Therefore, a detailed comparison of the two approaches is mandatory in view of the use of the respective PDF sets in QCD precision phenomenology. A particular prescription for a VFN scheme has been proposed in \cite{Buza:1996wv}, commonly referred to as the BMSN scheme. In PDF fits, the VFN scheme using this approach yields results which are not very different from the ones in the FFN scheme~\cite{Alekhin:2009ni}. This happens due to a smooth transition between the $n_f$- and $(n_f+1)$-flavor regimes at the matching scales $\mu_0=m_c$ and $\mu_0=m_b$, respectively, which is imposed in the BMSN ansatz. However, the BMSN prescription is based on heavy-quark PDFs, i.e., charm and bottom, which are derived with the help of fixed-order matching conditions. Therefore, the results of our previous study~\cite{Alekhin:2009ni} cannot be directly compared to the PDF fits in Refs.~\cite{Hou:2019efy,Harland-Lang:2014zoa,Ball:2017nwa}, which apply heavy-quark PDF evolution. In the present article we study the phenomenology of a modified BMSN prescription, which also includes the scale evolution of heavy-quark PDFs, in order to clarify the basic features of such VFN schemes. Our studies are limited to the case of DIS charm-quark production, since this process is the most important one phenomenologically and, at the same time, a representative case. The paper is organized as follows. Basic features of QCD factorization, the VFN scheme and the BMSN prescription are outlined in Sec.~\ref{sec:form}. In Sec.~\ref{sec:evol} we describe the particularities introduced by the heavy-quark PDF evolution and Sec.~\ref{sec:pheno} contains the benchmarking of various factorization schemes based on existing data for DIS charm-quark production. We address implications of VFN schemes for predictions at hadron colliders in Sec.~\ref{sec:colliders} and conclude in Sec.~\ref{sec:concl}. Technical details of the various implementations of heavy-quark schemes are summarized in App.~\ref{sec:appA}. \section{Heavy-quark PDFs} \label{sec:form} The dynamics of massless partons in the proton are parameterized in terms of the PDFs $f_i$ with $i=u,d,s,g$ for up, down, strange quarks and the gluon.
These define the set of flavor-singlet quark and gluon PDFs, $q^{s}$ and $g$, \begin{eqnarray} \label{eq:qs+g-PDFs} q^{s}(n_f, \mu^2) \,=\, \sum_{l=1}^{n_f} \left( f_l(n_f, \mu^2) + \bar{f}_l(n_f, \mu^2) \right) \, , \qquad\qquad g(n_f, \mu^2) \,=\, f_g(n_f, \mu^2) \, , \end{eqnarray} where $\mu$ denotes the factorization scale and we suppress the dependence on the momentum fractions $x$ here and below. Using standard QCD factorization~\footnote{A variant of the VFN scheme is used in the NNPDF3.1 fit of Ref.~\cite{Ball:2017nwa}, where the heavy-quark PDFs are parameterized by some functional form, which is then fitted to the data.}, the PDFs for the heavy quarks charm and bottom ($h =c, b$) at the scale $\mu$ in the $\overline{\mathrm{MS}}\, $\ scheme and using on-shell renormalization for the mass $m_h$ are then constructed from the quark-singlet and gluon PDFs in Eq.~(\ref{eq:qs+g-PDFs}) and the heavy-quark OMEs $A_{ij}$ as follows~\cite{Buza:1996wv,Bierenbaum:2009mv} \begin{eqnarray} \label{eq:VFNS-hq} f_{h+\bar h}(n_f+1, \mu^2) &=& {A_{hq}^{ps}\Big(n_f, \frac{\mu^2}{m_h^2}\Big)} \otimes {q^{s}(n_f, \mu^2)} + {A_{hg}^{s}\Big(n_f, \frac{\mu^2}{m_h^2}\Big)} \otimes {g(n_f, \mu^2)} \, , \end{eqnarray} where $h=c,b$ and `$\otimes$' denotes the Mellin convolution in the momentum fractions $x$. Typically, the matching conditions are imposed at the scale $\mu_0 = m_h$, and we further assume that $f_{h + {\bar h}} = 0$ at scales $\mu \le m_h$. In addition, the transition $\{q^{s}(n_f), g(n_f)\} \to \{q^{s}(n_f+1), g(n_f+1)\}$ for the set of the light-quark singlet and the gluon distributions with the respective heavy-quark OMEs has to account for operator mixing in the singlet sector \begin{eqnarray} \label{eq:VFNS-lq} q^{s}(n_f+1,\mu^2) &=& \left[ A_{qq,h}^{ns}\Bigl(n_f, \frac{\mu^2}{m_h^2}\Bigr) + A_{qq,h}^{ps} \Bigl(n_f, \frac{\mu^2}{m_h^2}\Bigr) + A_{hq}^{ps} \Bigl(n_f, \frac{\mu^2}{m_h^2}\Bigr) \right] \otimes q^{s}(n_f,\mu^2) \nonumber\\ && +\left[A^{s}_{qg,h}\Bigl(n_f, \frac{\mu^2}{m_h^2}\Bigr) + A^{s}_{hg}\Bigl(n_f, \frac{\mu^2}{m_h^2}\Bigr) \right] \otimes g(n_f,\mu^2) \, , \\ \label{eq:VFNS-g} g(n_f+1, \mu^2) &=& A_{gq,h}^{s}\Bigl(n_f,\frac{\mu^2}{m_h^2}\Bigr) \otimes q^{s}(n_f,\mu^2) + A_{gg,h}^{s}\Bigl(n_f,\frac{\mu^2}{m_h^2}\Bigr) \otimes g(n_f,\mu^2) \, , \end{eqnarray} with $h=c,b$, see Refs.~\cite{Buza:1996wv,Bierenbaum:2009mv}, also for matching relations for the non-singlet distributions. The perturbative expansion of the OMEs in powers of the strong coupling constant $\alpha_s$ reads (using $a_s = \alpha_s/(4\pi)$ as a short-hand), \begin{eqnarray} \label{eq:OMEexp} A_{ij} \; = \; \delta_{ij} + \sum\limits_{k=1}^{\infty}\, a_s^k \, A_{ij}^{(k)} \; = \; \delta_{ij} + \sum\limits_{k=1}^{\infty}\, a_s^k \, \sum\limits_{\ell=0}^{k}\, a^{(k,\ell)}_{ij}\, \ln^{\,\ell}\left(\frac{\mu^2}{m_h^2}\right) \, , \qquad \end{eqnarray} where the expressions $a^{(k,0)}_{ij}$ contain the information which is genuinely new at the $k$-th order. The leading-order (LO) and next-to-leading order (NLO) contributions to the OMEs are given by the coefficients at order $a_s$ and $a_s^2$ in Eq.~(\ref{eq:OMEexp}), respectively. They have been determined analytically in closed form in Refs.~\cite{Buza:1995ie,Buza:1996wv,Bierenbaum:2007qe,Bierenbaum:2009zt}~\footnote{ The initial calculation of the two-loop OMEs $A_{hg}^{s,\, (2)}$ and $A_{gg,h}^{s,\, (2)}$ in Ref.~\cite{Buza:1996wv} was incomplete, cf. Ref.~\cite{Bierenbaum:2009zt}.}.
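All matching relations above involve the Mellin convolution `$\otimes$'. As a minimal numerical sketch (assuming SciPy; the function name is ours), the convolution of a kernel with a PDF can be evaluated by direct integration:
\begin{verbatim}
from scipy.integrate import quad

def mellin_convolution(kernel, pdf, x):
    # (kernel (x) pdf)(x) = int_x^1 dz/z kernel(z) pdf(x/z)
    integrand = lambda z: kernel(z) * pdf(x / z) / z
    value, _ = quad(integrand, x, 1.0)
    return value
\end{verbatim}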
At next-to-next-to-leading order (NNLO) the heavy-quark OMEs are known either exactly or to a good approximation~\cite{Bierenbaum:2009mv,Ablinger:2010ty,Kawamura:2012cr,Ablinger:2014lka,Ablinger:2014nga,Alekhin:2017kpj}. This includes specifically the non-singlet and pure-singlet constant parts $a_{hq}^{(3,0)}$ in Eq.~(\ref{eq:OMEexp}) and the term $a_{hg}^{(3,0)}$ at order $a_s^3$. In the latter case an approximation based on fixed Mellin moments~\cite{Bierenbaum:2009mv} with a residual uncertainty in the small-$x$ region has been given in Refs.~\cite{Alekhin:2017kpj,Kawamura:2012cr}. It should be noted that the decoupling relations in Eqs.~(\ref{eq:VFNS-hq})--(\ref{eq:VFNS-g}) assume the presence of only a single heavy quark at each step. Thus, the bottom-quark contributions are ignored in the transition from $n_f=3$ to $4$ and in the construction of the charm-quark PDF. However, starting at two-loop order, the perturbative corrections to the heavy-quark OMEs contain graphs with both charm- and bottom-quark lines. With the ratio of masses $(m_c/m_b)^2 \approx 1/10$, charm quarks generally cannot be taken as massless at the scale of the bottom quark. Such two-mass contributions to the heavy-quark OMEs have been computed recently~\cite{Ablinger:2017err,Ablinger:2017xml,Ablinger:2018brx}. At three-loop order (and beyond), these corrections can neither be attributed to the charm- nor to the bottom-quark PDFs separately. Rather, one has to decouple charm and bottom quarks together at some large scale, and the corresponding VFN scheme, i.e., the simultaneous transition with two massive quarks, $f_i(n_f) \to f_i(n_f+2)$, has been discussed recently in Ref.~\cite{Blumlein:2018jfm}. This proceeds in close analogy to the simultaneous decoupling of bottom and charm quarks in the strong coupling constant $\alpha_s$, see for instance Ref.~\cite{Grozin:2011nk}. We will elaborate on these aspects further below. First, we will limit our studies to the case of the charm-quark PDF and apply Eqs.~(\ref{eq:VFNS-hq})--(\ref{eq:VFNS-g}) to change from $n_f=3$ to $4$. At LO only the heavy-quark OME $A_{hg}^{s}$ contributes and the coefficients are \begin{equation} \label{eq:Ahg-one-loop} a_{hg}^{(1,0)}(x) \,=\, 0 \, , \qquad \qquad a_{hg}^{(1,1)}(x) \,=\, 4 T_f (1 - 2 x + 2 x^2) \,=\, \frac{P^{(0)}_{qg}(x)}{n_f}\, \, , \end{equation} i.e., the constant term of the unrenormalized massive OME $A_{hg}^{s,\, (1)}$ vanishes and the logarithmic one with $T_f = 1/2$ is proportional to the LO quark-gluon splitting function $P^{(0)}_{qg}$ in the normalization of Ref.~\cite{Vogt:2004mw}. For four active flavors we abbreviate the charm PDF in Eq.~(\ref{eq:VFNS-hq}) as $c(x,\mu^2) \,\equiv\, f_{c+\bar c}(4,x,\mu^2)$ and consider its perturbative expansion \begin{equation} \label{eq:charm-pdf-def} c(x,\mu^2) \,=\, c^{(1)}(x,\mu^2) + c^{(2)}(x,\mu^2) + \dots, \end{equation} where the LO term $c^{(1)}(x,\mu^2)$ has a particularly simple form, \begin{equation} \label{eq:cqlo} c^{(1)}(x,\mu^2) \,=\, a_s(\mu^2)\, \ln\left(\frac{\mu^2}{m_c^2}\right)\, \int_x^1 \frac{dz}{z}\,\, a_{hg}^{(1,1)}(z)\,\, g\left(n_f=3,\frac{x}{z},\mu^2\right) \, . \end{equation} Here, $g$ denotes the gluon PDF in the 3-flavor scheme. This expression is used in the BMSN prescription~\cite{Buza:1996wv} of the VFN scheme and determines the charm-quark distribution at all scales $\mu \geq m_c$ in fixed-order perturbation theory (FOPT).
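Eq.~(\ref{eq:cqlo}) translates directly into code. The following is a minimal sketch, reusing the \texttt{mellin\_convolution} helper above and assuming user-supplied callables \texttt{gluon3(x, mu2)} for the 3-flavor gluon PDF and \texttt{a\_s(mu2)} for the coupling in the convention $a_s=\alpha_s/(4\pi)$ (e.g. interpolated from a PDF grid); the pole-mass value is illustrative.
\begin{verbatim}
import numpy as np

def a_hg_11(z, T_f=0.5):
    # LO logarithmic OME coefficient: 4 T_f (1 - 2z + 2z^2)
    return 4.0 * T_f * (1.0 - 2.0 * z + 2.0 * z * z)

def charm_lo_fopt(x, mu2, a_s, gluon3, m_c=1.4):
    # c^(1)(x, mu^2) = a_s(mu^2) ln(mu^2/m_c^2) [a_hg^(1,1) (x) g](x),
    # with g the 3-flavor gluon PDF; a_s is alpha_s/(4 pi)
    log_ratio = np.log(mu2 / m_c**2)
    conv = mellin_convolution(a_hg_11, lambda y: gluon3(y, mu2), x)
    return a_s(mu2) * log_ratio * conv
\end{verbatim}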
In contrast, other VFN prescriptions, like ACOT~\cite{Aivazis:1993pi,Kramer:2000hn,Tung:2001mv}, FONLL~\cite{Forte:2010ta} or RT~\cite{Thorne:1997ga}, use Eq.~(\ref{eq:cqlo}) as a boundary condition for $c(x,\mu^2)$ at $\mu=m_c$ and derive the scale dependence with the help of the standard QCD evolution equations (DGLAP) for massless quarks. The evolution resums logarithmic terms to all orders, so that the charm-quark distribution acquires additional higher-order contributions which are not present in the FOPT expression in Eq.~(\ref{eq:cqlo}). In order to illustrate the numerical difference between these two approaches, we consider the derivative of $c(x,\mu^2)$ \begin{eqnarray} \label{eq:cloder} \frac{dc^{(1)}(x,\mu^2)}{d\ln\mu^2} &=& a_s(\mu^2)\int_x^1 \frac{dz}{z}\,\, a_{hg}^{(1,1)}(z)\,\,g\left(\frac{x}{z},\mu^2\right) \,+\, \left(\frac{da_s}{d\ln\mu^2}\right) \frac{c^{(1)}(x,\mu^2)}{a_s} \nonumber \\ & & \,+\, a_s(\mu^2)\ln\left(\frac{\mu^2}{m_c^2}\right) \int_x^1 \frac{dz}{z} \,\,a_{hg}^{(1,1)}(z)\,\, \dot{g}\left(\frac{x}{z},\mu^2\right) \, , \end{eqnarray} where $\dot{g}(x,\mu^2)\equiv dg(x,\mu^2)/d\ln\mu^2$. \begin{figure} \centering \includegraphics[width=\textwidth,height=0.5\textwidth]{pdfevol.pdf} \caption{\small The difference between the evolved $c$-quark distributions and the ones obtained with the FOPT conditions in various orders of QCD (LO: dots, NLO: dashes, and NNLO$^{\ast}$: solid lines) versus the factorization scale $\mu$ and at representative values of the parton momentum fraction $x$ (left: $x=0.0002$, right: $x=0.002$), taking the matching scale $\mu_0=m_c=1.4~{\rm GeV}$, where $m_c$ is the pole mass of the $c$-quark. The vertical dash-dotted lines display the upper margin of the HERA collider kinematics. } \label{fig:pdfevol} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{pdfder.pdf} \hfill \caption{\label{fig:pdfder} The same as in Fig.~\ref{fig:pdfevol} for the scale derivatives of the charm-quark PDF, $\dot{c}(x,\mu^2) \equiv dc(x,\mu^2)/d\ln\mu^2$.} \end{figure} The first term in Eq.~(\ref{eq:cloder}) corresponds to the right-hand side of the standard DGLAP evolution equations, recall Eq.~(\ref{eq:Ahg-one-loop}), i.e., $a_{hg}^{(1,1)}$ is proportional to $P^{(0)}_{qg}$. The second and the third term, however, account for the difference between the FOPT distributions and the evolved ones. These terms vanish at the matching scale $\mu_0=m_c$, as they should by definition. For scales $\mu > m_c$ the second term, proportional to the QCD $\beta$ function, is negative, since $da_s/d\ln \mu^2 = \beta(a_s)/(4\pi) < 0$. However, the net effect of the difference between the FOPT and the DGLAP-evolved distributions shown in Fig.~\ref{fig:pdfevol} on the left is positive at small $x$ and driven by $\dot{g}$ in the third term. Only at large $x$, where the gluon PDF is negligible, the term proportional to $\beta(a_s)$ dominates and the net difference between the FOPT and the DGLAP-evolved distributions is negative. The matching conditions for the charm quark at NLO are more involved. The NLO term $c^{(2)}(x,\mu^2)$ in Eq.~(\ref{eq:charm-pdf-def}) has the form \begin{eqnarray} \label{eq:cq-two-loop} c^{(2)}(x,\mu^2) &=& a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, A_{hg}^{s,\, (2)}\left(n_f=3,z,\frac{\mu^2}{m_c^2}\right)\,\, g\left(n_f=3,\frac{x}{z},\mu^2\right) \nonumber \\ & & \,+\, a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, A_{hq}^{ps,\, (2)}\left(n_f=3,z,\frac{\mu^2}{m_c^2}\right)\,\, q^{s}\left(n_f=3,\frac{x}{z},\mu^2\right) \, .
\end{eqnarray} It includes the NLO corrections to the massive OMEs $A_{hg}^{s,\, (2)}$ and $A_{hq}^{ps,\, (2)}$ for $n_f=3$, see Eq.~(\ref{eq:OMEexp}); the gluon and the quark-singlet PDFs, $g$ and $q^{s}$, are again taken in the 3-flavor scheme, cf. Eq.~(\ref{eq:qs+g-PDFs}). Since $a_{hg}^{(2,0)}$ and $a_{hq}^{(2,0)}$ in Eq.~(\ref{eq:OMEexp}) are non-zero in the $\overline{\mathrm{MS}}\, $\ scheme, $c(x,\mu^2)$ at NLO no longer vanishes at the matching scale $\mu_0=m_c$, see the off-set in Fig.~\ref{fig:pdfevol} on the right. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{pdfevolth.pdf} \hfill \caption{\label{fig:pdfevolth} The same as in Fig.~\ref{fig:pdfevol} at NNLO$^{\ast}$ and different values of the matching scale $\mu_0$ (solid line: $\mu_0=m_c$, dashes: $\mu_0=2m_c$). } \end{figure} The comparison of the charm-quark FOPT distributions at NLO, based on Eqs.~(\ref{eq:cqlo}) and (\ref{eq:cq-two-loop}), and the evolved ones, using $c(x,\mu^2)$ only as the boundary condition at the matching scale, shows in Fig.~\ref{fig:pdfevol} qualitatively the same pattern as at LO, although the numerical differences are smaller now. At small $x$, driven by the scale derivative $\dot{g}$ of the gluon PDF, the FOPT distributions are larger, while at large $x$ the terms proportional to $\beta(a_s)$ dominate and the DGLAP-evolved distributions are larger. These observations can be expressed in quantitative form through the scale derivative of the NLO term $c^{(2)}(x,\mu^2)$, which reads \begin{eqnarray} \label{eq:cnloder} \frac{dc^{(2)}(x,\mu^2)}{d\ln\mu^2} &=& a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, \left(a_{hg}^{(2,1)}(z) + 2 \ln\left(\frac{\mu^2}{m_c^2}\right) a_{hg}^{(2,2)}(z)\right)\,\, g\left(\frac{x}{z},\mu^2\right) \nonumber \\ & & \,+\, a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, \left(a_{hq}^{(2,1)}(z) + 2 \ln\left(\frac{\mu^2}{m_c^2}\right) a_{hq}^{(2,2)}(z)\right)\,\, q^{s}\left(\frac{x}{z},\mu^2\right) \,+\, 2 \left(\frac{da_s}{d\ln\mu^2}\right) \frac{c^{(2)}(x,\mu^2)}{a_s} \nonumber \\ & & \,+\, a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, A_{hg}^{s,\, (2)}\left(z,\frac{\mu^2}{m_c^2}\right)\,\, \dot{g}\left(\frac{x}{z},\mu^2\right) + a_s^2(\mu^2) \int_x^1 \frac{dz}{z}\,\, A_{hq}^{ps,\, (2)}\left(z,\frac{\mu^2}{m_c^2}\right)\,\, \dot{q}^{s}\left(\frac{x}{z},\mu^2\right) \, , \nonumber \\ \end{eqnarray} where, again, $\dot{g}(x,\mu^2)\equiv dg(x,\mu^2)/d\ln\mu^2$ and $\dot{q}^{s}(x,\mu^2)\equiv dq^{s}(x,\mu^2)/d\ln\mu^2$. Here, the first two terms on the right-hand side contain the expressions used in the standard DGLAP equations to evolve the charm-quark PDF, since the NLO splitting functions $P^{(1)}_{qg}$ and $P^{(1)}_{qq}$ appear in the terms $a_{hg}^{(2,1)}$ and $a_{hq}^{(2,1)}$, cf.~\cite{Buza:1996wv,Bierenbaum:2009mv}. However, there are also other contributions, since the heavy-quark OMEs obey their own (massive) renormalization group equation. In addition, the full expression $dc(x,\mu^2)/d\ln\mu^2$ at NLO contains, of course, also the terms from $\dot{c}^{(1)}(x,\mu^2)$ in Eq.~(\ref{eq:cloder}) expanded to higher order in $a_s$, for example the term proportional to $\beta(a_s)$. In summary, these terms are responsible for decreasing the difference between the FOPT and the evolved distributions at NLO in Fig.~\ref{fig:pdfevol}.
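The scale derivatives compared in Fig.~\ref{fig:pdfder} can also be obtained numerically, which is a useful cross-check of implementations of Eqs.~(\ref{eq:cloder}) and (\ref{eq:cnloder}). A minimal sketch (the function name is ours), assuming a charm PDF supplied as a callable \texttt{pdf(x, mu2)}:
\begin{verbatim}
import numpy as np

def dlog_mu2(pdf, x, mu2, eps=1e-3):
    # Central finite difference of pdf(x, mu2) with respect to ln(mu^2).
    up = pdf(x, mu2 * np.exp(eps))
    down = pdf(x, mu2 * np.exp(-eps))
    return (up - down) / (2.0 * eps)

# Example: difference of the FOPT and DGLAP-evolved derivatives at one point,
# given two implementations c_fopt(x, mu2) and c_evol(x, mu2):
#   delta = dlog_mu2(c_fopt, 2e-4, 10.0) - dlog_mu2(c_evol, 2e-4, 10.0)
\end{verbatim}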
As a further variant in the study of the DGLAP-evolved charm-quark PDF, one can perform the evolution using the full NNLO splitting functions $P^{(2)}_{ij}$ of Ref.~\cite{Vogt:2004mw}, starting at the matching scale $\mu_0=m_c$ from the boundary condition for $c(x,m_c^2)$ at NLO in Eqs.~(\ref{eq:cqlo}) and (\ref{eq:cq-two-loop}). We denote this variant as NNLO$^{\ast}$, since there is a mismatch in the orders of perturbation theory between the heavy-quark OMEs and the accuracy of the evolution equations. The difference with the NLO variant is due to terms which are formally of higher order, but nevertheless have significant numerical impact at small $x$, as shown in Fig.~\ref{fig:pdfevol}. There, the FOPT distributions at NLO and the evolved ones at NNLO$^{\ast}$ accuracy are very similar in the entire $\mu$-range. Only at large $x$ is the effect of the increased order in the DGLAP evolution negligible. In Fig.~\ref{fig:pdfder} we display the scale derivatives of the charm-quark PDF, $\dot{c}(x,\mu^2) \equiv dc(x,\mu^2)/d\ln\mu^2$, calculated using Eqs.~(\ref{eq:cloder}) and (\ref{eq:cnloder}). We consider the difference of $\dot{c}(x,\mu^2)$ determined in FOPT, $\dot{c}_{\rm{FOPT}}$, and the one evolved with the standard DGLAP evolution, $\dot{c}_{\rm{evol}}$, choosing $n_f=4$ and starting from the expressions in Eqs.~(\ref{eq:cqlo}) and (\ref{eq:cq-two-loop}) at the matching scale $\mu_0=m_c=1.4~{\rm GeV}$. Evidently, at LO the difference $\dot{c}_{\rm{FOPT}}-\dot{c}_{\rm{evol}}$ has to vanish at the matching scale, while at NLO or in the NNLO$^{\ast}$ variant some finite off-set at $\mu_0=m_c=1.4~{\rm GeV}$ remains. Remarkably, the results at NLO and at NNLO$^{\ast}$, i.e., using NLO boundary conditions from Eqs.~(\ref{eq:cqlo}) and (\ref{eq:cq-two-loop}) and NNLO splitting functions in the evolution of $\dot{c}_{\rm{evol}}$, are very different at low factorization scales and only converge above $\mu^2 \gtrsim 10^2 \dots 10^3$~GeV$^2$, depending on the value of $x$. These large scales, however, at which the NLO and the NNLO$^{\ast}$ variants become of similar size, are typically well outside the kinematic range of the HERA collider, whose upper limit is indicated by the vertical dash-dotted lines. These findings indicate that there is a substantial numerical uncertainty in the VFN prescriptions ACOT~\cite{Aivazis:1993pi,Kramer:2000hn,Tung:2001mv}, FONLL~\cite{Forte:2010ta} or RT~\cite{Thorne:1997ga} due to the order of the QCD evolution applied. In particular, the additional higher-order terms in the NNLO$^{\ast}$ variant do have a sizable effect within the $\mu$-range covered by experimental data on DIS charm-quark production and, hence, on the quality of the description of those data in a fit using such VFN prescriptions. An additional source of uncertainty in the VFN scheme concerns the choice of the matching scale $\mu_0$. Conventionally it is set to the corresponding heavy-quark mass, $m_c$ and $m_b$ for the 4- and 5-flavor PDFs, respectively. A variation of $\mu_0$ leads to a modification of the shape of the evolved heavy-quark PDFs, whereas, in contrast, the FOPT ones remain unchanged by construction. Therefore, for $\mu_0 > m_h$ the difference between the FOPT and the evolved heavy-quark PDFs generally becomes smaller, in particular within the phase-space region covered by existing data, cf. Fig.~\ref{fig:pdfevolth}.
Such a variation of $\mu_0$ also implies the use of the FFN scheme to describe data in a wider kinematic range, e.g., up to $\mu_0 = 2m_c$ instead of $\mu_0 = m_c$ for the illustration in Fig.~\ref{fig:pdfevolth}. Therefore, the uncertainty due to a variation of the matching scale is not completely independent of the one related to the choice of heavy-quark PDFs employed in the VFN scheme. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{bmsn.pdf} \hfill \caption{\label{fig:bmsn} The structure function $F_2^c$ for DIS $c$-quark production at values $x=0.0001$ (left) and $x=0.001$ (right) of the Bjorken variable versus momentum transfer $Q^2$, computed in the FFN scheme (solid lines), with the asymptotic expression of the FFN scheme (dots), the ZMVFN scheme (dash-dotted lines) and the BMSN prescription of the VFN scheme (dashes), using the FOPT $c$-quark distribution. } \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{f2c.pdf} \hfill \caption{\label{fig:f2c} The same as in Fig.~\ref{fig:bmsn} computed using the BMSN prescription of the VFN scheme and various approaches for the generation of the $c$-quark distributions (FOPT: dots, NLO evolved: dashes, NNLO$^{\ast}$ evolved: dash-dotted lines) in comparison with the FFN scheme results (solid lines). } \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{f2cth.pdf} \hfill \caption{\label{fig:f2cth} The same as in Fig.~\ref{fig:f2c} for the NNLO$^{\ast}$ $c$-quark distributions obtained with the matching scales $\mu_0=m_c$ (dash-dotted lines) and $\mu_0=2m_c$ (dashes). } \end{figure} \section{BMSN prescription of the VFN scheme} \label{sec:evol} The heavy-quark distributions derived using the matching conditions of Eqs.~(\ref{eq:VFNS-hq}) and (\ref{eq:VFNS-lq}) enter the zero-mass VFN scheme (ZMVFN) expression for $F_{2,h}$ \begin{equation} \label{eq:zmvfn} F_{2,h}^{ZMVFN} \,=\, \sum\limits _{k=0}^{\infty} a_s^k(n_f+1) \sum\limits_{i=q,g,h} C^{(k)}_{2,i}(n_f+1) \otimes f_i(n_f+1) \, , \end{equation} where $C_{2,i}^{(k)}$ are the massless DIS Wilson coefficients at the $k$-th order, which are known to next-to-next-to-next-to-leading order (N$^3$LO)~\cite{Vermaseren:2005qc}. This expression is valid at asymptotically large momentum transfer $Q^2 \gg m_h^2$, while it is unsuitable for scales $Q^2 \simeq m_h^2$, since the heavy-quark decoupling is not applicable there. Therefore, a realistic implementation of the VFN scheme commonly includes a combination of the ZMVFN expression in Eq.~(\ref{eq:zmvfn}) with the FFN one \begin{equation} \label{eq:ffn} F_{2,h}^{FFN} \,=\, \sum\limits _{k=1}^{\infty} a_s^k(n_f) \sum\limits _{i=q,g} H^{(k)}_{2,i}(n_f) \otimes f_i(n_f) \, , \end{equation} where $H_{2,i}^{(k)}$ are the Wilson coefficients for DIS heavy-quark production, all known exactly at NLO~\cite{Laenen:1992zk} and, for $H^{(3)}_{2,g}$, to a good approximation at NNLO~\cite{Kawamura:2012cr,Alekhin:2017kpj}. Furthermore, in order to avoid double counting, a subtraction has to be carried out when combining Eqs.~(\ref{eq:zmvfn}) and (\ref{eq:ffn}). For the BMSN prescription of the VFN scheme~\cite{Buza:1996wv} this subtraction arises from the asymptotic FFN expression as follows \begin{equation} \label{eq:asymp} F_{2,h}^{asy} \,=\, \sum\limits _{k=1}^{\infty} a_s^k(n_f) \sum\limits _{i=q,g} H^{(k),asy}_{2,i}(n_f) \otimes f_i(n_f) \, , \end{equation} where $H^{(k),asy}_{2,i}$ is derived from $H^{(k)}_{2,i}$ taken in the limit $Q^2 \gg m_h^2$.
In summary, the BMSN prescription reads \begin{equation} \label{eq:bmsn} F_{2,h}^{BMSN} \,=\, F_{2,h}^{FFN}+F_{2,h}^{ZMVFN}-F_{2,h}^{asy}\, , \end{equation} where a factorization scale $\mu_F=m_h$ is used throughout. The asymptotic Wilson coefficients $H^{asy}_{2,i}$ can be expanded into a linear combination of the massless Wilson coefficients $C_{2,i}$ and the massive OMEs~\cite{Buza:1995ie,Buza:1996wv,Bierenbaum:2009mv}. For this reason, the asymptotic expression of Eq.~(\ref{eq:asymp}) coincides with the ZMVFN one of Eq.~(\ref{eq:zmvfn}), when the FOPT matching conditions of Eqs.~(\ref{eq:VFNS-hq})--(\ref{eq:VFNS-g}) are employed, up to the subleading non-singlet terms and the difference between $a_s^k(n_f+1)$ and $a_s^k(n_f)$~\cite{Alekhin:2009ni}. The latter exhibits a small discontinuity at $Q^2 \simeq m_h^2$ beyond one loop~\cite{Schroder:2005hy,Chetyrkin:2005ia}, which is numerically negligible, so that $F_{2,h}^{ZMVFN}$ and $F_{2,h}^{asy}$ in Eq.~(\ref{eq:bmsn}) essentially cancel. Therefore, at small $Q^2$ one obtains $F_{2,h}^{BMSN} \to F_{2,h}^{FFN}$. On the other hand, at large scales $Q^2 \gg m_h^2$ the FFN term is canceled by $F_{2,h}^{asy}$, and in this limit $F_{2,h}^{BMSN} \to F_{2,h}^{ZMVFN}$. In summary, the BMSN prescription of Eq.~(\ref{eq:bmsn}) provides a smooth transition from the FFN scheme at small momentum transfer to the ZMVFN scheme at large scales, cf. Fig.~\ref{fig:bmsn}. \begin{figure}[tbp] \centering \includegraphics[width=.9\textwidth,height=0.8\textwidth]{heracvfn.pdf} \hfill \caption{\small The pulls obtained for the combined HERA data on DIS $c$-quark production~\cite{H1:2018flt} in the FFN version of the present analysis (solid lines) versus $x$ in bins of $Q^2$. The predictions obtained using the BMSN prescription of the VFN scheme with various versions of the heavy-quark PDFs with respect to the FFN fit are displayed for comparison (dotted-dashes: fixed order NLO; dashes: evolved from the NLO matching conditions with the NLO splitting functions; dots: the same for the NLO matching conditions combined with the NNLO splitting functions). The PDFs obtained in the FFN fit are used throughout. } \label{fig:heracvfn} \end{figure} A version of the BMSN prescription based on the NLO evolution of the $(n_f+1)$-flavor PDFs also allows for a smooth matching with the FFN scheme at $Q^2 = m_h^2$, because the NLO-evolved PDFs do not have a discontinuity with respect to the FOPT ones at the matching point, cf. Figs.~\ref{fig:pdfevol} and \ref{fig:f2c}. For the variant denoted NNLO$^{\ast}$, which uses NNLO-evolved PDFs, the trend is different: the slope of $F_{2,h}$ at $Q^2 = m_h^2$ predicted by the BMSN prescription is substantially larger than the one obtained with the FFN scheme. This is in line with the difference between the NNLO and FOPT PDFs. Obviously, this difference is not explained by the impact of the resummation of large logarithms, but rather by the mismatch in the perturbative order of the matching conditions and evolution kernels employed to obtain the NNLO heavy-quark PDFs. Therefore, the difference between the NLO and NNLO$^{\ast}$ variants of the VFN scheme should essentially quantify its uncertainty due to the missing NNLO corrections to the massive OMEs. A choice of the matching scale $\mu_0=m_h$, i.e., at the mass of the heavy quark, is a matter of convention rather than a consequence of solid theoretical arguments.
Also note that for DIS charm production the matching scale $\mu_0$ cannot be shifted significantly below $m_c$, because the matching would then be performed at scales well below 1~GeV, where perturbative QCD expansions no longer converge. When $\mu_0$ is shifted upwards, e.g., $\mu_0=2m_h$, the difference between the NLO and NNLO$^{\ast}$ variants of the VFN scheme becomes less significant. This is largely because essential parts of the problematic small-$Q^2$ region are then left to a theoretical description within the FFN scheme, cf. Fig.~\ref{fig:f2cth}. The impact of scheme variations and the choice of the matching scale are qualitatively similar for $c$- and $b$-quark production. Nonetheless, the effects are less pronounced for the $b$-quark case~\cite{Bertone:2017ehk}, mainly because of the smaller numerical value of the strong coupling at the scale $m_b$. For this reason, and also due to the more representative kinematics of the data, all our phenomenological comparisons are focused on the $c$-quark contribution. \section{Benchmarking of the FFN and VFN schemes with the HERA data} \label{sec:pheno} \begin{figure} \centering \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{glu.pdf} \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{sea.pdf} \caption{\small Left panel: The relative uncertainty in the 3-flavor gluon distribution $xg(x,\mu)$ at the factorization scale $\mu=3~{\rm GeV}$ versus $x$ obtained in the fit based on the BMSN VFN prescription with the FOPT heavy-flavor PDFs (hatched area) in comparison to the relative variation of its central value due to switching to the NLO- (dashes) and NNLO$^{\ast}$-evolved (dots) PDFs. Right panel: the same for the total light-flavor sea quark distribution $xS(x,\mu)$. } \label{fig:pdfs} \end{figure} To study the phenomenological impact of the VFN scheme uncertainties we consider several variants of the ABMP16 PDF fit~\cite{Alekhin:2017kpj}, which include the recent HERA data on heavy-flavor DIS production~\cite{H1:2018flt}. Furthermore, the inclusive neutral-current DIS HERA data used in the ABMP16 fit are excluded in order to illuminate the impact of the scheme variation on the PDFs extracted from the fit. For the same reason we exclude the collider data on $W^\pm$- and $Z$-boson production, which provide an additional constraint on the PDFs at small $x$ in the ABMP16 fit. However, in order to keep the different species of quark flavors disentangled, we add data on DIS off a deuteron target, analogous to an earlier study in Ref.~\cite{Alekhin:2017fpf}. For all variants we employ the NLO massive Wilson coefficients~\cite{Laenen:1992zk} and the pole-mass definition for the heavy-quark masses, so that a consistent comparison of the FFN scheme with the original formulation of the BMSN prescription and its modifications is possible. For the same purpose we take the factorization scale $\mu_F={m_h}$ both for the FFN and the VFN scheme. The values of $m_c^{pole}=1.4$~GeV and $m_b^{pole}=4.4$~GeV used in the present study are not perfectly consistent with the ones obtained in the ABMP16 fit with the $\overline{\mathrm{MS}}\, $ definition. However, they are close to the values in the pole-mass scheme preferred by the HERA data~\cite{H1:2018flt}~\footnote{ Changing the heavy-quark mass renormalization scheme to the $\overline{\rm MS}$-scheme is straightforward, cf.~\cite{Alekhin:2010sv,Ablinger:2014nga}.}.
With these settings, the FFN scheme provides a good description of the $c$-quark production data, cf. Fig.~\ref{fig:heracvfn}. The agreement of the fit with the data is equally good, both at small and at large $Q^2$, underpinning the fact that additional large logarithms cannot improve the theoretical treatment of the data within the kinematic range covered by HERA. This observation is indeed long known~\cite{Gluck:1993dpa}. In order to check this aspect in greater detail we also compare predictions of various versions of the VFN scheme with the data. Let us consider the VFN predictions for the heavy-quark production cross sections which are computed by using the BMSN prescription of Eq.~(\ref{eq:bmsn}) for $F_2$, while still keeping the FFN scheme for $F_L$. The justification of this approach derives from the small numerical contribution of $F_L$ as compared to $F_2$. In addition, the modeling of $F_L$ within the VFN framework is conceptually problematic~\cite{Alekhin:2009ni}, because the effects of power corrections in $m_h^2/Q^2$ cannot be disregarded for this observable~\cite{Buza:1995ie}. The PDFs used in this comparison are the ones obtained in the FFN version of the fit. Therefore, the obtained pulls display the impact of the scheme variation only. As expected, predictions of the VFN scheme based on the BMSN prescription and the FOPT heavy-flavor PDFs are close to the FFN ones. The same applies to the case of NLO-evolved PDFs, which are smoothly matched with the FFN ones at small scales, cf. Fig.~\ref{fig:f2c}. In contrast, an excess with respect to the small-$Q^2$ data appears for the variant of the fit with the NNLO$^{\ast}$ PDFs employed. This excess is clearly related to the mismatch between the FFN scheme and this variant of the VFN one. At large $Q^2$ the impact of the resummation of large logarithms is marginal, in particular given the size of the data uncertainties. The latter is true also for the case of NLO-evolved PDFs. \begin{figure} \centering \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{gluin.pdf} \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{seain.pdf} \caption{\small The same as in Fig.~\ref{fig:pdfs} for the variants of the fit with the HERA inclusive DIS data appended. Results of the NNLO FFN fit are displayed for comparison (solid lines). } \label{fig:pdfsin} \end{figure} \begin{figure} \centering \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{gluth.pdf} \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{seath.pdf} \caption{\small The same as in Fig.~\ref{fig:pdfsin} for a comparison of two variants of the BMSN fit based on the NNLO$^{\ast}$ PDFs with the matching point at $\mu_0=m_c$ (dots) and $\mu_0=2m_c$ (dashed dots). } \label{fig:pdfshi} \end{figure} The HERA data on $c$-quark production used in the present analysis are accurate enough to provide a sensible constraint on the small-$x$ gluon distribution. Moreover, the latter demonstrates sensitivity to the choice of the factorization scheme, cf. Fig.~\ref{fig:pdfs}. The FFN scheme and the BMSN scheme with the FOPT and the NLO-evolved PDFs are in qualitative agreement, while a much lower small-$x$ gluon distribution is preferred in the variant based on the NNLO$^{\ast}$ PDFs. This is in line with the trends observed for the pull comparison, cf. Fig.~\ref{fig:heracvfn}.
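The pulls shown in Fig.~\ref{fig:heracvfn} are simply data-minus-theory residuals in units of the experimental uncertainty. A minimal sketch, assuming NumPy arrays of measured cross sections, theory predictions and total uncertainties:
\begin{verbatim}
import numpy as np

def pulls(data, theory, uncertainty):
    # Pull of each data point with respect to a theory prediction,
    # in units of the total experimental uncertainty.
    data, theory, uncertainty = map(np.asarray, (data, theory, uncertainty))
    return (data - theory) / uncertainty
\end{verbatim}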
The difference between the gluon and quark distributions obtained in the NLO- and NNLO$^{\ast}$-based fits is pronounced at small $x$ due to kinematic correlations with the small-$Q^2$ region, where the difference between these two approaches is localized, and reaches $\sim 30\%$ at $x=10^{-4}$. The description of the small-$x$ inclusive DIS data is also sensitive to the scheme choice due to a substantial contribution of the heavy-quark production. In order to check this quantitatively, we consider variants of the fits with various VFN scheme prescriptions and the HERA inclusive data~\cite{Abramowicz:2015mha} added. In line with the recent update of the ABMP16 fit~\cite{Alekhin:2019ntu}, we impose strong cuts on the momentum transfer $Q^2 > 10~{\rm GeV}^2$ and on the hadronic mass $W^2 > 12.5~{\rm GeV}^2$, which allow us to avoid any impact of higher-twist corrections, cf.~\cite{Alekhin:2012ig}. The PDF uncertainties are improved due to the additional data included. However, the sensitivity of the resulting gluon distribution to the choice of heavy-quark PDF evolution still reaches $\sim30\%$ at $x=10^{-4}$, cf. Fig.~\ref{fig:pdfsin}. Such a spread induces sizable uncertainties in the small-$x$ VFN predictions, in particular in the $c$- and $b$-quark input distributions for scattering processes at hadron colliders. \begin{figure} \centering \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{gluhi.pdf} \includegraphics[width=0.455\textwidth,height=0.42\textwidth]{seahi.pdf} \caption{\small The same as in Fig.~\ref{fig:pdfsin} for the 5-flavor PDFs at the factorization scale $\mu=100$~GeV. } \label{fig:pdfhigh} \end{figure} The gluon distribution obtained using the BMSN prescription with the NLO-evolved PDFs is increased with respect to the FOPT one at $x \sim 0.01$, which hints at the impact of the resummation of large logarithms at these kinematics. No further substantial change in the gluon distribution at $x \gtrsim 0.01$ is observed when the NNLO corrections to the evolution are taken into account. Therefore, one should expect a minor impact of the logarithmic terms at higher order (higher powers) on the description of the existing DIS data, although the comparison is somewhat deteriorated by the uncertainty from the mismatch in perturbative orders in the NNLO$^{\ast}$ fits appearing at $x\lesssim 0.01$. In this context it is also instructive to consider the results of the FFN fit performed taking into account the NNLO corrections, which include the terms up to $O(\ln^2(\mu^2/m_h^2))$~\cite{Alekhin:2017kpj} and the $\overline{\mathrm{MS}}\, $ masses $m_c(m_c)=1.27~{\rm GeV}$, $m_b(m_b)=4.18~{\rm GeV}$~\cite{Tanabashi:2018oca}. The gluon distribution obtained with these settings is similar to the VFN ones at $x\gtrsim 0.01$, located between the NLO and NNLO$^{\ast}$ fit results at $x\sim 10^{-4}$, and lower by $\sim 5\%$ than both of these variants at $x\sim 0.01$, where they agree with each other, cf. Fig.~\ref{fig:pdfsin}. This plot also yields an upper limit on the estimate of the impact of missing large logarithms in the NNLO FFN fit. On the other hand, a comparison with the NNLO VFN fit at small $x$ is inconclusive due to the large uncertainties in the VFN scheme appearing at these kinematics. A more accurate estimate requires the NNLO VFN fit with a consistent boundary condition based on OMEs at NNLO accuracy~\cite{Bierenbaum:2009mv,Ablinger:2010ty,Kawamura:2012cr,Ablinger:2014lka,Ablinger:2014nga,Alekhin:2017kpj}.
Nonetheless, at the present level of data accuracy this upper limit is comparable with the experimental uncertainties in the gluon distribution obtained from the fit. Finally, a variation of the matching scale for the 4-flavor PDFs from $\mu_0=m_c$ to $\mu_0=2m_c$ brings the VFN heavy-flavor predictions closer to the FFN ones, cf. Fig.~\ref{fig:f2cth}. The phenomenological effect of such a variation is more substantial at small $Q^2$ and $x$ due to the kinematic characteristics of the existing DIS experiments. Therefore, the corresponding change of the gluon distribution due to a matching-point variation is significant mostly at $x \lesssim 10^{-3}$, cf. Fig.~\ref{fig:pdfshi}. It is comparable in size with the VFN scheme uncertainty related to the boundary conditions for the evolution. However, strictly speaking, these two uncertainty sources should not be considered independently, since the impact of the matching scale variation also manifests itself through the scheme change. \section{Implications of VFN schemes for predictions at hadron colliders} \label{sec:colliders} The contributions of heavy flavors to the hadro-production of massive states, like $W^\pm$-, $Z$- and Higgs-bosons, $t$-quarks, etc., are commonly taken into account within the 4- or 5-flavor scheme. This allows for great simplifications of the computations, since the VFN PDFs employed in this case contain resummation effects, which generally rise with the factorization scale, cf.~Fig.~\ref{fig:pdfevol}. Therefore, the VFN scheme provides a relevant framework for the phenomenology of heavy-particle hadro-production. The NNLO 4- and 5-flavor PDFs still suffer from the uncertainty due to the yet unknown exact NNLO corrections to the massive OMEs. Moreover, for the NNLO PDFs derived from the VFN fit including the small-$x$ DIS data this uncertainty is enhanced, since the part of those DIS data which provides an essential constraint on the PDFs also populates the matching region. The observed spread in the 5-flavor gluon distributions, which are obtained from the VFN fits with varying treatments of the matching ambiguity, is somewhat reduced with increasing scales due to the general properties of the QCD evolution. However, it is still comparable to the experimental uncertainties at $x\sim 0.01$ and substantially larger at $x\sim 10^{-4}$, cf.~Fig.~\ref{fig:pdfhigh}. Altogether, this implies an uncertainty in predictions of the production rates of the Higgs boson and $t$-quark pairs at the Large Hadron Collider (LHC) within a margin of a few percent, and somewhat larger at the higher collision energies discussed for future hadron colliders. Note that in the ABMP16 fit~\cite{Alekhin:2017kpj}, which is based on a combination of both DIS and hadron-collider data, the FFN and the 5-flavor VFN schemes are used for the theoretical description of these samples, respectively. This allows one to keep the advantages of the VFN scheme at large scales, while avoiding its problems concerning the DIS data. Nevertheless, the NNLO massive OMEs~\cite{Bierenbaum:2009mv,Ablinger:2010ty,Kawamura:2012cr,Ablinger:2014lka,Ablinger:2014nga,Alekhin:2017kpj} are still necessary to generate NNLO PDFs free from the matching ambiguity.
\begin{figure} \centering \includegraphics[width=0.9\textwidth,height=0.42\textwidth]{f2cb.pdf} \caption{\small The ratio of two-mass contribution Eq.~(\ref{eq:tm}) to the DIS structure function $F_{2,c}$ (left panel) and $F_{2,b}$ (right panel) computed in the VFN scheme using the PDFs from the NNLO$^{\ast}$ variant of the VFN fit versus momentum transfer $Q^2$ and at various values of Bjorken $x$ (solid line: $x$=0.0001, dashes: $x$=0.001, dotted-dashes: $x$=0.01). } \label{fig:tm} \end{figure} In closing the studies of VFN schemes we wish to address a conceptual problem of the 5-flavor scheme definition due to the fact that the $b$-quark mass $m_b$ is not much larger than $m_c$. This relates to the inherent limitations of the VFN schemes due to the successive decoupling of one heavy quark at a time. As discussed above, starting from the two-loop order, the DIS structure functions also receive contributions which contain two different massive quarks~\cite{Blumlein:2018jfm}. At two loops, they are given by one-particle reducible Feynman diagrams, while one-particle irreducible graphs appear at the three-loop order for the first time, cf.~\cite{Ablinger:2017err,Ablinger:2017xml,Ablinger:2018brx}. Here we will consider the two-loop effects, which arise from virtual corrections with both charm and bottom quarks. Thus, no production threshold is involved. For the structure function $F_2$ one obtains \begin{eqnarray} \label{eq:tm} F_{2,h}^{{\rm 2-mass},(2)}(x,Q^2) \,=\, - e_h^2\ a_s^2(Q^2)\, \frac{16}{3} T_F^2\, x\, \ln\left(\frac{Q^2}{m_c^2}\right) \ln\left(\frac{Q^2}{m_b^2}\right) \int\limits_x^1 \frac{dz}{z} \left(z^2 + (1-z)^2\right)\, g\left(\frac{x}{z},Q^2\right)\, , \end{eqnarray} which is to be added to Eqs.~(\ref{eq:zmvfn}) or (\ref{eq:ffn}), with $e_h$ denoting the fractional heavy-quark charge and using $T_F =1/2$. The effect of the 2-mass contributions rises at small $x$ and large $Q^2$, being more pronounced for the case of $b$-quark production, cf. Fig.~\ref{fig:tm}. For the kinematics of the proposed lepton-proton LHeC collider it reaches up to $\sim 3\%$, which has an impact on the phenomenology of heavy-quark production. As demonstrated in Ref.~\cite{Blumlein:2018jfm}, the two-mass diagrams at the two-loop order have the largest effects for the $b$- and $c$-quark distribution at large $Q^2$. The respective PDFs can be obtained by adding the two-mass contributions to the OMEs in Eq.~(\ref{eq:VFNS-hq}). Comparing the heavy-quark PDFs with and without the two-mass effects included, one finds that the effect is negative: $b$-quark distributions with the two-mass contributions included are reduced by 2\% to 6\% in the range of $Q^2$ from 30 to $10000~\ensuremath{\,\mathrm{GeV}}^2$ at small $x$, $x=10^{-4}$; for the $c$-quark distribution the relative variations are smaller, amounting to 1\% to 4\% for $Q^2 = 100~\ensuremath{\,\mathrm{GeV}}^2$ to $10000~\ensuremath{\,\mathrm{GeV}}^2$ and $x=10^{-4}$. In precision fits these two-mass effects have consequences for all PDFs and require the use of a different VFN scheme compared to those with the decoupling of a single heavy quark at a time, cf.~\cite{Blumlein:2018jfm}. At this point, however, we leave detailed studies of VFN schemes with two massive quarks, i.e., with the simultaneous transition $f_i(n_f) \to f_i(n_f+2)$ of the PDFs, to future studies.
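The size and $Q^2$-growth of Eq.~(\ref{eq:tm}) are straightforward to explore numerically. Below is a minimal sketch of the convolution, in which a toy gluon distribution stands in for a fitted PDF; the gluon parametrization, the value of $a_s$, and all numbers are illustrative assumptions, not fit results:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gluon(x, Q2):
    # toy gluon PDF g(x,Q2) -- an assumption for illustration only
    return 3.0 * x**(-1.2) * (1.0 - x)**5

def F2h_two_mass(x, Q2, eh2=4.0/9.0, a_s=0.03,
                 mc2=1.27**2, mb2=4.18**2, TF=0.5):
    """Two-mass O(a_s^2) contribution to F_{2,h}, Eq. (tm)."""
    # convolution of the kernel z^2+(1-z)^2 with the gluon, dz/z measure
    conv, _ = quad(lambda z: (z**2 + (1.0 - z)**2) * gluon(x / z, Q2) / z,
                   x, 1.0)
    return (-eh2 * a_s**2 * (16.0 / 3.0) * TF**2 * x
            * np.log(Q2 / mc2) * np.log(Q2 / mb2) * conv)

print(F2h_two_mass(1e-4, 100.0))   # negative; grows ~ln^2(Q^2) at fixed x
\end{verbatim}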
\section{Conclusions} \label{sec:concl} The precise description of the parton content in the proton across a large range of scales is an important ingredient in precision phenomenology. The treatment of heavy quarks with a mass $m_h$ requires adapting the number of light flavors in QCD to the kinematics under consideration, set by the factorization scale $\mu$, which is typically associated with the hard scale of the scattering process. Within the ABMP16 global PDF fit, the FFN scheme with $n_f=3$ light flavors provides a good description of the existing world DIS data, while the LHC processes are typically described with $n_f=5$ massless flavors by implementing decoupling of heavy quarks and a transition from 3- to 4- or 5-flavor PDFs, including the possibility for the resummation of large logarithms in $Q^2/m_h^2$. To check the effects of such a resummation on the analysis of existing DIS data we have studied the $c$-quark PDF, constructed with the help of massive OMEs in QCD, and we have quantified differences between the use of perturbation theory at fixed order and subsequent evolution. We have found that the impact of the PDF evolution as used in the BMSN prescription of the VFN scheme is sizable and rather $x$-dependent than $Q^2$-dependent, indicating little impact of the large-log resummation on heavy-quark production at realistic kinematics. Moreover, these differences must be considered an inherent theoretical uncertainty of VFN schemes, since using NLO or NNLO accuracy for the evolution leads to significantly different results due to a mismatch in the orders of perturbation theory between the heavy-quark OMEs and the accuracy of the evolution equations. Likewise, and related, the choice of the matching point position employed in the VFN schemes has an impact on the heavy-quark PDFs and therefore brings an additional uncertainty. With the help of variants of the ABMP16 PDF fit, we have confronted the FFN scheme and different realizations of VFN schemes (FOPT, evolved at NLO, evolved at NNLO) in the BMSN approach with the combined HERA data and with DIS $c$-quark production data. The FFN scheme delivers a very good description of those data and we have found little need for the additional resummation of large logarithms in the kinematic range covered by HERA. From the fit variants, we have also determined the gluon and the total light-flavor sea quark distributions, illustrating again the sizable numerical differences obtained by adopting the respective VFN scheme variants. Depending on the value of $x$, the observed differences for the gluon PDF are well outside the experimental uncertainties at low factorization scales and persist at high scales of ${\cal O}(100)~\ensuremath{\,\mathrm{GeV}}$ as well. The VFN scheme choices are, therefore, highly relevant for LHC phenomenology and affect the predictions for the hadro-production of massive particles within a margin of a few percent. In summary, despite being applicable in a limited kinematic range, the FFN scheme works very well for the modern PDF fits and carries a much smaller theoretical uncertainty than the VFN schemes currently available. As an avenue of future development, the latter will benefit from improving the perturbative accuracy of the massive OMEs used, including their NNLO corrections, which are known exactly or to a good approximation. Other features of VFN schemes to be improved concern the simultaneous decoupling of bottom and charm quarks, which is advisable due to the close proximity of the mass scales $m_b$ and $m_c$.
We leave these issues for future studies. \acknowledgments \noindent This work has been supported in part by Bundesministerium f\"ur Bildung und Forschung (contract 05H18GUCC1) and by the EU ITN network SAGEX agreement No. 764850 (Marie Sk\l{}odowska-Curie).
\section{Introduction} \vspace{-7pt} Probabilistic constellation shaping (PCS) is a technique that offers a shaping gain of up to 1.53~dB signal-to-noise ratio (SNR) for the linear additive white Gaussian noise (AWGN) channel \cite{Forney}. In fiber-optic communications, the first use of PCS, to the best of our knowledge, was to reduce the peak-to-average power in OFDM systems via Trellis shaping \cite{HellerbrandTrellisShaping}. The first papers to employ PCS to achieve a shaping gain were published in 2012 \cite{Smith2012JLT_CodedModulation,beygi2012adaptive}. PCS has attracted wide interest since the proposal of the probabilistic amplitude shaping (PAS) framework in 2014/15 \cite{GeorgTComm}. The first demonstrations were published the same year \cite{myShapingOFC,ShapingPDP,myShapingPTL}. Since then, countless papers have been written on PCS, and it has been implemented in commercial high-performance digital signal processors (DSPs). \vspace{-9pt} \section{Fundamentals of PCS} \vspace{-7pt} The success story of PCS has several reasons. The underlying PAS architecture enables a low-complexity integration of PCS into existing coded modulation schemes with off-the-shelf binary forward error correction (FEC). Furthermore, a shaping gain of approx. 1~dB for high-order quadrature amplitude modulation (QAM) formats can be a significant benefit for optimized fiber-optic communication systems. Another important feature of PCS is rate adaptivity, which means that the throughput can be varied by changing the shaping distribution while keeping QAM order and FEC overhead fixed. This makes it possible to utilize the available spectrum dynamically and efficiently. The processing block that enables PAS is the distribution matcher (DM), which transforms a block of $k$ uniformly distributed input bits into a sequence of $n$ shaped amplitudes of the desired distribution. The initially proposed constant-composition distribution matcher (CCDM) \cite{Schulte2016TransIT_DistributionMatcher} is asymptotically optimal, yet its finite-length rate loss has been shown to be suboptimal \cite{MPDM}, which is why long CCDM blocks would be beneficial. All CCDM algorithms available in the literature are, however, sequential \cite{PASR}. Obviously, the combination of long blocks and sequential algorithms is challenging for real-time implementation. Research effort was put into finding algorithms that had the lowest rate loss at a fixed block length $n$ or offered some implementation benefit. \vspace{-9pt} \section{Nonlinear Interference for PCS} \vspace{-7pt} For the nonlinear fiber channel, PCS has been examined in theory, simulations, and experiments \cite{myJLT,JulianJLT,ESS_Karim,ESS_Sebastiaan}. A decrease in effective SNR after DSP compared to uniform signaling is observed due to the increased kurtosis of the shaped constellation, yet this only marginally reduces the shaping gain that could be achieved over a linear channel \cite{myJLT}. Most studies of the impact of PCS on nonlinear interference (NLI) do not focus on the above-mentioned finite-length aspects and thus use a simplified PAS setup. A common PAS emulation technique is to simply draw the QAM symbols according to the desired distribution without considering DM implementation aspects at all. A slightly more realistic approach is to consider very long CCDM sequences of several thousand or tens of thousands of amplitudes, neglecting whether such long blocks are implementable in practice.
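To make the finite-length considerations concrete: a CCDM block of length $n$ with symbol counts $n_1,\ldots,n_m$ can address at most $\lfloor \log_2 \binom{n}{n_1,\ldots,n_m} \rfloor$ input bits, so its rate loss is the gap to the entropy of the target distribution. A minimal sketch (the helper function and the example compositions are ours, chosen for illustration):
\begin{verbatim}
import math

def ccdm_rate_loss(counts):
    """Rate loss (bits/amplitude) of one constant-composition DM block."""
    n = sum(counts)
    num_seq = math.factorial(n)          # multinomial coefficient:
    for c in counts:                     # number of addressable sequences
        num_seq //= math.factorial(c)
    k = math.floor(math.log2(num_seq))   # input bits per block
    entropy = -sum(c/n * math.log2(c/n) for c in counts)
    return entropy - k / n

# distribution [0.4, 0.3, 0.2, 0.1] at two block lengths:
print(ccdm_rate_loss([4, 3, 2, 1]))          # n=10: large rate loss
print(ccdm_rate_loss([400, 300, 200, 100]))  # n=1000: much smaller
\end{verbatim}
For this distribution the rate loss drops by more than an order of magnitude between $n=10$ and $n=1000$, which is why long blocks are attractive on a linear channel.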
In the first paper to report on the finite-length behavior of PCS with a CCDM \cite{ESS_Karim}, a block-length dependence of SNR was found in simulations, which was later confirmed in experiments \cite{ESS_Sebastiaan}. The SNR decreases with $n$, i.e., short blocks mitigate NLI and thus give higher SNR than long blocks. Taking into account the finite-length rate loss, the achievable information rate (AIR) \ensuremath{\text{AIR}_n}\xspace for length-$n$ DMs \cite{MPDM} is maximized at a finite $n$. This is in stark contrast to a linear channel where SNR is independent of $n$ and AIR increases with $n$ because longer blocks generally lead to lower rate loss. For multi-span WDM fiber simulations \cite{myOFC2020}, Fig.~\ref{fig:SNR_GMI_n} shows an SNR decrease by 0.8~dB between $n=10$ and $n=5000$ as well as an AIR optimum at approximately 500~symbols. When NLI is turned off in the simulations, the SNR becomes independent of $n$ and \ensuremath{\text{AIR}_n}\xspace{} increases with $n$. \begin{wrapfigure}{R}{0.7\textwidth} \vspace*{-\baselineskip} \centering \inputtikz{SNR_n} \inputtikz{GMI_n} \vspace*{-\baselineskip} \caption{\label{fig:SNR_GMI_n} Effective SNR after DSP (left) and \ensuremath{\text{AIR}_n}\xspace{} (right) vs. $n$ for shaped 64QAM. A regular nonlinear fiber setup (solid red) and a linearized fiber without NLI (dashed blue) are considered. The SNR for uniform 64QAM is shown as reference.} \vspace*{-\baselineskip} \end{wrapfigure} The NLI mitigation at short block lengths can be explained as follows for CCDM sequences \cite{myCCJLT2019}. We keep the amplitude distribution fixed such that a concatenation of CCDM blocks of varying $n$ always has the same average distribution. The only difference is in how many sub-blocks (having constant composition) the compound sequence consists of. The reason for the NLI mitigation must thus lie in temporal properties that are introduced by short-length PCS but are not necessarily present for long blocks. An illustrative explanation is presented in Fig.~\ref{fig:CCDM_block_example} for the distribution $[0.4, 0.3, 0.2, 0.1]$ of four shaped amplitudes $[\alpha,\beta,\gamma,\delta]$. The compound sequence comprising 30 amplitudes is generated either by concatenating three blocks of length $n=10$ each or by a single CCDM with $n=30$. For $n=10$, each of the three blocks must, for example, contain the symbol $\delta$ once, which imposes a certain temporal structure in the overall compound sequence that is not present for $n=30$. As shown in Fig.~\ref{fig:CCDM_block_example} for $n=30$, the second and third $\delta$ are intra-block neighbors, which is not possible for $n=10$. We conclude that short-length CCDM introduces a temporal structure that limits the clustering of identical symbols, which in turn leads to NLI mitigation. \vspace{-9pt} \section{Implications and Open Questions} \vspace{-7pt} With such an NLI mitigation, the ``longer is better'' paradigm of PCS is apparently not true for the nonlinear fiber channel. This is particularly beneficial for hardware implementation as power consumption and latency requirements can become less stringent. The AIR, however, is relatively insensitive to using too long blocks (see Fig.~\ref{fig:SNR_GMI_n}), so the performance loss compared to the optimum is small when using too large block lengths. As it is the temporal structure that leads to NLI mitigation, any DSP block that modifies the transmit sequence can significantly reduce this benefit, and interleaving is such an operation.
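Before discussing interleaving in detail, the clustering mechanism itself can be verified with a short Monte-Carlo sketch (our own toy model: ideal CCDM blocks are represented as uniformly random constant-composition permutations, and we simply count adjacent identical symbols):
\begin{verbatim}
import random

def ccdm_block(counts):
    """Random constant-composition sequence with the given symbol counts."""
    seq = [sym for sym, c in enumerate(counts) for _ in range(c)]
    random.shuffle(seq)
    return seq

def identical_neighbors(seq):
    return sum(a == b for a, b in zip(seq, seq[1:]))

random.seed(0)
T = 20000
short = sum(identical_neighbors(ccdm_block([4, 3, 2, 1])
            + ccdm_block([4, 3, 2, 1])
            + ccdm_block([4, 3, 2, 1])) for _ in range(T)) / T
long_ = sum(identical_neighbors(ccdm_block([12, 9, 6, 3]))
            for _ in range(T)) / T
print(short, long_)   # approx. 6.6 vs. 8.0 identical-neighbor pairs
\end{verbatim}
The concatenated short blocks produce measurably fewer adjacent identical symbols than the single long block, in line with the argument around Fig.~\ref{fig:CCDM_block_example}.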
In a modern DSP architecture such as in 400ZR \cite{400ZR}, two interleaving stages exist. A bit interleaver is placed between the inner and outer FEC to break up any error bursts from unsuccessfully decoding an inner FEC codeword. Additionally, a symbol interleaver is placed after QAM mapping to achieve polarization and phase diversity, which means that an FEC word is spread over both polarizations and also over time such that it experiences different amounts of phase noise. The shuffling of the bit interleaver can be undone by de-interleaving after the inner FEC encoding and re-applying the interleaving before the inner FEC decoding \cite{myOFC2020}. This ensures that burst errors of an inner codeword are still spread over several outer FEC blocks, yet the temporal structure required for NLI mitigation remains intact. While this is in principle feasible, it remains unclear how practical such an approach is and whether it has any other drawbacks. Regarding symbol interleaving, there is no straightforward way of keeping the diversity due to scrambling while achieving the NLI mitigation by short PCS. Thus, it remains an open question whether NLI mitigation or polarization and phase diversity has a higher overall impact on the system performance. \begin{figure}[!h] \vspace*{-0.9\baselineskip} \centering \inputtikz{CCDM_block_example} \vspace*{-0.6\baselineskip} \caption{\label{fig:CCDM_block_example} Concatenation of three CCDM blocks of length $n=10$ each (top) in comparison to a single CCDM block of length $n=30$ (bottom). Both cases give the same average distribution, but their temporal properties differ.} \vspace*{-\baselineskip} \end{figure} \vspace{-19pt}
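As a footnote to the interleaving discussion, the order-restoring de-/re-interleaving trick is easy to sanity-check (a toy sketch with a plain random permutation; all FEC processing is omitted, so this only illustrates that the permutation pair leaves the transmitted order intact):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 32
pi = rng.permutation(n)        # bit-interleaver permutation
inv = np.argsort(pi)           # de-interleaver (inverse permutation)

dm_bits = rng.integers(0, 2, n)     # DM output: temporal order matters
interleaved = dm_bits[pi]           # what the inner FEC encoder sees
line = interleaved[inv]             # de-interleave after inner encoding
assert np.array_equal(line, dm_bits)    # line order = DM order
rx = line[pi]                       # re-interleave before inner decoding
assert np.array_equal(rx, interleaved)  # inner decoder sees its order
\end{verbatim}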
\section{Introduction and summary} One of the most exciting developments the past few years, is the discovery of exactly solvable models of quantum gravity, starting with Kitaev's SYK models \cite{Kitaevtalks,*Sachdev:1992fk,*Polchinski:2016xgd,*Maldacena:2016hyu}, going through bulk Jackiw-Teitelboim (JT) gravity \cite{Jackiw:1984je,*Teitelboim:1983ux,Almheiri:2014cka,*Jensen:2016pah,*Maldacena:2016upp,*Engelsoy:2016xyb} and its correlation functions \cite{Bagrets:2016cdf,Stanford:2017thb, Mertens:2017mtv,Lam:2018pvp,Mertens:2018fds,Blommaert:2018oro,Kitaev:2018wpr, *Yang:2018gdb,Iliesiu:2019xuh}, and leading to the inclusion of higher genus and random matrix descriptions \cite{Saad:2019lba}, making contact with the black hole information paradox in its various incarnations \cite{Saad:2019pqd, Almheiri:2019qdq, *Penington:2019kki, Marolf:2020xie}. It goes without saying that finding other models that are solvable to the same extent would be highly valuable, in particular to test the robustness of the ideas. For example, it is important to have a similar non-perturbative definition of theories of gravity as in \cite{Saad:2019lba} that are also coupled to matter. In the same work \cite{Saad:2019lba}, it was proposed that JT gravity can be viewed as a parametric limit of the older minimal string model. The latter can be viewed as a double-scaled matrix integral \cite{Brezin:1990rb, *Douglas:1989ve, *Gross:1989vs} that in the continuum description becomes a non-critical string theory described by Liouville CFT, coupled to a minimal model and the $bc$ ghost sector. We will call this combination \emph{Liouville gravity} in what follows. Since there is a substantial amount of evidence in favor of a random matrix description of these models, finding JT gravity within a limiting situation illustrates that it is in hindsight not a surprise at all that JT gravity is a matrix integral. \\~\\ In this work, we will develop these UV ancestors of JT gravity in more detail. We will enlarge our scope slightly: instead of restricting to only minimal models to complete the Liouville CFT, we will consider a generic matter CFT for the first few sections. In that case, we do not have a (known) matrix description to guide us. At times, we will restrict to the minimal string and find perfect agreement between continuum and matrix descriptions. A particular emphasis is placed on correlation functions within these theories and how precisely they approach the JT correlation functions in a certain limit. We also highlight how the Riemann surface description of JT gravity at higher topology also generalizes (in fact, quantum deforms) to these models leading to generalizations of the Weil-Petersson (WP) volumes to glue surfaces together. \\~\\ Let us sketch the set-up in more detail. Consider a disk-shaped worldsheet with coordinates $(z,\bar{z})$ and boundary coordinate $x$. Within Liouville gravity, we are allowed to insert closed string tachyon vertex operators $\mathcal{T}_{i}$ and open string tachyon vertex operators $\mathcal{B}_{i}$. 
Denoting these operator insertions collectively by $\mathcal{O}$, we will define the disk amplitudes $\mathcal{A}_\mathcal{O}(\ell_1,\ldots, \ell_n)$ with fixed length boundaries $\ell_1 \hdots \ell_n$ (see discussion around \eqref{introbdycond} for more details on the boundary conditions) as \begin{figure}[h] \centering \raisebox{18mm}{$\mathcal{A}_\mathcal{O}(\ell_1,\ldots, \ell_n) \quad = \quad$} \begin{tikzpicture}[scale=0.8] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[fill] (-0.5,0) circle (0.06); \node at (0,-1.985) {}; \node at (-0.1,0) {\small $\mathcal{T}_1$}; \draw[fill] (0.5,0) circle (0.06); \node at (0.9,0) {\small $\mathcal{T}_2$}; \node at (0.75,-0.5) {\small $...$}; \draw[fill] (-1.3,-0.75) circle (0.06); \node at (-1.65,-0.8) {\small $\mathcal{B}_1$}; \node at (-1.85,0) {\small $\ell_1$}; \node at (-1,1.6) {\small $\ell_2$}; \node at (1,1.6) {\small $\ell_3$}; \draw[fill] (-1.3,0.75) circle (0.06); \node at (-1.65,0.8) {\small $\mathcal{B}_2$}; \draw[fill] (0,1.5) circle (0.06); \node at (0,1.8) {\small $...$}; \end{tikzpicture} \end{figure} \noindent Since the string worldsheet theory is treated as 2d gravity (by imposing the Virasoro constraints), the operator insertions of interest $\mathcal{B}_i$ and $\mathcal{T}_i$ have to be worldsheet coordinate-invariant. The familiar strategy from string theory is to restrict these to conformal weight one (in both holomorphic and anti-holomorphic sectors), and then integrate them over the entire worldsheet: \begin{equation}\label{defopintro} \mathcal{B}= \oint_{\partial \Sigma} dx \, \Phi_{\rm M}(x)e^{\beta \phi(x)} ,\qquad \mathcal{T}= \int_\Sigma d^2z \,\mathcal{O}_{\rm M}(z,\bar{z}) e^{2 \alpha \phi(z,\bar{z})}. \end{equation} Here $\Phi_{\rm M}$ and $\mathcal{O}_{\rm M}$ denote boundary and bulk matter operators, $\phi$ is the Liouville field (scale factor in physical metric) and the parameters $\beta$ and $\alpha$ are tuned to the matter operator to make the integrand marginal in both cases. These operators will be labeled by the Liouville parameters corresponding to the matter operators $\alpha_M$ and $\beta_M$ (see \eqref{bulklioupar2} and \eqref{bdylioupar} for the definition). The conventional interpretation of these formulas is that the bare matter operators $\Phi_{\rm M}$ and $\mathcal{O}_{\rm M}$ (as objects in only the matter CFT), are gravitationally dressed by the Liouville vertex operators $e^{\beta \phi(x)}$ and $e^{2 \alpha \phi(z,\bar{z})}$ to produce observable worldsheet diff-invariant operators. From this perspective, the matter fields are the more fundamental objects and we will indeed reach this conclusion throughout our work as well. As well-known in string theory, we can use the SL$(2,\mathbb{R})$ isometries of the disk to gauge-fix the worldsheet location of three degrees of freedom (where a bulk operator counts as two, and a boundary operator as one). If one has more operator insertions, there are non-trivial integrations left over the moduli space of the punctured disk. Throughout this work, we only focus on the case without moduli integration. This leaves only four disk configurations which we explicitly investigate. In the final section of this work, we investigate higher topology, and in particular the annulus diagram which has a single worldsheet modulus. \\~\\ It should be emphasized that the worldsheet boundary coordinates $x_i$ (and their moduli) and the physical distances $\ell_i$ are distinct. 
They are only related by the non-local (and not so restrictive) constraints: \begin{equation}\label{introbdycond} \ell_i = \int_{x_i}^{x_{i+1}}dx\hspace{0.1cm} e^{b\phi(x)} \end{equation} in terms of the Liouville field $\phi$ appearing in the Liouville gravity models we will consider. For all disk cases we study, the worldsheet coordinate $x$-dependence drops out due to gauge-fixing, but the final result depends explicitly on the physical distances $\ell$. In this sense, even though boundary operators are integrated over the worldsheet as in \eqref{defopintro}, they behave as local insertions in the physical space and their gravitational dressing has the effect of fixing geodesic distances between them. Moreover, even though the worldsheet theory is a CFT, the boundary amplitudes as a function of physical lengths do not respect conformal symmetry (see for example \eqref{twoa} below). For the annulus amplitude, there is a single worldsheet modulus $\tau$ that needs to be integrated over. Doing so leads in the end to an amplitude that depends on the physical lengths of both boundaries of the annulus. \\~\\ Next we present a summary of the main results regarding fixed length amplitudes, some known, some new, that are computed in this paper. We introduce the quantities: \begin{equation} \mu_B(s) = \kappa \cosh 2\pi b s,~~~\kappa \equiv \frac{\sqrt{\mu}}{\sqrt{\sin \pi b^2}}, \end{equation} where $\mu$ is the bulk cosmological constant, $\mu_B(s)$ is the boundary cosmological constant for FZZT boundaries labeled by $s$, and $b$ is defined through the central charge of the Liouville field $c_{\rm L}=1+6Q^2$, with $Q=b+1/b$. \paragraph{Partition Function:} We compute the marked partition function \begin{eqnarray} Z(\ell) = N \mu^{\frac{Q}{2b}} \int_{0}^{\infty} ds ~e^{-\ell \mu_B(s)}\rho(s) , \end{eqnarray} where we define the spectral weight \begin{equation} \rho(s)\equiv \sinh 2\pi b s \sinh\frac{2\pi s}{b}, \end{equation} which coincides with the Virasoro modular S-matrix $S_0{}^s=\rho(s)$, and $N$ is a length-independent normalization. After performing the integral, the partition function can be put in the more familiar form $Z(\ell) \sim \frac{1}{\ell} \mu^{\frac{1}{2b^2}}K_{1/b^2}(\kappa \ell)$. This quantity was previously obtained by \cite{Fateev:2000ik} (and from the dual matrix integral by \cite{Moore:1991ir}). We present a more systematic derivation, which is more useful for generalizing to correlation functions. \\ Following \cite{Saad:2019lba}, we interpret $\mu_B(s)$ as the energy of the boundary theory dual to the bulk gravity, $\rho(s)$ as a density of states, and $\ell$ as an inverse temperature. \paragraph{Bulk one-point function:} We compute the fixed length partition function with a bulk insertion $\mathcal{T}_{\alpha_M}$, where $P$ is the Liouville momentum associated to $\alpha_M$. This can be depicted as \begin{equation} \left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell =~ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[fill] (0,0) circle (0.06); \node at (-1.8,0.2) {\small $\ell$}; \node at (0.5,0) {\small $\mathcal{T}$}; \end{tikzpicture} \end{equation} Repeating the previous procedure we obtain \begin{equation} \label{eq:b1} \left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell = \frac{2}{b} \int_{0}^{\infty} ds\hspace{0.1cm} e^{-\ell \mu_B(s)} \cos 4 \pi P s.
\end{equation} The integrand coincides with the Virasoro modular S-matrix $S_P{}^s=\cos 4 \pi P s$. We interpret the bulk operator as creating a defect (for $P$ imaginary) or a hole (for $P$ real) in the physical space. This interpretation is consistent with classical solutions of the Liouville equation, and also becomes clear in the JT gravity limit \cite{Mertens:2019tcm}. \paragraph{Boundary two-point function:} The two-point function between boundary operators, labeled by $\beta_M$, inserted between segments of fixed physical length is defined from the following diagram \begin{equation} \hspace{-0.2cm}\mathcal{A}_{\beta_M}(\ell_1,\ell_2) =~ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[fill] (-1.5,0) circle (0.06); \draw[fill] (1.5,0) circle (0.06); \node at (-1.9,0) {\small $\mathcal{B}$}; \node at (1.9,0) {\small $\mathcal{B}$}; \node at (0,-1.8) {\small $\ell_1$}; \node at (0,1.8) {\small $\ell_2$}; \end{tikzpicture} \end{equation} We obtain \begin{equation} \label{twoa} \mathcal{A}_{\beta_M}(\ell_1,\ell_2)= N_{\beta_M} \int ds_1 ds_2 \rho(s_1) \rho(s_2)\hspace{0.05cm} e^{-\mu_B(s_1)\ell_1} e^{-\mu_B(s_2)\ell_2}\hspace{0.05cm}\mathcal{M}_{\beta_M}(s_1,s_2)^2, \end{equation} where $N_{\beta_M}$ is a length-independent constant and we define the amplitude \begin{equation} \label{twoam} \mathcal{M}_{\beta_M}(s_1,s_2) \equiv \frac{\prod_{\pm\pm}S_b\left(\beta_M \pm i s_1 \pm i s_2\right)^{1/2}}{S_b(2\beta_M)^{1/2}}, \end{equation} where $S_b(x)$ is the double sine function. Its definition and properties that will be relevant in this paper can be found in Appendix B.1 of \cite{Mertens:2017mtv}. The appearance of this structure was derived somewhat cavalierly in \cite{Mertens:2019tcm}, and we substantiate it here. Following \cite{Saad:2019lba}, the amplitude $\mathcal{M}_{\beta_M}(s_1,s_2)$ can be interpreted as a matrix element of operators in the dual boundary theory between energy eigenstates. We interpret this result as an exact expression for the gravitational dressing by Liouville gravity of boundary correlators (notice that the boundary lengths are not necessarily large and therefore this corresponds to gravity in a finite spacetime region). Another motivation for studying these correlators is the resemblance to exact results in double-scaled SYK derived in \cite{Berkooz:2018jqr,*Berkooz:2018qkz,*Berkooz:2020xne}, which we hope to come back to in future work.
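As a side remark (a consistency check, not needed in what follows): substituting $u=2\pi b s$ in \eqref{eq:b1} and using the integral representation $K_{\nu}(z)=\int_0^\infty dt\, e^{-z\cosh t}\cosh \nu t$ of the Macdonald function, the bulk one-point function evaluates in closed form,
\begin{equation}
\left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell = \frac{2}{b} \int_{0}^{\infty} ds\, e^{-\kappa \ell \cosh 2\pi b s} \cos 4 \pi P s = \frac{1}{\pi b^2}\, K_{2iP/b}(\kappa \ell),
\end{equation}
a Macdonald function of imaginary order for real $P$, exponentially damped at large $\kappa\ell$.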
\paragraph{Boundary three-point function:} The fixed length boundary three-point function is defined as \begin{equation} \hspace{-0.2cm}\mathcal{A}_{123}(\ell_1,\ell_2,\ell_3) =~ \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[fill] (0,1.5) circle (0.06); \draw[fill] (-1.3,-0.75) circle (0.06); \draw[fill] (1.3,-0.75) circle (0.06); \node at (-1.7,-0.8) {\small $\mathcal{B}_1$}; \node at (1.75,-0.8) {\small $\mathcal{B}_3$}; \node at (0,-1.85) {\small $\ell_1$}; \node at (-1.6,1) {\small $\ell_2$}; \node at (1.6,1) {\small $\ell_3$}; \node at (0,1.85) {\small $\mathcal{B}_2$}; \end{tikzpicture} \vspace{-0.4cm} \end{equation} and we get \begin{eqnarray} \label{threea} \mathcal{A}_{123}(\ell_1,\ell_2,\ell_3) &=& N_{\beta_1\beta_2\beta_3} \int ds_1 ds_2 ds_3 \rho(s_1) \rho(s_2)\rho(s_3) e^{- \mu_B(s_1)\ell_1}e^{- \mu_B(s_2)\ell_2}e^{- \mu_B(s_3)\ell_3} \nonumber\\ &&\times \mathcal{M}_{\beta_{M2}}(s_2,s_3)\mathcal{M}_{\beta_{M1}}(s_1,s_2)\mathcal{M}_{\beta_{M3}}(s_1,s_3) \sj{\beta_{M1}}{\beta_{M2}}{\beta_{M3}}{s_3}{s_1}{s_2}, \end{eqnarray} where $N_{\beta_1\beta_2\beta_3}$ is a length-independent constant. The quantity appearing in the second line is the quantum-deformed $6j$ symbol computed by Teschner and Vartanov \cite{Teschner:2012em, *Vartanov:2013ima} (this quantity is proportional to a Virasoro fusion kernel). This expression gives the universal Liouville gravitational dressing of boundary three-point functions. \paragraph{Bulk-boundary correlator:} The fixed length bulk-boundary two-point function is defined by \begin{equation} \hspace{0cm}\mathcal{A}_{\alpha_M, \beta_M}(\ell) =\hspace{0.5cm} \begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, scale=0.7] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[fill] (1.5,0) circle (0.06); \node at (0,-1.985) {}; \node at (0,1.8) {\small $\ell$}; \draw[fill] (0,0) circle (0.06); \node at (-0.5,0) {\small $\mathcal{T}$}; \node at (1.9,0) {\small $\mathcal{B}$}; \end{tikzpicture} \vspace{-0.4cm} \end{equation} where $\alpha_M$ (with momentum $P$) and $\beta_M$ label the bulk and boundary insertions. We obtain \begin{align} \label{twoabb} \mathcal{A}_{\alpha_M, \beta_M}(\ell) = N_{\beta_M,P}\int_{0}^{+\infty} ds_1 ds_2 \rho(s_1) \rho(s_2) e^{-\mu_B(s_1) \ell} \, \frac{S_P{}^{s_2}}{S_0{}^{s_2}} \, \mathcal{M}_{\beta_M/2}(s_1,s_2)^2, \end{align} in terms of the Virasoro modular S-matrices defined above. \\~\\ We will also define the JT classical limits of these equations, where we will reproduce known expressions found in \cite{Mertens:2019tcm,Mertens:2017mtv,Iliesiu:2019xuh}. \\~\\ If we take the specific case of the minimal string (where the matter sector is a minimal model), we have the power of the matrix model at our disposal to aid our investigation. In particular, the set of minimal string boundary primaries corresponds to setting $\beta_M = -bj$, for $j\in \mathbb{N}/2$. The two-point amplitude \eqref{twoam} becomes degenerate (due to a singularity in the denominator) and, using the matrix description, we will derive the answer: \begin{equation} \mathcal{M}_{\beta_M}(s_1,s_2)^2 = (2j)! \sum_{n=-j}^{j}\frac{\delta(s_1-s_2-in b)}{\prod_{\stackrel{m=-j}{m\neq n}}^{j} (\cosh 2\pi b (s+i nb) - \cosh 2\pi b (s+imb))}. \end{equation} These delta-functions have to be interpreted as causing a contour shift within the double integral \eqref{twoa}.
One can also take the degenerate limit directly in \eqref{twoam} using quantum group methods, and we will find agreement. Taking the JT classical limit for these correlators, we find the degenerate Schwarzian bilocal correlators, for which the first case $j=1/2$ was studied in appendix D of \cite{Mertens:2019tcm}, and the generic case is studied in \cite{Mertens:2020pfe}. Next to these amplitudes, we also analyze multi-boundary amplitudes for the minimal string. A four-boundary example is drawn in Figure \ref{multiboundaryQ}. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{multiboundaryQ.pdf} \caption{Genus zero $n$-boundary loop amplitude (here $n=4$).} \label{multiboundaryQ} \end{figure} For $n$ circular boundaries, we find that the genus-$g$ amplitude is of the form: \begin{eqnarray}\label{eq:nloopcorr} \Big\langle \prod_{i=1}^{n} Z(\ell_i) \Big\rangle_{g, \, {\rm conn.}} \sim \prod_{i=1}^{n}\int_0^\infty \lambda_i d\lambda_i \tanh \pi \lambda_i \,V_{g,n}(\bm{\lambda}) \, \left\langle \mathcal{T}_{\alpha_{Mi}}\right\rangle_{\ell_i}, \end{eqnarray} where $\left\langle \mathcal{T}_{\alpha_{Mi}}\right\rangle$ is the bulk one-point function \eqref{eq:b1} with $P_i = b\lambda_i /2$ (which we interpret as a Liouville gravity trumpet partition function), and the quantity $V_{g,n}(\bm{\lambda})$ is a symmetric polynomial of order $n+3g-3$ in the $\lambda_i^2$, a quantum deformation of the WP volumes. The measure factor $\lambda_i d\lambda_i \tanh \pi \lambda_i$ generalizes the classical gluing formula for Riemann surfaces $b_i db_i$, where $b_i$ is the circumference of the gluing geodesic. Indeed, for large values of $\lambda_i$ (the classical JT limit), these formulas reduce to the classical WP gluing formulas. \\ In particular, we focus on the genus zero contributions, for which we give a general formula for the deformed volumes (and therefore, by taking the appropriate limit, an explicit formula for the classical WP volumes). For higher genus, we argue they also take the form \eqref{eq:nloopcorr}. It would be interesting to develop a more geometrical interpretation of this quantum deformation of the WP volumes. Such a derivation would confirm the choice of normalization of the one-point function and the integration measure in \eqref{eq:nloopcorr} \footnote{The ambiguity arises since, for example, the final answer (except for the special case of two boundaries and no handles) is unchanged under $d\mu(\lambda) \to f(\lambda) d\mu(\lambda)$ and $\langle \mathcal{T} \rangle \to f(\lambda)^{-1} \langle \mathcal{T} \rangle$, for an arbitrary $f(\lambda)$ that goes to one in the JT gravity limit. We argue below the choice in \eqref{eq:nloopcorr} is the most natural one.}. \\~\\ The organization of the paper and summary of some more results is as follows. In \textbf{section \ref{sec:review}} we give a quick review of the non-critical string, Liouville gravity and the minimal string. The knowledgeable reader can skip this section, although we do fix conventions and write down previous results that will be essential later on. In \textbf{section \ref{sec:diskZ}} we describe a systematic way to compute fixed length amplitudes and illustrate it by reproducing known formulas for the fixed length partition function. In \textbf{section \ref{sec:diskcorr}} we compute explicitly fixed length boundary correlation functions with and without bulk insertions. We also define and take the JT gravity limit of these observables.
\textbf{Section \ref{s:qg}} explains the structure of these equations as coming from a constrained version of the $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$ quantum group. In particular, the vertex function is reproduced from a 3j-symbol computation with Whittaker function insertions. In \textbf{section \ref{sec:MM}} we show for the case of the minimal string how to produce the correlators directly from the matrix model. We check that the quantum group formulas from the previous section lead to the same structure. Finally, in \textbf{section \ref{sec:othertopo}} we study other topologies. We give a streamlined derivation of the cylinder amplitude. We also review the exact result presented in \cite{Ambjorn:1990ji, Moore:1991ir} for the $n$ boundary-loop correlator at genus zero for the minimal string theory and discuss its decomposition in terms of gluing measures, bulk one-point functions and quantum deformed WP volume factors. By taking the JT gravity limit we give a very simple generating function of WP volumes for $n$ geodesic boundaries at genus zero. In \textbf{section \ref{sec:conclusions}} we end with a discussion and open problems for future work. In particular, we argue that the bulk gravity can be rewritten in terms of a 2d dilaton gravity model with a sinh dilaton potential. In the appendices, we include some related topics that would otherwise distract from the story. In particular, we discuss the role of poles in the complex $\mu_B$ plane as one transforms to fixed length amplitudes, we discuss degenerate bulk one-point functions, and degenerate (ZZ) branes as boundary segments. For the multi-boundary story for unoriented surfaces, we compute the crosscap spacetime contribution, which we show matches with a GOE/GSE matrix model calculation. \section{Non-critical strings and 2d gravity} \label{sec:review} This section contains review material on Liouville gravity and minimal string theory. We first discuss the bulk stories in \ref{s:qlg} and \ref{s:mst}, and then the boundary versions in \ref{s:bdy}. \subsection{Quantum Liouville gravity} \label{s:qlg} We study two-dimensional theories on Riemann surfaces $\Sigma$ with dynamical gravity by summing over all metrics $g_{\mu\nu}(x)$ (in Euclidean signature) modulo diffeomorphisms. We also add a matter theory with fields $\chi(x)$ living on the Riemann surfaces with action $S_M[\chi;g]$. The starting point is the path integral \begin{equation}\label{eq:deftheory} Z=\sum_{\rm topologies} \int \frac{\mathcal{D}g \mathcal{D}\chi}{{\rm Vol}({\rm Diff})} e^{ - S_M[\chi;g] - \mu_0 \int_\Sigma d^2x \sqrt{g}}, \end{equation} where $\mu_0$ is the bare cosmological constant. We will focus only on the case where the matter sector is a CFT with central charge $c_M$. We will also consider minimal models as the matter CFT, which might not have a path integral representation. Following \cite{Polyakov:1981rd, *Distler:1988jt, *David:1988hj} we can gauge-fix to conformal gauge $g_{\mu\nu}=e^{2 b\phi(x)} \hat{g}_{\mu\nu}(x)$ with $\phi$ a dynamical scale factor, $b$ a normalization to be fixed later, and $\hat{g}$ a fiducial metric. This has the effect of adding the usual $bc$-ghosts with central charge $c_{\rm gh} = -26$ and a Liouville mode coming in part from the conformal anomaly in the path integral measure and also from the bare cosmological constant.
One ends up with an action consisting of the matter on the fixed fiducial metric $S_M[\chi; \hat{g}]$, the ghost action, and a Liouville field theory with action \cite{Polyakov:1981rd} \begin{equation} S_L[\phi] = \frac{1}{4\pi} \int_{\Sigma} \left[ (\hat{\nabla} \phi)^2 + Q \hat{R} \phi + 4 \pi \mu e^{2 b \phi} \right] . \end{equation} This can be interpreted as CFTs living on the fiducial metric. It is important that the matter sector is a CFT so that no explicit interactions appear between matter and the Liouville field. The renormalized bulk cosmological constant is $\mu$ and scale invariance fixes the background charge $Q = b + b^{-1}$. The central charge of the Liouville mode is $c_L = 1+6Q^2$. The three sectors are coupled through the conformal anomaly cancellation \begin{equation} c_M + c_L + c_{\rm gh} =0. \end{equation} The results in this paper are mostly independent of the details of the matter CFT but we will refer to two cases for concreteness. We will analyze timelike Liouville CFT as matter, with action \begin{equation}\label{eq:timeLioaction} S_M[\chi] = \frac{1}{4\pi} \int_{\Sigma} \left[ -(\hat{\nabla} \chi)^2 - q \hat{R} \chi + 4 \pi \mu_M e^{2 b \chi} \right]. \end{equation} For simplicity we can also set its cosmological constant term $\mu_M$ to zero, in which case the theory becomes the usual Coulomb gas. The central charge for this theory is $c_M = 1- 6 q^2$. The matter and Liouville background charges are related through the anomaly cancellation \begin{equation} c_M + c_L =26,~~~\Rightarrow~~~q = 1/b - b, \end{equation} which for $\mu_M\neq 0$ is consistent with the choice of the exponential interaction in \eqref{eq:timeLioaction}. Indeed, $Q^2-q^2 = (b+b^{-1})^2 - (b^{-1}-b)^2 = 4$, so $c_M + c_L = 2 + 6(Q^2-q^2) = 26$ holds identically in $b$. This theory is equivalent to a Liouville CFT with $\tilde{b} = i b$, $\tilde{Q} = i q$ and $\tilde{\mu}=\mu_M$. The case with non-vanishing matter cosmological constant was analyzed in detail in \cite{Zamolodchikov:2005fy}. Now we will go through the construction of physical operators in these theories. First, generic bulk operators of the Liouville CFT and matter CFT, seen as two independent field theories, can be written as \begin{eqnarray}\label{bulklioupar1} {\rm Liouville:}&&~~~\hspace{0.1cm}~V_\alpha = \exp{(2 \alpha \phi)}~~~~~~~~\hspace{0.1cm}\Delta_\alpha = \alpha(Q-\alpha),\\ \label{bulklioupar2} \hspace{-0.3cm}{\rm Matter:}&&~~\mathcal{O}_{\alpha_M} = \exp{(2 \alpha_M \chi)}~~~~\Delta_{\alpha_M} = \alpha_M(q+\alpha_M), \end{eqnarray} where we also wrote their scaling dimension under worldsheet conformal transformations. It is customary to also introduce the Liouville momentum and energy $\alpha= Q/2 + i P$ and $ \alpha_M= -q/2 + i E$. These can be interpreted as target space energy and momentum $(E,P)$ in a Minkowski 2D target space $(X^0,X^1)=(\chi, \phi)$ with a linear dilaton background. If gravity were not dynamical, the only operators of the theory would be the matter $\mathcal{O}_{\alpha_M}$. When gravity is turned on, diffeomorphism-invariant observables are made out of physical operators that are marginal. The gravitational dressing necessary for this is achieved by combining matter and Liouville operators into the bulk vertex operator \begin{equation}\label{tachyondef} \mathcal{T}_{\alpha_M} \sim \int_{\Sigma} \hspace{0.1cm}\mathcal{O}_{\alpha_M}(x) \hspace{0.01cm}V_\alpha(x), \end{equation} with a normalization that will be fixed later.
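For later reference, the marginality requirement on such dressed operators can be summarized as (a worked one-line check of the relation quoted in the next paragraph)
\begin{equation*}
\Delta_{\alpha} = (b-\alpha_M)\left(b^{-1}+\alpha_M\right) = 1 - \alpha_M\left(b^{-1}-b+\alpha_M\right) = 1 - \Delta_{\alpha_M} \quad \text{for } \alpha = b - \alpha_M ,
\end{equation*}
using $q=b^{-1}-b$, so that $\Delta_{\alpha_M}+\Delta_\alpha=1$ indeed holds.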
After gauge fixing, we can replace the integral by a local insertion with the ghosts $\mathcal{T}_{\alpha_M} \sim c \bar{c} \hspace{0.1cm}\mathcal{O}_{\alpha_M} \hspace{0.01cm}V_\alpha$. In the context of non-critical string theory, these insertions create bulk tachyons which will be labeled by their matter content. The parameter $\alpha$ controlling the gravitational dressing is fixed through the relation \cite{Knizhnik:1988ak} \begin{equation} \Delta_{\alpha_M} + \Delta_\alpha = 1 ,~~~\Rightarrow~~~\alpha_+=b-\alpha_M,~~\alpha_-=\frac{1}{b}+\alpha_M. \end{equation} For fixed $\mathcal{O}_{\alpha_M}$ these two choices are related through $\alpha_+ = Q-\alpha_-$ and, up to reflection coefficients, create the same operator. For a given $\Delta_{\alpha_M}$ there are also two possible choices of $\alpha_M$ (related by $\alpha_M\to -q -\alpha_M$) giving four choices of pairs $(\alpha_M,\alpha)$ all related through Liouville reflection relations. In terms of momenta the dressing condition can be nicely summarized as $P^2 =E^2$, which is the on-shell condition of a massless field moving in the target space with 2-momentum $(E,P)$. Up to this identification between $\alpha_M$ and $\alpha$, when computing correlators of $\mathcal{T}_{\alpha_M}$ the answer factorizes into matter, Liouville and ghost contributions, before the integration over the moduli. A simple operator that we will use later is the area operator, which can be defined as $\hat{A} = \int_\Sigma V_b$. This can also be written after gauge fixing in the form of a tachyon vertex operator as above, which corresponds to picking the identity in the matter sector $\mathcal{T}_{\rm id} \sim c\bar{c}\hspace{0.1cm} V_b$. This operator measures the total area of the surface in terms of the physical metric. Before moving on, we will enumerate some special sets of operators in both the matter and Liouville sectors that will be useful to distinguish later on: \paragraph{Degenerate Liouville operators:} These operators, labeled by two positive integers $m\geq1$ and $n\geq1$, are defined through the parameter \begin{equation}\label{eq:liouvdeg} \alpha_{(m,n)}= - \frac{(n-1)b}{2} - \frac{(m-1)b^{-1}}{2},~~~{\rm and}~~\alpha_{(m,n)} \to Q-\alpha_{(m,n)}. \end{equation} \paragraph{Degenerate matter operators:} We can analogously define operators which are degenerate in the matter sector, also labeled by positive integers $m\geq1$ and $n\geq1$ \begin{equation}\label{eq:mattdeg} \alpha_{M(m,n)} =- \frac{(n-1) b}{2} + \frac{(m-1)b^{-1}}{2},~~~{\rm and}~~\alpha_{M(m,n)} \to - q-\alpha_{M(m,n)}. \end{equation} It is important to notice that these operators never appear together in a tachyon vertex operator. We can easily see from the expressions above that if the matter content corresponds to a degenerate operator, then the Liouville dressing will be generic. On the other hand, if the Liouville dressing is degenerate, the matter operator will be generic instead. We can easily see this in the semiclassical (also related to JT gravity) limit: \paragraph{Semiclassical limit:} Following \cite{Saad:2019lba} we will be interested in the limit $b\to0$ for which $c_M \to -\infty$ and $c_L \to \infty$. In this limit we will parametrize light matter operators as $\alpha_M = b h$, where $h$ is a continuous parameter which is held fixed in the $b\to0$ limit. They are dressed by Liouville operators with $\alpha = b(1-h)$. In this limit, $h$ corresponds to the dimension of the matter operator $\Delta_{\alpha_M} \to h$, while the Liouville field has $\Delta_\alpha \to 1-h$.
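In the same semiclassical parametrization one can verify the split of dimensions explicitly (a short worked check):
\begin{equation*}
\Delta_{\alpha_M} = bh\left(b^{-1}-b+bh\right) \xrightarrow{\,b\to 0\,} h , \qquad
\Delta_{\alpha} = b(1-h)\left(b+b^{-1}-b(1-h)\right) \xrightarrow{\,b\to 0\,} 1-h ,
\end{equation*}
so marginality $\Delta_{\alpha_M}+\Delta_\alpha=1$ is preserved term by term in the $b\to0$ limit.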
Degenerate matter operators have $h_{Mn} =\frac{1-n}{2}=0,-\frac{1}{2},-1,-\frac{3}{2},\ldots$, while Liouville degenerate operators have $h_{Ln} = \frac{1+n}{2}=1,\frac{3}{2}, 2, \ldots$. These carry a single index since the other set from \eqref{eq:liouvdeg} or \eqref{eq:mattdeg} becomes infinitely heavy. \subsection{Minimal string theory} \label{s:mst} In this section we review the definition of the minimal string theory. This corresponds to the same theory of 2D gravity as before, but the matter CFT now consists of the $M_{p,p'}$ minimal model, for any $p'>p\geq 2$ coprime. The Liouville-like parametrization of the physical quantities that characterize this theory will be very useful later. For example, the central charge can still be written as $c_M = 1 - 6q^2$, where $q=1/b-b$ and $b=\sqrt{p/p'}$, which also matches the parameter $b$ of the gravitational Liouville mode, canceling the conformal anomaly. The matter CFT for the $(p,p')$ minimal string has a discrete and finite set of operators $\mathcal{O}_{n,m}$. These can still be parametrized through the Liouville-like parameter $\alpha_M$. The spectrum of the minimal model consists of the matter degenerate states with label $\alpha_{M(n,m)}$ and dimension $\Delta_{n,m}$ given by \begin{equation} \mathcal{O}_{n,m}:~~~\alpha_{M(n,m)} =- \frac{(n-1)b}{2} + \frac{(m-1)b^{-1}}{2},~~~\Delta_{n,m} = \frac{(nb -m b^{-1})^2-(b^{-1}-b)^2}{4}. \end{equation} where $n=1,\ldots, p'-1$ and $m=1,\ldots, p-1$. Due to the reflection property the operators $\mathcal{O}_{n,m} \equiv \mathcal{O}_{p'-n,p-m}$ are identified, which gives a total of $(p'-1)(p-1)/2$ operators. For some purposes, it is useful to define a fundamental domain $(n,m)\in E_{p'p}$, defined by $1\leq n \leq p'-1$, $1\leq m \leq p-1$ and $p' m < p n$. We can construct physical string theory vertex operators $\mathcal{T}_{n,m}$ for each primary $\mathcal{O}_{n,m}$ by adding the gravitational dressing and integrating over the worldsheet as in equation \eqref{tachyondef}. Since we will need them later, we will quote results for the torus characters for these degenerate representations \begin{equation}\label{degcharacters} \chi_{n,m}(q) = \frac{1}{\eta(q)} \sum_{k\in\mathbb{Z}} (q^{a_{n,m}(k)}-q^{a_{n,-m}(k)}),~~~~a_{n,m}(k) =\frac{( 2 p' p k + p n - p' m)^2}{4 p'p}, \end{equation} where $q=e^{2\pi i \tau}$ and $\tau$ is the torus modulus. We will also need the modular S-matrix describing their transformation under $\tau \to - 1/\tau$, which is given by \begin{equation} S_{n,m}^{n',m'} = 2 \sqrt{\frac{2}{p'p}}(-1)^{1+mn'+n m'} \sin\Big( \pi \frac{p}{p'} n' n \Big)\sin\Big( \pi \frac{p'}{p}m'm \Big). \end{equation} More results regarding these representations such as their fusion coefficients $\mathcal{N}_{n_1,m_1;n_2,m_2}^{n_3,m_3}$ can be found in \cite{DiFrancesco:1997nk}. We will be mostly interested in the $(2,2\mathfrak{m}-1)$ minimal string, which is known to be dual to a single-matrix model \cite{Moore:1991ir}. This theory has $\mathfrak{m}-1$ bulk tachyons labeled by a single integer \begin{equation} \mathcal{T}_{n} \equiv \mathcal{T}_{n,1}\sim \int_{\Sigma} \hspace{0.1cm} \mathcal{O}_{n,1} \hspace{0.1cm} e^{2(b-\alpha_{M(n,1)})\phi}, \end{equation} where $n=1,\ldots, \mathfrak{m}-1$. The matter sector for these operators has parameter $\alpha_{M(n,1)} = \frac{1-n}{2}b$ and its Liouville dressing insertion has $\alpha_{n,1} = (1+n)b/2$. We have chosen these parameters in order to have a smooth semiclassical limit.
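For concreteness, the Kac dimensions and their reflection symmetry are simple to tabulate; below is a short numerical sketch for the $(2,5)$ case, i.e. the Lee--Yang matter of the $\mathfrak{m}=3$ minimal string (the script and its sample values are purely illustrative):
\begin{verbatim}
import numpy as np

p, pp = 2, 5                 # (p, p') = (2, 5): Lee-Yang matter
b = np.sqrt(p / pp)

def Delta(n, m):
    return ((n * b - m / b)**2 - (1 / b - b)**2) / 4

print(Delta(1, 1), Delta(2, 1))   # 0.0 and -0.2 (= -1/5)
# Kac reflection (n,m) -> (p'-n, p-m) leaves the dimension invariant:
print(np.isclose(Delta(2, 1), Delta(pp - 2, p - 1)))   # True
\end{verbatim}
The two inequivalent dimensions $\{0,-1/5\}$ are indeed the Lee--Yang Kac table, and $(p'-1)(p-1)/2 = 2$ counts the operators correctly.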
We will be interested in the $\mathfrak{m}\to \infty$ limit of the $(2,2\mathfrak{m}-1)$ minimal string, which is equivalent to JT gravity \cite{Saad:2019lba}. This limit, since $b=\sqrt{2/(2\mathfrak{m}-1)}$, corresponds to $c_M \to -\infty$ and $c_L \to \infty$. We will focus on `light' operators $\mathcal{T}_n$ with fixed $n$ in the $\mathfrak{m}\to\infty$ limit. These are the semiclassical operators defined in the previous section with parameter $h=n/2$. Another interesting limit is given by heavy operators with $n/\mathfrak{m}$ fixed in the large $\mathfrak{m}$ limit. \subsection{2D gravity on the disk} \label{s:bdy} We will be mainly interested in observables on the disk. We quickly review here results for Liouville theory with boundaries, focusing mostly on the gravitational part. The simplest boundary condition for the Liouville mode corresponds to the FZZT brane \cite{Fateev:2000ik}. This is labeled by a single parameter $\mu_B$ referred to as the boundary cosmological constant. A path integral representation is given by the Liouville Lagrangian plus the following boundary term \begin{equation} S_L^{\rm bdy} [\phi] = \frac{1}{2\pi} \oint_{\partial \Sigma} \left[ Q \hat{K} \phi + 2\pi \mu_B e^{b \phi} \right]. \end{equation} It is convenient to parametrize the boundary cosmological constant in terms of the FZZT parameter $s$ as \begin{equation} \mu_B = \kappa \cosh 2\pi b s,~~~~\kappa\equiv\frac{\sqrt{\mu}}{\sqrt{\sin \pi b^2}}. \end{equation} It will also be useful to keep the parameter $\kappa=\mu_B(s=0)$, with an implicit dependence on the bulk cosmological constant $\mu$ and $b$. In the case of timelike Liouville matter we can introduce analogous branes labeled by another continuous parameter we will call $\tilde{s}$. This boundary condition can be understood from the point of view of the boundary conformal bootstrap \cite{Fateev:2000ik}. Each boundary condition is related to a Liouville primary field with parameter $\alpha =\frac{Q}{2} + i s(\mu_B)$, analogously to the rational case \cite{Cardy:1989ir}. A different set of boundary conditions is given by the ZZ branes, which are labeled by degenerate representations \cite{Zamolodchikov:2001ah}. The FZZT boundary conditions can be represented through Cardy boundary states \cite{Cardy:1989ir} \begin{eqnarray} |{\rm FZZT}(s) \rangle &=& \int_0^{\infty} dP \hspace{0.1cm}\Psi_s(P) |P\rangle\hspace{-0.1cm}\rangle,\\ \Psi_s(P) &=&(\pi \mu \gamma(b^2))^{-iP/b} \frac{\Gamma(1+2i Pb)\Gamma(1+2iP/b)}{2^{1/4}(-2 i \pi P)} \cos 4 \pi s P \end{eqnarray} where $|P\rangle\hspace{-0.1cm}\rangle$ denotes the Ishibashi state \cite{Ishibashi:1988kg} corresponding to the primary $P$ and the wavefunction $\Psi_s(P)$ was found in \cite{Fateev:2000ik}. A similar set of branes can be defined for the matter sector when written as a time-like Liouville theory. In the case of the minimal string we can also write boundary conditions in terms of boundary states. Their form for the minimal model sector is \begin{equation} |n,m \rangle = \sum_{n',m'} \frac{S_{n,m}^{n',m'}}{(S_{1,1}^{n',m'})^{1/2}} |n',m' \rangle\hspace{-0.1cm}\rangle, \end{equation} written in terms of the modular S-matrix. They are also labeled by primary operators \cite{Cardy:1989ir}. We will be interested in the case of bulk and boundary correlators on the disk (following for example \cite{Kostov:2003uh}).
The Liouville parametrization of boundary-changing operators is \begin{eqnarray} {\rm Liouville:}&&~~~\hspace{0.1cm}~B_\beta^{s_1s_2} = \exp{( \beta \phi)}~~~~~~~~\hspace{0.1cm}\Delta_\beta = \beta(Q-\beta),\\ \label{bdylioupar}\hspace{-0.3cm}{\rm Matter:}&&~~~~~\Phi_{\beta_M}^{\tilde{s}_1\tilde{s}_2} = \exp{(\beta_M \chi)}~~~~~\hspace{-0.06cm}\Delta_{\beta_M} = \beta_M(q+\beta_M), \end{eqnarray} where we indicated explicitly the boundary conditions $s_i$/$\tilde{s}_i$ between which these operators interpolate. With this normalization, degenerate operators for both theories can be written in terms of the same expression as bulk operators, so $\beta_{(n,m)}$ and $\beta_{M(n,m)}$ are equivalent to \eqref{eq:liouvdeg} and \eqref{eq:mattdeg}. Since it will be important later, we quote here the parameter for degenerate matter operators \begin{equation} \beta_{{\rm M}(m,n)} =- \frac{(n-1) b}{2} + \frac{(m-1)b^{-1}}{2}, \end{equation} with $(n,m)$ a pair of positive integers. Similar operators can be written for the minimal string $\Phi_{(n,m)}^{n_1,m_1;n_2,m_2}$, which now generate a finite discrete set of dimension $\Delta_{(n,m)}$ interpolating between $(n_1,m_1)$ and $(n_2,m_2)$ branes. We construct physical open tachyon vertex operators by gravitational dressing \begin{equation} \mathcal{B}_{\beta_M} \sim \oint_{\partial\Sigma} \hspace{0.1cm}\Phi_{\beta_M} \hspace{0.1cm}B_\beta, \end{equation} where from now on we omit the boundary-condition labels on each side of the insertion. After gauge fixing this is $\mathcal{B}_{\beta_M} \sim c \hspace{0.1cm}\Phi_{\beta_M}\hspace{0.1cm}B_\beta$. The relation between $\beta_M$ and the dressing parameter $\beta$ is the same as for the bulk operators, and we will pick $\beta_M=b-\beta$. Physical correlators factorize into the ghost, matter and Liouville contributions up to a possible integral over moduli. For the minimal string we have a discrete set $\mathcal{B}_{n,m}$ and for the $(2,2\mathfrak{m}-1)$ case we have $\mathcal{B}_{n}\equiv \mathcal{B}_{n,1}$. A special operator that we will make use of, analogous to the area operator in the bulk, is $\mathcal{B}_{\rm id} \sim c B_b^{s_1,s_2} = c e^{b\phi}$, which we will refer to as the boundary \emph{marking operator}. It is the gravitationally dressed version of the matter identity operator $\mathbf{1}_M$. Before gauge fixing, this operator can also be written as $\hat{\ell} = \oint B_b$, which measures the physical length of the boundary. Finally, we will need the boundary correlators of Liouville CFT for an FZZT boundary \cite{Fateev:2000ik, Ponsot:2001ng}. This is simplified if we choose the fiducial metric space to be the upper half plane $(z,\bar{z})$ with ${\rm Im}(z)\geq0$ and a boundary labeled by $z=\bar{z} = x$. The bulk one point function is \begin{equation} \langle V_\alpha(z,\bar{z}) \rangle_s = \frac{U_s(\alpha)}{|z-\bar{z}|^{2\Delta_\alpha}}, \end{equation} with \begin{equation} \label{Lonep} U_s(\alpha) = \frac{2}{b}(\pi \mu \gamma(b^2))^{(Q-2\alpha)/2b} \Gamma(2b \alpha - b^2) \Gamma\Big(\frac{2\alpha}{b}-\frac{1}{b^2}-1\Big)\cosh 2\pi (2\alpha-Q)s. \end{equation} The boundary two point function is \begin{equation} \langle B_{\beta_1}^{s_1s_2}(x)B_{\beta_2}^{s_2s_1}(0)\rangle = \frac{\delta(\beta_2 + \beta_1-Q)+ d(\beta|s_1,s_2)\delta(\beta_2-\beta_1)}{|x|^{2\Delta_{\beta_1}}}.
\end{equation} where we define the quantity\footnote{There is an implicit product over all four sign combinations of the $S_b$ in this and in subsequent similar equations.} \begin{equation} \label{liouvillebdy2pt} d(\beta|s_1,s_2) = (\pi \mu \gamma(b^2)b^{2-2b^2})^{\frac{Q-2\beta}{2b}} \frac{\Gamma_b(2\beta-Q)\Gamma_b^{-1}(Q-2\beta)}{S_b(\beta \pm i s_1 \pm i s_2)}. \end{equation} The bulk-boundary two point function is of the form \begin{equation} \langle V_\alpha(z,\bar{z})B_\beta^{ss}(x)\rangle_s = \frac{R_s(\alpha,\beta)}{|z-\bar{z}|^{2\Delta_\alpha-\Delta_\beta}|z-x|^{2\Delta_\beta}} \end{equation} with \begin{eqnarray}\label{bbdy} R_s(\alpha,\beta)&=&2\pi (\pi \mu \gamma(b^2)b^{2-2b^2})^{\frac{Q-2\alpha-\beta}{2b}} \frac{\Gamma_b^3(Q-\beta)}{\Gamma_b(Q)\Gamma_b(Q-2\beta)\Gamma_b(\beta)} \frac{\Gamma_b(2\alpha-\beta)\Gamma_b(2Q-2\alpha-\beta)}{\Gamma_b(2\alpha) \Gamma_b(Q-2\alpha)}\nonumber\\ &&\times \int_{\mathbb{R}} dt \hspace{0.1cm} e^{4\pi i t s} \frac{S_b(\frac{1}{2}(2\alpha+\beta-Q)+i t)S_b(\frac{1}{2}(2\alpha+\beta-Q)-it)}{S_b(\frac{1}{2}(2\alpha-\beta+Q)+it) S_b(\frac{1}{2}(2\alpha-\beta+Q)-it)} \end{eqnarray} We will look at the boundary two-point function with $\beta_1=\beta_2$. A naive application of the formula given above would predict a divergent factor of $\delta(\beta_2-\beta_1)\to\delta(0)$. This zero-mode divergence is canceled when one divides by the full group of diffeomorphisms (an analogous phenomenon was observed recently in \cite{Erbin:2019uiz} for the case of the bosonic critical string). The correct answer is given by \begin{equation} \langle \mathcal{B}_{\beta_M} \mathcal{B}_{\beta_M} \rangle = 2(Q-2\beta) d(\beta| s_1,s_2) \times ({\rm matter}), \end{equation} as explained for example in \cite{Aharony:2003vk, *Alexandrov:2005gf}. This result can be obtained by taking a derivative of the two point function with respect to the cosmological constant, producing a three point function with all symmetries fixed, which can then be integrated to obtain the relation above. The on-shell condition relating $\beta$ with $\beta_M$ produces a cancellation of the worldsheet coordinate dependence $x$, after including the ghost two-point function. The last factor in the equation above comes from the matter normalization. \section{Disk partition function}\label{sec:diskZ} In this section we will analyze the disk partition function for the minimal string and Liouville gravity for fixed length boundary conditions. \subsection{Fixed length boundary conditions} \label{s:bosdisk} We will start by defining the fixed length boundary condition on the disk. We will mostly focus on the Liouville sector and therefore the answer will be valid for both the time-like Liouville string and the minimal string. The starting point is the disk with FZZT brane boundary conditions, specified by the boundary cosmological constant $\mu_B$. It will be useful to distinguish two different notions of partition function of the disk. The first is the unmarked partition function $Z(\mu_B)^{\scriptscriptstyle \text{U}}$. We will refer to the second type as the marked partition function $Z(\mu_B)^{\scriptscriptstyle \text{M}}$, defined by \begin{equation} Z(\mu_B)^{\scriptscriptstyle \text{M}} \equiv \partial_{\mu_B}Z(\mu_B)^{\scriptscriptstyle \text{U}}= \left\langle c \hspace{0.1cm}e^{b\phi}\right\rangle_{\mu_B}.
\end{equation} This is equivalent to the partition function on a marked disk, where a boundary base point has been chosen, and we do not consider translations of the boundary coordinate as a gauge symmetry \cite{Moore:1991ir}. We will refer to insertions of $e^{b\phi}$ as marking operators. This corresponds to inserting a factor of $\ell$ in the length basis. The fixed length partition function is then defined by the inverse Laplace transform \begin{equation}\label{eq:deffixlength} Z(\ell) \equiv -i \int_{-i\infty}^{i \infty} d\mu_B e^{\mu_B \ell} Z(\mu_B)^{\scriptscriptstyle \text{M}}. \end{equation} This is explained, for example, by Kostov in \cite{Kostov:2002uq}. One can check from the path integral definition of Liouville theory that this integral, when combined with the boundary term, produces a fixed-length delta function, justifying this formula. The first step is then to compute the FZZT partition function $Z(\mu_B)^{\scriptscriptstyle \text{U}}$. Following the calculation of Seiberg and Shih done in \cite{Seiberg:2003nm}, it is useful to differentiate with respect to the bulk cosmological constant in order to fix all the symmetries in the problem \begin{eqnarray}\label{eq:unmarkedZ} \partial_\mu Z(\mu_B)^{\scriptscriptstyle \text{U}} &=& \langle c \bar{c}\hspace{0.1cm} e^{2b\phi} \rangle_{\mu_B} \\ &=& \frac{2}{b} (\pi \mu \gamma(b^2))^{\frac{1}{2b^2}-\frac{1}{2}} \Gamma(b^2) \Gamma(1-b^{-2}) \cosh 2 \Big( b-\frac{1}{b} \Big) \pi s, \end{eqnarray} where in the second line we pick a normalization such that the result is precisely the bulk cosmological constant one-point function derived in \cite{Fateev:2000ik} (Seiberg and Shih make a different normalization choice). Integrating this with respect to the cosmological constant $\mu$, we obtain the unmarked disk partition function \begin{equation} \label{ses} Z(\mu_B)^{\scriptscriptstyle \text{U}} = (\pi \mu \gamma(b^2))^{\frac{1-b^2}{2b^2}} \frac{4 \Gamma(b^2) \Gamma(1-b^{-2}) \mu b^2}{b(1+b^2)} \left(b^2 \cosh{2\pi b s} \cosh \frac{2\pi s}{b} - \sinh{2\pi b s} \sinh \frac{2\pi s}{b}\right), \end{equation} where the FZZT parameter $s$ should be understood as implicitly depending on $\mu_B$ and $\mu$. We now compute the marked partition function by differentiating with respect to $\mu_B$, which simplifies the $\mu_B$ dependence considerably \begin{equation} \label{Zmarked} Z(\mu_B)^{\scriptscriptstyle \text{M}} \sim \mu^{\frac{1}{2b^2}} \cosh \frac{2 \pi s}{b}, \end{equation} where we omit the $s$-independent prefactor that we will put back later. The next step is to compute the integral defined in \eqref{eq:deffixlength}. This can be done by deforming the contour around the negative real axis, as shown in figure \ref{contourDeformfirst}. \begin{figure}[t!]
\centering \begin{tikzpicture}[scale=1.2] \draw[->,thick, -latex] (0,-2) -- (0,2); \draw[->,thick, -latex] (-4,0) -- (2,0); \draw[red!80!black, thick,decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate] (-4,0) -- (-1,0); \filldraw[red!80!black] (-1,0) circle (0.07); \draw[blue!70!black,thick] (-4,0.2) -- (-2,0.2); \draw[blue!70!black,thick,latex-] (-2.2,0.2) -- (-1,0.2); \draw[blue!70!black,thick] (-1,0.2) arc (90:-90:0.2); \draw[blue!70!black,thick] (-1,-0.2) -- (-2,-0.2); \draw[blue!70!black,thick, latex-] (-1.8,-0.2) -- (-4,-0.2); \draw[green!60!black,thick, -latex] (0,-1.9) -- (0,0.5); \draw[green!60!black,thick] (0,0.2) -- (0,1.7); \node at (-3,0.5) {$\mathcal{C}$}; \node at (-1,-0.4) {\scriptsize $-\kappa$}; \node at (3,2) {\large $\mu_B$}; \draw[thick] (2.65,1.75+0.4) -- (2.65,1.75) -- (2.65+0.5,1.75); \end{tikzpicture} \caption{Contour deformation from the original one (in green) to a deformed one that wraps the negative real axis (blue line). The segment $(-\kappa,0)$ has no branch cut and the contour can be further deformed to the semi-infinite interval $(-\infty,-\kappa)$.} \label{contourDeformfirst} \end{figure} This allows us to write the integral as \begin{equation} \label{toplug} Z(\ell) =-i\int_{-\kappa}^{-\infty}d\mu_B e^{\mu_B \ell}~ \text{Disc}\left[Z(\mu_B)^{\scriptscriptstyle \text{M}}\right] \end{equation} in terms of the discontinuity $\text{Disc}\left[Z(\mu_B)^{\scriptscriptstyle \text{M}}\right]$ of the marked partition function along the negative real axis. A first observation, as shown in figure \ref{contourDeformfirst}, is that the branch cut along the negative real axis starts at $\mu_B =-\kappa$, where $\kappa \equiv \sqrt{\mu/\sin \pi b^2} = \mu_B(s=0)$. The value of $ s \sim \text{arccosh} (\mu_B/\kappa)$ on the negative real axis for $\mu_B \in \left(-\kappa,\kappa\right)$ is purely imaginary, with conjugate values above and below the real axis. Since any \emph{even} function of $s$ then has no discontinuity, $\text{Disc}\left[Z(|\mu_B|<\kappa)^{\scriptscriptstyle \text{M}}\right]=0$. In what follows we will be mostly interested in the $\ell$ dependence of the final answer. On the interval $(-\infty,-\kappa)$, we can use the fact that $\text{arccosh}(\frac{\mu_B}{\kappa} \pm i\varepsilon) = \text{arccosh}\frac{\left|\mu_B\right|}{\kappa} \pm i\pi$. Then the discontinuity of an arbitrary function $F(s)$ on this interval is given by ${\rm Disc}[F(s)]=F(s+i/2b) -F(s-i/2b)$. Using this fact we can compute the discontinuity explicitly as \begin{equation} {\rm Disc} \left[\cosh \Big( \frac{1}{b^2} \text{arccosh} \frac{\mu_B}{\kappa} \Big) \right]= 2i \sin \frac{\pi}{b^2} \sinh \Big( \frac{1}{b^2}\text{arccosh}\frac{ \left|\mu_B\right|}{\kappa} \Big). \end{equation} We can use this to compute $\text{Disc}\left[Z(\mu_B)^{\scriptscriptstyle \text{M}} \right]$ and, inserting the answer into \eqref{toplug}, we find the fixed-length marked disk amplitude \begin{equation} \label{fixedldisk} Z(\ell) = N \mu^{\frac{1}{2b^2}} \int_{\kappa}^{\infty} d\mu_B e^{-\ell \mu_B} \sinh \Big( \frac{1}{b^2} \text{arccosh} \frac{\mu_B}{\kappa} \Big). \end{equation} This answer is consistent with the result of \cite{Fateev:2000ik}. Keeping track of the prefactor appearing in \eqref{eq:unmarkedZ}, the normalization is given by $N = (\pi \gamma(b^2))^{\frac{1}{2b^2}} \frac{8 \pi (1-b^2)}{b \Gamma(b^{-2})}$.
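Before rewriting this result, it is worth making the change of variables back to the FZZT parameter explicit; the following is a short worked step using only the definitions above (the extra half-power of $\mu$ in the formula below comes from $\kappa \propto \sqrt{\mu}$). Setting $\mu_B = \kappa \cosh 2\pi b s$ we have
\begin{equation*}
d\mu_B = 2\pi b \kappa \sinh (2\pi b s)\, ds, \qquad \text{arccosh}\frac{\mu_B}{\kappa} = 2\pi b s \quad\Rightarrow\quad \sinh\Big(\frac{1}{b^2}\,\text{arccosh}\frac{\mu_B}{\kappa}\Big) = \sinh \frac{2\pi s}{b},
\end{equation*}
so that \eqref{fixedldisk} becomes
\begin{equation*}
Z(\ell) = 2\pi b \kappa\, N \mu^{\frac{1}{2b^2}} \int_{0}^{\infty} ds\, e^{-\ell \kappa \cosh (2\pi b s)} \sinh 2\pi b s \sinh \frac{2\pi s}{b}.
\end{equation*}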
Written in terms of the FZZT $s$ variable, the partition function is \begin{eqnarray}\label{partfuncs} Z(\ell) \sim \mu^{\frac{1}{2b^2}+\frac{1}{2}} \int_{0}^{\infty} ds ~e^{-\ell \kappa \cosh(2\pi b s)}\rho(s),~~~~\rho(s)\equiv \sinh 2\pi b s \sinh\frac{2\pi s}{b}. \end{eqnarray} In the language of \cite{Saad:2019lba}, where the boundary is identified with Euclidean time of a dual theory, we see that $\ell$ can be interpreted as an inverse temperature $\beta \to \ell $, while $\mu_B$ is identified with the eigenvalue of the boundary Hamiltonian $E \to \mu_B = \kappa \cosh 2\pi b s$.\footnote{Interestingly, the density of states is equal to the Plancherel measure on the principal series irreps of the quantum group $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$ \cite{Ponsot:1999uf} as a function of representations labeled by $s$. It is also equal to the vacuum modular S-matrix $S_0{}^s$. We expand on this in section \ref{s:qg}.} In terms of the energy $E$, we write: \begin{eqnarray}\label{partfuncs2} Z(\beta) \sim \mu^{\frac{1}{2b^2}} \int_{\kappa}^{\infty} dE ~e^{-\beta E}\rho_0(E),~~~~\rho_0(E) = \sinh\Big(\frac{1}{b^2} \text{arccosh}\frac{E}{\kappa}\Big). \end{eqnarray} We will review some interesting properties of this expression in section \ref{sec:proprho}. The integral can be done explicitly using the identity \begin{equation} \int_{0}^{+\infty}ds e^{-\ell \kappa \cosh 2 \pi b s} \sinh 2 \pi b s \sinh \frac{2\pi s}{b} = \frac{1}{2\pi b^3} \frac{1}{\kappa \ell} K_{\frac{1}{b^2}}(\kappa \ell), \end{equation} where the right-hand side involves a modified Bessel function of the second kind. More generally, if we consider the $M$-marked fixed length partition function, then we would write: \begin{equation} \label{genmark} Z(\ell) \sim \frac{1}{\ell^{2-M}} K_{i\lambda}(\kappa \ell), \qquad i\lambda= 1/b^2. \end{equation} This formula holds since taking $\mu_B$-derivatives to bring down $\oint e^{b\phi}$ corresponds in the fixed length basis to just including factors of $\ell$. In our case we set $M=1$. The unmarked Seiberg-Shih partition function \eqref{ses}, when transformed to the fixed length basis, corresponds to setting $M=0$ in \eqref{genmark}. \subsection{Marking operators}\label{s:bosmark} In this section, we demonstrate that inserting more marking operators $c e^{b\phi}$ between generic FZZT brane segments does not affect the final answer for the fixed length partition function. More precisely, the boundary $n$-point function of $n$ marking operators, in the fixed length basis, is simply given by the fixed-length disk partition function itself \eqref{fixedldisk}, see figure \ref{markingthree}. Notice that this is different from marking by differentiating with respect to $\mu_B$ as in \eqref{genmark}. As explained before, these operators are physical by themselves and correspond to the dressed identity operator in the matter sector $\mathbf{1}_M$. The resulting equality we mention here is then indeed expected. We illustrate this fact first with the simplest case of two operator insertions, after gauge fixing $\langle [c e^{b\phi}] [c e^{b\phi}] \rangle$. The Liouville CFT boundary two-point function is given in \eqref{liouvillebdy2pt} specialized to $\beta=b$, and its contribution to the full 2D quantum gravity two-point function is given by $2(Q-2b)d(b|s_1,s_2)$.
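The simplification performed next relies on a pairing identity for the double sine functions, which follows from the shift formulas (quoted later in \eqref{Sshift}) together with the reflection property $S_b(x)S_b(Q-x)=1$; as a short worked step,
\begin{equation*}
S_b(b+x)\,S_b(b-x) = \frac{2\sin (\pi b x)\, S_b(x)}{2 \sin (\pi x/b)\, S_b(x)} = \frac{\sin \pi b x}{\sin \pi x/b},
\end{equation*}
where we wrote $S_b(b-x) = S_b\big(Q-(b^{-1}+x)\big) = 1/S_b(b^{-1}+x)$. Setting $x = i(s_1+s_2)$ and $x = i(s_1-s_2)$ and multiplying the two pairs turns these sines into the hyperbolic sines appearing below.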
We can simplify this expression considerably using \begin{equation} \frac{1}{S_b(b \pm i s_1 \pm i s_2)} =\frac{\sinh \frac{\pi}{b} (s_1-s_2) \sinh \frac{\pi}{b} (s_1+s_2)}{\sinh \pi b (s_1-s_2) \sinh \pi b(s_1+s_2)}= \kappa \frac{\cosh \frac{2\pi}{b} s_1 - \cosh \frac{2\pi}{b} s_2}{\mu_{B}(s_1)-\mu_{B}(s_2)}, \end{equation} giving \begin{equation}\label{eq:lft2ptmbo} d(b|s_1,s_2) = \left[(\pi \gamma(b^2))^{\frac{1}{2b^2}-\frac{1}{2}} \Gamma(b^2)\Gamma(1-b^{-2}) \frac{\sqrt{\sin(\pi b^2)}}{\pi} \right] \mu^{\frac{1}{2b^2}} \frac{\cosh \frac{2\pi}{b} s_1 - \cosh \frac{2\pi}{b} s_2}{\mu_{B1}-\mu_{B2}}. \end{equation} \begin{figure}[t!] \centering \begin{tikzpicture}[scale=0.7] \draw[fill=blue!50!white,opacity=0.7] (0,0) ellipse (2 and 1); \draw[fill] (-2,0) circle (0.08); \node at (-2.6,0) {$e^{b\phi}$}; \node at (-1.4,-1.1) {$\mu_{B3}$}; \draw[fill] (1,0.86) circle (0.08); \node at (1.4,1.3) {$e^{b\phi}$}; \draw[fill] (1,-0.86) circle (0.08); \node at (1.3,-1.2) {$e^{b\phi}$}; \node at (-1,1.2) {$\mu_{B1}$}; \node at (2.6,0) {$\mu_{B2}$}; \draw[-latex] (3.5,0) -- (4.5,0); \draw[fill=blue!50!white,opacity=0.7] (7.5,0) ellipse (2 and 1); \node at (10.5,1) {\small $\ell_1 + \ell_2 + \ell_3$}; \end{tikzpicture} \caption{FZZT brane segments between $n$ marking operators combine, upon transforming to the fixed length basis, into a single marked boundary of total length $\ell \equiv \sum_j \ell_j$. In the figure we show an example with $n=3$.} \label{markingthree} \end{figure} The definition of the fixed length amplitude for two marking operator insertions between two intervals of length $\ell_1$ and $\ell_2$ is given by \begin{equation} \label{twolength} \mathcal{A}_b(\ell_1,\ell_2) = i^{-2} \prod_{i=1,2}\int_{-i\infty}^{+i \infty}d\mu_{Bi} e^{\mu_{Bi} \ell_i} 2(Q-2b)d(b|s_1,s_2). \end{equation} Repeating the procedure outlined in the previous section and taking the double discontinuity, we find \begin{align} {\rm Disc}\left[\frac{\cosh \frac{2\pi}{b} s_1 - \cosh \frac{2\pi}{b} s_2}{\mu_{B1}-\mu_{B2}} \right] = - 2i \sin \frac{\pi}{b^2} \sinh \Big( \frac{1}{b^2}\text{arccosh} \frac{|\mu_B|}{\kappa} \Big)2\pi i \delta(\mu_{B1}-\mu_{B2}), \end{align} which is non-vanishing only for $\mu_{B1} = \mu_{B2} < - \kappa$. Plugging this into the expression \eqref{twolength}, after deforming the contour and using the delta function to do one of the integrals, we get the fixed-length amplitude with two marking operator insertions: \begin{eqnarray} \label{markedsame} \mathcal{A}_b(\ell_1,\ell_2) &=& N \mu^{\frac{1}{2b^2}} \int_{\kappa}^{\infty} d\mu_B e^{-(\ell_1+\ell_2) \mu_B} \sinh \Big( \frac{1}{b^2} \text{arccosh} \frac{\mu_B}{\kappa} \Big) \nonumber\\ &=&Z(\ell_1 + \ell_2), \end{eqnarray} where we also checked that the final $b$-dependent prefactor in the equation above, derived from \eqref{eq:lft2ptmbo}, coincides with the one in the partition function derived from \eqref{Zmarked}. This result can be generalized to an arbitrary number of marking operators.
Hosomichi wrote down a generalization to an arbitrary $n$-point correlator of such $\beta=b$ insertions \cite{Hosomichi:2008th}, interpolating between FZZT boundaries of parameter $\mu_{Bi}=\mu_i$, \begin{eqnarray} \left\langle {}^{\mu_1}[e^{b\phi_1}]{}^{\mu_2} \hdots {}^{\mu_n}[e^{b\phi_n}]{}^{\mu_1}\right\rangle = \frac{(-)^{\frac{n(n-1)}{2}}}{\Delta(\mu_i)}\text{det}\left(\begin{array}{ccccc} 1 & \mu_1 & \hdots & \mu_1^{n-2} & Z^{\scriptscriptstyle \text{M}}(s_1) \\ \vdots & \vdots & \vdots &\vdots & \vdots \\ 1 & \mu_n & \hdots & \mu_n^{n-2} & Z^{\scriptscriptstyle \text{M}}(s_n) \end{array}\right), \end{eqnarray} where we indicated by the indices the parameters between which each operator interpolates. The transformation to fixed length generalizes immediately and yields the same outcome \eqref{markedsame}, which means that all of these correlators are equal to the (singly-marked) partition function. The main result of this section is the check that \begin{equation}\label{eq:markedtrivial} \mathcal{A}_b(\ell_1,\ldots, \ell_n) = Z(\ell_1 + \ldots + \ell_n). \end{equation} This result has a simple explanation from the perspective of the matrix integral when applied to the minimal string, which we mention in section \ref{sec:MMpf}. \subsection{Properties of the density of states}\label{sec:proprho} In this section we will present some properties of the density of states. We will first work out the JT gravity limit of these expressions, as defined by Saad, Shenker and Stanford \cite{Saad:2019lba}. To begin, we rescale the energy and boundary length in the following way \begin{equation}\label{JTparam} E = \kappa (1+ 2\pi^2b^4 E_{\rm JT}),~~~~\ell =\frac{\ell_{\rm JT}}{2 \pi^2 \kappa b^4}. \end{equation} In terms of these variables the partition function can be written as \begin{equation} \label{defrho0} Z(\beta) \sim e^{- \ell_{\rm JT} E_0} \int_0^\infty dE_{\rm JT}\hspace{0.1cm} e^{-\ell_{\rm JT} E_{\rm JT}} \sinh\Big(\frac{1}{b^2} \text{arccosh}\big(1 + 2 \pi^2 b^4 E_{\rm JT} \big)\Big), \end{equation} where the edge of the energy spectrum, normalized to be conjugate to the rescaled length $\ell_{\rm JT}$, is given by $E_0 = 1/(2\pi^2 b^4)$. So far this is an exact rewriting. Now we can take the JT limit, defined by $b\to0$ with $\ell_{\rm JT}$ fixed, which implies that the integral is dominated by values of $E_{\rm JT}$ that remain finite in the limit. The density of states is approximately \begin{equation}\label{jtdosap} \rho_0 (E) \approx \sinh 2 \pi \sqrt{E_{\rm JT}}, \end{equation} which precisely coincides with the JT gravity answer, as first pointed out in \cite{Saad:2019lba}. We will take this as the definition of the JT gravity limit in the case of more general observables, where all boundary lengths go to infinity as $b$ goes to zero, following equation \eqref{JTparam}. We can easily reproduce this result from the partition function written in terms of the parameter $s$ as in equation \eqref{partfuncs}. In this case the density of states is $\rho(s) = \sinh 2 \pi b s \sinh \frac{2\pi s}{b}$ and the energy is $ E(s) = \kappa \cosh(2 \pi b s)$. When we pick the boundary length such that $\ell_{\rm JT}$ is fixed, the integral is dominated by $s = b k$, where we keep $k$ fixed as $b\to0$. In this limit we get $\rho(s) \sim k \sinh(2 \pi k)$ and $\ell (E(s)-\kappa)\sim \ell_{\rm JT} k^2$, reproducing the previous result after the $E_{\rm JT} = k^2$ identification. This representation will be more useful when applied to more general observables. This derivation was done for a general Liouville gravity in the small $b$ limit.
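Explicitly, the approximation \eqref{jtdosap} follows from the small-argument expansion $\text{arccosh}(1+x) = \sqrt{2x}\,\big(1+\mathcal{O}(x)\big)$; as a short worked step,
\begin{equation*}
\frac{1}{b^2}\,\text{arccosh}\big(1+2\pi^2 b^4 E_{\rm JT}\big) = \frac{1}{b^2}\sqrt{4\pi^2 b^4 E_{\rm JT}}\,\big(1+\mathcal{O}(b^4 E_{\rm JT})\big) = 2\pi\sqrt{E_{\rm JT}} + \mathcal{O}(b^4),
\end{equation*}
so the exact density of states in \eqref{defrho0} indeed reduces to $\sinh 2\pi\sqrt{E_{\rm JT}}$ at fixed $E_{\rm JT}$ as $b\to0$.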
Beyond the minimal string, the interpretation of this limit is less clear, since the theory is no longer dual to a single matrix integral. The minimal string corresponds to $b^2=2/(2\mathfrak{m}-1)$. In this case the density of states is a polynomial in $\sqrt{E}$ of order $2\mathfrak{m}-1$, since it can be rewritten as \begin{equation} \rho_\mathfrak{m}(E) = \frac{1}{\sqrt{2E}}(T_\mathfrak{m}(1+E)-T_{\mathfrak{m}-1}(1+E)), \end{equation} where $T_p(\cos \theta) = \cos( p\theta)$ is the Chebyshev polynomial of the first kind. In the JT gravity limit $\mathfrak{m}$ is large, the degree of the polynomial grows without bound, and one reproduces \eqref{jtdosap}. Having presented the JT limit, we will now give a more global picture of the density of states for general $b$. The energy density of states is sketched in Figure \ref{rho}. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{rho.pdf} \caption{(Blue) Energy density of states $\rho_0(E_{\rm JT})$ defined in \eqref{defrho0} with $b=1/2$. (Red) JT limit, which focuses on the middle region. (Green) Spectral edge limit.} \label{rho} \end{figure} This quantity has three regimes: the small-$E$ regime close to the spectral edge, where $\rho_0 \sim 2\pi \sqrt{E_{\rm JT}}$; the intermediate JT range, where effectively $E_{\rm JT} \ll 1/b^2$; and the UV regime, where a different power-law behavior $\rho_0(E) \sim E^{1/b^2}$ is present (this is evident for the minimal string but still true for arbitrary $b$). An interesting feature is that the UV rise of the spectral density in this theory is slower than that of JT gravity, which has Cardy scaling $\sim e^{2\pi \sqrt{E}}$ at high energies. Since, by the UV/IR connection in holography, the high-energy states probe the asymptotic region, we propose that the bulk asymptotic region becomes strongly coupled and the geometry deviates from AdS. We will discuss further how this happens in the conclusion. The saddle point of the Laplace integral \eqref{partfuncs2} gives the energy-temperature relation: \begin{equation} \label{firstlaw} \sqrt{E^2-\kappa^2} = \frac{1}{b^2 \beta}, \end{equation} where $\beta = \ell_{\rm JT}$. As above, this law changes qualitatively from $\sqrt{E_{\rm JT}} \sim \beta^{-1}$, the AdS$_2$ JT black hole first law, into $E_{\rm JT} \sim \beta^{-1}$ at high energies. This suggests the possibility that the UV region close to the boundary of the space is strongly coupled, even in the JT gravity limit. It is important to explain this entire thermodynamic relation as a black hole first law of the bulk gravity system. We comment on how this works in the conclusion. \section{Disk correlators}\label{sec:diskcorr} In this section, we extend the discussion to a larger class of correlators. We discuss the fixed length amplitudes of the bulk one-point function in \ref{sec:bulk1pt}, the boundary two-point function in \ref{s:bostwo}, the boundary three-point function in \ref{s:three} and the bulk-boundary two-point function in \ref{s:bbtwo}. \\ Since the fixed length amplitudes are found by Fourier transforming the FZZT branes, one can also wonder whether the degenerate ZZ-branes have any direct relation to the fixed length branes. This question is only tangentially related to our main story, and we defer some of the details to appendix \ref{s:degbrane}. \subsection{Bulk one-point function} \label{sec:bulk1pt} In this section we will compute the fixed length partition function in the presence of a bulk tachyon insertion $\mathcal{T}_{\alpha_M}$ with dimension $\Delta_{\alpha_M}$.
In general we will now also get a contribution from the matter sector, given by the matter one-point function. First we will compute the bulk Liouville one-point function for an FZZT boundary. We will normalize the tachyon vertex, after gauge fixing, in the following way \begin{equation} \label{defbulk} \mathcal{T}_{\alpha_M} = N_{\alpha_M} c\bar{c}\hspace{0.1cm}\mathcal{O}_{\alpha_M=-\frac{q}{2}+iP}\hspace{0.1cm}V_{\alpha=\frac{Q}{2}+iP}, \end{equation} where \begin{equation}\label{onepointins} N_{\alpha_M} = \frac{(\pi \mu \gamma(b^2))^{\frac{i P}{b}}}{4\pi^2 b}\frac{\Gamma(-2iP/b)}{\Gamma(1+2iP b)} \frac{1}{\text{(matter)}}. \end{equation} We divided out by the factor from the matter one-point function. In the case of the minimal string calculation of $\left\langle \mathcal{T}_{(n,m)}\right\rangle_\ell$ the matter contribution is given by the Cardy wavefunction $S_{\text{\tiny$(n',\hspace{-0.04cm}m')$}}{}^{\text{\tiny$(n,\hspace{-0.04cm}m)$}}/(S_{\text{\tiny$(1,\hspace{-0.04cm}1)$}}{}^{\text{\tiny$(n,\hspace{-0.04cm}m)$}})^{1/2}$, where the matter boundary state is a Cardy state associated to the primary $(n',m')$. The fixed length amplitude with the bulk insertion is given by the same inverse Laplace transform with respect to the boundary cosmological constant as for the partition function \begin{eqnarray} \left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell &=&-i \int_{-i\infty}^{+i \infty}d\mu_B e^{\mu_B \ell} \partial_{\mu_B}\left[\left\langle \mathcal{T}_{\alpha_M}\right\rangle_{\mu_B}\right]. \end{eqnarray} Inserting the Liouville contribution \eqref{Lonep}, the marked partition function with the bulk insertion is proportional to \begin{equation} \label{fion} \partial_{\mu_B} \left[\cos 4\pi P s\right] = - \frac{2P}{b\kappa} \frac{\sin 4\pi P s}{ \sinh 2\pi b s}. \end{equation} Notice that this amplitude is actually marked twice now; we will see this explicitly in the final formula below. We can again deform the contour as we did for the partition function. The integrand is meromorphic (and actually analytic) in the complex $\mu_B$ plane except for a branch cut at negative values. The discontinuity is given by \begin{equation}\label{eq:disccos} \text{Disc }\partial_{\mu_B} \left[\cos 4\pi P s\right]= \frac{2P}{b\kappa} 2i \sinh \frac{2\pi P}{b} \frac{\cos 4\pi P s }{ \sinh 2\pi b s}, \end{equation} valid for $\mu_B<-\kappa$. For $\mu_B \in (-\kappa,0)$, the function \eqref{fion} has no discontinuity, as is readily checked, and as is seen immediately since \eqref{fion} is even in $s$. Finally, the bulk one-point function at fixed length is given by \begin{equation} \label{bulkone} \left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell = \frac{2}{b} \int_{0}^{\infty} ds\hspace{0.1cm} e^{-\ell \kappa \cosh(2\pi b s)} \cos 4 \pi P s. \end{equation} This integral can be done explicitly:\footnote{Using the identity \begin{equation} \label{idthree} \int_{0}^{+\infty}ds e^{-\ell \kappa\cosh 2 \pi b s} \cos 2\pi b \lambda s = \frac{1}{2\pi b} K_{i\lambda}(\kappa \ell). \end{equation} } \begin{equation} \left\langle \mathcal{T}_{\alpha_M}\right\rangle_\ell = \frac{1}{\pi b^2} K_{\frac{2i P}{b}}(\kappa \ell). \end{equation} Notice that no prefactors of $1/\ell$ appear, compared to \eqref{genmark}, making this amplitude interpretable as a twice-marked amplitude.
Intuitively, one marking is present just as for the partition function; the second marking happens because the non-trivial bulk insertion creates a branch cut in the chiral sector of the geometry, which has to intersect the boundary somewhere, marking it a second time. We develop this intuition in appendix \ref{app:mark}. It was mentioned below equation \eqref{partfuncs} that the integrand of the disk partition function in terms of $s$ is the vacuum modular S-matrix. Here, in the presence of a bulk state of momentum $P$, we find a similar structure with the non-vacuum modular S-matrix $S_P{}^{s}$ appearing. One can parametrize microscopic bulk operators by setting $P = i \frac{\theta}{2b}$, in terms of a new parameter $\theta$. For the particular case of $\theta \in \mathbb{N}$, the Liouville one-point amplitude $U_s(\alpha)$ is divergent. We argue in Appendix \ref{app:unif} that one should not additionally mark the boundary in this case. We do this by arguing that this case is embedded in the degenerate Virasoro Liouville insertions. We complement this argument by a bulk Liouville geometry discussion. The analogous expressions are written in \eqref{degvir} and \eqref{bulkexc}. \subsection{Boundary two-point function} \label{s:bostwo} In this section we will compute the boundary two-point function between generic operators, for a fixed length boundary. We will consider a general matter operator labeled by the parameter $\beta_M$ and include its gravitational dressing by a Liouville operator with parameter $\beta$ \begin{equation} \label{tocu} \mathcal{A}_{\beta_M} ( \ell_1, \ell_2) = \left\langle \mathcal{B}^{+}_{\beta_M} \hspace{0.1cm}\mathcal{B}^{-}_{\beta_M}\right\rangle_{\ell_1,\ell_2}, \end{equation} where we defined the boundary tachyon operators \begin{eqnarray} \mathcal{B}^{+}_{\beta_M} &=&(\pi \mu \gamma(b^2))^{\frac{2\beta-Q}{4b}} \frac{\Gamma(b(Q-2\beta))}{\pi} c \hspace{0.1cm}e^{\beta\phi} \hspace{0.1cm}e^{\beta_M\chi}, \\ \mathcal{B}^{-}_{\beta_M} &=&(\pi \mu \gamma(b^2))^{\frac{2\beta-Q}{4b}} \frac{\Gamma(b^{-1}(Q-2\beta))}{\pi} c \hspace{0.1cm}e^{\beta\phi} \hspace{0.1cm}e^{(-q-\beta_M)\chi}, \end{eqnarray} where we included the leg-pole factor in the definition of the insertion. Since we will eventually consider light matter operators, we will pick the Liouville dressing with $\beta=b-\beta_M$. We will omit the labels $+/-$ on the operators when it is clear from context. It is easy to account for the matter contribution since it is independent of the boundary and bulk cosmological constants. In fact we can choose the matter operator to be normalized such that the boundary two-point function has unit prefactor \begin{equation} \left\langle e^{\beta_M \chi} e^{ (-q - \beta_M) \chi}\right\rangle_M = \frac{1}{x^{2\Delta_{\beta_M}}}. \end{equation} This correlator corresponds to the vacuum brane changing to the state $\beta_M$ brane and then back, according to the fusion $\mathbf{1} \times \beta_M \to \beta_M$ and $\beta_M \times (-q-\beta_M) \to \mathbf{1}$ (see figure \ref{mattertwo}). \begin{figure}[t!]
\centering \begin{tikzpicture}[scale=0.9] \draw[fill=blue!50!white,opacity=0.7] (0,0) ellipse (2 and 1); \draw[fill] (-2,0) circle (0.08); \node at (-2.7,0) {$e^{\beta_M \chi}$}; \draw[fill] (1,0.86) circle (0.08); \node at (1.5,1.3) {$e^{(-q-\beta_M)\chi}$}; \node[red!70!black] at (1.3,-1.2) {$1$}; \node[red!70!black] at (-1,1.2) {$\beta_M$}; \draw[-latex] (3.5,0) -- (4.5,0); \draw[thick, black] (6,-0.5)--(9,-0.5); \draw[thick, black] (7.5,-0.5)--(7.5,-0.5+1.5); \node[red!70!black] at (6.3,-0.18) {\small $1$}; \node[red!70!black] at (8.7,-0.18) {\small $\beta_M$}; \node[red!70!black] at (7.5+0.5,0.8) {\small $\beta_M$}; \node at (7.5,1.4) {$e^{\beta_M \chi}$}; \end{tikzpicture} \caption{Matter Coulomb gas two-point function with a vacuum brane $\mathbf{1}$ injected with charge $\beta_M$ to form the state $\beta_M$-brane and then back.} \label{mattertwo} \end{figure} Likewise for the ghost sector. This leaves only the Liouville sector as the source of non-trivial dependence on the boundary lengths, so we will focus on it in what follows. Starting with the boundary two-point function \begin{equation} \label{lou2} d(\beta|s_1,s_2) = (\pi \mu \gamma(b^2)b^{2-2b^2})^{\frac{Q-2\beta}{2b}} \frac{\Gamma_b(2\beta-Q)\Gamma_b^{-1}(Q-2\beta)}{S_b(\beta \pm i s_1 \pm i s_2)}, \end{equation} and denoting \begin{equation} D_{s_1,s_2} \equiv \frac{1}{S_b(\beta \pm i s_1 \pm i s_2)} = S_b(Q - \beta \pm i s_1 \pm i s_2), \end{equation} we can compute the fixed length amplitude with boundary segments $\ell_1$ and $\ell_2$ by the Fourier transform: \begin{eqnarray} \mathcal{A}_{\beta_M} ( \ell_1, \ell_2) &=&(\pi \mu \gamma(b^2))^{\frac{2\beta-Q}{2b}}2(Q-2\beta)\frac{\Gamma(b(Q-2\beta))}{\pi}\frac{\Gamma(b^{-1}(Q-2\beta))}{\pi} \nonumber\\ &&\times i^{-2}\prod_{i=1,2}\int_{-i\infty}^{+i \infty}d\mu_{Bi} e^{\mu_{Bi} \ell_i} d(\beta|s_1,s_2), \end{eqnarray} where we included all the prefactors coming from the Liouville mode. We can again deform the contour to wrap the negative real axis. The main quantity to compute (up to prefactors) is the following discontinuity of the product of double sine functions \begin{equation} \prod_{i=1,2}\int_{0}^{+\infty}d\mu_{Bi} e^{-\mu_{Bi} \ell_i} \text{Disc } D_{s_1,s_2}. \end{equation} The discontinuity of the object $D_{s_1,s_2}$ can be found by subtracting the terms with $s_i \pm \frac{i}{2b}$, namely \begin{equation} \text{Disc } D_{s_1,s_2} \equiv D_{s_1 + \frac{i}{2b}, s_2 + \frac{i}{2b}} - D_{s_1 +\frac{i}{2b}, s_2 -\frac{i}{2b}} - D_{s_1 - \frac{i}{2b}, s_2 + \frac{i}{2b}} + D_{s_1 - \frac{i}{2b}, s_2 - \frac{i}{2b}}. \end{equation} Using the shift formulas that the double sine function satisfies \begin{align} \label{Sshift} S_b(b+x) = 2\sin \pi b x S_b(x), \qquad S_b\Big(\frac{1}{b}+x\Big) = 2 \sin \frac{\pi x}{b} S_b\left(x \right), \end{align} the discontinuity can be tremendously simplified into\footnote{This kind of relation is actually much more general. For example, replace $b\to1/b$ in equations (3.18), (3.20) and (3.24) of \cite{Kostov:2003uh}.} \begin{equation}\label{discmain} \text{Disc} \Big[D_{s_1,s_2}\Big] =\Big[16 \sin \frac{2 \pi \beta}{b} \sin \frac{\pi}{b}(b^{-1}-2\beta)\Big] \sinh \frac{2\pi s_1}{b} \sinh \frac{2\pi s_2}{b}~S_b\left(b -\beta\pm i s_1 \pm i s_2\right), \end{equation} where the factors in brackets depend only on $\beta$ and $b$, and the rest includes all the $\mu_B$-dependent terms that will affect the length dependence of the final answer.
Note that the first term in the argument of the double sine functions was shifted from $Q-\beta$ to $b-\beta=\beta_M$, which is precisely the Liouville parameter associated to the matter operator. This will be important when taking the JT gravity limit. It is straightforward to check that in the range $\mu_{Bi} \in \left(-\kappa,0\right)$ one has instead purely imaginary values of $s_i$, conjugate above and below the real axis. Since $D_{s_1,s_2}=D_{-s_1,s_2}$, $D_{s_1,s_2}=D_{s_1,-s_2}$, etc., there is again no discontinuity along this interval. Even though there are no more branch cuts, in this case there are now poles coming from the double sine functions. We can define the original $\mu_B$ contour in a way that does not pick them up, and the matrix model calculation of section \ref{sec:MM} supports this definition. Alternatively, we will also show in Appendix \ref{app:poles} that they are negligible in the JT gravity limit. The final answer for the two-point amplitude is \begin{equation}\label{eq:2pt} \boxed{\mathcal{A}_{\beta_M}(\ell_1,\ell_2)= N_{\beta_M}\kappa^2 \int ds_1 ds_2 \rho(s_1) \rho(s_2)\hspace{0.05cm} e^{-\mu_B(s_1)\ell_1} e^{-\mu_B(s_2)\ell_2}\hspace{0.05cm}\frac{S_b\left(\beta_M \pm i s_1 \pm i s_2\right)}{S_b(2\beta_M)}}. \end{equation} The prefactor on the right-hand side can be obtained by keeping track of it at each step of the calculation. Surprisingly, all terms conspire to simplify drastically into the $\beta$-independent prefactor $N_{\beta_M}=16 \pi b^2$. In the case of the minimal string this factor should be multiplied by the matter contribution to the two-point function. When viewed as a holographic theory, the result \eqref{eq:2pt} can be interpreted (read from left to right) as a sum over two intermediate channels with their respective densities of states, their propagators over lengths $\ell_i$ weighted by energies $\mu_B(s_i)$, and a matrix element squared of the matter operator between energy eigenstates given by the product of double sine functions. Finally, we can analyze the UV behavior. We will pick $\ell_1<\ell_2$ and call $\tau\equiv \ell_1$ and $\beta=\ell_1+\ell_2$. The UV behavior without gravity is given by $G_0(\tau) \sim 1/\tau^{2h}$ for very small $\tau\to 0$. This arises because, even though the density of states grows exponentially, $\rho(E) \sim e^{\sqrt{E}}$, the matrix elements decay accordingly, such that $\rho(E)|\langle E | \mathcal{O} | E \rangle|^2 \sim E^{2h-1} $ at high energies. The situation when quantum gravity is turned on is surprisingly not too different. Now the density of states grows as a power law at large energies, $\rho(E) \sim E^{p/2}$ with $p = 2/b^2$. We can use the asymptotics of the double sine function $S_b(x) = e^{i \delta(b)} e^{\mp i x (x-Q)}$ when ${\rm Im}(x)\to \pm \infty$, where $\delta(b)$ is a phase that is independent of $x$. We find that the amplitude goes as $S_b(\ldots)\sim E^{-p/2} E^{\frac{2}{b}\beta_M-1}$. The slower growth of the density of states is exactly compensated by a slower decay of matrix elements. This gives an asymptotics that is very similar to the case without gravity, $G(\tau) \sim 1/ \tau^{2 h_{\rm eff}}$, with an effective gravitationally dressed scaling dimension $h_{\rm eff} \equiv \beta_M/b$. This is given, as a function of the bare scaling dimension $\Delta = \beta_M(q+\beta_M)$, as \begin{equation} h_{\rm eff} =\frac{\sqrt{2b^2(2\Delta-1)+b^4+1}-1+b^2}{2b^2}, \end{equation} where we picked the root that has a smooth $b\to0$ limit.
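Explicitly, solving the quadratic relation $\Delta = \beta_M(q+\beta_M)$ for $\beta_M$, with $q = b^{-1}-b$, gives as a short worked step
\begin{equation*}
\beta_M = \frac{-q+\sqrt{q^2+4\Delta}}{2}, \qquad q^2+4\Delta = \frac{2b^2(2\Delta-1)+b^4+1}{b^2},
\end{equation*}
so that
\begin{equation*}
h_{\rm eff} = \frac{\beta_M}{b} = \frac{\sqrt{2b^2(2\Delta-1)+b^4+1}-(1-b^2)}{2b^2},
\end{equation*}
which is the expression above; the other root of the quadratic diverges as $b\to0$.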
When gravity is weakly coupled, $b\to0$ and $h_{\rm eff} (b\to0) \sim \Delta$. On the other hand, when gravity is strong we get $h_{\rm eff}(b\sim 1) \sim \sqrt{\Delta}$, but the qualitative behavior in the UV is the same. In any case, including quantum gravity does not seem to smooth out the UV divergence. \subsection{Boundary three-point function} \label{s:three} In this subsection we will compute the boundary three-point function between three operators with matter parameters $\beta_{M1}$, $\beta_{M2}$ and $\beta_{M3}$, which we will denote as \begin{equation} \mathcal{A}_{123}(\ell_1,\ell_2,\ell_3) \equiv \langle \mathcal{B}_{\beta_{M1}} \mathcal{B}_{\beta_{M2}} \mathcal{B}_{\beta_{M3}} \rangle, \end{equation} and which can be obtained as an inverse Laplace transform of FZZT boundary amplitudes as before. The expressions required in this calculation are very involved, so we will focus only on the length dependence to simplify the presentation. The first object we need is the Liouville three-point function between operators of parameters $\beta_1$, $\beta_2$ and $\beta_3$, which should be thought of as functions of the matter parameters through $\beta_i = b - \beta_{Mi}$. The Ponsot-Teschner \cite{Ponsot:2001ng} boundary three-point function is \begin{equation}\label{bdy3pt} C_{\beta_3\beta_2\beta_1}^{s_3s_2s_1} = \frac{g_{Q-\beta_3}^{s_3s_1}}{g_{\beta_2}^{s_3s_2}g_{\beta_1}^{s_2s_1}} F_{s_2 \beta_3}\left[{}^{\beta_2}_{s_3} \hspace{0.1cm}{}^{\beta_1}_{s_1}\right], \end{equation} where following \cite{Ponsot:2001ng} we define \begin{equation} g_{\beta}^{s_2s_1}\equiv (\pi \mu \gamma(b^2)b^{2-2b^2})^{\beta/2b} \frac{\Gamma_b(Q) \Gamma_b(Q-2\beta) \Gamma_b(Q+ 2i s_1) \Gamma_b(Q-2is_2)}{\Gamma_b(Q-\beta\pm i s_1 \pm i s_2)}. \end{equation} The fusion matrix appearing on the right-hand side of \eqref{bdy3pt} was computed previously by Ponsot and Teschner in \cite{Ponsot:1999uf}. We can rewrite this boundary three-point function in the following suggestive way \begin{eqnarray} C_{\beta_3\beta_2\beta_1}^{s_3s_2s_1} &=&\frac{S_b(2\beta_1)^{\frac{1}{2}}S_b(2\beta_2)^{\frac{1}{2}}S_b(2\beta_3)^{\frac{1}{2}}}{\sqrt{2\pi}} C_{\beta_1,\beta_2,\beta_3}{}^{\frac{1}{2}}\nonumber\\ &&\times \Big[S_b(\bar{\beta}_2\pm is_2\pm is_3)S_b(\bar{\beta}_1\pm is_1\pm is_2)S_b(\bar{\beta}_3\pm is_1\pm is_3) \Big]^{\frac{1}{2}} \sj{\bar{\beta}_1}{\bar{\beta}_2}{\bar{\beta}_3}{s_3}{s_1}{s_2},\nonumber \end{eqnarray} where we defined $\bar{\beta}=Q-\beta$ and used that $\Gamma_b(Q)^2= 2 \pi/\Upsilon'(0)$. The factor appearing in the first line is the DOZZ structure constant \begin{equation} C_{\beta_1,\beta_2,\beta_3} = \frac{(\pi \mu \gamma(b^2) b^{2-2b^2})^{(Q-\beta_{123})/b}\Upsilon'(0)\Upsilon(2\beta_1)\Upsilon(2\beta_2)\Upsilon(2\beta_3)}{\Upsilon(\beta_{1+2-3})\Upsilon(\beta_{3+2-1}) \Upsilon(\beta_{3+1-2})\Upsilon(\beta_{123}-Q)}. \end{equation} The final term is the $b$-deformed $6j$-symbol of SL$(2,\mathbb{R})$ computed by Teschner and Vartanov \cite{Teschner:2012em}. Now we can compute the discontinuity of the boundary three-point function along the negative $\mu_B$ axis. We can do this by applying equation (3.24) of \cite{Kostov:2003uh} three times; the result, up to an $s$-independent prefactor, is \begin{equation} {\rm Disc}[C_{\beta_3\beta_2\beta_1}^{s_3s_2s_1}] \sim \sinh \frac{2\pi s_1}{b} \sinh \frac{2\pi s_2}{b} \sinh \frac{2\pi s_3}{b} C_{\beta_3+\frac{1}{b}\beta_2+\frac{1}{b}\beta_1+\frac{1}{b}}^{s_3s_2s_1}.
\end{equation} Putting everything together and using the relation $\beta=b-\beta_M$, we can write a final answer for the boundary three-point function \begin{eqnarray}\label{eq:3ptbdyfinal} \mathcal{A}_{123}(\ell_1,\ell_2,\ell_3) &=& N_{\beta_1\beta_2\beta_3} \int \prod_{i=1}^3 \Big[ds_i\rho(s_i) e^{- \mu_B(s_i)\ell_i}\Big] \nonumber\\ &&\hspace{-4cm}\times \big[S_b(\beta_{M2}\pm is_2\pm is_3)S_b(\beta_{M1}\pm is_1\pm is_2)S_b(\beta_{M3}\pm is_1\pm is_3) \big]^{\frac{1}{2}} \sj{\beta_{M1}}{\beta_{M2}}{\beta_{M3}}{s_3}{s_1}{s_2}, \end{eqnarray} where the prefactor $N_{\beta_1\beta_2\beta_3}$ includes contributions from both the Liouville and the matter sectors. Interestingly, it is proportional to the square root of the DOZZ structure constant. This prefactor is important since it quantifies the bulk coupling between the three particles created by the boundary operators, but to estimate its size it is important to properly include the matter contribution, which depends on the theory. \subsection{Bulk-boundary two-point function} \label{s:bbtwo} The bulk-boundary two-point function we will consider is of the form \begin{equation} \left\langle \mathcal{T}_{\alpha_M} \, \mathcal{B}_{\beta_M}^{+}\right\rangle_{\ell}. \end{equation} We will take a bulk operator with $\alpha = Q/2 + i P$, use $\beta_1 = Q/2 + is$ as the FZZT boundary label, and take $\beta = b- \beta_M$ for the boundary operator. \\ The Liouville amplitude was listed in \eqref{bbdy}. We transform this to fixed length by evaluating the discontinuity across the branch cut on the negative real $\mu_B$-axis. To do this, the following functional discontinuity relation can be used:\footnote{The analogous relation for a shift in $b$ was written in eq. (3.20) of \cite{Kostov:2003uh}, in turn extracted from the Teschner trick computation of \cite{Hosomichi:2001xc}. We corrected a typo in that equation in the middle Gamma-function in the denominator.} \begin{align} R_{s+\frac{i}{2b}} - R_{s-\frac{i}{2b}} &= \sinh \frac{2\pi}{b}s \,\, R_{s}(\alpha,\beta+1/b) \\ &\times 2\pi \left( \frac{\mu}{\pi \gamma(-b^2)}\right)^{1/2} \frac{\Gamma(1-\frac{2}{b}\beta)\Gamma(1-\frac{1}{b^2} - \frac{2}{b}\beta)}{\Gamma^2(1-\frac{1}{b}\beta)\Gamma(1 - \frac{1}{b}\beta - \frac{2}{b}\alpha + \frac{1}{b}Q) \Gamma(1- \frac{1}{b}\beta + \frac{2}{b}\alpha - \frac{1}{b}Q)}. \nonumber \end{align} The resulting bulk-boundary two-point function has the following complicated form: \begin{align} \label{oneone} &\int_{0}^{+\infty} ds \rho(s) e^{-\mu_B(s) \ell} \, \Gamma(b(Q-2\beta)) \frac{1}{4\pi^2 b} \frac{\Gamma(-2iP/b)}{\Gamma(1+2iPb)}\\ &\times\frac{\Gamma(1-2b^{-1}\beta)\Gamma(1-b^{-2}-2b^{-1}\beta)}{\Gamma^2(1-b^{-1}\beta)\Gamma(1-b^{-1}\beta-2b^{-1}\alpha+b^{-1}Q)\Gamma(1-b^{-1}\beta+2b^{-1}\alpha-b^{-1}Q)} \nonumber \\ &\times \frac{\Gamma_b^3(\beta_M)}{\Gamma_b(Q) \Gamma_b(-Q+2\beta_M)\Gamma_b(Q-\beta_M)} \frac{\Gamma_b(2\alpha-Q + \beta_M)\Gamma_b(Q-2\alpha+\beta_M)}{\Gamma_b(2\alpha) \Gamma_b(Q-2\alpha)} \,\, I_{\beta_1\alpha}(\beta + 1/b). \nonumber \end{align} The first line contains the legpole factors of the boundary operator, and the normalization of the bulk operator \eqref{onepointins}. The second line contains the prefactors coming from deforming the contour.
The final line is the Liouville bulk-boundary two-point function in terms of the modular $S$-matrix, defined by Teschner and Vartanov as \cite{Teschner:2012em}: \begin{align} S^{\scriptscriptstyle \text{PT}}_{\beta_1 \beta_2}&(\alpha_0) \equiv \frac{S_0{}^{\beta_2} e^{\frac{\pi i}{2} \Delta \alpha_0}}{S_b(\alpha_0)} I_{\beta_1\beta_2}(\alpha_0) \\ &= \frac{S_0{}^{\beta_2} e^{\frac{\pi i}{2} \Delta \alpha_0}}{S_b(\alpha_0)} \int_{\mathbb{R}}dt e^{2\pi t (2\beta_1-Q)}\frac{S_b(\frac{1}{2}(2\beta_2+\alpha_0-Q)+it)}{S_b(\frac{1}{2}(2\beta_2-\alpha_0+Q)+it)}\frac{S_b(\frac{1}{2}(2\beta_2+\alpha_0-Q)-it)}{S_b(\frac{1}{2}(2\beta_2-\alpha_0+Q)-it)}. \end{align} The integral $I_{\beta_1\alpha}(\beta + 1/b) $ can be evaluated as: \begin{align} I_{\beta_1\alpha}(\beta + 1/b) &= \int_{\mathbb{R}}dt e^{2\pi t (2\beta_1-Q)}\frac{S_b(\frac{1}{2}(2\alpha+(\beta+1/b)-Q)+it)}{S_b(\frac{1}{2}(2\alpha-(\beta+1/b)+Q)+it)}\frac{S_b(\frac{1}{2}(2\alpha+(\beta+1/b)-Q)-it)}{S_b(\frac{1}{2}(2\alpha-(\beta+1/b)+Q)-it)} \nonumber \\ &= \frac{1}{S_b(\beta_M)^2}\int_{\mathbb{R}}dt e^{4\pi t i P} S_b( \beta_M/2 \pm i s \pm it), \end{align} where we used the property $I_{\beta_1\beta_2}(\alpha_0) = S_b(\alpha_0)^2 I_{\beta_2\beta_1}(Q-\alpha_0)$ to swap the roles of $\alpha$ and $\beta_1$. This allows for a well-defined JT limit below. Using the shift identities and the gamma-function reflection identity, the integrand of \eqref{oneone} can be simplified into: \begin{align} N_{\beta_M,P} \int_{\mathbb{R}}dt\hspace{0.1cm} e^{4\pi t i P} \frac{S_b( \beta_M/2 \pm i s \pm it)}{S_b(\beta_M)}, \end{align} which contains (in order) the prefactor $2/b$ for the bulk operator, a factor for the boundary operator, and a coupling between these in the third factor, given by \begin{equation} \label{prebb} N_{\beta_M, P}= \frac{2}{b} \frac{\Gamma_b(1/b +\beta_M)}{\Gamma_b(1/b + 2 \beta_M)}\frac{\Gamma_b(\frac{1}{b} + \beta_M \pm 2iP)}{\Gamma_b(\frac{1}{b} \pm 2iP)}. \end{equation} Upon using $t\to -t$ to write the $t$-integral over $\mathbb{R}^+$, we can write: \begin{equation} e^{4\pi t i P} \,\, \to \,\,\cos 4\pi P t = S_P{}^t = \frac{S_P{}^t}{S_0{}^t} S_0{}^t, \end{equation} in terms of the Virasoro modular $S$-matrix, where $S_0{}^t = \rho(t) = \sinh 2 \pi b t \sinh \frac{2\pi t}{b}$. We then obtain for the full result \eqref{oneone}: \begin{equation}\label{eq:bulkbdyfinal} \boxed{ \left\langle \mathcal{T}_{\alpha_M} \, \mathcal{B}_{\beta_M}^{+}\right\rangle = N_{\beta_M, P} \int_{0}^{+\infty} ds dt \rho(s) \rho(t) e^{-\mu_B(s) \ell} \, \frac{S_P{}^t}{S_0{}^t} \, \frac{S_b(\beta_M/2 \pm i s \pm it)}{S_b(\beta_M)}}. \end{equation} As a check on this formula, taking $\beta_M \to 0$, we can use the identity \begin{equation} \lim_{\beta_M \to 0}\frac{S_b(\beta_M/2 \pm i s \pm it)}{S_b(\beta_M)} = \frac{\delta(s-t)}{S_0{}^{t}}, \end{equation} to obtain \begin{align} \frac{2}{b} \int_{0}^{+\infty} ds e^{-\mu_B(s) \ell} S_P{}^s, \end{align} which is indeed the bulk one-point function we derived in section \ref{sec:bulk1pt}. \subsection{JT gravity limit} In this section we take the semiclassical limit of the formulas derived above, for which the central charge of the Liouville mode becomes large. We will see in each case a match with the analogous calculation done previously in JT gravity.
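For convenience, we collect the scalings that define this limit, as introduced in section \ref{sec:proprho} and used repeatedly below; all JT quantities are held fixed as $b\to0$:
\begin{equation*}
E = \kappa\big(1+2\pi^2 b^4 E_{\rm JT}\big), \qquad \ell = \frac{\ell_{\rm JT}}{2\pi^2\kappa b^4}, \qquad s = b k, \qquad \beta_M = b h, \qquad P = \frac{\lambda}{2b}.
\end{equation*}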
\begin{center} \textbf{Bulk one-point function} \end{center} We will begin with the bulk one-point function \begin{equation} \left\langle \mathcal{T}\right\rangle_\ell = \frac{2}{b} \int_{0}^{+\infty} ds e^{-\ell \kappa \cosh(2\pi b s)} \cos 4 \pi P s. \end{equation} We take the $b\to0$ limit and write it in terms of $\ell_{\rm JT}$ (see section \ref{sec:proprho} for its definition in terms of $\ell$). In order to have a non-trivial limit, we consider heavy matter operators such that the Liouville momentum scales as $P=\lambda/2b$, with finite $\lambda$. Then the one-point function becomes \begin{equation} \left\langle \mathcal{T}\right\rangle_\ell = 2 \int_{0}^{+\infty} dk e^{-\ell_{\rm JT} k^2 } \cos 2 \pi \lambda k. \end{equation} This expression coincides with the JT gravity partition function on a single trumpet of geodesic length $2\pi \lambda$. Therefore in this limit the bulk operator has the effect of creating a macroscopic hole of a given length. These single defect partition functions in JT gravity are known to be related to functional integrals within the different Virasoro coadjoint orbits \cite{Mertens:2019tcm},\footnote{See also \cite{Nayak:2019evx}.} where the choice of defect selects a particular orbit. For $\lambda \in \mathbb{R}$, these can be identified with the hyperbolic orbits of the Virasoro group. On the other hand, for imaginary $\lambda \equiv i\theta$ this partition function is equivalent to the JT gravity calculation with a conical defect inside the disk, with angular identification $\varphi \sim \varphi + 2 \pi \theta$. These are identified with functional integrals along the elliptic coadjoint orbits of the Virasoro group. For $\theta\in \mathbb{N}$, these become replicated geometries. Taking the JT limit of \eqref{bulkexc}, we get: \begin{equation} \left\langle \mathcal{T^{\scriptscriptstyle \text{U}}}\right\rangle_\ell = 4 \int_{0}^{+\infty} dk e^{-\ell_{\rm JT} k^2 } k \sinh 2 \pi n k, \end{equation} matching the JT exceptional elliptic defect amplitudes discussed in \cite{Mertens:2019tcm}. Starting instead with \eqref{degvir}, and setting $n=\lambda/b^2$ with $\lambda$ a new continuous quantity, one gets the limit: \begin{equation} \left\langle \mathcal{T^{\text{deg}}}\right\rangle_\ell = 4 \int_{0}^{+\infty} dk e^{-\ell_{\rm JT} k^2 } \sinh 2 \pi \lambda k \sinh 2 \pi n k, \end{equation} which we proposed in \cite{Mertens:2019tcm} to be related to the exceptional hyperbolic Virasoro coadjoint orbits. In conclusion, the insertion of a bulk operator has the effect of creating a hole (for real $P$) or a localized conical defect (for imaginary $P$). We checked this in the semiclassical JT limit, but it is consistent with the classical solution of the Liouville equation; see for example the discussion in \cite{Moore:1991ir}. \begin{center} \textbf{Boundary two-point function} \end{center} Now we will take the JT gravity limit of the two-point function computed in \eqref{eq:2pt}. We will take the matter operator with parameter $\beta_M = b h$ and keep $h$ fixed in the $b\to0$ limit. We will also take the boundary lengths to be large, with $\ell_{{\rm JT}1}$ and $\ell_{{\rm JT}2}$ fixed.
Then, up to $b$-dependent prefactors, we can write the two-point function as \begin{equation} \label{JTtwo} \mathcal{A}_{\beta_M}(\ell_1,\ell_2) \sim \mu^{\frac{1}{2b^2}} \int dk_1 dk_2 \rho_{\rm JT}(k_1)\rho_{\rm JT}(k_2)e^{-k_1^2 \ell_{{\rm JT}1}}e^{-k_2^2 \ell_{{\rm JT}2}}\frac{\Gamma(h \pm i k_1 \pm i k_2)}{\Gamma(2h)}, \end{equation} where $\rho_{\rm JT}(k) = k \sinh 2\pi k$ and we used the small-$b$ asymptotics of the double sine function $S_b(bx) \propto \Gamma(x)$. This expression coincides with the JT gravity two-point function computed in \cite{Mertens:2017mtv,Lam:2018pvp}. In the limit of large $\ell_{{\rm JT}1}$ and $\ell_{{\rm JT}2}$ this formula simplifies further, since the Schwarzian mode becomes weakly coupled. Renaming $\tau = {\rm min} ( \ell_{{\rm JT}1}, \ell_{{\rm JT}2})$ and $\beta=\ell_{{\rm JT}1}+\ell_{{\rm JT}2}$, for large $\beta,\tau$ we get $\mathcal{A} \sim ( \sin \frac{\pi}{\beta}\tau)^{-2h}$. This is precisely the boundary correlator one would get if the gravitational mode were turned off. In order to obtain this limit we need $b$ to be small. Therefore in general theories there is no regime where the gravitational dressing becomes weakly coupled.\footnote{Similar drastic effects of gravitational dressings can also happen in higher dimensions \cite{Lewkowycz:2016ukf}.} \begin{center} \textbf{Boundary three-point function} \end{center} Following the previous calculation, we take the limit of the three-point function \eqref{eq:3ptbdyfinal} when the three boundary lengths become large, with fixed $\ell_{{\rm JT}i}$ and $\beta_{{\rm M}i}= b h_i$ for $i=1,2,3$. The integrals are then dominated by $s_i = b k_i$. Ignoring length-independent prefactors and using the asymptotics of the double sine functions, we get \begin{eqnarray} \langle \mathcal{B}_1 \mathcal{B}_2 \mathcal{B}_3 \rangle&\sim& \int \prod_{i=1}^3 \Big[dk_i\rho_{\rm JT}(k_i) e^{- \ell_{{\rm JT}i}k_i^2}\Big] \nonumber\\ &&\hspace{-4cm}\times \big[\Gamma(h_2\pm ik_2\pm ik_3)\Gamma(h_1\pm ik_1\pm ik_2)\Gamma(h_3\pm ik_1\pm ik_3) \big]^{\frac{1}{2}} \sj{h_1}{h_2}{h_3}{k_3}{k_1}{k_2}_{\text{SL}(2,\mathbb{R})}, \end{eqnarray} where the expression now involves the $6j$-symbol of the classical group SL$(2,\mathbb{R})$ between three principal series representations labeled by $k_i$ and three discrete representations labeled by $h_i$. This is precisely the same structure as the JT gravity three-point function computed in equation (4.35) of \cite{Iliesiu:2019xuh}. \begin{center} \textbf{Bulk-boundary two-point function} \end{center} Finally, we will take the JT limit of the bulk-boundary correlator given in equation \eqref{eq:bulkbdyfinal}. We set $\beta_M = bh$, and $P = \lambda/2b$. It is instructive to work this out for $h \in \mathbb{N}$. In this particular case, the last factor of the prefactor \eqref{prebb} simplifies to: \begin{equation} \frac{\Gamma_b(\frac{1}{b} + \beta_M \pm i \frac{\lambda}{b})}{\Gamma_b(\frac{1}{b} \pm i \frac{\lambda}{b})} \to 2\pi b \left(\frac{\sinh \pi \lambda}{\pi \lambda} \right)^{h}, \end{equation} for a hyperbolic (macroscopic) defect with geodesic circumference $2\pi \lambda$. \\~\\ For an elliptic (microscopic) insertion, we set $\lambda = i \theta$, and obtain instead $2\pi b \left(\frac{\sin \pi \theta}{\pi \theta} \right)^{h}$. Notice that this factor vanishes for $\theta \in \mathbb{N}$, which are precisely the values of the Virasoro exceptional elliptic coadjoint orbits.
The other prefactors scale in uninteresting ways and can be absorbed into the normalization of the bulk and boundary operators separately. To find a finite result, we rescale $t\to bt$ and use the small $b$-asymptotics of the $S_b$-function to get: \begin{align} \label{11lim} \left\langle \mathcal{T}_{\alpha_M} \, \mathcal{B}_{\beta_M}^{+} \right\rangle &= 2\pi b \left(\frac{\sinh \pi \lambda}{\pi \lambda} \right)^{h} \int_{0}^{+\infty} dk dt \rho_{\rm JT}(k) \rho_{\rm JT}(t) e^{- \ell k^2} \chi_t(\lambda) \frac{\Gamma(h/2 \pm i k \pm it)}{\Gamma(h)}, \end{align} in terms of the character insertion $\chi_t(\lambda)$ for $\lambda$ a hyperbolic conjugacy class element \cite{Mertens:2019tcm}: \begin{equation} \chi_t(\lambda) = \frac{\cos 2 \pi \lambda t}{t \sinh 2\pi t}. \end{equation} The $t$-momentum variable has no exponential factor, and hence no boundary segment. The Schwarzian diagram is sketched in Figure \ref{modularS}, with a bilocal line lassoing around the defect. \begin{figure}[h!] \centering \begin{tikzpicture}[scale=1] \draw[fill=blue!40!white,opacity=0.7] (0,0) ellipse (1.5 and 1.5); \draw[red] (0,0.5) -- (1.5,0) -- (0,-0.5); \draw[red] (0,-0.5) arc (180+94:180-94:0.5); \draw[fill] (1.5,0) circle (0.06); \node at (0,-1.985) {}; \node at (0,1.8) {\small $\ell$}; \node at (0.6,0) {\small $t$}; \node at (-0.8,0.8) {\small $k$}; \draw[fill,green!40!black] (0,0) circle (0.06); \node[green!40!black] at (-0.23,0) {\small $\lambda$}; \node at (1.9,0) {\small $\mathcal{B}$}; \end{tikzpicture} \caption{Schwarzian limit of the modular S-matrix, and hence the bulk-boundary propagator. The answer is given by the expectation value of a boundary-anchored bilocal line (red line) encircling the defect (green dot). This line separates two regions with energy parameters $k$ (region without defect) and $t$ (region with defect).} \label{modularS} \end{figure} Notice that the bilocal line has \emph{half} the value of $h$ of the boundary operator. This can be appreciated by viewing this single boundary operator as the renormalized point-split version of two boundary operators with half the value of $h$: \begin{equation} :\lim_{x_2\to x_1} e^{\frac{\beta_M}{2}\chi_1}e^{\frac{\beta_M}{2}\chi_2}:\,\, \equiv \,\, e^{\beta_M \chi}, \end{equation} leading indeed to the vertex functions present in \eqref{11lim}. We also remark that this renormalization removes the coincident UV divergence of the two constituent boundary operators, which would correspond in the JT limit to a contractible bilocal line (i.e. \emph{not} encircling the defect). \section{A quantum group perspective} \label{s:qg} We have seen that the propagation factors in the amplitudes $e^{-\mu_B(s) \ell}$ (as in e.g. \eqref{twoa}) contain in the exponent the factor $\cosh 2 \pi b s$, and that the measure is $\rho(s) = \sinh 2\pi b s \sinh \frac{2 \pi s}{b}$. In this section we highlight the quantum group structure that underlies these expressions. \\ The quantity $C_s \equiv \cosh 2 \pi b s$ is identified with the Casimir of the (continuous) self-dual irreps $\mathcal{P}_s$, labeled by $s$, of $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$ with $q=e^{\pi i b^2}$. The associated Plancherel measure on this set of representations is \begin{equation} d\mu(s) = ds \sinh 2\pi b s \sinh \frac{2 \pi s}{b}.
\end{equation} This class of representations is characterized by the following properties \cite{Ponsot:1999uf,Ponsot:2000mt,*Bytsko:2002br,*Bytsko:2006ut,*Ip}: \begin{itemize} \item It is a \emph{positive} representation, in the sense that all generators are represented by positive self-adjoint operators. \item They are closed under tensor product in the sense: \begin{equation} \mathcal{P}_{s_1} \otimes \mathcal{P}_{s_2} \simeq \int^{\oplus} d\mu(s) \mathcal{P}_s. \end{equation} \item They are simultaneously representations of the dual quantum group $\mathcal{U}_{\tilde{q}}(\mathfrak{sl}(2,\mathbb{R}))$ where $\tilde{q} = e^{\pi i b^{-2}}$. Hence they can be viewed naturally as representations of the modular double $\mathcal{U}_{q}(\mathfrak{sl}(2,\mathbb{R})) \otimes \mathcal{U}_{\tilde{q}}(\mathfrak{sl}(2,\mathbb{R}))$. \end{itemize} This means the expressions \eqref{eq:b1}, \eqref{twoa}, \eqref{threea} and \eqref{twoabb} have the same group-theoretic structure as those of 2d Yang-Mills or 2d BF theory, but based on the modular double of $\mathcal{U}_{q}(\mathfrak{sl}(2,\mathbb{R}))$ as the underlying quantum group structure. Notice that the restriction to only these self-dual representations is a strong constraint on the group-theoretic structure, but it is one that is necessary to make contact with geometric notions, as can be seen through the link with Teichm\"uller theory \cite{Nidaiev:2013bda}. Roughly speaking, the positivity constraint ensures one only has eigenstates of positive geodesic distance. \\~\\ JT gravity can be realized in a similar group-theoretical language, based on the subsemigroup SL${}^+(2,\mathbb{R})$ structure \cite{Blommaert:2018iqz}, where the defining representation of the subsemigroup consists of all positive $2\times 2$ matrices. This positivity is directly related to having only hyperbolic monodromies, and hence only smooth (i.e. not punctured) geometries. Additionally, one has to impose gravitational boundary conditions at all holographic boundaries. These boundary conditions enforce a coset structure of the underlying group and reduce the complete set of intermediate states from the full space of irrep matrix elements $R_{ab}(g)$ (by the Peter-Weyl theorem) to the double coset matrix elements $R_{00}(x)$, where both indices are fixed by the gravitational constraints. \\ From an SL${}^+(2,\mathbb{R})$ perspective, the generators $J^+$ and $J^-$ are constrained as $J^+=1$, $J^-=1$ for the ket and the bra of the matrix element, respectively. This corresponds to imposing constraints on the parabolic generators, and we call the resulting matrix element a mixed parabolic matrix element. In the mathematics literature, such matrix elements are called Whittaker functions. \\~\\ The vertex function in JT gravity $\frac{\Gamma(h \pm i k_1 \pm i k_2)^{1/2}}{\Gamma(2h)^{1/2}}$ is known to correspond to the integral definition of a 3j-symbol. For a compact group, one writes the expression as: \begin{equation} \int d g R_{1,m_1n_1}(g) R_{2,m_2n_2}(g) R_{3,m_3n_3}(g) = \tj{R_1}{R_2}{R_3}{m_1}{m_2}{m_3}\tj{R_1}{R_2}{R_3}{n_1}{n_2}{n_3}.\label{3R} \end{equation} In the JT gravity case, we have insertions of two principal series representation mixed parabolic matrix elements, and one insertion of a discrete representation (corresponding to the operator insertion): \begin{equation} \label{JT3j} \int d x R_{k_1,00}(x) R_{h,00}(x) R_{k_2,00}(x) = \int_{-\infty}^{+\infty}dx\, K_{2ik_1}(e^{x}) e^{2 h x} K_{2ik_2}(e^{x}) = 2^{2h-3}\frac{\Gamma(h \pm i k_1 \pm i k_2)}{\Gamma(2h)}.
\end{equation} \\~\\ \noindent We here illustrate that this structure persists in the $q$-deformed case, and applies in particular to the vertex functions \eqref{twoam} we wrote down in this work. The Whittaker function of the principal series representation of $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$ was derived in \cite{Kharchev:2001rs}: \begin{equation} \label{qmelb} \psi^{\epsilon}_s(x) = e^{\pi i 2 s x} \int_{-\infty}^{+\infty} \frac{d\zeta}{(2\pi b)^{-2i\zeta/b-2is/b}} S_b(-i\zeta) S_b(-i 2 s -i \zeta ) e^{-\pi i \epsilon (\zeta^2 + 2s \zeta)} e^{2\pi i \zeta x} , \end{equation} where $\epsilon = \pm 1$. In the notation of \cite{Kharchev:2001rs}, this corresponds to choosing $g = (2\pi b)^{1/b}$. It satisfies the following finite-difference equations: \begin{align} \label{wdw} \left(1+(2\pi b)^2 e^{2\pi b x - i \pi b^2} \right) \psi_s^{-}(x-ib) + \psi_s^{-}(x+ib) &= 2\cosh 2\pi b s \, \psi^{-}_s(x), \\ \psi_s^{+}(x-ib) + \left(1+(2\pi b)^2 e^{2\pi b x + i \pi b^2} \right) \psi_s^{+}(x+ib) &= 2\cosh 2\pi b s \, \psi^{+}_s(x), \end{align} which descend from the Casimir equation on $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$ by constraining a parabolic generator in both the left- and right-regular representation. The right-hand side contains the Casimir eigenvalue in the irrep $s$. This structure parallels the way one constrains the $\mathfrak{sl}(2,\mathbb{R})$ Casimir equation to produce the 1d Liouville equation: indeed, the classical $b\to 0$ limit transforms both finite-difference equations into the 1d Liouville differential equation. The options $\epsilon =\pm 1$ can be viewed as different discretizations (quantum versions) of the same classical problem. At the level of the eigenfunctions, one has the limiting behavior: \begin{equation} \lim_{b\to 0}\psi^{\epsilon}_s \left(\frac{x}{\pi b}\right) = \frac{1}{\pi b} K_{2 i s/b}\left(\frac{2}{b}e^{x}\right). \end{equation} Setting $s=bk$ and shifting $x$, the function $K_{2 i k}\left(e^{x}\right)$ is known as the Whittaker function of SL${}^+(2,\mathbb{R})$ and was inserted in \eqref{JT3j}. It is equally the 1d Liouville Schr\"odinger eigenfunction.\footnote{Crucially, in the same notation, the Whittaker function of SL$(2,\mathbb{R})$ is $\cosh \pi k \, K_{2i k}(e^{x})$ and this difference in prefactor in the end produces the SL$(2,\mathbb{R})$ Plancherel measure $d\mu(k) = dk \frac{k \sinh 2\pi k}{\cosh^2 \pi k} = 2 dk k \tanh \pi k$, in stark contrast to the SL${}^+(2,\mathbb{R})$ Plancherel measure $d\mu(k) = dk k \sinh 2\pi k$, relevant for gravity. One may encounter this Whittaker function with an additional factor of $e^x$ present: this compensates for the Haar measure on the group (coset) manifold, and one can choose to remove it and simultaneously take a flat measure in the $x$-integral as we have done.} The modified Bessel function has a Mellin-Barnes integral representation \begin{equation} \label{mbk} K_\nu(z) = \frac{1}{4\pi i }\left(\frac{z}{2}\right)^{\nu} \int_{-i\infty}^{+i\infty} dt \Gamma(t) \Gamma(t-\nu) \left(\frac{z}{2}\right)^{-2t}, \end{equation} and the above formula \eqref{qmelb} is its $q$-deformed version. We need to scale $s \to b k$ in order to obtain a finite classical limit. By analogy with the left-hand side of \eqref{JT3j}, we hence compute the integral of two Whittaker functions and one discrete insertion, of the type ($\beta_M = b h$): \begin{equation} \int_{-\infty}^{+\infty} dx\, \psi^{\epsilon}_{s_1} (x) \psi^{\epsilon * }_{s_2} (x) e^{2 \beta_M \pi x}.
\end{equation} Inserting the explicit expression \eqref{qmelb}, one can evaluate the $x$-integral as: \begin{equation} \int_{-\infty}^{+\infty} dx e^{\pi i (2s_1 - 2s_2 + 2 \zeta_1 - 2 \zeta_2)x + 2 \beta_M \pi x} = \delta(\zeta_1-\zeta_2 + s_1 - s_2 - i \beta_M). \end{equation} We get: \begin{align} &\int_{-\infty}^{+\infty} dx \psi^{\epsilon}_{s_1} (x) \psi^{\epsilon * }_{s_2} (x) e^{2 \beta_M \pi x} = e^{-\pi i \epsilon(\beta_M^2-s_1^2+s_2^2+2i s_1 \beta_M)} \\ &\times \int_{-\infty}^{+\infty} \frac{d\zeta_1}{(2\pi b)^{2\beta_{M}/b}} e^{\pi 2 \epsilon \beta_M \zeta_1} S_b(-i\zeta_1) S_b(-i \zeta_1-2is_1) S_b(i\zeta_1+is_1 -is_2 + \beta_M)S_b(i\zeta_1+is_1 + is_2 + \beta_M). \nonumber \end{align} The $q$-deformed first Barnes lemma is: \begin{align} \label{qbarne} \int d\tau e^{\pi \tau (\alpha+\beta+\gamma+\delta)}&S_b(\alpha+i\tau)S_b(\beta+i\tau)S_b(\gamma-i\tau)S_b(\delta-i\tau) \\ &= e^{\pi i (\alpha\beta-\gamma\delta)} \frac{S_b(\alpha+\gamma)S_b(\alpha+\delta)S_b(\beta+\gamma)S_b(\beta+\delta)}{S_b(\alpha+\beta+\gamma+\delta)}. \nonumber \end{align} Using \eqref{qbarne}, we can do the remaining integral and obtain finally:\footnote{The prefactor is immaterial and can be absorbed into the normalization of the boundary operator. Reinstating the parameter $g$ of \cite{Kharchev:2001rs}, the prefactor would be $\frac{1}{g^{2\beta_M}}$ instead.} \begin{equation}\label{idwhitsb} \boxed{ \int_{-\infty}^{+\infty} dx \hspace{0.1cm} \psi^{\epsilon}_{s_1} (x) \psi^{\epsilon * }_{s_2} (x) e^{2 \beta_M \pi x} = \frac{1}{(2\pi b)^{2\beta_{M}/b}} \frac{S_b(\beta_M \pm is_1 \pm is_2)}{S_b(2\beta_M) }}\, . \end{equation} Following the structure of \eqref{JT3j}, we interpret this as the square of the 3j-symbol with two mixed parabolic entries, and one discrete parabolic entry, of the quantum group $\mathcal{U}_q(\mathfrak{sl}(2,\mathbb{R}))$. As a check, taking the $b\to 0$ limit of both sides, we get the equality: \begin{equation} \left(\frac{b}{2}\right)^{2h}\frac{1}{(\pi b)^3} \int_{-\infty}^{+\infty} dx K_{2ik_1}(e^{x}) K_{2ik_2}(e^{x}) e^{2hx} = b^{2h} \frac{1}{(2\pi b)^3} \frac{\Gamma(h\pm i k_1 \pm ik_2)}{\Gamma(2h)}, \end{equation} matching back onto \eqref{JT3j}. \subsection{Wheeler-DeWitt wavefunction} We have computed above the partition function on a hyperbolic Euclidean disk with a fixed length boundary. We can cut this disk along a bulk geodesic with length function $L$, that joins two boundary points separated by a distance $\beta/2$. This can be interpreted as a Euclidean preparation of the Wheeler-DeWitt (WdW) wavefunction $\Psi_\beta(L)$ corresponding to the two-sided black hole, see figure \ref{fig:wdw}. This wavefunction has been studied in the context of JT gravity in \cite{Harlow:2018tqv}.\footnote{This is different from the radial-quantization WdW wavefunction studied for example in \cite{Maldacena:2019cbz,Iliesiu:2020zld}.} Based on the properties of the Whittaker function $\psi_s(x)$ above, we propose the following identification \begin{equation} \Psi_\beta (L) = \int ds\hspace{0.1cm} e^{- \frac{1}{2}\beta \mu_B(s)} \rho(s) \psi^+_s(L), \end{equation} where we take $\epsilon=+1$ for concreteness. When we take the JT gravity limit, the density of states becomes the Schwarzian density of states, while the Whittaker function becomes a Bessel function derived directly from JT gravity in \cite{Harlow:2018tqv}. We have identified the group (coset) parameter $x$ of the Whittaker function with the argument of the wavefunction $L$.
In the classical $b\to 0$ limit, this quantity is related to the boundary-to-boundary geodesic length $d$ as $x \to e^{d/2}$. The wavefunction can also be interpreted as the Euclidean partition function in the disk with an end-of-the-world brane. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.75] \pgftext{\includegraphics[scale=0.3]{WdW.pdf}} at (0,0); \draw (2,-1) node {\small $\beta/2$}; \draw (0,1) node {\small $L$}; \end{tikzpicture} \caption{\label{fig:wdw} Depiction of the geometry creating the Hartle-Hawking state $\Psi_\beta(L)$. The state is labeled by a parameter $\beta$ that gives the proper length of the boundary segment preparing the state. The constant time slice is labeled by $L$, which is related to the geodesic distance along the slice.} \end{center} \end{figure} To verify this identification, we can rewrite the exact two point function \eqref{twoa} in the following form \begin{equation} \langle \mathcal{B}\hspace{0.05cm} \mathcal{B} \rangle = \int dL \hspace{0.1cm} e^{2 \beta_M \pi L} \hspace{0.1cm}\Psi_{\ell_1}(L)^\dagger \hspace{0.1cm} \Psi_{\ell_2}(L), \end{equation} where we used the relation \eqref{idwhitsb}. This expression can be interpreted as gluing two portions of the disk along their bulk geodesic with the inclusion of the matter propagator $e^{2 \beta_M \pi L}$. This is structurally identical to the JT gravity expressions, and it would be interesting to give a more rigorous derivation from Liouville gravity. Finally, the wavefunction $\Psi_\beta(L)$ proposed here satisfies an interesting equation. We can rewrite the wavefunction for the same Hartle-Hawking state in an energy basis, which becomes the Whittaker function $\Psi_{E=\mu_B(s)}(L) = \psi^+_s(L)$ and satisfies the difference equation \eqref{wdw}. In terms of the fixed length basis this equation is \begin{equation} \Psi_\beta(L-ib) + \big(1+(2\pi b)^2e^{2\pi b L + i \pi b^2} \big) \Psi_\beta(L+ib) = -4 \frac{\partial}{\partial \beta} \Psi_\beta(L), \end{equation} which can be viewed as a discretized (due to the $q$-deformation) ancestor of the Wheeler-DeWitt equation. This suggests that Liouville quantum gravity effectively discretizes the spacetime in a way we do not understand sufficiently, and this discreteness might be related to the quantum group structure present in the theory. \subsection{Degenerate fusion algebra} Modified Bessel functions satisfy the following identity: \begin{equation} \label{besselprop} K_{\alpha+1}(x) - K_{\alpha-1}(x) = \frac{2\alpha}{x}K_{\alpha}(x), \end{equation} which can be proved directly from the Mellin-Barnes representation \eqref{mbk}. This identity is important since it acts as the degenerate fusion rule that directly leads to the degenerate $h\in -\mathbb{N}/2$ vertex functions for JT gravity \cite{Mertens:2020pfe}, where the vertex function in e.g. \eqref{JTtwo} is singular. Following a similar strategy with \eqref{qmelb}, one can prove the following fusion property for $\epsilon = \pm 1$: \begin{equation} \label{qfus} \psi^{\epsilon}_{s+ib/2}(x) - \psi^{\epsilon}_{s-ib/2}(x) = \frac{\sinh 2 \pi b s}{ \pi i b\, e^{\pi b x}} \psi^{\epsilon}_s(x). \end{equation} This relation is the basis for deriving the minimal string correlators with $\beta_M \in - b\, \mathbb{N}/2$ directly from the continuum approach.
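Both the recurrence \eqref{besselprop} (at the complex orders used here) and the 3j-symbol integral \eqref{JT3j} are easy to spot-check numerically. The following is a minimal sketch assuming the standard Python library \texttt{mpmath}; it is purely illustrative and plays no role in the derivations: \begin{verbatim}
import mpmath as mp

mp.mp.dps = 25  # working precision

# Recurrence K_{a+1}(x) - K_{a-1}(x) = (2a/x) K_a(x), eq. (besselprop),
# checked at an imaginary order a = 2ik of the type entering the fusion rules.
a, x = mp.mpc(0, 1.6), mp.mpf('2.3')
lhs = mp.besselk(a + 1, x) - mp.besselk(a - 1, x)
print(mp.chop(lhs - 2*a/x*mp.besselk(a, x)))  # 0.0

# Mixed parabolic 3j-symbol integral, eq. (JT3j), after substituting
# t = e^x, so the integrand becomes t^(2h-1) K_{2ik1}(t) K_{2ik2}(t).
h, k1, k2 = mp.mpf('0.75'), mp.mpf('0.4'), mp.mpf('0.6')
lhs = mp.quad(lambda t: t**(2*h - 1)*mp.besselk(2j*k1, t)*mp.besselk(2j*k2, t),
              [0, 1, mp.inf])
rhs = 2**(2*h - 3)/mp.gamma(2*h) \
    * mp.gamma(h + 1j*(k1 + k2))*mp.gamma(h + 1j*(k1 - k2)) \
    * mp.gamma(h - 1j*(k1 - k2))*mp.gamma(h - 1j*(k1 + k2))
print(mp.chop(lhs - rhs))  # 0.0
\end{verbatim}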
The trick is to successively apply \eqref{qfus} to compute ($j \in \mathbb{N}/2$): \begin{equation} \int_{-\infty}^{+\infty} dx \psi^{\epsilon}_{s_1}(x) \psi^{\epsilon * }_{s_2}(x) e^{- 2 \pi b j x}, \end{equation} until we reach \begin{equation} \int_{-\infty}^{+\infty} dx \psi^{\epsilon}_{s_1}(x) \psi^{\epsilon * }_{s_2}(x) = \frac{\delta(s_1-s_2)}{S_0^{s_1}}. \end{equation} After providing a matrix model computation of these minimal string correlators, we will come back to this approach using \eqref{qfus} and check explicitly that they indeed match. \section{Dual matrix models}\label{sec:MM} In this section we will give a matrix model interpretation of some of the results in the previous sections for the case of the $(2,p)$ minimal string. This case is special since the dual is a single matrix model. The discrete calculation of disk boundary correlators was proposed in \cite{Kostov:2002uq} (see also \cite{Hosomichi:2008th, Ishiki:2010wb, Bourgine:2010ja}). Besides the explicit checks, the new ingredient is to interpret the dual matrix as a boundary Hamiltonian in the sense of holography, as suggested by \cite{Saad:2019lba}. Then we will see that boundary correlators of the bulk theory are equal to boundary correlators of random operators. \subsection{Partition function} \label{sec:MMpf} Motivated by \cite{Saad:2019lba}, we will denote the random matrix as $H$ since we will interpret it as a boundary random Hamiltonian. The matrix model dual of a marked disk partition function is \begin{equation} Z(\mu_B) = \left\langle \text{Tr}\hspace{0.1cm}\frac{1}{H-\mu_B}\right\rangle. \end{equation} After inverse Laplace transforming, the fixed length partition function is instead \begin{equation} Z(\ell) = \left\langle \text{Tr}\hspace{0.1cm}e^{-\ell H}\right\rangle. \end{equation} By choosing an appropriate potential for the matrix model ensemble, we can make this match the continuum answer in the double scaling limit. Before moving on, we want to show that the result \eqref{eq:markedtrivial} can actually be easily deduced using the matrix model language. According to this formulation of the theory, the $n$ marking operator correlator is given by the expectation value of the following product of matrices \begin{equation}\label{eq:markcorrmamo} \left\langle {}^{\mu_1}e^{b\phi_1}{}^{\mu_2} \hdots {}^{\mu_n}e^{b\phi_n}{}^{\mu_1}\right\rangle = \left\langle \text{Tr}\hspace{0.1cm}\frac{1}{(H-\mu_1)\hdots (H-\mu_n)}\right\rangle. \end{equation} Instead of finding the expectation value first, we can directly inverse Laplace transform the matrix model observable \begin{equation} \left\langle \text{Tr}\hspace{0.1cm}e^{-(\ell_1+\hdots + \ell_n)H}\right\rangle, \end{equation} which makes manifest that it depends only on the total boundary length and is consistent with \eqref{eq:markedtrivial}, since the operator $\text{Tr}\hspace{0.1cm}e^{-\ell H}$ is dual to inserting a fixed length $\ell$ boundary. \subsection{Amplitudes} The matrix model dual to the minimal string with boundary insertions can be written by introducing vector degrees of freedom \begin{equation}\label{eq:genfuncmm} e^Z = \int DH D\bar{v}Dv \hspace{0.1cm} e^{- L {\rm Tr} V(H) - \bar{v}_a C^{ab}(H) v_b}, \end{equation} where $v_a$ are $N$ dimensional vectors and $a=1,\ldots, N_f$. For example, the FZZT unmarked boundary partition function can be obtained by taking a single vector $N_f=1$ and an interaction $C(H)=\mu_B-H$.
Similarly, the boundary correlator of $n$ marking operators in the previous section can be obtained still by a single vector and a higher order polynomial interaction $C(H) = (\mu_{B1}-H)(\mu_{B2}-H)\ldots (\mu_{Bn}-H)$, which should be compared to \eqref{eq:markcorrmamo}. We will follow the presentation in \cite{Ishiki:2010wb}. For the insertion of the two point function corresponding to $\mathcal{B}_{2,1}$, we need two vectors and the following interaction \begin{equation}\label{matrixvectorinter} C(H) = \begin{pmatrix} \mu_B(s_1)-H&c^{12}\\ c^{21} & F_2(H)\\ \end{pmatrix},~~~~F_2(H)=\prod_{\pm} (\mu_B(s_2 \pm i b)-H). \end{equation} For this choice \eqref{eq:genfuncmm} is a generating function of $\mathcal{B}_{2,1}$ correlators, for which $c^{12}$ and $c^{21}$ are sources and the boundary condition shifts from $\mu_B(s_1)$ to $\mu_B(s_2)$. For the minimal string matrix model this produces the same answer as the star polymer operators in the context of the loop gas formalism \cite{Kostov:2002uq}. For example, the two point function is \begin{equation} \langle\mathcal{B}_{2,1}\mathcal{B}_{2,1} \rangle = \left\langle \text{Tr}\hspace{0.1cm}\frac{1}{(H-\mu_B(s_1))}\frac{1}{(H-\mu_B(s_2-ib))}\frac{1}{(H-\mu_B(s_2+ib))}\right\rangle. \end{equation} This can be compared directly in the fixed cosmological constant basis to the results from the continuum Liouville approach. Instead, we will transform the observable directly into the fixed length basis. For this we need to perform the inverse Laplace transform of the previous formula for the operator inside the trace \begin{align} \int_{-i\infty}^{+i\infty}dy \frac{e^{-y \ell_1}}{(y-H)}\int_{-i\infty}^{+i\infty}dx \frac{1}{(\cosh(2\pi b (s_2+ib/2)) - H) (\cosh( 2\pi b (s_2-ib/2)) - H)} e^{-x\ell_2}, \end{align} where for simplicity we set $\kappa=1$ and define $x=\cosh 2\pi b s_2$, $y=\cosh 2\pi b s_1$. The $y$-integral directly gives the marked length $\ell_1$ operator $e^{-\ell_1 H}$. The denominator can be written as $x^2 - 2H \cos \pi b^2 x + H^2 - \sin^2 \pi b^2$ and the integral can then be directly evaluated by residues, picking up two pole contributions, yielding \begin{equation} \langle\mathcal{B}_{2,1}\mathcal{B}_{2,1} \rangle = \left\langle \text{Tr}\hspace{0.1cm} e^{-\ell_1 H} e^{-\ell_2 H \cos \pi b^2} \frac{\sin \left( \ell_2 \sin \pi b^2 \sqrt{H^2-1}\right)}{\sin \pi b^2 \sqrt{H^2-1}} \right\rangle. \end{equation} This is for the matrix $H$ underlying the minimal string matrix integral. If we now identify \begin{equation} H \leftrightarrow \cosh 2\pi b s = \mu_B, \qquad \sqrt{H^2-1} \leftrightarrow \sinh 2\pi b s, \end{equation} we get for the full result at leading order in the genus expansion, using the leading density of states \begin{align} \langle\mathcal{B}_{2,1}\mathcal{B}_{2,1} \rangle &= \int_0^\infty ds \rho(s) \hspace{0.1cm} e^{-\ell_1 \cosh 2\pi b s } e^{-\ell_2 \cosh 2 \pi b s \cos \pi b^2} \frac{\sin \left( \ell_2 \sin \pi b^2 \sinh 2\pi b s \right)}{\sin \pi b^2 \sinh 2\pi b s} \nonumber \\ \label{tocmp} &= \int_0^\infty ds \rho(s) \hspace{0.1cm} e^{-\ell_1 \cosh 2\pi b s }\left[ \frac{e^{-\ell_2 \cosh 2\pi b (s-ib/2)}}{2i\sin \pi b^2 \sinh 2\pi b s} - \frac{e^{-\ell_2 \cosh 2\pi b (s+ib/2)}}{2i\sin \pi b^2 \sinh 2\pi b s}\right]. \end{align} Following the interpretation of \cite{Saad:2019lba} of the random matrix as a random Hamiltonian, we can interpret the boundary correlator as inserting an operator.
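As a quick sanity check on the residue evaluation, one can confirm numerically that the two forms of the integrand in \eqref{tocmp} coincide. A minimal sketch, again assuming \texttt{mpmath} and arbitrary parameter values: \begin{verbatim}
import mpmath as mp

b, s, ell = mp.mpf('0.4'), mp.mpf('0.7'), mp.mpf('1.3')
H = mp.cosh(2*mp.pi*b*s)  # eigenvalue identification H <-> cosh(2 pi b s)

# First form: residue sum packaged as sin / (sin * sinh)
first = mp.e**(-ell*H*mp.cos(mp.pi*b**2)) \
    * mp.sin(ell*mp.sin(mp.pi*b**2)*mp.sinh(2*mp.pi*b*s)) \
    / (mp.sin(mp.pi*b**2)*mp.sinh(2*mp.pi*b*s))

# Second form: difference of exponentials with shifted arguments s -+ ib/2
shifted = lambda sign: mp.cosh(2*mp.pi*b*(s + sign*0.5j*b))
second = (mp.e**(-ell*shifted(-1)) - mp.e**(-ell*shifted(+1))) \
    / (2j*mp.sin(mp.pi*b**2)*mp.sinh(2*mp.pi*b*s))

print(mp.chop(first - second))  # 0.0
\end{verbatim}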
Since the expressions agree for fixed FZZT boundaries, this correlator matches the fixed length two-point function with $\beta_M = - b/2$, corresponding to $\mathcal{B}_{2,1}$. This correlator has a very simple JT gravity limit. Following the previous discussion, we set $s= bk$ with fixed $k$ as $b\to 0$, and define the renormalized length $\ell_{{\rm JT}i} \equiv 2\pi^2 b^4\ell_i$. This gives the simple answer \begin{eqnarray} \langle\mathcal{B}_{2,1}\mathcal{B}_{2,1} \rangle_{(2,p\to\infty)} &=& \int_0^\infty kdk \hspace{0.1cm} \sinh 2 \pi k e^{-(\ell_{{\rm JT}1}+\ell_{{\rm JT}2}) k^2}\, e^{ \frac{1}{4} \ell_{{\rm JT}2}} \frac{\sin \left( \ell_{{\rm JT}2} k \right)}{k}\\ &\sim& \Big( \frac{\beta}{\pi} \sin \frac{\pi \tau}{\beta} \Big) e^{\frac{\tau(\beta-\tau)}{4\beta}}, \end{eqnarray} where in the second line we defined $\tau =\ell_{{\rm JT}2}$ and $\beta= \ell_{{\rm JT}1} + \ell_{{\rm JT}2}$. This is precisely equal to the exact Schwarzian two point function for operators of dimension $\Delta=-1/2$. This is equivalent to equation (D.7) of \cite{Mertens:2019tcm}, for $C_{\rm there}=1/2$. As explained there, only operators with negative half integer dimension have such a simpler form, and these correspond to the minimal model CFT dimensions. This discussion can be extended to higher degenerate insertions $\mathcal{B}_{2j+1,1}$ where $\beta_M = -bj$ and $j\in \mathbb{N}/2$. This can be achieved still with two vectors interacting through the same two-by-two matrix in \eqref{matrixvectorinter}, but with $F_j(H) =\prod_{n=-j}^{j}(\cosh(2\pi b (s+inb)) - H)$. The two-point function of $\mathcal{B}_{2j+1,1}$ then corresponds to the matrix integral insertion \begin{align} \langle \mathcal{B}_{2j+1,1}\mathcal{B}_{2j+1,1}\rangle = \left\langle \text{Tr}\hspace{0.1cm}\frac{1}{\cosh 2\pi b s_1-H}(2j)! \prod_{n=-j}^{j} \frac{1}{\cosh(2\pi b (s_2+inb)) - H}\right\rangle . \end{align} Transferring to the fixed length basis, one has to perform the integral \begin{align} (2j)!\int_{-i\infty}^{+i\infty}dy \frac{e^{-y \ell_1}}{(y-H)}\int_{-i\infty}^{+i\infty}dx \frac{1}{\prod_{n=-j}^{j}(\cosh(2\pi b (s+inb)) - H)} e^{-x\ell_2}. \end{align} Combining the factors $\pm n$ together, we can play the same game, and combine the denominators into: \begin{align} x^2 - 2H \cos 2\pi n b^2 x + H^2 - \sin^2 2\pi n b^2 = \prod_{\pm}\left(x - \cosh 2\pi b (s\pm i n b)\right). \end{align} If $2j+1$ is even, then these are all of the factors. If $2j+1$ is odd, then we have one additional factor $(x-H)$ in the denominator. What is left is just a sum of $2j+1$ residues, where the denominator is a polynomial in $H$ of order $2j$. The previous procedure can be done for any $j \in \mathbb{N}/2$, and we get the complicated general expression:\footnote{We have conventionally divided by the partition function $Z$ in this equation.} \begin{align} \label{gendeg} &\langle \mathcal{B}_{2j+1,1}\mathcal{B}_{2j+1,1} \rangle \nonumber \\ &= \frac{1}{Z}\int_0^{+\infty} ds \rho(s) \, e^{-\ell_1 \cosh 2\pi b s}\sum_{n=-j}^{+j} \frac{(2j)!e^{-\ell_2 \cosh 2\pi b (s+i nb)}}{\prod_{\stackrel{m=-j}{m\neq n}}^{j} (\cosh 2\pi b (s+i nb) - \cosh 2\pi b (s+imb))}. \end{align} One can check that in the UV limit $\ell_2 \to 0$, the entire sum becomes $\ell_2^{2j} + \mathcal{O}(\ell_2^{2j+1})$, and the expression reduces to \begin{equation} \label{UVdeg} \langle \mathcal{B}_{2j+1,1}\mathcal{B}_{2j+1,1} \rangle \to \ell_2^{2j}, \end{equation} matching the general analysis in section \ref{s:bostwo}.
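Before expanding \eqref{gendeg} in the JT limit, we note that the Schwarzian form of the $\mathcal{B}_{2,1}$ two-point function quoted above is easy to confirm numerically: the ratio of the $k$-integral to the claimed closed form should be a $\tau$-independent constant. A small illustrative sketch (assuming \texttt{mpmath}): \begin{verbatim}
import mpmath as mp

mp.mp.dps = 20
beta = mp.mpf(3)  # beta = ell_JT1 + ell_JT2, held fixed

def bilocal(tau):
    # k-integral form of <B_{2,1} B_{2,1}> in the (2, p -> infinity) limit;
    # the k dk ... sin(tau k)/k integrand reduces to dk sinh(2 pi k) ... sin(tau k)
    f = lambda k: mp.sinh(2*mp.pi*k)*mp.e**(-beta*k**2 + tau/4)*mp.sin(tau*k)
    return mp.quad(f, [0, mp.inf])

def schwarzian(tau):
    # exact Schwarzian two-point function of a Delta = -1/2 operator
    return (beta/mp.pi)*mp.sin(mp.pi*tau/beta)*mp.e**(tau*(beta - tau)/(4*beta))

for tau in (mp.mpf('0.5'), mp.mpf('1.2'), mp.mpf('2.4')):
    print(bilocal(tau)/schwarzian(tau))  # same constant for each tau
\end{verbatim}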
\\ In the JT limit, the pole contributions and exponentials are expanded as: \begin{align} \label{reslim} \cosh 2\pi b (s+inb) - \cosh 2\pi b (s+imb)\quad &\to \quad b^4 2\pi^2 (m-n)\left(n+m -2 ik \right) + \mathcal{O}(b^6), \\ e^{- \ell \cosh 2\pi b (s+inb)} \quad &\to \quad e^{- \ell_{\rm JT} k^2} e^{\ell_{\rm JT} n^2 }e^{- 2 i \ell_{\rm JT} n k}. \end{align} This is precisely the structure expected for a degenerate Schwarzian insertion \cite{Mertens:2020pfe}: the denominators \eqref{reslim} produce a polynomial in $k$, while the $m-n$ factors conspire to give a binomial coefficient. In the end, we can identify this matrix insertion with the degenerate Schwarzian bilocal as: \begin{equation} \mathcal{B}_{2j+1,1}\mathcal{B}_{2j+1,1} \quad \to \frac{1}{b^{8j}(2\pi^2)^{2j}} \, \mathcal{I}^{j}(0)\mathcal{I}^{j}(\tau), \end{equation} where the prefactor is also readily determined from \eqref{UVdeg} combined with the relation between $\ell_{\rm JT}$ and $\ell$. Here $\mathcal{I}^j$ indicates an operator in the Schwarzian theory of dimension $\Delta=-j$. \\~\\ This structure of the minimal string correlators \eqref{gendeg} matches the continuum approach by using the fusion property \eqref{qfus}. As an example, for the first minimal string $j=1/2$ insertion, a single application of \eqref{qfus} leads to the identity: \begin{align} \int_{-\infty}^{+\infty} dx \psi^{\epsilon}_{s_1}(x) \psi^{\epsilon * }_{s_2}(x) e^{- \pi b x} = \frac{\pi b}{i S_0{}^{s_1}}\left[\frac{\delta(s_1-s_2-ib/2)}{\sinh 2 \pi b s_2 }- \frac{\delta(s_1-s_2 + ib/2)}{\sinh 2 \pi b s_2 } \right]. \end{align} The delta-function enforces the correct dependence in the exponential factor in \eqref{tocmp}. We also see the $1/\sinh 2 \pi b s$ factor in the denominator of \eqref{tocmp} appearing. \\ It is clear that for generic $j\in \mathbb{N}/2$, we will find a similar result. As an example, in Appendix \ref{app:one} we work out the formulas for $j=1$, and check that the methods indeed match. \section{Other topologies}\label{sec:othertopo} In this section we will extend previous calculations to situations with more general topologies and multiple boundaries. We will focus here on the minimal string theory since it has a direct interpretation as a one-matrix integral. \subsection{Cylinder} We will first study minimal string theory on a cylinder between fixed length boundaries. This was computed from a continuum approach by Martinec \cite{Martinec:2003ka} and from a discrete approach by Moore, Seiberg and Staudacher \cite{Moore:1991ir}. We will present a technically simplified derivation from the continuum limit and make a connection with JT gravity for the $(2,p)$ string with large $p$. As another example, we will apply our method to the crosscap spacetime in Appendix \ref{app:crosscap}, also reproducing the matrix model result. Using the boundary state formalism \cite{Ishibashi:1988kg, Cardy:1989ir}, we can describe a boundary labeled by an FZZT parameter $s$ and matter labels $(n,m)$ by the following combination of Ishibashi states \begin{equation} |s; n,m \rangle = \sum_{n',m'} \int_0^{\infty} dP \hspace{0.1cm}\Psi_s(P) \frac{S_{n,m}^{n',m'}}{(S_{1,1}^{n',m'})^{1/2}} |P\rangle\hspace{-0.1cm}\rangle_L |n',m' \rangle\hspace{-0.1cm}\rangle_M. \end{equation} As pointed out by Seiberg and Shih, this state can be simplified as a sum of matter identity branes over shifted FZZT parameters, modulo BRST exact terms that cancel when computing physical observables; see equation (3.8) in \cite{Seiberg:2003nm}.
Therefore, at the end of the calculation we will focus on the matter sector identity brane. \begin{figure}[t!] \centering \begin{tikzpicture}[scale=0.9] \node at (-3.5,0) {\small $s_1{}_{(n_1,m_1)}$}; \node at (3.5,0) {\small $s_2{}_{(n_2,m_2)}$}; \draw[thick] (-2,0) ellipse (0.4 and 1.5); \draw[thick] (2,0) ellipse (0.4 and 1.5); \draw[thick] (-1.95,1.49) to [bend right=50] (1.95,1.49); \draw[thick] (-1.95,-1.49) to [bend left=50] (1.95,-1.49); \node at (0,-1.985) {}; \end{tikzpicture} \caption{The figure shows the cylinder amplitude we are computing between two FZZT boundaries with boundary cosmological constants $\mu_B(s_1)$ and $\mu_B(s_2)$ and matter boundary conditions labeled by $(n_1,m_1)$ and $(n_2,m_2)$.} \label{fig:cylinder} \end{figure} As explained in \cite{Martinec:2003ka}, the annulus partition function between unmarked $s_1; n_1,m_1$ and $s_2;n_2,m_2$ branes is computed as the overlap of the boundary states, integrated over the moduli. For the annulus, there is a single real modulus $\tau$ parametrizing the length along the cylinder. Notice that this is a coordinate on the worldsheet and it is integrated over. In the end we will find dependence on \emph{physical} lengths instead, as emphasized in the Introduction. Before integration the answer factorizes into the Liouville (L), matter (M) and ghost (G) contributions \begin{equation}\nonumber \langle Z(s_1;n_1,m_1)^{\scriptscriptstyle \text{U}} Z(s_2;n_2,m_2)^{\scriptscriptstyle \text{U}}\rangle = \int d\tau Z_{L} Z_{M} Z_{G},~~\begin{cases} Z_{L} = \int_0^\infty \frac{dP}{\pi} \frac{\cos 4 \pi s_1 P \cos 4 \pi s_2 P}{\sqrt{2}\sinh 2 \pi P b \sinh 2 \pi \frac{P}{b}}\chi_P(q), \\ ~~\vspace{-0.5cm}\\ Z_{M} = \sum_{n,m}\mathcal{N}_{n,m}^{(n_1,m_1)(n_2,m_2)} \chi_{n,m}(q'),\\ ~~\vspace{-0.5cm}\\ Z_{G} =\eta(q)^2, \end{cases} \end{equation} where $q'=e^{-2 \pi i/\tau}$ and $\mathcal{N}_{n,m}^{(n_1,m_1)(n_2,m_2)}$ denote the fusion coefficients of the matter theory. In the matter sector, we used the Verlinde formula \cite{Verlinde:1988sn} to simplify the boundary state inner product as a sum over the dual channel characters weighted by the fusion numbers. This simplifies the calculation compared to \cite{Martinec:2003ka}. We will write $\tau = i t$ where $t$ is integrated over the positive real line. Then, using the modular property of the Dedekind eta function, the contributions from descendants cancel up to a factor of $t^{-1/2}$ and we can write \begin{eqnarray} \langle Z(s_1;n_1,m_1)^{\scriptscriptstyle \text{U}} Z(s_2;n_2,m_2)^{\scriptscriptstyle \text{U}}\rangle&=& \int_0^\infty \frac{dP}{\pi} \frac{\cos 4 \pi s_1 P \cos 4 \pi s_2 P}{\sqrt{2}\sinh 2 \pi P b \sinh 2 \pi \frac{P}{b}} \nonumber\\ &&\hspace{-3.5cm}\times \sum_{n,m}\mathcal{N}_{n,m}^{(n_1,m_1)(n_2,m_2)} \sum_k \int_0^\infty \frac{dt}{\sqrt{t}} e^{-2\pi t P^2} \big(e^{-\frac{2\pi}{t}a_{n,m}(k)}-e^{-\frac{2\pi}{t} a_{n,-m}(k)}\big), \end{eqnarray} where $a_{n,m}(k)$ was defined in equation \eqref{degcharacters}. We first integrate over $t$. The answer depends on whether $k>0$, $k<0$ or $k=0$, so each case has to be considered separately. We then sum over $k$ taking this into account.
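Each of these $t$-integrals is of the standard form $\int_0^\infty dt\, t^{-1/2}\, e^{-A t - B/t} = \sqrt{\pi/A}\, e^{-2\sqrt{AB}}$, here with $A = 2\pi P^2$ and $B = 2\pi a_{n,\pm m}(k)$; summing the resulting exponentials over $k$ then builds up the hyperbolic ratios below. A quick numerical confirmation of the master integral (illustrative only, assuming \texttt{mpmath}): \begin{verbatim}
import mpmath as mp

mp.mp.dps = 20
A, B = mp.mpf('2.0'), mp.mpf('0.7')
lhs = mp.quad(lambda t: t**mp.mpf('-0.5')*mp.e**(-A*t - B/t), [0, mp.inf])
print(mp.chop(lhs - mp.sqrt(mp.pi/A)*mp.e**(-2*mp.sqrt(A*B))))  # 0.0
\end{verbatim}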
The final answer is very simple: \begin{equation}\nonumber \sum_k \int \frac{dt}{\sqrt{2t}} e^{-2\pi t P^2} (e^{-\frac{2\pi}{t}a_{n,m}(k)}-e^{-\frac{2\pi}{t} a_{n,-m}(k)}) = \begin{cases} \frac{\sinh 2 \pi b P (p'-n) \sinh 2 \pi \frac{P}{b} m }{P \sinh 2 \pi p \frac{P}{b}} ~~~{\rm if}~~np>mp'\\ ~~\vspace{-0.2cm}\\ \frac{\sinh 2 \pi b P n \sinh 2 \pi \frac{P}{b} (p-m) }{P \sinh 2 \pi p \frac{P}{b}} ~~~{\rm if}~~np<mp'\end{cases} \end{equation} When we sum over the primaries we can use the fundamental domain $E_{p'p}$, which corresponds precisely to the first case in the result above. Then the final answer for the annulus partition function becomes \begin{eqnarray} \langle Z(s_1;n_1,m_1)^{\scriptscriptstyle \text{U}} Z(s_2;n_2,m_2)^{\scriptscriptstyle \text{U}}\rangle &=& \sum_{(n,m)\in E_{p'p}}\mathcal{N}_{n,m}^{(n_1,m_1)(n_2,m_2)} \int_0^\infty \frac{dP}{\pi} \nonumber\\ &&\hspace{-3cm}\times \frac{\cos(4 \pi s_1 P) \cos(4 \pi s_2 P) \sinh(2 \pi b P (p'-n)) \sinh(2 \pi \frac{P}{b} m) }{P \sinh(2 \pi p \frac{P}{b}) \sinh(2 \pi b P) \sinh(2 \pi \frac{P}{b})}, \end{eqnarray} where we make explicit that this expression is valid when $(n,m)$ are in the fundamental domain $E_{p'p}$. This generalizes the formula derived by Martinec, which only includes boundary states of the form $(n,1)$, to an arbitrary boundary state, and also matches in this case the expression derived in \cite{Kutasov:2004fg}. We can use the Seiberg-Shih relation between boundary states to justify focusing on the matter identity branes, and the partition function simplifies to \begin{equation} \langle Z(s_1)^{\scriptscriptstyle \text{U}} Z(s_2)^{\scriptscriptstyle \text{U}}\rangle_{(p,p')} = \int_0^\infty \frac{dP}{\pi} \frac{\cos(4 \pi s_1 P) \cos(4 \pi s_2 P) \sinh(2 \pi \frac{P}{b} (p-1)) }{P \sinh(2 \pi p \frac{P}{b}) \sinh(2 \pi \frac{P}{b})}. \end{equation} This is valid for the $(p,p')$ minimal string. Since we will be interested mostly in theories dual to single matrix models, we can further take the $(2,p)$ minimal model and get \begin{equation} \langle Z(s_1)^{\scriptscriptstyle \text{U}} Z(s_2)^{\scriptscriptstyle \text{U}}\rangle_{(2,p)} = \int_0^\infty dP\frac{\cos(4 \pi s_1 P) \cos(4 \pi s_2 P)}{2\pi P \sinh 2 \pi \frac{P}{b} \cosh 2 \pi \frac{P}{b}}. \end{equation} In the rest of this section we will analyze this expression. As indicated, these are unmarked FZZT boundaries. Using the methods described above, we can first compute the marked boundary amplitude, which is more directly related to the matrix integral. Taking derivatives with respect to the boundary cosmological constants using \eqref{fion}, we get \begin{eqnarray}\label{annulusunmarked} \langle Z(s_1)^{\scriptscriptstyle \text{M}} Z(s_2)^{\scriptscriptstyle \text{M}}\rangle_{(2,p)} &=&\frac{2}{b^2} \int_0^\infty \frac{dP}{\pi} \frac{\sin(4 \pi s_1 P) \sin(4 \pi s_2 P)}{\kappa \sinh \pi b s_1 \kappa \sinh \pi b s_2} \frac{2P }{\sinh 4 \pi \frac{P}{b}} \nonumber\\ &=& \frac{1}{8\pi} \frac{1}{\sqrt{-\mu_1 + \kappa}\sqrt{-\mu_2 + \kappa}}\frac{1}{(\sqrt{-\mu_1+\kappa} + \sqrt{-\mu_2 + \kappa})^2}, \end{eqnarray} where $\mu_i=\mu_B(s_i)$. The expression in the second line is precisely the connected part of the resolvent two-point function (see for example equation (47) of \cite{Saad:2019lba}). When written in the appropriate variables, this result is completely independent of $p$ and therefore independent of the precise density of states. This is evident in the matrix integral approach but unexpected from the continuum approach.
\begin{figure}[t!] \centering \begin{tikzpicture}[scale=0.9] \node at (-2.6,0) {\small $\ell_1$}; \node at (2.6,0) {\small $\ell_2$}; \draw[thick] (-2,0) ellipse (0.3 and 1.5); \draw[thick] (2,0) ellipse (0.3 and 1.5); \draw[thick] (-1.937,1.47) to [bend right=50] (1.937,1.47); \draw[thick] (-1.937,-1.47) to [bend left=50] (1.937,-1.47); \node at (0,-1.985) {}; \node at (4.5,0) {\large $=\int $ {\small $d\mu(\lambda)$}}; \draw[thick] (6.5,0) ellipse (0.3 and 1.5); \draw[thick] (8.5,0) ellipse (0.1 and 0.7); \draw[thick] (6.54,1.49) to [bend right=20] (8.5,0.7); \draw[thick] (6.54,-1.49) to [bend left=20] (8.5,-0.7); \draw[thick] (11.5,0) ellipse (0.3 and 1.5); \draw[thick] (9.5,0) ellipse (0.1 and 0.7); \draw[thick] (11.46,1.49) to [bend left=20] (9.5,0.7); \draw[thick] (11.46,-1.49) to [bend right=20] (9.5,-0.7); \node at (5.9,0) {\small $\ell_1$}; \node at (12.1,0) {\small $\ell_2$}; \node at (8.5,-1) {\small $\lambda$}; \node at (9.5,-1) {\small $\lambda$}; \end{tikzpicture} \caption{We depict the cylinder amplitude in physical space between fixed length boundaries. The final answer can be interpreted as gluing minimal string trumpets, generalizing the gluing procedure of JT gravity.} \label{fig:trumpetgluing} \end{figure} We can compute the fixed length amplitude in two ways. Firstly, we can apply the method above to compute the inverse Laplace transform through the discontinuity before integrating over $P$. In order to do this, we can use the expression \eqref{eq:disccos} for the discontinuity. Secondly, we can apply this directly to the second line of \eqref{annulusunmarked}. Either way the result is the same; after relabeling $\lambda = 2P/b$ it is given by the formula \begin{eqnarray}\label{eq:2loopcorr} \langle Z(\ell_1) Z(\ell_2) \rangle &=&\frac{2}{\pi} \int_0^\infty \lambda d\lambda \tanh \pi \lambda ~K_{i\lambda}(\kappa \ell_1) K_{i \lambda}(\kappa \ell_2) \\ &=& \frac{\sqrt{\ell_1 \ell_2}}{\ell_1+\ell_2}e^{-\kappa (\ell_1+ \ell_2)}. \end{eqnarray} The first line of the previous equation has a very familiar form when compared with JT gravity. As we explained before, inserting a bulk operator in the disk can be interpreted as creating a hole in the physical space, which in the JT gravity limit becomes a geodesic boundary of length $\sim \lambda$. Therefore, after replacing $\lambda = b_{\rm JT}/(2\pi b^2)$ and $\ell = \ell_{\rm JT}/(2\kappa \pi^2 b^4)$, we can interpret the integral above as gluing two minimal string trumpets with a deformed measure\footnote{The prefactors of this equation can be tracked by using the integral representation \eqref{idthree}.} \begin{eqnarray} e^{\kappa \ell} K_{i\lambda}(\kappa \ell) &\to& \pi b^2 \sqrt{\frac{\pi}{\ell_{\rm JT}}}e^{- \frac{b_{\rm JT}^2}{4\ell_{\rm JT}}},\\ \lambda d\lambda \tanh \pi \lambda &\to& \frac{1}{4\pi^2b^4}b_{\rm JT}db_{\rm JT} ,~~~{\rm for}~\lambda \to \infty. \end{eqnarray} Liouville CFT is deeply intertwined with Teichm\"uller theory (the universal cover of the moduli space of Riemann surfaces), see e.g. \cite{Verlinde:1989ua,Teschner:2002vx}.\footnote{A related observation is the following. The partition function of group $G$ Chern-Simons theory on an annulus times $\mathbb{R}$ is known to be describable through the diagonal modular invariant of the $\hat{G}$ (non-chiral) WZW model, where the chiral sectors of the WZW model are each associated to one of the boundary cylindrical walls \cite{Elitzur:1989nr}.
Something similar was observed in \cite{Blommaert:2018iqz} for Liouville CFT: the Liouville diagonal torus partition function yields the two-boundary sector of 3d gravity, but glued within Teichm\"uller space.} Here we see that when Liouville is combined with the minimal model into a full gravitational theory, the integral becomes the WP measure over the moduli space instead, in accordance with the matrix model expectation. This is clear in the JT gravity limit, and it would be interesting to understand the origin of this tanh measure for the finite $p$ minimal string, and to confirm this is its correct normalization. \subsection{Multiple boundaries} It will be useful to rephrase the minimal string as a matrix integral in the double scaling limit using the formalism of \cite{Brezin:1990rb, Banks:1989df}. A central object from this approach is the heat capacity $u(x)$ appearing in the string equation. This is related to the density of states as \begin{equation}\label{eqdeff} \rho_0(E) = \frac{1}{2\pi} \int_{E_0}^E \frac{du}{\sqrt{E-u}} f(u), \end{equation} where $\partial_x u = - f(u)^{-1}$ (see \cite{Johnson:2019eik, *Johnson:2020heh,Okuyama:2019xbv,*Okuyama:2020ncd} for recent discussions). It will be convenient for us to define shifted and rescaled quantities that will have a finite large $p$ limit, as: \begin{equation} \label{ujt} E = \kappa \Big(1 + \frac{8\pi^2}{p^2} E_{\rm JT} \Big), \qquad u = \kappa\Big(1 + \frac{8\pi^2}{p^2} u_{\rm JT} \Big). \end{equation} For ease of notation, we set $\kappa=1$ in the following. With these conventions the undeformed minimal string will correspond to $x\to0$. The minimal string density of states according to the Liouville calculation is \begin{eqnarray} \rho_0(E_{\rm JT}) &=&\frac{1}{4\pi^2} \sinh \Big( \frac{p}{2} {\rm arccosh} \Big(1 + \frac{8\pi^2}{p^2} E_{\rm JT} \Big) \Big) \\ &=& \sum_{j=0}^\mathfrak{m} \frac{(2\pi)^{2j-3}}{(2j-1)!} \frac{4^{j-1}(\mathfrak{m}+j-2)!}{(2\mathfrak{m}-1)^{2j-2}(\mathfrak{m}-j)!} (\sqrt{E_{\rm JT}})^{2j-1} ~~~~{\rm with}~~p=2\mathfrak{m}-1, \end{eqnarray} where $\mathfrak{m}\in \mathbb{N}$. For large $\mathfrak{m}$ we get the JT gravity density of states. We can find the function $f(u)$ by solving \eqref{eqdeff} and get \begin{equation} f(u_{\rm JT}) = \frac{1}{2} {}_2F_1\left(\frac{1-p}{2},\frac{1+p}{2},1, - \frac{4\pi^2}{p^2}u_{\rm JT}\right), \end{equation} where ${}_2F_1(a,b,c,x)$ is the hypergeometric function. Integrating this relation, we can get an implicit formula for the minimal string heat capacity \begin{equation} \frac{u_{\rm JT}}{2} \hspace{0.0cm} {}_2F_1 \Big(\frac{1-p}{2},\frac{1+p}{2},2, - \frac{4\pi^2}{p^2}u_{\rm JT}\Big) = - x. \end{equation} This can be written in a more familiar form by recognizing that for these values of the parameters the hypergeometric function becomes a Legendre polynomial, e.g.: \begin{equation} f(u_{\rm JT}) = \frac{1}{2} P_{\mathfrak{m}-1}\Big(1+\frac{8\pi^2}{p^2}u_{\rm JT} \Big). \end{equation} The relation above becomes the string equation \cite{Brezin:1990rb} (to leading order in genus expansion) written in the usual form, given by \begin{equation} \sum_{j=0}^\mathfrak{m} t_j u_{\rm JT}^j = 0 ,~~~~~~t_j\equiv \frac{1}{2} \frac{\pi^{2j-2}}{ j! (j-1)!} \frac{4^{j-1}(\mathfrak{m}+j-2)!}{(\mathfrak{m}-j)! (2\mathfrak{m}-1)^{2j-2}} \end{equation} where $p=2\mathfrak{m}-1$; the couplings $t_j$ are as given for $j\geq 1$, and we defined $t_0=x$.
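As a consistency check, one can verify numerically that this $f(u_{\rm JT})$ indeed reproduces the sinh density of states through \eqref{eqdeff} in the shifted variables. A minimal sketch for the $(2,7)$ minimal string (illustrative, assuming \texttt{mpmath}): \begin{verbatim}
import mpmath as mp

mp.mp.dps = 20
p = 7  # (2, p) minimal string, p = 2m - 1 with m = 4

def f(u):
    # f(u_JT) = (1/2) 2F1((1-p)/2, (1+p)/2; 1; -4 pi^2 u/p^2)
    return mp.hyp2f1((1 - p)/2, (1 + p)/2, 1, -4*mp.pi**2*u/p**2)/2

def rho_from_f(E):
    # (1/2 pi) int_0^E du f(u)/sqrt(E-u), substituting u = E - t^2
    return mp.quad(lambda t: 2*f(E - t**2), [0, mp.sqrt(E)])/(2*mp.pi)

def rho_sinh(E):
    return mp.sinh(p*mp.acosh(1 + 8*mp.pi**2*E/p**2)/2)/(4*mp.pi**2)

E = mp.mpf('1.3')
print(rho_from_f(E), rho_sinh(E))  # the two numbers agree
\end{verbatim}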
As explained in \cite{Moore:1991ir,Belavin:2008kv}, this is an analytic redefinition of coupling constants of the $\mathfrak{m}$'th multicritical point, and for large $x$ it behaves as $u \sim x^{1/\mathfrak{m}}$, as expected. Knowing the couplings $t_j$, it is possible to also compute higher genus corrections by replacing the power law in the equation above by the KdV hierarchy operators $u^j \to R_j[u]$ derived in \cite{Gelfand1975,*Gelfand2}. Knowing the heat capacity $u(x)$ for the minimal string derived from the density of states, a surprising formula, first proposed in \cite{Ambjorn:1990ji, Moore:1991ir}, can be written for the $n$-th loop correlator. The relation can be written in different ways but we found a useful version to be \begin{equation} \label{nbdyformula} \Big\langle \prod_i Z(\ell_i)^{\scriptscriptstyle \text{M}} \Big\rangle =- \frac{\sqrt{\ell_1\ldots \ell_n}}{2 \pi^{n/2}} \Big( \frac{\partial}{\partial x} \Big)^{n-3} u'(x) e^{-u(x)(\ell_1+\ldots+\ell_n)} \Big|_{u\to 1}. \end{equation} From now on we will only work with marked boundaries and omit the $\scriptscriptstyle \text{M}$ suffix. Equation \eqref{nbdyformula} is based on the discrete approach, and it is surprising that such a simple answer exists from the continuum approach. Shifting to our variable $u_{\rm JT}$ as $u = 1 + \frac{8\pi^2}{p^2} u_{\rm JT}$, the final answer is evaluated at $u_{\rm JT}(x\to0)=0$. To apply this formula we need the derivatives $\partial_x u_{\rm JT}$, but the relation $u_{\rm JT}(x)$ is given only implicitly. To find the necessary derivatives, we can apply the Lagrange inversion theorem to write \begin{equation}\label{eq:inversederums} \left.\partial_x^n u_{\rm JT}\right|_{u_{\rm JT} =0} = \lim_{u_{\rm JT}\to 0} \frac{d^{n-1}}{du_{\rm JT}^{n-1}} \left(- \frac{2}{{}_2F_1(\frac{1-p}{2},\frac{1+p}{2},2, - \frac{4\pi^2}{p^2}u_{\rm JT})} \right)^n. \end{equation} This can be used order by order to find all terms appearing in the loop correlators. We will now use this to generate some $n$-loop correlators for fixed boundary length. The case $n=1$ is special and is actually used to fix $u(x)$. The case $n=2$ is also special and gives $\langle Z(\ell_1) Z(\ell_2) \rangle = \frac{1}{2\pi} \frac{\sqrt{\ell_1 \ell_2}}{\ell_1+\ell_2}$, which coincides with \eqref{eq:2loopcorr} after the appropriate shifts and redefinitions mentioned above. The cases $n=3,4,5$ give\footnote{We have defined the length parameter $\ell^{\rm JT} = \kappa \frac{8\pi^2}{p^2} \ell = 2\pi^2b^4\kappa \ell$ as in \eqref{JTparam}, and have redefined the overall normalization by dropping a factor of $\big(\frac{8\pi^2}{p^2}\big)^{1-\frac{n}{2}} e^{-\kappa \sum_i \ell_i}$.} \begin{align} \Big\langle \prod_{i=1}^3 \frac{Z(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &=-\frac{1}{2\pi^{3/2}} \left.\frac{\partial u_{\rm JT} }{ \partial x}\right|_{u_{\rm JT}=0} =\frac{1}{2\pi^{3/2}} 2 , \\ \Big\langle \prod_{i=1}^4 \frac{Z(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &= \frac{1}{2\pi^{2}} \Big( 4 \Big(\sum_{i=1}^4\ell^{\rm JT}_{i}\Big) +4\pi^2\Big(1-\frac{1}{p^2} \Big) \Big), \\ \Big\langle \prod_{i=1}^5 \frac{Z(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &= \frac{1}{2\pi^{5/2}} \Big(8 \Big(\sum_{i=1}^5\ell^{\rm JT}_{i}\Big)^2 +24 \pi^2 (1-\frac{1}{p^{2}})\Big(\sum_{i=1}^5\ell^{\rm JT}_{i}\Big) + 4\pi^4 (5-\frac{2}{p^{2}} -\frac{3}{p^{4}}) \Big). \end{align} At this point it should be clear how to generalize this to an arbitrary number of boundaries.
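For concreteness, the inversion formula \eqref{eq:inversederums} is straightforward to evaluate symbolically from a truncated hypergeometric series, and it reproduces the $n$-loop coefficients above. A sketch assuming the Python library \texttt{sympy} (illustrative only): \begin{verbatim}
import sympy as sp

p, u, L = sp.symbols('p u L')  # L denotes the total length sum_i ell^JT_i
a1, a2, c = (1 - p)/2, (1 + p)/2, 4*sp.pi**2/p**2

# Truncated series of 2F1(a1, a2; 2; -c u); enough terms for low derivatives
F2 = sum(sp.rf(a1, k)*sp.rf(a2, k)/(sp.rf(2, k)*sp.factorial(k))*(-c*u)**k
         for k in range(6))
phi = -2/F2  # the Lagrange-inversion kernel of eq. (eq:inversederums)

def du_dx(n):
    # n-th derivative of u_JT(x) at x = 0
    return sp.simplify(sp.diff(phi**n, u, n - 1).subs(u, 0))

print(du_dx(1))  # -2
print(du_dx(2))  # -4 pi^2 (1 - 1/p^2)

# n = 4 boundaries: expand -(1/(2 pi^2)) d/dx [u' e^{-u L}] at u_JT = 0
amp4 = sp.expand(-(du_dx(2) - du_dx(1)**2*L)/(2*sp.pi**2))
print(amp4)  # 2 L/pi^2 + 2 - 2/p^2, i.e. (4 L + 4 pi^2 (1 - 1/p^2))/(2 pi^2)
\end{verbatim}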
As a further check of these expressions we will take the JT gravity limit $p \to \infty$. First we will take the JT limit of the string equation. Using the following identity (Abramowitz and Stegun eq (9.1.71)): \begin{equation} \label{AS} \lim_{\nu \to+\infty} P_\nu \left(\cos \frac{x}{\nu}\right) = J_0(x), \qquad \text{with }\cos \frac{x}{\mathfrak{m}-1} = 1 + \frac{8\pi^2}{p^2}u_{\rm JT}, \end{equation} one shows that $f(u) \to \frac{1}{2} I_0(2\pi \sqrt{u})$. For large $p$ the couplings become $t_j \to \frac{1}{2} \frac{\pi^{2j-2}}{j!(j-1)!}+\mathcal{O}(1/p)$. The sum can be done explicitly and the JT gravity string equation becomes \begin{equation} \sum_{j=1}^\infty \frac{1}{2} \frac{\pi^{2j-2}}{ j! (j-1)!} u_{\rm JT}^j = \frac{\sqrt{ u_{\rm JT} }}{2\pi} I_1(2\pi \sqrt{ u_{\rm JT}}) = -x. \end{equation} The $n$-boundary JT gravity partition function to leading order in the genus expansion is then \begin{equation} \label{nbdyformulaJT} \Big\langle \prod_i Z_{\rm JT}(\ell^{\rm JT}_{i}) \Big\rangle =- \frac{\sqrt{\ell^{\rm JT}_{1}\ldots \ell^{\rm JT}_{n}}}{2\pi^{n/2}} \Big( \frac{\partial}{\partial x} \Big)^{n-3} u'_{\rm JT}(x) e^{-u_{\rm JT}(x)(\ell^{\rm JT}_{1}+\ldots+\ell^{\rm JT}_{n})} \Big|_{u_{\rm JT}\to 0}. \end{equation} This can be seen as a generating function for the genus $0$ WP volumes with $n$ geodesic boundaries $V_{g=0,n}(\mathbf{b})$ with lengths $\mathbf{b}=(b_1,\ldots, b_n)$, after an appropriate Laplace transform that we will do explicitly in the next section. We can check this formula by computing some simple cases with $n=3,4,5,\ldots$. This can be obtained either from the $p\to\infty$ limit of the minimal string or directly using the JT string equation $u_{\rm JT}(x)$. The result is \begin{eqnarray} \Big\langle \prod_{i=1}^3 \frac{Z_{\rm JT}(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &=&\frac{1}{2\pi^{3/2}} 2 ,\\ \Big\langle \prod_{i=1}^4 \frac{Z_{\rm JT}(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &=& \frac{1}{2\pi^{2}} \Big( 4 \Big(\sum_{i=1}^4\ell^{\rm JT}_{i}\Big) +4\pi^2 \Big), \\ \Big\langle \prod_{i=1}^5 \frac{Z_{\rm JT}(\ell^{\rm JT}_{i})}{\sqrt{\ell^{\rm JT}_{i}} } \Big\rangle &=& \frac{1}{2\pi^{5/2}} \Big(8 \Big(\sum_{i=1}^5\ell^{\rm JT}_{i}\Big)^2 +24 \pi^2\Big(\sum_{i=1}^5\ell^{\rm JT}_{i}\Big) + 20\pi^4 \Big). \end{eqnarray} It is surprising that these correlators match the direct JT gravity calculation \cite{Saad:2019lba}, where \begin{equation} \Big\langle \prod_{i=1}^n Z_{\rm JT}(\ell^{\rm JT}_{i}) \Big\rangle = \int \prod_{i=1}^n b_i db_i Z_{\rm trumpet}(\ell^{\rm JT}_{i}, b_i) V_{0,n}(\mathbf{b}), \end{equation} where the trumpet partition function is given by $Z_{\rm trumpet}(\ell^{\rm JT}_{i}, b_i) =e^{-b_i^2/4\ell^{\rm JT}_{i}}/2\sqrt{\pi \ell^{\rm JT}_{i}} $. For the WP volumes we used the expressions in \cite{do2011moduli}, and we also checked that this works for $n=6$ and $7$, although we do not write it here. Therefore we see that \eqref{nbdyformulaJT} gives a simple generating function for (a simple integral transform of) the WP volumes. \subsection{$p$-deformed Weil-Petersson volumes} In this section we will point out some interesting structure in the minimal string multi-loop correlator. One can write the amplitude in two ways: \begin{align} \label{Wppdef} \left\langle \prod_{i=1}^n Z(\ell_i)\right\rangle &= -\left.
\frac{\sqrt{\ell_1 \hdots \ell_n}}{2\pi^{n/2}}\left(\frac{\partial}{\partial x}\right)^{n-3} u ' (x) e^{-u(x) (\ell_1 + \hdots + \ell_n)}\right|_{u=1} \\ \label{defwp} &= \frac{2^{n-1}}{\pi^{n/2}} \prod_i \int_{0}^{+\infty} d\lambda_i \lambda_i \sinh \pi \lambda_i K_{i\lambda_i}(\ell_i) \frac{V_{0,n}(\bm{\lambda})}{\cosh \pi \lambda_i} \end{align} where in the second line we have written the integral in terms of multiple gluing cycles $\lambda_i$, with the gluing measure $d\mu(\lambda) = d\lambda \lambda \tanh \pi \lambda$, which we have written suggestively. The quantity $V_{0,n}(\bm{\lambda}) \equiv V_{0,n}(\lambda_1,\hdots,\lambda_n)$ will turn out to be a polynomial in $\lambda_i^2$ and can be viewed as a generalization of the WP volumes to the $p$-deformed setup. The numerical prefactors were chosen such that the $p\to+\infty$ limit directly yields back the WP volumes. \\ We can find explicit expressions for the $V_{0,n}(\bm{\lambda})$ by applying the Kontorovich-Lebedev (KL) transform: \begin{align} \label{KL} g(y) = \int_0^{+\infty} \frac{dx}{x} f(x) K_{iy}(x), \qquad f(x) = \frac{2}{\pi^2} \int_{0}^{+\infty} dy g(y) K_{iy}(x) y \sinh \pi y \end{align} leading to \begin{align} \label{wpfi} V_{0,n}(\bm{\lambda}) = \left.\left(\frac{\pi}{2}\right)^{n/2} \left(\frac{\partial}{\partial x}\right)^{n-3} u ' \prod_i \mathcal{L}_u\left(\frac{K_{i\lambda_i}(\ell_i)}{\sqrt{\ell_i}}\right) \cosh \pi \lambda_i\right|_{u=1} \end{align} where we need the following Laplace transform:\footnote{The integral is convergent at $x=0$ since $\left|K_{i\lambda}\right|$ is bounded close to zero.} \begin{align} \mathcal{L}_u\left(\frac{K_{i\lambda}(\ell)}{\sqrt{\ell}}\right) = \int_0^{+\infty} \frac{dx}{\sqrt{x}}K_{i\lambda}(x) e^{-u x} &= \frac{(2\pi)^{3/2}}{4\cosh \pi \lambda} {}_2F_{1}\left(\frac{1}{4} + \frac{i \lambda}{2}, \frac{1}{4} - \frac{i \lambda}{2}, 1, 1-u^2\right),\\ \label{legp} &= \frac{(2\pi)^{3/2}}{4\cosh \pi \lambda} P_{-\frac{1}{2}-i\lambda}(u) \end{align} where $ u\geq 1$. Notice that the KL transform is invertible, and hence the WP volumes are unambiguously defined by the above relation \eqref{Wppdef}. It is convenient as before to work with the shifted and rescaled variable $u_{\rm JT}$ defined by the relation $u=1+\frac{8\pi^2}{p^2}u_{\rm JT}$. We hence write: \begin{equation} \label{WPp} \boxed{ V_{0,n}(\bm{\lambda}) =\lim_{u_{\rm JT}\to 0}- \frac{1}{2} \Big( \frac{\partial}{\partial x} \Big)^{n-3} u'_{\rm JT}(x) \prod_{i=1}^n P_{-\frac{1}{2}-i\lambda_i}\left(1+\frac{8\pi^2}{p^2}u_{\rm JT}(x)\right)}, \end{equation} where an explicit formula for $\partial_x^n u_{ \rm JT}(x=0) $ was given above in equation \eqref{eq:inversederums}. Zograf proved a theorem about a generating function for WP volumes on the sphere with $n$ punctures $V_{0,n}(\mathbf{0})$ \cite{zograf1998weilpetersson}. This formula \eqref{WPp} gives a minimal string version of it and extends it to finite size boundary lengths. \\ To find explicit formulas from \eqref{WPp}, we have to differentiate and evaluate at $u_{\rm JT} = 0$ in the end. To that effect, we can use the result: \begin{align} \partial_{u_{\rm JT}}^m \left.
P_{-\frac{1}{2}-i\lambda}(u)\right|_{u_{\rm JT} = 0} &= (-)^m \left(\frac{8\pi^2}{p^2}\right)^m\frac{4\cosh \pi \lambda}{(2\pi)^{3/2}}\int_0^{+\infty} dx x^{m-1/2}K_{i\lambda}(x) e^{-x} \\ \label{toins} &= \left(\frac{8\pi^2}{p^2}\right)^m (-)^m \frac{1}{2^m m!}\prod_{j=1}^{m}(\lambda^2+(2j-1)^2/4) \end{align} The equality in the last line is the KL transform of equations written in Appendix C of \cite{Moore:1991ag}. Importantly, this produces a polynomial in $\lambda_i^2$, mirroring the analogous situation for the WP volumes. \\ Finally, in order to make contact with the Weil-Petersson volumes at $p\to + \infty$, we define \begin{equation} \label{geolim} \lambda_i = \frac{p}{4\pi}b_i \end{equation} in terms of the geodesic length $b_i$ that stays finite as we take the limit. As explicit examples, for $n=4,5,6$ we obtain by inserting \eqref{eq:inversederums} and \eqref{toins} into \eqref{WPp}: \begin{align} V_{0,4}(\bm{\lambda}) &= \Big( 2\pi^2 + \frac{6 \pi^2}{p^2}\Big)+\frac{1}{2} \sum_i b_i^2 \\ V_{0,5}(\bm{\lambda}) &=\Big( 10\pi^4 + \frac{56\pi^4}{p^2} + \frac{104 \pi^4}{p^4} \Big) +\left(3\pi^2+ \frac{10\pi^2}{p^2}\right) \sum_i b_i^2 + \frac{1}{2}\sum_{i<j}b_i^2b_j^2 + \frac{1}{8} \sum_i b_i^4\\ V_{0,6}(\bm{\lambda}) &= \Big(\frac{244}{3}\pi^6 + \frac{1972\pi^6}{3p^2} + \frac{6604\pi^6}{3p^4} + \frac{3060 \pi^6}{p^6}\Big) + \left(26\pi^4+ \frac{160\pi^4}{p^2} + \frac{916\pi^4}{3p^4}\right) \sum_i b_i^2 \\ &\hspace{-1cm} + \left(6\pi^2+ \frac{21\pi^2}{p^2}\right) \sum_{i<j} b_i^2 b_j^2 + \left(\frac{3\pi^2}{2}+ \frac{31\pi^2}{6p^2}\right) \sum_i b_i^4 + \frac{3}{4}\sum_{i<j<k}b_i^2b_j^2b_k^2 + \frac{3}{16}\sum_{i,j, i \neq j}b_i^4b_j^2 + \frac{1}{48}\sum_i b_i^6 \nonumber \end{align} All of these satisfy the correct $p\to+\infty$ WP limit, as can be seen by comparing to Appendix B of \cite{do2011moduli}, see also \cite{Mirzakhani:2006fta, *Mirzakhani:2006eta}. \begin{center} \textbf{Adding handles} \end{center} We will show some more evidence of the structure identified here. We will derive the simplest correction for a single boundary and higher genus $g=1$, and then discuss some properties of the generic higher genus result. This is very hard to do from the continuum approach, but we can assume the duality is true and obtain the leading handle correction to the partition function using the matrix model. To find higher genus amplitudes, we can use Eynard's topological recursion relations as follows \cite{Eynard:2004mh,Eynard:2007kz}. Given the two quantities \begin{equation} W_{0,1}(z) = 2 zy(z), \qquad W_{0,2}(z_1,z_2) = \frac{1}{(z_1-z_2)^2}, \end{equation} the generic amplitude for a double-scaled matrix integral can be found recursively by computing the residue \begin{align} \label{eynard} &W_{g,n}(z_1,J) = \\ &\text{Res}_{z\to 0}\left\{\frac{1}{(z_1^2-z^2)} \frac{1}{4y(z)}\left[W_{g-1,n-1}(z,-z,J) + \sum_{h,I,h',I'} W_{h,1+I}(z,I)W_{h',1+I'}(-z,I')\right]\right\}, \nonumber \end{align} where $h+h'=g$ and $I \cup I' = J$, with $I$ denoting a subset of the labels $z_2, \hdots, z_n$, and the sum excludes the cases $(h=g,I=J)$ and $(h'=g,I'=J)$. Using the minimal string spectral curve as a seed, and applying it to genus one with one boundary, we get the following correction to the partition function \begin{equation}\label{eq:Z11} Z(\ell_{\rm JT})_{g=1,n=1} = \frac{\sqrt{\ell_{\rm JT}}}{12\sqrt{\pi}} (\ell_{\rm JT} + \pi^2(1-p^{-4})), \end{equation} which we wrote in terms of the normalized length $\kappa\ell=\ell_{\rm JT} \frac{p^2}{ 8\pi^2}$.
Using the Kontorovich-Lebedev transform, this correction can be written \begin{equation} Z(\ell)_{g=1} \sim \int \lambda d\lambda \tanh \pi \lambda K_{i \lambda} (\ell) V_{1,1}(\lambda), \end{equation} where we will not worry about the overall normalization. The $p$-deformed WP volume appearing from \eqref{eq:Z11} is given by \begin{equation} \label{pgen1} V_{1,1}(\lambda)= \Big(\frac{\pi^2}{12} + \frac{\pi^2}{12 p^2} - \frac{\pi^2}{12 p^4}\Big)+ \frac{\pi^2}{3p^2}\lambda^2 . \end{equation} It is easy to see that after calling $\lambda = \frac{p}{4\pi} b_{\rm JT}$, the large $p$ limit of this expression reproduces the WP volume for the torus with one geodesic boundary of length $b_{\rm JT}$, namely $V_{1,1}(\lambda)\approx (b_{\rm JT}^2 + 4\pi^2)/48$. \\~\\ This $p$-deformed WP volume \eqref{pgen1} is again a polynomial in $\lambda^2$ as before. Using the recursion relation \eqref{eynard}, we can give an argument for why this is so for arbitrary genus $g$ and boundaries $n$. The resolvents $W_{g,n}(z_1, \hdots z_n)$ for a one-cut matrix model with edges at $z=a,b$ are symmetric rational functions of the $z_i$ with poles only at $z_i = a,b$, see e.g. section 4.2.3 in \cite{Eynard:2015aea}.\footnote{Except of course $W_{0,2}$.} In addition, they decay to zero as $z_i \to \infty$. For a double-scaled matrix integral, for which we shift the edge to $z_i=0$, these properties fix the $W_{g,n}(z_1, \hdots z_n)$ to be multivariate polynomials of $1/z_i$. If the spectral curve $y(z)$ is in addition an odd function of $z$, then the $W_{g,n}(z_1, \hdots z_n)$ are polynomials with only even powers of $1/z_i$, making them polynomials in $1/z_i^2$.\footnote{The reason for this constraint is that $W_{0,2}$ is not an even function of the $z_i$, but for $y(z)$ odd, when computing the residue in \eqref{eynard}, the Taylor series of $W_{0,2}(z,z_1)$ around $z=0$ needs to select an even power of $z$ (and hence of $z_1$) in order to contribute to the residue.} For the minimal string case at hand, the spectral curve is odd and hence this is true. The resolvent $W_{g,n}(z_1, \hdots z_n)$ is related to the multi-loop amplitude $Z_{g,n}(\ell_1 \hdots \ell_n)$ through \begin{equation} W_{g,n}(z_1, \hdots z_n) = 2^n z_1\hdots z_n \int_{0}^{+\infty} \prod_i d\ell_i e^{- \ell_i z_i^2} Z_{g,n}(\ell_1 \hdots \ell_n), \end{equation} which is in turn related to the WP volume $V_{g,n}(\bm{\lambda})$ by \eqref{defwp}. Each such $1/z_i^{2(m+1)}$ term in $W_{g,n}(z_1, \hdots z_n)$, where $m=0,1,\hdots$, then gets inverse Laplace transformed and Kontorovich-Lebedev transformed to the WP volumes using consecutively:\footnote{The $e^{-\ell}$ factor is explained by our choice to shift the spectral edge to $z=0$.} \begin{align} \frac{2\, \Gamma(m+\frac{3}{2})}{z^{2(m+1)}} &= 2z \int_{0}^{+\infty} d\ell e^{- \ell z^2} \ell^{m+1/2}, \qquad m=0,1,\hdots, \\ \ell^{m+1/2} e^{-\ell} &= \sqrt{\frac{2}{\pi}} \frac{1}{2^m m!}\int_{0}^{+\infty} d\lambda \lambda \tanh \pi \lambda \prod_{j=1}^{m} (\lambda^2 + (2j-1)^2/4) K_{i\lambda}(\ell). \end{align} Hence if $W_{g,n}$ is a multivariate polynomial in the $1/z_i^{2}$, as happens for the minimal string, then the $p$-deformed WP volumes are polynomials in the $\lambda_i^2$: \begin{equation} W_{g,n}(z_1,\hdots z_n) = \sum_{i_1\hdots i_n} \frac{c_{i_1\hdots i_n}}{z_1^{2i_1}z_2^{2i_2} \hdots z_n^{2i_n}} \qquad \to \qquad V_{g,n} = \sum_{i_1,\hdots, i_n =0}^{n+3g-3}\tilde{c}_{i_1\hdots i_n} \lambda_1^{2i_1}\lambda_2^{2i_2} \hdots \lambda_n^{2i_n}, \end{equation} as was to be shown.
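As an explicit cross-check of these statements (again an illustrative \texttt{sympy} sketch), one can reconstruct the $p$-deformed $V_{0,4}$ listed above from \eqref{WPp}, \eqref{toins} and \eqref{eq:inversederums}, and confirm the JT limit of $V_{1,1}$ in \eqref{pgen1}: \begin{verbatim}
import sympy as sp

p, u = sp.symbols('p u', positive=True)
b = sp.symbols('b1:5', positive=True)  # geodesic lengths b_1, ..., b_4
lam = [p*bi/(4*sp.pi) for bi in b]     # lambda_i = p b_i/(4 pi), eq. (geolim)

# u_JT-derivatives at 0 via Lagrange inversion (same kernel as before)
a1, a2, c = (1 - p)/2, (1 + p)/2, 4*sp.pi**2/p**2
F2 = sum(sp.rf(a1, k)*sp.rf(a2, k)/(sp.rf(2, k)*sp.factorial(k))*(-c*u)**k
         for k in range(4))
phi = -2/F2
u1 = sp.simplify(phi.subs(u, 0))                 # u'(0) = -2
u2 = sp.simplify(sp.diff(phi**2, u).subs(u, 0))  # u''(0)

# First u-derivative of each Legendre factor, eq. (toins) with m = 1
dP = [-(4*sp.pi**2/p**2)*(l**2 + sp.Rational(1, 4)) for l in lam]

# V_{0,4} = -(1/2) d/dx [u' prod_i P_i] at u_JT = 0, where P_i(0) = 1
V04 = sp.expand(-(u2 + u1**2*sum(dP))/2)
print(V04)  # 2 pi^2 + 6 pi^2/p^2 + (b1^2 + b2^2 + b3^2 + b4^2)/2

# JT limit of V_{1,1}(lambda), eq. (pgen1), with lambda = p b/(4 pi)
bb = sp.symbols('b', positive=True)
V11 = (sp.pi**2/12 + sp.pi**2/(12*p**2) - sp.pi**2/(12*p**4)
       + sp.pi**2/(3*p**2)*(p*bb/(4*sp.pi))**2)
print(sp.limit(V11, p, sp.oo))  # b^2/48 + pi^2/12
\end{verbatim}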
\begin{center} \textbf{Classical WP volumes} \end{center} As a final application of these results we will write an explicit formula for WP volumes on the sphere. One can take the JT limit directly at the level of the generating functions. Considering the description in terms of a Legendre function \eqref{legp}, inserting \eqref{geolim} and \eqref{ujt}, and using \eqref{AS}, we get: \begin{equation} P_{-\frac{1}{2}-\frac{ip}{4\pi}b_i}\Big(1+\frac{8\pi^2}{p^2}u_{\rm JT}\Big) \,\, \to \,\, J_0(b_i\sqrt{u_{\rm JT}}), \end{equation} leading to the closed formula for the (undeformed) WP volumes: \begin{equation} \boxed{ V_{0,n}(\mathbf{b}) =\lim_{x\to0} -\frac{1}{2} \Big( \frac{\partial}{\partial x} \Big)^{n-3} u'_{\rm JT}(x) \prod_{i=1}^n J_0(b_i \sqrt{u_{\rm JT}(x)})}, \end{equation} where the derivatives of $u_{\rm JT}(x)$ are equal to \begin{equation} \partial_x^{n} u_{\rm JT}(x=0) = \lim_{u\to0} \frac{d^{n-1}}{du^{n-1}} \Big( -\frac{2\pi \sqrt{u}}{I_1(2\pi\sqrt{u})}\Big)^{n}. \end{equation} For each value of $n$ it is easy to take the appropriate derivatives and obtain a formula for WP volumes with $n$ holes. We computed these explicitly for $n=1,\ldots,7$, matching previous results that use the loop equations presented, for example, in Appendix B of \cite{do2011moduli}. This is surprising since, even though we derived this formula from the matrix model, we did not use the loop equations explicitly. As a special case, we can take the WP volume on the sphere with $n$ punctures, which is equivalent to taking the limit $\mathbf{b}\to0$. It is easy to see that this gives $V_{0,n}(\mathbf{0}) =-\frac{1}{2}\partial_x^{n-2} u_{\rm JT}(0)$. Using the Lagrange inversion expression above for these derivatives gives a somewhat more explicit formula \begin{equation} V_{0,n}(\mathbf{0}) =\lim_{u\to0} \frac{1}{2} \frac{d^{n-3}}{du^{n-3}} \left( \frac{2\pi \sqrt{u}}{J_1(2\pi\sqrt{u})}\right)^{n-2}, \end{equation} where we used that the minus signs can be absorbed by the replacement of Bessel functions $I_1\to J_1$. This result is equivalent to the WP volume extracted from the generating function derived by Zograf \cite{zograf1998weilpetersson}, which is precisely the string equation of JT gravity. \begin{center} \textbf{Summary} \end{center} With these polynomials, we can now explicitly decompose the $n$-loop amplitude as: \begin{align} \left\langle \prod_{i=1}^n Z(\ell_i)\right\rangle_g = 2^n (2\pi)^{n-3}(\pi b^2)^{n} \prod_{i=1}^{n} \int_0^\infty \lambda_i d\lambda_i \tanh \pi \lambda_i \,V_{g,n}(\bm{\lambda}) \left\langle \mathcal{T}_{\alpha_{Mi}}\right\rangle_{\ell_i}, \end{align} in terms of the $p$-deformed gluing measure $d\mu(\lambda) \sim d\lambda_i \lambda_i \tanh \pi \lambda_i$, the $p$-deformed WP-volume polynomial $V_{g,n}(\bm{\lambda})$, and the bulk one-point functions \eqref{bulkone} with $\lambda = 2P/b$. Graphically, we have the situation: \begin{align} \left\langle \prod_{i=1}^n Z(\ell_i)\right\rangle \quad = \quad \prod_{i=1}^{n}\int d\mu(\lambda_i) \qquad \raisebox{-25mm}{\includegraphics[width=0.3\textwidth]{multiboundaryglue.pdf}} \end{align} Notice that one only integrates over the macroscopic labels where $\alpha_M = -q/2 + iP$ with $P \in \mathbb{R}$, in analogy with the JT limit. For finite $p$, one can deform the contour of integration and replace the integral by a discrete sum over minimal string physical operators \cite{Moore:1991ir}.
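The boxed puncture formula above is equally easy to evaluate symbolically; the following short \texttt{sympy} sketch (illustrative only) reproduces the $p\to\infty$ constant terms of the volumes listed earlier: \begin{verbatim}
import sympy as sp

u = sp.symbols('u')

def V0n_punctures(n, order=10):
    # (1/2) d^{n-3}/du^{n-3} (2 pi sqrt(u)/J_1(2 pi sqrt(u)))^{n-2} at u = 0,
    # using the series J_1(2 pi sqrt(u)) = pi sqrt(u) sum_k (-pi^2 u)^k/(k!(k+1)!)
    S = sum((-sp.pi**2*u)**k/(sp.factorial(k)*sp.factorial(k + 1))
            for k in range(order))
    return sp.simplify(sp.diff((2/S)**(n - 2), u, n - 3).subs(u, 0)/2)

for n in (4, 5, 6):
    print(n, V0n_punctures(n))
# V_{0,4}(0) = 2 pi^2,  V_{0,5}(0) = 10 pi^4,  V_{0,6}(0) = 244 pi^6/3
\end{verbatim}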
We studied this mainly for $g=0$, where we found explicit expressions \eqref{WPp}, but proposed a very similar structure for higher genus contributions, which we checked explicitly by computing $V_{1,1}(\lambda)$ and utilizing general arguments based on the topological recursion relations of the matrix model. \section{Conclusions} \label{sec:conclusions} Throughout this work, we have presented fixed length amplitudes of Liouville gravities, and in particular of the minimal string. We have developed both the continuum approach and the discrete matrix model approach. A particular emphasis was placed on the interpretation in terms of Euclidean gravity amplitudes at fixed temperature $\beta^{-1}$, and on their JT parametric limit. \\ We here present some open problems and preliminary results that will be left to future work. \begin{center} \textbf{Heavy boundary operators and cusps} \end{center} We have seen that taking $\beta_M = bh$ in \eqref{eq:2pt} and letting $b\to 0$, one finds the JT boundary two-point function. However, the expression \eqref{eq:2pt} is more general. In particular, if we set $\beta_M = Q-bh$, we would find a finite $b\to 0$ limit as well: \begin{equation} \mathcal{A}_{\beta_M}(\ell_1,\ell_2) \sim \int dk_1 dk_2 \rho_{\rm JT}(k_1)\rho_{\rm JT}(k_2)e^{-k_1^2 \ell_{{\rm JT}1}}e^{-k_2^2 \ell_{{\rm JT}2}}\frac{\Gamma(2h)}{\Gamma(h \pm i k_1 \pm i k_2)}, \end{equation} with \emph{inverted} vertex functions. This corresponds to taking a heavy boundary insertion. Since we know heavy bulk insertions correspond geometrically to conical singularities in the Euclidean JT geometry, it is natural to suspect that the situation here corresponds geometrically to having cusps in the boundary at the location of the operators. Such expressions are ill-defined when $h \in -\mathbb{N}/2$. \begin{center} \textbf{Quantum groups} \end{center} In section \ref{s:qg} we have developed the quantum group perspective on these amplitudes, mirroring the structure of JT gravity based on SL$(2,\mathbb{R})$. An interesting question is to understand precisely how this structure persists for four- and higher-point functions. This depends on understanding how the moduli summation for multiple ($>3$) boundary insertions works when combining the Liouville and the matter sectors. \\ The group theoretic structure $\mathcal{U}_{q}(\mathfrak{sl}(2,\mathbb{R}))$ is present in 3d gravity as well \cite{Jackson:2014nla}.\footnote{Another connection with 3d (and higher dimensional) gravity was developed for example in \cite{Ghosh:2019rcj, Iliesiu:2020qvm}, but only works in the Schwarzian limit.} In that case, however, one has angular dependence on all correlators, requiring a more complicated combination of these group theoretic building blocks. Our setup is based on the same (quantum) group structure, but does not require additional features. As such, it is one of the simplest quantum extensions of the SL$(2,\mathbb{R})$ case. \\ Another setting that generalizes JT gravity through $q$-deformation is the double-scaled SYK model, explicitly solved in \cite{Berkooz:2018jqr}. In that case the vertex functions were found to be of the form: \begin{equation} \frac{\Gamma_b(h\pm is_1 \pm is_2)}{\Gamma_b(2h)}, \end{equation} which is not quite the same as the structure we have. This can be explained since that work argues that double-scaled SYK is governed by the $q$-deformation into SU$_q(1,1)$, which is a different quantum group theoretical structure from ours.
In the classical regime $q\to 1$, both groups coincide, since we have the classical isomorphism SL$(2,\mathbb{R}) \simeq$ SU$(1,1)$.
\begin{center}
\textbf{Multi-boundary and higher genus amplitudes}
\end{center}
In the last section \ref{sec:othertopo}, we have investigated the structure of multi-loop amplitudes, both in the continuum approach and through matrix model techniques. This leads to several unanswered questions.
\\
We found the gluing measure for the minimal string for genus-zero multi-loop amplitudes to be $d\mu(\lambda) = \lambda\, d\lambda\, \tanh\pi \lambda$, limiting to the Weil-Petersson measure $d\mu_{\rm WP}(b)=b\, db$ in the semi-classical limit where $\lambda \to \infty$. The quantity $b$ has a geometric interpretation as the circumference of the gluing tube, and the factor of $b$ in $b\, db$ represents the sum over all possible twists, ranging from 0 to $b$, applied before gluing two tubes together. It would be interesting to find a similar geometric interpretation for the measure $d\lambda\, \lambda \tanh\pi \lambda$, perhaps as a gluing formula on quantum Riemann surfaces.
\\
In the same vein, we can observe that for generic $c_M<1$ matter, the two-loop amplitude at fixed matter momentum $p$ can be written suggestively as \cite{Moore:1991ag,Martinec:2003ka}
\begin{equation}
\left\langle Z(\ell,p)Z(\ell',-p)\right\rangle \sim \int_{0}^{+\infty}dE\, \rho_{\widetilde{SL(2)}}\left( \frac{E}{2}, \frac{p}{2} \right) K_{iE}(\ell) K_{iE}(\ell'),
\end{equation}
with gluing measure the Plancherel measure of the universal cover of SL$(2,\mathbb{R})$:
\begin{equation}
\rho_{\widetilde{SL(2)}}(s,\mu) = \frac{s\sinh 2\pi s}{\cosh 2\pi s - \cos 2\pi \mu}, \quad 0\leq \mu \leq 1.
\end{equation}
For the $(2,2\mathfrak{m}-1)$ minimal string, the matter momentum takes on the values $p=\pm 1/2$, and hence $\cos 2\pi \mu=0$. We do not understand the significance of this.
\\
When summing over higher genus, it remains to be seen whether a simplification occurs. For the case of $c=1$ ($b=1$), several expressions for the all-genus result are known in a very concise form; see e.g. \cite{Moore:1991sf} for early work and \cite{Betzios:2020nry} for a recent account.
\\
Finally, the expression \eqref{Wppdef} has some interesting implications. A different way to write it is the following:
\begin{equation}
\Big\langle \prod_{i=1}^n Z(\ell_i)\Big\rangle =\lim_{x\to 0}\sqrt{\frac{\ell_1\ldots \ell_n}{\ell_1+\ldots+\ell_n}} \Big(\frac{\partial}{\partial x}\Big)^{n-1} \langle Z(x;\ell_1+ \ldots+ \ell_n)\rangle.
\end{equation}
Each derivative can be interpreted as an insertion, for each boundary, of the KdV operator associated with the parameter $x$ (corresponding to $t_0$ in the usual nomenclature). The undeformed ($x=0$) version of $Z(\ell_1+ \ldots+ \ell_n)$ is, in the JT limit, the answer one would obtain from a multi-loop amplitude in BF theory associated to (the universal cover of) $SL(2,\mathbb{R})$, as derived in \cite{Verlinde:2020upt}. It would be interesting to understand the BF nature of this KdV operator, since it allows one to go from the moduli space of flat connections to the WP one, up to the simple length-dependent prefactor in the equation above. This formula also predicts a very simple behavior for the higher-order spectral form factor correlator $ \langle |Z(\beta+ i T)|^{2n} \rangle_{\rm conn} \sim (\beta^2+T^2)^{n/2}$, which (to leading order in the genus expansion) is valid for all times.
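To make the last statement explicit: for the connected $2n$-boundary correlator one takes $n$ boundaries of length $\ell=\beta+iT$ and $n$ of length $\bar{\ell}=\beta-iT$, so that the formula above factorizes the entire $T$-dependence into the square-root prefactor,
\begin{equation}
\sqrt{\frac{\ell^n \bar{\ell}^n}{n\ell + n\bar{\ell}}} = \sqrt{\frac{(\beta^2+T^2)^n}{2n\beta}} \,\propto\, (\beta^2+T^2)^{n/2},
\end{equation}
while the remaining factor $(\partial/\partial x)^{2n-1}\langle Z(x;\, 2n\beta)\rangle$ depends on the boundary lengths only through the $T$-independent combination $\sum_i \ell_i = 2n\beta$.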
A possible application of the multi-loop amplitudes computed here is to study the structure of the baby universe Hilbert space introduced in \cite{Coleman:1988cy,*Giddings:1988cx,*Giddings:1988wv} (and recently further developed in \cite{Saad:2019pqd} and \cite{Marolf:2020xie}), which we leave for future work. These Euclidean wormholes were recently found to be relevant for understanding the unitarity of black hole evaporation \cite{Saad:2019pqd, Almheiri:2019qdq, *Penington:2019kki, Marolf:2020xie}.\footnote{Although their Lorentzian interpretation is not clear \cite{Giddings:2020yes}.} Also, adding brane boundaries can be interpreted as fixing eigenvalues of the random matrix integral \cite{Maldacena:2004sn}, which allows one to simulate an underlying discrete system \cite{Blommaert:2019wfy,*Blommaert:2020seb}.
\begin{center}
\textbf{Supersymmetric versions}
\end{center}
Our construction of fixed length amplitudes can be generalized to the $\mathcal{N}=1$ minimal superstring, composed of $\mathcal{N}=1$ super-Liouville combined with a superminimal model, mimicking most of the steps in this work. The comparison to JT gravity can be made since the disk partition function, the bulk one-point function, and the boundary two-point function are all known \cite{Stanford:2017thb,Mertens:2017mtv,Stanford:2019vob}. The resulting structure of the amplitudes is quite analogous and is presented in \cite{Mertens:2020pfe}.
\begin{center}
\textbf{Dilaton gravity interpretation}
\end{center}
It would be of high interest to get a better understanding of the bulk gravitational interpretation of the Liouville gravities, complementing the holographic interpretations made in this work. We point out a connection of Liouville gravity to dilaton gravity in Appendix \ref{app:connliouville}, derived in \cite{StanfordSeiberg}, where we combine the Liouville field $\phi$ and the matter field $\chi$ into the conformal factor of the metric $\rho$ and the dilaton field $\Phi$. In particular, the dilaton potential is $V(\Phi) \sim \sinh 2b^2 \Phi$. Assuming such a connection to dilaton gravity exists, we can substantiate the precise form of the potential purely from bulk gravity considerations as follows. It is known that for a generic model with dilaton potential $V(\Phi)$,
\begin{equation}
\label{act}
S = - \frac{1}{2} \int d^2x \sqrt{g}\,(\Phi R + V(\Phi)),
\end{equation}
every classical solution to this system can be written in the form \cite{Gegenberg:1994pv,Witten:2020ert}:
\begin{equation}
\label{bhgen}
ds^2 = A(r) dt^2 + \frac{dr^2}{A(r)}, \qquad \Phi(r) = r,
\end{equation}
where the asymptotic region $r\to +\infty$ has a linearly diverging dilaton field, as in JT gravity. The classical solution is determined by the equations of motion in terms of the potential $V$ as:
\begin{equation}
A(r) = \int_{r_h}^{r}dr'\, V(r'),
\end{equation}
where $r_h$ is the horizon location. Moreover, the energy-temperature relation of the black hole is determined by
\begin{equation}
E = \frac{1}{2} \int^{V^{-1}(4\pi T)} V(\Phi)\, d\Phi,
\end{equation}
in terms of the dilaton potential $V(\Phi)$, where $V^{-1}$ denotes the inverse function. Given an $E(T)$ relation, one can solve this functional equation to find the dilaton potential $V(\Phi)$.\footnote{In fact, there is an explicit solution for the inverse function $\Phi(V)$.
First computing the canonical entropy $S(T)$ as a function of the temperature $T$, one finds:
\begin{equation}
\Phi = \frac{1}{2\pi} S\left(\frac{V}{4\pi}\right),
\end{equation}
which is uniquely invertible into $V(\Phi)$, assuming $S(T)$ is a monotonic function of $T$. } Taking
\begin{equation}
V(\Phi) = 4\pi b^2 \kappa \, \sinh 2 \pi b^2\Phi,
\end{equation}
we indeed find
\begin{equation}
E = \sqrt{ T^2/b^4+ \kappa^2} ,
\end{equation}
reproducing the first law \eqref{firstlaw} we found for the fixed-length disk partition function, but now coming from a (thermodynamically stable) bulk black hole solution. This provides substantial evidence for our claim that the bulk gravity is a 2d dilaton gravity model with a $\sinh$ dilaton potential.\footnote{The precise coefficients in the sinh potential can be changed by rescalings and are not important for our purposes here.} The (real-time) classical black hole solution \eqref{bhgen} is then:
\begin{equation}
\label{geom}
ds^2 = - 2\kappa \left[\cosh 2\pi b^2 r - \cosh 2 \pi b^2 r_h \right] dt^2 + \frac{dr^2}{2\kappa \left[\cosh 2\pi b^2 r - \cosh 2\pi b^2 r_h \right]}, \qquad \Phi(r) = r,
\end{equation}
where the horizon radius $r_h$ is related to the temperature $T$ as
\begin{equation}
r_h = \Phi_h = \frac{1}{2\pi b^2} \text{arcsinh} \frac{T}{\kappa b^2}.
\end{equation}
The thermal entropy of the system can be found as the Bekenstein-Hawking entropy, or directly by using the first law, and we get:
\begin{equation}
S = 2 \pi \Phi_h +S_0 = \frac{1}{b^2} \text{arcsinh} \frac{T}{\kappa b^2} +S_0.
\end{equation}
One checks that the Ricci scalar of this solution is indeed
\begin{equation}
R = - 8 \pi^2 b^4 \kappa\, \cosh 2\pi b^2 r = - V'(\Phi),
\end{equation}
as required by the $\Phi$ equation of motion of \eqref{act}. The geometry \eqref{geom} interpolates between the JT black hole for $r,r_h \ll 1/b^2$, with constant negative Ricci scalar, and an exponentially rising Ricci scalar closer to the boundary. This black hole solution has been written before in \cite{Kyono:2017jtc,*Kyono:2017pxs,*Okumura:2018xbh} in the context of a Yang-Baxter deformation of JT gravity.\footnote{The thermodynamical relations are not the same as there, due to a coordinate transformation in the time coordinate.} It would be interesting to understand this connection and the dilaton gravity description better, which we postpone to future work. As further probes of the bulk gravitational dynamics, we mention the following. Heavy operator insertions serve as interesting probes of backreaction effects, which are expected to have a gravitational interpretation in terms of classical energy injections. For JT gravity, this setup was analyzed in \cite{Lam:2018pvp,Goel:2018ubv}. In \cite{Blommaert:2019hjr,*Mertens:2019bvy,*Blommaert:2020yeo} JT bulk observables and their correlators were introduced, exploiting a radar definition to anchor bulk points to the holographic boundary. This relied strongly on the specifics of JT gravity as a theory of boundary frames (the Schwarzian description). While the bulk here would not be so easily treated, it would be very interesting to understand whether a similar construction in the bulk would be viable, and in particular whether bulk physics behaves similarly. Since the IR of the Liouville gravities studied here matches that of JT gravity, we do not expect strong deviations from the conclusions made there. Finally, it would be interesting to apply these methods to understanding closed universes.
This can be done by considering fixed length boundaries with imaginary length \cite{Maldacena:2019cbz, Cotler:2019nbi}. In particular, the CFT perspective on Liouville gravity might help in finding the correct inner product between no-boundary states.
\paragraph{Acknowledgements}
We thank V. Gorbenko, K. Jensen, S. Okumura, S. Shenker, D. Stanford, M. Usatyuk, H. Verlinde and W. Weng for useful discussions. We also thank A. Blommaert for initial collaboration. TM gratefully acknowledges financial support from Research Foundation Flanders (FWO Vlaanderen). GJT is supported by a Fundamental Physics Fellowship.
\section{I. Introduction}
The physics of perturbations in a black hole spacetime is reminiscent of a damped harmonic oscillator. Due to the dissipative nature of the system, the frequencies of the oscillations are usually complex. It is well known that the natural frequencies of a harmonic oscillator are independent of the specific initial pulse. However, when a sinusoidal driving force is applied, the frequency of the steady-state solution is, quite the contrary, governed by that of the external force. This indicates the distinct roles played by the initial pulse and the external driving force. Not surprisingly, these concepts can be explored analogously in the context of black hole perturbations. In fact, the problem of black hole quasinormal modes~\cite{agr-qnm-review-01,agr-qnm-review-02,agr-qnm-review-06,agr-qnm-review-03,agr-qnm-review-04} is more sophisticated. As an open system, the dissipation manifests itself through ingoing waves at the horizon or outgoing waves at infinity in asymptotically flat spacetimes, which subsequently leads to energy loss. Accordingly, for such a non-Hermitian system, the relevant excited states are quasinormal modes with complex frequencies. Besides, the boundary condition demands more strenuous efforts, as the solution diverges at both spatial boundaries. For black hole quasinormal modes, most studies concern the master equation without any explicit external source in the time domain. In other words, the master equation in the time domain is a homogeneous equation, where the initial perturbation pulse furnishes the system with an initial condition. In the Laplace s-domain, however, the initial condition is transformed to the r.h.s. of the equation, so that the resultant second-order ordinary differential equation becomes inhomogeneous. Nonetheless, one intuitively argues that the resultant source term in the s-domain is of little physical relevance, as it does not affect the existing quasinormal modes. Also, we note that the above scenario is largely related to matter being minimally coupled to the curvature in Einstein's general relativity in spherical symmetry. In the Kerr black hole background, the radial part of the master equation is not a single self-contained second-order differential equation. Its eigenvalue is coupled to that of the angular part of the master equation, and therefore it is not obvious why the initial pulse will not influence the quasinormal frequencies of the Kerr black holes. Further investigation is called for. Moreover, motivated mainly by the observed accelerated cosmic expansion, theories of modified gravity have become a topic of increasing interest in the last decades. Among many promising possibilities, the latter include scalar-tensor~\cite{agr-modified-gravity-Horndeski-01,agr-modified-gravity-dhost-01}, vector-tensor~\cite{agr-modified-gravity-vector-01, agr-modified-gravity-vector-02}, and scalar-vector-tensor~\cite{agr-modified-Moffat-01} theories. There, the matter field can be non-minimally coupled to the curvature sector, and therefore one might expect the resultant master equations in the time domain to become inhomogeneous. As an example, the degenerate higher-order scalar-tensor (DHOST) theory~\cite{agr-modified-gravity-dhost-01, agr-modified-gravity-dhost-02, agr-modified-gravity-dhost-03} may admit ``stealth'' solutions~\cite{agr-modified-gravity-dhost-06, agr-modified-gravity-dhost-10, agr-modified-gravity-dhost-11, agr-modified-gravity-dhost-12}.
The latter do not influence the background metric due to their vanishing energy-momentum tensor~\cite{agr-qnm-17}. The resultant metric differs from the Kerr one by being dressed with a linearly time-dependent scalar field. Indeed, as has been demonstrated~\cite{agr-bh-nohair-03,agr-bh-nohair-04}, under moderate hypotheses, the only non-trivial modification that can be obtained is at the perturbation level. In this regard, the metric perturbations in the DHOST theories have been investigated recently~\cite{agr-modified-gravity-dhost-07, agr-modified-gravity-dhost-08, agr-modified-gravity-dhost-09}. It was shown that the equations of motion for the tensorial perturbations are characterized by some intriguing features. To be specific, the scalar perturbation is shown to be decoupled from those of the Einstein tensor. This result leads to immediate simplifications, namely, the time-domain master equations for the tensor perturbations possess the form of linearized Einstein equations supplemented with a source term. The latter, in turn, is governed by the scalar perturbation. Therefore, it is natural to expect that the study of the related quasinormal modes may provide essential information on the stealth scalar hair, as well as on the properties of the spacetime of the gravity theory in question. Furthermore, just as an external driving force affects the harmonic oscillator, the external source is expected to give rise to novel features. One physically relevant example related to the external field source is a {\it quench}, introduced to act as a driving force that influences the system~\cite{condensed-quench-02, condensed-quench-review-01}. For instance, a holographic analysis of a quench is carried out in Ref.~\cite{adscft-condensed-quench-06}, where a zero mode, reminiscent of the Kibble-Zurek scaling in the dual system, was disclosed. In particular, the specific mode belongs to neither the original metric nor the gauge field; it is obtained via a time-dependent source introduced on the boundary. The present work is motivated by the above intriguing scenarios and aims to study the properties of quasinormal modes with external sources in the time domain. To be specific, we will investigate the case when the master equation in the time domain is inhomogeneous. In order to show how different characteristics of the source influence the perturbation behavior, we first concentrate our attention on the case where the source takes on the role of an initial pulse. Intuitively, the quasinormal frequencies should not be affected by such an initial-pulse source. We will confirm that this intuition holds not only for static spherical black hole backgrounds but also for rotating configurations. It is especially not straightforward to confirm this intuition for the rotating case, since the eigenvalues of the radial and angular parts of the master equation are coupled, which invalidates the original arguments based on the contour integral. Moreover, in comparison to the case of a driven harmonic oscillator, it is meaningful to examine further the non-trivial influence on the perturbations caused by external source terms. We will show that, unlike the initial-pulse source, an external field introduced to the system may bring additional modes to the perturbed system. This is because the external source term can introduce dissipative singularities in the complex plane, which result in novel modes in the system. The organization of the paper is as follows.
In the following section, we first discuss the solution of a specific driven dissipative system, namely a vibrating string. Then, in section III, we generalize to a rigorous discussion of black hole quasinormal modes in terms of the analysis of the pole structure of the associated Green function in the Laplace s-domain. We concentrate on the effects of the source on the perturbed system when it acts as an initial pulse. In section III.A, we confirm the physical intuition regarding the influence of the initial pulse on perturbations around static spherical black hole backgrounds. In section III.B, we present a proof supporting this intuition for the effect of an initial-pulse source on the perturbation system in rotating configurations. In section IV, we extend our discussion to the influence on the perturbation system of an external source generated by an external field. We show that the external source term may induce additional quasinormal frequencies due to its modifications of the pole structure of the solution. We present numerical confirmation to support our analytic arguments in section V. The last section is devoted to further discussions and conclusions.
\section{II. The quasinormal frequencies of a vibrating string with dissipation}
Characterized by complex natural frequencies, a damped oscillator is usually employed to illustrate the physical content of quasinormal oscillations in a dissipative system. Moreover, regarding a dissipative wave equation, a vibrating string subjected to a driving force is an appropriate analogy. To illustrate the main idea, the following derivation concerns a toy model investigated recently by the authors of Ref.~\cite{agr-modified-gravity-dhost-08}. In this model, the wave propagating along the string is governed by the following dissipative wave equation with a source term, namely,
\begin{eqnarray}
\frac{\partial^2 \Psi}{\partial x^2}-\frac{\partial^2\Psi}{\partial t^2}-\frac{2}{\tau}\frac{\partial\Psi}{\partial t}=S(t, x) ,
\label{toy_eq}
\end{eqnarray}
where the string is held fixed at both ends, and therefore the wave function $\Psi(t, x)$, as well as the source $S(t, x)$, satisfies the boundary conditions
\begin{eqnarray}
\Psi(t, 0)=\Psi(t, L)=0 \nonumber\\
S(t, 0)=S(t, L)=0,
\label{toy_boundary_conditions}
\end{eqnarray}
respectively, where $L$ denotes the length of the string. Here, the relaxation time $\tau = \mathrm{const.}$ plays the role of a simplified dissipation mechanism. If the source term vanishes, $\Psi$ is governed by the superposition of the quasinormal oscillations
\begin{eqnarray}
\Psi(t, x) = \sum_n A_n \sin(n\pi x/L)e^{-i\omega_n t} ,
\label{toy_sol_sourceless}
\end{eqnarray}
with complex frequencies $\omega_n$ given by
\begin{eqnarray}
\omega_n^\pm \tau \equiv -i\pm \sqrt{n^2\frac{\pi^2\tau^2}{L^2}-1} .
\label{toy_qnm}
\end{eqnarray}
In the presence of the source term, it is straightforward to verify that the formal solution of the wave equation Eq.~\eqref{toy_eq} reads
\begin{eqnarray}
\Psi(t, x)=-\sum_n\int_{-\infty}^\infty d\omega\frac{\mathscr{S}(\omega, n)\sin(n\pi x/L)e^{-i\omega t}}{n^2\pi^2/L^2-\omega^2-2i\omega/\tau} ,
\label{toy_sol_source_formal}
\end{eqnarray}
where we have conveniently expanded the source term $S(t, x)$ in the form
\begin{eqnarray}
S(t, x)=\sum_n\int_{-\infty}^\infty d\omega \sin(n\pi x/L) e^{-i\omega t}\mathscr{S}(\omega, n).
\label{toy_source}
\end{eqnarray}
We note that the integration range for the variable $\omega$ is from $-\infty$ to $\infty$, as it originates from a Fourier transform.
Subsequently, for $t>0$, according to Jordan's lemma, one closes the integration contour in the lower half-plane. As a result, the integral evaluates to the sum of the residues of the integrand, namely,
\begin{eqnarray}
\Psi(t, x)&=&2\pi i\sum_n \mathrm{Res} \left(\frac{\mathscr{S}(\omega, n)\sin(n\pi x/L)e^{-i\omega t}}{n^2\pi^2/L^2-\omega^2-2i\omega/\tau}\right)\nonumber\\
&=& 2\pi i\sum_n \frac{\sin(n\pi x/L)}{\omega_n^+-\omega_n^-}\left[\mathscr{S}(\omega_n^+, n)e^{-i\omega_n^+ t}-\mathscr{S}(\omega_n^-, n)e^{-i\omega_n^- t}\right].
\label{toy_sol_source_res}
\end{eqnarray}
Here we have assumed that the source term is a moderately benign function, in the sense that $\mathscr{S}(\omega, n)$ does not contain any singularity. The quasinormal frequencies can be read off by examining the temporal dependence of the above result, namely, the $e^{-i\omega_n^\pm t}$ factors. They are, therefore, determined by the poles of the integrand. The latter are the zeros of the denominator on the first line of Eq.~\eqref{toy_sol_source_res}, which are precisely the frequencies given in Eq.~\eqref{toy_qnm}. It is observed that both quasinormal frequencies given by Eq.~\eqref{toy_qnm} are below the real axis, and therefore will be taken into account by the residue theorem. If $n^2\pi^2\tau^2/L^2 > 1$, the two frequencies lie on a horizontal line, symmetric about the imaginary axis. If, on the other hand, $n^2\pi^2\tau^2/L^2 < 1$, both frequencies lie on the imaginary axis below the origin. It is rather interesting to point out that the above simple model also furnishes an elementary example of the so-called {\it gapped momentum states}~\cite{GMS-01, GMS-02}, where a gap is present in the dispersion relation, but on the momentum axis. Under moderate assumptions for the source term, we have arrived at a conclusion that seemingly contradicts the example of the driven harmonic oscillator given initially. In the following section, we first extend the results to the context of black hole quasinormal modes. Then, in section IV, we resolve the above apparent contradiction by further exploring the different characteristics of the initial pulse and the external driving force.
\section{III. Moderate external source acting as the initial pulse}
At this point, one might argue that the analogy given in the last section can only be viewed as a toy model when compared with the problem of black hole quasinormal modes. First, the boundary condition of the problem is different: the solution is divergent at both spatial boundaries. Besides, the system is dissipative not due to localized friction but owing to the energy loss through its boundaries, namely, the ingoing waves at the horizon and/or the outgoing waves at infinity. As a result, the oscillation frequencies are complex, which can be further traced to the fact that the system is non-Hermitian. In terms of the wave equation, the term containing the relaxation time $\tau$ is replaced by an effective potential. Nonetheless, in the present section, we show that a similar conclusion can also be reached for black hole quasinormal modes. We first discuss static black hole metrics, then extend the results to the case of perturbations in rotating black holes. The scenario where the external source itself introduces additional quasinormal frequencies is deferred to section IV.
\subsection{A. Schwarzschild black hole metric}
For a static black hole metric, the perturbation equations for various types of perturbations can be simplified by using the method of separation of variables $\chi=\Psi(t,r)S(\theta)e^{im\varphi}$. The radial part of the master equation is a second-order differential equation~\cite{agr-qnm-review-03,agr-qnm-review-06},
\begin{eqnarray}
\frac{\partial^2}{\partial t^2}\Psi(t, x)+\left(-\frac{\partial^2}{\partial x^2}+V\right)\Psi(t, x)=0 ,
\label{master_eq_ns}
\end{eqnarray}
where the effective potential $V$ is determined by the given spacetime metric, the spin ${\bar{s}}$, and the angular momentum $\ell$ of the perturbation. For instance, in the four-dimensional Schwarzschild or SAdS metric, for massless scalar, electromagnetic, and vector gravitational perturbations, it reads
\bqn
V=f\left[\frac{\ell(\ell+1)}{r^2}+(1-{\bar{s}}^2)\left(\frac{2M}{r^3}+\frac{4-{\bar{s}}^2}{2L^2}\right)\right] ,
\lb{V_master}
\eqn
where
\bqn
f=1+r^2/L^2-2M/r ,
\lb{f_master}
\eqn
$M$ is the mass of the black hole, and $L$ represents the curvature radius of the AdS spacetime, so that the Schwarzschild geometry corresponds to $L\to\infty$. The master equation is often conveniently expressed in terms of the tortoise coordinate, $x\equiv r_*(r)=\int dr/f$. By expanding the external source in terms of spherical harmonics at a given radius, $r$ or $x$, the resultant radial equation is given by
\begin{eqnarray}
\frac{\partial^2}{\partial t^2}\Psi(t, x)+\left(-\frac{\partial^2}{\partial x^2}+V(x)\right)\Psi(t, x)=S(t, x) ,
\label{master_eq}
\end{eqnarray}
where $S(t, x)$ corresponds to the expansion coefficient for a given harmonic $(\ell,m)$. In what follows, one carries out the Laplace transform in the time domain~\cite{agr-qnm-12,agr-qnm-review-01,agr-qnm-review-02},
\begin{eqnarray}
\hat{f}(s, x)=\int_0^\infty e^{-st} \Psi(t, x) dt , \nonumber\\
\mathscr{S}(s, x)=\int_0^\infty e^{-st} S(t, x) dt .
\label{master_Laplace}
\end{eqnarray}
Subsequently, the resultant radial equation in the s-domain reads
\begin{eqnarray}
\hat{f}''(s, x)+\left(-s^2-V(x)\right)\hat{f}(s, x)=\mathcal{I}(s, x) - \mathscr{S}(s, x) ,
\label{master_eq_s}
\end{eqnarray}
where a prime $'$ indicates the derivative with respect to $x$, and the source terms on the r.h.s. of the equation consist of $\mathscr{S}(s, x)$ and $\mathcal{I}(s, x)$. The latter is governed by the initial condition
\begin{eqnarray}
\mathcal{I}(s, x) = -s\left.\Psi\right|_{t=0} - \left.\frac{\partial \Psi}{\partial t}\right|_{t=0}.
\label{master_eq_sic}
\end{eqnarray}
We note that the lower limit of the integrations in Eqs.~\eqref{master_Laplace} is ``0''. Subsequently, $\hat{f}(s, x)$ and $\mathscr{S}(s, x)$ are not able to capture any detail of $\Psi(t, x)$ and $S(t, x)$ for $t<0$; in practice, the latter are assumed to vanish identically there. It is apparent that the above equation falls back to that of the sourceless case by taking $\mathscr{S}(s, x)=0$~\cite{agr-qnm-12}. The solution of the inhomogeneous differential equation Eq.~\eqref{master_eq_s} can be formally obtained by employing the Green function method. To be specific,
\begin{eqnarray}
\hat{f}(s, x) = \int_{-\infty}^{\infty}G(s, x, x')\left(\mathcal{I}(s, x') - \mathscr{S}(s, x')\right)dx' ,
\label{formal_solution_eq_s}
\end{eqnarray}
where the Green function satisfies
\begin{eqnarray}
G''(s, x, x')+\left(-s^2-V(x)\right)G(s, x, x')=\delta(x-x') .
\label{master_eq_Green_def}
\end{eqnarray}
It is straightforward to show that
\begin{eqnarray}
G(s, x, x') = \frac{1}{W(s)}f_-(s, x_<)f_+(s, x_>) ,
\label{master_eq_Green}
\end{eqnarray}
where $x_<\equiv \min(x, x')$, $x_>\equiv \max(x, x')$, and $W(s)$ is the Wronskian of $f_-$ and $f_+$. Here $f_-$ and $f_+$ are the two linearly independent solutions of the corresponding homogeneous equation satisfying the physically appropriate boundary conditions~\cite{agr-qnm-12}
\begin{eqnarray}
\left\{ \begin{matrix} f_-(s, x)\sim e^{s x} & \mathrm{as}\ x\to -\infty \\ f_+(s, x)\sim e^{-s x} & \mathrm{as}\ x\to \infty \end{matrix}\right.
\label{master_bc}
\end{eqnarray}
in asymptotically flat spacetimes, which are bounded for $\Re s>0$. The wave function can thus be obtained by evaluating the integral
\begin{eqnarray}
\Psi(t, x)=\frac{1}{2\pi i}\int_{\epsilon-i\infty}^{\epsilon+i\infty} e^{st}\hat{f}(s, x)ds ,
\label{inverse_Laplace}
\end{eqnarray}
where the integral is carried out along a vertical line in the complex plane, $s=\epsilon+is_1$ with $\epsilon>0$. Reminiscent of the toy model presented in the previous section, the discrete quasinormal frequencies are again established by evaluating Eq.~\eqref{inverse_Laplace} using the residue theorem. In this case, one employs the extended Jordan's lemma to close the contour with a large semicircle to the left of the original integration path~\cite{book-methods-mathematical-physics-03}. The integration gives rise to the well-known result
\begin{eqnarray}
\oint e^{st}\hat{f}(s, x)ds = {2\pi i}\sum_q \mathrm{Res}\left(e^{st}\hat{f}(s, x), s_q\right) + (\mathrm{other\ contributions}),
\label{int_poles}
\end{eqnarray}
where $s_q$ indicates the poles inside the contour, and ``other contributions'' refers to those~\cite{agr-qnm-07, agr-qnm-08,agr-qnm-13,agr-qnm-14} from the branch cut on the real axis, the essential pole at the origin, and the large semicircle. Therefore, putting all the pieces together, namely, Eqs.~\eqref{inverse_Laplace}, \eqref{int_poles}, \eqref{formal_solution_eq_s}, and \eqref{master_eq_Green}, leads to
\begin{eqnarray}
\Psi(t, x)&=&\frac{1}{2\pi i}\int_{\epsilon-i\infty}^{\epsilon+i\infty} e^{st}\int_{-\infty}^{\infty}G(s, x, x')\left[\mathcal{I}(s, x')-\mathscr{S}(s, x')\right]dx'ds \nonumber\\
&=&\frac{1}{2\pi i}\oint e^{st}\frac{1}{W(s)}\int_{-\infty}^{\infty}f_-(s, x_<)f_+(s, x_>)\left[\mathcal{I}(s, x')-\mathscr{S}(s, x')\right]dx'ds \nonumber\\
&=&\sum_q e^{s_qt}\mathrm{Res}\left(\frac{1}{W(s)}, s_q\right)\int_{-\infty}^{\infty}f_-(s_q, x_<)f_+(s_q, x_>)\left[\mathcal{I}(s_q, x')-\mathscr{S}(s_q, x')\right]dx' ,
\label{Laplace_eq_formal_solution}
\end{eqnarray}
where the residues are substituted after the last equality. The above results can be rewritten as
\begin{eqnarray}
\Psi(t, x)=\sum_q c_q u_q(t, x) ,
\label{Laplace_eq_formal_solution_w_coefficients}
\end{eqnarray}
with
\begin{eqnarray}
c_q&=&\mathrm{Res}\left(\frac{1}{W(s)}, s_q\right)\int_{x_1}^{x_\mathcal{I}}f_-(s_q, x')\left[\mathcal{I}(s_q, x')-\mathscr{S}(s_q, x')\right]dx' \nonumber,\\
u_q(t, x)&=& e^{s_q t}f_+(s_q, x) ,
\label{Laplace_eq_formal_coefficients}
\end{eqnarray}
where one considers the case in which the initial perturbation has compact support; in other words, it is located in a finite range $x_1 < x' < x_\mathcal{I}$, and the observer is located further to the right of it, $x > x_\mathcal{I}$. The quasinormal frequencies can be extracted from the temporal dependence of the solution, namely, Eq.~\eqref{Laplace_eq_formal_coefficients}.
Since $e^{s_q t}$ is the only time-dependent factor, the temporal behavior is dictated by the values of the poles $s_q$. The locations of the poles $s_q$ are entirely governed by the Green function Eq.~\eqref{master_eq_Green} and, in turn, are determined by the zeros of the Wronskian. Therefore, according to the formal solution Eq.~\eqref{Laplace_eq_formal_solution} or \eqref{Laplace_eq_formal_coefficients}, they are independent of the coefficients $c_q$, into which the source $\mathscr{S}(s, x)$ enters. As $\Re s_q < 0$, the wave functions diverge at the spatial boundaries, which can be readily seen by substituting $s=s_q$ into Eqs.~\eqref{master_bc}; this is consistent with the results from the Fourier analysis~\cite{agr-qnm-review-02,agr-qnm-review-03,agr-qnm-lq-matrix-04} mentioned above. It is observed from Eq.~\eqref{Laplace_eq_formal_coefficients} that, for a given initial condition $\mathcal{I}(s_q, x)$, one may manipulate the external driving force $\mathscr{S}(s_q, x)$ so that only a single mode $s_q$ is present in the solution. We note that the above discussions closely follow those in the literature (see, for instance, Refs.~\cite{agr-qnm-12,agr-qnm-review-02}). The only difference is that one subtracts the contribution of the external source, namely, $\mathscr{S}(s, x)$, from the initial condition $\mathcal{I}(s, x')$ in Eq.~\eqref{formal_solution_eq_s}. It is well known that the initial conditions of the perturbation are irrelevant to the quasinormal frequencies, which characterize the {\it sound} of the black hole. In this context, it is tempting to conclude that the external source term on the r.h.s. of the master equation Eq.~\eqref{master_eq} bears a similar physical content. The Laplace formalism employed in this section facilitates the discussion. On the other hand, from Eq.~\eqref{Laplace_eq_formal_coefficients}, one finds that the amplitudes of the quasinormal modes will still be affected by the external source. Overall, regarding the detection of quasinormal oscillations, the inclusion of an external source does imply a significant modification of observables, such as the signal-to-noise ratio (SNR). We note that the above discussions are valid under the assumption that $\mathscr{S}(s, x)$ features a moderate spectrum in the s-domain. A notable exception will be discussed below in section IV. Before closing this subsection, we briefly comment on the equivalence between the above formalism based on the Laplace transform and that in terms of Fourier analysis. The results concerning the contour of integration and the quasinormal modes can be compared readily by taking $s=-i \omega$~\cite{agr-qnm-14}. To be more explicit, if one employs the Fourier transform together with the Green function method to solve Eq.~\eqref{master_eq_ns}, the formal solution has the form~\cite{book-methods-mathematical-physics-04}
\begin{eqnarray}
\Psi(t,x)=\int dx' G(t,x,x')\left.\frac{\partial \Psi(t,x')}{\partial t}\right|_{t=0}+\int dx' \frac{\partial G(t,x,x')}{\partial t}\left.\Psi(t,x')\right|_{t=0} ,
\label{solution_Green_Fourier}
\end{eqnarray}
where one considers the case without a source, and the contributions from the boundary at spatial infinity are physically irrelevant and have been ignored. The Green function is then defined by
\begin{eqnarray}
\frac{\partial^2}{\partial t^2}G(t,x,x')+\left(-\frac{\partial^2}{\partial x^2}+V\right)G(t,x,x')=\delta(t-t')\delta(x-x') .
\label{Green_Fourier}
\end{eqnarray}
We assume that the perturbations vanish identically for $t<0$; in other words, $G(t,x,x')=0$ for $t<0$.
By employing the Fourier transform in place of the Laplace transform, we have
\begin{eqnarray}
\tilde{G}(\omega, x,x')= \int_{-\infty}^\infty dt G(t,x,x') e^{i\omega t}=\int_0^\infty dt G(t,x,x') e^{i\omega t} ,
\label{Green_Fourier_transform}
\end{eqnarray}
where $\tilde{G}(\omega, x,x')$ satisfies
\begin{eqnarray}
-\omega^2 \tilde{G}(\omega, x,x')+\left(-\frac{\partial^2}{\partial x^2}+V\right)\tilde{G}(\omega, x,x')=\delta(x-x') .
\label{master_equation_Green_Fourier}
\end{eqnarray}
Now, it is apparent that, up to an overall sign, the solution of Eq.~\eqref{master_equation_Green_Fourier} is essentially identical to Eq.~\eqref{master_eq_Green} obtained in terms of the Laplace transform. The boundary contributions to the formal solution Eq.~\eqref{solution_Green_Fourier} are precisely those that the initial condition $\mathcal{I}$ contributes to Eq.~\eqref{formal_solution_eq_s}. As discussed above, the main reason to employ the Laplace transform is that the formalism provides a transparent interpretation of the role played by the external source.
\subsection{B. Kerr black hole metric}
As most black holes are likely to be rotating, calculations regarding stationary but rotating metrics are of potential significance from an experimental viewpoint. In this subsection, we extend the above arguments to the case of the Kerr metric. Here, the essential point is that the master equation of the Kerr metric cannot be rewritten in the form of a single second-order ordinary differential equation, such as Eq.~\eqref{master_eq}. To be specific, by employing the method of separation of variables $\chi=e^{-i\omega t}e^{im\varphi}R(r)S(\theta)$, in standard Boyer-Lindquist coordinates, the master equation is found to be~\cite{agr-qnm-15}
\bqn
\lb{master_eq_Kerr}
\Delta^{-{\bar{s}}}\frac{d}{dr}\left(\Delta^{{\bar{s}}+1}\frac{d}{dr}\right)\hat{R}(\omega,r)+V\hat{R}(\omega,r)&=&0 ,\\
\left[\frac{d}{du}(1-u^2)\frac{d}{du}\right]{_{\bar{s}}S}_{\ell m}&& \nb\\
+\left[a^2 \omega^2u^2-2a\omega\bar{s}u+\bar{s}+{_{\bar{s}}A}_{\ell m}-\frac{(m+{\bar{s}}u)^2}{1-u^2}\right]{_{\bar{s}}S}_{\ell m}&=&0 \lb{master_eq_Kerr2},
\eqn
where
\bqn
\lb{potential_Kerr}
V(r)&=&\frac{1}{\Delta(r)}\left\{(r^2+a^2)^2\omega^2-4Mam\omega r+a^2m^2+2ia(r-M)m{\bar{s}}-2iM(r^2-a^2){\bar{s}}\omega\right\}\nb\\
&+&(-a^2 \omega^2+2i\omega{\bar{s}}r-{_{\bar{s}}A}_{\ell m}) ,\nb\\
\Delta(r)&=&r^2-2Mr+a^2 ,\nb\\
u&\equiv&\cos\theta .
\eqn
Also, $M$ and $aM\equiv J$ are the mass and angular momentum of the black hole, while $m$ and $\bar{s}$ are the azimuthal number and spin of the perturbation field. The solution of the angular part, ${_{\bar{s}}S}_{\ell m} = {_{\bar{s}}S}_{\ell m}(a\omega,\theta,\phi)$, is known as the spin-weighted spheroidal harmonics. Here we have adopted the Fourier-transform formalism for simplicity. Although both equations are ordinary differential equations, the radial equation for the quasinormal frequency $\omega$ now depends explicitly on ${_{\bar{s}}A}_{\ell m}$. The latter is determined by the angular part of the master equation, which again involves $\omega$. Therefore, when an external source is introduced, it seems one can no longer straightforwardly employ the arguments presented in the last section. In particular, the arguments based on the contour integral seem to work only for the case where the radial equation does not implicitly depend on $\omega$ through the angular eigenvalue. In what follows, however, we elaborate to show that the existing spectrum of quasinormal frequencies remains unchanged. We divide the proof into two parts.
The starting point is to assume that the solution of the homogeneous Eqs.~\eqref{master_eq_Kerr}-\eqref{master_eq_Kerr2} is already established. First, let us focus on one particular quasinormal frequency $\omega = \omega_{n,\ell,m}$. For a given $\omega_{n,\ell,m}$, the angular part Eq.~\eqref{master_eq_Kerr2} is well-defined, and its solution is the spin-weighted spheroidal harmonic, ${_{\bar{s}}S}_{\ell m}$. The latter is uniquely associated with a given value of ${_{\bar{s}}A}_{\ell m}$. Now let us introduce an external source to the perturbation equations~\eqref{master_eq_Kerr}-\eqref{master_eq_Kerr2}. One can show that $\omega=\omega_{n,\ell,m}$ must also be a pole of the Green function of the radial part of the resultant master equation. The proof proceeds as follows. It is known that the spin-weighted spheroidal harmonics form a complete, orthogonal set for a given combination of $\bar{s}, a\omega$, and $m$~\cite{book-blackhole-physics-Frolov}. Therefore, they can be employed to expand an arbitrary external source. The expansion coefficient $\mathscr{S}$, a function of the radial coordinate $r$, will enter the radial part of the master equation, namely,
\bqn
\lb{master_eq_Kerr_source}
\Delta^{-{\bar{s}}}\frac{d}{dr}\left(\Delta^{{\bar{s}}+1}\frac{d}{dr}\right)\hat{R}(\omega,r)+V({_{\bar{s}}A}_{\ell m \omega_n})\hat{R}(\omega,r)=\mathscr{S}(\omega,r) ,
\eqn
while the angular part Eq.~\eqref{master_eq_Kerr2} remains the same. Note that ${_{\bar{s}}A}_{\ell m}$ is evaluated at the given $\omega_{n,\ell,m}$, and is thus denoted by ${_{\bar{s}}A}_{\ell m \omega_n}$. Now, one is allowed to release and vary $\omega$ in Eq.~\eqref{master_eq_Kerr_source}, in order to solve an equation similar to Eq.~\eqref{master_eq_s}. As discussed in the last section, one may utilize the Green function method, namely, for real values of $\omega$ one solves
\bqn
\lb{master_eq_Kerr_Green}
\Delta^{-{\bar{s}}}\frac{d}{dr}\left(\Delta^{{\bar{s}}+1}\frac{d}{dr}\right)G(\omega,r,r')+V({_{\bar{s}}A}_{\ell m \omega_n})G(\omega, r,r')=\delta(r-r') ,
\eqn
and then considers the analytic continuation of $\omega$ onto the complex plane. It is evident that $\omega_{n,\ell,m}$ must be a pole of the above Green function. This is because Eq.~\eqref{master_eq_Kerr_Green} does not involve the external source $\mathscr{S}(\omega,r)$, and therefore the poles must be identical to the quasinormal frequencies of the related sourceless scenario. As we have already assumed, the latter, Eqs.~\eqref{master_eq_Kerr}-\eqref{master_eq_Kerr2}, have already been solved, and $\omega_{n,\ell,m}$ is one of the quasinormal frequencies. Besides, we note that the other poles of Eq.~\eqref{master_eq_Kerr_Green} are irrelevant, since they obviously do not satisfy Eq.~\eqref{master_eq_Kerr2}. Moreover, the forms of Eq.~\eqref{master_eq_Kerr_source} as well as the Green function both change once a different value of $\omega_{n,\ell,m}$ is considered. Secondly, let us consider a given $\omega$ of arbitrary value. Again, the angular part of the master equation Eq.~\eqref{master_eq_Kerr2} is well-defined as an eigenvalue problem. Subsequently, its solutions, the spin-weighted spheroidal harmonics, as a complete, orthogonal set for given $\bar{s}, a\omega$, and $m$, can be utilized to expand the external source. One finds the following radial equation
\bqn
\lb{master_eq_Kerr_source_omega}
\Delta^{-{\bar{s}}}\frac{d}{dr}\left(\Delta^{{\bar{s}}+1}\frac{d}{dr}\right)\hat{R}(\omega,r)+V({_{\bar{s}}A}_{\ell m \omega})\hat{R}(\omega,r)=\mathscr{S}(\omega,r) .
\eqn
It is noted that the only difference is that ${_{\bar{s}}A}_{\ell m}$ now explicitly depends on $\omega$, and it is therefore denoted as ${_{\bar{s}}A}_{\ell m \omega}$. Although ${_{\bar{s}}A}_{\ell m \omega}$ is a function of $\omega$, the above equation is still a second-order ordinary differential equation in $r$. In other words, ${_{\bar{s}}A}_{\ell m \omega}$ can simply be viewed as a constant as long as one is solving the differential equation with respect to $r$. Once more, we will employ the Green function method, where the Green function in question satisfies
\bqn
\lb{master_eq_Kerr_Green_omega}
\Delta^{-{\bar{s}}}\frac{d}{dr}\left(\Delta^{{\bar{s}}+1}\frac{d}{dr}\right)G(\omega,r,r')+V({_{\bar{s}}A}_{\ell m \omega})G(\omega, r,r')=\delta(r-r') .
\eqn
Now, one is left to observe that the pole at $\omega=\omega_{n,\ell,m}$ of the Green function Eq.~\eqref{master_eq_Kerr_Green} is also a pole of the Green function Eq.~\eqref{master_eq_Kerr_Green_omega}. The reason is that the pole $\omega=\omega_{n,\ell,m}$ of the Green function Eq.~\eqref{master_eq_Kerr_Green} corresponds to one of the zeros of the related Wronskian. The latter is an algebraic (nonlinear) equation for $\omega$. Likewise, the poles of the Green function Eq.~\eqref{master_eq_Kerr_Green_omega} also correspond to the zeros of a second Wronskian. The latter is also an algebraic equation, except that the constant ${_{\bar{s}}A}_{\ell m \omega_n}$ is replaced by ${_{\bar{s}}A}_{\ell m \omega}$, a function of $\omega$. However, since
\bqn
\left.{_{\bar{s}}A}_{\ell m \omega}\right|_{\omega=\omega_{n,\ell,m}}={_{\bar{s}}A}_{\ell m \omega_n}, \nb
\eqn
$\omega_{n,\ell,m}$ must also be a zero of the second Wronskian. We therefore complete our proof that the quasinormal frequencies $\omega=\omega_{n,\ell,m}$ are also poles of the general problem with the external source. The above results will be verified in section V against explicit numerical calculations. Moreover, we note that additional poles, besides those originating from the zeros of the Wronskian, might also be introduced owing to the presence of an external source. One interesting example is that they may come from the ``quasi-singularity'' of the external source. This possibility will be explored in the next section.
\section{IV. Additional modes introduced by the external source}
In the above, we mentioned that when a sinusoidal external force is applied, the frequency of the steady-state oscillation is known to be identical to that of the driving force. Moreover, it is understood that resonance takes place when the magnitude of the driving frequency matches the natural frequency of the oscillator. At first glance, since the driving force's frequency is usually independent of the natural frequency of the oscillator, the above results seem to contradict our conclusion so far. In the previous sections, we have shown that if the external source is not singular, namely, characterized by a moderate frequency spectrum, the system's natural frequencies will not be affected. However, the results given in Eqs.~\eqref{toy_sol_source_res} and \eqref{Laplace_eq_formal_solution} will suffer potential modifications when the source term $\mathscr{S}$ contains a singularity. First of all, we argue that in the context of black hole physics, the sinusoidal driving force is not physically relevant, as it corresponds to some perpetual external energy source.
A physically meaningful scenario should be related to some dissipative process, such as when the external source is characterized by some resonance state. In particular, the resonance will be associated with a complex frequency, where the imaginary part of the frequency gives rise to the half-life of the resonance decay. Mathematically, the external source thus possesses a pole on the complex plane. The physical requirement of a dissipative nature indicates that, in the Laplace s-domain, the real part of $s=-i\omega$ is negative. In other words, the poles of the source term, if any, must be located to the left of the imaginary axis, and therefore they are inside the contour in Eq.~\eqref{int_poles}. In turn, according to the residue theorem, they will introduce additional quasinormal frequencies to the temporal oscillations. In the case of the toy model, if a given frequency governs the driving force, it corresponds to the case where a single frequency dominates $ \mathscr{S}(\omega, n)$, namely, $\mathscr{S}(\omega, n)\sim \delta(\omega - \omega_R)$\footnote{It is noted that the Dirac delta function has to be viewed as a limit of a sequence of complex analytic functions, such as the Poisson kernel, for the discussions carried out in terms of the contour integral to be valid.}. Regarding Eq.~\eqref{toy_sol_source_res}, this will affect the evaluation of the residues. To be specific, the driving force gives rise to a pole in the complex plane at $\omega = \omega_R - i\epsilon$, where the additional infinitesimal imaginary part $i\epsilon$ corresponds to a resonance state with infinite half-life. As a result, the long-term steady-state oscillations will be entirely dominated by the contribution from this pole. In other words, a {\it normal} mode will govern the system's late-time behavior, consistent with our initial observations. As discussed above, in the case of the black hole, one deals with some external resonance source, which corresponds to quasinormal modes. Since the external driving force is independent of the nature of the system, those quasinormal modes are not determined by the Green function Eq.~\eqref{master_eq_Green}. In other words, by definition, they are not governed by the black hole parameters, as the conventional quasinormal modes of the metric are. In the following section, we will show numerically that additional quasinormal frequencies can indeed be introduced by the external source. It is worth noting that the physical nature of the external source discussed in the present section is different from that of the initial pulse or initial condition. To be specific, in the literature~\cite{agr-qnm-12, agr-qnm-review-02, agr-qnm-review-03}, the quasinormal modes are defined regarding the perturbation equation in the time domain, Eq.~\eqref{master_eq_ns}. By considering the Laplace transform, the equation is rewritten such that the initial condition $\mathcal{I}(s, x)$ appears on the r.h.s. as a source term. As discussed above, this term might affect the amplitudes of the quasinormal oscillations but is irrelevant to the quasinormal frequencies. This is because the real physical content it carries is an initial pulse. It is evident that a harmonic oscillator's initial condition will never affect the oscillator's natural frequency. On the other hand, as in Eq.~\eqref{master_eq}, if one introduces a source term directly onto the r.h.s. of the master equation in the time domain, one might encounter a different scenario.
As discussed above, the physical content now resides in the well-known example that a driven harmonic oscillator will follow the external force's frequency when, for instance, a sinusoidal driving force is applied. Therefore, these are two {\it distinct} scenarios associated with the term external source, which, as discussed in the text, lead to different implications. The above conclusion can be confirmed mathematically. In fact, it can be readily shown that the term $\mathcal{I}(s, x)$ given by the Laplace transform cannot contain any singularity. From Eq.~\eqref{master_eq_sic}, its frequency dependence is linear in $s$, thus precluding any potential pole on the complex plane.
\section{V. Numerical results}
In this section, we demonstrate that the results obtained analytically in the previous sections agree with numerical calculations. To be specific, we first solve the inhomogeneous differential equations numerically. Subsequently, the evolution of the perturbations in the time domain is used to extract the dominant complex frequencies by utilizing the Prony method. These frequencies are then compared against the numerical results for the corresponding quasinormal modes, obtained by standard approaches. We first demonstrate the precision of our numerical scheme by studying the toy model presented in section II. Then we proceed to show the results for the Schwarzschild as well as the Kerr metrics. For the toy model, one considers the master Eq.~\eqref{toy_eq} numerically for $\tau=1, L=1, n=1$ and the source Eq.~\eqref{toy_source} where
\begin{eqnarray}
\mathscr{S}_{(1)}(\omega, n)=1 ,
\label{toy_source_num_01}
\end{eqnarray}
and
\begin{eqnarray}
\mathscr{S}_{(2)}(\omega, n)=\frac{1}{\omega^2+1} .
\label{toy_source_num_02}
\end{eqnarray}
\begin{figure}
\begin{tabular}{cc}
\vspace{0pt}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig1_omega1}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig1_omega2}}
\end{minipage}
\end{tabular}
\renewcommand{\figurename}{Fig.}
\caption{(Color online) The calculated time series of the toy model for the two different types of sources given in Eqs.~\eqref{toy_source_num_01} and~\eqref{toy_source_num_02}, shown in the left and right plots, respectively. The calculations are carried out to generate a total of 40 points in the time series.}
\label{evolution_toy}
\end{figure}
Our first goal is to find the temporal dependence of the solution for the two arbitrary sources chosen above. This is accomplished by first solving Eq.~\eqref{toy_eq} in frequency space and then carrying out an inverse Fourier transform at an arbitrarily given position $x$ for various time instants $t$. Although part of the above procedure can be carried out analytically, we have chosen to adopt the numerical approach, since later on, for more complicated scenarios, we will eventually resort to brute numerical force. The resultant time series are shown in Fig.~\ref{evolution_toy}. It is observed that the temporal evolution indeed follows the pattern of quasinormal oscillations. In order to extract the quasinormal frequencies, the Prony method~\cite{agr-qnm-16} is employed. The method is a powerful tool in data analysis and signal processing. It can be used to extract the complex frequencies from a regularly spaced time series. The method is implemented by turning a non-linear minimization problem into one of linear least squares in matrix form.
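For concreteness, the least-squares step can be sketched in a few lines (Python with numpy; the routine below and its interface are ours and merely illustrate the idea, not the actual implementation used for the results quoted in this section). The time series is modeled as $\Psi(t_j)\simeq\sum_k c_k e^{-i\omega_k t_j}$ on a uniform grid $t_j = j\,\Delta t$:
\begin{verbatim}
import numpy as np

def prony(signal, dt, p):
    """Least-squares Prony fit of signal[j] ~ sum_k c_k exp(-1j*omega_k*j*dt).
    Returns (omega, c), both sorted by descending weight |c|."""
    N = len(signal)
    # linear prediction: signal[j] = sum_{m=1..p} a_m signal[j-m], j = p..N-1
    A = np.array([signal[j - p:j][::-1] for j in range(p, N)])
    a, *_ = np.linalg.lstsq(A, signal[p:], rcond=None)
    # roots z_k of z^p - a_1 z^{p-1} - ... - a_p yield omega_k = 1j*ln(z_k)/dt
    z = np.roots(np.concatenate(([1.0], -a)))
    omega = 1j * np.log(z) / dt
    # amplitudes follow from the linear Vandermonde system signal[j] = sum_k c_k z_k^j
    V = np.vander(z, N, increasing=True).T
    c, *_ = np.linalg.lstsq(V, signal, rcond=None)
    idx = np.argsort(-np.abs(c))
    return omega[idx], c[idx]
\end{verbatim}
Applied to the 40-point series of Fig.~\ref{evolution_toy}, such a routine should recover the dominant pair $\omega^\pm \approx \pm 2.978 - i$ quoted below.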
As shown below, in practice, even a small dataset of 40 points is often sufficient to extract precise results. In the following, we choose the modified least-squares Prony method~\cite{agr-qnm-16} over others, as the impact of noise is not significant in our study. For Eq.~\eqref{toy_source_num_01}, the two most dominant quasinormal frequencies are found to be $\omega_{(1)}^{\pm}=-0.999i-2.982, -0.999i+2.967$. For Eq.~\eqref{toy_source_num_02}, one also obtains two dominant complex frequencies $\omega_{(2)}^{\pm}=-0.999999i-2.978190, -0.999998i+2.978189$. The numerical results, together with their respective weights, are shown in Tab.~\ref{PronyList}. When compared with the analytic values $\omega^{\pm}=-i\pm \sqrt{\pi^2-1}\sim -i\pm 2.978188$, one finds that the desired precision has been achieved. Next, one proceeds to the case of the Schwarzschild black hole. Here, we consider a massless scalar perturbation with the following source term
\begin{eqnarray}
\mathscr{S}_{(3)}(\omega, x)=\frac{1}{1+\omega^2} \frac{1}{rf^2(r)} V(r)e^{i\omega r} ,
\label{Schwarzschild_source_num_03}
\end{eqnarray}
where we take $\bar{s}=0, r_h=2M=1, \ell=1, L=\infty$, $V$ and $f$ are given by Eqs.~\eqref{V_master}-\eqref{f_master}, and the tortoise coordinate is $x=\int dr/f$. It is noted that the factor $e^{i\omega r}V(r)/f^2(r)$ is introduced to guarantee that the source satisfies the appropriate boundary conditions. The remaining factor $\frac{1}{1+\omega^2}\frac{1}{r}$ can largely be chosen arbitrarily.
\begin{figure}
\begin{tabular}{cc}
\vspace{0pt}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig2_omegax}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig2_temporal}}
\end{minipage}
\end{tabular}
\renewcommand{\figurename}{Fig.}
\caption{(Color online) Results on massless scalar perturbations in the Schwarzschild black hole metric with an external source. Left: The calculated imaginary part of the numerical solution of the master equation in the frequency domain, shown as a 2D function of $\omega$ and $x$. Right: The calculated time series of the massless scalar perturbations. The calculations are carried out to generate a total of 50 points in the time series.}
\label{evolution_Schwarzschild}
\end{figure}
To find the temporal evolution, we again solve the frequency-domain counterpart of the master equation Eq.~\eqref{master_eq} by employing an adapted matrix method~\cite{agr-qnm-lq-matrix-01,agr-qnm-lq-matrix-02}. To be specific, the radial coordinate is transformed into a finite interval $x\in [0,1]$ by $r\to \frac{2M}{1-x}$, which is subsequently discretized into 22 spatial grid points. For simplicity, we consider $\alpha=1, \ell=1$. By expressing the function and its derivatives in terms of the function values on the grid points, the differential equation is transformed into a system of linear equations represented by a matrix equation. The solution of the equation is then obtained by inverting the matrix, as shown in the left plot of Fig.~\ref{evolution_Schwarzschild}. Subsequently, the inverse Fourier transform is carried out numerically at a given spatial grid point $x=\frac{5}{21}$, as presented in the right plot of Fig.~\ref{evolution_Schwarzschild}. As an approximation, the numerical integration is only carried out for the range $\omega\in [-20,20]$, where a necessary precision check has been performed. By employing the Prony method, one can readily extract the most dominant quasinormal frequency.
The resultant value is $\omega_{(3)}=-0.5847 - 0.1954 i$, consistent with $\omega_{n=0,\ell=1}=-0.5858 - 0.1953 i$ obtained by the matrix method~\cite{agr-qnm-lq-matrix-02}. Now, we are ready to explore the master equation Eq.~\eqref{master_eq_Kerr_source} for the Kerr metric with the following form for the source term
\begin{eqnarray}
\mathscr{S}_{(4)}(\omega, r)=\frac{1}{1+\omega^2}\frac{r(r-r_+)}{\Delta} e^{i\omega r} ,
\label{Kerr_source_num_04}
\end{eqnarray}
where $r_+=M+\sqrt{M^2-a^2}$ is the radius of the event horizon. Here, the form $\frac{r(r-r_+)}{\Delta} e^{i\omega r}$ guarantees that the external source vanishes at the spatial boundary as $a\to 0$, so that the asymptotic behavior of the wave function remains unchanged. Also, the factor $\frac{1}{1+\omega^2}$ is again introduced, based on the observation that its presence in Eq.~\eqref{toy_source_num_02} has led to better numerical precision. The latter is probably because the resultant numerical integration for the inverse Fourier transform converges faster. This choice turns out to be particularly helpful in the present scenario, where the numerical precision becomes an impeding factor. In the following calculations, we choose $M=0.5, a=0.3$, and $\ell=2$.
\begin{figure}
\begin{tabular}{cc}
\vspace{0pt}
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig3_omega}}
\end{minipage}
&
\begin{minipage}{225pt}
\centerline{\includegraphics[width=200pt]{fig3_temporal}}
\end{minipage}
\end{tabular}
\renewcommand{\figurename}{Fig.}
\caption{(Color online) Results on massless scalar perturbations in the Kerr black hole metric with an external source. Left: The real and imaginary parts of the master equation's numerical solution in the frequency domain, evaluated at $x=\frac{4}{21}$. Right: The calculated time series of the massless scalar perturbations. The calculations are carried out to generate a total of 40 points in the time series.}
\label{evolution_Kerr}
\end{figure}
Based on the matrix method, the entire ranges of the spatial and polar coordinates $r$ and $\theta$ are each divided into 22 grid points. Subsequently, the radial as well as the angular parts of the master equation are discretized into two matrix equations~\cite{agr-qnm-lq-matrix-03}. We first solve the angular part of the master equation Eq.~\eqref{master_eq_Kerr2} for a given $\omega$ to obtain ${_{\bar{s}}A}_{\ell m \omega}$. This can be achieved with relatively high precision, namely, with a {\it WorkingPrecision} of $100$ in {\it Mathematica}. The obtained ${_{\bar{s}}A}_{\ell m \omega}$ is then substituted back into Eq.~\eqref{master_eq_Kerr_source} to solve for the wave function in the frequency domain. To improve efficiency, we only carry out the calculation at a given spatial point, $x=\frac{4}{21}$, without loss of generality. The resultant wave function is shown in the left plot of Fig.~\ref{evolution_Kerr}. To proceed, we evaluate the wave function at $600$ discrete points between $-30 <\omega < 30$ and then use those values to approximate the numerical integration in the frequency domain. The resultant time series with 40 points is shown in the right plot of Fig.~\ref{evolution_Kerr}. By using the Prony method, the most dominant quasinormal frequency is found to be $\omega_{(4)}=-0.9981 - 0.1831 i$, in good agreement with the value $\omega_{n=0,\ell=2}=0.9918 - 0.1869 i$ obtained by the 21st-order matrix method~\cite{agr-qnm-lq-matrix-03}.
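The truncated inverse transform used above to generate the time series amounts to a simple Riemann sum; a minimal numpy sketch is given below (the routine and grids are illustrative, assuming the $e^{-i\omega t}$ convention used throughout and that the frequency-domain solution has already been tabulated on the grid; the chosen time window is hypothetical).
\begin{verbatim}
import numpy as np

def time_series(omega, fhat, times):
    """Approximate Psi(t) = (1/(2*pi)) Int dw exp(-1j*w*t) fhat(w)
    by a Riemann sum over the truncated frequency window."""
    dw = omega[1] - omega[0]
    phases = np.exp(-1j * np.outer(times, omega))  # shape (n_times, n_freqs)
    return (phases @ fhat) * dw / (2 * np.pi)

# e.g. 600 frequency points on (-30, 30) and 40 time samples, as in the text;
# the resulting series can be fed directly to the Prony routine sketched above
omega = np.linspace(-30, 30, 600)
times = np.linspace(0.5, 20, 40)  # illustrative time window
\end{verbatim}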
\begin{table}[htb] \begin{center} \scalebox{1.00}{\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{source} & \multicolumn{2}{c|}{$1$st} & \multicolumn{2}{c|}{$2$nd} \\ \cline{2-5} & \multicolumn{1}{c|}{~$\omega^-$~} & \multicolumn{1}{c|}{weight} & \multicolumn{1}{c|}{~$\omega^+$~} & \multicolumn{1}{c|}{weight} \\ \hline $\mathscr{S}_{(1)}$& $ -0.999878i -2.982507$ & $7.5\times 10^{-01}$ & $-0.998883i +2.966696$ & $7.5\times 10^{-01}$ \\ \hline $\mathscr{S}_{(2)}$ & $-0.999999i-2.978190$ & $7.0\times 10^{-02}$ & $-0.999998i+2.978189$ & $7.0\times 10^{-02}$ \\ \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{$3$rd} & \multicolumn{2}{c|}{$4$th} \\ \cline{2-5} & \multicolumn{1}{c|}{~$\omega$~} & \multicolumn{1}{c|}{weight} & \multicolumn{1}{c|}{~$\omega$~} & \multicolumn{1}{c|}{weight} \\ \hline $\mathscr{S}_{(1)} $& $-2.236222i -7.901108$ & $4.7\times 10^{-03}$ & $-2.229535 i-11.873706$ & $2.2\times 10^{-03}$ \\ \hline $\mathscr{S}_{(2)}$ & $-0.999999i -6.035914\times 10^{-08}$ & $2.5\times 10^{-01} $ & $0.016891i -9.887078$ & $2.1\times 10^{-07}$ \\ \hline \end{tabular}} \end{center} \caption{The calculated quasinormal frequencies obtained by using the Prony method for the source terms Eqs.~\eqref{toy_source_num_01} and \eqref{toy_source_num_02}. The numerical code has been implemented to extract five modes, while the four most dominant ones, as well as their respective weights, are listed.}\label{PronyList} \end{table} Last but not least, we investigate whether the poles in the external source will also manifest themselves in the resultant time series. This can be demonstrated by revisiting the toy model. In particular, it is evident that the external source Eq.~\eqref{toy_source_num_02} contains two poles in the complex plane; for $t>0$ the relevant pole is $\omega^e=-i$. Therefore, if everything checks out, the additional frequency $\omega^e$ must also be captured by the Prony method. Taking a close look at the results listed in Tab.~\ref{PronyList} reveals that this is indeed the case. For the source term $\mathscr{S}_{(1)} $, the first two modes overwhelm the others by two orders of magnitude. On the other hand, concerning $\mathscr{S}_{(2)} $, not only does the factor $\frac{1}{1+\omega^2}$ help to improve the precision of the numerical integration, but a third dominant mode also appears, which reads $\omega^e_{(2)}=-0.999999i -6.035914\times 10^{-08}$. This readily confirms that the poles in the driving force are relevant and present themselves as additional quasinormal modes in the resultant time series. One can proceed to show explicitly that this is also the case in the context of black hole configurations. However, on the numerical side, this is somewhat subtle. We note that, by comparing Eq.~\eqref{toy_source_num_02} against Eq.~\eqref{Schwarzschild_source_num_03}, it is evident that the latter also contains the pole at $\omega^e$. Unfortunately, the present numerical scheme is not robust enough to pick out this singularity. In order to accomplish our goal, one might deliberately bring the singularities to the region where their detection becomes feasible while the frequency domain integral still converges reasonably fast. This can be achieved by replacing the source term in Eq.~\eqref{Schwarzschild_source_num_03} by an appropriately chosen form \begin{eqnarray} \mathscr{S}_{(3)}(\omega, x)=\frac{1}{(\omega+\frac13 i+1)(\omega-\frac13 i+1)} \frac{1}{rf^2(r)} V(r)e^{i\omega r} .
\label{Schwarzschild_source_num_04} \end{eqnarray} It gives rise to an additional pair of singularities, out of which $\omega^{e-}=-\frac13 i- 1$ is relevant to the contour in question. By carrying out an identical procedure, we manage to extract the latter using the present algorithm. The first two dominant modes extracted by the Prony method are found to be $\omega_{(5)}=-0.5824 - 0.1896 i$ and $\omega_{(6)}=-0.9952 - 0.3326 i$. In other words, both the fundamental quasinormal mode and the singularity in the source term are identified successfully. We look forward to improving the algorithm further so that its application to more sophisticated scenarios becomes viable. \section{VI. Further discussions and concluding remarks} To summarize, in this work, we study the properties of external sources in black hole perturbations. We show that even in the presence of a source term in the time domain, the quasinormal frequencies may largely remain unchanged. In this case, the physical content of the external source is an initial pulse. The statement is valid for various types of perturbations in both static and stationary metrics, although, for rotating black holes, the arguments involve additional subtlety. We also discuss the physically relevant scenario where the external source acts as a driving force and introduces additional modes. The findings are then tested against numerical calculations for several particular scenarios. It is noted that in our discussions, the effects of the branch cut on the negative real axis have not been considered. The discontinuity across the branch cut arises from that of the solution of the homogeneous radial equation which satisfies the boundary condition at infinity. As a result, its effects remain unchanged as the external source is introduced. Moreover, as the branch cut stretches from the origin, it is primarily associated with the late-time behavior of the perturbations. Therefore, it is largely not relevant to the quasinormal frequencies in the context of the present study. The numerical calculations carried out in the present paper only involve rather straightforward scenarios such as the Schwarzschild metric. Since our results are expected to be valid in a more general context, as mentioned above, it is physically meaningful to explore further the possible implications in more sophisticated cases. These include the perturbations in modified gravity theories, such as the scalar-tensor theories. One relevant feature of the theory is that the scalar perturbations are entirely decoupled from those of the Einstein tensor. In some recent studies, the metric perturbations in the DHOST theory are found to possess a source term~\cite{agr-modified-gravity-dhost-07, agr-modified-gravity-dhost-08}. Besides, the master equation for scalar perturbations is shown to be a first-order differential equation decoupled from the Einstein tensor perturbations. Subsequently, for such specific cases, one may obtain the general solution (see, for example, Eq.~(26) of Ref.~\cite{agr-modified-gravity-dhost-08}), which does not contain any pole in the frequency domain. In other words, the discussions in section III.B can be readily applied to these cases. In this regard, we have demonstrated that while the magnitude of the perturbation wave function is tailored by the source and initial condition, the quasinormal frequencies might stay the same.
Therefore, the findings of the present work seem to indicate a subtlety in extracting information on the stealth scalar hair in the DHOST theory via quasinormal modes. In our view, it would be interesting to explore the details further, also for other modified theories of gravity. Further studies along this direction are in progress. \section*{Acknowledgments} WLQ is thankful for the hospitality of Chongqing University of Posts and Telecommunications. We gratefully acknowledge the financial support from Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES), and National Natural Science Foundation of China (NNSFC) under contract Nos. 11805166, 11775036, and 11675139. A part of the work was developed under the project INCTFNA Proc. No. 464898/2014-5. This research is also supported by the Center for Scientific Computing (NCC/GridUNESP) of the S\~ao Paulo State University (UNESP). \bibliographystyle{h-physrev}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} The machine learning (ML) community has seen a recent rush of work aiming to understand when and why ML systems have different statistical behavior for different demographic groups, and how to best equalize various metrics across groups while maintaining high performance overall \citep[e.g.,][]{hardt2016equality,berk2017,zafar2019JMLR}. Two main factors in ensuring this goal are: (a) comprehensive and unbiased data collection, and (b) optimizing the right loss function under appropriate constraints. However, much of the work concerning algorithmic fairness in the ML community has focused on (b), i.e., algorithms for optimizing various loss functions that encode fairness considerations (see Section~\ref{section_related_work} for related work). In reality, the success of these methods crucially depends on how and what type of data has been collected. Furthermore, the challenges of collecting data from under-represented groups are well documented \citep{holstein2019improving} and poor data collection can make the outcome of the algorithmic stage meaningless. In this work we posit that in the context of algorithmic fairness, data collection and loss function optimization should go hand in hand. Furthermore, in many real-world settings such joint optimization occurs naturally. For instance in online advertising, microlending, and numerous settings where not-for-profit and governmental agencies have begun to experiment with ML techniques, learning algorithms continue to collect data and refine the predictions they make over time. Hence it is important to understand how to ensure algorithmic fairness in settings where the current performance of the classifier dictates further data collection. The above discussion naturally motivates the study of algorithms for adaptively sampling data in order to address disparate performance across subpopulations. The motivation for studying such algorithms is twofold. First, adaptively selecting samples can often improve the total labeling cost required. Second, many practical deployment scenarios of ML systems have the ability to collect additional data, at some cost, to refine the predictions they make over time. In this work we propose a simple and easy to implement algorithm for adaptively sampling to simultaneously avoid disparate performance across demographic groups and achieve high overall performance. Our algorithm proceeds in rounds: in each round, the algorithm selects a model, then either samples the next data point uniformly from the entire population, or from the population that is currently disadvantaged according to a specified metric. The choice between the two sampling options is governed by a sampling probability $p \in [0,1]$, where $p=1$ corresponds to always sampling from the entire population and thus focusing on optimizing overall performance, while $p=0$ corresponds to sampling from the disadvantaged population which aims to equalize performance amongst the groups. The sample is then added to the training set and the process begins again. We provide a theoretical analysis of our proposed algorithm and provide extensive empirical evidence regarding its effectiveness, both on real and simulated data. In particular, \begin{itemize} \item To analyze our algorithm theoretically, we consider an idealized model consisting of points in a one-dimensional space with the goal of equalizing the error rate across two groups while maintaining high overall classification accuracy. 
For this setting, we precisely characterize the convergence of our algorithm as a function of the sampling probability $p$. \item We compare the performance of our proposed adaptive sampling algorithm with existing methods that work in the {\em batch setting} \citep{agarwal2018, hardt2016equality}. We demonstrate that our algorithm achieves comparable or superior performance with less data than existing methods. \item Finally, we conduct a case study by applying our strategy to a real-life sequential decision process that occurred in the wake of the Flint water crisis \citep{flint_paper_1}. The task in question involved the selection, over time, of homes to receive water pipe inspections, with the goal of finding and removing lead pipes. This particular scenario presents an ideal application of our methodology, as Flint's challenges arose in large part from distributional and equity issues, the home selection process was made in an adaptive fashion, and individual inspections and replacements cost in the hundreds to thousands of dollars. We chose our group categorization based on a home's ward number, a reasonable proxy for other demographic indicators such as race or income, and we evaluate the performance of our adaptive sampling algorithm relative to a uniform sampling benchmark as well as to the strategy that was actually used in practice. We show that our algorithm leads to a significant decrease in disparate performance across groups when compared to what was actually implemented in Flint, and does so even as the uniform random sampling strategy does not. \end{itemize} \section{Adaptive sampling with the goal of balanced performance}\label{introduce_strategy} In this section, we introduce our formal model and propose our strategy for training an accurate classifier with approximately equal performance on different demographic groups. We assume the data $(x,y,a)\in \mathcal{X}\times \{-1,1\}\times \mathcal{A}$ comes from a joint probability distribution $\Pr$. The variable $x$ represents the features of the data point $(x,y,a)$, $y$ the data point's ground-truth label and $a$ its demographic information encoded as group membership. We assume that $\mathcal{A}$ is finite and refer to all data points that share the same attribute~$a$ as belonging to group~$G_a$. For concreteness, let $\mathcal{A}=\{0,\ldots,a_{max}\}$ such that $|\mathcal{A}|=a_{max}+1$. We assume that $\mathcal{X}$ is some suitable feature space. Our goal is to learn a classifier~$h:\mathcal{X}\rightarrow \mathbb{R}$ from some hypothesis class $\mathcal{H}$\footnote{We could also consider classifiers $h':\mathcal{X}\times \mathcal{A}\rightarrow \mathbb{R}$, but legal restrictions sometimes prohibit the use of the demographic attribute for making a prediction \citep{lipton2018} or the attribute might not even be available at test time.}, with high accuracy measured according to a loss function $l:\mathbb{R}\times\{-1,1\}\rightarrow \mathbb{R}_{\geq 0}$. That is, we want the expected loss $\mathbb{E}_{(x,y)\sim \Pr}l(h(x),y)$ to be small. Simultaneously, we wish for $h$ to have approximately equal performance for different demographic groups, where the performance on a group~$G_a$ is measured by $\mathbb{E}_{(x,y)\sim \Pr|_{G_a}}f(h(x),y) =: f_{|a}(h)$, or could also be measured by $\mathbb{E}_{(x,y)\sim \Pr|_{G_a\wedge y=1}}f(h(x),y)$, with $f:\mathbb{R}\times\{-1,1\}\rightarrow \mathbb{R}_{\geq 0}$ being another loss function. The two loss functions $l$ and $f$ could be the same or different.
Throughout this paper, we generally consider $l$ as a relaxation of $01$-loss and $f$ as either equal to $l$ or measuring $01$-loss. Different $f$ correspond to several well-studied notions from the literature on fair ML: \begin{itemize} \setlength{\itemsep}{.1in} \item If our goal is to equalize $\mathbb{E}_{(x,y)\sim \Pr|_{G_a}}\mathds{1}\{\sign h(x)\neq y\}=\Pr|_{G_a}[\sign h(x)\neq y]$, $a\in \mathcal{A}$, we are aiming to satisfy the fairness notion of \emph{overall accuracy equality} \citep[e.g.,][]{berk2017,zafar2017www}. \item If our goal is to equalize $\mathbb{E}_{(x,y)\sim \Pr|_{G_a\wedge y=1}}\mathds{1}\{\sign h(x)\neq y\}=\Pr|_{G_a}[\sign h(x)\neq y|y=1]$, $a\in \mathcal{A}$, we are aiming to satisfy the fairness notion of \emph{equal opportunity} \citep[e.g.,][]{hardt2016equality,corbett-davies2017,zafar2017www}. \end{itemize} Hence, we sometimes refer to a classifier with equal values of $f$ on different groups as a \emph{fair} classifier. Importantly, our strategy builds on the idea that, if we pick a classifier~$h \in \argmin_{h'\in \mathcal{H}} \mathbb{E}_{(x,y)\sim S}l(h'(x),y)$, then the performance of $h$ on group~$G_a$, as measured by $\mathbb{E}_{(x,y)\sim \Pr|_{G_a}}f(h(x),y)$, will improve as the number of points in the training data set from $G_a$ grows. Our experiments (cf. Section~\ref{section_experiments}) and some theoretical analysis (cf. Section~\ref{section_analysis_1d}) suggest that, when $f$ is the $01$-loss, our process approximately equalizes $f$ across groups while minimizing $l$. This is not the case for all choices of functions $f$ and $l$, however: for example, if we choose $f(h(x),y)=\mathds{1}\{h(x)=1\}$ and $l$ a relaxation of $01$-loss, which would correspond to aiming for the fairness notion of \emph{statistical parity} \citep[e.g.,][]{fta2012,zliobaite2015,zafar2017}, then equalizing $f$ across groups is likely to be in conflict with minimizing~$l$~overall. \subsection{Our strategy} Our key idea to learn a classifier with balanced performance on the different demographic groups is the following: train a classifier $h_0$ to minimize loss on $S$, evaluate which group $G_a$ has the higher $f$-value with respect to $h_0$, sample $(x,y,a)$ from group $G_a$ and repeat with $S \cup \{(x,y,a)\}$. Furthermore, we can incorporate a simple way to trade-off the two goals of (i) finding a classifier that minimizes loss in a group-agnostic sense, and (ii) finding a classifier that has balanced performance on the different groups, which often---but not always---are at odds with each other (cf. \citealp{wick2019}, and the references therein): instead of always sampling the new data point from the currently disadvantaged group, in each round we could throw a biased coin and with probability~$p$ sample a data point from the whole population and with probability $1-p$ sample from only the disadvantaged group. The larger the value of $p$ the more we care about accuracy, and the smaller the value of $p$ the more we focus on the fairness of the classifier. The generic outline of our proposed strategy is summarized~by~Algorithm~\ref{basic_outline}. \begin{algorithm}[t!] \caption{Our strategy for learning an accurate and fair classifier} \label{basic_outline} \begin{algorithmic}[1] \vspace{1mm} \STATE {\bfseries Input:} parameter $p\in[0,1]$ governing the trade-off between
accuracy and fairness; number of rounds~$T$ \vspace{1mm} \STATE {\bfseries Output:} a classifier $h\in \mathcal{H}$ \vspace{3mm} \STATE Start with some initial classifier $h_0$ (e.g., trained on an initial training set~$S_0$ or chosen at random) \vspace{1mm} \FOR{$t=1$ \TO $t=T$} { \vspace{1mm} \STATE Let $G_a$ be the group for which $f_{|a}(h_{t-1})$ is largest, evaluated on a validation set. \vspace{1mm} \STATE With probability $p$ sample $(x,y,a)\sim \Pr$ and with probability $1-p$ sample $(x,y,a)\sim\Pr|_{G_a}$ \vspace{1mm} \STATE Set $S_t = S_{t-1} \cup \{(x,y,a)\}$; update $h_{t-1}$ to obtain the classifier~$h_t$ (either train $h_t$ on $S_t$ or perform an SGD update with respect to $(x,y,a)$) \vspace{1mm} } \ENDFOR \vspace{1mm} \RETURN $h_T$ \end{algorithmic} \end{algorithm} With a finite sample, we must estimate the loss as well as which group has the higher $f$-value for $h_t$. That can either be done by splitting the initial training set into train and validation sets, or by using the entire sample to estimate these values in each round. In this paper, we mainly focus on the approach which uses the train/validation split since it is conceptually simpler, but we also provide some analysis for the case that we use the entire training set to estimate the relevant quantities (cf. Section~\ref{section_finite_sample_analysis}). Several other variants of this meta-algorithm are possible: one could sample with or without replacement from a training pool (cf. Section~\ref{section_experiments}); sample according to a strategy from the active learning literature such as uncertainty sampling \citep{settles_survey}; sample several points each round; or weight the samples nonuniformly for optimization. We focus on the simplest of these variants to initiate the study of adaptive sampling to mitigate disparate performance, and leave these variants as directions for future~research. \section{Analysis in an idealized 1-dimensional setting}\label{section_analysis_1d} In this section, we analyze our strategy in an idealized 1-dimensional setting. Assume that features $x$ are drawn from a mixture of two distributions $\Psymb_{G_0}$ and $\Psymb_{G_1}$ on $\mathbb{R}$, corresponding to demographic groups $G_0$ and $G_1$. Assume that for each group the label~$y$ is defined by a threshold on $x$: namely $y=\sign(x-t_0)$ if $x \sim \Psymb_{G_0}$ and $y=\sign(x-t_1)$ if $x\sim \Psymb_{G_1}$, respectively. We consider performing loss minimization with respect to a margin-based loss function over the class of threshold classifiers of the form $\hat{y}=\sign(x-c)$, $c\in\mathbb{R}$. Fix some $\lambda_0\in[0,1]$ and $n_0\in\mathbb{N}$ (the initial fraction of $G_0$ and the initial sample size, respectively). Assume we have computed $c(\lambda_0)$, the threshold minimizing the true loss for the distribution $\Pr_{\lambda_0}$, the weighted mixture of $\Psymb_{G_0}$ and $\Psymb_{G_1}$ with mixture weights $\lambda_0$ and $1-\lambda_0$. We use $01$-loss to measure which group is disadvantaged with respect to each classifier. Depending on whether the true $01$-error of $c(\lambda_0)$ is greater for $G_0$ or $G_1$, we set $\lambda_1=\frac{\lambda_0 n_0+1}{n_0+1}$ or $\lambda_1=\frac{\lambda_0 n_0}{n_0+1}$ (the reweighting analog of adding a data point from $G_0$ or $G_1$ to the training set). We then obtain the minimizer of the true loss for the distribution $\Pr_{\lambda_1}$ and continue the process.
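To make this idealized process concrete, the following minimal Python sketch iterates the reweighting dynamics for one illustrative instance of our own choosing (uniform group distributions and hinge loss, with a bounded numerical search standing in for exact loss minimization); all names and parameter values below are ours. For $p=0$, the learned threshold drifts towards the value that equalizes the group-wise $01$-errors.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative instance: x|G_j ~ Uniform[a_j, b_j] with labels y = sign(x - t_j);
# hinge loss for training, 01-loss to determine the disadvantaged group.
a0, b0, t0 = -4.0, 6.0, 0.0
a1, b1, t1 = -3.0, 8.0, 2.0

def hinge_risk(c, lam, grid=4001):
    # lam-weighted population hinge risk of the threshold classifier sign(x - c)
    risk = 0.0
    for w, (a, b, t) in ((lam, (a0, b0, t0)), (1 - lam, (a1, b1, t1))):
        x = np.linspace(a, b, grid)
        y = np.where(x >= t, 1.0, -1.0)
        risk += w * np.mean(np.maximum(0.0, 1.0 - y * (x - c)))
    return risk

def err01(c, a, b, t):
    # exact 01-error of sign(x - c) under Uniform[a, b] with labels sign(x - t)
    return abs(np.clip(c, a, b) - np.clip(t, a, b)) / (b - a)

rng = np.random.default_rng(0)
lam, n0, p = 0.5, 20, 0.0   # lambda_0 = lambda* = 1/2, |S_0| = 20, pure fairness
for i in range(3000):
    c = minimize_scalar(hinge_risk, bounds=(t0, t1), args=(lam,),
                        method="bounded").x
    g0_disadvantaged = err01(c, a0, b0, t0) >= err01(c, a1, b1, t1)
    if rng.random() < p:             # sample from the whole population,
        add_g0 = rng.random() < 0.5  # where P[G_0] = lambda* = 1/2 ...
    else:                            # ... or from the disadvantaged group
        add_g0 = g0_disadvantaged
    lam = (lam * (n0 + i) + add_g0) / (n0 + i + 1)
print(c, err01(c, a0, b0, t0) - err01(c, a1, b1, t1))  # bias shrinks towards 0
\end{verbatim}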
We prove that the threshold obtained in the $i$-th round of our strategy converges (as $i\rightarrow \infty$) to the most fair threshold, which has the same error for $G_0$ and $G_1$. If we mix the strategy of sampling from the disadvantaged group with uniform sampling with probability $p$, then the $i$-th round threshold converges to a threshold $\bar{c}(p)$ in expectation. The threshold $\bar{c}(p)$ continuously depends on $p$, with $\bar{c}(0)$ the most fair threshold and $\bar{c}(1)$ the threshold that minimizes the loss. In order to formally prove these claims, we make the following assumptions: \vspace{2mm} \begin{assumptions}[Data generating model and technical assumptions]\label{assumptions_1d} ~ \vspace{-6mm} \begin{enumerate}[leftmargin=*] \item The data $(x,y,a)\in\mathbb{R}\times\{-1,1\}\times \{0,1\}$ comes from a distribution $\Pr^{\star}$ such that: \begin{enumerate} \item $\Pr^{\star}[a=0]=\lambda^{\star}$ and $\Pr^{\star}[a=1]=1-\lambda^{\star}$ for some $\lambda^{\star}\in(0,1)$. \item For $j\in\{0,1\}$, if $(x,y,a)$ belongs to $G_j$, then $x$ is distributed according to an absolutely continuous distribution with density function~$f_j$ and $y$ is a deterministic function of $x$ given by $y=\sign(x-t_j)$. We assume that $t_0<t_1$. \item For $j\in\{0,1\}$, there exists a compact interval $I_j$ such that $f_j(x)=0$, $x\notin I_j$, and $f_j|_{I_j}$ is continuous. Furthermore, there exist $l,u\in I_0\cap I_1$ and $\delta>0$ with $l<t_0<t_1<u$ and $f_0(x),f_1(x)\geq\delta$ for all $x\in[l,u]$. \end{enumerate} \item We perform loss minimization with respect to a strictly convex margin-based loss function $l:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$. It follows (see Appendix~\ref{appendix_proofs}) that the two functions, both defined on all of $\mathbb{R}$, \begin{align}\label{def_pop_risks} c\mapsto \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_0}}l(y\cdot(x-c)), \qquad c\mapsto \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_1}}l(y\cdot(x-c)) \end{align} are strictly convex. We assume that they attain a global minimum at $t_0$ and $t_1$, respectively. \end{enumerate} \end{assumptions} For $\lambda\in[0,1]$, we define a distribution $\Pr_{\lambda}$ over $(x,y,a)\in \mathbb{R}\times\{-1,1\}\times \{0,1\}$ by $\Pr_{\lambda}[a=0]=\lambda$, $\Pr_{\lambda}[a=1]=1-\lambda$, $\Pr_{\lambda}|_{G_0}=\Pr^{\star}|_{G_0}$ and $\Pr_{\lambda}|_{G_1}=\Pr^{\star}|_{G_1}$. Note that $\Pr_{\lambda^{\star}}=\Pr^{\star}$. Under Assumptions~\ref{assumptions_1d} we can prove the following proposition. The proof can be found in Appendix~\ref{appendix_proofs}. \vspace{2mm} \begin{proposition}\label{proposition_1d} Under Assumptions~\ref{assumptions_1d} the following claims are true: \begin{enumerate} \item Consider the function $Bias:\mathbb{R}\rightarrow [-1,+1]$ with \begin{align*} Bias(c)={\Pr}^{\star}|_{G_0}[\sign(x-c)\neq y]-{\Pr}^{\star}|_{G_1}[\sign(x- c)\neq y]. \end{align*} This function is continuous, with $Bias(t_0)<0$, $Bias(t_1)>0$, and $Bias|_{[t_0,t_1]}$ strictly increasing. So, there exists a unique $c_{fair} \in (t_0,t_1)$ with $Bias(c_{fair})=0$ for $\hat{y}=\sign(x - c_{fair})$. \item For every $\lambda \in[0,1]$, there exists a unique $c(\lambda)\in\mathbb{R}$ that minimizes \begin{align}\label{risk_general} \mathbb{E}_{(x,y)\sim\Pr_\lambda}l(y\cdot(x-c)). \end{align} We have $c(\lambda)\in[t_0,t_1]$. \item The function $c:[0,1]\rightarrow [t_0,t_1]$, $c:\lambda\mapsto c(\lambda)$, is continuous, decreasing, $c(0)=t_1$ and $c(1)=t_0$.
So, there exist $\lambda_{fair}^L,\lambda_{fair}^U$ with $0<\lambda_{fair}^L\leq\lambda_{fair}^U<1$ and \begin{align*} c(\lambda)>c_{fair},~\,\lambda<\lambda_{fair}^L,\quad~~ c(\lambda)=c_{fair},~\,\lambda_{fair}^L\leq \lambda \leq \lambda_{fair}^U,\quad~~ c(\lambda)<c_{fair},~\,\lambda>\lambda_{fair}^U. \end{align*} \end{enumerate} \end{proposition} Now we return to the process outlined at the beginning of this section. Assume that we start with $\lambda_0=\lambda^{\star}$ and that we can write $\lambda_0=\frac{|S_0\cap G_0|}{|S_0|}$. $S_i$ plays the role of the $i$th-round training set. In each round, given $\lambda_i$, we obtain $c_i=c(\lambda_i)$ (the minimizer of \eqref{risk_general}) and compute $\Dis(i+1)\in\{G_0,G_1\}$, the disadvantaged group at the beginning of round~$i+1$. We choose $\Dis(i+1)=G_1$ if and only if $Bias(c_i)<0$ (this is just one way of choosing the disadvantaged group in the case that $c_i$ is fair, i.e., $Bias(c_i)=0$). Next, with probability~$1-p$ we draw a data point from $\Dis(i+1)$ and add it to $S_i$, and with probability~$p$ we sample a data point from $\Pr^{\star}$ and add it to $S_i$, in order to form $S_{i+1}$. Then we update $\lambda_{i+1}=\frac{|S_{i+1}\cap G_0|}{|S_{i+1}|}$ accordingly, and continue the process. Then, in expectation, for all $i\geq 0$, \begin{align}\label{recurrence_lambda_with_parameter_p_N} \begin{split} \lambda_{i+1}=\frac{|S_{i+1} \cap G_0|}{|S_{i+1}|}&=\frac{|S_{i} \cap G_0|+(1-p)\cdot\mathds{1}\{\Dis({i+1})=G_0\}+p\cdot\mathds{1}\{\text{drawing from }G_0\} }{|S_0|+{i+1}}\\ &=\frac{|S_0|+i}{|S_0|+{i+1}}\lambda_{i}+\frac{(1-p)\cdot\mathds{1}\{c_i\geq c_{fair}\}}{|S_0|+{i+1}}+\frac{p\cdot \lambda^{\star}}{|S_0|+{i+1}}. \end{split} \end{align} For this process, whenever the claims of Proposition~\ref{proposition_1d} are true, we can prove the following theorem: \vspace{2mm} \begin{theorem}\label{theorem_1d} Consider our strategy as described in the previous paragraph and assume that all claims of Proposition~\ref{proposition_1d} are true. Then the following are true: \begin{enumerate} \item If $\lambda^{\star}\geq \lambda_{fair}^U$, then for $p\in[0,{\lambda_{fair}^U}/{\lambda^{\star}}]$, we have $\lambda_i\rightarrow \lambda_{fair}^U$ and hence $c_i\rightarrow c_{fair}$ as $i\rightarrow \infty$, and for $p\in[{\lambda_{fair}^U}/{\lambda^{\star}},1]$, we have $\lambda_i\rightarrow p\lambda^{\star}$ and $c_i\rightarrow c(p\lambda^{\star})$. \item If $\lambda^{\star}\leq \lambda_{fair}^U$, then for $p\in[0,{(1-\lambda_{fair}^U)}/{(1-\lambda^{\star})}]$, we have $\lambda_i \rightarrow \lambda_{fair}^U$ and hence $c_i\rightarrow c_{fair}$, and for $p\in[{(1-\lambda_{fair}^U)}/{(1-\lambda^{\star})},1]$, we have $\lambda_i\rightarrow 1-p+p\lambda^{\star}$ and $c_i\rightarrow c(1-p+p\lambda^{\star})$. \end{enumerate} \end{theorem} \begin{proof}[Proof~~(Sketch---the full proof can be found in Appendix~\ref{appendix_proofs})] Using \eqref{recurrence_lambda_with_parameter_p_N} and induction we can show \begin{align}\label{recurrence_lambda_proof_sketch} \lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)}{|S_0|+i}\cdot\sum_{j=0}^{i-1}\mathds{1}\{\lambda_{j}\leq \lambda_{fair}^U\}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}, \end{align} where we have used that $c_{j}\geq c_{fair}\Leftrightarrow\lambda_j\leq \lambda_{fair}^U$. From \eqref{recurrence_lambda_proof_sketch}, it is not hard to show the convergence of $\lambda_i$. The convergence of $c_i$ follows from the continuity of $c:\lambda\mapsto c(\lambda)$.
\end{proof} Note that the limit of $c_i$ continuously depends on $p$ and that for $p=0$ this limit equals $c_{fair}$ and for $p=1$ it equals $c(\lambda^{\star})$, the threshold that minimizes the risk for the true data generating distribution~$\Pr^{\star}$. Note that Assumptions~\ref{assumptions_1d} are not necessary conditions for Proposition~\ref{proposition_1d}, and hence Theorem~\ref{theorem_1d}, to hold. Indeed, in Appendix~\ref{appendix_analysis_1d} we show that Proposition~\ref{proposition_1d} holds for two marginal distributions of $x$ for $G_0$ and $G_1$ that are continuous uniform distributions and hinge loss (which is not strictly convex). For that concrete example, we can also prove that $c_i$ converges to $c_{fair}$ with a rate of $1/(|S_0|+i)$ when $p=0$. \section{Some finite sample analysis}\label{section_finite_sample_analysis} In this section, we show that a result somewhat akin to Theorem~\ref{theorem_1d} holds for a broader class of distributions and hypotheses, even if one has only finite-sample estimates of the loss functions and the bias of a hypothesis. Let us establish some useful notation for stating the result formally. For round $t$, let $\sam{0}{t}, \sam{1}{t}$ represent the sets of samples from groups $G_0, G_1$ in round~$t$, respectively, and let $\ss{0}{t} = |\sam{0}{t}|, \ss{1}{t} = | \sam{1}{t}|$ represent the numbers of those samples in round $t$. Let $\elt{t}$ represent the empirical loss function in round $t$ and $\eloss{a}{t}$ the empirical loss on $G_a$ in round $t$. The result of this section states that one variant of Algorithm~\ref{basic_outline} either approximately equalizes the losses for the two groups, or would draw a sample from the group with larger error if run for another round. Suppose we instantiate the variants of the algorithm to satisfy the following assumptions. \begin{assumptions}[Algorithmic specifications]\label{assumptions_algorithm} ~ \vspace{-6mm} \begin{enumerate}[leftmargin=*] \item Assume Algorithm~\ref{basic_outline} selects $\hypt{t}\in \argmin_{h\in \mathcal{H}}\elt{t}$ (which minimizes empirical loss in round $t$). \item Assume the group-specific performance is set to be $\eloss{a}{t}$ for group $G_a$. \item Assume all of these quantities are evaluated on the training set, not on a validation set. \end{enumerate} \end{assumptions} Intuition suggests that sampling from the group with higher empirical loss should lead to a hypothesis which approximately equalizes the losses across the two groups. The following theorem shows that this intuition holds in a formal sense. \begin{theorem}\label{thm_vc} Suppose Algorithm~\ref{basic_outline} is instantiated to satisfy Assumptions~\ref{assumptions_algorithm} for some $\mathcal{H}$ and $l$. Then with probability $1-\delta$, for the hypothesis $\hypt{T}$ output in round $T$, either \begin{itemize} \item $|\loss{0}{}(\hypt{T}) - \loss{1}{}(\hypt{T})| \leq 2\max_a\sqrt{\frac{\mathcal{VC}(H) \ln\frac{2T}{\delta}}{\ss{a}{T}}} \leq 2 \max_a\sqrt{\frac{\mathcal{VC}(H) \ln\frac{2T}{\delta}}{\ss{a}{0}}}$, or \item $|\loss{0}{}(\hypt{T}) - \loss{1}{}(\hypt{T})| > 2\max_a\sqrt{\frac{\mathcal{VC}(H) \ln\frac{2T}{\delta}}{\ss{a}{T}}}$, and a $(T+1)$st round would sample from the group with higher true loss. \end{itemize} \end{theorem} The formal proof is in Appendix~\ref{appendix_proof_jamie}. The approximation term is governed by the initial sample size of the smaller population, since in the worst case we draw no additional samples from that population.
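Before turning to related work, we note that the variant of Algorithm~\ref{basic_outline} used in our experiments can be sketched in a few lines of Python. The following is a minimal illustration assuming scikit-learn's logistic regression as the learner and the $01$-loss as $f$, with sampling without replacement from a finite pool; function and variable names are ours, and the initial training indices are assumed to cover both label classes.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_sampling(X, y, a, X_val, y_val, a_val, init_idx, p=0.5, T=1000,
                      seed=0):
    # Batch variant: retrain from scratch each round, estimate group-wise
    # 01-loss on the validation split, sample without replacement from a pool.
    rng = np.random.default_rng(seed)
    S = list(init_idx)
    pool = sorted(set(range(len(y))) - set(init_idx))
    for _ in range(T):
        h = LogisticRegression(max_iter=1000).fit(X[S], y[S])
        err = {g: np.mean(h.predict(X_val[a_val == g]) != y_val[a_val == g])
               for g in np.unique(a_val)}
        dis = max(err, key=err.get)                  # disadvantaged group
        if rng.random() < p:
            cand = pool                              # whole-population pool ...
        else:
            cand = [i for i in pool if a[i] == dis]  # ... or disadvantaged only
        if not cand:                                 # group may be exhausted
            cand = pool
        i = int(rng.choice(cand))
        S.append(i)
        pool.remove(i)
    return LogisticRegression(max_iter=1000).fit(X[S], y[S])
\end{verbatim}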
\section{Related work}\label{section_related_work} \paragraph{Fair ML} A huge part of the work on fairness in ML \citep{barocas-hardt-narayanan} aims at balancing various performance measures across different demographic groups for classification and more recently for dimensionality reduction and clustering \citep{samira2018,samira2019,kleindessnerguarantees,kleindessner2019fair,chierichetti2017fair,ahmadian2019clustering,bera2019fair,celis2018fair}, and ranking~\citep{celis2017ranking,singh2018fairness,zehlike2017fa}. Numerous works consider a classifier to be fair if it has approximately equal error, false positive, and/or false negative rates on different groups \citep[e.g.,][and references therein]{hardt2016equality,berk2017,agarwal2018,donini2018,zafar2019JMLR}. Most such works equalize these rates by pre-processing data, in-processing (satisfying the fairness constraints during training), or post-processing an unfair classifier's predictions. Pre-processing and post-processing~\citep{woodworth2017} can be highly suboptimal; instead, addressing unfairness during data collection is often a more effective approach~\citep{chen2018why}. Our proposed strategy is one such approach. Our strategy is also related to the approach by \citet{agarwal2018}, which reduces the problem of learning a fair classifier to a sequence of cost-sensitive classification problems, resulting in reweighting the training points from different groups. While our approach also reweights data from different groups, we do so implicitly by drawing additional samples from groups with higher error. This approach more naturally lends itself to settings where additional data can be gathered at some cost. \paragraph{Adaptive sampling} Adaptive sampling of many flavors is used pervasively in ML, and is at the heart of active learning, where one queries for labels of data points to learn a high-accuracy model with less labeled data; uncertainty sampling, query-by-committee, or sampling according to expected error reduction are commonly used query strategies \citep[e.g.,][]{settles_survey}. There is only limited theoretical understanding of the latter heuristics \citep{Freund1997,hanneke2014theory}. Empirically, they have been found to often work well, but also to fail badly in some situations \citep{schein2007}. Closely related to our work is the recent paper on fair active learning by \citet{Anahideh2020}. Their sampling strategy queries, in each round, the label of a data point that is both informative and expected to yield a classifier with a small violation of a fairness measure. Unlike our work, their approach requires training a classifier for every data point which might be queried (and with every possible label that point may have) before actually querying a data point, resulting in a significant computational overhead. Moreover, that work does not provide any theoretical analysis of its strategy. Also related is the paper by~\citet{campero2019}, which actively collects additional \emph{features} for data points to equalize performance on different groups subject to a budget constraint. \section{Experiments}\label{section_experiments} In this section, we perform a number of experiments to investigate the performance of our proposed strategy. We first compare our strategy to the approaches of \citet{agarwal2018} and \citet{hardt2016equality}.
Next, we apply our strategy to the Flint water data set~\citep{flint_paper_1}, where the goal is to predict whether a house's water access contains lead, and observe that our strategy reduces the accuracy disparity among the nine wards of Flint. Finally, we study our approach in a synthetic 1-dimensional setting similar to Section~\ref{section_analysis_1d} in Appendix~\ref{appendix_experiments_1d}. \subsection{Trading-off accuracy vs. fairness: comparison with some algorithms from the literature} \label{subsec_experiments_real} We compare the error-fairness Pareto frontiers for classification produced by our algorithm to counterparts produced by \citet{agarwal2018}, \citet{hardt2016equality}, and scikit-learn's logistic regression~\citep{scikit-learn}. We relax the fairness constraints to produce a Pareto frontier for the optimization of~\citet{hardt2016equality}. We implement Algorithm~\ref{basic_outline} for logistic regression in two ways: first, we completely retrain $h_t$ from $S_t$ (referred to as the batch version); second, we perform a step of SGD update using the sampled data (referred to as the SGD version; results in Appendix~\ref{appendix_experiments_real}). The former has stronger convergence guarantees while the latter requires less per-round computation. The data sets we consider are real-world data sets common in the fairness literature: Adult \citep{Dua2019}, Law School \citep{wightman1998lsac}\footnote{Downloaded from https://github.com/jjgold012/lab-project-fairness \citep{bechavod2017penalizing}.}, and the 2001 Dutch Census data~\citep{agarwal2018,kearns2019empirical,donini2018}. We also evaluate our work using a synthetic data set that was used in \citep{zafar2017}. For each data set, each strategy is run 10 times. {\bf In each run, all strategies are compared using the same training data, although it is split into train, pool, and validation for our method.} Over the 10 runs, the test set remains fixed. Each strategy is evaluated with an equal number of possible hyperparameter values. For \citet{agarwal2018}, a grid size of 100 is used; for \citet{hardt2016equality}, 100 evenly spaced numbers between 0 and 1 are used as coefficients for the relaxation of the optimization constraints; and for our strategy, we use 100 evenly spaced numbers between 0 and 1 as the probability~$p$ of sampling from the whole-population pool. The errors and fairness violations are averaged over the 10 runs. From those averaged results for each strategy, we plot the points on the Pareto frontiers. Figure~\ref{experiment_real_batch_1} shows results when $f$, the method of comparing group performance, is equalized odds~\citep{hardt2016equality}, plotted against classification error on the Adult Income and Dutch Census data sets. {\bf Our strategy produces Pareto frontiers that are competitive with those of the other strategies, after seeing a smaller labeled data set.} Both \citet{hardt2016equality} and \citet{agarwal2018} use all labeled points as training points. Our strategy as given in Algorithm~\ref{basic_outline} breaks the given set of labeled points down into three disjoint parts: initial training points, points used as a pool to sample from, and points used as a validation set to determine the disadvantaged group; we do not explicitly use the validation set for training. As suggested in Section~\ref{introduce_strategy}, our strategy can be adapted to different fairness measures. In Appendix~\ref{appendix_experiments_real}, we show analogous comparisons of Pareto frontiers using other group performance measures.
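For completeness, the post-processing that turns the averaged (error, violation) pairs, one pair per hyperparameter value, into a Pareto frontier can be sketched as follows; the numerical values below are illustrative placeholders rather than our measured results.
\begin{verbatim}
import numpy as np

def pareto_frontier(points):
    # Keep the (error, violation) pairs not dominated by any other pair
    # (smaller is better in both coordinates).
    front, best_violation = [], np.inf
    for err, vio in sorted(points):     # sort by error, then violation
        if vio < best_violation:
            front.append((err, vio))
            best_violation = vio
    return front

# One (avg. error, avg. violation) pair per hyperparameter value, e.g. per
# p in np.linspace(0, 1, 100); the numbers below are placeholders.
results = [(0.18, 0.09), (0.16, 0.12), (0.21, 0.03), (0.17, 0.12)]
print(pareto_frontier(results))  # [(0.16, 0.12), (0.18, 0.09), (0.21, 0.03)]
\end{verbatim}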
\begin{figure}[t] \centering \includegraphics[width=73mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=adult_NrUpdatesMyAppr=3000.pdf} \hspace{-9mm} \includegraphics[width=73mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=dutch_NrUpdatesMyAppr=3000.pdf} \\ \caption{{ Pareto frontiers produced by our strategy on Adult Income data (left two columns) and Dutch Census data (right two columns) vs. the other three strategies. Error is $01$-error. } }\label{experiment_real_batch_1} \end{figure} \begin{figure} \centering % \begin{overpic} [scale=0.108]{FinalFlint/Flint_MultiGroups_train_in_order_compressed.png} \put(72,35){\includegraphics[scale=0.08]{FinalFlint/legendN.pdf}} \end{overpic} \hspace{-3mm} \includegraphics[scale=0.108]{FinalFlint/Flint2-CROPPED_compressed.png} \hspace{-3mm} \includegraphics[scale=0.108]{FinalFlint/Flint1-CROPPED_compressed.png} \caption{ Experiment on the Flint water data set. Error on the nine wards as a function of $t$ for a strategy where training points are added in the order of their timestamp (left), a strategy where training points are added in a random order (middle), and our strategy (right). For our strategy, for $680\leq t\leq 5000$ the errors on Wards 2, 3, 6, 7, 8, and 9 are within 0.043. After that, some wards have exhaustively been sampled (we sample without replacement) and errors start to diverge. The plots are obtained from running the experiment ten times.} \label{fig:exp_flint_data} \end{figure} \subsection{Case study: sequentially replacing lead pipes in Flint MI}\label{subsec_experiment_Flint} In late 2015 it became clear to officials in Flint, MI, that changes to the source of municipal drinking water, which had taken place nearly two years prior, had caused corrosion to home water pipes throughout the city and allowed lead to leach into residents' drinking water. As media reports brought huge attention to the issue, and federal regulators arrived, the State of Michigan initiated a program to find and replace dangerous water pipes. Part of this program included researchers who developed a model to estimate which homes were most likely to have lead-based water lines~\citep{flint_paper_1}, using features such as the home's age, value, and location. From this work emerged a large data set of houses with property and parcel information, census data, and often, the material type of a house's water service lines and a timestamp of when this was determined, usually through a dig-inspection. Flint comprises nine wards, with most wards being highly homogeneous with respect to their residents' race or income~\citep{CityOfFl56:online}. After removing records with missing entries, the Flint water data set has 22750 records, distributed roughly equally among the nine wards (Ward 1: 2548, W2: 2697, W3: 1489, W4: 2998, W5: 1477, W6: 2945, W7: 2732, W8: 2970, W9: 2894). We use a random test set of size 5000 to evaluate errors. We compare Flint's sampling strategy and the random-order strategy starting with a training set of size 1200 in round $t=0$; our strategy starts with a randomly sampled training set of 200 and uses the remaining 1000 randomly sampled data points as a validation set: evaluating the error of the current model on the validation set, our strategy samples the next data point uniformly from the ward with the highest error among all wards with unlabeled points remaining.
Importantly, every strategy samples each data point exactly once (sampling without replacement) since the city's ultimate goal was to know the true label of as many houses as possible. In each round~$t$, we train a logistic regression classifier on all training points gathered until round $t$. Bureaucratic decision making for large projects like those in Flint involves many seemingly arbitrary considerations, and equalizing classifier performance across groups--in this case city wards--was certainly not the top priority item for policy makers after this crisis. On the other hand, as we see in the first panel of Figure~\ref{fig:exp_flint_data}, the large disparate impact in performance across wards in the city is quite stark. Simply choosing homes for inspection at random results in the comparison plot in the second panel of Figure~\ref{fig:exp_flint_data}. There still remains unequal estimator accuracy across wards, of course, since some wards are much harder to predict than others at baseline, and no attempt is made to equalize performance. The third panel shows the relative performance of our adaptive selection strategy. In the random-sampling strategy, between round~$t=680$ and round~$t=5000$, the difference in error between Ward~3 and Ward~9, on average, is always at least 0.1. In contrast, as long as there are unlabeled data points from each ward, our strategy (right plot) reduces the error on the ward with highest error and, for at least Wards 2, 3, 6, 7, 8, and 9, brings their accuracy closer. That is, between round~$t=680$ and round~$t=5000$, the difference between the error on any two of these wards is always smaller than 0.043. In later rounds, we exhaust the supply of samples from some wards and the errors diverge until they finally equal the errors for Flint's strategy or the random sampling strategy. Importantly, the reduction in accuracy disparity we achieve comes with only a mild increase in overall classification error (cf. Figure~\ref{fig:exp_flint_data2} in Appendix~\ref{appendix_flint_experiment}). \section*{Broader impact} This work aims to evaluate whether and when adaptive sampling can mitigate a baseline classifier's difference in predictive performance across demographic groups. This can certainly impact the way organizations choose to allocate their resources when collecting data in an adaptive fashion. This approach simultaneously reduces the impact of two possible sources of this disparate performance: the lack of a large enough sample from a population to generalize well, and the fact that the loss minimization prioritizes average loss rather than loss with respect to one particular group. This strategy, of merely gathering additional data from whichever group has lower performance, is simple enough that it stands a chance at adoption in high-stakes environments involving many stakeholders, and it admits a formal analysis of what this process can guarantee, like the one presented~here. \small \section*{Appendix} \vspace{1mm} \section{Proofs}\label{appendix_proofs} \vspace{1mm} \textbf{Proof that the functions defined in \eqref{def_pop_risks} are strictly convex:} \vspace{2pt} If $l:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$ is strictly convex, then $l(tx+(1-t)y)<tl(x)+(1-t)l(y)$ for all $x\neq y\in\mathbb{R}$ and $t\in(0,1)$. Let $c_1\neq c_2\in\mathbb{R}$ and $t\in(0,1)$. For all $x,y\in\mathbb{R}$ we have \begin{align*} l(y\cdot(x-(tc_1+(1-t)c_2)))&=l(t(yx-yc_1)+(1-t)(yx-yc_2))\\ &<t l(yx-yc_1)+(1-t)l(yx-yc_2).
\end{align*} Let $y=1$ (or $y=-1$) be fixed. Then both the left and the right side of the above inequality are continuous as a function of $x$. Hence, for every $x$ there exists an interval such that the difference between the left and the right side is greater than some small $\varepsilon$ on this interval. Using that $f_0\geq \delta$ on $[l,u]$, it follows that \begin{align*} &\mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_0}}l(y\cdot(x-(tc_1+(1-t)c_2)))<\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~ t\cdot \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_0}}l(yx-yc_1)+(1-t)\cdot\mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_0}}l(yx-yc_2). \end{align*} Similarly, $f_1\geq \delta$ on $[l,u]$ implies that $c\mapsto \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_1}}l(y\cdot(x-c))$ is strictly convex. \hfill$\square$ \vspace{8mm} For proving Proposition~\ref{proposition_1d} we require a simple technical lemma: \vspace{2mm} \begin{lemma}\label{lemma_conv_functions} Let $F,G:\mathbb{R}\rightarrow\mathbb{R}$ be two strictly convex functions and assume they attain a global minimum at $x_F$ and $x_G$, respectively, where $x_F<x_G$. For $\lambda\in[0,1]$, let $$H_\lambda(x)=\lambda F(x)+(1-\lambda)G(x).$$ Then: \begin{enumerate}[leftmargin=*] \item The function $H_\lambda:\mathbb{R}\rightarrow\mathbb{R}$ is strictly convex and attains a unique global minimum at some $x_\lambda\in[x_F,x_G]$. \item The function $\lambda\mapsto x_\lambda$ is decreasing.\footnote{If $F$ and $G$ are differentiable, it is not hard to show that $\lambda\mapsto x_\lambda$ is actually strictly decreasing; however, the counter-example $F(x)=(x-1)^2+|x|$ and $G(x)= (x+1)^2+|x|$ shows that in general this is not true.} \end{enumerate} \end{lemma} \begin{proof} First note that a strictly convex function is continuous and has at most one global minimum. \begin{enumerate}[leftmargin=*] \item Clearly, $H_\lambda$ is strictly convex. The restriction $F|_{(-\infty,x_F]}$ is strictly decreasing and $F|_{[x_F,+\infty)}$ is strictly increasing. Similarly, $G|_{(-\infty,x_G]}$ is strictly decreasing and $G|_{[x_G,+\infty)}$ is strictly increasing. It follows that for $x<x_F$ \begin{align*} H_\lambda(x)=\lambda F(x)+(1-\lambda)G(x)>\lambda F(x_F)+(1-\lambda)G(x_F)=H_\lambda(x_F) \end{align*} and, similarly, $H_\lambda(x)>H_\lambda(x_G)$ for $x>x_G$. Hence $\inf_{x\in\mathbb{R}}H_\lambda(x)=\inf_{x\in[x_F,x_G]}H_\lambda(x)$, and on the compact interval~$[x_F,x_G]$ the continuous function $H_\lambda$ attains a minimum. \item Assume that $\lambda'>\lambda$, but $x_{\lambda}<x_{\lambda'}$ (note that $x_{\lambda},x_{\lambda'}\in[x_F,x_G]$). However, \begin{align*} H_{\lambda'}(x_{\lambda})&=\lambda' F(x_{\lambda})+(1-\lambda')G(x_{\lambda})\\ &=\lambda F(x_{\lambda})+(1-\lambda)G(x_{\lambda})+(\lambda'-\lambda) F(x_{\lambda})-(\lambda'-\lambda)G(x_{\lambda})\\ &=H_\lambda(x_\lambda)+(\lambda'-\lambda) F(x_{\lambda})-(\lambda'-\lambda)G(x_{\lambda})\\ &<H_\lambda(x_{\lambda'})+(\lambda'-\lambda) F(x_{\lambda'})-(\lambda'-\lambda)G(x_{\lambda'})\\ &= \lambda F(x_{\lambda'})+(1-\lambda)G(x_{\lambda'})+(\lambda'-\lambda) F(x_{\lambda'})-(\lambda'-\lambda)G(x_{\lambda'})\\ &=\lambda' F(x_{\lambda'})+(1-\lambda')G(x_{\lambda'})\\ &=H_{\lambda'}(x_{\lambda'}), \end{align*} which contradicts $x_{\lambda'}$ being the global minimizer of $H_{\lambda'}$.
\end{enumerate} \end{proof} \vspace{8mm} \textbf{Proof of Proposition~\ref{proposition_1d}:} \begin{enumerate}[leftmargin=*] \item Let \begin{align*} F_0(z)=\int_{-\infty}^z f_0(x) dx,\qquad F_1(z)=\int_{-\infty}^z f_1(x) dx, \quad z\in\mathbb{R}, \end{align*} be the cumulative distribution functions of $G_0$ and $G_1$, respectively. For $c\in\mathbb{R}$, we have \begin{align*} &{\Pr}^{\star}|_{G_0}[\sign(x-c)\neq y]=\int_{\min\{c,t_0\}}^{\max\{c,t_0\}}f_0(x)dx=F_0(\max\{c,t_0\})-F_0(\min\{c,t_0\}),\\ &{\Pr}^{\star}|_{G_1}[\sign(x- c)\neq y]=\int_{\min\{c,t_1\}}^{\max\{c,t_1\}}f_1(x)dx=F_1(\max\{c,t_1\})-F_1(\min\{c,t_1\}) \end{align*} and hence the function $Bias$ is continuous. For $c\in[t_0,t_1]$, we have \begin{align*} Bias(c)=F_0(c)+F_1(c)-(F_0(t_0)+F_1(t_1)) \end{align*} and because of $f_0,f_1\geq \delta$ on $[t_0,t_1]$, $Bias|_{[t_0,t_1]}$ is strictly increasing. We also have $Bias(t_0)<0$ and $Bias(t_1)>0$. By the intermediate value theorem there exists a unique $c_{fair}$ with $Bias(c_{fair})=0$. \item We have \begin{align*} \mathbb{E}_{(x,y)\sim\Pr_\lambda}l(y\cdot(x-c))=\lambda \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_0}}l(y\cdot(x-c))+(1-\lambda) \mathbb{E}_{(x,y)\sim\Pr^{\star}|_{G_1}}l(y\cdot(x-c)), \end{align*} and the claim follows from Lemma~\ref{lemma_conv_functions}. \item The functions $$c\mapsto \mathbb{E}_{(x,y)\sim \Pr^{\star}|G_0}l(y\cdot(x-c))=\int_{I_0\cap(-\infty,t_0]}l(c-x) f_0(x) dx+\int_{I_0\cap[t_0,+\infty)}l(x-c) f_0(x) dx$$ and $$c\mapsto \mathbb{E}_{(x,y)\sim \Pr^{\star}|G_1}l(y\cdot(x-c))=\int_{I_1\cap(-\infty,t_1]}l(c-x) f_1(x) dx+\int_{I_1\cap[t_1,+\infty)}l(x-c) f_1(x) dx$$ are continuous since the integrands are continuous as a function of $(c,x)$ and the domains of integration are compact. Hence the function $$(c,\lambda)\mapsto \mathbb{E}_{(x,y)\sim\Pr_\lambda}l(y\cdot(x-c))=\lambda \mathbb{E}_{(x,y)\sim \Pr^{\star}|G_0}l(y\cdot(x-c))+(1-\lambda) \mathbb{E}_{(x,y)\sim \Pr^{\star}|G_1}l(y\cdot(x-c))$$ is also continuous. The function $c:\lambda\mapsto c(\lambda)$ is obtained by minimizing $\mathbb{E}_{(x,y)\sim\Pr_\lambda}l(y\cdot(x-c))$ with respect to $c\in[t_0,t_1]$. By the maximum theorem \citep[Chapter E.3]{Ok_2007}, the function $c:\lambda\mapsto c(\lambda)$ is continuous. We have $c(0)=t_1$ and $c(1)=t_0$ according to Assumptions~\ref{assumptions_1d}, and according to Lemma~\ref{lemma_conv_functions}, $c:\lambda\mapsto c(\lambda)$ is decreasing. \end{enumerate} \hfill$\square$ \vspace{8mm} \textbf{Proof of Theorem~\ref{theorem_1d}:} \vspace{2pt} Note that the function $c:\lambda \mapsto c(\lambda)$ is continuous according to Proposition~\ref{proposition_1d} and hence the claims about the convergence of $c_i$ follow from the claims about the convergence of $\lambda_i$. \vspace{2mm} According to \eqref{recurrence_lambda_with_parameter_p_N}, we have \begin{align*} \lambda_{i+1}=\frac{|S_0|+i}{|S_0|+{i+1}}\lambda_{i}+\frac{(1-p)\cdot\mathds{1}\{c_i\geq c_{fair}\}}{|S_0|+{i+1}}+\frac{p\cdot \lambda^{\star}}{|S_0|+{i+1}}. \end{align*} Using this recurrence relation, by means of induction it is not hard to show that \begin{align*} \lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)}{|S_0|+i}\cdot\sum_{j=0}^{i-1}\mathds{1}\{c_{j}\geq c_{fair}\}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}.
\end{align*} Since $c_{j}\geq c_{fair}~\Leftrightarrow~\lambda_j\leq \lambda_{fair}^U$, we obtain \begin{align*} \lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)}{|S_0|+i}\cdot\sum_{j=0}^{i-1}\mathds{1}\{\lambda_{j}\leq \lambda_{fair}^U\}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}. \end{align*} From this it follows that \begin{align*} (|S_0|+i)\cdot(\lambda_i-\lambda_{i-1})+\lambda_{i-1}=(1-p)\mathds{1}\{\lambda_{i-1}\leq \lambda_{fair}^U\}+p\lambda^{\star} \end{align*} and hence \begin{align}\label{lambda_n_monotonicity_N} \begin{split} \lambda_i>\lambda_{i-1}~~~\Leftrightarrow~~~\lambda_{i-1}<(1-p)\mathds{1}\{\lambda_{i-1}\leq \lambda_{fair}^U\}+p\lambda^{\star},\\ \lambda_i=\lambda_{i-1}~~~\Leftrightarrow~~~\lambda_{i-1}=(1-p)\mathds{1}\{\lambda_{i-1}\leq \lambda_{fair}^U\}+p\lambda^{\star},\\ \lambda_i<\lambda_{i-1}~~~\Leftrightarrow~~~\lambda_{i-1}>(1-p)\mathds{1}\{\lambda_{i-1}\leq \lambda_{fair}^U\}+p\lambda^{\star}. \end{split} \end{align} It also follows that \begin{align}\label{difference_lambda_i} |\lambda_i-\lambda_{i-1}|\leq \frac{2}{|S_0|+i} \end{align} and hence $|\lambda_i-\lambda_{i-1}|\rightarrow 0$. \vspace{2mm} In the following, we make four claims and prove each of them separately. \vspace{2mm} \textbf{Claim A:} \emph{We always have $\lambda^{\star}\leq 1-p+p\lambda^{\star}$. Furthermore, we have $\lambda^{\star}\geq p\lambda^{\star}$ and $p\lambda^{\star}\leq 1-p+p\lambda^{\star}$.} \vspace{1mm} Indeed, \begin{align*} \lambda^{\star}\leq 1-p+p\lambda^{\star}~~~\Leftrightarrow~~~ \lambda^{\star}-p\lambda^{\star}\leq 1-p~~~\Leftrightarrow~~~ \underbrace{\lambda^{\star}}_{\in[0,1]}(1-p)\leq 1-p \quad~~\checkmark \end{align*} The other two claims are a simple consequence of $p\in[0,1]$ and $\lambda^{\star}\in[0,1]$. \vspace{3mm} \textbf{Claim 1:} \emph{If $1-p+p\lambda^{\star}\leq \lambda_{fair}^U$, then $\lambda_i\rightarrow 1-p+p\lambda^{\star}$.} \vspace{1mm} By Claim A, we have $\lambda_0=\lambda^{\star}\leq 1-p+p\lambda^{\star}\leq \lambda_{fair}^U$. We first show that if $\lambda_j\leq 1-p+p\lambda^{\star}$ for all $0\leq j\leq i-1$, then we also have $\lambda_{i}\leq 1-p+p\lambda^{\star}$: in this case, \begin{align*} \lambda_i&=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)i}{|S_0|+i}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}\\ &\leq \frac{|S_0|}{|S_0|+i}\cdot (1-p+p\lambda^{\star}) + \frac{(1-p)i}{|S_0|+i}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}\\ &=1-p+p\lambda^{\star}. \end{align*} Hence, under the assumption of Claim 1, we have $\lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)i}{|S_0|+i}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}$ for all $i\in\mathbb{N}$ and $\lambda_i\rightarrow 1-p+p\lambda^{\star}$. \vspace{3mm} \textbf{Claim 2:} \emph{If $p\lambda^{\star}> \lambda_{fair}^U$, then $\lambda_i\rightarrow p\lambda^{\star}$.} \vspace{1mm} By Claim A, we have $\lambda_0=\lambda^{\star}\geq p\lambda^{\star}> \lambda_{fair}^U$. We first show that if $\lambda_j\geq p\lambda^{\star}$ for all $0\leq j\leq i-1$, then we also have $\lambda_{i}\geq p\lambda^{\star}$: in this case, \begin{align*} \lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{i}{|S_0|+i}\cdot p\lambda^{\star}\geq \frac{|S_0|}{|S_0|+i}\cdot p\lambda^{\star} +\frac{i}{|S_0|+i}\cdot p\lambda^{\star}=p\lambda^{\star}. \end{align*} Hence, under the assumption of Claim 2, we have $\lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{i}{|S_0|+i}\cdot p\lambda^{\star}$ for all $i\in\mathbb{N}$ and $\lambda_i\rightarrow p\lambda^{\star}$.
\vspace{3mm} \textbf{Claim 3:} \emph{If $p\lambda^{\star}\leq \lambda_{fair}^U< 1-p+p\lambda^{\star}$, then $\lambda_i\rightarrow \lambda_{fair}^U$.} \vspace{1mm} According to \eqref{lambda_n_monotonicity_N}, if $\lambda_i \leq \lambda_{fair}^U$, then $\lambda_{i+1}>\lambda_i$, and if $\lambda_i > \lambda_{fair}^U$, then $\lambda_{i+1}<\lambda_i$. If $\lambda_i \leq \lambda_{fair}^U$ for all $i\in\mathbb{N}$, then $\lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{(1-p)i}{|S_0|+i}+\frac{i}{|S_0|+i}\cdot p\lambda^{\star}$, $i\in\mathbb{N}$, and $\lambda_i\rightarrow 1-p+p\lambda^{\star}>\lambda_{fair}^U$, which is a contradiction. If $\lambda_i > \lambda_{fair}^U$ for all $i\in\mathbb{N}$, then $\lambda_i=\frac{|S_0|}{|S_0|+i}\cdot \lambda_0 + \frac{i}{|S_0|+i}\cdot p\lambda^{\star}$, $i\in\mathbb{N}$, and $\lambda_i\rightarrow p\lambda^{\star}\leq\lambda_{fair}^U$, which is only possible if $p\lambda^{\star}=\lambda_{fair}^U$ and $\lambda_i\rightarrow \lambda_{fair}^U$. Otherwise, for every $N\in\mathbb{N}$, there exist $i_1(N)>i_2(N)> N$ with $\lambda_{i_1(N)}\leq \lambda_{fair}^U$ and $\lambda_{i_2(N)}> \lambda_{fair}^U$. Because of $|\lambda_i-\lambda_{i-1}|\rightarrow 0$ and the monotonicity of the sequence on each side of $\lambda_{fair}^U$, it follows that $\lambda_i\rightarrow \lambda_{fair}^U$. \vspace{4mm} Combining Claims~1 to~3 yields the claims about the convergence of $\lambda_i$ as stated in the theorem. \hfill$\square$ \vspace{6mm} \section{Concrete example illustrating the findings of Section~\ref{section_analysis_1d}}\label{appendix_analysis_1d} As a concrete example of our findings in Section~\ref{section_analysis_1d}, consider the case where the marginal distributions of $x$ for both $G_0$ and $G_1$ are continuous uniform distributions, that is, $f_0(x)=\frac{1}{\beta_0-\alpha_0}\mathds{1}\{\alpha_0\leq x\leq \beta_0\}$ and $f_1(x)=\frac{1}{\beta_1-\alpha_1}\mathds{1}\{\alpha_1\leq x\leq \beta_1\}$. We assume that \begin{align}\label{assu_concrete_example} \alpha_0+1<\alpha_1+1< t_0-1<t_0+1<t_1-1<t_1+1<\beta_0-1<\beta_1-1. \end{align} We study the case of the hinge loss function~$l(z)=\max\{0,1-z\}$. Note that the hinge loss function is convex, but not strictly convex. Still, we show that Proposition~\ref{proposition_1d} holds. \vspace{3mm} Let $w_0=\frac{1}{\beta_0-\alpha_0}$ and $w_1=\frac{1}{\beta_1-\alpha_1}$. We have \begin{align}\label{prob_A_unif} \begin{split} {\Pr}^{\star}|_{G_0}[\sign(x-c)\neq y]=\begin{cases} \Pr^{\star}|_{G_0}[x \leq t_0]= (t_0-\alpha_0)\cdot w_0~~~\text{if } c\leq \alpha_0,\\ {\Pr}^{\star}|_{G_0}[c\leq x \leq t_0]= (t_0-c)\cdot w_0~~~\text{if }\alpha_0\leq c\leq t_0,\\ \Pr^{\star}|_{G_0}[t_0\leq x \leq c]= (c-t_0)\cdot w_0 ~~~\text{if }t_0\leq c \leq \beta_0,\\ \Pr^{\star}|_{G_0}[t_0 \leq x]= (\beta_0-t_0)\cdot w_0~~~\text{if } \beta_0\leq c \end{cases} \end{split} \end{align} and \begin{align}\label{prob_B_unif} \begin{split} {\Pr}^{\star}|_{G_1}[\sign(x-c)\neq y]=\begin{cases} \Pr^{\star}|_{G_1}[x \leq t_1]= (t_1-\alpha_1)\cdot w_1~~~\text{if } c\leq \alpha_1,\\ \Pr^{\star}|_{G_1}[c\leq x \leq t_1]= (t_1-c)\cdot w_1~~~\text{if } \alpha_1 \leq c\leq t_1,\\ \Pr^{\star}|_{G_1}[t_1\leq x \leq c]= (c-t_1)\cdot w_1 ~~~\text{if }t_1\leq c\leq \beta_1,\\ \Pr^{\star}|_{G_1}[t_1 \leq x]= (\beta_1-t_1)\cdot w_1~~~\text{if } \beta_1\leq c \end{cases}.
\end{split} \end{align}
Consequently, for $Bias(c)={\Pr}^{\star}|_{G_0}[\sign(x-c)\neq y]-{\Pr}^{\star}|_{G_1}[\sign(x-c)\neq y]$ we obtain
\begin{align*} Bias(c)=\begin{cases} (t_0-\alpha_0)\cdot w_0 - (t_1-\alpha_1)\cdot w_1 ~~~\text{if } c\leq \alpha_0,\\ (t_0-c)\cdot w_0 - (t_1-\alpha_1)\cdot w_1 ~~~\text{if }\alpha_0\leq c\leq \alpha_1,\\ (t_0-c)\cdot w_0 - (t_1-c)\cdot w_1 ~~~\text{if }\alpha_1\leq c\leq t_0,\\ (c-t_0)\cdot w_0 - (t_1-c)\cdot w_1 ~~~\text{if }t_0\leq c\leq t_1,\\ (c-t_0)\cdot w_0 - (c-t_1)\cdot w_1 ~~~\text{if }t_1\leq c \leq \beta_0,\\ (\beta_0-t_0)\cdot w_0 - (c-t_1)\cdot w_1 ~~~\text{if }\beta_0\leq c \leq \beta_1,\\ (\beta_0-t_0)\cdot w_0 - (\beta_1-t_1)\cdot w_1 ~~~\text{if }\beta_1 \leq c \end{cases}. \end{align*}
It is straightforward to verify that $Bias$ is continuous and $Bias(t_0)<0$ and $Bias(t_1)>0$. It is $$Bias|_{[t_0,t_1]}(c)=(c-t_0)\cdot w_0 - (t_1-c)\cdot w_1=(w_0+w_1)\cdot c -t_0w_0-t_1w_1,$$ and hence $Bias|_{[t_0,t_1]}$ is strictly increasing. Hence, we have shown the first claim of Proposition~\ref{proposition_1d} to be true. It is $c_{fair}=\frac{w_0t_0+w_1t_1}{w_0+w_1}$.
\vspace{3mm}
Let $l(z)=\max\{0,1-z\}$ be the hinge loss function. It is
\begin{align*} \mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_0}}l(y\cdot(x-c))&= \mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_0}}\max\{0,1-y(x-c)\}\\ &=\int_{\alpha_0}^{t_0}l(-x+c)w_0 \,dx +\int_{t_0}^{\beta_0}l(x-c)w_0 \,dx. \end{align*}
We have
\begin{align*} \max\{0,1+x-c\}= \begin{cases} 0 &\quad\text{if }x\leq c-1,\\ 1+x-c &\quad\text{if }x\geq c-1 \end{cases},\\ \max\{0,1-x+c\}= \begin{cases} 0 &\quad\text{if }x\geq c+1,\\ 1-x+c &\quad\text{if }x\leq c+1 \end{cases} \end{align*}
and hence
\begin{align*} \mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_0}}\max\{0,1-y(x-c)\}&= \int_{\min\{t_0,\max\{\alpha_0,c-1\}\}}^{t_0}(1-c+x)w_0 \,dx +\\ &~~~~~~~~~~~~~\int_{t_0}^{\max\{t_0,\min\{\beta_0,c+1\}\}}(1+c-x)w_0 \,dx. \end{align*}
It is
\begin{align*} \min\{t_0,\max\{\alpha_0,c-1\}\}&= \begin{cases} \alpha_0\quad\text{if }c\leq \alpha_0+1,\\ c-1\quad\text{if }\alpha_0+1\leq c \leq t_0+1,\\ t_0\quad\text{if }t_0+1\leq c \end{cases} \end{align*}
and
\begin{align*} \max\{t_0,\min\{\beta_0,c+1\}\}&= \begin{cases} t_0\quad\text{if }c\leq t_0-1,\\ c+1\quad\text{if }t_0-1\leq c \leq \beta_0-1,\\ \beta_0\quad\text{if }\beta_0-1\leq c \end{cases}. \end{align*}
It is straightforward to verify that
\begin{align*} &\mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_0}} \max\{0,1-y(x-c)\}=\\ &~~~~~~~~~~~~~~ \begin{cases} -c\cdot w_0(t_0-\alpha_0)+\left[w_0(t_0-\alpha_0)+w_0\frac{1}{2}(t_0^2-\alpha_0^2)\right]\quad\text{if }c\leq \alpha_0+1,\\ c^2\cdot\frac{1}{2}w_0 - c\cdot(t_0w_0 +w_0) + \left[\frac{1}{2}t_0^2w_0 + t_0w_0 + \frac{1}{2}w_0\right] \quad\text{if }\alpha_0+1\leq c \leq t_0-1,\\ c^2\cdot w_0 - c\cdot2t_0w_0 + \left[t_0^2w_0 + w_0\right] \quad\text{if }t_0-1\leq c \leq t_0+1,\\ c^2 \cdot \frac{1}{2}w_0 - c\cdot (t_0 w_0 - w_0) + \left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 + \frac{1}{2}w_0\right]\quad\text{if }t_0+1\leq c\leq \beta_0-1,\\ c\cdot(-t_0 w_0 + \beta_0 w_0) + \left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 - \frac{1}{2} \beta_0^2w_0 + \beta_0w_0\right] \quad\text{if }\beta_0-1\leq c \end{cases}.
\end{align*}
Similarly, we have
\begin{align*} &\mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_1}} \max\{0,1-y(x-c)\}=\\ &~~~~~~~~~~~~~~ \begin{cases} -c\cdot w_1(t_1-\alpha_1)+\left[w_1(t_1-\alpha_1)+w_1\frac{1}{2}(t_1^2-\alpha_1^2)\right]\quad\text{if }c\leq \alpha_1+1,\\ c^2\cdot\frac{1}{2}w_1 - c\cdot(t_1w_1 +w_1) + \left[\frac{1}{2}t_1^2w_1 + t_1w_1 + \frac{1}{2}w_1\right] \quad\text{if }\alpha_1+1\leq c \leq t_1-1,\\ c^2\cdot w_1 - c\cdot2t_1w_1 + \left[t_1^2w_1 + w_1\right] \quad\text{if }t_1-1\leq c \leq t_1+1,\\ c^2 \cdot \frac{1}{2}w_1 - c\cdot (t_1 w_1 - w_1) + \left[\frac{1}{2}t_1^2 w_1 - t_1 w_1 + \frac{1}{2}w_1\right]\quad\text{if }t_1+1\leq c\leq \beta_1-1,\\ c\cdot(-t_1 w_1 + \beta_1 w_1) + \left[\frac{1}{2}t_1^2 w_1 - t_1 w_1 - \frac{1}{2} \beta_1^2w_1 + \beta_1w_1\right] \quad\text{if }\beta_1-1\leq c \end{cases}. \end{align*}
\vspace{2mm}
It is
\begin{align*} &\mathbb{E}_{(x,y)\sim\Pr_\lambda}\max\{0,1-y(x-c)\}=\lambda\cdot \mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_0}}\max\{0,1-y(x-c)\} + \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda)\cdot \mathbb{E}_{(x,y)\sim\Pr_\lambda|_{G_1}}\max\{0,1-y(x-c)\} \end{align*}
and hence
\begin{align}\label{function_E_in_example} \begin{split} &\mathbb{E}_{(x,y)\sim\Pr_\lambda}\max\{0,1-y(x-c)\}=\\ &\\ &~~~~~ \begin{cases} -c\cdot [\lambda w_0(t_0-\alpha_0) +(1-\lambda)w_1(t_1-\alpha_1)]+\lambda\left[w_0(t_0-\alpha_0)+w_0\frac{1}{2}(t_0^2-\alpha_0^2)\right]+\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda)\left[w_1(t_1-\alpha_1)+w_1\frac{1}{2}(t_1^2-\alpha_1^2)\right]~~\qquad\text{if }c\leq \alpha_0+1,\\ \\ c^2\cdot\lambda\frac{1}{2}w_0 - c\cdot[\lambda(t_0w_0 +w_0)+(1-\lambda)w_1(t_1-\alpha_1)] + \lambda\left[\frac{1}{2}t_0^2w_0 + t_0w_0 + \frac{1}{2}w_0\right]+\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda)\left[w_1(t_1-\alpha_1)+w_1\frac{1}{2}(t_1^2-\alpha_1^2)\right]~~\qquad\text{if }\alpha_0+1\leq c\leq \alpha_1+1,\\ \\ c^2\cdot[\lambda\frac{1}{2}w_0+(1-\lambda)\frac{1}{2}w_1] - c\cdot[\lambda(t_0w_0 +w_0)+(1-\lambda)(t_1w_1 +w_1)]+\\ ~~~\lambda\left[\frac{1}{2}t_0^2w_0 + t_0w_0 + \frac{1}{2}w_0\right] + (1-\lambda)\left[\frac{1}{2}t_1^2w_1 + t_1w_1 + \frac{1}{2}w_1\right]\quad\text{if }\alpha_1+1\leq c\leq t_0-1,\\ \\ c^2\cdot [\lambda w_0 +(1-\lambda)\frac{1}{2}w_1]- c\cdot[\lambda 2t_0w_0+(1-\lambda)(t_1w_1 +w_1)] +\lambda \left[t_0^2w_0 + w_0\right] + \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda)\left[\frac{1}{2}t_1^2w_1 + t_1w_1 + \frac{1}{2}w_1\right] ~~\qquad\text{if }t_0-1\leq c\leq t_0+1,\\ \\ c^2 \cdot [\lambda\frac{1}{2}w_0+(1-\lambda)\frac{1}{2}w_1] - c\cdot [\lambda(t_0 w_0 - w_0)+(1-\lambda)(t_1w_1 +w_1)] + \\ ~~~\lambda\left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 + \frac{1}{2}w_0\right] +(1-\lambda) \left[\frac{1}{2}t_1^2w_1 + t_1w_1 + \frac{1}{2}w_1\right]\quad\text{if } t_0+1\leq c\leq t_1-1,\\ \\ c^2 \cdot [\lambda\frac{1}{2}w_0+(1-\lambda)w_1] - c\cdot [\lambda(t_0 w_0 - w_0)+(1-\lambda)2t_1w_1] + \\ ~~~~~~~~~~~~~~~ \lambda\left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 + \frac{1}{2}w_0\right]+(1-\lambda) \left[t_1^2w_1 + w_1\right] ~~\qquad\text{if }t_1-1\leq c \leq t_1+1,\\ \\ c^2 \cdot [\lambda\frac{1}{2}w_0+(1-\lambda)\frac{1}{2}w_1] - c\cdot [\lambda(t_0 w_0 - w_0)+(1-\lambda)(t_1 w_1 - w_1)] +\\ ~~~\lambda\left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 + \frac{1}{2}w_0\right]+(1-\lambda)\left[\frac{1}{2}t_1^2 w_1 - t_1 w_1 + \frac{1}{2}w_1\right]\quad\text{if }t_1+1\leq c\leq \beta_0-1,\\ \\ c^2 \cdot (1-\lambda)\frac{1}{2}w_1+c\cdot[\lambda(-t_0 w_0 + \beta_0 w_0)-(1-\lambda)(t_1 w_1 - w_1)] + \\ ~~~~~~~~~~~~~~~~~~~\lambda\left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 -
\frac{1}{2} \beta_0^2w_0 + \beta_0w_0\right]+\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda) \left[\frac{1}{2}t_1^2 w_1 - t_1 w_1 + \frac{1}{2}w_1\right]\qquad\text{if }\beta_0-1\leq c\leq \beta_1-1,\\ \\ c\cdot[\lambda(-t_0 w_0 + \beta_0 w_0)+(1-\lambda)(-t_1 w_1 + \beta_1 w_1)] +\\ ~~~~~~~~~~~~~~~~~~~\lambda\left[\frac{1}{2}t_0^2 w_0 - t_0 w_0 - \frac{1}{2} \beta_0^2w_0 + \beta_0w_0\right]+\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-\lambda) \left[\frac{1}{2}t_1^2 w_1 - t_1 w_1 - \frac{1}{2} \beta_1^2w_1 + \beta_1w_1\right] \qquad\text{if }\beta_1-1\leq c \end{cases}. \end{split} \end{align}
\begin{figure}[t] \centering \includegraphics[scale=0.4]{sketches/loss_example.png} \hspace{2mm} \includegraphics[scale=0.4]{sketches/loss_example_det.png} \caption{Example of $c\mapsto \mathbb{E}_{(x,y)\sim\Pr_\lambda}\max\{0,1-y(x-c)\}$ for $\lambda=0.3$, $\alpha_0=0$, $\alpha_1=1$, $t_0=4$, $t_1=7$, $\beta_0=10$, $\beta_1=14$.}\label{figure_example_function_E} \end{figure}
Let us write $E(c)=\mathbb{E}_{(x,y)\sim\Pr_\lambda}\max\{0,1-y(x-c)\}$ as given in \eqref{function_E_in_example}. An example of the function~$E$ is provided in Figure~\ref{figure_example_function_E}. It is clear from \eqref{function_E_in_example} that $E|_{(-\infty,\alpha_0+1]}$ is strictly decreasing and $E|_{[\beta_1-1,+\infty)}$ is strictly increasing. Hence,
\begin{align*} \inf_{c\in\mathbb{R}}E(c)=\inf_{c\in[\alpha_0+1,\beta_1-1]} E(c), \end{align*}
and since $E$ is continuous, we have
\begin{align*} \inf_{c\in[\alpha_0+1,\beta_1-1]} E(c)=\min_{c\in[\alpha_0+1,\beta_1-1]} E(c). \end{align*}
It is
\begin{align*} \min_{c\in[\alpha_0+1,\beta_1-1]} E(c)=\min_{l=1,\ldots,7} \,\min_{c\in J_l} E(c) \end{align*}
with
\begin{align*} J_1=[\alpha_0+1,\alpha_1+1],~~~J_2=[\alpha_1+1,t_0-1],~~~J_3=[t_0-1,t_0+1],~~~J_4=[t_0+1,t_1-1],\\ J_5=[t_1-1,t_1+1],~~~J_6=[t_1+1,\beta_0-1],~~~J_7=[\beta_0-1,\beta_1-1]. \end{align*}
We observe the following:
\begin{itemize} \item $E|_{J_1}$ is strictly decreasing: this is clear if $\lambda=0$. If $\lambda>0$, $E|_{J_1}$ is part of a parabola (opening to the top) with vertex at $$\frac{\lambda(t_0w_0 +w_0)+(1-\lambda)w_1(t_1-\alpha_1)}{\lambda w_0}\geq t_0+1,$$ which lies on the right side and outside of $J_1$. \item Similarly, $E|_{J_7}$ is strictly increasing. \item $E|_{J_2}$ is strictly decreasing: $E|_{J_2}$ is part of a parabola (opening to the top) with vertex at $$\frac{\lambda w_0(t_0+1)+(1-\lambda)w_1(t_1+1)}{\lambda w_0+(1-\lambda)w_1}\geq t_0+1,$$ which lies on the right side and outside of $J_2$. \item Similarly, $E|_{J_6}$ is strictly increasing. \end{itemize}
Hence, we have
\begin{align*} \min_{c\in[\alpha_0+1,\beta_1-1]} E(c)=\min_{l\in\{3,4,5\}} \,\min_{c\in J_l} E(c). \end{align*}
Note that the above observations also imply that for $c\notin J_3\cup J_4\cup J_5$ we have $$E(c)>\min_{l\in\{3,4,5\}} \,\min_{c\in J_l} E(c).$$ Let $\widetilde{E}_3,\widetilde{E}_4,\widetilde{E}_5:\mathbb{R}\rightarrow \mathbb{R}$ be the quadratic functions (parabolas opening to the top) that coincide with $E$ on $J_3$, $J_4$ and $J_5$, respectively. Let $S_3,S_4,S_5$ be their vertices.
It is
\begin{align*} S_3&=\frac{\lambda 2t_0w_0+(1-\lambda)w_1(t_1+1)}{2\lambda w_0+(1-\lambda)w_1}\in[t_0,t_1+1],\\ S_4&=\frac{\lambda w_0 (t_0-1)+(1-\lambda)w_1(t_1+1)}{\lambda w_0 +(1-\lambda)w_1}\in[t_0-1,t_1+1],\\ S_5&=\frac{\lambda w_0(t_0-1)+(1-\lambda)2t_1w_1}{\lambda w_0 +2(1-\lambda)w_1}\in[t_0-1,t_1]. \end{align*}
It is
\begin{align}\label{temp_eq_1} S_3\leq t_0+1~~~\Leftrightarrow~~~ (1-\lambda)w_1t_1\leq (1-\lambda)w_1t_0+2\lambda w_0~~~\Leftrightarrow~~~ S_4\leq t_0+1 \end{align}
and
\begin{align}\label{temp_eq_2} S_5\geq t_1-1~~~\Leftrightarrow~~~ \lambda w_0t_1\leq \lambda w_0t_0+2(1-\lambda) w_1~~~\Leftrightarrow~~~ S_4\geq t_1-1, \end{align}
where equality on one side of an equivalence holds if and only if it holds on the other side. We distinguish three cases:
\begin{itemize} \item \eqref{temp_eq_1} \emph{is true:} If \eqref{temp_eq_1} is true, then \eqref{temp_eq_2} cannot be true (both together would imply $(t_1-t_0)^2\leq 4$, contradicting \eqref{assu_concrete_example}). Then $E|_{J_4}$ and $E|_{J_5}$ are both strictly increasing, and the minimum of $E|_{J_3}$ at $S_3\in[t_0,t_0+1]$ is the unique global minimum of $E$. \item \eqref{temp_eq_2} \emph{is true:} Similarly to the previous case we conclude that $E$ has a unique global minimum at $S_5\in[t_1-1,t_1]$. \item \emph{Neither \eqref{temp_eq_1} nor \eqref{temp_eq_2} is true:} If neither \eqref{temp_eq_1} nor \eqref{temp_eq_2} is true, then $E|_{J_3}$ is strictly decreasing and $E|_{J_5}$ is strictly increasing, and the minimum of $E|_{J_4}$ at $S_4\in(t_0+1,t_1-1)$ is the unique global minimum of $E$. \end{itemize}
Note that $[t_0,t_0+1]\subseteq[t_0,t_1]$ and $[t_1-1,t_1]\subseteq[t_0,t_1]$, and we have proven the second claim of Proposition~\ref{proposition_1d} to be true.
\vspace{2mm}
If $\lambda=0$, then \eqref{temp_eq_2} is true and we have $c(\lambda)=c(0)={S_5}|_{\lambda=0}=t_1$. Similarly, we obtain $c(1)=t_0$. $S_3$, $S_4$ and $S_5$ as a function of $\lambda$ are continuous. We have
\begin{align*} & \eqref{temp_eq_1} ~~~\Leftrightarrow~~~ \phi(\lambda):=-\lambda[w_1(t_1-t_0)+2w_0]+w_1(t_1-t_0)\leq 0,\\ & \eqref{temp_eq_2} ~~~\Leftrightarrow~~~ \psi(\lambda):=\lambda[w_0(t_1-t_0)+2w_1]-2w_1\leq 0. \end{align*}
The two functions $\phi$ and $\psi$ are continuous, and hence $c:\lambda\mapsto c(\lambda)$ is continuous on $\{\lambda: \phi(\lambda)<0\}\dot{\cup}\{\lambda: \psi(\lambda)<0\}\dot{\cup} \{\lambda: \phi(\lambda)>0\wedge \psi(\lambda)>0\}$. Since in \eqref{temp_eq_1} and \eqref{temp_eq_2} equality on one side of an equivalence holds if and only if it holds on the other side, it follows that $c:\lambda\mapsto c(\lambda)$ is also continuous at the points $\lambda_\phi$ and $\lambda_\psi$ with $\phi(\lambda_\phi)=0$ and $\psi(\lambda_\psi)=0$, respectively. Finally, $S_3$, $S_4$ and $S_5$ as a function of $\lambda$ are strictly decreasing, the function $\psi$ is strictly increasing and the function $\phi$ is strictly decreasing. It follows that $c:\lambda\mapsto c(\lambda)$ is strictly decreasing, and we have also proven the third claim of Proposition~\ref{proposition_1d} to be true. In this example, since $c:\lambda\mapsto c(\lambda)$ is strictly decreasing, it is $\lambda_{fair}^L=\lambda_{fair}^U$.
\vspace{4mm}
\textbf{Convergence rate in case of $w_0=w_1$:} In the following, we study the rate at which $\lambda_i\rightarrow \lambda_{fair}^U$ and $c_i\rightarrow c_{fair}$, respectively, when $p=0$ in our strategy and in the particularly simple case that $w_0=w_1$. In this case, $c_{fair}=\frac{w_0t_0+w_1t_1}{w_0+w_1}=\frac{t_0+t_1}{2}$.
Moreover, we claim that $\lambda_{fair}^U=\frac{1}{2}$: because of \eqref{assu_concrete_example} it is $\phi(\frac{1}{2})=\psi(\frac{1}{2})>0$ and hence $c(\frac{1}{2})=S_4|_{\lambda=\frac{1}{2}}=\frac{t_0+t_1}{2}=c_{fair}$, which shows that $\lambda_{fair}^U=\frac{1}{2}$. It is \begin{align*} c(\lambda)=\begin{cases} S_3=\frac{\lambda 2t_0+(1-\lambda)(t_1+1)}{1+\lambda}\quad\text{if }~\phi(\lambda)\leq 0 \,\Leftrightarrow\, 1-\frac{2}{t_1-t_0}\leq \lambda\\ S_5=\frac{\lambda(t_0-1)+(1-\lambda)2t_1}{2-\lambda}\quad\text{if }~\psi(\lambda)\leq 0 \,\Leftrightarrow\, \lambda\leq \frac{2}{t_1-t_0+2}\\ S_4=\lambda (t_0-1)+(1-\lambda)(t_1+1)\quad\text{else} \end{cases}. \end{align*} The function $c:\lambda\mapsto c(\lambda)$ is piecewise smooth. Let $c':\lambda\mapsto c'(\lambda)$ be the function that coincides with the first derivative of $c$ at those $\lambda$ for which $c$ is differentiable and with $c'(\lambda)=0$ otherwise. Then we have $c(\lambda)=c(0)+\int_0^{\lambda}c'(r)dr$ for all $\lambda\in[0,1]$. It is straightforward to see that $|c'(\lambda)|\leq 2(t_1-t_0+1)$, $\lambda\in[0,1]$, and hence \begin{align}\label{control_c} |c(\lambda)-c(\lambda')|\leq \int_{\min\{\lambda,\lambda'\}}^{\max\{\lambda,\lambda'\}}|c'(r)|dr \leq 2(t_1-t_0+1)\cdot |\lambda-\lambda'|. \end{align} \vspace{2mm} Now assume we run our strategy as described in Section~\ref{section_analysis_1d} with $\lambda_0=\lambda^{\star}=\frac{|S_0\cap G_0|}{|S_0|}$. Here we consider the case that $\lambda^{\star}<\lambda_{fair}^U=\frac{1}{2}$. Then $c_0=c(\lambda_0)>c_{fair}$ and $\lambda_1=\frac{|S_0\cap G_0|+1}{|S_0|+1}$. Note that $\lambda_i$ keeps increasing until time step $|S_0|\cdot(1-2\lambda_0)$ with \begin{align*} \lambda_{|S_0|\cdot(1-2\lambda_0)}=\frac{|S_0\cap G_0|+|S_0|\cdot(1-2\lambda_0)}{|S_0|+|S_0|\cdot(1-2\lambda_0)} =\frac{\lambda_0|S_0|+|S_0|\cdot(1-2\lambda_0)}{|S_0|+|S_0|\cdot(1-2\lambda_0)} =\frac{1}{2}. \end{align*} It is not hard to see that from then on \begin{align*} \lambda_{|S_0|\cdot(1-2\lambda_0)+1+2k}>\frac{1}{2}\quad\text{and}\quad \lambda_{|S_0|\cdot(1-2\lambda_0)+2k}=\frac{1}{2}\quad\text{for all }k\in\mathbb{N}_0. \end{align*} According to \eqref{difference_lambda_i} it is $|\lambda_i-\lambda_{i-1}|\leq {2}/({|S_0|+i})$ and hence, using \eqref{control_c}, we conclude that \begin{align*} |c_{fair}-c_i|=\left|c\left({1}/{2}\right)-c(\lambda_i)\right|\leq \frac{4(t_1-t_0+1)}{|S_0|+i} \quad\text{for all } i\geq |S_0|\cdot(1-2\lambda_0). \end{align*} \section{Further experiments}\label{appendix_further_experiments} \subsection{Experiment in 1-dimensional setting similar to Section~\ref{section_analysis_1d} }\label{appendix_experiments_1d} We illustrate our findings of Section~\ref{section_analysis_1d} and empirically show that the claims that we made there hold true in a finite-sample setting and when performing SGD updates. For doing so, we consider the case that the feature $x\in\mathbb{R}$ comes from a mixture of two Gaussians $\mathcal{N}(0,1)$ and $\mathcal{N}(2,2)$ with mixture weights 0.85 and 0.15, respectively. A data point~$(x,y,a)$ belongs to group~$G_0$ (i.e., $a=0$) if $x$ comes from the first Gaussian and to group~$G_1$ if it comes from the second Gaussian. If $a=0$, then $y=\sign(x)$, and if $a=1$, then $y=\sign(x-1.4)$ (in the notation of Section~\ref{section_analysis_1d}, it is $t_0=0$ and $t_1=1.4$). 
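Before presenting the results, we sketch this experiment in a few lines of Python (a simplified illustration only: the procedure, including the initial-sample fit and the validation sets, is detailed in the following paragraph, and the variance convention for $\mathcal{N}(2,2)$, the initial threshold and the number of rounds are assumptions made for the sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(123456)
T0, T1 = 0.0, 1.4   # group-wise labeling thresholds t_0 and t_1

def sample_point(group=None):
    # group 0: x ~ N(0,1) (weight 0.85); group 1: x ~ N(2,2) (sd sqrt(2) assumed)
    a = rng.choice(2, p=[0.85, 0.15]) if group is None else group
    x = rng.normal(0.0, 1.0) if a == 0 else rng.normal(2.0, np.sqrt(2.0))
    y = 1.0 if x - (T0 if a == 0 else T1) >= 0 else -1.0
    return x, y, a

def group_error(c, points):
    # misclassification rate of the threshold classifier sign(x - c)
    return np.mean([(1.0 if x - c >= 0 else -1.0) != y for x, y, _ in points])

val = {a: [sample_point(a) for _ in range(10000)] for a in (0, 1)}
c = 0.5   # illustrative start; the text instead fits c_0 on an initial sample
for t in range(1, 5001):
    # sample from the group with larger estimated error (p = 0)
    a = 0 if group_error(c, val[0]) >= group_error(c, val[1]) else 1
    x, y, _ = sample_point(a)
    if 1.0 - y * (x - c) > 0.0:   # hinge loss active at (x, y):
        c -= y / np.sqrt(t)       # subgradient of max{0,1-y(x-c)} w.r.t. c is y
\end{verbatim}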
Starting with an initial sample of size 50, we compute the threshold $c_0$ (corresponding to the threshold classifier $\hat{y}=\sign(x-c_0)$) that minimizes the empirical risk with respect to the hinge loss over all possible thresholds. Then, in each round, we estimate the error of the current threshold on $G_0$ and $G_1$ using a validation set comprising a sample of 500 (top row) / 10000 (bottom row) data points from each group (i.e., 1000 / 20000 points in total). We sample a data point from the group with larger estimated error (the disadvantaged group), that is $p=0$, and use it to perform an SGD update of the current threshold. We choose the learning rate in round $t$ as $1/\sqrt{t}$.
\begin{figure}[t] \centering \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=normal_Loss=hinge_NO-POOL_Nvalidation=1000_Ninitial=50_prob=0_Seed=123456_SGD.pdf} \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=normal_Loss=hinge_NO-POOL_Nvalidation=20000_Ninitial=50_prob=0_Seed=123456_SGD.pdf} \vspace{-1mm} \caption{Our strategy in a 1-dim setting similar to Section~\ref{section_analysis_1d}. We learn the threshold $c$ for a classifier $\hat{y}=\sign(x-c)$ by performing in each round~$t$ an SGD update w.r.t. the hinge~loss. As a function of~$t$, the plots show: the threshold~$c_t$ as it approximates the fair threshold~$c_{fair}$ (left); the error on the two groups of $c_t$ (middle); and the fraction of sample points from $G_1$ among all sample~points (right). In the top row, the validation set has size 1000 and there is some obvious deviation between $c_t$ and~$c_{fair}$. In the bottom row, the validation set has size 20000 and $c_t$ approximates $c_{fair}$ very well.}\label{experiment_1D} \end{figure}
Figure~\ref{experiment_1D} shows some results: in each row, the left plot shows the threshold~$c_t$ obtained in the $t$-th round as a function of~$t$ and also shows the threshold $c_{fair}$ that equalizes the error on the two groups (obtained analytically). The middle plot shows the true error (evaluated analytically---\emph{not} estimated on the validation set) on the two groups of the threshold $c_t$. Finally, the right plot shows the fraction of sample points from group~$G_1$ among all sample points considered until round~$t$. We can see that the threshold~$c_0$, which is learnt based on an i.i.d. sample from the whole population, has highly different errors on the two groups. This is not surprising since about 85\% of the data points in an i.i.d. sample come from group~$G_0$. As $t$ increases, the difference in the errors gets smaller. Apparently, all considered quantities are converging. However, in the top row, where the validation set has only size 1000, the threshold~$c_t$ does not converge to $c_{fair}$, but rather some slightly smaller threshold that does not exactly equalize the error on the two groups. In the bottom row, where the validation set is significantly larger, $c_t$ does converge to $c_{fair}$ (or something very close) and here the errors on the two groups are (almost) perfectly equalized. Figure~\ref{experiment_1D_app2} shows the results for slightly different settings: in the first row, we consider the logistic loss instead of the hinge loss. In the second and the third row we consider a mixture of two uniform distributions $\mathcal{U}(0,10)$ and $\mathcal{U}(6,12)$ rather than a mixture of two normal distributions. It is $t_0=7$ and $t_1=9$. The mixture weights are $0.85$ and $0.15$ as before.
In the experiment of the second row the loss function is the hinge loss; in the experiment of the third row it is the logistic loss. In all three experiments the validation set has size 2000. The results are similar to the ones before. However, for the mixture of the uniform distributions our strategy converges much faster and yields a threshold very close to $c_{fair}$ even though the validation set is of only moderate size. Finally, Figure~\ref{experiment_1D_app3} shows the results for an experiment where we do not always sample from the disadvantaged group, but with probability~$p=0.8$ we sample from the whole population. As we can see, and in accordance with our analysis of Section~\ref{section_analysis_1d}, in this case the threshold~$c_t$ converges to a threshold between the one that minimizes the risk (purple line) and the threshold $c_{fair}$, which equalizes the error on the two groups (pink line). The error of $c_t$ on group $G_0$ is larger than the error of the risk-minimizing threshold, and the error of $c_t$ on group $G_1$ is smaller than the error of the risk-minimizing threshold. In this experiment, we chose the learning rate in round $t$ as $0.1/\sqrt{t}$. Overall, the results of this section confirm the validity of our main findings of Section~\ref{section_analysis_1d}.
\begin{figure}[t] \centering \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=normal_Loss=logistic_NO-POOL_Nvalidation=2000_Ninitial=50_prob=0_Seed=123456_SGD.pdf} \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=uniform_Loss=hinge_NO-POOL_Nvalidation=2000_Ninitial=50_prob=0_Seed=123456_SGD.pdf} \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=uniform_Loss=logistic_NO-POOL_Nvalidation=2000_Ninitial=50_prob=0_Seed=123456_SGD.pdf} \caption{Similar experiment as in Figure~\ref{experiment_1D}, with the logistic loss instead of the hinge loss and / or a mixture of uniform distributions instead of a mixture of normal distributions.}\label{experiment_1D_app2} \end{figure}
\begin{figure}[h!] \centering \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=normal_Loss=hinge_NO-POOL_Nvalidation=10000_Ninitial=50_prob=08_Seed=123456_SGD.pdf} \includegraphics[scale=0.27]{Experiments_1dim/1dimActive_Data=normal_Loss=logistic_NO-POOL_Nvalidation=10000_Ninitial=50_prob=08_Seed=123456_SGD.pdf} \caption{Similar experiment as in Figure~\ref{experiment_1D}, but rather than always sampling from the disadvantaged group, with probability $p=0.8$ we sample from the whole population.}\label{experiment_1D_app3} \end{figure}
\subsection{More experiments as in Section~\ref{subsec_experiments_real}}\label{appendix_experiments_real}
\subsubsection{Details of implementation}
Here we give more details on the implementation of our strategy. In the batch version, we implemented our strategy by providing a query strategy to modAL \citep{danka2018modal}\footnote{https://modal-python.readthedocs.io/en/latest/index.html}. ModAL is an active learning framework that takes in a specified loss function and sampling strategy as parameters, and performs model fitting and sampling alternately. Specifically, we provide a query strategy that samples from the disadvantaged group with the probability parameter described in Section~\ref{introduce_strategy}. In our experiments, the active learner underlying modAL minimizes the logistic loss in each time step. In the SGD version, we correspondingly implement SGD updates that optimize the logistic loss.
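A minimal sketch of such a query strategy is given below (an illustration, not the exact code we ran: the group labels of the pool and the validation data are assumed to be bound into the function, e.g.\ via \texttt{functools.partial}, and the return convention follows the custom-query-strategy examples in the modAL documentation):
\begin{verbatim}
import numpy as np

def fair_query_strategy(classifier, X_pool, pool_groups,
                        X_val, y_val, val_groups, p=0.0):
    # Estimate the per-group error of the current model on the validation set.
    errs = [np.mean(classifier.predict(X_val[val_groups == a])
                    != y_val[val_groups == a]) for a in (0, 1)]
    disadvantaged = int(np.argmax(errs))
    # With probability p sample from the whole pool, otherwise from the
    # currently disadvantaged group.
    if np.random.rand() < p:
        candidates = np.arange(len(X_pool))
    else:
        candidates = np.flatnonzero(pool_groups == disadvantaged)
    idx = np.random.choice(candidates)
    return idx, X_pool[idx]
\end{verbatim}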
\subsubsection{More results with equalized odds measure}
Here we present Pareto frontiers produced by our strategy in the batch version on more data sets, displayed in scatter-plot style. As mentioned in Section~\ref{subsec_experiments_real}, each strategy is run 10 times. Points on the Pareto frontiers over the 10 runs are collected and used for the scatter plots. Figure~\ref{Batch_EO_adult}, Figure~\ref{Batch_EO_dutch}, Figure~\ref{Batch_EO_law} and Figure~\ref{Batch_EO_Zafar} show points on the Pareto curves over the 10 runs of our strategy compared to the other strategies.
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=adultcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=adultcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=adultcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=adultcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Adult Income data. Fairness measure: equalized odds.}\label{Batch_EO_adult} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=dutchcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=dutchcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=dutchcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=dutchcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Dutch Census data. Fairness measure: equalized odds.}\label{Batch_EO_dutch} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=lawcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=lawcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=lawcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=lawcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Law Admission data. Fairness measure: equalized odds.}\label{Batch_EO_law} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=zafarcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=zafarcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=zafarcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Odds_Data=zafarcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Synthetic data from \citet{zafar2017www}. Fairness measure: equalized odds.}\label{Batch_EO_Zafar} \end{figure}
\subsubsection{Performance of SGD version}
Here we present Pareto frontiers generated by the SGD version of our strategy, in comparison with Pareto frontiers generated with other strategies. Pareto frontiers here represent trade-offs between classification error and the fairness measure of equalized odds. Similar to the experiments in the batch setting, each strategy is run 10 times. Each time, all strategies compared get the same training data. Over the 10 runs, the test set remains the same. Each strategy is evaluated on an equal number of possible parameter values.
In the SGD implementation, after training on a small initial sample, our algorithm repeats the following operations: it determines a disadvantaged group at the current time step, samples one point from that group, and makes one SGD update. To be comparable to the other strategies, SGD optimizes the logistic loss. In all repeated runs, we cross-validate to choose an appropriate learning rate and regularization parameter. We make a total of 3000 SGD updates before evaluating on test data. We note that in some cases the process may not have converged after 3000 updates. To create scatter plots, points on Pareto frontiers over all 10 runs are collected. Figure~\ref{SGD_dutch}, Figure~\ref{SGD_law} and Figure~\ref{SGD_zafar} show that the SGD implementation of our strategy with 3000 updates produces Pareto frontiers that are competitive with those of the other strategies. For the Adult Income data, Figure~\ref{SGD_adult} shows that our strategy produces predictions that are the most fair but not the most accurate at the end of 3000 SGD updates. We suspect the process did not converge with 3000 updates.
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=adultcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=adultcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=adultcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=adultcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in SGD version. Adult Income data. Fairness measure: equalized odds.}\label{SGD_adult} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=dutchcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=dutchcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=dutchcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=dutchcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in SGD version. Dutch Census data. Fairness measure: equalized odds.}\label{SGD_dutch} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=lawcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=lawcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=lawcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=lawcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in SGD version. Law Admission data. Fairness measure: equalized odds.}\label{SGD_law} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=Zafarcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=Zafarcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=Zafarcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/SGD_Ten_run_Equalised_Odds_Data=Zafarcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in SGD version. Synthetic data from \citet{zafar2017www}. Fairness measure: equalized odds.
}\label{SGD_zafar} \end{figure}
\subsubsection{Performance when evaluated with equal opportunity}
In this section, we present Pareto frontiers that represent trade-offs between classification error and equal opportunity \citep{hardt2016equality}. Experiments are run analogously to those described in Section~\ref{subsec_experiments_real}. The difference is that the optimization formulation in \citet{hardt2016equality} is constrained by the notion of equal opportunity only. Our algorithm is adapted so that the group with the greater violation of equal opportunity is deemed disadvantaged. Since the current implementation\footnote{https://github.com/fairlearn/fairlearn } of the method by \citet{agarwal2018} does not support equal opportunity as a fairness measure, it is not included here. Figure~\ref{line_plot_adult_dutch_Equal_Opportunity_1} shows the Pareto frontiers produced from averaging results over 10 runs on the Adult Income and Dutch Census data sets. Figure~\ref{batch_adult_EOppo}, Figure~\ref{batch_dutch_EOppo}, Figure~\ref{batch_law_EOppo} and Figure~\ref{batch_zafar_Eoppo} show that our strategy, when evaluated with the fairness measure of equal opportunity, produces Pareto frontiers that are competitive with the counterparts produced by \citet{hardt2016equality}.
\begin{figure}[t] \centering \includegraphics[width=73mm]{Experiments_ndim/Ten_run_Equal_Opportunity_Data=adult_NrUpdatesMyAppr=3000.pdf} \hspace{-9mm} \includegraphics[width=73mm]{Experiments_ndim/Ten_run_Equal_Opportunity_Data=dutch_NrUpdatesMyAppr=3000.pdf} \caption{Pareto frontiers produced by our strategy on Adult Income data (left two columns) and Dutch Census data (right two columns) compared with Pareto frontiers produced by equal opportunity constrained post-processing \citep{hardt2016equality} and unconstrained scikit-learn Logistic Regression. All three strategies being compared are given the same training sets. Our strategy splits the given training set into an initial training set, a pool, and a validation set. Each subplot corresponds to a different combination of initial training set size, pool size and validation set size for our strategy. The fairness performance measure is equal opportunity \citep{hardt2016equality}. }\label{line_plot_adult_dutch_Equal_Opportunity_1} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=adultcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=adultcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=adultcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=adultcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Adult Income data. Fairness measure: equal opportunity.}\label{batch_adult_EOppo} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=dutchcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=dutchcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=dutchcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=dutchcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Dutch Census data.
Fairness measure: equal opportunity.}\label{batch_dutch_EOppo} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=lawcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=lawcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=lawcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=lawcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Law Admission data. Fairness measure: equal opportunity.}\label{batch_law_EOppo} \end{figure}
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=zafarcombo=0.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=zafarcombo=1.pdf} \\ \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=zafarcombo=2.pdf} & \includegraphics[width=65mm]{Experiments_ndim/Ten_run_Equalised_Opportunity_Data=zafarcombo=3.pdf} \\ \end{tabular} \caption{Points on Pareto frontiers. Our strategy in batch version. Synthetic data from \citet{zafar2017www}. Fairness measure: equal opportunity.}\label{batch_zafar_Eoppo} \end{figure}
\subsection{Addendum to Section~\ref{subsec_experiment_Flint}}\label{appendix_flint_experiment}
Figure~\ref{fig:exp_flint_data2} shows the overall error of the classifiers obtained from following the various sampling strategies considered in Section~\ref{subsec_experiment_Flint}.
\begin{figure} \centering \includegraphics[scale=0.54]{FinalFlint/Flint_MultiGroups_ERROR_compressed.pdf} \caption{Experiment on the Flint water data set. Overall classification error as a function of $t$ for a strategy where training points are added in the order of their timestamp (blue), a strategy where training points are added in a random order (brown), and our strategy (green).} \label{fig:exp_flint_data2} \end{figure}
\section{Proof of Theorem~\ref{thm_vc}}\label{appendix_proof_jamie}
We now state a simple observation which will help us prove the main theorem.
\begin{observation}\label{obs:indep} In each round $t$ and for each group $a$, $\sam{a}{t} \sim_{i.i.d} \Pr^{\star}_{| G_a} $: the set of samples from group $G_a$ is drawn i.i.d. from the true distribution over $G_a$. \end{observation}
The remainder of the analysis follows from applying standard uniform concentration bounds to the group-specific empirical loss for the set of thresholds.
\begin{proof}[Proof of Theorem~\ref{thm_vc}] In round $t$, let \[\eloss{a}{t}(h) = \frac{1}{|\sam{a}{t}|}\sum_{(x, y) \in \sam{a}{t}} \ell ((x,y), h) \] denote the empirical loss of a hypothesis $h$ for group $G_a$. Using a Hoeffding concentration inequality, Observation~\ref{obs:indep} implies that for fixed $h$, $a$ and $t$, \[|\loss{a}{}(h) - \eloss{a}{t}(h)| \leq\sqrt{2\frac{\ln{\frac{1}{\delta}}}{\ss{a}{t}}}\] with probability $1-\delta$.
Furthermore, using standard uniform concentration arguments over a class with finite VC dimension, we have that with probability $1-\delta$, for \emph{all} $h\in\mathcal{H}$ \[|\loss{a}{}(h) - \eloss{a}{t}(h)| \leq \sqrt{\frac{2 \mathcal{VC}(\mathcal{H})\ln{\frac{1}{\delta}}}{\ss{a}{t}}}.\] Finally, if we take a union bound over both groups and all rounds $T$, we have that with probability $1- \delta$, for all $a$, $t \leq T$, and $h$ \[|\loss{a}{}(h) - \eloss{a}{t}(h)| \leq \sqrt{\frac{2 \mathcal{VC}(\mathcal{H})\ln{\frac{2T}{\delta}}}{\ss{a}{t}}}.\] We condition on this event for the remainder of the proof.
With these tools in hand, we can now analyze the dynamics of the process which, in round $t$, selects $\thr{t}$ to minimize $\lambda_t \eloss{0}{t} + (1- \lambda_t) \eloss{1}{t}$. The process then evaluates \[\widehat{\textrm{Bias}(\thr{t})} = \eloss{1}{t}(\thr{t}) - \eloss{0}{t}(\thr{t})= \frac{1}{|\sam{1}{t}|}\sum_{(x,y)\in \sam{1}{t}} \loss{}{}(\thr{t}, (x,y)) - \frac{1}{|\sam{0}{t}|}\sum_{(x,y)\in \sam{0}{t}} \loss{}{}(\thr{t}, (x,y)),\] the empirical bias of the current hypothesis, and samples from group $G_0$ when this is negative and $G_1$ when this is positive.
Fix a round $t$. We define the following error parameter \[\epsilon(t) := \sqrt{\frac{2\mathcal{VC}(\mathcal{H})\ln{\frac{2T}{\delta}}}{\ss{0}{t}}} + \sqrt{\frac{2\mathcal{VC}(\mathcal{H})\ln{\frac{2T}{\delta}}}{ \ss{1}{t}}}, \] which captures the amount by which our empirical estimates of the loss of any hypothesis might differ from its true loss in round $t$ with the samples available in that round. Consider the set \[\mathcal{H}_{\epsilon} := \{ h \in \mathcal{H} : \exists \alpha \in [0,1] \textrm{ s.t. } \alpha \loss{0}{}(h) + (1-\alpha) \loss{1}{}(h) \leq \min_{f \in \mathcal{H}} \alpha \loss{0}{}(f) + (1-\alpha) \loss{1}{}(f) + \epsilon\}, \] those hypotheses in $\mathcal{H}$ which are within $\epsilon$ of the Pareto frontier of trading off between loss on the groups. We claim that $\thr{t}\in \mathcal{H}_{\epsilon(t)}$. This follows from the fact that $\thr{t}$ minimizes the empirical $\alpha$-weighted loss between groups $G_0, G_1$ for some $\alpha$, and that the empirical loss of $\thr{t}$ is within $\epsilon(t)$ of its expectation.
We now analyze which group will be sampled from in round $t$. Note that one of the following three cases holds for $\thr{t}$:
\begin{itemize} \item $\loss{0}{}(\thr{t}) < \loss{1}{}(\thr{t}) - \epsilon(t)$ \item $\loss{1}{}(\thr{t}) < \loss{0}{}(\thr{t}) - \epsilon(t)$ \item $|\loss{1}{}(\thr{t}) - \loss{0}{}(\thr{t})| \leq \epsilon(t)$. \end{itemize}
In the first two cases, the procedure will sample from the group with higher true loss, since the empirical losses will be ordered consistently with the true loss ordering. In the third case, either group might be sampled from. The former two settings will yield a round $t+1$ with an additional sample from the group with the higher empirical loss in round $t$. Finally, we note that $\epsilon(t) \leq 2\max_a \sqrt{\frac{2\mathcal{VC}(\mathcal{H})\ln{\frac{2T}{\delta}}}{\ss{a}{t}}}$, implying the claim. \end{proof}
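To get a rough sense of scale for the bound (an illustrative calculation only, not part of the theorem): for threshold classifiers on $\mathbb{R}$ we have $\mathcal{VC}(\mathcal{H})=1$, so with $T=3000$ rounds, $\delta=0.05$ and $\ss{a}{t}=500$ samples from each group, each deviation term is at most $\sqrt{2\ln(120000)/500}\approx 0.22$, and hence $\epsilon(t)\leq 0.44$; with $5000$ samples per group this improves to roughly $0.14$.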
\section{Introduction}\label{Intro}
The nature of dark matter (DM) and its possible interactions with the fields of the Standard Model (SM) is an ever-growing mystery. Historically, Weakly Interacting Massive Particles (WIMPs) \cite{Arcadi:2017kky}, which are thermal relics in the $\sim$ few GeV to $\sim$ 100 TeV mass range with roughly weak strength couplings to the SM, and axions \cite{Kawasaki:2013ae,Graham:2015ouw} were considered to be the leading candidates for DM as they naturally appeared in scenarios of physics beyond the Standard Model (BSM) that were constructed to address other issues. Important searches for these new states are continuing to probe ever deeper into the remaining allowed parameter spaces of these respective frameworks. However, the null results so far have prompted a vast expansion in the set of possible scenarios \cite{Alexander:2016aln,Battaglieri:2017aum} which span a huge range in both DM masses and couplings. In almost all of this model space, new forces and, hence, new force carriers must also exist to mediate the interactions of the DM with the SM which are necessary to achieve the observed relic density \cite{Aghanim:2018eyx}. One way to classify such interactions is via ``portals" \cite{vectorportal} of various mass dimension that result from integrating out some set of heavy fields; at the renormalizable level, the set of such portals is known to be quite restricted \cite{vectorportal,KM}. In this paper, we will be concerned with the implications of the vector boson/kinetic mixing (KM) portal, which is perhaps most relevant for thermal DM with a mass in the range of a $\sim$ few MeV to $\sim$ few GeV and which has received much attention in the recent literature \cite{KM}. In the simplest of such models, a force carrier (the dark photon, a gauge field corresponding to a new gauge group, $U(1)_D$, under which SM fields are neutral) mediates the relevant DM-SM interaction. This very weak interaction is the result of the small KM between this $U(1)_D$ and the SM hypercharge group, $U(1)_Y$, which is generated at the one-(or two)-loop level by a set of BSM fields, called portal matter (PM)\cite{KM,PM}, which carry charges under both gauge groups. In the IR, the phenomenology of such models is well-described by suitably chosen combinations of only a few parameters: the DM and dark photon masses, $m_{DM,V}$, respectively, the $U(1)_D$ gauge coupling, $g_D$, and $\epsilon$, the small dimensionless parameter describing the strength of the KM, $\sim 10^{-(3-4)}$. Frequently, and in what follows below, this scenario is augmented to also include the dark Higgs boson, whose vacuum expectation value (vev) breaks $U(1)_D$, thus generating the dark photon mass. This introduces two additional parameters with phenomenological import: the dark Higgs mass itself and the necessarily (very) small mixing between the dark Higgs and the familiar Higgs of the SM. Successfully extending this scenario to a completion in the UV while avoiding any potential issues that can be encountered in the IR remains an interesting model-building problem. Extra dimensions (ED) have proven themselves to be a very useful tool for building interesting models of new physics that can address outstanding issues that arise in 4-D\cite{ED}.
In a previous pair of papers \cite{Rizzo:2018ntg,Rizzo:2018joy}, hereafter referred to as I and II respectively, we considered the implications of extending this familiar 4-D kinetic mixing picture into a (flat) 5-D scenario where it was assumed that the DM was either a complex scalar, a Dirac fermion or a pseudo-Dirac fermion with an $O(1)$ mass splitting. In all cases we found some unique features of the 5-D setup, {\it e.g.}, the existence of strong destructive interference between the exchanges of Kaluza-Klein (KK) excitations of the dark photon allowing for light Dirac DM, which is excluded by CMB \cite{Ade:2015xua,Liu:2016cnk} constraints in 4-D, new couplings of the split pseudo-Dirac states to the dark photon that avoid the co-annihilation suppression found in 4-D \cite{4dmaj}, or the freedom to choose appropriate 5-D wave function boundary conditions, {\it etc.}, all of which helped us to avoid some of the model-building constraints from which the corresponding 4-D KM scenario can potentially suffer. The general structure of the model setups considered previously in I and II followed from some rather basic assumptions: ($i$) The 5-D space is a finite interval, $0\leq y\leq \pi R$, that is bounded by two branes, upon one of which the SM fields reside while the dark photon lives in the full 5-D bulk. This clearly implies that the $U(1)_D-U(1)_Y$ KM must solely occur on the SM brane. The (generalization of the) usual field redefinitions required to remove this KM in order to obtain canonically normalized fields then naturally leads to the existence of a very small, but {\it negative} brane-localized kinetic term (BLKT) \cite{blkts} for the dark photon which itself then leads to a tachyon and/or ghost field in its Kaluza-Klein (KK) expansion. We are then led to the necessary conclusion that an $O(1)$ {\it positive} BLKT must already exist to remove this problem; such a term was later shown to also be very useful for other model-building purposes. ($ii$) A simple way to avoid any significant mixing between the SM Higgs, $H$, and the dark Higgs, $S$, which is employed in 4-D to generate the dark photon mass, is to eliminate the need for the dark Higgs to exist. This then removes the necessity of fine-tuning the parameter $\lambda_{HS}$ in the scalar potential describing the $\sim S^\dagger SH^\dagger H$ interaction in order to avoid a large branching fraction for the invisible width of the SM Higgs, $H$ \cite{HiggsMixing,inv}.\footnote{We note that this method of avoiding a dark Higgs while maintaining a massive dark photon is hardly unique. For example, a Stuckelberg mass may be introduced, as discussed in a different region of parameter space in the second reference of \cite{vectorportal}. However, the method employed in I and II (and here) affords far greater model-building flexibility than a Stuckelberg mechanism construction. We could, for instance, implement a non-Abelian dark gauge group using the extra-dimensional setup in I and II, which would be impossible if we assumed a Stuckelberg mass for the dark photon.} As is well known, one can employ appropriate (mixed) boundary conditions on the 5-D dark photon wave function on both branes to break the $U(1)_D$ symmetry and generate a mass for the lowest lying dark photon KK mode \cite{csakiEDs} without the presence of the dark Higgs.
These boundary conditions generically have the form (in the absence of BLKTs or other dynamics on either brane) $v(y)|_1=0,~\partial v(y)|_2=0$, where $v$ is the 5-D dark photon wavefunction, with $y$ denoting the co-ordinate in the new extra dimension as above and $|_i$ implying the evaluation of the relevant quantity on the appropriate brane. Since the SM exists and the corresponding KM happens on one of these branes, it is obvious that $v$ cannot vanish there, so we identify the location of the SM with brane 2. It is then also obvious that the DM itself cannot live on brane 1, as otherwise it would no longer interact with either the dark photon or, through it, with the SM, so the DM must {\it also} reside in the bulk, have its own set of KK excitations and, for phenomenological reasons, its own somewhat larger BLKT along with constrained boundary conditions. While allowing us to successfully circumvent some of the possible problems associated with the analogous 4-D setup, this arrangement leads to a rather unwieldy structure. Can we do as well (or better) with a less cumbersome setup? This is the issue we address in the current paper. The complexity of the previously described structure follows directly from ($ii$), {\it i.e.}, employing boundary conditions to break $U(1)_D$ so that there is no dark Higgs-SM Higgs mixing (as there is no dark Higgs with which to mix). In the present analysis, we consider an alternative possibility which also naturally avoids this mixing in an obvious way, {\it i.e.}, localizing the dark Higgs as well as the DM on the other, non-SM ({\it i.e.}, dark) brane with only the dark photon now living in the full 5-D bulk to communicate their existence to us. Thus, keeping ($i$) but with the breaking of $U(1)_D$ on the dark brane via the dark Higgs vev, we eliminate the need for KK excitations of the DM field while also disallowing any tree-level dark Higgs-SM Higgs mixing, and thus significantly diminishing the phenomenological role of the dark Higgs itself as we will see below. In what follows, we will separately consider and contrast both the flat as well as the warped \cite{Randall:1999ee,morrissey} versions of this setup in some detail, assuming that the DM is a complex scalar field, $\phi$, which does not obtain a vev. This choice, corresponding to a dominantly p-wave annihilation via the spin-1 dark photon mediator to SM fermions, allows us to trivially avoid the constraints from the CMB while still recovering the observed relic density\cite{Steigman:2015hda,21club}.\footnote{While it was demonstrated in II that, under certain conditions, the CMB constraints on $s$-channel fermionic DM annihilation might be avoided, these setups are dramatically more complicated than simply assuming that the DM is a complex scalar. To keep the construction here as simple as possible, we restrict our discussion to the scalar case and leave a detailed exploration of analogous fermionic DM models with an s-wave annihilation process to future work.} As we will see, in addition to the IR parameters noted above and suitably defined here in the 5-D context, only 2(3) additional parameters are present for the flat (warped) model version, these being the SM brane BLKT for the dark photon, $\tau$, and the size of the mass term, $m_V$. In the warped version, as is usual, the curvature of the anti-deSitter space scaled by the compactification radius, $kR$, is also, in principle, a free parameter.
Here, however, the value of this quantity is roughly set by the ratio of the weak scale, $\sim 250$ GeV, to that associated with the dark photon mass, $\sim 100$ MeV, {\it i.e.}, $kR\sim 1.5-2$. In what follows, an $O(1)$ range of choices for the values of all of these quantities will be shown to directly lead to phenomenologically interesting results. Unlike in the previous setups, now the boundary conditions applied to the wavefunction $v(y)$ will be significantly relaxed, so that the requirement that at least one of $v(y=0)$, $v(y=\pi R)$ vanish is no longer imposed, and the values of these wavefunctions will be determined by the values of the parameters $m_V (\tau)$ on the dark (SM) brane.
The outline of this paper is as follows: In Section \ref{Setup}, we present the construction of this model while remaining agnostic to the specific geometry of the extra dimension, while in Section \ref{FlatAnalysis} we specialize our discussion to the case in which the extra dimension is flat and present a detailed analysis of this scenario. In Section \ref{WarpedAnalysis}, we present an analogous discussion of the model in the case of a warped extra dimension, with appropriate comparison to the results from the flat case scenario. Section \ref{Conclusion} contains a summary and our conclusions.
\section{General Setup}\label{Setup}
Before beginning the analysis of the current setup, we will very briefly review the formalism from I that remains applicable, generalizing it slightly to incorporate either a flat or warped extra dimension.
\subsection{Field Content and Kaluza-Klein Decomposition}
As noted in the Introduction, we consider the fifth dimension to be an interval $0\leq y \leq \pi R$ bounded by two branes; for definiteness we assume that the entirety of the SM is localized on the $y=0$ brane, while the dark matter (DM) field (which we shall refer to as $\phi$) and the dark Higgs (which we shall refer to as $S$) are localized on the opposite brane at $y= \pi R$. For clarity, we depict the localization of the different fields in the model in Figure \ref{figOverview}. It should be noted that although both $\phi$ and $S$ are complex scalars localized on the dark brane, they must be separate fields. This is in order to avoid potential pitfalls related to DM stability: If $\phi$ were to serve as both the dark matter and dark Higgs, then the physical DM particle could decay via a pair of virtual dark photons into SM particles, requiring draconian constraints on DM coupling parameters in order to maintain an appropriately long lifetime. This phenomenon is discussed in greater detail in the discussion of Model 2 in I, where the additional complexity added by making $\phi$ a bulk field allows for sufficient model-building freedom to circumvent these constraints. However, as our DM field $\phi$ is brane-localized in this construction, the methods outlined in that work are not applicable here, and we are forced to assume that $\phi$ acquires no vev and posit a separate dark Higgs, $S$, to break the dark gauge symmetry.
\begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Setup_Diagram.pdf}} \caption{A simple diagram summarizing the construction of our model. The fields $\phi$ and $S$, representing the complex scalar dark matter (DM) and the dark Higgs, respectively, are localized on Brane 1 (the ``dark brane") at $y=\pi R$.
The Standard Model (SM) and kinetic mixing (KM) terms are localized on Brane 2 (the ``SM brane") at $y=0$. The bulk contains only one field, the dark photon $V$.} \label{figOverview} \end{figure}
The metric of our 5-dimensional model is assumed to take the form
\begin{equation}\label{genericMetric} ds^2 = f(y)^2 ~\eta_{\mu \nu} dx^{\mu} dx^{\nu}-dy^2, \end{equation}
where $f(y)$ is simply some function of the bulk coordinate $y$: For a flat extra dimension, $f(y)=1$, while for a Randall-Sundrum setup \cite{Randall:1999ee}, $f(y)=e^{-ky}$, where $k$ is a curvature scale. The dark photon, described by a gauge field $\hat{V}_A(x,y)$, lies in the full 5-D bulk, and kinetically mixes with the 4-D SM hypercharge gauge field $\hat{B}_\mu (x)$ on the SM brane via a 5-D kinetic mixing (KM) parameter $\epsilon_{5D}$ as described (before symmetry breaking) by the action
\begin{align}\label{originalAction} S=\int d^4x \int_{0}^{\pi R} dy &~\Big[-\frac{1}{4} \Big( \hat V_{\mu \nu} \hat V^{\mu \nu} -2 f(y)^{2}(\partial_\mu \hat{V}_y -\partial_y \hat{V}_\mu)(\partial^\mu \hat{V}^y -\partial^y \hat{V}^\mu) \Big) \\ &~+\Big(-\frac{1}{4} \hat B_{\mu\nu} \hat B^{\mu\nu} +\frac{\epsilon_{5D}}{2c_w} \hat V_{\mu\nu} \hat B^{\mu\nu} + L_{SM} \Big) ~\delta(y) \Big] \,, \nonumber \end{align}
where $c_w=\cos(\theta_w)$, with $\theta_w$ the weak mixing angle, Greek indices denote only the 4-dimensional vector parts of the gauge field $\hat{V}$, and $\hat{V}_y$ denotes the fifth component of this field. Since spontaneous symmetry breaking takes place on the dark brane via the vev of the dark Higgs, $S$, we know \cite{5d,Casagrande:2008hr} that in the Kaluza-Klein (KK) decomposition the 5th component of $\hat V_A$ (which does not experience KM) and the imaginary part of $S$ combine to form the Goldstone bosons eaten by $\hat V$ to become the corresponding longitudinal modes. So, we are free in what follows to work in the $V_y=0$ gauge, at least for the flat and Randall-Sundrum-like geometries that we are considering here. Then the relevant KK decomposition for the 4-D components of $\hat V$ is given by{\footnote{Note that $n=1$ labels the lowest lying excitation appearing in this sum.}}
\begin{equation} \hat V^\mu (x,y) = \frac{1}{\sqrt{R}}\sum_{n=1}^\infty ~ v_n (y) \hat V_n^\mu (x)\,, \end{equation}
where we have factored out $R^{-1/2}$ in order to render $v_n(y)$ dimensionless. To produce a Kaluza-Klein tower, we then require that the functions $v_n(y)$ must satisfy the equation of motion
\begin{align}\label{generalEOM} \partial_y [f(y)^2 ~\partial_y v_n(y)] = - m_n^2 v_n (y) \end{align}
in the bulk, where here the $m_n$ are the physical masses of the various KK excitations. Defining the KK-level dependent quantity $\epsilon_n = R^{-1/2} \epsilon_{5D} v_n (y=0)$, which explicitly depends on the values of the dark photon KK wavefunctions evaluated on the SM brane, we see that the 5-D KM becomes an infinite tower of 4-D KM terms given by
\begin{equation} \sum_n \frac{\epsilon_n}{2 c_w} \hat V_n^{\mu\nu} \hat B_{\mu\nu}\,.
\end{equation} As discussed in I, the intuitive generalization of the usual kinetic mixing transformations, $\hat B^{\mu\nu} = B^{\mu\nu}+\sum_n \frac{\epsilon_n}{c_w} V^{\mu\nu}_n$, $\hat V^{\mu\nu} \rightarrow V^{\mu\nu}$, will be numerically valid in scenarios in which the infinite sum $\sum_{n} \epsilon_n^2/\epsilon_1^2$ is approximately $\mathrel{\mathpalette\atversim<} O(10)$ and $\epsilon_1 \ll 1$; in other words, $\epsilon_1$ is sufficiently small and $\epsilon_n$ shrinks sufficiently quickly with increasing $n$. Otherwise, terms of $O(\epsilon_1^2)$ (at least) become numerically significant and cannot be ignored in the analysis, even if each individual $\epsilon_n$ remains small. In both the cases of a warped and a flat extra dimension, the sum $\sum_{n} \epsilon_n^2/\epsilon_1^2$ is within the acceptable range as long as there is a sufficiently large positive brane-localized kinetic term (BLKT) on the same brane as the SM-dark photon kinetic mixing, as was shown for flat space in I and will be demonstrated for warped space in Section \ref{WarpedAnalysis}. So, by selecting $\epsilon_1 \sim 10^{-(3-4)}$, as suggested by experiment, within our present analysis we can always work to leading order in the $\epsilon_n$'s, and thus the transformations $\hat B^{\mu\nu} = B^{\mu\nu}+\sum_n \frac{\epsilon_n}{c_w} V^{\mu\nu}_n$, $\hat V^{\mu\nu} \rightarrow V^{\mu\nu}$ will be sufficient for our purposes in removing the KM. It is interesting to note that we can see the requirement for a positive BLKT in our setup more immediately from the action of Eq.(\ref{originalAction}). In particular, as noted in I and the Introduction, making the usual substitution in the 5D theory to eliminate kinetic mixing (that is, $\hat{V}\rightarrow V$ and $\hat{B}\rightarrow B+\frac{\epsilon_{5D}}{c_w} \hat{V}$) produces the small negative BLKT $\sim -\frac{\epsilon_{5D}^2}{Rc_w^2}$ mentioned in the Introduction. In this 5-D treatment, the effective BLKT experienced by the $V$ on the SM brane would therefore be equal to whatever BLKT existed before mixing, \emph{shifted} by the mixing-induced negative brane term. This shift is highly suggestive of the necessity of introducing a positive BLKT to the model before mixing, in order to avoid the pitfalls associated with negative BLKTs (for example, in the case of a flat extra dimension, negative BLKTs such as this are well known to lead to tachyonic KK modes or ghost-like states); the BLKT before mixing is applied must be large enough that the effective term after mixing is non-negative. In our explicit treatment of the model's kinetic mixing, because we only apply field shifts at the level of the effective 4-D theory, this negative brane term does not appear, but the requirement for a positive BLKT instead emerges as a condition to keep the kinetic mixing between the SM hypercharge boson and an infinite number of KK dark photons small. As will be seen, such considerations lead to a lower bound on the SM-brane BLKT.
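As a purely illustrative cross-check of the size of the neglected terms, the short Python sketch below performs the analogous shift on a toy, truncated 4-D kinetic matrix (the $\epsilon_n$ values and weak mixing angle used are assumptions chosen only for illustration): it confirms that the $B$--$V_n$ mixing is removed exactly, while $O(\epsilon^2)$ residuals survive in the $V_n$--$V_m$ block, which is precisely why the size of $\sum_n \epsilon_n^2$ controls the validity of the leading-order treatment.
\begin{verbatim}
import numpy as np

# Toy check: with kinetic Lagrangian -1/4 F^T K F in the basis
# (Bhat, Vhat_1, ..., Vhat_N), the shift Bhat = B + sum_n (eps_n/c_w) V_n
# removes the B-V mixing exactly, leaving only O(eps^2) V-V residuals.
c_w = np.sqrt(1.0 - 0.2312)                    # assumed weak mixing angle
eps = 1.0e-3 * np.array([1.0, 0.8, 0.5, 0.3])  # illustrative eps_n values
N = len(eps)

K = np.eye(N + 1)
K[0, 1:] = K[1:, 0] = -eps / c_w               # off-diagonal kinetic mixing

S = np.eye(N + 1)                              # (Bhat, Vhat_n) = S (B, V_n)
S[0, 1:] = eps / c_w

Kp = S.T @ K @ S
print(np.abs(Kp[0, 1:]).max())                 # B-V block: vanishes to machine precision
print(np.abs(Kp[1:, 1:] - np.eye(N)).max())    # V-V block: residuals of O(eps^2)
\end{verbatim}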
Next, we note that the sum of the brane actions corresponding to the usual (positive) BLKT for $V$ on the SM brane, which we shall denote by $\tau$, and the corresponding dark-Higgs-generated mass term for $V$ on the dark brane, denoted by $m_V$, is given by \begin{equation} S_{branes}=\int d^4x \int_{0}^{\pi R} dy ~\Big[-\frac{1}{4} V_{\mu\nu} V^{\mu\nu} \cdot \tau R~\delta(y) +\frac{1}{2} m_V^2 R ~V_\mu V^\mu ~\delta(y-\pi R)\Big]\,, \end{equation} where factors of $R$ have been introduced to make $\tau$ dimensionless as usual and for $m_V$ to have the usual 4-D mass dimension. We note that one of the main advantages of our present setup is that the dark Higgs which generates the brane mass term $m_V$ is isolated from any mixing with the SM Higgs, and as a result, its phenomenological relevance in this construction is quite limited. Given that it is unstable (from decays via on- or off-shell dark photons, depending on the dark Higgs and dark photon masses), its most salient effect on any observables in the theory would be if a process such as $\phi \phi^\dagger \rightarrow V^{(n)*} \rightarrow V^{(m)} S$ were to dominate the calculation of the DM relic density. While this sort of construction may be of some interest (for example, \cite{Baek:2020owl} discusses a 4D model in a similar parameter space that may address the recent XENON1T electron recoil excess \cite{Aprile:2020tmw} in which a light dark Higgs plays such a role\footnote{It should be noted that without substantial modifications to our own setup, such as the addition of mass splitting between the two degrees of freedom of the complex scalar DM field \cite{Baek:2020owl} or additional slightly heavier DM scalars that facilitate the production of boosted $\phi$ pairs \cite{Jia:2020omh}, the XENON1T excess cannot be explained in the parameter space we are considering. A detailed discussion of how these or other mechanisms might be incorporated into a 5-D model like that discussed here is beyond the scope of this work.}), this effect can be easily suppressed by assuming that the dark Higgs (or rather, its real component after spontaneous symmetry breaking) has a sufficiently large mass (slightly greater than twice the DM mass, assuming cold dark matter), rendering this process kinematically forbidden. As such, for our analysis we can ignore this scalar and instead simply assume the existence of the brane-localized mass term $m_V$ without further complications. We will define the 4-D gauge coupling of the dark photon to be that between the DM and the lowest $V$ KK mode as evaluated on the dark brane. The action $S_{branes}$ supplies the boundary conditions, as well as the orthonormality condition, necessary for the complete solutions of the $v_n$. These are \begin{equation}\label{generalBCs} \Big (f(0)^2~\partial_y + m_n^2 \tau R\Big)v_n(0)=0, ~~~~~\Big( f(\pi R)^2 ~\partial_y+m_V^2 R\Big)v_n(\pi R)=0\, \end{equation} for the boundary conditions, and \begin{equation}\label{orthoRelation} \frac{1}{R}\int_{0}^{\pi R} dy \; v_n(y) v_m(y) (1+ \tau R \delta(y)) = \delta_{n m} \end{equation} for the orthonormality condition. At this point, once the function $f(y)$ is specified, as we shall do in Sections \ref{FlatAnalysis} and \ref{WarpedAnalysis} for a flat and a warped extra dimension respectively, it is possible to uniquely determine the bulk wavefunctions $v_n(y)$ for all $n$ given the parameters $R$, $\tau$, $m_V^2$, and whatever additional parameters are necessary to uniquely specify $f(y)$.
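Although we construct the analytic solutions for specific metrics in the sections that follow, we note that Eqs.(\ref{generalEOM}) and (\ref{generalBCs}) can also be treated numerically for an arbitrary $f(y)$ via a standard shooting method. A minimal Python sketch of such a solver is given below; it is not used in our actual analysis, and the parameter values shown are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def bc_residual(x, f, tau, a):
    """Shoot Eq.(generalEOM) across 0 <= u <= pi, with u = y/R, x = m_n R,
    a = m_V R, and state (v, w), where w = f(u)^2 v'(u) so that w' = -x^2 v."""
    rhs = lambda u, s: [s[1] / f(u)**2, -x**2 * s[0]]
    v0, w0 = 1.0, -x**2 * tau            # SM-brane condition of Eq.(generalBCs)
    sol = solve_ivp(rhs, (0.0, np.pi), [v0, w0], rtol=1e-10, atol=1e-12)
    v_pi, w_pi = sol.y[:, -1]
    return w_pi + a**2 * v_pi            # dark-brane condition of Eq.(generalBCs)

def kk_spectrum(f, tau, a, n_roots=5, x_max=8.0):
    grid = np.linspace(1e-3, x_max, 1600)
    vals = np.array([bc_residual(x, f, tau, a) for x in grid])
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][:n_roots]
    return [brentq(bc_residual, grid[i], grid[i + 1], args=(f, tau, a))
            for i in idx]

# flat metric f(u) = 1; for a warped metric pass f = lambda u: np.exp(-kR * u)
print(kk_spectrum(lambda u: 1.0, tau=1.0, a=1.0))   # roots x_n = m_n R
\end{verbatim}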
Beyond discussing characteristics of individual KK modes, we shall find it convenient at times in our analysis to speak in terms of summations over exchanges of the entire dark photon KK tower. In particular, the sum \begin{equation}\label{FDefinition} F(y,y',s) \equiv \sum_n \frac{v_n(y) v_n(y')}{s-m_n^2} \end{equation} shall appear repeatedly in our subsequent discussion, where for present purposes $s$ is simply a positive number, though in our actual analysis it will denote the Mandelstam variable of the same name. To evaluate this sum, we can perform an analysis similar to that of \cite{Casagrande:2008hr,Hirn:2007bb}. First, we note that the orthonormality condition of the KK modes in Eq.(\ref{orthoRelation}) implies the completeness relation \begin{align}\label{generalOrthonormality} \frac{1}{R}\int_{0}^{\pi R} d y \; v_m (y)v_n(y)(1+\tau R \delta(y)) = \delta_{m n}\\ \rightarrow \sum_{n} v_n(y) v_n(y')(1+\tau R \delta(y))=R \delta(y-y'), \nonumber \end{align} where the sum in the second line of this equation is over all KK modes $n$. We then note that the equation of motion Eq.(\ref{generalEOM}) and the $y=0$ boundary condition of Eq.(\ref{generalBCs}) can be recast in an integral form as \begin{align}\label{generalIntegralEOM} v_n(y) =v_n(0)-m_n^2 \int_0^{y} dy_1 \; [f(y_1)]^{-2}\int_0^{y_1} dy_2 \; v_n (y_2)(1+\tau R \delta(y_2))\,. \end{align} Using this integral form of the equation of motion for $v_n(y)$, we can now compute the sum $F(y,y',s)$. Combining Eqs.(\ref{generalOrthonormality}) and (\ref{generalIntegralEOM}), we can write the integral equation \begin{align}\label{FIntegralEq} F(y,y',s) &= \int_0^{y} dy_1 \; f(y_1)^{-2}\int_0^{y_1} dy_2 \; [R\delta(y_2-y')-s F(y_2,y',s)(1+\tau R \delta(y_2))]+F(0,y',s). \end{align} Eq.(\ref{FIntegralEq}) can be straightforwardly rewritten as a differential equation, \begin{align}\label{FDiffEq} \partial_y [f(y)^2 ~\partial_y F(y,y',s)] = R\delta(y-y')- s F(y,y',s), \nonumber\\ \partial_y F(y,y',s)|_{y=0} = -s \tau R f(0)^{-2}F(0,y',s),\\ \partial_y F(y,y',s)|_{y=\pi R} = - m_V^2 R f(\pi R)^{-2} F(\pi R, y',s), \nonumber \end{align} where the first boundary condition is explicitly encoded in the integral equation Eq.(\ref{FIntegralEq}), while the second is easily derivable from the $y=\pi R$ boundary condition on $v_n(y)$ given in Eq.(\ref{generalBCs}). Once a function $f(y)$ (and therefore a metric) has been specified, the function $F(y,y',s)$ is then uniquely specified by Eq.(\ref{FDiffEq}). \subsection{Dark Photon Couplings} With the equations of motion for the KK modes' bulk profiles $v_n(y)$ and the summation $F(y,y',s)$ specified, it is now useful to discuss some general aspects of our construction's phenomenology before explicitly choosing a metric. First, we note that the effective couplings of the $n^{th}$ KK mode of the dark photon to the DM on the dark brane are given by $g^{DM}_n = g_{5D} v_n(y=\pi R)/\sqrt{R}$, where $g_{5D}$ is the 5-dimensional coupling constant appearing in the theory, while recalling that the effective KM parameters $\epsilon_n$ are similarly given by $\epsilon_n = \epsilon_{5D} v_n(y=0)/\sqrt{R}$. In terms of the value of these parameters for the least massive KK mode, $g_D \equiv g^{DM}_1$ and $\epsilon_1$, we can then write \begin{align}\label{gepsilonDefs} g^{DM}_n &= g_D \frac{v_n (y=\pi R)}{v_1 (y= \pi R)}, \\ \epsilon_n &= \epsilon_1 \frac{v_n(y=0)}{v_1(y=0)}.
\nonumber \end{align} Armed with these relationships, our subsequent analysis will treat $\epsilon_1$, $g_D$, and the mass of the least massive KK dark photon excitation, $m_{1}$ (which we trade for $R$), as free parameters and identify them with the corresponding quantities that appear in the conventional 4-D KM portal model. With the dark photon coupling to DM given in Eq.(\ref{gepsilonDefs}), we can now remind the reader of the slightly more complex form of the dark photon coupling to SM fermions, previously derived in I. In particular, once the shift $B \rightarrow B + \sum_n \frac{\epsilon_n}{c_w} V_n$ is applied, the $Z$ boson undergoes a small degree of mixing with the $V_n$ fields. Once the mass matrix of the $Z$ boson and the $V_n$ modes is rediagonalized, then to leading order in the $\epsilon$'s the $V_n$ fields couple to the SM fermions as \begin{align}\label{gVff} \frac{g}{c_w}t_w \epsilon_n \bigg[ T_{3L} \frac{m_n^2}{m_Z^2-m_n^2}+Q \frac{c_w^2 m_Z^2 - m_n^2}{m_Z^2-m_n^2} \bigg] \xrightarrow{m_n \ll m_Z} e Q \epsilon_n, \end{align} where $Q$ is the fermion's electric charge, $T_{3L}$ is the third component of its weak isospin, $m_Z$ is the mass of the $Z$ boson, $m_n$ is the mass of the dark photon KK mode $V_n$, $e$ is the electromagnetic coupling constant, and $c_w$ and $t_w$ represent the cosine and tangent of the Weinberg angle, respectively. As indicated in Eq.(\ref{gVff}) and pointed out in I, the coupling simplifies dramatically when $m_n \ll m_Z$; we shall find this approximation exceedingly useful in our subsequent analysis. We also remind the reader that, as discussed in I, the kinetic and mass mixing of the dark photon fields with the $Z$ boson results in non-trivial modifications to the $Z$ boson and SM Higgs phenomenology. In particular, the $Z$ boson gains an $O(\epsilon)$ coupling to the DM, as well as an $O(\epsilon^2)$ correction to its mass. It was pointed out in I that the $\epsilon$ suppression of these effects keeps them far below present experimental bounds (for example, from precision electroweak observables for the mass correction and measurements of the invisible $Z$ decay width for the $Z$ coupling to DM), and given the fact that the $Z$ boson is roughly $10^2$ to $10^3$ times more massive than the lighter dark photon KK modes, this coupling also does not contribute significantly to the DM relic abundance calculation or direct detection scattering processes. As a result, we shall ignore these couplings in our subsequent analysis. Meanwhile, the SM Higgs field $H$ gains two phenomenologically interesting new couplings from this mixing, which may contribute to experimentally constrained Higgs decays to either a pair of dark photon modes or to a single dark photon mode with a $Z$. First, new $HZV_n$ couplings emerge of the form \begin{align}\label{gHZV} K_{HZV_n} = \frac{2 m_Z^2}{v_H} \bigg[ \frac{t_w \epsilon_n m_n^2}{m_Z^2-m_n^2} \bigg], \end{align} where $v_H$ denotes the SM Higgs vev $\sim$ 246 GeV. Meanwhile, $H V_n V_m$ couplings emerge of the form \begin{align}\label{gHVV} K_{H V_n V_m} = \frac{2 m_Z^2}{v_H} \bigg[ \frac{t_w \epsilon_n m_n^2}{m_Z^2-m_n^2} \bigg] \bigg[ n \rightarrow m \bigg], \end{align} where the second bracket denotes the first with $n \rightarrow m$. Notably, the couplings of the Higgs in Eqs.(\ref{gHZV}) and (\ref{gHVV}) to a given dark photon field $V_n$ are both proportional to the ratio $m_n^2/(m_Z^2-m_n^2)$, which results in an approximate $m_n^2/m_Z^2$ suppression when $m_n \ll m_Z$.
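Before examining the numerical impact of these couplings, we pause for a quick illustration of the limit indicated in Eq.(\ref{gVff}). The short Python sketch below (with assumed reference values of $m_Z$ and $\sin^2\theta_w$, and a purely illustrative $\epsilon_n$) compares the full coupling of $V_n$ to the electron against the approximate form $eQ\epsilon_n$:
\begin{verbatim}
import numpy as np

mZ, sw2 = 91.1876, 0.2312           # assumed reference values
cw2 = 1.0 - sw2
cw, tw = np.sqrt(cw2), np.sqrt(sw2 / cw2)
e = np.sqrt(4.0 * np.pi / 137.036)  # electromagnetic coupling
g = e / np.sqrt(sw2)                # SU(2) coupling
eps_n = 1.0e-3                      # illustrative kinetic mixing parameter

def g_Vff(mn, Q, T3L):
    """Full coupling of V_n to a SM fermion, Eq.(gVff)."""
    return (g / cw) * tw * eps_n * (T3L * mn**2 / (mZ**2 - mn**2)
                                    + Q * (cw2 * mZ**2 - mn**2) / (mZ**2 - mn**2))

for mn in (0.1, 1.0, 10.0):         # dark photon KK masses in GeV
    print(mn, g_Vff(mn, Q=-1.0, T3L=-0.5) / (e * (-1.0) * eps_n))
# the ratio approaches 1 as m_n/m_Z -> 0, confirming the e*Q*eps_n limit
\end{verbatim}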
Given that we are considering the parameter space in which the lightest dark photon, $V_1$, has a mass of $\sim 100$ MeV, this $m_n^2/(m_Z^2-m_n^2)$ factor provides extremely strong suppression of the SM Higgs couplings to the lighter KK modes of $V$. Given that the $\epsilon_n$'s shrink as $n$ gets very large (as necessitated by the convergence of the sum $\sum_n \epsilon_n^2/\epsilon_1^2$), and hence couplings of the Higgs to more massive dark photon KK modes are highly suppressed by smaller $\epsilon_n$'s, the numerical effect of these couplings on the observable physics in the model is minute, even compared to other $\epsilon$-suppressed quantities. As a result, and as discussed in detail for the analogous system in I, the contributions of the couplings in Eqs.(\ref{gHZV}) and (\ref{gHVV}) are many orders of magnitude below the present constraints from Higgs branching fractions. For example, if we assume that $\epsilon_1 = 10^{-3}$ and $m_1=0.1-1$ GeV, our results for the Higgs decay width from the processes $H \rightarrow Z V_n$ never exceed $O(1)$ eV for the model parameter space we discuss in either our flat or warped space constructions. Meanwhile, the sum of the $H \rightarrow V_n V_m$ widths, being doubly suppressed, never achieves a value of more than $O(10^{-4})$ eV. Both of these processes represent negligible contributions to the Higgs decay width, and therefore negligible branching fractions. \subsection{Dark Matter Phenomenology} To round out our general discussion of this model's setup and phenomenology, it is useful to give symbolic results for two quantities that are of particular phenomenological interest for dark matter within the mass range we are considering, and which can be expressed in a manner agnostic to the specific functional form of the bulk wavefunctions $v_n(y)$ and the sum $F(y,y',s)$. In particular, we shall give symbolic expressions for the DM-electron scattering cross section and the thermally averaged annihilation cross section of DM into SM particles.\footnote{The simple thermal freeze-out treatment here is valid for DM particles of mass $\mathrel{\mathpalette\atversim>} O(\textrm{MeV})$, as long as the force mediator controlling annihilation, in our case the dark photon, is of similarly small mass \cite{Boehm:2003hm}. In most of the parameter space this cross section is, up to factors of $\sim$5-40\% as we will see below, identical to that obtained in well-studied 4-D models.} We note that for the class of models we consider, these represent the dominant sources of constraints. For example, in \cite{Cho:2020mnc}, constraints from direct detection of DM particles boosted by cosmic rays are found to be much weaker than conventional DM-electron scattering constraints in this region of parameter space for a generic 4-D model of kinetic mixing/vector portal DM. It should be noted that when performing the computations of the direct detection scattering and annihilation cross sections, we have made two significant simplifying assumptions: First, we have approximated the coupling of any KK dark photon mode $V_n$ to a given SM fermion species as $\approx e Q \epsilon_n$, which, according to Eq.(\ref{gVff}), is only a valid approximation when $m_n \ll m_Z$, and therefore breaks down if we sum over the entire infinite tower of KK modes once we reach sufficiently large $n$. Second, we have assumed that the contribution of $Z$ boson exchange to both of these processes is negligible compared to that of the exchange of KK tower bosons.
In practice, both of these approximations amount to letting $m_Z \rightarrow \infty$, namely assuming that the $Z$ boson is much heavier than \emph{all} KK modes of the dark photon. Numerically, we find that the $m_Z \rightarrow \infty$ approximation has a negligibly small effect on our results: Because the lightest dark photon modes are approximately $10^2$ to $10^3$ times lighter than the $Z$ boson, and these light modes also have the numerically largest kinetic mixing, applying the precise dark photon coupling of Eq.(\ref{gVff}) and including the effects of $Z$ boson exchanges in these computations serves only to significantly complicate our symbolic expressions while altering the numerical result at well below the percent level. As such, for the purposes of these computations we confidently work in the limit where $m_Z \rightarrow \infty$. First, we note that in the limit where the DM particle $\phi$'s mass $m_{DM}$ is far greater than the mass of an electron, we can approximate the DM-electron scattering cross section for direct detection as \begin{align}\label{sigmae} \sigma_{\phi e} = 4 \alpha_{\textrm{em}} m_e^2 (g_D \epsilon_1)^2 \bigg\lvert \frac{F(0,\pi R,0)}{v_1(\pi R) v_1 (0)} \bigg\rvert^2. \end{align} To ensure that our DM candidate produces the correct relic abundance, we also must compute the thermally averaged annihilation cross section for DM into SM particles (which we shall denote by the symbol $\sigma$), weighted by the M$\o$ller velocity of the DM particle pair system $v_{M\o l}$ in the cosmic comoving frame \cite{Gondolo:1990dk}. We are careful to note that $\sigma$ here refers to the Lorentz-\textit{invariant} cross section. To find this average, we must integrate $\sigma v_{\textrm{M\o l}}$ weighted by the two Bose-Einstein energy distributions, $f(E)$, of the complex DM fields in the initial state. As noted in \cite{Gondolo:1990dk}, if the freeze-out temperature $T_F$ satisfies $x_F=m_{DM}/T_F\mathrel{\mathpalette\atversim>} 3-4$, as it will below, we can approximate these Bose-Einstein distributions with Maxwell-Boltzmann ones, and can employ the following formula to express the thermal average as a one-dimensional integral, \begin{equation}\label{singleIntegralAvg} \langle\sigma v_{\textrm{M\o l}}\rangle= \frac{2 x_F}{K_2^2(x_F)}\int_0^{\infty} d \varepsilon \; \varepsilon^{1/2}(1+2 \varepsilon)K_1(2 x_F \sqrt{1+\varepsilon})\sigma v_{lab}, \end{equation} where $K_n(z)$ denotes the modified Bessel function of the second kind of order $n$, $v_{lab}$ is the relative velocity of the two DM particles in a frame in which one of them is at rest, and $\varepsilon \equiv (s-4m_{DM}^2)/(4m_{DM}^2)$, {\it i.e.}, the kinetic energy per unit mass in the aforementioned reference frame. This integral can be performed numerically; in our numerical evaluations here we will assume $x_F=20$, but note that other values in the 20-30 range give very similar results. We now proceed by computing the cross section for the annihilation of a DM particle-antiparticle pair into a pair of SM fermions of mass $m_f$ and electric charge $Q_f$, in which case we invoke the following expression for the cross section of a $2\rightarrow 2$ process, \begin{align} \sigma v_{lab} = \frac{\sqrt{s(s-4 m_f^2)}}{s(s-2 m_{DM}^2)}\int \frac{d \Omega ~|\mathcal{M}|^2}{(64 \pi^2)}, \end{align} where $s$ is the standard Mandelstam variable, $m_{DM}$ is the mass of the DM particle, $\Omega$ is the center-of-mass scattering angle, and $\mathcal{M}$ is the matrix element for the annihilation process we are considering.
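For readers wishing to reproduce this procedure, the Python sketch below implements the one-dimensional thermal average of Eq.(\ref{singleIntegralAvg}) with SciPy. The input $\sigma v_{lab}$ shown is a smooth toy function with an arbitrary normalization, included only to make the sketch runnable; a sharply peaked resonant cross section would require splitting the integration region around the peak.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

def thermal_average(sigma_vlab, m_dm, x_f=20.0):
    """Eq.(singleIntegralAvg): <sigma v_Mol>, with sigma_vlab a function of s."""
    def integrand(eps):
        s = 4.0 * m_dm**2 * (1.0 + eps)
        return (np.sqrt(eps) * (1.0 + 2.0 * eps)
                * kn(1, 2.0 * x_f * np.sqrt(1.0 + eps)) * sigma_vlab(s))
    val, _ = quad(integrand, 0.0, np.inf)
    return 2.0 * x_f * val / kn(2, x_f)**2

m_dm = 0.045                                  # GeV, illustrative only
toy = lambda s: 1.0e-9 * (s - 4.0 * m_dm**2)  # smooth toy sigma*v_lab
print(thermal_average(toy, m_dm))
\end{verbatim}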
When $s$ is far from any KK mode resonances, we arrive at the result \begin{align}\label{sigmavrelNonRes} \sigma v_{lab} = \frac{1}{3}\frac{g_D^2 \epsilon_1^2 \alpha_{\textrm{em}}Q_f^2}{v_1(\pi R)^2 v_1 (0)^2 }\frac{(s+2m_f^2)(s-4 m_{DM}^2)\sqrt{s(s-4 m_f^2)}}{s(s-2 m_{DM}^2)} \lvert F(0,\pi R,s) \rvert^2. \end{align} In practice, for both of the specific cases we shall discuss in our analysis, we shall find it necessary to consider regions of parameter space such that DM annihilation through the first KK mode enjoys some resonant enhancement \cite{Feng:2017drg}. In order to accommodate this scenario, we have to modify Eq.(\ref{sigmavrelNonRes}) slightly, arriving at \begin{align}\label{sigmavrel} \sigma v_{lab} &= \frac{1}{3}\frac{g_D^2 \epsilon_1^2 \alpha_{\textrm{em}}Q_f^2}{v_1(\pi R)^2 v_1 (0)^2 }\frac{(s+2m_f^2)(s-4 m_{DM}^2)\sqrt{s(s-4 m_f^2)}}{s(s-2 m_{DM}^2)}\\ &\times\Bigg\lvert F(0,\pi R, s) -v_1(\pi R) v_1 (0)\Big( \frac{1}{s-m_1^2}-\frac{1}{s-m_1^2+i m_1 \Gamma_1}\Big)\Bigg\rvert^2, \nonumber \end{align} where $\Gamma_i$ is the total width of $V_i$, which we must calculate as a function of $m_{i}$. We note that $V_1$ in particular will be very narrow, as $\Gamma_1/m_1\simeq \alpha \epsilon_1^2/3 \sim 10^{-10}$ when decays to DM pairs are not kinematically allowed. Physically, we have simply subtracted the contribution of the lowest-lying KK mode from the sum $F(0,\pi R, s)$, where its propagator appears with its pole mass, and added this contribution back with a Breit-Wigner propagator instead. Since the annihilation of two complex scalars into a pair of fermions through a vector gauge boson is a $p$-wave process, and so is $v_{rel}^2$ suppressed at later times ({\it i.e.}, at lower temperatures when the DM is moving slowly), we are safe from the previously mentioned strong constraints on DM annihilation during the CMB era at $z \sim 10^3$ \cite{Steigman:2015hda}. We further note that if $m_{DM}>m_{1}$, then we would expect the $s$-wave process $\phi \phi^\dagger \rightarrow 2V_1$ to be dominant for unsuppressed values of $g_D$. In order to avoid this possibility, we must then require that $m_{DM}<m_{1}$, and this will be reflected in our considerations below. We note that if $m_{1}>2m_{DM}$ then the $O(g_D^2)$ decay $V_1\rightarrow \phi \phi^\dagger$ will dominate; otherwise, $V_1$ will decay to SM fermions with, as noted above, a suppressed $O(\alpha \epsilon_1^2)$ partial decay width. \section{Flat Space Model Setup}\label{FlatAnalysis} In order to further explore the phenomenology of our construction, we must now specify the geometry of the extra dimension, namely by selecting a specific function $f(y)$ in Eq.(\ref{genericMetric}). With this determined, we can then straightforwardly find the spectrum of Kaluza-Klein (KK) gauge bosons $V_n$, their bulk wavefunctions $v_n(y)$, and concrete expressions for the cross sections of Eqs.(\ref{sigmae}) and (\ref{sigmavrel}). Initially, we shall consider the case of a flat extra dimension, {\it i.e.}, $f(y)=1$. The equation of motion for the bulk profile $v_n(y)$ is then straightforward; from the generic case given in Eqs.(\ref{generalEOM}) and (\ref{generalBCs}), we quickly arrive at \begin{align}\label{FlatEOM} \partial_y^2 v_n(y) = -m_n^2 v_n (y), \\ (\partial_y + m_n^2 \tau R)v_n(y)|_{y=0} = 0, \nonumber \\ (\partial_y + m_V^2 R)v_n(y)|_{y=\pi R} = 0.
\nonumber \end{align} which, when combined with the orthonormality condition Eq.(\ref{orthoRelation}), quickly yields the expressions \begin{align}\label{Flatvn} v_n(y) &= A_n (\cos(x^F_n (y/R))-\tau x^F_n \sin(x^F_n (y/R))), \\ A_n &\equiv \sqrt{\frac{2}{\pi}}\Big( 1+(x^F_n \tau)^2 + (1-(x^F_n \tau)^2)\frac{\sin(2 \pi x^F_n)}{2 \pi x^F_n }+\frac{2 \tau}{\pi}\cos^2(\pi x^F_n)\Big)^{-\frac{1}{2}}, \nonumber \\ x^F_n &\equiv m_n R, \;\;\;\; a_F \equiv m_V R, \nonumber \end{align} where we have defined the dimensionless quantities $x^F_n$ and $a_F$ from combinations of dimensionful parameters for the sake of later convenience{\footnote {The label ``F'' is used here to distinguish these flat space results from those of the warped case which we will discuss further below.}}. The allowed values of $x^F_n$ (and hence the mass spectrum of the KK tower) are given by the solutions to the equation \begin{align}\label{FlatEigenvalues} \tan(\pi x^F_n) = \frac{(a_F^2-(x^F_n)^2\tau)}{x^F_n (1+a_F^2 \tau)}. \end{align} Given the results of Eqs.(\ref{Flatvn}) and (\ref{FlatEigenvalues}), we can now examine the behavior of a number of phenomenologically relevant quantities. To begin, it is useful to get a feel for the numerics of $x^F_1$, the lowest-lying root of the mass eigenvalue equation Eq.(\ref{FlatEigenvalues}). Since we are free to choose $m_1$ within the $\sim 0.1-1$ GeV mass range of interest, the lowest root $x^F_1 = m_1 R$ tells us the value of the compactification radius $R$ within this setup; hence the value of $x^F_1(a_F,\tau)$ is important to consider. In I, where boundary conditions were used to break $U(1)_D$, the parameter $a_F$ is, of course, absent. However, it was found that $x^F_1$ in that case was a decreasing function of $\tau$, as is typical for the effect of BLKTs, with $x^F_1(\tau=0)=1/2$. Here, on the other hand, it is the value of $a_F\neq 0$ that generates a mass for the lowest lying dark photon KK state, so that we expect $x^F_1\rightarrow 0$ as $a_F\rightarrow 0$ and $x^F_1$ to grow with increasing $a_F$. The top and bottom panels of Fig.~\ref{fig1} show that, indeed, the values of $x^F_1$ follow this anticipated behavior: For a fixed value of $a_F$, $x^F_1$ decreases as $\tau$ increases, and for a fixed value of $\tau$, $x^F_1$ increases with the value of $a_F$. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_x1_Tau.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_x1_a.pdf}} \vspace*{-1.0cm} \caption{(Top) Value of the root $x^F_1$ as a function of $\tau$ for, from bottom to top, $a_F=$1, 3/2, 2, 5/2 and 3, respectively. (Bottom) Value of the root $x^F_1$ as a function of $a_F$ for, from top to bottom, $\tau=$1/2, 1, 3/2, 2, 5/2 and 3, respectively. } \label{fig1} \end{figure} Beyond the position of the lowest-lying root of Eq.(\ref{FlatEigenvalues}), the particular spectrum of the more massive KK modes is obviously of significant interest. A clear phenomenological signal for the types of models we are considering is the experimental observation of the dark photon KK excitations, perhaps most importantly that of the second dark photon KK excitation. Hence, knowing where the `next' state beyond the lowest lying member of the KK tower may lie is of great importance, {\it i.e.}, where do we look for the dark photon KK excitations if the lowest KK state is discovered?
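Since Eq.(\ref{FlatEigenvalues}) is transcendental, the roots $x^F_n$ must be found numerically. A minimal Python sketch (with illustrative parameter values) that recasts the condition in a pole-free form and brackets its sign changes is given below; it reproduces the behavior discussed here and in the figures that follow.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def flat_kk_roots(tau, aF, n_roots=10):
    """Roots x_n of Eq.(FlatEigenvalues), recast in the pole-free form
    x (1 + aF^2 tau) sin(pi x) - (aF^2 - x^2 tau) cos(pi x) = 0."""
    f = lambda x: (x * (1.0 + aF**2 * tau) * np.sin(np.pi * x)
                   - (aF**2 - x**2 * tau) * np.cos(np.pi * x))
    grid = np.arange(1e-6, n_roots + 2.0, 1e-3)
    vals = f(grid)
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][:n_roots]
    return np.array([brentq(f, grid[i], grid[i + 1]) for i in idx])

x = flat_kk_roots(tau=1.0, aF=1.0)
print(x[0])             # x_1 = m_1 R, cf. Fig. 1
print(x[1] / x[0])      # m_2/m_1, cf. Fig. 2
print(np.diff(x)[-3:])  # consecutive root spacings approach 1 at large n
\end{verbatim}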
In Fig.~\ref{fig2} we display the ratio $m_{2}/m_{1}=x^F_2/x^F_1$ as a function of $a_F$ and $\tau$, and we see that for a reasonable variation of these parameters this mass ratio lies in the range $3-4$. Note that for fixed $a_F$ this ratio increases with increasing $\tau$ (mostly since $x^F_1$ is pushed lower). Meanwhile, for any fixed value of $\tau$, this ratio sharply declines with increasing $a_F$ in the region $a_F \lesssim 1$ (largely because $x^F_1$ itself increases sharply in this regime), while for $a_F \mathrel{\mathpalette\atversim>} 1$ the ratio slowly increases with increasing $a_F$. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_xRatio_Tau.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_xRatio_a.pdf}} \vspace*{-1.0cm} \caption{(Top) The mass ratio of the lowest two dark photon KK states, $m_{2}/m_{1}=x^F_2/x^F_1$, as a function of $\tau$ for $a_F=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom) As in the previous panel, but now as a function of $a_F$ assuming $\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively.} \label{fig2} \end{figure} Non-zero values of $a_F,\tau$ particularly influence the low mass end of the dark photon KK mass spectrum as, {\it e.g.}, $a_F\neq 0$ provides the mass for the lightest KK mode in the present case. However, beyond the first few KK levels the masses of the dark photon KK states, in particular the ratio $m_{n}/m_{1}$, grow roughly linearly with increasing $n$, with a slope that is dependent on the values of the parameters $a_F,\tau$, as is shown in Fig.~\ref{fig3}. It is actually straightforward to see the eventual linear trend of the lines in Fig.~\ref{fig3} analytically, using the root equation Eq.(\ref{FlatEigenvalues}). In particular, note that as $x^F_n \rightarrow \infty$, Eq.(\ref{FlatEigenvalues}) approaches \begin{align} \textrm{tanc}(\pi x^F_n) = -\frac{\tau}{\pi (1+a^2_F \tau)}, \end{align} where $\textrm{tanc}(z) \equiv \textrm{tan}(z)/z$. It is well known that the difference between consecutive solutions of $\textrm{tanc}(z) = C$, for some constant $C$, approaches $\pi$ for very large $z$. So, we can see that for high-mass KK modes, the difference between consecutive solutions of Eq.(\ref{FlatEigenvalues}) will approach 1. Hence, the slope of the lines in Fig.~\ref{fig3} can be easily approximated as $\sim (x^F_1)^{-1}$, and will therefore exhibit the inverse of the dependence of $x^F_1$ on the parameters $\tau$ and $a_F$ which we have already observed in Fig.~\ref{fig1}. In addition, we can note that, without taking the ratio of $x^F_n$ to $x^F_1$, \emph{any} large-$n$ solution of Eq.(\ref{FlatEigenvalues}) eventually follows the pattern $x^F_n \approx n$. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_xnRatio.pdf}} \vspace*{-1.30cm} \caption{Approximate linear growth of the relative dark photon KK mass ratio $m_n/m_1$ as a function of $n$ for various choices of $(\tau ,a_F)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively.} \label{fig3} \end{figure} The next quantities of phenomenological relevance are the relative values of the KM parameters, $\epsilon_n/\epsilon_1$, and the couplings of the dark photon KK tower states to DM, $g_{DM}^n/g_D$; note that these latter quantities are found to oscillate in sign.
Before exploring the numerics in detail here, it is useful to note that one can get a feel for the behavior of these ratios by purely analytical methods. In particular, by invoking Eqs.(\ref{gepsilonDefs}), (\ref{Flatvn}), and (\ref{FlatEigenvalues}), it is possible to derive the expressions \begin{align}\label{Flatgepsilon} \bigg( \frac{\epsilon_n}{\epsilon_1} \bigg)^2 &= \bigg(\frac{a_F^4+(x^F_n)^2}{a_F^4+(x^F_1)^2}\bigg)\lambda_F, \\ \bigg( \frac{g^n_{DM}}{g_D} \bigg)^2 &= \bigg( \frac{(1+(x^F_n)^2 \tau^2)(x^F_n)^2}{(1+(x^F_1)^2 \tau^2)(x^F_1)^2} \bigg) \lambda_F, \nonumber \\ \lambda_F &\equiv \frac{\pi (1+(x^F_1)^2 \tau^2)(a_F^4+(x^F_1)^2)+(1+a_F^2 \tau)(a_F^2 + (x^F_1)^2 \tau)}{\pi (1+(x^F_n)^2 \tau^2)(a_F^4+(x^F_n)^2)+(1+a_F^2 \tau)(a_F^2 + (x^F_n)^2 \tau)}. \nonumber \end{align} From Eq.(\ref{Flatgepsilon}), we can readily take the limits of $(\epsilon_n/\epsilon_1)^2$ and $(g^n_{DM}/g_D)^2$ at large $n$ (and hence large $x^F_n \approx n$). We arrive at the result that as $n\rightarrow \infty$ \begin{align}\label{FlatgepsilonAsymptotic} \bigg( \frac{\epsilon_n}{\epsilon_1} \bigg)^2 &\rightarrow \frac{\pi (1+(x^F_1)^2 \tau^2)(a_F^4+(x^F_1)^2)+(1+a_F^2 \tau)(a_F^2+(x^F_1)^2 \tau)}{\pi \tau^2(a_F^4+(x^F_1)^2)}\frac{1}{n^2}, \\ \bigg( \frac{g^n_{DM}}{g_D} \bigg)^2 &\rightarrow \frac{\pi (1+(x^F_1)^2 \tau^2)(a_F^4+(x^F_1)^2)+(1+a_F^2 \tau)(a_F^2+(x^F_1)^2\tau)}{\pi (1+(x^F_1)^2 \tau^2)(x^F_1)^2}. \nonumber \end{align} From the first expression in Eq.(\ref{FlatgepsilonAsymptotic}), we see that the ratio $(\epsilon_n/\epsilon_1)$ falls roughly as $1/n$ for large $n$; this result is readily borne out numerically in the top panel of Fig.~\ref{fig5}, where we also see that, even for small $n$, $\epsilon_n$ never significantly exceeds the value of $\epsilon_1$, offering encouraging evidence that the small-kinetic-mixing limit we took in Section \ref{Setup} was valid. More rigorously demonstrating this validity, however, will require the use of sum identities we shall derive later in this section. In contrast to the behavior of the effective kinetic mixing terms $\epsilon_n/\epsilon_1$, the ratio $|g^n_{DM}/g_D|$ approaches a constant non-zero value as $n \rightarrow \infty$. The precise value of this asymptotic limit of the ratio $|g^n_{DM}/g_D|$ is naturally of quite significant phenomenological interest: If $|g^n_{DM}/g_D|$ is large, one might be concerned that even for a reasonable value of $g_D \lesssim 1$, the DM particle may experience some non-perturbative couplings to the various KK modes.\footnote{We also note that a large $|g^n_{DM}/g_D|$ may raise concerns about non-convergence of various sums over all KK modes, such as those that appear in Eqs.(\ref{sigmae}) and (\ref{sigmavrel}); however, as we shall see later in this section, these sums remain well defined.} In Fig.~\ref{fig4}, we explore the $\tau$ and $a_F$ dependence of this asymptotic coupling limit numerically. Notably, we find that the coupling ratio increases sharply as $a_F$ increases. For comparison's sake, in both panels of Fig.~\ref{fig4}, we have depicted as a dashed line the \emph{maximum} $|g^n_{DM}/g_D|$ that would be allowed such that all couplings remain perturbative (that is, have an effective fine-structure constant $(g^n_{DM})^2/(4 \pi) < 1$) given a choice of $g_D =0.3$, that is, assuming that the coupling of DM to the first KK mode of the dark photon field has approximately the same strength as the electroweak coupling.
In the figure then, we see that such a choice of $g_D$ is only permitted when $a_F \mathrel{\mathpalette\atversim<} 3/2$; for much larger values, the DM interactions with large-$n$ KK modes become strongly coupled. In both Figs.~\ref{fig4} and \ref{fig5}, however, we see that limiting our choice of $a_F$ to $a_F \mathrel{\mathpalette\atversim<} 3/2$ leads to substantially more modest asymptotic values of $|g^n_{DM}/g_D|$, of $\mathrel{\mathpalette\atversim<} 10$. Because $\lvert g^n_{DM}/g_D \rvert$ rises quadratically (or more accurately, the square of this ratio rises quartically) with increasing $a_F$, these conditions would be only slightly less restrictive if a somewhat smaller value of $g_D$, {\it e.g.}, $g_D=0.1$, were chosen. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_AsymgRatio_Tau.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_AsymgRatio_a.pdf}} \vspace*{-1.0cm} \caption{(Top) The limit of the ratio $|g^n_{DM}/g_D|$ as $n\rightarrow \infty$, as a function of $\tau$ for $a_F=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. The dashed line denotes the largest possible ratio such that the couplings of the DM particle to the gauge boson KK modes remain perturbative for all KK modes in the theory, assuming $g_D=0.3$. (Bottom) As in the previous panel, but now as a function of $a_F$ assuming $\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively.} \label{fig4} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_epsilonnRatio.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_gnRatio.pdf}} \vspace*{-1.0cm} \caption{(Top) The ratio $\epsilon_n/\epsilon_1$ as a function of $n$ for various choices of $(\tau,a_F)$=(1/2,1/2)[red], (1/2,1)[blue], (1/2,3/2)[green], (1,1/2)[magenta], (3/2,1/2)[cyan], and (1,1)[yellow], respectively. (Bottom) Same as the top panel but now for the absolute value of the strength of the $n^{\rm th}$ KK coupling of the dark photon to DM in units of $g_D$. Note that this quantity alternates in sign.} \label{fig5} \end{figure} To continue our discussion of the phenomenology of our construction, we must now also find the sum $F(y,y',s)$, which we remind the reader is defined in Eq.(\ref{FDefinition}), for the flat space case; this we can accomplish by inserting $f(y)=1$ into Eq.(\ref{FDiffEq}), yielding \begin{align}\label{FDiffEqFlat} \partial_y^2 F(y,y',s) &= R \delta(y-y')- s F(y,y',s), \nonumber\\ \partial_y F(y,y',s)|_{y=0} &= -s \tau R F(0,y',s), \\ \partial_y F(y,y',s)|_{y=\pi R} &= - m_V^2 R F(\pi R, y',s), \nonumber \end{align} from which the solution \begin{align}\label{FlatFSolution} F(y,y',s) &= R^2 \frac{[\cos(\sqrt{s}y_<)-\tau R \sqrt{s} \sin(\sqrt{s} y_<)][\sqrt{s} R \cos(\sqrt{s}(y_>-\pi R))-a_F^2 \sin(\sqrt{s}(y_> - \pi R))]}{R \sqrt{s}(-a_F^2 +s R^2 \tau)\cos(\pi R \sqrt{s})+s R^2 (1+a_F^2 \tau)\sin(\pi R \sqrt{s})}, \\ y_> &\equiv \textrm{max}(y,y'), \;\;\; y_< \equiv \textrm{min}(y,y') \nonumber \end{align} can be straightforwardly derived. We see that, as expected, the sum $F(y,y',s)$ has poles whenever $s=m_n^2$, as can be seen from the mass eigenvalue condition Eq.(\ref{FlatEigenvalues}); in other words, our sum of propagators possesses poles exactly where the individual propagators have poles.
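As a numerical consistency check of Eq.(\ref{FlatFSolution}), one can compare the closed form against a direct truncation of the defining sum Eq.(\ref{FDefinition}), using the wavefunctions and roots of Eqs.(\ref{Flatvn}) and (\ref{FlatEigenvalues}). A minimal Python sketch of this comparison, evaluated at $y=0$, $y'=\pi R$ in units where $R=1$ and with illustrative, off-resonance parameter values, is given below:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def flat_modes(tau, aF, n_modes=200):
    """Roots x_n of Eq.(FlatEigenvalues) and the boundary values of v_n
    from Eq.(Flatvn), in units where R = 1."""
    f = lambda x: (x * (1.0 + aF**2 * tau) * np.sin(np.pi * x)
                   - (aF**2 - x**2 * tau) * np.cos(np.pi * x))
    grid = np.arange(1e-6, n_modes + 2.0, 1e-3)
    vals = f(grid)
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][:n_modes]
    x = np.array([brentq(f, grid[i], grid[i + 1]) for i in idx])
    A = np.sqrt(2.0 / np.pi) / np.sqrt(
        1.0 + (x * tau)**2
        + (1.0 - (x * tau)**2) * np.sin(2.0 * np.pi * x) / (2.0 * np.pi * x)
        + (2.0 * tau / np.pi) * np.cos(np.pi * x)**2)
    v0 = A                                                      # v_n(0)
    vpi = A * (np.cos(np.pi * x) - tau * x * np.sin(np.pi * x)) # v_n(pi R)
    return x, v0, vpi

tau, aF, s = 1.0, 1.0, 0.3              # s in units of 1/R^2, off-resonance
x, v0, vpi = flat_modes(tau, aF)
F_sum = np.sum(v0 * vpi / (s - x**2))   # truncated Eq.(FDefinition)
q = np.sqrt(s)
F_closed = 1.0 / ((-aF**2 + s * tau) * np.cos(np.pi * q)
                  + q * (1.0 + aF**2 * tau) * np.sin(np.pi * q))
print(F_sum, F_closed)                  # agree to the truncation accuracy
\end{verbatim}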
Additionally, equipped with this sum, it is possible to derive in closed form the sum $\sum_n \epsilon_n^2/\epsilon_1^2$, which we recall from I and Section \ref{Setup} must be $\mathrel{\mathpalette\atversim<} 10$ in order for our assumption of small kinetic mixing (KM) to be valid. Taking the limit of $F(y,y',s)$ as $s\rightarrow 0$, we arrive at the result \begin{align} -F(y,y',0) = \sum_n \frac{v_n(y)v_n(y')}{m_n^2} = R^2 \bigg( \frac{1}{a_F^2}+\pi \bigg) - \theta(y-y') R y-\theta(y'-y) R y'. \end{align} Differentiating this sum with respect to $y$ at $y=0$ and applying the SM-brane boundary condition given in Eq.(\ref{FlatEOM}), we rapidly arrive at \begin{align}\label{FlatepsilonSum} \sum_n v_n (0)^2 &= \frac{1}{\tau} \\ \rightarrow \sum_n \frac{\epsilon_n^2}{\epsilon_1^2} &= \sum_n \frac{v_n (0)^2}{v_1(0)^2} = \frac{1}{\tau v_1(0)^2}. \nonumber \end{align} The form of the sum in the second line of Eq.(\ref{FlatepsilonSum}) then confirms what has previously been observed in I, namely, that a nontrivial positive BLKT is necessary for the consistency of our KM analysis. The sum sharply increases to infinity as $\tau\rightarrow 0$, indicating that an insufficiently large $\tau$ will result in the sum being unacceptably large, namely $\mathrel{\mathpalette\atversim>} O(10)$. Furthermore, a \emph{negative} $\tau$ would suggest a still more worrying scenario, indicating the need for at least one KK state to be ghost-like ({\it i.e.}, to have a negative norm squared). \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_epsilonSum_Tau.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_epsilonSum_a.pdf}} \vspace*{-1.0cm} \caption{(Top) The sum $\sum_n \epsilon_n^2/\epsilon_1^2$ over all $n$, as a function of $\tau$ for $a_F=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom) As in the previous panel, but now as a function of $a_F$ assuming $\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively.} \label{figFlatepsilonSum} \end{figure} To determine whether our kinetic mixing treatment is valid for the full parameter space we consider, we depict the sum $\sum_n \epsilon_n^2/\epsilon_1^2$ in Fig.~\ref{figFlatepsilonSum}. Our results here explicitly confirm those observed in I, namely, that for selections of $(\tau, a_F)$ such that $\tau \mathrel{\mathpalette\atversim>} 1/2$, the summation $\sum_n \epsilon_n^2/\epsilon_1^2$ remains small enough not to vitiate our treatment of kinetic mixing: The sum remains $\mathrel{\mathpalette\atversim<} O(10)$. Next, we apply the results of Eqs.(\ref{Flatvn}), (\ref{FlatEigenvalues}), and (\ref{FlatFSolution}) to find the DM-$e^-$ scattering cross section, to explore the possibility of direct detection of the DM. Inserting Eq.(\ref{FlatFSolution}) into Eq.(\ref{sigmae}) yields \begin{align}\label{Flatsigmae} \sigma_{\phi e} &= \frac{4 \alpha_{em} m_e^2 (g_D \epsilon_1)^2}{v_1 (0)^2 v_1 (\pi R)^2}\frac{R^4}{a_F^4}\\ &=\frac{4 \alpha_{em} m_e^2 (g_D \epsilon_1)^2}{v_1 (0)^2 v_1 (\pi R)^2}\frac{(x^F_1)^4}{m_1^4 a_F^4}, \nonumber \end{align} where in the second line we have substituted the parameter $m_1$, the mass of the lowest-lying KK mode of the dark photon field, for the compactification radius $R$.
We can now suggestively rewrite this expression as \begin{align}\label{FlatsigmaeNum} \sigma_{\phi e} &= (2.97 \times 10^{-40} \; \textrm{cm}^2)\bigg( \frac{100 \; \textrm{MeV}}{m_1}\bigg)^4 \bigg( \frac{g_D \epsilon_1}{10^{-4}} \bigg)^2 \Sigma^F_{\phi e}, \\ \Sigma^F_{\phi e} &\equiv \frac{(x^F_1)^4}{v_1(0)^2 v_1(\pi R)^2 a_F^4} = \bigg\lvert \sum_{n=1}^\infty \frac{(x^F_1)^2 v_n(0) v_n(\pi R)}{(x^F_n)^2 v_1(0) v_1(\pi R)} \bigg\rvert^2. \nonumber \end{align} Note here that the quantity $\Sigma^F_{\phi e}$ depends \emph{only} on the model parameters $(\tau, a_F)$, while the rest of the expression above is independent of them. While the closed form of $\Sigma^F_{\phi e}$ is convenient for calculation, we have also included an explicit expression for this quantity in terms of an infinite sum over KK modes -- notably, because the quantity $g^n_{DM} \epsilon_n$ (or alternatively, $v_n(\pi R) v_n(0)$) alternates in sign and decreases sharply with increasing $n$, we can see in Fig.~\ref{fig6} that the sum rapidly converges, coming within $O(10^{-2})$ corrections to the value of the closed form of $\Sigma^F_{\phi e}$ even when the sum is truncated at $n=10$. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_SumeChi_n.pdf}} \vspace*{-1.0cm} \caption{The explicit KK sum form of $\Sigma^F_{\phi e}$ defined in Eq.(\ref{FlatsigmaeNum}), which encapsulates the dependence of the DM-electron scattering cross section on parameters of the model of the extra dimension, where only terms coming from the first $n$ KK modes are included in the sum, for the choices of $(\tau ,a_F)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively.} \label{fig6} \end{figure} Looking at the numerical coefficient of $\Sigma^F_{\phi e}$ in Eq.(\ref{FlatsigmaeNum}), meanwhile, we see that for $m_1 \sim O(100 \; \textrm{MeV})$ and $g_D \epsilon_1 \sim 10^{-4}$, the DM-$e^-$ scattering cross section easily avoids current direct detection constraints as long as the quantity $\Sigma^F_{\phi e} \leq 1$ \cite{Essig:2017kqs,Essig:2015cda,TTYu,Aprile:2019xxb,Agnes:2018oej}, although it does lie within the possible reach of future experiments such as SuperCDMS \cite{Essig:2015cda}. Anticipating that $m_{DM} \approx m_1/2$ (which we shall shortly see is necessary in order to enjoy the resonant enhancement of the annihilation cross section we require to recreate the relic density), we note that if we assume $g_D \epsilon_1 =10^{-4}$, the quantity $\sigma_{\phi e}/\Sigma^F_{\phi e}$ (that is, the direct detection cross section divided by the factor which encapsulates the dependence on the geometry of the extra dimension) is at least an order of magnitude below the most stringent bounds of \cite{Essig:2017kqs,Essig:2015cda,TTYu,Aprile:2019xxb,Agnes:2018oej} for any $m_1 \mathrel{\mathpalette\atversim>} O(\textrm{a few}) \; \textrm{MeV}$. So, our sole remaining task in demonstrating that this model escapes direct detection bounds is to show that $\Sigma^F_{\phi e} \leq O(1)$. We can see in Fig.~\ref{fig7} that $\Sigma^F_{\phi e}$ does in fact stay below $O(1)$ for a broad range of parameters; for every choice of $(\tau,a_F)$ that we are considering here, $\Sigma^F_{\phi e}$ lies between 0.6 and 0.9, implying that the KK states lying above the lightest one do not make critical contributions to this cross section.
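The dimensionful coefficient in Eq.(\ref{FlatsigmaeNum}) follows from Eq.(\ref{Flatsigmae}) by a simple unit conversion, which the short Python sketch below verifies using $(\hbar c)^2 \simeq 0.3894\times 10^{-27}\;\textrm{GeV}^2\,\textrm{cm}^2$ and our reference parameter values:
\begin{verbatim}
alpha_em = 1.0 / 137.036
m_e = 0.5110e-3                 # electron mass in GeV
hbarc2 = 0.3894e-27             # (hbar*c)^2 in GeV^2 cm^2
m1, gDeps1 = 0.100, 1.0e-4      # reference values of m_1 and g_D*eps_1
sigma = 4.0 * alpha_em * m_e**2 * gDeps1**2 / m1**4 * hbarc2
print(sigma)                    # ~2.97e-40 cm^2, i.e. Sigma^F_{phi e} = 1
\end{verbatim}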
Hence, this model can easily evade present DM direct detection constraints for reasonable choices of $m_1 \sim 100 \; \textrm{MeV}$ and $g_D \epsilon_1 \sim 10^{-4}$. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_SumeChi_Tau.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Flat_SumeChi_a.pdf}} \vspace*{-1.0cm} \caption{(Top) The sum $\Sigma^F_{\phi e}$ defined in Eq.(\ref{FlatsigmaeNum}), which encapsulates the dependence of the DM-electron scattering cross section on parameters of the model of the extra dimension, as a function of $\tau$ for $a_F=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom) As in the previous panel, but now as a function of $a_F$ assuming $\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively.} \label{fig7} \end{figure} Our brief phenomenological survey of the flat space scenario now concludes with a discussion of the thermally averaged annihilation cross section at freeze-out; that is, we demonstrate that this construction is capable of producing the correct relic density of DM in the universe. To begin, we insert Eq.(\ref{FlatFSolution}) into the expression for the $\phi^\dagger \phi \rightarrow f \bar{f}$ (where $f$ is some fermion species) velocity-weighted annihilation cross section of Eq.(\ref{sigmavrel}). This yields the result \begin{align}\label{Flatsigmavrel} \sigma v_{lab} &= \frac{1}{3}\frac{g_D^2 \epsilon_1^2 \alpha_{\textrm{em}}Q_f^2}{v_1(\pi R)^2 v_1 (0)^2 }\frac{(s+2m_f^2)(s-4 m_{DM}^2)\sqrt{s(s-4 m_f^2)}R^4}{s(s-2 m_{DM}^2)}\\ &\times\Bigg\lvert \frac{1}{R^2}F(0,\pi R, s) - \frac{v_1(\pi R) v_1 (0)}{sR^2-(x^F_1)^2}+\frac{v_1(\pi R) v_1 (0)}{sR^2-(x^F_1)^2+i (x^F_1)R \Gamma_1}\Bigg\rvert^2, \nonumber \\ \frac{1}{R^2}F(0,\pi R, s) &= \frac{1}{(-a_F^2 + s R^2 \tau)\cos(\pi R \sqrt{s})+R \sqrt{s}(1+a_F^2 \tau)\sin(\pi R \sqrt{s})}. \nonumber \end{align} We can then use this expression in the single integral formula for a thermally averaged annihilation cross section given in Eq.(\ref{singleIntegralAvg}), and compare the results to the approximate cross section necessary to reproduce the (complex) DM relic density with a $p$-wave annihilation process, namely $\simeq 7.5 \times 10^{-26} \; \textrm{cm}^3/\textrm{s}$ \cite{Saikawa:2020swg}.\footnote{Note that due to the sub-GeV mass of the DM, the familiar required annihilation cross section of $\sim 3 \times 10^{-26} \; \textrm{cm}^3/\textrm{s}$ is inaccurate, as discussed in \cite{Saikawa:2020swg}.} We note that this quantity is the \emph{only} one in our analysis which has any direct dependence on the mass of the DM, $m_{DM}$ (assuming, as we do, that the DM particle's mass is substantially greater than that of the electron). In fact, because we must rely on resonant enhancement in order to achieve the correct relic density, we see that with all the other parameters fixed our results for the thermally averaged cross section are extremely sensitive to $m_{DM}$ and largely insensitive to differing choices of $(\tau, a_F)$. In Fig.~\ref{fig8}, we depict the thermally averaged velocity-weighted cross section as a function of the DM mass $m_{DM}$, requiring, as we have argued must be the case in Section \ref{Setup}, that $m_{DM} < m_1$.
For demonstration purposes, we have selected $m_1 = 100 \; \textrm{MeV}$, $x_F = m_{DM}/T_F = 20$, $g_D = 0.3$, and $(g_D \epsilon_1) = 10^{-4}$ (where our choices of $m_1$ and $\epsilon_1$ have been informed by the constraints from direct detection), and have included only the possibility of the DM particles annihilating into an $e^+ e^-$ final state. Notably, the cross sections depicted are largely independent of the choices of ($\tau, a_F$) near values of $m_{DM}/m_1$ that produce the correct relic abundance (that is, relatively near the $m_1$ resonance of the cross section). In fact, for \emph{all} parameter space points we depict here, it is possible to produce the correct cross section when $m_{DM} \sim 0.36 m_1$ or $m_{DM} \sim 0.54 m_1$; however, other values would be required if we also varied $m_1$ or $g_D\epsilon_1$. By leveraging the resonance, therefore, our model is clearly able to reproduce the observed relic abundance for a wide variety of reasonable points in parameter space. We also note that the annihilation cross section here displays an extremely sharp decline when very close to the resonance peak. This is a consequence of the total decay width of the first KK excitation of the dark photon field becoming progressively smaller, as the width of the decay to a pair of DM particles becomes suppressed by a shrinking phase space factor, eventually approaching 0 when $m_{DM}=m_1/2$. In the absence of a kinematically allowed decay to DM pairs, the decay into an electron-positron pair, which has a width of $\simeq \alpha_{em} \epsilon_1^2 m_1/3$, or $O(10^{-10})m_1$ if $\epsilon_1 \sim 10^{-4}$, becomes the dominant decay channel for the lightest KK mode of the dark photon field; this state is thus extremely narrow under these circumstances. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Flat_Sigmavrel.pdf}} \vspace*{-1.0cm} \caption{The thermally averaged, velocity-weighted cross section in $\textrm{cm}^3/\textrm{s}$ for the annihilation process $\phi^\dagger \phi \rightarrow f \bar{f}$, where the final-state fermions $f$ are electrons, for the choices of $(\tau ,a_F)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. The dashed line denotes the value of this cross section necessary to produce the observed relic abundance of DM after freeze out.} \label{fig8} \end{figure} \section{Warped Space Model Analysis}\label{WarpedAnalysis} We now consider the possibility that the extra dimension is not flat, but rather has a Randall-Sundrum-like geometry with a curvature scale $k$. In this case, $f(y)$ in the metric of Eq.(\ref{genericMetric}) shall be $e^{-k y}$, but our analysis closely follows that of the flat space scenario. The warped geometry does, however, necessitate additional care in certain aspects of model construction, which we should address before moving forward with our discussion. First, in the warped space scenario, because $f(y)$ is non-trivial, we need two parameters to describe the metric rather than the single parameter, $R$, that we used in the flat-space analysis. We shall find that the most convenient parameters with which to describe our metric are $kR$, the product of the curvature scale and the compactification radius, and the so-called ``KK mass'', $M_{KK} \equiv k \, \textrm{exp}(-k R \pi)$. Second, unlike the flat-space case, our choice to place the SM on the $y=0$ brane and the DM on the $y=\pi R$ brane is no longer arbitrary.
Specifically, we note that naturalness suggests that $\sim M_{KK}$ is the natural scale for mass terms localized on the $y= \pi R$ brane, and that the lowest-mass Kaluza-Klein (KK) modes of any bulk fields should also in general be $O(M_{KK})$, while the natural scale for mass terms localized on the $y=0$ brane should be $\sim M_{KK}\textrm{exp}(k R \pi)$, which is exponentially larger \cite{Randall:1999ee,Csaki:2004ay}. In our construction, then, naturalness suggests that we localize the higher-scale physics (the SM, with a scale of roughly $O(250 \; \textrm{GeV})$) on the $y=0$ brane, and the lower-scale DM sector, with a scale of $O(0.1-1 \; {\rm GeV})$, on the $y=\pi R$ brane. Furthermore, the hierarchy between the two scales roughly sets the value of the product $kR$, namely, we must require that $e^{-k R \pi}\sim O(0.1-1 \; {\rm GeV})/O(250 \; {\rm GeV})$. Thus we will require that $kR \approx 1.5-2$. We note in passing, therefore, that in contrast to the flat space model, the warped space construction offers the aesthetically appealing characteristic of explaining the mild hierarchy between the brane-localized vev of the SM Higgs and the brane-localized mass parameters of the DM and dark photon fields appearing on the opposite brane. With these concerns addressed, we can now move on to determining the bulk profiles and sums of KK modes required for our analysis. First, we note that the equations of motion for the bulk profile $v_n(y)$ become{\footnote {Here we use the label ``W'' to denote the values relevant for the warped scenario.}} \begin{align} \partial_y [e^{-2 k y} ~\partial_y v_n(y)] = -m_n^2 v_n(y), \\ (\partial_y+m_n^2 \tau R)v_n(y)|_{y=0}=0, \nonumber\\ (e^{-2 k R \pi}~ \partial_y +m_V^2 R)v_n(y)|_{y=\pi R}=0. \nonumber \end{align} The solution to these equations can be written \begin{align}\label{Warpedvn} &v_n(y) = A_n z^W_n ~\zeta^{(n)}_1 (z^W_n), \\ &z^W_n \equiv x^W_n e^{k(y-\pi R)}, \;\; x^W_n \equiv \frac{m_n}{M_{KK}}, \;\; \varepsilon^W_n \equiv x^W_n e^{-k R \pi}, \nonumber \end{align} where $A_n$ is a normalization factor, and the function $\zeta^{(n)}_\nu(z)$ is given by \begin{align} &\zeta^{(n)}_\nu (z) \equiv \alpha_n J_\nu(z)-\beta_n Y_\nu(z), \\ &\alpha_n \equiv Y_0\big( \varepsilon^W_n\big)+ \varepsilon^W_n \, k R \tau \, Y_1 \big( \varepsilon^W_n \big) , \nonumber \\ &\beta_n \equiv J_0\big( \varepsilon^W_n\big)+ \varepsilon^W_n \, k R \tau \, J_1 \big( \varepsilon^W_n \big) , \nonumber \end{align} with $J_\nu$, $Y_\nu$ denoting order-$\nu$ Bessel functions of the first and second kind, respectively. Notice that $v_n(y)$ then automatically satisfies its boundary condition at the brane $y=0$, while the allowed values of $x^W_n$ (and hence the masses of the KK tower modes $m_n$) are then found from the boundary condition at $y= \pi R$, which can be simplified to \begin{align}\label{WarpedEigenvalues} x^W_n \zeta^{(n)}_0 (x^W_n) &= -a_W^2 \zeta^{(n)}_1(x^W_n), \\ a_W &\equiv \sqrt{kR} \frac{m_V}{M_{KK}} \nonumber. \end{align} The normalization constant $A_n$ can be found using the orthonormality relation of Eq.(\ref{orthoRelation}), yielding \begin{align} A_n = \frac{\sqrt{2kR}}{\Bigg[ (z^W_n)^2 [\zeta^{(n)}_1(z^W_n)^2-\zeta^{(n)}_0(z^W_n)\zeta^{(n)}_2 (z^W_n)]\rvert^{z^W_n = x^W_n}_{z^W_n = \varepsilon^W_n}+2 \tau k R (\varepsilon^W_n)^2 \zeta^{(n)}_1(\varepsilon^W_n)^2\Bigg]^{1/2}}.
\end{align} Using Eqs.(\ref{Warpedvn}) and (\ref{WarpedEigenvalues}), we can now continue on to an exploration of the phenomenology of the various KK modes, much as we have done in Section \ref{FlatAnalysis} for the scenario with a flat extra dimension. We begin, as in the case of flat space, by determining the dependence of the lowest-lying root of Eq.(\ref{WarpedEigenvalues}), $x^W_1$, on the parameters $(\tau,a_W)$, depicted in Fig.~\ref{fig9}. Note that in Fig.~\ref{fig9} and subsequent calculations, we have elected to specify the parameter $(k R) \tau$ (that is, $\tau$ scaled by the quantity $kR$) rather than $\tau$. This is because, in practice, expressions featuring the brane term $\tau$ in this setup will always involve it through the quantity $(kR) \tau$; we therefore find, as has been the case in other work with Randall-Sundrum brane terms \cite{blkts}, that $(kR) \tau$ is the more natural parameter to use. \begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_x1_Tau_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_x1_Tau_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_x1_a_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_x1_a_kr2.pdf}} \caption{(Top Left) Value of the root $x^W_1$ assuming $kR = 1.5$ as a function of $(kR)\tau$ for various choices of $a_W$, from bottom to top, $a_W=$1/2, 1, 3/2, 2, 5/2 and 3, respectively. (Top Right) The same as the top left, but assuming $kR = 2.0$. (Bottom Left) Value of the root $x^W_1$ assuming $kR = 1.5$ as a function of $a_W$ for various choices of $(kR)\tau$, from top to bottom, $(kR)\tau=$1/2, 1, 3/2, 2, 5/2 and 3, respectively. (Bottom Right) The same as the bottom left, but assuming $kR =2.0$.} \label{fig9} \end{figure} Qualitatively, we observe largely similar behavior for the root $x^W_1$ in Fig.~\ref{fig9} as we observed for $x^F_1$ in Fig.~\ref{fig1}, namely that $x^W_1 \mathrel{\mathpalette\atversim<} 1$ for the range of $(\tau, a_W)$ parameters we probe, and that $x^W_1$ increases with increasing $a_W$ and decreases with increasing $\tau$. It is interesting to note that the specific values of $x^W_1$ are somewhat sensitive to the value of $kR$: In particular, when $kR = 2.0$, the values of $x^W_1$ for a given choice of $(kR) \tau$ and $a_W$ are approximately 15\% lower than in a scenario where $kR = 1.5$. Next, we discuss the quantity $m_2/m_1$, the ratio of the mass of the second KK mode of the dark photon field to that of the first KK mode; as in our discussion of this ratio in the flat space scenario, this quantity continues to possess substantial phenomenological importance due to the potential of the second KK mode to be an experimental signal for the existence of extra dimensions. In Fig.~\ref{fig10}, we depict this mass ratio's dependence on the quantities $(kR)\tau$ and $a_W$. The most salient difference between the results here and those for the flat space case discussed in Section \ref{FlatAnalysis} lies in the typical magnitude of the ratio itself: With a flat extra dimension, we found that reasonable selections for $\tau$ and $a_F$ resulted in ratios $m_2/m_1 \sim 3-4$. In the warped setup, we find that the same ratio now typically lies within the range of $m_2/m_1 \sim 6-16$.
This represents one of the primary distinctions between the warped and flat constructions, namely, that for a given mass of the lightest KK mode of the dark photon, $m_1$, \emph{the mass of the second KK mode} $m_2$ \emph{is significantly greater in the case of a warped extra dimension than it is in the case of a flat one}. Beyond this observation, we also note that changing $kR$ in our computations below has an effect roughly in line with what we might expect from the results depicted in Fig.~\ref{fig9}, namely, that a larger value of $kR$ slightly increases the ratio $m_2/m_1$, likely because the value of the root $x^W_1$ is somewhat reduced. \begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_xRatio_Tau_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_xRatio_Tau_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_xRatio_a_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_xRatio_a_kr2.pdf}} \caption{(Top Left) The mass ratio of the lowest two dark photon KK states, $m_{2}/m_{1}=x^W_2/x^W_1$, assuming $kR=1.5$, as a function of $(kR) \tau$ for $a_W=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Top Right) As in the top left, but now assuming $kR = 2.0$. (Bottom Left) As in the top left, but now as a function of $a_W$ assuming $(kR)\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom Right) As in the bottom left, but assuming $kR = 2.0$.} \label{fig10} \end{figure} We complete our exploration of the relative masses of the various KK modes just as we have in the flat space scenario, namely, by exploring the growth of $m_n$ as $n$ increases. We depict the results in Fig.~\ref{fig11} for both $kR=1.5$ and $kR=2.0$, for various selections of $(kR)\tau$ and $a_W$. The most salient contrast between these results and those of the flat space analysis again lies in the magnitude of the mass ratio: in the warped setup, $m_n/m_1$ increases significantly more sharply with $n$ than it does in flat space, such that at large $n$, typical values of $m_{n}/m_1$ are approximately three times larger for a warped extra dimension than they are for a flat one. The dominant share of this discrepancy can be traced to the mass eigenvalue equation, Eq.(\ref{WarpedEigenvalues}): numerically, it can be readily seen that the difference between successive roots of this equation approaches $\pi$ as $n$ becomes large, so the eventual slope of the lines depicted in Fig.~\ref{fig11} should be roughly $\pi (x^W_1)^{-1}$. This is compared to the analogous slope in the flat space scenario, which, as discussed in Section \ref{FlatAnalysis}, should be approximated by $(x^F_1)^{-1}$. Because the typical values of $x^F_1$ and $x^W_1$ are roughly comparable, this in turn suggests that the slopes of the lines in Fig.~\ref{fig11} should be steeper by roughly a factor of $O(\pi)$ than their flat space counterparts in Fig.~\ref{fig3}. Before moving on, we also note that the same behavior with increasing $kR$ that we observed in the ratio $m_2/m_1$ appears again as we consider more massive KK modes, namely, that increasing $kR$ will increase the value of the ratios of heavier KK mode masses to that of the lightest mode.
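For concreteness, the roots $x^W_n$ of Eq.(\ref{WarpedEigenvalues}) are straightforward to obtain numerically. The following is a minimal sketch (in Python, assuming the standard numpy/scipy stack; the parameter values are illustrative rather than a preferred benchmark) that brackets sign changes of the eigenvalue condition on a grid and polishes each root, from which the mass ratios $m_n/m_1 = x^W_n/x^W_1$ discussed above follow immediately:
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1, y0, y1
from scipy.optimize import brentq

# Illustrative parameters: kR ~ 1.5-2, with O(1) values of (kR)*tau and a_W
kR, krtau, a_W = 1.5, 1.0, 1.0

def zeta(nu, z, x):
    # zeta_nu(z) = alpha J_nu(z) - beta Y_nu(z), with alpha, beta evaluated at
    # eps = x exp(-kR pi) so that the y = 0 boundary condition is satisfied
    eps = x * np.exp(-kR * np.pi)
    alpha = y0(eps) + eps * krtau * y1(eps)
    beta = j0(eps) + eps * krtau * j1(eps)
    return alpha * (j0(z) if nu == 0 else j1(z)) \
         - beta * (y0(z) if nu == 0 else y1(z))

def eigen_eq(x):
    # Root condition of Eq. (WarpedEigenvalues):
    # x zeta_0(x) + a_W^2 zeta_1(x) = 0
    return x * zeta(0, x, x) + a_W**2 * zeta(1, x, x)

# Bracket sign changes on a grid, then polish each root with brentq
grid = np.linspace(0.05, 40.0, 8000)
vals = np.array([eigen_eq(x) for x in grid])
roots = [brentq(eigen_eq, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]

x1 = roots[0]
print("x_1^W =", round(x1, 4))
print("m_n/m_1 =", [round(r / x1, 2) for r in roots[:6]])
\end{verbatim}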
\begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Warped_xnRatio_kr1.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Warped_xnRatio_kr2.pdf}} \vspace*{-1.0cm} \caption{(Top) Approximate linear growth of the relative dark photon KK mass ratio $m_n/m_1$ as a function of $n$ assuming $kR = 1.5$ for various choices of $((kR)\tau ,a_W)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. (Bottom) As in the previous panel, but assuming $kR=2.0$.} \label{fig11} \end{figure} Having addressed the masses of the various dark photon KK modes, we now move on to discuss the effective kinetic mixing and DM coupling terms that arise in this construction. In Fig.~\ref{fig12}, we depict the behavior of the ratios $\epsilon_n/\epsilon_1$ and $|g^n_{DM}/g_D|$ as functions of the KK mode number $n$ (we note that once again, as in the flat space scenario, the values of $g^n_{DM}$ oscillate in sign). The results are qualitatively quite similar to those of the flat space scenario depicted in Fig.~\ref{fig5}. In particular, we find once again that while $\epsilon_n/\epsilon_1$ consistently decreases for large $n$, $|g^n_{DM}/g_D|$ again approaches a non-zero asymptotic value. This asymptotic value for $|g^n_{DM}/g_D|$, much like its flat space analogue, can be explored further by semi-analytical means. By using Eqs.(\ref{Warpedvn}) and (\ref{WarpedEigenvalues}), as well as the identities \begin{align} J_1(z) Y_0(z)-J_0(z)Y_1(z) = \frac{2}{\pi z}, \\ \zeta^{(n)}_2(z) = \frac{2}{z}\zeta^{(n)}_1(z)-\zeta^{(n)}_0(z), \nonumber \end{align} it is possible to determine that as $n$ becomes very large, the ratio $|g^n_{DM}/g_D|$ becomes well-approximated by the expression \begin{align}\label{WarpedApproxgn} \bigg\lvert \frac{g^n_{DM}}{g_D} \bigg\rvert &\approx \frac{1}{x^W_1}\bigg( (x^W_1)^2 + 2 a_W^2+a_W^4-\big(1+(kR)^2\tau^2 (x^W_1)^2 e^{-2 kR \pi}\big)\mathcal{J}^2\bigg)^{\frac{1}{2}},\\ \mathcal{J} &\equiv \frac{x^W_1 J_0(x^W_1)+a_W^2 J_1(x^W_1)}{J_0(x^W_1 e^{-kR \pi})+(kR)\tau x^W_1 e^{-kR \pi}J_1(x^W_1 e^{-kR \pi})}. \nonumber \end{align} In Fig.~\ref{fig13}, we depict the dependence of this approximate asymptotic value on $\tau$ and $a_W$. The behavior of this quantity is quite similar to that of the analogous results of Fig.~\ref{fig4} for the flat space scenario; in particular, we observe a sharp increase in the ratio here as $a_W$ increases, just as the corresponding ratio in the flat space case increases sharply with increasing $a_F$. We note, however, that the typical maximum values that we observe in Fig.~\ref{fig13} are roughly a factor of 2 smaller than those we observed in Fig.~\ref{fig4}; as $a_F$ and $a_W$ are not directly comparable quantities, though, the significance of this diminished range is not obvious. Again, as in Fig.~\ref{fig4}, we have included a dashed line which denotes the maximum value that this ratio can attain such that all $g^n_{DM}$ remain perturbative for the choice $g_D=0.3$; in this case, we see that such a requirement effectively excludes choices of $a_W\mathrel{\mathpalette\atversim>} 2$.
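As a cross-check of Eq.(\ref{WarpedApproxgn}), the asymptotic ratio can be evaluated directly once the lowest root is known. A minimal sketch follows (Python; the value of $x^W_1$ below is a placeholder standing in for the output of a root finder such as the one sketched above):
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1

kR, krtau, a_W = 1.5, 1.0, 1.0
x1 = 0.8  # placeholder for the lowest root x_1^W of Eq. (WarpedEigenvalues)

eps1 = x1 * np.exp(-kR * np.pi)  # epsilon_1^W = x_1^W exp(-kR pi)

# The ratio script-J of Eq. (WarpedApproxgn)
J_ratio = (x1 * j0(x1) + a_W**2 * j1(x1)) / \
          (j0(eps1) + krtau * eps1 * j1(eps1))

# Large-n asymptote of |g_DM^n / g_D|; note (kR)^2 tau^2 x1^2 e^(-2 kR pi)
# is simply (krtau * eps1)^2
g_asym = np.sqrt(x1**2 + 2 * a_W**2 + a_W**4
                 - (1 + (krtau * eps1)**2) * J_ratio**2) / x1
print("asymptotic |g_DM^n/g_D| ~", round(float(g_asym), 3))
\end{verbatim}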
\begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_epsilonnRatio_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_epsilonnRatio_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_gnRatio_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_gnRatio_kr2.pdf}} \caption{(Top Left) The ratio $\epsilon_n/\epsilon_1$, assuming $kR=1.5$, as a function of $n$ for various choices of $((kR)\tau ,a_W)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. (Top Right) The same as the top left, but assuming $kR = 2.0$. (Bottom Left) The ratio $g^n_{DM}/g_D$, assuming $kR=1.5$, as a function of $n$ for various choices of $((kR)\tau ,a_W)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. (Bottom Right) The same as the bottom left, but assuming $kR =2.0$.} \label{fig12} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_AsymgRatio_Tau_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_AsymgRatio_Tau_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_AsymgRatio_a_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_AsymgRatio_a_kr2.pdf}} \caption{(Top Left) The approximate asymptotic value of $|g^n_{DM}/g_D|$ given by Eq.(\ref{WarpedApproxgn}) for large $n$, assuming $kR = 1.5$, as a function of $(kR)\tau$ for various choices of $a_W=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), 1/2(yellow). (Top Right) The same as the top left, but assuming $kR = 2.0$. (Bottom Left) The approximate asymptotic value of $|g^n_{DM}/g_D|$ given by Eq.(\ref{WarpedApproxgn}) for large $n$, assuming $kR=1.5$, as a function of $a_W$ for various choices of $(kR)\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), 1/2(yellow). The dashed line represents the maximum value that this ratio can attain and still have all KK couplings remain perturbative for $g_D=0.3$. (Bottom Right) The same as the bottom left, but assuming $kR =2.0$.} \label{fig13} \end{figure} Just as in our analysis of the flat space setup, we can now move on from discussing individual KK modes' masses and couplings to the basic predictions of phenomenologically important processes. In order to do this, we must first evaluate the sum $F(y,y',s)$ (defined in Eq.(\ref{FDefinition})) for the warped metric, by solving Eq.(\ref{FDiffEq}) with $f(y) = e^{-k y}$ inserted. We arrive at the differential equation \begin{align}\label{FDiffEqWarped} \partial_y [e^{-2 k y} ~\partial_y F(y,y',s)] = R\delta(y-y')- s F(y,y',s), \nonumber\\ \partial_y F(y,y',s)|_{y=0} = -s \tau R F(0,y',s), \\ \partial_y F(y,y',s)|_{y=\pi R} = - m_V^2 R e^{- 2 k R \pi} F(\pi R, y',s).
\nonumber \end{align} By defining the variables $z \equiv (\sqrt{s}/M_{KK})e^{k (y-\pi R)}$ and $z' \equiv (\sqrt{s}/M_{KK}) e^{k (y'-\pi R)}$, we can solve Eq.(\ref{FDiffEqWarped}) in terms of Bessel functions, yielding \begin{align}\label{WarpedFSolution} &F(y,y',s) = -\frac{kR \pi}{2 M_{KK}^2}\frac{e^{k (y+y'-2\pi R)}\xi_1(z_>)\omega_1(z_<)}{z_\pi \omega_0(z_\pi)+a_W^2 \omega_1 (z_\pi)},\\ &\omega_\nu (z) \equiv [Y_0 (z_0)+\tau k R z_0 Y_1 (z_0)]J_\nu(z)-[J_0 (z_0)+\tau k R z_0 J_1 (z_0)]Y_\nu(z), \nonumber \\ &\xi_\nu (z) \equiv [z_\pi Y_0 (z_\pi)+a_W^2 Y_1 (z_\pi)]J_\nu (z) - [z_\pi J_0 (z_\pi)+a_W^2 J_1 (z_\pi)]Y_\nu (z), \nonumber \\ &z_> \equiv \bigg(\frac{\sqrt{s}}{M_{KK}} \bigg) e^{k (\textrm{max}(y,y')-\pi R)}, \;\;\; z_< \equiv \bigg(\frac{\sqrt{s}}{M_{KK}} \bigg) e^{k (\textrm{min}(y,y')-\pi R)}, \nonumber\\ &z_0 \equiv \bigg(\frac{\sqrt{s}}{M_{KK}} \bigg)e^{-k R \pi}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; z_\pi \equiv \bigg(\frac{\sqrt{s}}{M_{KK}} \bigg). \nonumber \end{align} We note that in this form, it is readily apparent that $F(y,y',s)$ has poles wherever $\sqrt{s}$ is equal to the mass of a KK mode $m_n$, just as we would expect given the components of its sum and just as we previously observed in the flat-space sum of Eq.(\ref{FlatFSolution}). With a solution for $F(y,y',s)$ in hand, we can then replicate our analysis in Section \ref{FlatAnalysis} to determine whether or not our kinetic mixing treatment is valid in the parameter space we are probing, this time applied to the warped space scenario not considered in I. Through steps analogous to those taken in Section \ref{FlatAnalysis}, we find that the sum $\sum_n \epsilon^2_n/\epsilon^2_1$ in the case of warped spacetime is also given by \begin{align} \sum_n \epsilon^2_n/\epsilon^2_1 = \frac{1}{\tau v_1(0)^2}, \end{align} the only difference from the flat-space result here being the form of the function $v_1(0)$. The $\tau^{-1}$ dependence of this sum then suggests the same requirements as the identical flat-space result: the brane-localized kinetic term (BLKT) $\tau$ must still be large enough that the magnitude of the sum remains $\mathrel{\mathpalette\atversim<} 10$, and positive so that the result does not require the existence of ghost states. In Fig.~\ref{figWarpedepsilonSum}, we depict the sum $\sum_n \epsilon_n^2/\epsilon_1^2$ for different values of $\tau$ and $a_W$. Notably, while the sum generally remains within the reasonable limit of $\mathrel{\mathpalette\atversim<} 10$, when $(kR)\tau \approx 1/2$ it becomes quite close to, and even somewhat exceeds, 10. While the largest values of $\sum_n \epsilon_n^2/\epsilon_1^2$ achieved within the region of parameter space we have explored are still not quite large enough to render $\epsilon_1^2$ terms in our analysis numerically significant (at least for the $\epsilon_1 \sim 10^{-(3-4)}$ values we consider here), the sharp rate of increase they enjoy with decreasing $\tau$ near $(kR)\tau=1/2$ suggests that probing significantly below this value is unlikely to yield valid results. On the surface, this may seem to contrast slightly with our results in Section \ref{FlatAnalysis}, in which we found that restricting $\tau$ to values larger than $1/2$ kept the corresponding sum roughly $\mathrel{\mathpalette\atversim<} 6$.
Closer inspection indicates that this discrepancy can largely be attributed to the use of $(kR) \tau$, rather than $\tau$, as the variable we are employing: if one compares the maximum value obtained by the warped sum at $(k R) \tau = 3/4$ (for $k R = 1.5$) and $(k R) \tau = 1$ (for $k R = 2.0$), for which the variable $\tau$ itself is simply $1/2$, the results for the sum with both $k R$ values very closely match those observed in the flat space construction of Section \ref{FlatAnalysis}. Hence, in both the flat and warped space cases, our setup's treatment of kinetic mixing easily remains valid for $\tau \mathrel{\mathpalette\atversim>} 0.5$, although it should be noted that as $kR$ increases, any bound from these validity concerns on the more natural warped-space parameter $(kR)\tau$, which is often used instead of $\tau$ for warped setups \cite{blkts}, will become increasingly stringent. \begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_epsilonSum_Tau_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_epsilonSum_Tau_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_epsilonSum_a_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_epsilonSum_a_kr2.pdf}} \caption{(Top Left) The value of the sum $\sum_n \epsilon_n^2/\epsilon_1^2$ over all $n$ assuming $kR=1.5$, as a function of $(kR) \tau$ for $a_W=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Top Right) As in the top left, but now assuming $kR = 2.0$. (Bottom Left) As in the top left, but now as a function of $a_W$ assuming $(kR)\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom Right) As in the bottom left, but assuming $kR = 2.0$.} \label{figWarpedepsilonSum} \end{figure} Moving on, it is then straightforward to find the DM-$e^-$ scattering cross section by inserting our results for $F(y,y',s)$ given in Eq.(\ref{WarpedFSolution}) into Eq.(\ref{sigmae}), arriving at \begin{align}\label{Warpedsigmae} \sigma_{\phi e} &= \frac{4 \alpha_{\textrm{em}} m_e^2 (g_D \epsilon_1)^2}{v_1(\pi R)^2 v_1 (0)^2} \frac{(kR)^2}{a_W^4 M_{KK}^4} = (2.97 \times 10^{-40} \; \textrm{cm}^2)\bigg( \frac{100 \; \textrm{MeV}}{m_1}\bigg)^4 \bigg( \frac{g_D \epsilon_1}{10^{-4}}\bigg)^2 \Sigma^W_{\phi e}, \\ \Sigma^W_{\phi e} &\equiv \frac{(x^W_1)^4 (kR)^2}{a_W^4 v_1(\pi R)^2 v_1 (0)^2} = \bigg\lvert \sum_{n=0}^\infty \frac{(x^W_1)^2 v_n(0) v_n(\pi R)}{(x^W_n)^2 v_1(0) v_1(\pi R)} \bigg\rvert^2. \nonumber \end{align} Notably, this is the same result (up to a normalization convention for the parameter $a_W$ and, of course, different bulk wave functions $v_1(y)$) that we derived for the flat-space case in Eq.(\ref{Flatsigmae}). In particular, the sum $F(0,\pi R, 0)$ has \emph{identical} results (again, up to the normalization of $a_W$) for the flat- and warped-space scenarios. Just as in the flat space case, the numerical coefficient in front of the quantity $\Sigma^W_{\phi e}$, which now encapsulates all of the cross section's dependence on the parameters $\tau$ and $a_W$, indicates that as long as $\Sigma^W_{\phi e} \mathrel{\mathpalette\atversim<} O(1)$, the resultant cross section is not constrained by current experimental limits, although we remind the reader that such cross sections may lie within reach of near-term future direct-detection experiments.
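The scaling of Eq.(\ref{Warpedsigmae}) can be made concrete with a short numerical sketch (Python; the benchmark values are those quoted in the text, and $\Sigma^W_{\phi e}$ is supplied as an input rather than computed from the mode sum):
\begin{verbatim}
def sigma_phi_e(m1_MeV=100.0, gD_eps1=1e-4, Sigma_W=1.0):
    # Numerical prefactor of Eq. (Warpedsigmae), in cm^2; Sigma_W carries
    # all of the dependence on the brane parameters tau and a_W
    return 2.97e-40 * (100.0 / m1_MeV)**4 * (gD_eps1 / 1e-4)**2 * Sigma_W

# Benchmark point of the text: m_1 = 100 MeV, g_D eps_1 = 1e-4, Sigma_W ~ O(1)
print("sigma_phi-e ~ %.2e cm^2" % sigma_phi_e())
\end{verbatim}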
In Fig.~\ref{fig14}, we depict the dependence of $\Sigma^W_{\phi e}$ on various choices of $\tau$ and $a_W$; we find that, just as for the flat space case, this requirement is easily satisfied for every $\tau$ and $a_W$ we consider. \begin{figure}[htbp] \centerline{\includegraphics[width=3.5in]{Warped_SumeChi_Tau_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_SumeChi_Tau_kr2.pdf}} \vspace*{-0.25cm} \centerline{\includegraphics[width=3.5in]{Warped_SumeChi_a_kr1.pdf} \hspace{-0.75cm} \includegraphics[width=3.5in]{Warped_SumeChi_a_kr2.pdf}} \caption{(Top Left) The sum $\Sigma^W_{\phi e}$ defined in Eq.(\ref{Warpedsigmae}), which encapsulates the dependence of the DM-electron scattering cross section on parameters of the model of the extra dimension, assuming $kR=1.5$, as a function of $(kR) \tau$ for $a_W=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Top Right) As in the top left, but now assuming $kR = 2.0$. (Bottom Left) As in the top left, but now as a function of $a_W$ assuming $(kR)\tau=$3(cyan), 5/2(magenta), 2(green), 3/2(blue), 1(red), and 1/2(yellow), respectively. (Bottom Right) As in the bottom left, but assuming $kR = 2.0$.} \label{fig14} \end{figure} We also note that the sum over individual KK modes in the computation of $\Sigma^W_{\phi e}$ quickly converges to the closed-form expression even when truncated at very low $n$; as depicted in Fig.~\ref{fig15}, $\Sigma^W_{\phi e}$, just like its flat space analogue, converges to within $O(10^{-2})$ corrections to its exact value even when truncated at $n \approx 10$. Hence, just as in the flat space scenario, exchanges of the lightest few dark photon KK modes dominate the direct detection signal. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Warped_SumeChi_n_kr1.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Warped_SumeChi_n_kr2.pdf}} \vspace*{-1.30cm} \caption{(Top) The value of the sum $\Sigma^W_{\phi e}$ defined in Eq.(\ref{Warpedsigmae}), which encapsulates the dependence of the DM-electron scattering cross section on parameters of the model of the extra dimension, truncated at finite $n$, assuming $kR = 1.5$ for various choices of $((kR)\tau ,a_W)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. (Bottom) As in the previous panel, but assuming $kR=2.0$.} \label{fig15} \end{figure} Finally, we can conclude our discussion of the warped space scenario by considering the thermally averaged annihilation cross section of DM particles into SM fermions. Inserting the relevant value of $F(y,y',s)$ into Eq.(\ref{sigmavrel}) allows us to derive the DM annihilation cross section, $\sigma v_{lab}$, for the warped space scenario, yielding \begin{align}\label{Warpedsigmavrel} \sigma v_{lab} &= \frac{1}{3}\frac{g_D^2 \epsilon_1^2 \alpha_{\textrm{em}}Q_f^2}{v_1(\pi R)^2 v_1 (0)^2 }\frac{(s+2m_f^2)(s-4 m_{DM}^2)\sqrt{s(s-4 m_f^2)}}{s(s-2 m_{DM}^2)M_{KK}^4}\\ &\times\Big\lvert \frac{2}{\pi z_\pi} \bigg(\frac{kR}{z_\pi \omega_0(z_\pi)+a_W^2 \omega_1(z_\pi)}\bigg)-\frac{v_1(\pi R) v_1 (0)}{(s/M_{KK}^2)-(x^W_1)^2}+\frac{v_1(\pi R) v_1 (0)}{(s/M_{KK}^2)-(x^W_1)^2+i x^W_1 \Gamma_1/M_{KK}}\Big\rvert^2, \nonumber \end{align} where we remind the reader that the functions $\omega_{0,1}(z)$ are defined in Eq.(\ref{WarpedFSolution}).
Inserting this result into Eq.(\ref{singleIntegralAvg}), we can straightforwardly obtain the thermally averaged DM annihilation cross section via numerical integration. Just as in the flat space case, we specify that $m_{DM}=100 \; \textrm{MeV}$, $x_F = (m_{DM}/T) = 20$, $g_D = 0.3$, and $g_D \epsilon_1 = 10^{-4}$, and consider DM annihilation into an $e^+ e^-$ final state. Our results are depicted in Fig.~\ref{fig16}, along with a dashed line marking $\langle\sigma v\rangle = 7.5 \times 10^{-26} \; \textrm{cm}^3/\textrm{s}$, the approximate cross section necessary to produce the observed DM relic abundance. They exhibit substantial similarity with the results for the flat space scenario given in Fig.~\ref{fig8}; in particular, in both cases the dependence of the cross section on the BLKT $\tau$ and the brane-localized mass parameter $m_V \propto a_{F,W}$ is extremely limited, and the correct relic abundance is obtained when $m_{DM} \approx 0.36 m_1$ or $m_{DM} \approx 0.53 m_1$. Of course, as we vary the DM mass and $g_D \epsilon_1$, other values of $m_1$ will also be allowed. In short, for the annihilation cross section at freeze-out, we observe qualitatively similar behavior in the warped space setup as we do in the flat space scenario: for our choice of parameters, resonant enhancement is necessary in order to realize the correct dark matter relic density, and the cross section is largely agnostic to the specific selections for the brane-localized kinetic and mass terms of the dark photon field. \begin{figure}[htbp] \centerline{\includegraphics[width=5.0in,angle=0]{Warped_Sigmavrel_kr1.pdf}} \vspace*{-2.0cm} \centerline{\includegraphics[width=5.0in,angle=0]{Warped_Sigmavrel_kr2.pdf}} \vspace*{-1.0cm} \caption{(Top) The thermally averaged annihilation cross section in $\textrm{cm}^3/\textrm{s}$, assuming $kR = 1.5$ for various choices of $((kR)\tau ,a_W)$ =(1/2,1/2) [red], (1/2,1) [blue], (1/2,3/2) [green], (1,1/2) [magenta], (3/2,1/2) [cyan] and (1,1) [yellow], respectively. (Bottom) As in the previous panel, but assuming $kR=2.0$.} \label{fig16} \end{figure} \section{Summary and Conclusions}\label{Conclusion} In this paper, we have discussed a modification of our previous setup in I and II. In lieu of imparting mass to the lightest dark photon Kaluza-Klein (KK) modes via dark photon boundary conditions, which necessitates a bulk DM particle with corresponding KK modes, our current construction simplifies this structure by reinstating the dark Higgs as a scalar localized on the brane \emph{opposite} to the one containing the SM, preventing mixing between the SM and dark Higgs scalars. The DM particle can then be placed on the same brane as the dark Higgs, removing the additional complication of a KK tower of DM particles and resulting in substantially simpler phenomenology, while still avoiding the effects of dark and SM Higgs mixing. We then briefly explored the model-building possibilities for this setup in two scenarios, one with a flat extra dimension and the other with a warped Randall-Sundrum metric, in particular considering the behavior of the dark photon tower's mass spectrum, couplings, and mixing parameters with SM fields, as well as briefly touching on the predictions for spin-independent direct detection experiments and thermally averaged annihilation cross sections at freeze-out for various points in parameter space.
Exploring the case of a warped extra dimension in addition to that of a flat one affords us significant additional model-building freedom; for example, given the same choice for the lightest dark photon KK mode mass, subsequent KK modes in the warped scenario are roughly 3 times heavier than they are in the flat scenario, demonstrating a qualitatively different KK spectrum. The ability of warped extra dimensions to generate hierarchies, meanwhile, can be straightforwardly exploited to naturally explain the mild $O(10^{2-3})$ hierarchy that exists between the SM Higgs scale and the characteristic mass scales of the dark brane, namely the masses of the DM and the lightest dark photon KK modes, $\sim 0.1-1$ GeV. With this model, we find few parameter space restrictions in either the warped or flat space constructions. The requirement that every dark photon KK mode's coupling to DM remain perturbative provides an upper limit on the DM-brane-localized mass term $m_V$; in particular, we find that for the flat construction, $m_V \mathrel{\mathpalette\atversim<} 1.5 R^{-1}$, where $R$ is the compactification radius of the extra dimension, while for warped space, $m_V \mathrel{\mathpalette\atversim<} 2 M_{KK}/\sqrt{kR}$, where $M_{KK}$ is the KK mass scale of the model and $kR \sim 1.5-2.0$. We also find, in agreement with I for the flat space scenario and, for the first time, for the case of warped space, that a positive $O(1)$ value for the SM-brane-localized kinetic term (referred to here as $\tau$) is necessary in order to ensure the validity of our kinetic mixing analysis (in particular, to ensure that $O(\epsilon_1^2)$ and higher-order terms can in fact be safely neglected). For both the flat and warped space scenarios, however, this constraint is quite mild; requiring $\tau \geq 1/2$ is sufficient to satisfy it. Regarding possible experimental signals, we explicitly consider that of spin-independent direct detection from scattering with electrons. We find that selecting $g_D \epsilon_1 \sim 10^{-4}$ and $m_1 \sim 100 \; \textrm{MeV}$ still places the spin-independent direct detection cross sections in both the flat and warped space constructions at $\sim 10^{-40}\; \textrm{cm}^2$, below current experimental constraints. However, we note that such signals are roughly within the order of magnitude of the possible reach of near-term future experiments, and are not especially sensitive to variations in the brane-localized kinetic and mass terms of the particular extra dimensional model (in the flat scenario, we see reasonable variation in these parameters producing at most an approximately 25\% change in the value of the direct detection cross section, while for the warped scenario this variation is approximately 5\%). As such, experiments such as SuperCDMS may place meaningful constraints on dark photon KK mode masses, couplings, and mixings in the near future. The requirement that the thermally averaged annihilation cross section for the DM gives rise to the correct DM relic density, meanwhile, substantially constrains our selection of the relative DM particle mass $m_{DM}/m_1$. In particular, for natural selections of the other model parameters, we see that in both the flat and warped scenarios the DM annihilation cross section must enjoy some resonant enhancement of the contribution from the exchange of the lightest dark photon KK mode in order to attain a sufficiently large value.
Given the sharpness of the resonance peak, this requirement places a significant constraint on $m_{DM}$; for the choices $g_D \epsilon_1 = 10^{-4}$, $m_1 = 100 \; \textrm{MeV}$, and $g_D = 0.3$, $m_{DM}$ must lie near 0.36 or 0.54 of $m_1$ for flat space and 0.36 or 0.53 of $m_1$ for warped space. This cross section is also notably largely insensitive to differing choices of the brane-localized dark photon mass $m_V$ and the brane-localized kinetic term $\tau$, provided $m_1$, $g_D$, and $\epsilon_1$ are kept fixed, indicating that the exchange of the lightest KK mode is, somewhat unsurprisingly given its resonant enhancement, of paramount importance as a contributor to this process. Overall, we find that constructing this model within a flat or warped space framework results in little qualitative difference in our results. The most salient potential phenomenological difference lies in the differing relative masses of the dark photon KK modes (in particular, the ratio of the second-lightest dark photon mass to that of the lightest is in general 3-4 times larger in the Randall-Sundrum-like metric we consider than in the flat space case), which would have a considerable effect on experimental searches for dark photons in colliders. Otherwise, however, we note that a wide range of natural and currently phenomenologically viable parameter space is available for both constructions. As we move forward to explore the possibilities of kinetic mixing in theories of extra dimensions, we continue to find alternate constructions that allow for phenomenologically viable models. Here, following the work of I and II, we have presented another, simpler construction that utilizes the additional model-building freedom afforded by extra dimensions to ameliorate phenomenological concerns that arise in 4-D kinetic mixing theories. \section*{Acknowledgements} The authors would like to particularly thank D. Rueter and J.L. Hewett for very valuable discussions related to this work. This work was supported by the Department of Energy, Contract DE-AC02-76SF00515.
\section{Introduction} \label{intro} Understanding flow around urban environments is becoming of increasing importance as cities and their populations grow in size \citep{desa2019world}. Although surface energy balance models have recently improved, accurate models for aerodynamic parameters remain poor, particularly for non-conventional roughness geometries, e.g. tall canopies with heterogeneous height, for which urban flows remain poorly described by both theoretical and empirical models \citep{kanda2013new}. The surface layer within the atmospheric boundary layer (ABL) is customarily split into two regions. Firstly, the roughness sublayer (RSL), which extends from the wall surface up to a certain height above the roughness elements; customarily, the RSL is the region where the flow still encounters the effect of the individual roughness elements \citep{reading20193}. Secondly, the inertial sublayer (ISL), which covers a region above the RSL \citep{raupach1991rough}. The ISL begins at the top of the RSL and is characterised as a region of constant momentum flux \citep{reading20193}. Considerable debate has taken place over defining the boundaries of both the RSL and ISL. The height of the RSL has been commonly quoted as between 2 and 5 times the average height ($h$) of the roughness surface \citep{raupach1991rough}, and more recently as low as 1.1 - 1.2 $h$ \citep{florens2013defining}. The ISL, and indeed its existence, has been subject to even more debate. {Traditional theory \citep{stull2012introduction} suggests that in a sufficiently developed boundary layer an ISL should form. In contrast, \cite{jimenez2004turbulent} postulated that for $\delta/h<80$ (where $\delta$ is the boundary-layer height), the RSL will increase in depth and effectively replace the ISL. The idea of \cite{jimenez2004turbulent} has been supported by several studies \citep{rotach1999influence, cheng2002near, cheng2007flow, hagishima2009aerodynamic}. However, \cite{leonardi2010channel} and \cite{cheng2007flow} argue that the relative boundary-layer depth quoted by \cite{jimenez2004turbulent} may not be accurate.} In the standard characterisation of the effect of wall roughness, a surface (at least close to the wall) is described by the aerodynamic parameters and the friction velocity. It is of increasing importance to be able to determine the zero-plane displacement ($d$) and the roughness length ($z_{0}$) to effectively predict and calculate wind flow in and above growing urban environments {(as discussed by \cite{stull2012introduction} in Sec. 9.7)}. In fully-rough conditions, these parameters are usually obtained, following \cite{cheng2002near}, by (i) calculating the friction velocity ($u_{*}$) and then (ii) applying a logarithmic-law fitting procedure in the constant-flux layer (Eq. \ref{eq:U}). \begin{align} \hspace*{5mm} U=\frac{u_*}{\kappa}\ln\left(\frac{z-d}{z_0}\right) \label{eq:U} \end{align} The friction velocity can be determined in two ways. Directly, by calculating the drag generated by the rough wall, e.g. using fully instrumented elements with static pressure ports or a floating-element force balance \citep{cheng2007flow, hagishima2009aerodynamic, zaki2011aerodynamic}. These methods are both acceptable in the fully-rough regime, where the viscous drag is almost negligible in comparison with the form drag \citep{leonardi2010channel}.
The second approach is indirect, where the friction velocity is evaluated from the Reynolds shear stress in the ISL \citep{cheng2002near, cheng2007flow}, so that $u_{*}=\sqrt{\tau_{0}~\rho^{-1}} \approx \sqrt{- \overline{u'w'}}$, where $\tau_{0}$ is the wall shear stress and ${\rho}$ is the density of the medium. {The roughness length $z_{0}$ is the height where the wind speed reaches zero, and is used as a scale for the roughness of the surface \citep{stull2012introduction}}. The zero-plane displacement, $d$, is understood as a correction factor to the logarithmic profile \citep{cheng2002near,kanda2013new}, while physical reasoning tends to describe it as the central height where drag occurs on a rough wall \citep{jackson1981displacement}. The aerodynamic parameters vary with surface properties and, therefore, it should be possible to examine their impact on the flow by systematically varying the surface characteristics. Many experiments have taken place over uniform-height cube arrays with different $h$, $\lambda_{p}$, and $\lambda_{f}$ (see \cite{grimmond1999aerodynamic} for the definitions of $\lambda_{p}$ and $\lambda_{f}$) to examine how roughness affects the aerodynamic parameters \citep{cheng2002near, cheng2007flow, jackson1981displacement, grimmond1999aerodynamic, florens2013defining, macdonald1998improved, sharma2019turbulent}. However, the discrepancies among these studies, together with additional evidence from recent numerical and experimental work, have demonstrated the inaccuracy of using just these variables to examine the sensitivity of the aerodynamic parameters \citep{hagishima2009aerodynamic, zaki2011aerodynamic, kanda2013new, reading20193, nakayama2011analysis, xie2008large}. Previous work has also highlighted how $\lambda_{p}$ or $\lambda_{f}$, in isolation, are insufficient to characterise non-cubical roughness \citep{carpentieri2015influence}, and has advocated the need to decouple the two solidity ratios \citep{placidi2015effects,Placidi:2017}; however, this is outside the scope of this work. \cite{kanda2013new} pointed out the importance of two additional parameters when describing a canonical regular surface: the standard deviation of element height ($\sigma_{h}$) and the maximum element height ($h_{max}$). Others have attempted to determine the aerodynamic parameters for various environments by deriving semi-empirical relationships based on the roughness geometry \citep{kanda2013new, macdonald1998improved, reading20193}. These have the advantage of being able to quickly determine the parameters without the need for field observations, wind tunnel tests, or computational experiments, allowing for fast prediction of flow in urban environments. This article further explores how the aerodynamic parameters behave over extremely rough surfaces with heterogeneous heights in comparison with uniform-height cases at matching average height. The block arrays in this study are based on simplified attributes of super-tall grid cities, which feature a large standard deviation of element heights and large aspect ratios, a combination of variables that, to the authors' knowledge, has not yet been explored. The structure of this paper is as follows: the experimental set-up is discussed in Sec. \ref{sec:ExpSetup}. Sections \ref{sec:BLDepthI} - \ref{sec:ISDepthI} and \ref{sec:BLDepthII} - \ref{sec:ISDepthII} examine the depths of the boundary layer and surface layers. The aerodynamic parameters are then presented in Sections \ref{sec:ISLAveragedDataI} and \ref{sec:ISLAveragedDataII}. Finally, conclusions are drawn in Sec.
\ref{sec:Conc}. \section{Experimental facility and details} \label{sec:ExpSetup} \subsection{Experimental facility} Experiments were conducted in the `Aero' tunnel within the EnFlo laboratory at the University of Surrey. This is a closed-circuit wind tunnel with a maximum speed of 40 m~s$^{-1}$. The free-stream velocity, measured by a Pitot tube located upstream of the model, was set to 10 m~s$^{-1}$ for all cases presented here. The tunnel's test section is 9 m long, 1.27 m wide, and 1.06 m tall. The streamwise, spanwise, and vertical directions are identified with the $x$, $y$, and $z$ axes, respectively. The $z$-axis is set from the top of the baseboard of the model (i.e. the actual wall), while $y$ = 0 is set in the centre of the test section. The position $x$ = 0 is taken as the beginning of the tunnel test section. Time- and spanwise-averaged mean and fluctuating velocities are denoted as ($U,V,W$) and ($u',v',w'$), respectively. In the space between the beginning of the test section and the model, a 1 m-long ramp rises from the floor of the tunnel to the average canopy height ($h_{avg}$ = 80 mm, Fig. \ref{fig:ArrayMap}). The ramp creates a smooth transition between the wind tunnel test section inlet and the roughness surface, thus minimising the flow disruption at the beginning of the roughness fetch \citep{cheng2002near} and allowing an equilibrium boundary layer to form. The most upstream measurement station is at $x$ = 3600 mm, where the flow is already fully developed. \subsection{Rough-wall models} \label{sec:Models} {Four surface roughnesses, all representing idealised tall and super-tall urban environments, were used in this study. Two of the surfaces have elements of uniform height, while two have elements of varied height. The individual roughness elements are sharp-edged cuboids of average height 80 mm. Urban buildings are classified by their aspect ratio, $AR=h/w$ (where $h$ and $w$ are the height and width of the building, respectively). Buildings with $3<AR<8$ fall in the tall regime (generally $100~m<h<300~m$), whilst buildings with $AR>8$ (and $h>300~m$) are referred to as super-tall \citep{CTBUHHeightCriteria}. Based on this criterion, the surface morphologies examined here are classified as tall or super-tall. Zero-pressure-gradient conditions were used in this work, as the acceleration parameter, calculated for the uniform- and varied-height cases, was 4.85$\times10^{-8}$ and 9.48$\times10^{-8}$, respectively. The surfaces under examination are further described in the following sections.} \subsubsection{Homogeneous-height model}\label{sec:HoHM} To create an idealised urban environment in the case of a uniform-height canopy, two urban features were considered: the packing density, {$\lambda_p$, and the element aspect ratio. High-density districts within 14 large cities were measured to determine a characteristic urban packing density, supplemented by two values from the literature, as reported in Table \ref{table:CitPack}.} Google Maps was used to measure the area plots and building base sizes. A range of $\lambda_{p}$ = 0.33 to 0.57 was determined, which is in line with values cited for real cities in \cite{reading20193}. A packing density of $\lambda_{p}$ = 0.44, within this range, was selected to describe densely packed cities. {\cite{CTBUH2015}} calculates the mean height of all buildings in Manhattan (New York) above 100 m to be 145.7 m. Using this average height and the bases of buildings from \cite{GMaps} gives an aspect ratio of approximately 3.4.
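The two morphological criteria used above can be summarised in a short script. The sketch below (Python; the footprint, plot, and building-width values are hypothetical illustrations rather than the measured data) computes a plan packing density and classifies a building by its aspect ratio following the \cite{CTBUHHeightCriteria} thresholds:
\begin{verbatim}
def packing_density(footprints_m2, plot_area_m2):
    # lambda_p = total built plan area / total plot area
    return sum(footprints_m2) / plot_area_m2

def classify_aspect_ratio(height_m, width_m):
    # AR = h/w: 3 < AR < 8 -> tall, AR > 8 -> super-tall (CTBUH criterion)
    ar = height_m / width_m
    label = "super-tall" if ar > 8 else "tall" if ar > 3 else "low-rise"
    return ar, label

# Hypothetical block: four 40 m x 40 m footprints on a 120 m x 120 m plot
print(packing_density([40 * 40] * 4, 120 * 120))   # -> ~0.44

# CTBUH mean Manhattan height of 145.7 m; the 43 m width is an assumed value
print(classify_aspect_ratio(145.7, 43.0))          # -> (~3.4, 'tall')
\end{verbatim}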
Guided by this criterion, together with considerations of wall effects and fetch, a scaled model with elements of size 80 mm $\times$ 20 mm $\times$ 20 mm was designed. The roughness was mounted on five base plates. Each base plate, when rotated, allowed the roughness pattern to vary from aligned to 50 \% staggered, as in Fig. \ref{fig:Modules}. These surfaces are referred to as uniform-height aligned (UHA) and uniform-height staggered (UHS). In the homogeneous-height roughness cases, a total of 5,775 elements were used. \begin{table}[ht] \caption{{Packing densities of highly dense areas within large urban areas. Values with an asterisk (*) are taken from \cite{grimmond1999aerodynamic}.}} \renewcommand{\arraystretch}{1.3} \centering \label{table:CitPack} \begin{tabular}{l l l l l l } \toprule City & $\lambda_{p}$ &City & $\lambda_{p}$ & City & $\lambda_{p}$\\ \midrule Beijing & 0.50 & Los Angeles & 0.36 & Singapore & 0.33 \\ Chicago & 0.57 & Mexico City & 0.47* & Tokyo & 0.52 \\ Dubai & 0.39 & Minneapolis & 0.42 & Toronto & 0.53\\ Hong Kong & 0.53 & New York & 0.57 & Vancouver & 0.39*\\ Kuala Lumpur& 0.35 & San Francisco & 0.49 & &\\ London & 0.43 & Shanghai & 0.46 & &\\ \bottomrule \end{tabular} \end{table} \begin{figure}[ht!] \includegraphics[scale=0.425, trim={3cm 2cm 1cm 2cm},clip]{Planks} \caption{{A single board of (a) homogeneous and (b) heterogeneous elements.}} \label{fig:Planks} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.3\textheight,keepaspectratio]{BothArrays} \caption[]{{Aligned and staggered orientations of a 3 $\times$ 3 module for the homogeneous and heterogeneous models (all measurements are in mm). Indicated in the centre of the grey blocks are the heights of the elements in a heterogeneous module. Indicated with blue dots and black lines is an example of where the vertical profiles were measured within a repeating unit.}} \label{fig:Modules} \end{figure} \subsubsection{Heterogeneous-height model}\label{sec:HeHM} A varied-height model was also designed that differed from the uniform-height model in only one aspect: the standard deviation of element height ($\sigma_{h}$). The two different roughness configurations are shown in Fig. \ref{fig:Planks}, where the aligned homogeneous and heterogeneous surfaces are depicted in (a) and (b), respectively. The use of a large $h_{avg}$ makes it possible to introduce a large $\sigma_{h}$. The standard deviation of the elements was modelled on the districts of Mong Kok (Hong Kong) and Midtown Manhattan. The geometric properties of the elements were derived from \cite{NYCDoCP} and \cite{TownPB}, supplemented by the building design guide of \citet{doi:10.1680/mosd.41448} when the information was lacking in the databases. The full-scale $\sigma_{h}$ of these cities is 17.3 m, which was scaled down to $\sigma_{h}$ = 0.049 m. As in the homogeneous model, the average height is 0.08 m, hereafter denoted as $h_{avg}$. Beyond matching $\sigma_{h}$ = 0.049 m, element heights were selected by matching the {distribution of building heights} in the data set, resulting in an increased number of short elements with only a few elements taller than $h_{avg}$, with $h_{max}$ being 0.2 m. The rough surfaces were constructed by assembling elements into modules. Herein, a module is a 3 $\times$ 3 arrangement of elements of five different heights, randomly placed (Fig. \ref{fig:Modules}). In the heterogeneous model, a repeating module is needed to achieve statistically representative statistics, as described by \cite{cheng2002near}.
Each module contains elements with heights 30, 30, 60, 60, 60, 80, 80, 110, and 200 mm, { distributed as in Fig. \ref{fig:HHist}}. The purpose of this randomisation is to {avoid creating preferential corridors for the flow between the super-tall elements facing the wind direction} and to create a more realistic city layout. The tallest elements in the varied-height surface are within the super-tall regime, with $AR=10$ \citep{jianlong2014study}. The varied-height surfaces have a {total of 5,445 elements.} Similarly to the homogeneous canopy, each base plate, when rotated, allowed the roughness pattern to be modified from aligned to 50 \% staggered. Due to additional geometrical constraints, the total number of elements is slightly different for the heterogeneous and homogeneous canopies; {however, this is of little consequence as the downstream flow is spanwise self-similar over the different repeating units and fully developed in a central section of the wind tunnel (i.e. at the measurement stations).} The surfaces are referred to as varied-height aligned (VHA) and varied-height staggered (VHS). A summary of the surfaces can be found in Table \ref{table:Aligned and staggered orientations of models}. \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.3\textheight,keepaspectratio]{HeightHist.eps} \caption[]{{Probability density function of the distribution of element heights per module in the heterogeneous model.}} \label{fig:HHist} \end{figure} \begin{table}[ht] \caption{{Characteristics of the different roughness surfaces considered in this work.}} \renewcommand{\arraystretch}{1.3} \centering \label{table:Aligned and staggered orientations of models} \begin{tabular}{l c c c c c c} \toprule Case & \multirow{2}{5.5em}{Configuration} & \multirow{2}{3.5em}{Height} & $h_{avg}$ & $h_{max}$ & $\sigma_{h}$ & Elements\\ ID&&&(mm)&(mm)&(mm)&per module\\ \midrule UHA & Aligned & Uniform & 80 & 80 & 0 & 1 \\ UHS & Staggered & Uniform & 80 & 80 & 0 & 1 \\ VHA & Aligned & Varied & 80 & 200 & 49 & 9 \\ VHS & Staggered & Varied & 80 & 200 & 49 & 9 \\ \bottomrule \end{tabular} \end{table} \subsection{Instrumentation}\label{sec:Instu} A two-component Laser Doppler Anemometer (LDA), FiberFlow from Dantec, was employed to measure two velocity components of the flow simultaneously. The laser beams were focused by a 300 mm focal-length lens, which facilitated measurements in between elements. In the varied-height model, two elements of 200 mm height are located adjacent to each other in some configurations. In these situations, no measurements could be acquired lower than $z$ = 40 mm due to laser obstruction by the elements. The model was sprayed with black matt paint to minimise light reflections. The LDA probe was mounted onto a traverse system which could move the measurement volume in three dimensions within the tunnel with sub-millimetre accuracy. An elliptical mirror was attached to the LDA probe to rotate one of the laser-beam pairs by 90$^\circ$, effectively changing the component of velocity measured by the LDA. Additionally, a fully instrumented pressure-tapped element was used in the case of the uniform-height surfaces to measure the differential force across an element, and hence its drag. To allow for these measurements, one element was removed and replaced with an identical 3D-printed plastic element, fitted with a total of 25 static pressure ports on one of its faces. The element could then be rotated to allow for the differential pressure to be measured.
The drag force was determined by integrating the pressure distribution over the front and back of the pressure-tapped element, allowing the friction velocity associated with the pressure drag ($u_{*}(p)$) to be measured, as in \cite{cheng2002near}. The $u_{*}(p)$ was only determined for the uniform-height surface, given the impossibility of replicating all random permutations of the different element heights for the heterogeneous roughness case. \subsection{Experimental details} {The LDA was used to measure velocity and Reynolds stress profiles within the ISL and RSL above the elements, but also within the canopy. For each surface, different types of LDA profiles were acquired, as shown in Fig. \ref{fig:ArrayMap}. For both the homogeneous and heterogeneous models, \textit{x-profiles} were collected to study the development of the boundary layer. In the homogeneous model, 9 vertical profiles were taken over a repeating element (see dots in Fig. \ref{fig:Modules}) to study the depths of the ISL and RSL. Over the varied-height surfaces, 81 vertical profiles were taken over one module to study the statistical convergence of the Reynolds shear stress profiles within the RSL (i.e. \textit{$3\times3$-module} in Fig. \ref{fig:ArrayMap}). Finally, over the heterogeneous model, 18 vertical profiles across the width of the tunnel were taken to study the ISL depth (further details on the \textit{y-planes} are contained in Fig. \ref{fig:Modules}).} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.3\textheight,keepaspectratio]{ArrayMap} \caption{{Schematic indicating the location of the LDA measurements. All units are in mm.}} \label{fig:ArrayMap} \end{figure} \section{Results and discussion}\label{sec:RandD} {This section presents and discusses results for the uniform-height cases in \S \ref{sec:effofh}, followed by results for the varied-height cases in \S \ref{sec:effofvh}.} \subsection{Effect of uniform element height}\label{sec:effofh} \subsubsection{Depth of the boundary layer}\label{sec:BLDepthI} The height of the boundary layer, $\delta$, is commonly estimated as the height where $U$ = 0.99 $U_{ref}$, where $U_{ref}$ is the freestream velocity. {The $U_{ref}$ used in all plots is measured at the wind-tunnel inlet, ahead of the rough-wall models. Since $z_{0}$ is a strong function of roughness fetch in a developing flow \citep{cheng2007flow}, the boundary layer must be in equilibrium with the surface below it and fully rough before representative measurements can be collected.} It was determined that this occurs over a fetch of $x$ = 4000 mm ($\approx 15$ $\delta$) for both the varied- and uniform-height models. The results past this location are shown in Fig. \ref{fig:BLUH}, where $\delta$ is shown to reach heights of 3.25 $h$ and 3 $h$ in the UHS and UHA cases, respectively. The increase in boundary layer thickness in the UHS case is likely due to the increased frontal blockage of the staggered array. When the elements are staggered, the `skimming' flow regime \citep{grimmond1999aerodynamic} is less likely to occur, as the streamwise distances between the elements increase, allowing for the development of the `wake interference' flow regime, with associated enhanced turbulent structures and an increased boundary layer thickness.
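As an aside, the $U = 0.99\,U_{ref}$ criterion used above reduces to a simple interpolation on a measured profile. A minimal sketch follows (Python; the profile below is synthetic, generated from a crude power law purely for illustration, and is not measured data):
\begin{verbatim}
import numpy as np

def bl_depth(z, U, U_ref, frac=0.99):
    # Boundary-layer depth: lowest height where U first reaches frac*U_ref,
    # linearly interpolated between the bracketing measurement points
    target = frac * U_ref
    i = int(np.argmax(U >= target))   # first index meeting the criterion
    if U[i] < target:
        raise ValueError("profile never reaches frac * U_ref")
    if i == 0:
        return float(z[0])
    w = (target - U[i - 1]) / (U[i] - U[i - 1])
    return float(z[i - 1] + w * (z[i] - z[i - 1]))

# Synthetic rough-wall-like profile with U_ref = 10 m/s and h_avg = 0.08 m
z = np.linspace(0.01, 0.5, 100)               # heights (m)
U = np.minimum(10.0 * (z / 0.26)**0.2, 10.0)  # crude power-law profile
delta = bl_depth(z, U, 10.0)
print("delta ~ %.3f m (~%.1f h_avg)" % (delta, delta / 0.08))
\end{verbatim}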
\begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,height=0.35\textheight,keepaspectratio]{Dfig7I} \caption{Boundary layer depth shown with vertical velocity profiles over the UHA and UHS models.} \label{fig:BLUH} \end{figure} The current data ($h_{avg}$ = 80 mm and $\lambda_{p}$ = 0.44) are compared with the previous literature in Table \ref{table:AeroPar}. The taller elements in this experiment occupy a much larger fraction of the boundary layer ($\delta/h_{max}=3.25$) than those of \cite{cheng2002near} and \cite{cheng2007flow}, where $\delta/h_{max}=12$ and $\delta/h_{max}=7$, respectively; i.e. the surfaces described here have a higher relative roughness height. At the same freestream velocity, the elements of $h$ = 20 mm of \cite{cheng2002near} and \cite{cheng2007flow} show a much larger $\delta/h_{max}$ than the one found here, indicating that, not surprisingly, the height of the elements can influence the boundary layer depth. {For the sake of brevity, the reader is referred to \cite{andrieux2017} and \cite{thorpe2018} for a more in-depth discussion of the boundary layer variation across stations in both the streamwise and spanwise directions due to the surface heterogeneity.} \begin{table}[ht] \caption{Surface characteristics of various roughness configurations. Bold cases are the current measurements. C20A, C20S, C10S and RM10S are taken from \cite{cheng2002near}, and C20A-25\%, C20S-25\%, C20A-6.25\% and C20S-6.25\% are from \cite{cheng2007flow}.} \renewcommand{\arraystretch}{1.3} \centering \label{table:AeroPar} \begin{tabular}{l c c c c c c c c} \toprule Case ID & $\lambda_{p}$ & $\delta/h$ & $RSL/h$ & $ISL/h$ & $u_{*}/U_{ref}$ & $d/h$ & $z_{0}/h$ & $Re_{\tau}$\\ \midrule \textbf{UHA} & 0.44 & 3.00 & 1.13 & 1.63 & 0.072 & 1.12 & 0.007 & 1.14 $\times$ $10^{4}$\\ \textbf{UHS} & 0.44 & 3.25 & 1.13 & 1.56 & 0.077 & 1.02 & 0.013 & 1.32 $\times$ $10^{4}$\\ \textbf{VHA} & 0.44 & 6.25 & 2.85 & 4.75 & 0.094 & 2.55 & 0.043 & 3.10 $\times$ $10^{4}$\\ \textbf{VHS} & 0.44 & 6.25 & 2.85 & 4.88 & 0.097 & 2.66 & 0.046 & 3.20 $\times$ $10^{4}$\\ \midrule C20A & 0.25 & 7.55 & 1.85 & 0.55 & 0.061 & 1.18 & 0.023 & --\\ C20S & 0.25 & 7.05 & 1.85 & 0.45 & 0.063 & 1.03 & 0.028 & --\\ C10S & 0.25 & 12.1 & 1.80 & 1.40 & 0.058 & 1.16 & 0.012 & --\\ RM10S & 0.25 & 13.7 & 2.50 & 0.80 & 0.063 & 1.36 & 0.014 & --\\ C20A-25\% & 0.25 & 6.90 & 1.80 & 0.60 & 0.068 & 1.00 & 0.039 & --\\ C20S-25\% & 0.25 & 6.70 & 1.75 & 0.40 & 0.071 & 0.96 & 0.045 & --\\ C20A-6.25\% & 0.06 & 6.20 & 4.00 & -- & 0.064 &-- & -- & --\\ C20S-6.25\% & 0.06 & 6.80 & 1.80 & 0.40 & 0.072 & 0.62 & 0.044 & --\\ \bottomrule \end{tabular} \end{table} \subsubsection{{Mean and fluctuating velocity profiles}}\label{sec:MF_profiles} {Mean streamwise velocity profiles for the flow over the uniform-height cases are shown in Fig. \ref{fig:DP}a. These profiles have been spatially averaged across one repeating unit to offer a fair representation of the surfaces under investigation and are taken in the region of fully developed flow. The profiles are non-dimensionalised with the skin friction velocity $u_{*}$ and the roughness length $z_0$. For both cases in Fig. \ref{fig:DP}a, a linear region within the profiles is evident; this offers strong support for the existence of a well-defined ISL.
Within this region, the collapse of the rough cases onto the smooth theoretical dashed line is an indication of the validity of the assumptions underpinning the methodology used to estimate the roughness parameters. This is further discussed in \S \ref{sec:ISLAveragedDataI}. The logarithmic region seems to extend well into what would be expected to be the roughness sublayer, as suggested in \cite{cheng2002near}. It must be stressed that data from different streamwise stations were considered for this analysis. The characteristics of the linear region visible in Fig. \ref{fig:DP}a, and the roughness parameters describing it, were found to be fully converged and independent of the streamwise location (i.e. of the boundary-layer depth) once the flow had developed over approximately 15 boundary layer heights. This offers further proof that $15$ $\delta$ is the minimum required roughness fetch for the flow to be in equilibrium with the underlying surfaces considered herein. The parameters reported in Table \ref{table:AeroPar} refer to these conditions, where the flow is fully developed. Next, the streamwise turbulent fluctuations are discussed in Fig. \ref{fig:DP}b in the form of the diagnostic plot \citep{Alfredsson:2010}, which removes the need to define the roughness parameters and the friction scaling. The $U_{e}$ used to normalise the mean velocity in Figures \ref{fig:DP} and \ref{fig:DP2} is the local velocity at the edge of the boundary layer. The benefit of the diagnostic-plot approach in this work is twofold: firstly, it allows one to assess the universality and the self-similar character of the turbulent statistics; secondly, it provides an independent assessment of whether the flow is in fully-rough conditions. For these reasons, the smooth \citep{Alfredsson:2011} and the fully-rough \citep{Castro:2013} asymptotes are reported in Fig. \ref{fig:DP}b (and Fig. \ref{fig:DP2}b). The data for uniform heights (both for the aligned and staggered arrays) show a satisfactory collapse onto the fully-rough line (solid black line). This is a strong indication that the turbulence statistics are self-similar across cases and that the boundary layer developing above the urban canopies under examination respects the classical scaling laws.} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.7\textheight,keepaspectratio]{UH_Dia.eps} \caption{{(a) Mean velocity profiles in inner scales for the UH cases. The dotted black line represents a smooth wall. (b) Diagnostic plot for the same cases. The black solid line represents the fully-rough regime \citep{Castro:2013}, while the grey line is the smooth-wall limit \citep{Alfredsson:2011}.}} \label{fig:DP} \end{figure} \subsubsection{The roughness sublayer}\label{sec:RSDepthI} The top of the RSL is considered to be the point where all the effects of the individual roughness elements on the flow cease \citep{cheng2007flow, reading20193}. It follows that the flow inside the RSL is not spatially homogeneous. The nine vertical profiles taken over the surface are presented in Fig. \ref{fig:RSLUH}a,b for the UHA and UHS cases, respectively. These are shown to converge, unexpectedly, at about 1.15 $h_{avg}$ for both cases. The uniform-height surface results for this tall-element canopy differ from the small-cube cases analysed by \cite{cheng2002near}, where the RSL was found to be much deeper ($\approx 2$ $h_{avg}$).
Despite the comparable Reynolds number, the RSL depth is much shallower here, closer to the values found by \cite{florens2013defining}. Given the similar conditions between the current data and those of \cite{cheng2002near}, we attribute this difference to the discrepancy in the packing density and the much deeper canopy layer. The current results also differ significantly from the commonly cited 2 - 5 $h_{avg}$ \citep{reading20193,Flack:2007,roth2000review}, due to the combination of high-aspect-ratio elements and a dense canopy. Furthermore, across Fig. \ref{fig:RSLUH}a and Fig. \ref{fig:RSLUH}b a clear difference can be seen between the cases. Figure \ref{fig:RSLUH}b shows similarity and collapse between all profiles. The flow inside the canopy behaves as predicted in the literature, similar to that of a densely forested canopy, with a region of severe velocity deficit up to the elements' average height; {this is in accordance with Fig. 9.7 in \S 9.7.3 of \cite{stull2012introduction}}. These results are qualitatively in agreement with those in \cite{Nept:2012b} for vegetated canopies, for which an inflection point in the velocity profiles appears just above the roughness height for dense canopies (i.e. $\lambda_f>0.23$). The tightly packed roughness generates a strong shear layer at the top of the elements. \cite{cheng2007flow} argues that this region is the main contributor to $z_{0}$, as demonstrated in the 20 mm cube arrays. Figure \ref{fig:RSLUH}a, however, highlights the effect of aligned street canyons. Profiles 1, 4, and 7 exhibit flow penetration into the canopy where the street canyons are aligned. The rest of the profiles follow a trend very similar to that in Fig. \ref{fig:RSLUH}b. This demonstrates that, for $\lambda_{p}$ = 0.44, the flow cannot penetrate deeply into the canopy unless the streets are aligned. Even so, velocity profiles 1, 4, and 7 only begin to show a significant velocity increase at around 0.5 $h_{avg}$, indicating that at this $\lambda_{p}$, even in aligned street canyons, the flow does not penetrate significantly below $z$ = 40 mm. A further study could be conducted on varied-canopy-depth models to verify how deep the flow can penetrate in aligned surfaces with varying $h$ at different packing densities. In summary, concerning the RSL, uniform tall arrays behave similarly to forested canopies, showing an inflection point in correspondence with the top of the canopy. Lastly, for large $\lambda_{p}$ and $h/\delta$, even when the elements are aligned, the flow cannot penetrate significantly within the canopy. \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.7\textheight,keepaspectratio]{Dfig11} \caption[]{On the right: RSL depth shown with vertical velocity profiles for the uniform-height models. On the left: positions of the profiles shown over a module: (a) UHA; (b) UHS. } \label{fig:RSLUH} \end{figure} \subsubsection{The inertial sublayer}\label{sec:ISDepthI} In the ISL, the wall-normal variation of the shear stress may be neglected, hence the `constant-flux region' denomination \citep{reading20193}. Here, we defined the ISL as the region where the vertical variation of the spatially averaged profiles of Reynolds shear stress is below $\pm$ 10 \%, as in \cite{cheng2007flow}. The profiles are reported in Fig. \ref{fig:ISLUH}a,c. The base of the ISL is assumed to be the top of the RSL found in Section \ref{sec:RSDepthI}.
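The $\pm$ 10 \% criterion lends itself to a simple algorithmic implementation. The following minimal sketch (Python; the shear-stress profile is synthetic, and the running-mean tolerance logic is one possible reading of the criterion rather than the exact routine used here) extends the candidate ISL upward from the RSL top:
\begin{verbatim}
import numpy as np

def isl_bounds(z, uw, z_rsl_top, tol=0.10):
    # Extend the candidate ISL upward from the RSL top while every point
    # stays within +/- tol of the running mean of -u'w'
    mask = z >= z_rsl_top
    z_c, s = z[mask], uw[mask]
    top = 1
    for i in range(1, len(s)):
        mean = s[:i + 1].mean()
        if np.any(np.abs(s[:i + 1] - mean) > tol * abs(mean)):
            break
        top = i + 1
    return z_c[0], z_c[top - 1], s[:top].mean()

# Synthetic -u'w' profile (m^2 s^-2): constant up to z = 0.20 m, then decaying
z = np.linspace(0.09, 0.30, 50)
uw = np.where(z < 0.20, 0.55, 0.55 * (0.30 - z) / 0.10)
z_lo, z_hi, uw_isl = isl_bounds(z, uw, z_rsl_top=0.092)
print("ISL from %.3f to %.3f m; -u'w'_ISL = %.3f" % (z_lo, z_hi, uw_isl))
\end{verbatim}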
Unlike the work by \cite{cheng2002near} and \cite{cheng2007flow}, where the determination of an ISL was found to be challenging, here an ISL of significant depth can be observed in both cases. \cite{jimenez2004turbulent} reported $\delta/h\approx80$ as the lower limit for a canonical ISL to exist. For the uniform-height model, $\delta/h\approx3$, well below the limit where an ISL should form. Nevertheless, Fig. \ref{fig:ISLUH} arguably shows an ISL forming for these cases. \cite{cheng2007flow} also argued that the ISL depth is not constant, which is also demonstrated here. As discussed in Section \ref{sec:BLDepthI}, it is likely that a deep ISL forms due to the large $\lambda_{p}$. {It is intuitive} to imagine that, in the limit of $\lambda_{p}\to\infty$, a new, raised smooth-wall surface (at $z=h$) would form, recovering the canonical turbulent boundary-layer structure. {Additionally, it was discussed in \S \ref{sec:MF_profiles} how both the mean velocity profiles and the turbulent statistics were found to conform to the canonical scaling for turbulent boundary layers, and how the roughness parameters were invariant with streamwise location once the flow was fully-developed. These findings strengthen the argument that, despite the relative roughness height considered in this work, an ISL does indeed develop over these urban surfaces.} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.7\textheight,keepaspectratio]{Dfig4III} \caption[]{{On the left: ISL shown with shear stress scatter plot and spatially averaged shear stress profile (black line). Black dotted lines show the ISL boundaries determined by the $\pm$ 10\% from the averaged ISL value. Blue line indicates $\overline{u'w'}$ determined by average ISL value, whilst the red line uses best-fit extrapolation to the $h_{avg}$ to determine $\overline{u'w'}$ (as in \cite{florens2013defining}). On the right: Eq. \ref{eq:U} is rearranged to the form of $z=z_0e^{kU/u_*}+d$, where $u_*$ is calculated using $\overline{u'w'}$. Using this rearranged form, $z_0$ represents the gradient and $d$ is the axis intercept. A least-mean-square fit is used to extrapolate onto the axis within the ISL boundaries. Case UHA for plots (a) and (b), and case UHS for plots (c) and (d).}} \label{fig:ISLUH} \end{figure} \subsubsection{The aerodynamic parameters}\label{sec:ISLAveragedDataI} The method used in this study to calculate $d$ and $z_{0}$ relies on fitting the logarithmic law. A common source of uncertainty in the fitting of the log-law is that it contains three free parameters, $u_{*}$, $d$ and $z_{0}$ \citep{Castro:2007,Segalini:2013}. More generally, other unknowns are given by the wake parameter, $\Pi$, and by not treating the Von K\'{a}rm\'{a}n `constant' as a constant \citep{castro2017measurements}; however, this is not relevant for the current discussion. To reduce the uncertainty in the fitting procedure, a common method involves first fixing $u_{*}$. This is done here with two methods: (i) by using the Reynolds shear stress value in the ISL to determine $u_{*}$, and (ii) by measuring the drag directly by use of pressure-tapped elements. In the first method, the vertical profile of the horizontally-averaged Reynolds shear stress is used in either of two ways. Firstly, one can simply compute the average value from all the points within the ISL; secondly, a best-fit extrapolation onto the height $z = d$ or $z = h$ can be used \citep{cheng2002near, cheng2007flow, florens2013defining}.
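As a concrete illustration of method (i) and of the fitting procedure described in the caption of Fig. \ref{fig:ISLUH} (a sketch only; the function names are ours, and the arrays are assumed to be already restricted to the ISL), $u_{*}$ can first be fixed from the ISL stress, and the rearranged log law $z=z_0e^{\kappa U/u_*}+d$ can then be fitted as a straight line:
\begin{verbatim}
import numpy as np

KAPPA = 0.4  # Von Karman constant, as adopted in this study

def ustar_from_isl(uw_isl):
    """Friction velocity from the ISL-averaged kinematic Reynolds
    shear stress: u* = sqrt(-mean(<u'w'>)), <u'w'> being negative."""
    return np.sqrt(-np.mean(uw_isl))

def fit_log_law(z, U, u_star):
    """Least-squares fit of z = z0*exp(kappa*U/u*) + d. With
    X = exp(kappa*U/u*), z is linear in X: slope = z0, intercept = d."""
    X = np.exp(KAPPA * U / u_star)
    z0, d = np.polyfit(X, z, 1)  # (slope, intercept) for degree 1
    return z0, d
\end{verbatim}
Fixing $u_{*}$ first reduces the log-law fit to a linear regression with only two free parameters, which is precisely the rationale discussed above.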
An independent method to compute $u_{*}$ is the use of a pressure-instrumented element, as described in \cite{cheng2002near}. Results are summarised in Table \ref{table:AeroPar}. Once the friction velocity has been determined, the aerodynamic parameters are computed by best-fitting the velocity profiles, taking the Von K\'{a}rm\'{a}n constant $\kappa = 0.4$, which is within the limits suggested by \cite{Marusic:2013} for high Reynolds number flows. {The discrepancy between} the friction velocities calculated via the averaged and the extrapolated ISL values is below 4 \% (Fig. \ref{fig:ISLUH}b,d). A further comparison extrapolating to $z = h$ \citep{florens2013defining} gives results more closely resembling the pressure-tapped results (i.e. discrepancies of $\approx$ 1.5 \% and $\approx$ 15 \% for $d$ and $z_{0}$ respectively), suggesting that the extrapolation technique does yield more robust results. \subsection{Effect of varying element height}\label{sec:effofvh} \subsubsection{Depth of the boundary layer} \label{sec:BLDepthII} In the varied-height models, similarly to the previous case, an initial investigation was carried out to determine the fetch at which the boundary layer is fully {developed}, and this was found to be at $x$ = 4470 mm ($\approx9$ $\delta$), as in Fig. \ref{fig:BLVH}. Although the $h_{avg}$ is the same in both models, the boundary layer depth is approximately double in the varied-height model ($\delta_{UHA}/h_{avg} = 3$ versus $\delta_{VHA}/h_{avg} = 6.25$). It is likely that the increased $\sigma_{h}$ also increases the drag generated by the surface, hence creating deeper boundary layers. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,height=0.3\textheight,keepaspectratio]{Dfig12I} \caption[]{Boundary layer depth found from vertical velocity profiles taken at different fetches over the models: (a) VHA, and (b) VHS.} \label{fig:BLVH} \end{figure} \subsubsection{{Mean and fluctuating velocity profiles}}\label{sec:DP_2} {As for the uniform-height cases previously discussed, mean and fluctuating velocity profiles are considered next. Mean streamwise velocity profiles for flow over the varied-height cases are shown in Fig. \ref{fig:DP2}a. As for the previous case, a region of near-linear behaviour is noticeable; however, both the degree of collapse onto the smooth-wall dashed line and the extent of the linear regime are reduced compared with the previous cases in Fig. \ref{fig:DP}a. Both these aspects are discussed in the following sections; they seem to indicate, respectively, that the methodology for the evaluation of the roughness parameters (see \S \ref{sec:ISDepthII}) has a higher uncertainty for surfaces with height heterogeneity, and that the roughness sublayer has grown at the expense of the inertial sublayer (see \S \ref{sec:RSDepthII}). It is still important to highlight that, even for these cases, data from different streamwise stations yielded converged roughness parameters; strong evidence of the fully-developed nature of the flow at the measuring stations.} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.7\textheight,keepaspectratio]{VH_Dia} \caption{{(a) Mean velocity profiles in inner scales for the VH cases. The dotted black line represents a smooth wall. (b) Diagnostic plot for the same cases.
The black solid line represents the fully-rough regime \citep{Castro:2013}, while the grey line is the smooth-wall limit \citep{Alfredsson:2011}.}} \label{fig:DP2} \end{figure} {The diagnostic plot for the streamwise fluctuations is shown in Fig. \ref{fig:DP2}b. Both varied-height cases are found to be self-similar; interestingly, however, they sit above the fully-rough trend line. This can be interpreted as an indication of the enhanced turbulent mixing due to the heterogeneous surface morphology, an aspect further discussed in \S \ref{sec:RSDepthII}. This discrepancy with previous data on fully-rough walls reported in \cite{Castro:2013} can also be attributed to the possibility that this fully-rough asymptote within the diagnostic plot is affected by several other roughness parameters, as highlighted originally in \cite{Castro:2013}. Two parameters of particular relevance to this work are the much higher relative roughness height (i.e. the much lower $\delta/h$) and the standard deviation in element height, $\sigma_h$, which distinguish this work from the data in \cite{Castro:2013}.} \subsubsection{The roughness sublayer} \label{sec:RSDepthII} In the varied-height experiment, 81 profiles were taken per repeating $3\times3$-module (Fig. \ref{fig:Modules}). These are shown in Fig. \ref{fig:RSLVH}a, b for the two configurations. There is a clear collapse of mean vertical profiles, indicating the existence of a limited RSL depth (Fig. \ref{fig:RSLVH}a,b). The RSL height is the same in both VHA and VHS configurations, which is of interest; it seems that the dominant feature in determining the height of the RSL is the tallest element in the $3\times3$-module, as a collapse is found for $z/h_{avg}\approx2.5$ ($z/h_{max}\approx1$). The evident collapse of vertical profiles in the RSL contradicts \cite{cheng2002near}, \cite{cheng2007flow} and \cite{jimenez2004turbulent}, who predicted an RSL region that expands and `squeezes' the ISL for rough walls with significantly tall and heterogeneous elements. For the VHA surface (Fig. \ref{fig:RSLVH}a) not all profiles collapse at the same height. Several vertical profiles converge near the average height of the elements (i.e. $z/h_{avg}$ = 1). The remaining profiles converge just above the maximum-height element ($z/h_{avg}$ = 2.5 or $z/h_{max}\approx1$). This trend possibly occurs for the same reasons discussed for the uniform-height cases. Where street canyons are aligned in the wind direction, the flow penetrates deeper into the canopy, allowing for higher velocities and enhanced mixing. Likewise, in the VHS configuration (Fig. \ref{fig:RSLVH}b), the flow does not seem to fully converge until above the tallest element ($z/h_{avg}$ = 2.85). In the varied-height situation, skimming no longer occurs, since there is a large spread of velocities below $h_{avg}$ and $h_{max}$. As seen in both the VHA and VHS cases, the large spread of velocities within the canopy indicates significant flow penetration, likely due to the increased $\sigma_{h}$. Considering this possibility, cities designed with large $\sigma_{h}$ could introduce much higher rates of mixing, with positive outcomes for urban air quality and natural ventilation. As argued by \cite{zaki2011aerodynamic}, $\lambda_{p}$ is no longer a good indicator of the flow regime when a large $\sigma_{h}$ is introduced, which runs counter to the studies conducted by \cite{sharma2019turbulent}.
The argument made here suggests that the surface parameters used to describe flow over uniform-height models are no longer sufficient to characterise canopies with significant height variations. \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth,height=0.7\textheight,keepaspectratio]{Dfig3} \caption[]{RSL shown with vertical velocity profiles over the varied-height models: (a) VHA and (b) VHS. Horizontally-averaged vertical velocity profiles of different experimental runs: (c) VHA; (d) VHS.} \label{fig:RSLVH} \end{figure} Figures \ref{fig:RSLVH}c and d show horizontally-averaged profiles within the RSL for the different experimental runs in the varied-height model. The spatially-averaged profiles taken in different areas of the array (across the wind tunnel span) collapse reasonably well onto each other, but there are some discrepancies around the region $1.5<z/h<2.5$. The profiles measured across the $y$-planes collapse better than the profiles taken over the $3\times3$-module. Possibly, it is more efficient and easier to get a representative averaged profile by taking measurements across the width of the model rather than over a single repeating module. Another reason for this better collapse could be that the average height of the elements in front of and behind the $y$-planes did not always match $h_{avg}$ -- see further discussion on this topic in \cite{cheng2002near}. However, the clear collapse in $y$-plane profiles indicates that $h_{avg}$ may not be the dominant lengthscale in models with large $\sigma_{h}$. Comparing uniform- to varied-height canopies across Fig. \ref{fig:RSLUH} and Fig. \ref{fig:RSLVH}, a prominent difference appears. For the varied-height models, the wind velocities vary greatly within the canopy; in contrast, they remain close to zero up to the very top of the canopy in the uniform-height experiments. This is likely due to the physics of the `skimming flow' regime \citep{grimmond1999aerodynamic}. Even below $h_{avg}$, there are much higher velocities in the varied-height canopy compared to the uniform-height one. The spread of velocity profiles in the varied-height model suggests that reasonable mixing can occur in the near-wall region for a tall and dense canopy (large $\lambda_{p}$ and $h_{avg}$), provided that $\sigma_{h}$ is significant. \subsubsection{The inertial sublayer}\label{sec:ISDepthII} Several thresholds have been used in the literature for defining this region; $\pm$ 5, $\pm$ 10, and $\pm$ 20 \% variations are all examined by \cite{cheng2007flow}. \cite{kanda2013new} used a region for logarithmic-law fitting which is a function of the tallest and the average roughness elements' heights. \cite{sharma2019turbulent} noted how the ISL depth in their work is a function of the spacing between the roughness elements. These ISL fitting methods, however controversial, can prove very useful in accurately estimating aerodynamic parameters. However, a convincing relationship between the roughness geometry and the upper limits of the ISL is still elusive, particularly for tall canopies. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,height=0.65\textheight,keepaspectratio]{Dfig4IIII} \caption[]{{On the left: ISL shown with shear stress scatter plot and spatially averaged shear stress profile (black line). Black dotted lines show the ISL boundaries determined by the $\pm$ 10\% from the averaged ISL value.
Blue line indicates $\overline{u'w'}$ determined by average ISL value, whilst the red line uses best-fit extrapolation to the $h_{avg}$ to determine $\overline{u'w'}$ (as in \cite{florens2013defining}). On the right: Eq. \ref{eq:U} is rearranged to the form of $z=z_0e^{kU/u_*}+d$, where $u_*$ is calculated using $\overline{u'w'}$. Using this rearranged form, $z_0$ represents the gradient and $d$ is the axis intercept. A least-mean-square fit is used to extrapolate onto the axis within the ISL boundaries. Case VHA for plots (a) and (b), and case VHS for plots (c) and (d).}} \label{fig:ISLVH} \end{figure} The depth of the ISL region, herein based on \cite{reading20193}, is shown in Fig. \ref{fig:ISLVH}. Previous studies predicted that an ISL region would vanish as the RSL rises through the boundary layer due to increasingly rough surfaces \citep{cheng2007flow, cheng2002near, rotach1999influence, jimenez2004turbulent, hagishima2009aerodynamic}. The clear collapse of the RSL in Fig. \ref{fig:RSLVH} contradicts this theory. Furthermore, the $\pm$ 10 \% variation definition does allow for an ISL with significant depth to be singled out, as seen in Fig. \ref{fig:ISLVH}. This may be an indication that the ISL does, indeed, exist. It is important to point out, however, that the Reynolds shear stresses never reach a full plateau, but show a small - yet visible - gradient across the ISL. This is possibly related to the fact that the boundary layer in the varied-height cases occupies a significant fraction of the wind tunnel height. The acceleration parameter is fairly small (9.48$\times10^{-8}$), indicating a zero pressure gradient; however, it must be acknowledged that it is nearly double the same quantity in the uniform-height case. An ISL within the $\pm$ 10 \% variation does form even though $\delta/h= 6.25$ is still well below the limits established by \cite{jimenez2004turbulent}. \cite{florens2013defining} argued that this ratio should be lowered, and additionally \cite{cheng2007flow} demonstrates that the ratio $\delta/h$ should not be the only criterion for the existence of the ISL. Furthermore, \cite{leonardi2010channel} argued that the Reynolds number previously thought to be necessary for the ISL development may be much lower than expected. The finding from the current work, therefore, offers support to the idea that \citeauthor{jimenez2004turbulent}'s (\citeyear{jimenez2004turbulent}) ratio is perhaps too stringent. The $\pm$ 10 \% horizontal variation may not, however, be the best-suited criterion for determining the ISL depth. The top limit of the ISL can be hard to determine, and it is here tempting to soften the criterion to include the region where the gradient of the slope is more noticeable (i.e. $2.5<z/h<5$ in Fig. \ref{fig:ISLVH}c). Indeed, in the uniform-height experiment discussed earlier, the $\pm$ 10 \% horizontal variation is quite liberal, and a more constrained $\pm$ 5 \% could be used. The depth of the ISL could perhaps be better determined by the change in slope in the spatially-averaged Reynolds shear stresses, in a procedure similar to that in \cite{kanda2013new}. The relevance of this discussion stems from the fact that, if sufficiently accurate $d$ and $z_{0}$ estimates can be derived using the constant-flux approximation, then wind-tunnel experiments could be greatly simplified, omitting pressure-tapped elements or floating force balances, without compromising the accuracy of the inner scaling.
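One possible numerical realisation of the slope-change idea mentioned above is sketched below; the gradient estimator, the reference level and the threshold are illustrative assumptions on our part, not the procedure of \cite{kanda2013new}.
\begin{verbatim}
import numpy as np

def isl_top_from_slope(z, uw_mean, z_rsl_top, slope_ratio=2.0):
    """Flag the ISL top as the first height above the RSL where the
    local gradient of the spatially averaged stress grows to
    `slope_ratio` times its magnitude just above the RSL top."""
    duw_dz = np.gradient(uw_mean, z)          # finite-difference slope
    above = np.where(z >= z_rsl_top)[0]
    ref = max(abs(duw_dz[above[0]]), 1e-12)   # avoid zero reference
    for i in above[1:]:
        if abs(duw_dz[i]) > slope_ratio * ref:
            return z[i]
    return z[above[-1]]                        # no clear slope change
\end{verbatim}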
A summary of the depths of the ISL and RSL for both cases of heterogeneous heights can be seen in Table \ref{table:AeroPar}. The heights of the RSL, the ISL, and the boundary layer seem to be strongly dependent on the height of the tallest element within the surfaces. Even with appropriate normalisation (e.g. by $h_{avg}$ or $h_{max}$), there is no apparent universal scaling between results from varied- and uniform-height experiments. The significant changes in the boundary layer, RSL, and ISL depths across the two cases seem to be uncorrelated. Thus, VHA and VHS surfaces could only be compared to models of similar standard deviation and average height. \subsubsection{The aerodynamic parameters}\label{sec:ISLAveragedDataII} The aerodynamic parameters and friction scaling were determined as in Section \ref{sec:ISLAveragedDataI}, but direct measurements of the friction velocity by an instrumented element were not available for these cases for reasons discussed in Section \ref{sec:Instu}. Here we follow the same procedure explained in Section \ref{sec:ISLAveragedDataI}, i.e. {extrapolating to $z = h$}. The results are reported in Fig. \ref{fig:ISLVH}. The aerodynamic parameters calculated herein are compared to the predictions of \citeauthor{kanda2013new}'s (\citeyear{kanda2013new}) and \citeauthor{MacDonald:1998}'s (\citeyear{MacDonald:1998}) morphometric methods, which considered elements with varied height, to assess their performance. There is a large discrepancy between the varied- and uniform-height results, where $d$ is found to increase 1.5 times from uniform- to varied-height, whilst $z_{0}$ increases nearly 3.5 times. For the uniform canopy, \citeauthor{kanda2013new}'s (\citeyear{kanda2013new}) method predicted a reasonable $d$ while \citeauthor{MacDonald:1998}'s (\citeyear{MacDonald:1998}) performed worse (discrepancies of $\approx$ 5 $\%$ and 30 $\%$, respectively). Neither produced a value for the normalised $z_0$ that was close to that obtained herein (with both methods predicting a roughness length almost an order of magnitude larger). For the varied-height experiment, the extrapolated results from the shear stress plot produced a value of $d$ above $h_{max}$, whilst the average shear stress method resulted in values below $h_{max}$ (see Fig. \ref{fig:ISLVH}). The methods of \cite{kanda2013new} and \cite{Hopkins:2012} both predict a value of $d$ between $h$ and $h_{max}$, differing by $\approx$ 36 $\%$ and $\approx$ 43 $\%$, respectively, from the values extrapolated here from the average shear. This discrepancy may be due to the fact that Tokyo (heavily used in \cite{kanda2013new}) does not have many super-tall structures, unlike Hong Kong (largely due to the seismic restrictions in the former). The lack of similarity of results continues when comparing $z_{0}$. The $z_{0}$ values calculated with the two methods were roughly twice as large as those evaluated here. The findings of this work strengthen the argument by \cite{kanda2013new} that new parameters are necessary to accurately characterise an urban rough wall and estimate its aerodynamic parameters, particularly when the elements are tall and heterogeneous in height. \cite{jiang2008systematic} and \cite{zaki2011aerodynamic} suggest that $d$ increases with $\sigma_{h}$, which is in line with the findings presented in this work. \cite{zaki2011aerodynamic} also demonstrated that the values of $d$ and $z_{0}$ almost double for cases with the same average height but significant $\sigma_{h}$ variation.
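For reference, the sketch below collects the basic geometric descriptors of an array together with a commonly quoted form of \citeauthor{MacDonald:1998}'s (\citeyear{MacDonald:1998}) morphometric relations. The functional form and the coefficients ($\alpha = 4.43$, $\beta = 1.0$, $C_D = 1.2$, as usually cited for staggered arrays) are assumptions to be checked against the original paper, not the implementation used for the comparisons in this work.
\begin{verbatim}
import numpy as np

KAPPA = 0.4  # Von Karman constant

def surface_stats(heights):
    """Basic geometric descriptors from the element heights."""
    h = np.asarray(heights, dtype=float)
    return {"h_avg": h.mean(), "h_max": h.max(), "sigma_h": h.std()}

def macdonald_1998(lam_p, lam_f, h, alpha=4.43, beta=1.0, c_d=1.2):
    """Commonly quoted Macdonald et al. (1998) relations for the
    displacement height d and roughness length z0 (staggered arrays);
    coefficient values are assumptions for illustration."""
    d = h * (1.0 + alpha**(-lam_p) * (lam_p - 1.0))
    z0 = h * (1.0 - d / h) * np.exp(
        -(0.5 * beta * (c_d / KAPPA**2) * (1.0 - d / h) * lam_f) ** -0.5)
    return d, z0
\end{verbatim}
Note that such relations contain no information about $\sigma_{h}$, which is consistent with their poor performance for the varied-height surfaces discussed above.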
\cite{xie2008large} also acknowledges that the tallest elements have a disproportionate contribution to the drag across a surface. \cite{kanda2013new} further suggested that both $\lambda_{p}$ and $\sigma_{h}$ have an effect on $d$ and $z_{0}$. The lack of similarity between the uniform- and varied-height results strengthens the above arguments, particularly the fact that a large $\sigma_{h}$ produces significant changes in the flow field. \cite{kanda2013new} considered $h_{max}$ as the upper limit of $d$, and also advocated the use of the standard deviation of element height ($\sigma_{h}$) as an important measure when calculating the aerodynamic parameters, which is corroborated by the findings of this work. {This seems to support the argument that} the sole use of $\sigma_{h}$ and $h_{avg}$ when designing an urban model is not representative of the flow physics in a real city, as suggested by \cite{kanda2013new}. Since there are usually a few very tall elements amongst many shorter elements, other important parameters to consider would be the distribution and the ratio of tall to short elements. \section{Conclusions} \label{sec:Conc} Wind tunnel experiments were conducted at the University of Surrey on four dense ($\lambda_p=0.44$) and tall ($\delta/h_{avg}\approx3$) urban arrays; two canopies were of uniform height and two of varied height, whilst the average height was kept fixed at $h_{avg}$ = 80 mm in both cases. All canopies aimed to represent idealised modern cities. The experiments examined the differences in the flow features across canopies with homogeneous and heterogeneous heights by means of laser Doppler anemometry and direct drag measurements. All cases examined were fully-developed, fully-rough surfaces in zero-pressure-gradient conditions. In the uniform-height experiments, the surfaces with a large $\lambda_{p}$ inhibited deep penetration of the wind into the canopy, thus hindering mixing at street level. For $\lambda_{p}$ = 0.44, a `skimming flow' regime seemingly occurred, and the rough surface started to recover smooth-like properties above the elements, producing an inertial sublayer region; this is in contrast to \cite{cheng2002near} and \cite{cheng2007flow}, who questioned the existence of this region as the roughness influence grows. {The RSL depth was found to extend approximately up to $z$ = 1.15 $h$, which is much shallower than the typically expected 2 - 5 $h$ \citep{Flack:2007}. Conventional turbulent scaling laws were found to be applicable to both mean velocity profiles and turbulent quantities despite the severity of the roughness.} For heterogeneous-height canopies, the usefulness of $\lambda_{p}$ in describing the wall properties became more questionable. Despite the same $h_{avg}$, the boundary layer grew to almost double the thickness of the uniform-height cases, highlighting the significance of $\sigma_{h}$ and $h_{max}$ in dictating the flow features. Surprisingly, even with such a heterogeneous canopy, a clear collapse of vertical profiles of Reynolds shear stress was observed, forming a coherent RSL extending to just above the height of the tallest element ($z/h_{avg}=2.85$ and $z/h_{max}\approx1$), {which is much shallower than anticipated. Moreover, more significant wind penetration was observed within the canopy when compared with the uniform-height array.
This is responsible for enhanced turbulent mixing, resulting in velocity fluctuations which are higher, throughout the boundary-layer depth, than previously reported for surfaces with homogeneous heights \citep{Castro:2013}.} These findings strengthen the need to include information regarding $\sigma_{h}$ and $h_{max}$ when describing flow over tall and heterogeneous canopies, supporting the conclusions highlighted in \cite{kanda2013new}. A comparison of the aerodynamic parameters for the cases with heterogeneous height considered herein with existing morphometric methods has highlighted the inaccuracies of the latter for tall canopies with significant variation in height between elements. Further work is therefore needed on this topic. \begin{acknowledgements} {The authors would like to acknowledge Dr. Paul Hayden at the University of Surrey for facilitating the tests and the Department of Mechanical Engineering Sciences for funding the manufacture of the experimental rig. We are also grateful to Jacques Andrieux, Harry Thorpe, and Amal Pawa, who manufactured the UH model during their undergraduate projects and carried out preliminary tests. Finally, we would like to acknowledge the support of the IMechE, via the Conference Travel Grant that allowed us to disseminate some of this work at the American Meteorological Society in 2020 (\url{https://ams.confex.com/ams/2020Annual/meetingapp.cgi/Paper/370772}).} The data used in this work is available at the following link: \url{https://dx.doi.org/10.17605/OSF.IO/ZW8NP}. \end{acknowledgements} \bibliographystyle{spbasic_updated}
\section{Introduction} Throughout this paper, we will write $K$ for any field of characteristic zero and $K[x]=K[x_1,x_2,\ldots,x_n]$ for the polynomial algebra over $K$ with $n$ indeterminates. Let $F=(F_1,F_2,\ldots,F_n):K^n\rightarrow K^n$ be a polynomial map, that is, $F_i\in K[x]$ for all $1\leq i\leq n$. Let $JF=(\frac{\partial F_i}{\partial x_j})_{n\times n}$ be the Jacobian matrix of $F$. For $H_i, u_{i-1}\in K[x]$, we abbreviate $\frac{\partial H_i}{\partial x_j}$ as $H_{ix_j}$ and $\frac{\partial u_{i-1}}{\partial x_j}$ as $u_{(i-1)x_j}$, and define $\deg_{x_i} f$ as the highest degree of the variable $x_i$ in $f$. $P_n(i,j)$ denotes the $n \times n$ elementary permutation matrix which interchanges coordinates $i$ and $j$, and $P_n(i(a),j)$ denotes the $n \times n$ elementary matrix which adds $a$ times the $i$-th row to the $j$-th row. The Jacobian Conjecture (JC), raised by O.H. Keller in 1939 in \cite{1}, states that a polynomial map $F: K^n\rightarrow K^n$ is invertible if the Jacobian determinant $\det JF$ is a nonzero constant. This conjecture has been attacked by many people from various research fields, but it is still open for all $n\geq 2$; only the case $n=1$ is obvious. For more information about the wonderful 70-year history, see \cite{2}, \cite{3}, and the references therein. In 1980, S.S.S. Wang (\cite{4}) showed that the JC holds for all polynomial maps of degree 2 in all dimensions (up to an affine transformation). A powerful result is the reduction to degree 3, due to H. Bass, E. Connell and D. Wright (\cite{2}) in 1982 and A. Yagzhev (\cite{5}) in 1980, which asserts that the JC is true if the JC holds for all polynomial maps $x+H$, where $H$ is homogeneous of degree 3. Thus, many authors have studied these maps, which led to the following problem. {\em (Homogeneous) dependence problem.} Let $H=(H_1,\ldots,H_n)\in K[x]$ be a (homogeneous) polynomial map of degree $d$ such that $JH$ is nilpotent and $H(0)=0$. Are $H_1,\ldots,H_n$ linearly dependent over $K$? The answer to the above problem is affirmative if rank$JH\leq 1$ (\cite{2}). In particular, this implies that the Dependence Problem has an affirmative answer in the case $n=2$. M. de Bondt and Van den Essen gave an affirmative answer to the above problem in the case that $H$ is homogeneous and $n=3$ (\cite{8}). With restrictions on the degree of $H$, more positive results are known. For cubic homogeneous $H$, the case $n = 4$ has been solved affirmatively by Hubbers in \cite{7}, using techniques of \cite{6}. For cubic homogeneous $H$ with rank$JH = 2$, the Dependence Problem has an affirmative answer for every $n$, because the missing case $n \geq 5$ follows from \cite[Theorem 4.3.1]{HKM}. For cubic $H$, the case $n = 3$ has been solved affirmatively as well, see e.g.\@ \cite[Corollary 4.6.6]{HKM}. For quadratic $H$, the Dependence Problem has an affirmative answer if rank$\allowbreak JH \leq 2$ (see \cite{B2} or \cite[Theorem 3.4]{12}), in particular if $n \leq 3$. For quadratic homogeneous $H$, the Dependence Problem has an affirmative answer in the case $n \leq 5$, and several authors contributed to that result. See \cite[Appendix A]{HKM} and \cite{XS5} for the case $n = 5$. The first counterexamples to the Dependence Problem were found by Van den Essen (\cite{9}, \cite[Theorem 7.1.7 (ii)]{3}). He constructs counterexamples for all $n \geq 3$.
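The hypotheses of the Dependence Problem are easy to experiment with symbolically. The following minimal sketch (Python with SymPy; the toy map is our own illustrative choice, not one of the counterexamples cited here) verifies that $JH$ is nilpotent and $H(0)=0$; the components of this particular map are trivially linearly dependent, in line with the affirmative cases above.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

# Toy map: JH is strictly upper triangular, hence nilpotent,
# and H(0) = 0; the third component is 0, so the components
# are (trivially) linearly dependent over K.
H = sp.Matrix([x2 + x3**2, x3, 0])

JH = H.jacobian(X)
assert JH**3 == sp.zeros(3, 3)                          # JH nilpotent
assert H.subs({x1: 0, x2: 0, x3: 0}) == sp.zeros(3, 1)  # H(0) = 0
\end{verbatim}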
In another paper (\cite{E}), he constructs a quadratic counterexample for $n = 4$, which can be generalized to arbitrary even degree (see \cite[Example 8.4.4]{3} for degree $4$). M. de Bondt was the first who found homogeneous counterexamples (\cite{10}). He constructed homogeneous counterexamples of degree $6$ for $n = 5$, homogeneous counterexamples of degree $4$ and $5$ for all $n \geq 6$, and cubic homogeneous counterexamples for all $n \geq 10$. Homogeneous counterexamples of larger degrees can be made as well, except for $n = 5$ and odd degrees. A cubic homogeneous counterexample for $n = 9$ can be found in \cite{SFGZ}, see also \cite[Section 4.2]{HKM}. In \cite{18}, Chamberland and Van den Essen classified all polynomial maps of the form $$H=\big(u(x_1,x_2),v(x_1,x_2,x_3),h(u(x_1,x_2),v(x_1,x_2,x_3))\big)$$ with $JH$ nilpotent. The author and Tang \cite{13} classified all polynomial maps of the form $H=\big(u(x_1,x_2),v(x_1,x_2,x_3),h(x_1,x_2,x_3)\big)$ under some conditions. In \cite{14}, the author and M. de Bondt classified all polynomial maps of the form $$H=\big(H_1(x_1,x_2,\ldots,x_n),H_2(x_1,x_2),H_3(x_1,x_2,H_1),\ldots,H_n(x_1,x_2,H_1)\big)$$ with $JH$ nilpotent. Casta\~{n}eda and Van den Essen classified in \cite{CE} all polynomial maps of the form $$H=\big(u(x_1,x_2),u_2(x_1,x_2,x_3),u_3(x_1,x_2,x_4),\ldots,u_{n-1}(x_1,x_2,x_n), h(x_1,x_2)\big)$$ with $JH$ nilpotent. A polynomial map of the form $(x_1,\ldots,x_{i-1},x_i+Q,x_{i+1},\ldots,x_n)$ is \emph{elementary} if $Q\in K[x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n]$. A polynomial map is called \emph{tame} if it is a finite composition of invertible linear maps and elementary maps. In this paper, we first classify all polynomial maps of the form $H=(u(x,y,z),\allowbreak v(x,y,z), h(x,y))$ in the case that $JH$ is nilpotent and $\deg_zv\leq 1$. Then, in Section 3, we extend these results to the case where $$H=\big(H_1(x_1,x_2,\ldots,x_n),\allowbreak b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_2),H_3(x_1,x_2),\ldots,H_n(x_1,x_2)\big).$$ In particular, we prove that $F=x+H$ is tame. \section{Polynomial maps of the form $H=(u(x,y,z),v(x,\allowbreak y,z), h(x,y))$} In this section, we classify polynomial maps of the form $H=(u(x,y,z),v(x,y,z),\allowbreak h(x,y))$ in the case that $JH$ is nilpotent and $\deg_zv(x,y,z)\leq 1$. Firstly, we prove in Lemma 2.2 that $u,v,h$ are linearly dependent in the case that $JH$ is nilpotent, $\deg_zv(x,y,z)=1$ and $\deg_zu=2$. Then we prove in Theorem 2.3 that $u,v,h$ are linearly dependent in the case that $JH$ is nilpotent, $\deg_zv=1$ and $\deg_zu> 2$. \begin{lem} \label{lem2.1} Let $u,v\in K[x,y,z]$, $u=u_dz^d+\cdots+u_1z+u_0$, $v=v_lz^l+\cdots+v_1z+v_0$ with $u_dv_l\neq 0$. If $u_x+v_y=0$ and $l\leq d$, then we have the following equations \begin{equation}\label{eq2.1} u_{dx}=\cdots=u_{(l+1)x}=0 \end{equation} and \begin{equation}\label{eq2.2} u_{ix}+v_{iy}=0 \end{equation} for $0\leq i\leq l$. \end{lem} \begin{proof} We obtain the conclusion by comparing the coefficients of $z^j$ in the equation $u_x+v_y=0$ for $0\leq j\leq d$. \end{proof} \begin{lem}\label{lem2.2} Let $H=(u(x,y,z),v(x,y,z),h(x,y))$ be a polynomial map with $\deg_zv(x,y,z)=1$. Assume that $H(0)=0$ and $\deg_zu=2$. If $JH$ is nilpotent, then $u,v,h$ are linearly dependent. \end{lem} \begin{proof} Since $JH$ is nilpotent, we have the following equations: \begin{eqnarray} u_x+v_y = 0,\label{eq2.3}\\ u_xv_y-v_xu_y-h_xu_z-h_yv_z=0,\label{eq2.4}\\ v_xh_yu_z-h_xv_yu_z+h_xu_yv_z-u_xh_yv_z = 0.\label{eq2.5} \end{eqnarray} Let $u,v$ be as in Lemma \ref{lem2.1}.
Since $\deg_zu=2$, $\deg_zv=1$, it follows from equation \eqref{eq2.3} and Lemma \ref{lem2.1} that \begin{equation}\label{eq2.6} u_{2x}=0 \end{equation} and \begin{equation}\label{eq2.7} u_{ix}+v_{iy}=0 \end{equation} for $0\leq i\leq 1$. It follows from equations \eqref{eq2.4} and \eqref{eq2.6} that \begin{equation}\label{eq2.8} \begin{split} (u_{1x}z+u_{0x})(v_{1y}z+v_{0y})-(v_{1x}z+v_{0x})(u_{2y}z^2+u_{1y}z+u_{0y})\\ -h_x(2u_2z+u_1)-h_yv_1=0. \end{split} \end{equation} We always view the polynomials as elements of $K[x,y,z]$ with coefficients in $K[x,y]$ when comparing the coefficients of the powers of $z$. Comparing the coefficients of $z^3$ and $z^2$ of the above equation, we have $v_{1x}u_{2y}=0$ and \begin{equation}\label{eq2.9} u_{1x}v_{1y}-v_{1x}u_{1y}-v_{0x}u_{2y}=0. \end{equation} Thus, we have $u_{2y}=0$ or $v_{1x}=0$.\\ (I) If $u_{2y}=0$, then we have $u_2\in K^*$ and \begin{equation}\label{eq2.10} u_{1x}v_{1y}-v_{1x}u_{1y}=0 \end{equation} by equations \eqref{eq2.6} and \eqref{eq2.9} respectively. It follows from equation \eqref{eq2.7} that $u_{1x}=-v_{1y}$. Thus, there exists $P\in K[x,y]$ such that $u_1=P_y$, $v_1=-P_x$. It follows from equation \eqref{eq2.10} that $P_{xy}^2-P_{xx}P_{yy}=0$. Then it follows from Lemma 2.1 in \cite{18} that \begin{equation}\label{eq2.11} u_1=P_y=bf(ax+by)+c_2 \end{equation} and \begin{equation}\label{eq2.12} v_1=-P_x=-af(ax+by)+c_1 \end{equation} for some $f(t)\in K[t]$ with $f(0)=0$, $a,b\in K$, $c_1,c_2\in K$. Then we have the following equations: \begin{equation}\label{eq2.13} u_{1x}v_{0y}+u_{0x}v_{1y}-v_{1x}u_{0y}-v_{0x}u_{1y}-2u_2h_x=0 \end{equation} and \begin{equation}\label{eq2.14} u_{0x}v_{0y}-v_{0x}u_{0y}-u_1h_x-v_1h_y=0 \end{equation} by comparing the coefficients of $z$ and $z^0$ of equation \eqref{eq2.8} respectively. It follows from equations \eqref{eq2.5} and \eqref{eq2.6} that \begin{equation}\label{eq2.15} \begin{split} [(v_{1x}z+v_{0x})h_y-h_x(v_{1y}z+v_{0y})](2u_2z+u_1)\\ +[h_x(u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})]v_1=0. \end{split} \end{equation} Comparing the coefficients of $z^2,z,z^0$ of equation \eqref{eq2.15}, we have the following equations: \begin{equation}\label{eq2.16} v_{1x}h_y-h_xv_{1y}=0, \end{equation} \begin{equation}\label{eq2.17} (v_{0x}h_y-h_xv_{0y})2u_2+(v_{1x}h_y-h_xv_{1y})u_1+(h_xu_{1y}-h_yu_{1x})v_1=0 \end{equation} and \begin{equation}\label{eq2.18} (v_{0x}h_y-h_xv_{0y})u_1+(h_xu_{0y}-h_yu_{0x})v_1=0. \end{equation} It follows from equations \eqref{eq2.12} and \eqref{eq2.16} that $af'\cdot (bh_x-ah_y)=0$. Thus, we have $a=0$ or $f'=0$ or $bh_x=ah_y$.\\ (i) If $a=0$, then \begin{equation}\label{eq2.19} u_1=bf(by)+c_2,~~~v_1=c_1\in K^*. \end{equation} It follows from equations \eqref{eq2.13} and \eqref{eq2.19} that $2u_2h_x=-b^2f'(by)v_{0x}$. Integrating both sides of the above equation with respect to $x$, we have \begin{equation}\label{eq2.20} h=-\frac{b^2}{2u_2}f'(by)v_0+\frac{c(y)}{2u_2} \end{equation} for some $c(y)\in K[y]$. Substituting equations \eqref{eq2.19} and \eqref{eq2.20} into equation \eqref{eq2.17}, we have the following equation: $$v_{0x}[-b^3f''(by)v_0+c'(y)-\frac{b^4}{2u_2}v_1\cdot (f'(by))^2]=0.$$ Thus, we have $v_{0x}=0$, or $f''(by)=0$ and $c'(y)=\frac{b^4}{2u_2}v_1\cdot (f'(by))^2$. If $v_{0x}=0$, then it follows from equations \eqref{eq2.13} and \eqref{eq2.19} that $h_x=0$. It follows from equation \eqref{eq2.18} that $v_1h_yu_{0x}=0$. Thus, we have $h_y=0$ or $u_{0x}=0$. If $u_{0x}=0$, then it follows from equation \eqref{eq2.14} that $h_y=0$.
If $h_y=0$, then $h=0$ because $h(0,0)=0$. Thus, $u,v,h$ are linearly dependent. If $f''(by)=0$, then $f'(by)\in K$ and $c'(y)=\frac{b^4}{2u_2}v_1(f'(by))^2\in K$. Let $l:=b^2f'(by)$ and $c:=\frac{l^2}{(2u_2)^2}v_1$. Then it follows from equation \eqref{eq2.19} that $v_1\in K^*$ and $u_1=ly+c_2$ for some $c_2\in K$. Since $h(0,0)=0$, it follows from equation \eqref{eq2.20} that \begin{equation}\label{eq2.21} h=-\frac{l}{2u_2}v_0+c\cdot y. \end{equation} It follows from equations \eqref{eq2.14} and \eqref{eq2.21} that \begin{equation}\label{eq2.22} v_{0x}u_{0y}-u_{0x}v_{0y}=\frac{l}{2u_2}u_1v_{0x}+\frac{l}{2u_2}v_1v_{0y}-cv_1. \end{equation} It follows from equations \eqref{eq2.18} and \eqref{eq2.21} that \begin{equation}\label{eq2.23} c\cdot u_1v_{0x}+v_1[-\frac{l}{2u_2}(v_{0x}u_{0y}-u_{0x}v_{0y})-c\cdot u_{0x}]=0. \end{equation} Substituting equation \eqref{eq2.22} into equation \eqref{eq2.23}, we have the following equation: $$c\cdot u_1v_{0x}+v_1[-\frac{l^2}{(2u_2)^2}u_1v_{0x}+\frac{lc}{2u_2}v_1-\frac{l^2}{(2u_2)^2}v_1v_{0y}-c\cdot u_{0x}]=0.$$ Since $c=\frac{l^2}{(2u_2)^2}v_1$, the above equation has the following form: $$\frac{lc}{2u_2}v_1-c(v_{0y}+u_{0x})=0.$$ Substituting equation \eqref{eq2.7} $(i=0)$ into the above equation, we have $\frac{lc}{2u_2}v_1=0$. That is, $lc=0$. Since $c=\frac{l^2}{(2u_2)^2}v_1$, we have $c=l=0$. It follows from equation \eqref{eq2.21} that $h=0$. Thus, $u,v,h$ are linearly dependent.\\ (ii) If $f'=0$, then $f=0$ because $f(0)=0$. That is, $u_1=c_2$ and $v_1=c_1\in K^*$. It follows from equation \eqref{eq2.13} that $h_x=0$. It follows from equation \eqref{eq2.17} that $v_{0x}h_y=0$. Thus, we have $h_y=0$ or $v_{0x}=0$.\\ If $h_y=0$, then $h=0$ because $h(0,0)=0$. Thus, $u,v,h$ are linearly dependent.\\ If $v_{0x}=0$, then it follows from equation \eqref{eq2.18} that $u_{0x}h_y=0$. That is, $u_{0x}=0$ or $h_y=0$. If $u_{0x}=0$, then it follows from equation \eqref{eq2.14} that $h_y=0$. Thus, in either case we have $h_y=0$, which reduces to the above case.\\ (iii) If $bh_x=ah_y$, then we may assume that $a\cdot f'\neq 0$; hence we have \begin{equation}\label{eq2.24} h_y=\frac{b}{a}h_x. \end{equation} Let $\bar{x}=ax+by$, $\bar{y}=y$. It follows from equation \eqref{eq2.24} that $h_{\bar{y}}=0$. That is, $h\in K[ax+by]$. It follows from equations \eqref{eq2.11}, \eqref{eq2.12}, \eqref{eq2.17} and \eqref{eq2.24} that \begin{equation}\label{eq2.25} h_x(\frac{b}{a}v_{0x}-v_{0y})=0. \end{equation} It follows from equations \eqref{eq2.18} and \eqref{eq2.24} that \begin{equation}\label{eq2.26} u_1h_x(\frac{b}{a}v_{0x}-v_{0y})+v_1h_x(u_{0y}-\frac{b}{a}u_{0x})=0. \end{equation} It follows from equations \eqref{eq2.25} and \eqref{eq2.26} that $h_x=0$, or $bv_{0x}=av_{0y}$ and $bu_{0x}=au_{0y}$. \\ If $h_x=0$, then it follows from equation \eqref{eq2.24} that $h_y=0$. Thus, we have $h=0$ because $h(0,0)=0$, so $u,v,h$ are linearly dependent.\\ If $bv_{0x}=av_{0y}$ and $bu_{0x}=au_{0y}$, then $v_0,u_0\in K[ax+by]$ for the same reason as $h$. Thus, it follows from equations \eqref{eq2.11}, \eqref{eq2.12} and \eqref{eq2.13} that $h_x=0$, which reduces to the former case.\\ (II) If $v_{1x}=0$, then it follows from equation \eqref{eq2.9} that \begin{equation}\label{eq2.27} u_{1x}v_{1y}-v_{0x}u_{2y}=0. \end{equation} It follows from equation \eqref{eq2.8} that \begin{equation}\label{eq2.28} (u_{1x}z+u_{0x})(v_{1y}z+v_{0y})-v_{0x}(u_{2y}z^2+u_{1y}z+u_{0y})-h_x(2u_2z+u_1)-h_yv_1=0.
\end{equation} Comparing the coefficients of $z^2,~z,~z^0$ of equation \eqref{eq2.28}, we have the following equations: \begin{equation}\label{eq2.29} u_{1x}v_{1y}-v_{0x}u_{2y} = 0, \end{equation} \begin{equation}\label{eq2.30} u_{1x}v_{0y}+u_{0x}v_{1y}-v_{0x}u_{1y}-2u_2h_x = 0, \end{equation} \begin{equation}\label{eq2.31} u_{0x}v_{0y}-v_{0x}u_{0y}-u_1h_x-v_1h_y = 0. \end{equation} It follows from equations \eqref{eq2.5} and \eqref{eq2.6} that $$[v_{0x}h_y-(v_{1y}z+v_{0y})h_x](2u_2z+u_1)+[h_x(u_{2y}z^2+u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})]v_1=0.$$ Comparing the coefficients of $z^2,~z$ and $z^0$ of the above equation, we have the following equations: \begin{equation}\label{eq2.32} h_x(v_1u_{2y}-2u_2v_{1y}) = 0, \end{equation} \begin{equation}\label{eq2.33} -v_{1y}h_xu_1+2u_2(v_{0x}h_y-v_{0y}h_x)+v_1(h_xu_{1y}-h_yu_{1x}) = 0 \end{equation} and \begin{equation}\label{eq2.34} u_1(v_{0x}h_y-v_{0y}h_x)+v_1(h_xu_{0y}-h_yu_{0x}) = 0. \end{equation} It follows from equation \eqref{eq2.32} that $h_x=0$ or $v_1u_{2y}=2u_2v_{1y}$. If $h_x=0$, then it follows from equation \eqref{eq2.33} that $h_y(2u_2v_{0x}-u_{1x}v_1)=0$. Thus, we have that $h_y=0$ or $2u_2v_{0x}=v_1u_{1x}$. If $h_y=0$, then $h=0$ because $h(0,0)=0$. Thus, $u,v,h$ are linearly dependent. If $2u_2v_{0x}=v_1u_{1x}$, then it follows from equation \eqref{eq2.7} $(i=1)$ that \begin{equation}\label{eq2.35} 2u_2v_{0x}=-v_1v_{1y}. \end{equation} Substituting equations \eqref{eq2.35} and \eqref{eq2.7} into equation \eqref{eq2.29}, we have the following equation: $v_{1y}(2u_2v_{1y}-v_1u_{2y})=0$. Thus, we have $v_{1y}=0$ or $2u_2v_{1y}=v_1u_{2y}$. \\ If $v_{1y}=0$, then it follows from equation \eqref{eq2.29} that $v_{0x}u_{2y}=0$. Thus, we have $v_{0x}=0$ or $u_{2y}=0$. If $u_{2y}=0$, then it reduces to (I). If $v_{0x}=0$, then it follows from equation \eqref{eq2.34} that $h_yu_{0x}=0$. Thus, we have $h_y=0$ or $u_{0x}=0$. If $u_{0x}=0$, then it follows from equation \eqref{eq2.31} that $h_y=0$. Therefore, we have $h=0$ because $h(0,0)=0$. Thus, $u,v,h$ are linearly dependent. If $2u_2v_{1y}=v_1u_{2y}$, then we have \begin{equation}\label{eq2.36} \frac{u_{2y}}{u_2}=2\frac{v_{1y}}{v_1}. \end{equation} Suppose that $u_{2y}v_{1y}\neq 0$. Then we have $$u_2=e^{\bar{c}(x)}v_1^2$$ by integrating the two sides of \eqref{eq2.36} with respect to $y$, where $\bar{c}(x)$ is a function of $x$. Since $u_2,v_1\in K[x,y]$, we have $e^{\bar{c}(x)}\in K(x)$. That is, $u_2=c(x)v_1^2$, where $c(x)\in K(x)$ is not equal to zero. Let $c(x)=\frac{c_1(x)}{c_2(x)}$ with $c_1(x), c_2(x)\in K[x]$ and $c_1(x)\cdot c_2(x)\neq 0$. Then it follows from equations \eqref{eq2.29} and \eqref{eq2.7} that $$v_{1y}(2c(x)v_1v_{0x}+v_{1y})=0.$$ That is, \begin{equation}\label{eq2.37} 2c_1(x)v_1v_{0x}=-c_2(x)v_{1y}. \end{equation} If $v_{0x}\neq 0$, then we have $v_{1y}=0$ by comparing the degrees of $y$ in equation \eqref{eq2.37}. Thus, we have $v_{0x}=0$. It follows from equation \eqref{eq2.37} that $v_{1y}=0$. This contradicts our assumption. Therefore, we have $v_{1y}=v_{0x}=0$. It follows from equation \eqref{eq2.36} that $u_{2y}=v_{1y}=0$, which reduces to (I). \end{proof} \begin{thm}\label{thm2.3} Let $H=(u(x,y,z),v(x,y,z),h(x,y))$ be a polynomial map with $\deg_zv(x,y,z)=1$. Assume that $H(0)=0$ and $\deg_zu\geq 2$. If $JH$ is nilpotent, then $u,v,h$ are linearly dependent. \end{thm} \begin{proof} Let $u$, $v$ be as in Lemma \ref{lem2.1}. If $\deg_zu=2$, then the conclusion follows from Lemma \ref{lem2.2}.
If $\deg_zu\geq 3$, then it follows from equation \eqref{eq2.3} and Lemma \ref{lem2.1} that \begin{equation}\label{eq2.38} u_{dx}=\cdots=u_{2x}=0 \end{equation} and that equation \eqref{eq2.7} holds. It follows from equations \eqref{eq2.4} and \eqref{eq2.38} that \begin{equation}\label{eq2.39} \begin{split} (u_{1x}z+u_{0x})(v_{1y}z+v_{0y})-(v_{1x}z+v_{0x})(u_{dy}z^d+u_{(d-1)y}z^{d-1}\\ +\cdots+u_{1y}z+u_{0y})-h_x(du_dz^{d-1}+\cdots+u_1)-v_1h_y=0. \end{split} \end{equation} We always view the polynomials as elements of $K[x,y,z]$ with coefficients in $K[x,y]$ when comparing the coefficients of the powers of $z$. Comparing the coefficients of $z^{d+1}$ and $z^d$ of equation \eqref{eq2.39}, we have the following equations: \begin{equation}\label{eq2.40} u_{dy}v_{1x}=0 \end{equation} and \begin{equation}\label{eq2.41} v_{1x}u_{(d-1)y}+v_{0x}u_{dy}=0. \end{equation} It follows from equations \eqref{eq2.40} and \eqref{eq2.41} that $v_{1x}=v_{0x}=0$ or $v_{1x}=u_{dy}=0$ or $u_{dy}=u_{(d-1)y}=0$.\\ (a) If $v_{1x}=v_{0x}=0$, then equation \eqref{eq2.39} has the following form: \begin{equation}\label{eq2.42} (u_{1x}z+u_{0x})(v_{1y}z+v_{0y})-h_x(du_dz^{d-1}+\cdots+u_1)-h_yv_1=0. \end{equation} If $d>3$, then $h_x=0$ by comparing the coefficient of $z^{d-1}$ of equation \eqref{eq2.42}. Thus, it follows from equations \eqref{eq2.5} and \eqref{eq2.38} that $h_y(u_{1x}z+u_{0x})=0$. Therefore, we have $h_y=0$ or $u_{1x}=u_{0x}=0$. If $u_{1x}=u_{0x}=0$, then it follows from equation \eqref{eq2.42} that $h_y=0$. Thus, we have $h=0$ because $h(0,0)=0$. Therefore, $u,v,h$ are linearly dependent.\\ If $d=3$, then comparing the coefficients of $z^2,z$ and $z^0$ of equation \eqref{eq2.42}, we have the following equations: \begin{equation}\label{eq2.43} u_{1x}v_{1y}-3u_3h_x = 0, \end{equation} \begin{equation}\label{eq2.44} u_{1x}v_{0y}+v_{1y}u_{0x}-2u_2h_x = 0 \end{equation} and \begin{equation}\label{eq2.45} u_{0x}v_{0y}-u_1h_x-v_1h_y = 0. \end{equation} It follows from equations \eqref{eq2.5} and \eqref{eq2.38} that \begin{equation}\label{eq2.46} \begin{split} -h_x(v_{1y}z+v_{0y})(3u_3z^2+2u_2z+u_1)+[h_x(u_{3y}z^3+u_{2y}z^2+u_{1y}z+u_{0y})\\ -h_y(u_{1x}z+u_{0x})]v_1=0. \end{split} \end{equation} Comparing the coefficients of $z^3$ of the above equation, we have $h_x(3v_{1y}u_3-u_{3y}v_1)=0$. Thus, we have $h_x=0$ or $3u_3v_{1y}=v_1u_{3y}$. (a1) If $h_x=0$, then it follows from equation \eqref{eq2.46} that $h_y(u_{1x}z+u_{0x})=0$. That is, $h_y=0$ or $u_{1x}=u_{0x}=0$. If $u_{1x}=u_{0x}=0$, then it follows from equation \eqref{eq2.42} that $h_y=0$. Thus, we have $h=0$ because $h(0,0)=0$. Therefore, $u,v,h$ are linearly dependent. (a2) If $3u_3v_{1y}=v_1u_{3y}$, then \begin{equation}\label{eq2.47} \frac{u_{3y}}{u_3}=3\frac{v_{1y}}{v_1}. \end{equation} If $v_{1y}=0$, then $u_{3y}=0$. It follows from equation \eqref{eq2.43} that $h_x=0$. Thus, it follows from the arguments of (a1) that $u,v,h$ are linearly dependent. We may therefore assume that $u_{3y}v_{1y}\neq 0$. Then we have that $u_3=e^{\bar{d}(x)}v_1^3$ by integrating the two sides of equation \eqref{eq2.47} with respect to $y$, where $\bar{d}(x)$ is a function of $x$. Since $u_3,v_1\in K[x,y]$, we have $e^{\bar{d}(x)}\in K(x)$. That is, \begin{equation}\label{eq2.48} u_3=d(x)v_1^3 \end{equation} with $d(x)\in K(x)$, $d(x)\neq 0$. Let $d(x)=\frac{d_1(x)}{d_2(x)}$ with $d_1(x),d_2(x)\in K[x]$ and $d_1(x)\cdot d_2(x)\neq 0$.
Substituting equations \eqref{eq2.7} and \eqref{eq2.48} into equation \eqref{eq2.43}, we have \begin{equation}\label{eq2.49} -3d_1(x)v_1^3h_x=d_2(x)v_{1y}^2. \end{equation} Then we have $v_{1y}=0$ by comparing the degrees of $y$ in equation \eqref{eq2.49}. It follows from equation \eqref{eq2.49} that $h_x=0$. Thus, it reduces to (a1).\\ (b) If $v_{1x}=u_{dy}=0$, then it follows from equation \eqref{eq2.38} that $u_d\in K^*$. Thus, we have the following equation: \begin{equation}\label{eq2.50} -v_{0x}u_{iy}-(i+1)u_{i+1}h_x=0 \end{equation} by comparing the coefficients of $z^i$ of equation \eqref{eq2.39} for $i=d-1,d-2,\ldots,3$. Comparing the coefficients of $z^2,z$ and $z^0$ of equation \eqref{eq2.39}, we have the following equations: \begin{equation}\label{eq2.51} u_{1x}v_{1y}-v_{0x}u_{2y}-3u_3h_x = 0, \end{equation} \begin{equation}\label{eq2.52} u_{1x}v_{0y}+v_{1y}u_{0x}-v_{0x}u_{1y}-2u_2h_x = 0 \end{equation} and \begin{equation}\label{eq2.53} u_{0x}v_{0y}-v_{0x}u_{0y}-u_1h_x-v_1h_y = 0. \end{equation} It follows from equations \eqref{eq2.5} and \eqref{eq2.38} that \begin{equation}\label{eq2.54} \begin{split} [v_{0x}h_y-h_x(v_{1y}z+v_{0y})](du_dz^{d-1}+(d-1)u_{d-1}z^{d-2}+\cdots+u_1)\\ +[h_x(u_{(d-1)y}z^{d-1}+\cdots+u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})]v_1=0. \end{split} \end{equation} Then we have $h_xv_{1y}=0$ by comparing the coefficients of $z^d$ of equation \eqref{eq2.54}. That is, $h_x=0$ or $v_{1y}=0$. (b1) If $h_x=0$, then equation \eqref{eq2.54} has the following form: \begin{equation}\label{eq2.55} v_{0x}h_y(du_dz^{d-1}+(d-1)u_{d-1}z^{d-2}+\cdots+u_1)-h_y(u_{1x}z+u_{0x})v_1=0. \end{equation} Comparing the coefficients of $z^{d-1}$ of equation \eqref{eq2.55}, we have that $v_{0x}h_y=0$. That is, $v_{0x}=0$ or $h_y=0$. If $v_{0x}=0$, then it reduces to (a). If $h_y=0$, then $h=0$ because $h(0,0)=0$. Thus, $u,v,h$ are linearly dependent. (b2) If $v_{1y}=0$, then comparing the coefficients of $z^{d-1}$ and $z^0$ of equation \eqref{eq2.54}, we have \begin{equation}\label{eq2.56} (v_{0x}h_y-h_xv_{0y})du_d+h_xu_{(d-1)y}v_1=0 \end{equation} and \begin{equation}\label{eq2.57} (v_{0x}h_y-h_xv_{0y})u_1+(h_xu_{0y}-h_yu_{0x})v_1=0. \end{equation} It follows from equation \eqref{eq2.50} $(i=d-1)$ for $d>3$ and from equation \eqref{eq2.51} for $d=3$ that \begin{equation}\label{eq2.58} h_x=-\frac{1}{du_d}v_{0x}u_{(d-1)y}. \end{equation} Substituting equation \eqref{eq2.58} into equation \eqref{eq2.56}, we have the following equation: $$v_{0x}[h_y-\frac{v_1}{d^2u_d^2}u_{(d-1)y}^2+\frac{1}{du_d}u_{(d-1)y}v_{0y}]=0$$ for $d\geq 3$. Thus, we have $v_{0x}=0$ or \begin{equation}\label{eq2.59} h_y=\frac{v_1}{d^2u_d^2}u_{(d-1)y}^2-\frac{1}{du_d}u_{(d-1)y}v_{0y}. \end{equation} If $v_{0x}=0$, then it reduces to (a). Otherwise, substituting equations \eqref{eq2.58}, \eqref{eq2.59} into equation \eqref{eq2.53}, we have that \begin{equation}\label{eq2.60} u_{0x}v_{0y}-v_{0x}u_{0y}=-\frac{u_1}{du_d}v_{0x}u_{(d-1)y}+\frac{v_1^2}{d^2u_d^2}u_{(d-1)y}^2-\frac{v_1}{du_d}u_{(d-1)y}v_{0y}. \end{equation} Substituting equations \eqref{eq2.58}, \eqref{eq2.59} into equation \eqref{eq2.57}, we have that \begin{equation}\label{eq2.61} \frac{u_1v_1}{d^2u_d^2}u_{(d-1)y}^2v_{0x}-\frac{v_1^2}{d^2u_d^2}u_{(d-1)y}^2u_{0x}+\frac{v_1}{du_d}u_{(d-1)y}(u_{0x}v_{0y}-v_{0x}u_{0y})=0. \end{equation} Then we have $\frac{v_1^3}{d^3u_d^3}u_{(d-1)y}^3=0$ by substituting equations \eqref{eq2.7}, \eqref{eq2.60} into equation \eqref{eq2.61}. That is, $u_{(d-1)y}=0$. It follows from equation \eqref{eq2.50} $(i=d-1)$ that $h_x=0$.
Then it reduces to (b1).\\ (c) If $u_{dy}=u_{(d-1)y}=0$, then it follows from equations \eqref{eq2.4} and \eqref{eq2.38} that \begin{equation}\label{eq2.62} \begin{split} (u_{1x}z+u_{0x})(v_{1y}z+v_{0y})-(v_{1x}z+v_{0x})(u_{(d-2)y}z^{d-2}+\cdots\\ +u_{1y}z+u_{0y})-h_x(du_dz^{d-1}+\cdots+u_1)-h_yv_1=0. \end{split} \end{equation} Comparing the coefficients of $z^j$ of equation \eqref{eq2.62} for $j=d-2,\ldots,3$, we have the following equations: \begin{equation}\label{eq2.63} -v_{1x}u_{(j-1)y}-v_{0x}u_{jy}-(j+1)u_{j+1}h_x=0. \end{equation} Comparing the coefficients of $z^{d-1}, z^2, z$ and $z^0$ of equation \eqref{eq2.62}, we have the following equations: \begin{equation}\label{eq2.64} -v_{1x}u_{(d-2)y}-du_dh_x=0, \end{equation} \begin{equation}\label{eq2.65} u_{1x}v_{1y}-v_{1x}u_{1y}-v_{0x}u_{2y}-3u_3h_x = 0, \end{equation} \begin{equation}\label{eq2.66} u_{1x}v_{0y}+v_{1y}u_{0x}-v_{1x}u_{0y}-v_{0x}u_{1y}-2u_2h_x = 0 \end{equation} and \begin{equation}\label{eq2.67} u_{0x}v_{0y}-v_{0x}u_{0y}-u_1h_x-v_1h_y = 0. \end{equation} If $d=3$, then equations \eqref{eq2.63} and \eqref{eq2.64} are not available. If $d=4$, then equation \eqref{eq2.63} is not available. It follows from equations \eqref{eq2.5} and \eqref{eq2.38} that \begin{equation}\label{eq2.68} \begin{split} [(v_{1x}z+v_{0x})h_y-h_x(v_{1y}z+v_{0y})](du_dz^{d-1}+(d-1)u_{d-1}z^{d-2}+\cdots\\ +u_1)+[h_x(u_{(d-2)y}z^{d-2}+\cdots+u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})]v_1=0. \end{split} \end{equation} Comparing the coefficients of $z^d$ and $z^{d-1}$ of equation \eqref{eq2.68}, we have the following equations: \begin{equation} \nonumber \left\{ \begin{aligned} &du_d(v_{1x}h_y-h_xv_{1y})=0,\\ &(d-1)u_{d-1}(v_{1x}h_y-h_xv_{1y})+du_d(v_{0x}h_y-h_xv_{0y}) = 0. \end{aligned} \right. \end{equation} That is, \begin{eqnarray}\label{eq2.69} v_{1x}h_y-h_xv_{1y}=0,~~~ v_{0x}h_y-h_xv_{0y}=0. \end{eqnarray} Then equation \eqref{eq2.68} has the following form: \begin{equation}\label{eq2.70} h_x(u_{(d-2)y}z^{d-2}+\cdots+u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})=0. \end{equation} Then we have $h_xu_{ky}=0$ by comparing the coefficients of $z^k$ of equation \eqref{eq2.70} for $2\leq k\leq d-2$. If $d\geq 4$, then we have $h_x=0$ or $u_{(d-2)y}=\cdots=u_{2y}=0$. If $u_{(d-2)y}=\cdots=u_{2y}=0$, then it follows from equation \eqref{eq2.64} that $h_x=0$. If $h_x=0$, then it follows from equation \eqref{eq2.70} that $h_y=0$ or $u_{1x}=u_{0x}=0$. If $u_{1x}=u_{0x}=0$, then it follows from equation \eqref{eq2.64} that $v_{1x}=0$ or $u_{(d-2)y}=0$. If $v_{1x}=0$, then it reduces to (b). If $u_{(d-2)y}=0$, then it follows from equation \eqref{eq2.63} that $u_{(d-3)y}=\cdots=u_{2y}=0$. It follows from equation \eqref{eq2.65} that $u_{1y}=0$. Then we have $u_{0y}=0$ by substituting the above equations into equation \eqref{eq2.66}. It follows from equation \eqref{eq2.67} that $h_y=0$. Thus, we have $h=0$ because $h(0,0)=0$. Therefore, $u,v,h$ are linearly dependent. If $d=3$, then equation \eqref{eq2.70} has the following form: $$h_x(u_{1y}z+u_{0y})-h_y(u_{1x}z+u_{0x})=0.$$ That is, \begin{eqnarray}\label{eq2.71} u_{1y}h_x-h_yu_{1x}=0,~~~ u_{0y}h_x-h_yu_{0x}=0. \end{eqnarray} If $h=0$, then the conclusion follows. Suppose that $h\neq 0$ in the following arguments. It follows from equations \eqref{eq2.69} and \eqref{eq2.71} that \begin{eqnarray}\label{eq2.72} v_{1y}:v_{1x}=h_y:h_x=v_{0y}:v_{0x},~~~u_{1y}:u_{1x}=h_y:h_x=u_{0y}:u_{0x}.
\end{eqnarray} Substituting \eqref{eq2.72} into equations \eqref{eq2.65}, \eqref{eq2.66}, \eqref{eq2.67} respectively, we have the following equations: \begin{equation}\label{eq2.73} v_{0x}u_{2y}+3u_3h_x=0, \end{equation} \begin{equation}\label{eq2.74} 2u_2h_x=0 \end{equation} and \begin{equation}\label{eq2.75} u_1h_x+v_1h_y=0. \end{equation} It follows from equation \eqref{eq2.74} that $u_2=0$ or $h_x=0$. If $u_2=0$, then it follows from equation \eqref{eq2.73} that $h_x=0$. If $h_x=0$, then it follows from equation \eqref{eq2.75} that $h_y=0$. Thus, we have $h=0$ because $h(0,0)=0$. This contradicts our assumption. \end{proof} \begin{prop} Let $H=(u(x,y,z),v(x,y,z),h(x,y))$ be a polynomial map with $\deg_zv(x,y,z)=1$. Assume that $H(0)=0$ and $\deg_zu\geq 2$. If $JH$ is nilpotent, then there exists $T\in \operatorname{GL}_3(K)$ such that $$T^{-1}HT=(a_2(z)f(a_1(z)x+a_2(z)y)+c_1(z),-a_1(z)f(a_1(z)x+a_2(z)y)+c_2(z),0)$$ for some $a_i(z), c_i(z)\in K[z]$ and $f(t)\in K[z][t]$. \end{prop} \begin{proof} It follows from Theorem \ref{thm2.3} that $u$, $v$, $h$ are linearly dependent over $K$. Then the conclusion follows from Theorem 7.2.25 in \cite{3} or Corollary 1.1 in \cite{18}. \end{proof} Next, we only need to consider polynomial maps $H$ whose components are linearly independent over $K$. \begin{lem}\label{lem2.4} Let $H=(u,v,h)$ be a polynomial map in $K[x,y,z]$ with nilpotent Jacobian matrix. Let $d$ be the degree of $(u,v,h)$ with respect to $z$, and let $(u_d,v_d,h_d)$ be the coefficient of $z^d$ of $(u,v,h)$. Then we can apply a linear transformation to obtain $v_d\in K$ and $u_d\in K[y]$, leaving the degree of $h$ with respect to $z$ unchanged. \end{lem} \begin{proof} Taking the coefficients of $z^d$ and $z^{2d}$ of the trace condition and the $2\times 2$ minors condition respectively, we obtain that $J_{x,y}(u_d,v_d)$ is nilpotent. By way of a linear transformation, we obtain that $J_{x,y}(u_d,v_d)$ is upper triangular. This yields the claims. \end{proof} \begin{thm}\label{thm2.5} Let $H=(u(x,y,z),v(x,y,z),h(x,y))$ be a polynomial map with $\deg_zv(x,y,z)\leq 1$. Assume that $H(0)=0$ and the components of $H$ are linearly independent over $K$. If $JH$ is nilpotent, then there exists $T\in \operatorname{GL}_3(K)$ such that $T^{-1}HT$ has the form of Theorem 2.4 for $n=3$ in \cite{14}. \end{thm} \begin{proof} Since $u,v,h$ are linearly independent, it follows from Theorem \ref{thm2.3} that $\deg_z u\leq 1$. Then it follows from Lemma \ref{lem2.4} that there exists $T_1\in \operatorname{GL}_3(K)$ such that $T_1^{-1}HT_1=(u_1z+u_0,v_1z+v_0,h(x,y))$ with $u_1\in K[y]$, $v_1\in K$, $u_0,~v_0\in K[x,y]$. Taking the coefficients of $z$ of the $2\times 2$ minors condition and the $3\times 3$ minor condition of $J(T_1^{-1}HT_1)$ respectively, we obtain that $v_{0x}u_{1y}=0$ and $v_1h_xu_{1y}=0$. Thus, we have $u_1\in K$ or $v_1=0$ or $v_{0x}=h_x=0$. As for the former two cases, there exists $T_2\in \operatorname{GL}_3(K)$ such that $T_2^{-1}T_1^{-1}HT_1T_2=(\bar{u}(x,y,z),\bar{v}(x,y),h(x,y))$. Thus, the conclusion follows from Theorem 2.4 in \cite{14}. If $v_{0x}=h_x=0$, then the determinant of $J(T_1^{-1}HT_1)$ is $v_1u_{0x}h_y$, which is 0. Thus, we have $u_{0x}=0$ or $h_y=0$. If $u_{0x}=0$, then we have $h_y=0$ by considering the $2\times 2$ minors condition of $J(T_1^{-1}HT_1)$. If $h_y=0$, then $h=0$ because $H(0)=0$. Thus, $u,v,h$ are linearly dependent over $K$. This contradicts the condition that the components of $H$ are linearly independent over $K$.
\end{proof} \section{A generalization of the form of $H$} In this section, we first prove in Lemma \ref{lem3.2} that $\deg H_1^{(d)}\leq 1$, where $H_1^{(d)}$ is the leading homogeneous part with respect to $x_3,\ldots,x_n$ of $H_1$, for maps of the form $H=(H_1(x_1,x_2,\ldots,x_n),b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_1,x_2),H_3(x_1,x_2),\ldots,H_n(x_1,x_2))$ such that $JH$ is nilpotent and the components of $H$ are linearly independent. Then we classify in Theorem \ref{thm3.3} all polynomial maps of the form $$H=(H_1(x_1,x_2,\ldots,x_n),b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_2),H_3(x_1,x_2),\ldots,H_n(x_1,x_2)),$$ where $JH$ is nilpotent and the components of $H$ are linearly independent. \begin{lem}\label{lem3.1} Let $H$ be a polynomial map over $K$ of the form $$(H_1(x_1,x_2,\ldots,x_n),H_2(x_1,x_2,\ldots,x_n),H_3(x_1,x_2),\ldots,H_n(x_1,x_2))$$ where $H_2=b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_1,x_2)$, $b_3,\ldots,b_n\in K$, $H_2^{(0)}\in K[x_1,x_2]$. Write $h_2=b_3H_3+\cdots+b_nH_n$. If $JH$ is nilpotent, then \begin{eqnarray} H_{1x_1}+H_{2x_2}=0,\label{eq3.1}\\ (H_{2x_2})^2+H_{1x_2}H_{2x_1}+H_{1x_3}H_{3x_1}+\cdots+H_{1x_n}H_{nx_1}+h_{2x_2}=0,\label{eq3.2}\\ \begin{split} H_{1x_3}(H_{2x_1}H_{3x_2}-H_{2x_2}H_{3x_1})+H_{1x_4}(H_{2x_1}H_{4x_2}-H_{2x_2}H_{4x_1})+\cdots\\ +H_{1x_n}(H_{2x_1}H_{nx_2}-H_{2x_2}H_{nx_1})-(H_{1x_1}h_{2x_2}-H_{1x_2}h_{2x_1})=0,\label{eq3.3}\\ \end{split} \end{eqnarray} \begin{equation} \begin{split} H_{1x_3}(H_{3x_1}h_{2x_2}-H_{3x_2}h_{2x_1})+H_{1x_4}(H_{4x_1}h_{2x_2}-H_{4x_2}h_{2x_1})+\cdots\\ +H_{1x_n}(H_{nx_1}h_{2x_2}-H_{nx_2}h_{2x_1})=0.\label{eq3.4}\\ \end{split} \end{equation} \end{lem} \begin{proof} Equation \eqref{eq3.1} follows from the fact that the trace of $JH$ is zero. Since the sum of the principal minor determinants of size 2 of $JH$ is zero as well, we deduce that $$-H_{1x_1}H_{2x_2}+H_{1x_2}H_{2x_1}+H_{1x_3}H_{3x_1}+\cdots+H_{1x_n}H_{nx_1}+h_{2x_2}=0.$$ Adding $H_{2x_2}$ times equation \eqref{eq3.1} to it yields equation \eqref{eq3.2}. Since the sum of the principal minor determinants of size 3 of $JH$ is zero as well, and the only non-vanishing principal minors of size 3 are those involving the first two rows (because $H_3,\ldots,H_n$ do not depend on $x_3,\ldots,x_n$), we deduce equation \eqref{eq3.3}. Since the sum of the principal minor determinants of size 4 of $JH$ is zero as well, we deduce that \begin{equation} \nonumber \begin{split} (H_{3x_1}H_{4x_2}-H_{4x_1}H_{3x_2})(b_4H_{1x_3}-b_3H_{1x_4})+\cdots +(H_{3x_1}H_{nx_2}-\\H_{nx_1}H_{3x_2})(b_nH_{1x_3}-b_3H_{1x_n})+(H_{4x_1}H_{5x_2}-H_{5x_1}H_{4x_2})(b_5H_{1x_4}-b_4H_{1x_5})\\ +\cdots+(H_{(n-1)x_1}H_{nx_2}-H_{nx_1}H_{(n-1)x_2})(b_nH_{1x_{n-1}}-b_{n-1}H_{1x_n})=0.\\ \end{split} \end{equation} We view the above equation in another way: grouping together the terms which contain $H_{1x_i}$ for $3\leq i\leq n$, we obtain equation \eqref{eq3.4}. \end{proof} \begin{lem}\label{lem3.2} Let $H$ be a polynomial map over $K$ of the form $$(H_1(x_1,x_2,\ldots,x_n),H_2(x_1,x_2,\ldots,x_n),H_3(x_1,x_2),\ldots,H_n(x_1,x_2)),$$ where $H(0)=0$ and $H_2(x_1,\ldots,x_n)=b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_1,x_2)$, $b_3,\ldots,b_n\in K$, $H_2^{(0)}\in K[x_1,x_2]$. If $JH$ is nilpotent and the components of $H$ are linearly independent over $K$, then $\deg H_1^{(d)}\leq 1$, where $H_1^{(d)}$ is the leading homogeneous part with respect to $x_3,\ldots,x_n$ of $H_1$. Moreover, if $\deg H_1^{(d)}= 1$, then $H_1^{(d)}\in K[x_3,x_4,\ldots,x_n]$. \end{lem} \begin{proof} If $b_3=\cdots=b_n=0$, then the conclusion follows from Lemma 2.3 in \cite{14}. Assume that at least one of $b_3,\ldots,b_n$ is non-zero in the following arguments. Write $H_1=H_1^{(d)}+H_1^{(d-1)}+\cdots+H_1^{(1)}+H_1^{(0)}$, where $H_1^{(i)}$ is the homogeneous part of degree $i$ with respect to $x_3,\ldots,x_n$ of $H_1$.
Comparing the homogeneous parts of degree $i$ with respect to $x_3,\ldots,x_n$ in equation \eqref{eq3.1} for $0\leq i\leq d$, we have the following equations: \begin{eqnarray}\label{eq3.5} (H_1^{(d)})_{x_1}=\cdots=(H_1^{(1)})_{x_1}=0,~~~ (H_1^{(0)})_{x_1}+(H_2^{(0)})_{x_2}=0. \end{eqnarray} (a) If $H_{2x_1}=0$, then $(H_1^{(d)})_{x_2}\cdot h_{2x_1}=0$ by focusing on the leading homogeneous part with respect to $x_3, x_4, \ldots, x_n$ of equation \eqref{eq3.3}. Thus, we have $(H_1^{(d)})_{x_2}=0$ or $h_{2x_1}=0$. If $h_{2x_1}=0$, then equation \eqref{eq3.4} has the following form: $$h_{2x_2}[H_{1x_3}H_{3x_1}+H_{1x_4}H_{4x_1}+\cdots+H_{1x_n}H_{nx_1}]=0.$$ That is, $h_{2x_2}=0$ or $H_{1x_3}H_{3x_1}+H_{1x_4}H_{4x_1}+\cdots+H_{1x_n}H_{nx_1}=0$. If $h_{2x_2}=0$, then $h_2=0$ because $H(0)=0$. Thus, $H_3,\ldots,H_n$ are linearly dependent. This contradicts the fact that the components of $H$ are linearly independent over $K$. Therefore, we have \begin{equation}\label{eq3.6} H_{1x_3}H_{3x_1}+H_{1x_4}H_{4x_1}+\cdots+H_{1x_n}H_{nx_1}=0. \end{equation} Substituting equation \eqref{eq3.6} into equations \eqref{eq3.2}, \eqref{eq3.3} respectively, we have the following equations: \begin{eqnarray} (H_{2x_2})^2+h_{2x_2}=0,\label{eq3.7}\\ H_{1x_1}h_{2x_2}=0.\label{eq3.8} \end{eqnarray} Substituting equation \eqref{eq3.1} into equation \eqref{eq3.8}, we have the following equation: \begin{equation} H_{2x_2}h_{2x_2}=0.\label{eq3.9} \end{equation} It follows from equations \eqref{eq3.7} and \eqref{eq3.9} that $H_{2x_2}=h_{2x_2}=0$. Thus, we have $h_2=0$ because $H(0)=0$. Thus, $H_3,\ldots,H_n$ are linearly dependent. This contradicts the fact that the components of $H$ are linearly independent over $K$. Thus, we have that $$(H_1^{(d)})_{x_2}=0.$$ (b) If $H_{2x_1}\neq 0$, then we have $(H_1^{(d)})_{x_2}=0$ by considering the leading homogeneous part with respect to $x_3, x_4, \ldots, x_n$ of equation \eqref{eq3.2}.\\ Now assume that $d>1$. Focusing on the homogeneous part of degree $d-1$ with respect to $x_3, x_4, \ldots, x_n$ of equations \eqref{eq3.2}, \eqref{eq3.3} and \eqref{eq3.4} respectively, we deduce that \begin{equation}\label{eq3.10} (H_1^{(d-1)})_{x_2}H_{2x_1}+(H_1^{(d)})_{x_3}H_{3x_1}+\cdots+(H_1^{(d)})_{x_n}H_{nx_1}=0, \end{equation} \begin{equation}\label{eq3.11} \begin{split} H_{2x_1}((H_1^{(d)})_{x_3}H_{3x_2}+\cdots+(H_1^{(d)})_{x_n}H_{nx_2})-H_{2x_2}((H_1^{(d)})_{x_3}H_{3x_1}+\\ \cdots+(H_1^{(d)})_{x_n}H_{nx_1})+(H_1^{(d-1)})_{x_2}h_{2x_1}=0 \end{split} \end{equation} and \begin{equation}\label{eq3.12} \begin{split} h_{2x_2}((H_1^{(d)})_{x_3}H_{3x_1}+\cdots+(H_1^{(d)})_{x_n}H_{nx_1})=\\ h_{2x_1}((H_1^{(d)})_{x_3}H_{3x_2}+\cdots+(H_1^{(d)})_{x_n}H_{nx_2}). \end{split} \end{equation} As $(H_1^{(d-1)})_{x_1}=0$, we have $H_1^{(d-1)}\in K[x_2,x_3,\ldots,x_n]$.
Viewing $H_1^{(d-1)}$ as a polynomial in $K[x_3,\ldots,x_n]$ with coefficients in $K[x_2]$, we have the following equations: \begin{equation}\label{eq3.13} e_3H_{3x_1}+e_4H_{4x_1}+\cdots+e_nH_{nx_1}=-q(x_2)H_{2x_1}, \end{equation} \begin{equation}\label{eq3.14} H_{2x_1}(e_3H_{3x_2}+\cdots+e_nH_{nx_2})+q(x_2)h_{2x_1}=H_{2x_2}(e_3H_{3x_1}+\cdots+e_nH_{nx_1}), \end{equation} \begin{equation}\label{eq3.15} h_{2x_2}(e_3H_{3x_1}+\cdots+e_nH_{nx_1})=h_{2x_1}(e_3H_{3x_2}+\cdots+e_nH_{nx_2}) \end{equation} by comparing the coefficients of any monomial $x_3^{j_3}\cdots x_n^{j_n}$ with $j_3+\cdots+j_n=d-1$ of equations \eqref{eq3.10}, \eqref{eq3.11} and \eqref{eq3.12} respectively, where $q(x_2)\in K[x_2]$, $e_3,\ldots,e_n\in K$ and at least one of $e_3,\ldots,e_n$ is non-zero. Since $H_{2x_1}=(H_2^{(0)})_{x_1}$, we have that \begin{equation}\label{eq3.16} e_3H_3+e_4H_4+\cdots+e_nH_n=-q(x_2)H_2^{(0)}+g(x_2) \end{equation} by integrating the two sides of equation \eqref{eq3.13} with respect to $x_1$, where $g(x_2)\in K[x_2]$ and $g(0)=0$. Differentiating the two sides of equation \eqref{eq3.16} with respect to $x_2$, we have that \begin{equation}\label{eq3.17} e_3H_{3x_2}+e_4H_{4x_2}+\cdots+e_nH_{nx_2}=-q'(x_2)H_2^{(0)}-q(x_2)(H_2^{(0)})_{x_2}+g'(x_2). \end{equation} Since $H_{2x_1}=(H_2^{(0)})_{x_1}$, we have \begin{equation}\label{eq3.18} -q'(x_2)H_2^{(0)}(H_2^{(0)})_{x_1}+g'(x_2)(H_2^{(0)})_{x_1}+q(x_2)h_{2x_1}=0 \end{equation} by substituting equation \eqref{eq3.17} into equation \eqref{eq3.14}. Thus, we have \begin{equation}\label{eq3.19} -\frac{1}{2}q'(x_2)(H_2^{(0)})^2+g'(x_2)H_2^{(0)}+q(x_2)h_2=\bar{g}(x_2) \end{equation} by integrating the two sides of equation \eqref{eq3.18} with respect to $x_1$, where $\bar{g}(x_2)\in K[x_2]$ and $\bar{g}(0)=0$. Substituting equations \eqref{eq3.13}, \eqref{eq3.17} into equation \eqref{eq3.15}, we have that \begin{equation}\label{eq3.20} q(x_2)(H_2^{(0)})_{x_1}h_{2x_2}=(q(x_2)(H_2^{(0)})_{x_2}+q'(x_2)H_2^{(0)}-g'(x_2))h_{2x_1}. \end{equation} If $q(x_2)=0$, then it follows from equation \eqref{eq3.13} that \begin{equation}\label{eq3.21} e_3H_{3x_1}+\cdots+e_nH_{nx_1}=0. \end{equation} Substituting equation \eqref{eq3.21} into equation \eqref{eq3.14}, we have that $$H_{2x_1}(e_3H_{3x_2}+\cdots+e_nH_{nx_2})=0.$$ That is, $H_{2x_1}=0$ or $e_3H_{3x_2}+\cdots+e_nH_{nx_2}=0$. If $e_3H_{3x_2}+\cdots+e_nH_{nx_2}=0$, then $e_3H_3+\cdots+e_nH_n=0$ because $H(0)=0$. Thus, $H_3,\ldots, H_n$ are linearly dependent. This contradicts the fact that the components of $H$ are linearly independent over $K$. If $H_{2x_1}=0$, then we assume without loss of generality that $$(H_1^{(d)})_{x_3}, (H_1^{(d)})_{x_4},\ldots,(H_1^{(d)})_{x_k}$$ are linearly independent over $K$, and $(H_1^{(d)})_{x_{k+1}}=(H_1^{(d)})_{x_{k+2}}=\cdots=(H_1^{(d)})_{x_n}=0$. It is easy to see that $k\geq 3$. Then $(H_1^{(d)})_{x_3}, (H_1^{(d)})_{x_4},\ldots,(H_1^{(d)})_{x_k}$ are linearly independent over $K(x_1,x_2)$ as well. So if we focus on the leading homogeneous part with respect to $x_3,x_4,\ldots,x_n$ of equation \eqref{eq3.4}, we infer that $$H_{ix_1}h_{2x_2}-H_{ix_2}h_{2x_1}=0$$ for each $i\in \{3,4,\ldots,k\}$. Consequently, $H_i$ is algebraically dependent on $h_2$ over $K$ for each $i\in \{3,4,\ldots,k\}$, and there exists an $f\in K[x_1,x_2]$ such that $H_i,h_2\in K[f]$ for each $i\in \{3,4,\ldots,k\}$.
So if we focus on the leading homogeneous part with respect to $x_3,\ldots,x_n$ of equation \eqref{eq3.2}, we infer that $H_{3x_1},\ldots, H_{kx_1}$ are linearly dependent over $K(x_3,\ldots,x_n)$, and hence over $K$. Since the sub-matrix of rows $3,4,\ldots,k$ of $JH$ has rank 1, the linear dependence over $K$ of the entries of its first column implies that its rows, and hence $H_3,\ldots,H_k$, are linearly dependent over $K$. This contradicts the fact that the components of $H$ are linearly independent over $K$. So we can assume that $q(x_2)\neq 0$ in the following arguments.\\ It follows from equation \eqref{eq3.18} that \begin{equation}\label{eq3.22} h_{2x_1}=(q(x_2))^{-1}(q'(x_2)H_2^{(0)}(H_2^{(0)})_{x_1}-g'(x_2)(H_2^{(0)})_{x_1}). \end{equation} Substituting equation \eqref{eq3.22} into equation \eqref{eq3.20}, we have that \begin{equation}\label{eq3.23} h_{2x_2}=(q(x_2))^{-2}(q(x_2)(H_2^{(0)})_{x_2}+q'(x_2)H_2^{(0)}-g'(x_2))(q'(x_2)H_2^{(0)}-g'(x_2)). \end{equation} Differentiating the two sides of equation \eqref{eq3.19} with respect to $x_2$, we have that \begin{equation}\label{eq3.24} \begin{split} -\frac{1}{2}q''(x_2)(H_2^{(0)})^2-q'(x_2)H_2^{(0)}(H_2^{(0)})_{x_2}+g''(x_2)H_2^{(0)}\\ +g'(x_2)(H_2^{(0)})_{x_2}+q'(x_2)h_2+q(x_2)h_{2x_2}=\bar{g}'(x_2). \end{split} \end{equation} We can express $h_2$ and $h_{2x_2}$ from equations \eqref{eq3.19} and \eqref{eq3.23} respectively; substituting these expressions into equation \eqref{eq3.24}, we have the following equation: \begin{equation}\label{eq3.25} \begin{split} (-\frac{1}{2}q(x_2)q''(x_2)+\frac{3}{2}(q'(x_2))^2)(H_2^{(0)})^2+(q(x_2)g''(x_2)-3g'(x_2)q'(x_2))H_2^{(0)}\\ =q(x_2)\bar{g}'(x_2)-q'(x_2)\bar{g}(x_2)-(g'(x_2))^2. \end{split} \end{equation} If $(H_2^{(0)})_{x_1}=0$, then it follows from equation \eqref{eq3.22} that $h_{2x_1}=0$. Thus, we have that $H_3,\ldots,H_n$ are linearly dependent by following the arguments of case (a) above. This contradicts the fact that the components of $H$ are linearly independent over $K$. Thus, we have $(H_2^{(0)})_{x_1}\neq 0$. Comparing the degree of $x_1$ of equation \eqref{eq3.25}, we have \begin{equation}\label{eq3.26} q(x_2)q''(x_2)=3(q'(x_2))^2 \end{equation} and \begin{equation}\label{eq3.27} q(x_2)g''(x_2)=3g'(x_2)q'(x_2). \end{equation} Thus, we have $q'(x_2)=0$ by comparing the coefficients of the highest degree of $x_2$ of equation \eqref{eq3.26}. Therefore, it follows from equation \eqref{eq3.27} that $g''(x_2)=0$. Then equation \eqref{eq3.25} has the following form: $$q(x_2)\bar{g}'(x_2)=(g'(x_2))^2.$$ Let $c:=q(x_2)\in {K}^*$. Since $H(0)=0$, we have $g(0)=\bar{g}(0)=0$. So we assume that $g(x_2)=\tilde{c}x_2$ for some $\tilde{c} \in K$. Then $\bar{g}(x_2)=\frac{\tilde{c}^2}{c}x_2$. It follows from equation \eqref{eq3.19} that \begin{equation}\label{eq3.28} h_2=\frac{\tilde{c}}{c^2}(-c H_2^{(0)}+\tilde{c}x_2). \end{equation} If $\tilde{c}=0$, then it follows from equation \eqref{eq3.28} that $h_2=0$. Thus, $H_3,\ldots,H_n$ are linearly dependent. This contradicts the fact that the components of $H$ are linearly independent over $K$. If $\tilde{c}\neq 0$, then let $r=\frac{\tilde{c}}{c}\neq 0$; we have $b_3H_3+\cdots+b_nH_n+rH_2^{(0)}=r^2x_2$. Let \[\bar{T}=\left( \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \\ 0 & \frac{1}{r} & -\frac{b_3}{r}& \cdots & -\frac{b_n}{r}\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots &\vdots &\ddots &\vdots\\ 0 & 0 & 0 & \cdots & 1\\ \end{array} \right).\] Then we have that $\bar{T}^{-1}H\bar{T}=(\bar{H}_1, r\cdot x_2, \bar{H}_3, \ldots, \bar{H}_n)$.
Since $JH$ is nilpotent, we have that $J(\bar{T}^{-1}H\bar{T})$ is nilpotent. However, the entry in the second row and second column of the matrix $(J(\bar{T}^{-1}H\bar{T}))^m$ is $r^m$, which is non-zero. This contradicts the fact that the matrix $J(\bar{T}^{-1}H\bar{T})$ is nilpotent. Thus, $d\leq 1$. If $d=1$, then we have $H_1^{(d)}\in K[x_3,\ldots,x_n]$ by following the arguments of cases (a) and (b) of Lemma \ref{lem3.2} above. \end{proof} \begin{thm}\label{thm3.3} Let $H$ be a polynomial map over $K$ of the form $$(H_1(x_1,x_2,\ldots,x_n),H_2(x_2,\ldots,x_n), H_3(x_1,x_2),\ldots,H_n(x_1,x_2)),$$ where $H(0)=0$ and $H_2(x_2,\ldots,x_n)=b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_2)$, $b_3,\ldots,b_n\in K$, $H_2^{(0)}\in K[x_2]$. If $JH$ is nilpotent and the components of $H$ are linearly independent over $K$, then there exists a $T\in \operatorname{GL}_n(K)$ such that $T^{-1}\circ H\circ T$ has the form of Theorem 2.4 in \cite{14}. \end{thm} \begin{proof} If $b_3=\cdots=b_n=0$, then the conclusion follows from Theorem 2.4 in \cite{14}. We can assume that at least one of $b_3,\ldots,b_n$ is non-zero in the following arguments. It follows from Lemma \ref{lem3.2} that $\deg H_1^{(d)}\leq 1$, where $H_1^{(d)}$ is the leading homogeneous part with respect to $x_3,\ldots,x_n$ of $H_1$. If $\deg H_1^{(d)}=0$, then let $\tilde{T}=P_n(1,2)$; $\tilde{T}^{-1}H\tilde{T}$ is of the form of Theorem 2.4 in \cite{14} and $J(\tilde{T}^{-1}H\tilde{T})$ is nilpotent. Thus, the conclusion follows from Theorem 2.4 in \cite{14}. If $\deg H_1^{(d)}= 1$, then let $H_1=a_3x_3+a_4x_4+\cdots+a_nx_n+H_1^{(0)}(x_1,x_2)$, where $H_1^{(0)}\in K[x_1,x_2]$. Since $JH$ is nilpotent, it follows from Lemma \ref{lem3.1} that we have the following equations: \begin{eqnarray} (H_1^{(0)})_{x_1}+(H_2^{(0)})_{x_2}=0,\label{eq3.31}\\ (H_1^{(0)})_{x_1}(H_2^{(0)})_{x_2}-h_{1x_1}-h_{2x_2}=0,\label{eq3.32}\\ -(H_2^{(0)})_{x_2}h_{1x_1}-((H_1^{(0)})_{x_1}h_{2x_2}-(H_1^{(0)})_{x_2}h_{2x_1})=0,\label{eq3.33}\\ h_{2x_2}h_{1x_1}-h_{2x_1}h_{1x_2}=0\label{eq3.34} \end{eqnarray} where $h_1=\sum_{i=3}^na_iH_i$, $h_2=\sum_{i=3}^nb_iH_i$. Clearly, $h_1\cdot h_2\neq 0$. It follows from equation \eqref{eq3.34} that there exists $f\in K[x_1,x_2]$ such that $h_1, h_2\in K[f]$. We have the following equation: \begin{equation}\label{eq3.35} H_1^{(0)}=-(H_2^{(0)})'\cdot x_1+W(x_2) \end{equation} by integrating the two sides of equation \eqref{eq3.31} with respect to $x_1$, where $W(x_2)\in K[x_2]$. It follows from equation \eqref{eq3.32} that \begin{equation}\label{eq3.36} h_{1x_1}=-h_{2x_2}-[(H_2^{(0)})']^2. \end{equation} Replacing $h_{1x_1}$ in equation \eqref{eq3.33} by means of equation \eqref{eq3.36}, we have the following equation: \begin{equation}\label{eq3.37} 2(H_2^{(0)})'h_{2x_2}+(H_1^{(0)})_{x_2}h_{2x_1}=-[(H_2^{(0)})']^3. \end{equation} Substituting equation \eqref{eq3.35} for $(H_1^{(0)})_{x_2}$ in equation \eqref{eq3.37}, we have the following equation: \begin{equation}\label{eq3.38} h_2'(f)[2(H_2^{(0)})'\cdot f_{x_2}-(H_2^{(0)})''\cdot x_1\cdot f_{x_1}+W'(x_2)f_{x_1}]=-[(H_2^{(0)})']^3. \end{equation} If $f_{x_1}=0$, then $h_{1x_1}=h_{2x_1}=0$. It follows from equation \eqref{eq3.33} that $(H_1^{(0)})_{x_1}\cdot h_{2x_2}\allowbreak =0$. Thus, we have $(H_1^{(0)})_{x_1}=0$ or $h_{2x_2}=0$. If $(H_1^{(0)})_{x_1}=0$, then it follows from equation \eqref{eq3.32} that $h_{2x_2}=0$. If $h_{2x_2}=0$, then we have $h_2=0$ because $H(0)=0$. Thus, $H_3,\ldots,H_n$ are linearly dependent.
This contradicts the fact that the components of $H$ are linearly independent over $K$. If $f_{x_1}\neq 0$, then it follows from equation \eqref{eq3.38} that $h_2'(f)\in K$, or $(H_2^{(0)})'=0$ and $W'(x_2)=0$. If $(H_2^{(0)})'=0$ and $W'(x_2)=0$, then we have $H_2^{(0)}=0$ and $H_1^{(0)}=W(x_2)=0$ because $H(0)=0$. Thus, it follows from equations \eqref{eq3.32} and \eqref{eq3.34} that $J(h_1,h_2)$ is nilpotent, from which we deduce that there is $c\in K$ such that $h_2=ch_1$. If $h_2'(f)\in K$, we have $h_2(f)=c_2f$ for some $c_2\in K$ because $H(0)=0$. It follows from equation \eqref{eq3.36} that \begin{equation}\label{eq3.39} h_1'(f)\cdot f_{x_1}=-c_2f_{x_2}-[(H_2^{(0)})']^2. \end{equation} Let $l=\deg_{x_1}f$. Then $l\geq 1$. If $l\geq 2$, then $h_1'(f)\in K$ by comparing the degree of $x_1$ of equation \eqref{eq3.39}. Since $H(0)=0$, we have $h_1=c_1f$ for some $c_1\in K$. If $l=1$, then write $f=\alpha_1(x_2)\cdot x_1+\alpha_0(x_2)$ with $\alpha_1, \alpha_0\in K[x_2]$ and $\alpha_1\neq 0$; we have $\deg_fh_1'\leq 1$ by comparing the degree of $x_1$ of equation \eqref{eq3.39}. Let $h_1'=t_2f+c_1$ for some $c_1,t_2\in K$. We view the polynomials as elements of $K[x_2][x_1]$, i.e. as polynomials in $x_1$ with coefficients in $K[x_2]$, when comparing the coefficients of $x_1^j$. Comparing the coefficients of $x_1$ of equation \eqref{eq3.39}, we have that $$t_2\cdot \alpha_1^2=-c_2\alpha_1'.$$ Thus, we have that $\alpha_1'=0$ and $t_2=0$ by comparing the degree of $x_2$ of the above equation. Thus, we have $$h_1=c_1f.$$ If $c_1=0$, then $h_1=0$. Thus, $H_3,\ldots,H_n$ are linearly dependent. This contradicts the fact that the components of $H$ are linearly independent over $K$. If $c_1\neq 0$, then $h_2=\frac{c_2}{c_1}h_1$. Since the components of $H$ are linearly independent over $K$, we have $b_i=\frac{c_2}{c_1}a_i$ for all $3\leq i\leq n$. Let $\hat{T}=P_n(1(\frac{c_2}{c_1}),2)$. Then $\hat{T}^{-1}H\hat{T}$ is of the form of Theorem 2.4 in \cite{14}, and $J(\hat{T}^{-1}H\hat{T})$ is nilpotent. Thus, the conclusion follows. \end{proof} \begin{cor} Let $F=x+H$, where $H$ is as in Theorem \ref{thm3.3}. If $JH$ is nilpotent and the components of $H$ are linearly independent over $K$, then $F$ is tame. \end{cor} \begin{proof} The conclusion follows from Theorem \ref{thm3.3} and the arguments of section 3 in \cite{14}. \end{proof} \section{Some Remarks} In order to classify all polynomial maps with nilpotent Jacobians of the form $$(H_1(x_1,x_2,\ldots,x_n),H_2(x_1,x_2,\ldots,x_n),H_3(x_1,x_2),\ldots,H_n(x_1,x_2)),$$ where $H(0)=0$, $H_2(x_1,\ldots,x_n)=b_3x_3+\cdots+b_nx_n+H_2^{(0)}(x_1,x_2)$, and the components of $H$ are linearly independent over $K$, it suffices to classify all polynomial maps in dimension 4 of the form $$\tilde{h}=(z+\tilde{h}_1(x,y),w+\tilde{h}_2(x,y),\tilde{h}_3(x,y),\tilde{h}_4(x,y)),$$ where $J\tilde{h}$ is nilpotent, $\tilde{h}_i\in K[x,y]$ for all $1\leq i\leq 4$, and the components of $\tilde{h}$ are linearly independent over $K$. Jacobian nilpotency of $\tilde{h}$ is just the Keller condition on $x + t \tilde{h}$, so we have that $$(x + t \tilde{h}_1 (x , y) + t z, y + t \tilde{h}_2 (x , y) + t w, z + t \tilde{h}_3 (x , y), w + t \tilde{h}_4 (x , y))$$ is a Keller map over $K[t]$. This is equivalent to the statement that $$(x + t \tilde{h}_1 (x , y) - t^2 \tilde{h}_3 (x , y), y + t \tilde{h}_2 (x , y) - t^2 \tilde{h}_4 (x , y))$$ is a Keller map over $K[t]$. By Moh's result \cite{Moh}, these maps are invertible over $K(t)$, hence over $K[t]$, if the degree is at most 100.
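To make the equivalence of the two Keller conditions explicit, here is the underlying determinant computation, recorded as a routine verification for the reader's convenience. Subtracting $t$ times the third row from the first row and $t$ times the fourth row from the second row of the Jacobian matrix of the four-dimensional map leaves the determinant unchanged and yields a block lower-triangular matrix: \[\det\left( \begin{array}{cccc} 1+t\tilde{h}_{1x}-t^2\tilde{h}_{3x} & t\tilde{h}_{1y}-t^2\tilde{h}_{3y} & 0 & 0\\ t\tilde{h}_{2x}-t^2\tilde{h}_{4x} & 1+t\tilde{h}_{2y}-t^2\tilde{h}_{4y} & 0 & 0\\ t\tilde{h}_{3x} & t\tilde{h}_{3y} & 1 & 0\\ t\tilde{h}_{4x} & t\tilde{h}_{4y} & 0 & 1\\ \end{array} \right),\] which equals the Jacobian determinant of $(x + t \tilde{h}_1 - t^2 \tilde{h}_3, y + t \tilde{h}_2 - t^2 \tilde{h}_4)$. Hence the four-dimensional map is a Keller map over $K[t]$ if and only if the two-dimensional one is.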
In \cite{Wang}, it is claimed that there are errors in Moh's work, but also that these errors can be repaired. We can find the form of $\tilde{h}$ if the components of $\tilde{h}$ are linearly dependent. More precisely, we have the following theorem. \begin{thm} Let $\tilde{h}=(z+\tilde{h}_1(x,y),w+\tilde{h}_2(x,y),\tilde{h}_3(x,y),\tilde{h}_4(x,y))$, where $\tilde{h}_i\in K[x,y]$ for $1\leq i\leq 4$. If $J\tilde{h}$ is nilpotent and the components of $\tilde{h}$ are linearly dependent, then there exists $T\in \operatorname{GL}_4(K)$ such that $T^{-1}\tilde{h}T=(0,w,0,0)+\tilde{H}$, where $\tilde{H}$ has the form of Theorem 3.1 in \cite{14}. \end{thm} \begin{proof} Since the components of $\tilde{h}$ are linearly dependent, there exists $\lambda \in K$ such that $\tilde{h}_4=\lambda \tilde{h}_3$. Let $T_1=P_4(3(\lambda),4)$. Then $T_1^{-1}\tilde{h}T_1=(z+\tilde{h}_1,w+\lambda z+\tilde{h}_2,\tilde{h}_3,0)$. Let $T_2=P_4(1(\lambda),2)$; then $T_2^{-1}(T_1^{-1}\tilde{h}T_1)T_2=(z+\hat{h}_1,w+\hat{h}_2-\lambda \hat{h}_1,\hat{h}_3,0)$, where $\hat{h}_i=\tilde{h}_i(x,y+\lambda x)$. Since $J((T_1T_2)^{-1}\tilde{h}T_1T_2)$ is nilpotent, we have that $J\hat{H}$ is nilpotent, where $\hat{H}=(z+\hat{h}_1,\hat{h}_2-\lambda \hat{h}_1,\hat{h}_3,0)$. Thus, the conclusion follows from Theorem 3.1 in \cite{14}. \end{proof} {\bf{Acknowledgement}}: The author is very grateful to Michiel de Bondt, who gave some good suggestions, especially the proof of Theorem 2.5 and the setup of Section 4.
\section{Introduction} Arctic sea ice extent has been declining sharply for the past three decades (see minimum sea ice extent in September 1991 and September 2018 in Fig. \ref{arctic-seas}). Variations of the sea ice cover have been the cause of notable changes to meteorological and oceanographic conditions in the Arctic Ocean \citep[e.g.][]{Thomson2014,Stopa2016a,Liu2016,Thomson2016,Waseda2018,casassea}. Emerging open waters provide longer fetches for surface waves to build up more energy and increase in magnitude \citep{Thomson2014,Thomson2016}. Concurrently, an increase of wave height profoundly impacts the already weak sea ice cover by enhancing breakup and melting processes in a feedback mechanism \citep{Thomson2016,dolatshah2018hydroelastic,passerotti2020omae}. In addition, coastlines and coastal communities have been impacted by intensifying erosion, with coastline retreat rates up to 25\,m per year \citep[e.g.][]{Jones2009,gunther2015observing}. Ocean climate evaluated from satellite observations \citep{Liu2016} for the months of August and September---the period of minimum ice coverage---reveals weak or even negative trends of average offshore wind speeds over the period between 1996 and 2015, while notable upward trends were detected in the higher $90^{th}$ and $99^{th}$ percentiles across the entire Arctic Ocean, except for the Greenland Sea. Unlike winds, waves showed more substantial increasing rates even for average values, especially in the Chukchi, Laptev, Kara seas and Baffin Bay. Satellite observations have temporal and spatial limitations, which are exacerbated in the Arctic, where most of the altimeter sensors do not usually cover latitudes higher than $82^{\circ}$. Numerical models, on the contrary, provide more consistent data sets for climate analysis in this region. \citet{Stopa2016a} estimated trends using a 23-year model hindcast and found that the simulated average wind speed exhibits a weak increasing trend, especially in the Pacific sector of the Arctic Ocean, slightly differing from the satellite-based observations in \citet{Liu2016}. Average wave heights, however, were found to be consistent with altimeter observations. \cite{Waseda2018} used the ERA-Interim reanalysis database \citep{Dee2011} to evaluate the area-maximum wind speed and wave height in the months of August, September and October over the period 1979-2016 in the Laptev, East Siberian, Chukchi, and Beaufort seas. Their analysis indicated robust increasing trends for both variables, with the most significant changes in October: $\approx 0.06 \;$ m/s per year for wind speed and $\approx 2\;$cm per year for mean significant wave height. Recently, \citet{casassea} simulated historical (1979-2005) and future (2081-2100) sea state conditions to evaluate changes in regional annual maximum significant wave height under high baseline emission scenarios (RCP8.5). Their results indicated that wave height is projected to increase at a rate of approximately 3 cm per year, which is more than 0.5\% per year. Previous assessments of ocean climate in the Arctic have focused on annual or monthly values and often paid specific attention to summer months. A comprehensive evaluation of climate and related changes, however, cannot ignore the properties and frequency of occurrence of extremes.
Classically, the latter is estimated with an extreme value analysis (EVA), where observations are fitted to a theoretical probability distribution to extrapolate values at low probability levels, such as those occurring on average once every 100 years \citep[normally referred to as the 100-year return period event,][]{thomson2014data}. Therefore, to be statistically significant, the EVA has to rely on long records spanning one or more decades (observations typically cover more than a third of the return period). Furthermore, the EVA relies on the fundamental assumption that the statistical properties of the variable do not change over time, namely that the process is stationary. For the strongly seasonal and rapidly changing Arctic environment, however, the hypothesis of stationarity cannot hold for an extended period of time, invalidating the fundamental assumption of the EVA. An alternative approach that better fits the highly dynamic nature of the Arctic is the estimation of time-varying extreme values with a non-stationary analysis \citep[see, for example,][for a general overview]{Coles2001,Mendez2006,galiatsatou2011modeling,cheng2014non,Mentaschi2016}. There are a few methods for the estimation of time-varying extreme value distributions from non-stationary time series. A functional approach is the transformed-stationary extreme value analysis (TS-EVA) proposed by \citet{Mentaschi2016}. The method consists of transforming a non-stationary time series into a stationary counterpart via a normalisation based on the time-varying mean and standard deviation, to which the classical EVA theory can be applied. Subsequently, an inverse transformation allows the conversion of the EVA results into time-varying extreme values. Here we apply the TS-EVA method to assess time-varying extremes in the Arctic Ocean. The assessment is performed on a data set consisting of a long-term hindcast---from January 1991 to December 2018---that was obtained using the WAVEWATCH III \citep[WW3,][]{Tolman2009} spectral wave model forced with ERA5 reanalysis wind speeds \citep{hersbach2019global}. A description of the model and its validation is reported in Section \ref{ww3}. Model data are processed with the TS-EVA to determine extreme values for wind forcing and wave height. Long-term trends are investigated with a nonseasonal approach, and seasonal variability is considered with a concurrent seasonal approach (Section \ref{TSEVA}). Results are discussed in terms of regional distributions and areal averages in Sections \ref{nses} and \ref{ses}. Concluding remarks are presented in the last section. \begin{figure} \centering \includegraphics[scale=0.4]{1-arctic-seas.png} \caption{Regions of the Arctic Ocean used in this study with lines showing sea ice extent in September of 1991 (blue) and 2018 (red). Sea ice concentration dataset from ERA5 reanalysis.} \label{arctic-seas} \end{figure} \section{Wave hindcast}\label{ww3} A 28-year (from 1991 to 2018) wave hindcast of the Arctic Ocean was carried out with the WAVEWATCH III (WW3) spectral wave model---version 6.07---to build a database of sea state conditions which is consistent in space and time. A regional model domain covering the area above latitude $53.17^{\circ}$ N was set up in an Arctic Polar Stereographic Projection with a horizontal resolution varying from 9 to 22 km (this configuration was found to optimise the accuracy of model results in relation to recorded data and computational time).
The bathymetry was extracted from the ETOPO1 database \citep{amante2009etopo1}. The regional set up was then forced with ERA5 atmospheric forcing and sea ice coverage \citep{hersbach2019global}. Note that the model ran without wave-ice interaction modules as the focus is on the open ocean and not the marginal ice zone; regions of sea ice with concentration larger than 25\% were therefore treated as land \citep[e.g.][]{Thomson2016}. The model physics were defined by the ST6 source term package \citep{Zieger2015}. Boundary conditions were imposed on the regional model to account for energetic swells coming from the North Atlantic. To this end, boundaries were forced by incoming sea states from WW3 global runs with 1-degree spatial resolution. The global model used ERA5 wind forcing and the ST6 source term package. Simulations were run with a spectral domain of 32 frequency and 24 directional bins (directional resolution of 15 degrees). The minimum frequency was set at 0.0373 Hz and the frequency increment factor was set at 1.1, providing a frequency range of 0.0373-0.715 Hz. Grid outputs were stored every 3 hours. Calibration of the wind-wave growth parameter (CDFAC) was performed by testing the model outputs (significant wave height) against altimeter data across six different satellite missions \citep[ERS1, ERS2, ENVISAT, GFO, CRYOSAT-2 and Altika SARAL, see][]{Queffeulou2015} for the period August-September 2014. Note that the calibration of the regional configurations was undertaken after tuning the global model, to allow the input of reliable boundary conditions in the former. The best agreement was achieved for CDFAC = 1.19 in the global model, with correlation coefficient $R=0.96$, scatter index $SI=16\%$ and root mean square error $RMSE = 0.4$ m. For the regional model, the best agreement was for CDFAC = 1.23 with $R=0.95$, $SI\sim1\%$ and $RMSE \sim 0.3$ m. The regional model set up was further validated by comparing all modelled significant wave height values against matching altimeter observations for an independent period of four years from 2012 to 2016. Fig. \ref{validationst4st62}a shows the regional model outputs versus collocated altimeter data. Generally, the model correlates well with observations: $R = 0.97$, $SI = 16\%$, and $RMSE = 0.38$ m. The residuals between model and altimeters as a function of the observations are reported in Fig. \ref{validationst4st62}b. The comparison indicates a satisfactory level of agreement for the upper range of wave heights ($H_s>4$ m): $R = 0.86$, $SI = 11\%$, and $RMSE = 0.63$ m. The regional distribution of model errors is reported in Fig. \ref{validation-map}. It is worth noting that the model performed well across the entire Arctic Ocean, with no specific regions affected by significant errors. Further evaluation of the performance of the wave hindcast against ERA5 wave reanalysis is described in Appendix A. \begin{figure} \centering \includegraphics[scale=0.6]{2-validation-ST6-scatter-residual.png} \caption{Significant wave height from model versus collocated altimeter observations for the period 2012--2016 with ST6 core physics. (a) all data and (b) $90^{th}$ percentile and above. 
The black line represents the 1:1 agreement and the red lines are the linear regression.} \label{validationst4st62} \end{figure} \begin{figure} \centering \includegraphics[scale=0.45]{3-validation-ST6-maps-3.png} \caption{Regional distribution of error metrics: correlation (left panel), scatter index (middle panel), and root mean square error (right panel).} \label{validation-map} \end{figure} \section{Transformed Stationary Extreme Value Analysis (TS-EVA)} \label{TSEVA} The TS-EVA method developed by \cite{Mentaschi2016} is applied to extract time-varying information on climate extremes. This approach is based on three main steps. In the first step, the original non-stationary time series is transformed into a stationary counterpart that can be processed using classical EVA methods. The transformation is based on the following equation: \begin{equation} x(t)=\frac{y(t)-T_y (t)}{S_y (t)}, \label{eq63} \end{equation} where $y(t)$ is the non-stationary time series, $x(t)$ is the stationary counterpart, $T_y (t)$ is the trend of $y(t)$ and $S_y (t)$ is its standard deviation. Computation of $T_y (t)$ and $S_y (t)$ relies on algorithms based on running means and running statistics \citep[see][for more details]{Mentaschi2016}. This approach acts as a low-pass filter, which removes the variability within a specified time window $W$. The time window has to be short enough to incorporate the desired variability, but long enough to eliminate noise and short-term variability. Hereafter this approach is referred to as nonseasonal. A period of 5 years for $W$ is used to ensure a stationary transformed time series, considering the rapid sea ice melting occurring in the last few decades in the region. Fig. \ref{ts-kara-nons}a shows an example of a time series of significant wave height for the Kara sea, its long-term variability and concurrent standard deviation. Apart from an initial downward trend between 1993 and 1999, when the region was still covered by sea ice for most of the year, a clear positive trend is evident for the past two decades. In the second step, the stationary time series $x(t)$ is processed with a standard EVA approach. Herein, a peaks-over-threshold (POT) method \citep[see, e.g.][for a general overview]{thomson2014data} was applied to extract extreme values from the records with a threshold set at the $90^{th}$ percentile. A Generalised Pareto Distribution \citep[GPD, e.g.][]{thomson2014data} \begin{equation} F(x)=1-\Bigg[1+k\Bigg(\frac{x-A}{B}\Bigg)\Bigg]^{-\frac{1}{k}}, \label{eqgpd} \end{equation} where $A$ is the threshold and $B$ and $k$ are the scale and shape parameters respectively, was fitted to the data in order to derive an extreme value distribution. Note that the parameters $A$ and $B$ are time-dependent and change with trends, standard deviation, and seasonality in the TS-EVA approach \citep{Mentaschi2016}. To ensure statistical independence, peaks were selected at least 48 hours apart. Furthermore, to ensure a stable probability distribution, a minimum of 1000 peaks was selected for each grid point of the model domain \citep{Meucci2018WindEnsembles}, meaning that regions free of sea ice less than about two months per year were excluded from the analysis. The third and final step consists of back-transforming the extreme value distribution into a time-dependent one by reincorporating the trends that were excluded from the original non-stationary time series.
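To make the pipeline concrete, the following is a minimal, self-contained sketch of the three nonseasonal steps (normalisation, POT/GPD fit, back-transformation). It is illustrative only: the function name and default values are ours, and the declustering of peaks, the boundary handling of the running statistics and the confidence intervals of the full method of \cite{Mentaschi2016} are omitted.
\begin{verbatim}
import numpy as np
from scipy.stats import genpareto

def tseva_nonseasonal(y, t, window_years=5.0, dt_hours=3.0,
                      return_period=100.0):
    # y: non-stationary series (e.g. 3-hourly Hs); t: times in years.
    w = int(window_years * 365.25 * 24.0 / dt_hours)   # window in samples
    kern = np.ones(w) / w
    Ty = np.convolve(y, kern, mode="same")             # running mean (trend)
    Sy = np.sqrt(np.convolve((y - Ty) ** 2, kern, mode="same"))

    x = (y - Ty) / Sy                                  # step 1: stationarise

    A = np.quantile(x, 0.90)                           # step 2: POT threshold
    exc = x[x > A] - A                                 # exceedances (no declustering)
    k, _, B = genpareto.fit(exc, floc=0.0)             # GPD shape k, scale B

    lam = len(exc) / (t[-1] - t[0])                    # mean number of peaks per year
    p = 1.0 - 1.0 / (return_period * lam)
    x_rl = A + genpareto.ppf(p, k, loc=0.0, scale=B)   # stationary return level

    return Ty + Sy * x_rl                              # step 3: back-transform
\end{verbatim}
The seasonal variant described next proceeds identically, except that $T_y$ and $S_y$ are split into long-term and monthly components before the normalisation.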
An example of the non-stationary extreme value distribution for a point located in the Kara Sea is shown in Fig. \ref{ts-kara-nons}c. As the resulting distribution is different for each year within the time series, the TS-EVA method enables extrapolation of partial return period values (e.g. the 100-year return level for wind speed and significant wave height) for any specific year. Therefore, after fitting a GPD distribution to the stationary time series and transforming to a time-varying distribution, it was possible to obtain 100-year return levels for every five years within the original time series. \begin{figure} \centering \includegraphics[scale=0.1]{4-TSEVA-timeseries.jpg} \caption{TS-EVA of the projections of significant wave height for a point located in the Kara Sea. The time series of $H_s$ (m), its long-term trend and standard deviation computed with a time window of 5 years obtained with (a) the nonseasonal approach and (b) with the seasonal approach. The non-stationary time-dependent probability distribution for a GPD with a POT analysis and a $90^{th}$ percentile threshold with (c) the nonseasonal approach and (d) with the seasonal approach.} \label{ts-kara-nons} \end{figure} Effects of the seasonal cycle can be accounted for by incorporating seasonal components in the stationary time series $x(t)$. To this end, trend $T_y (t)$ and standard deviation $S_y (t)$ in equation (\ref{eq63}) are expressed as $T_y(t)=T_{0y}(t)+s_T(t)$ and $S_y(t)=S_{0y}(t)\times s_S(t)$, where $T_{0y}(t)$ and $s_T(t)$ are the long-term and seasonal components of the trend and $S_{0y}(t)$ and $s_S(t)$ are the long-term and seasonal components of the standard deviation. $T_{0y}(t)$ and $S_{0y}(t)$ are computed by a running mean acting as a low-pass filter within a given time window ($W$). The seasonal component of the trend $s_T(t)$ is computed by estimating the average monthly anomaly of the de-trended series. The seasonal component of the standard deviation $s_S(t)$ is evaluated as the monthly average of the ratio between the fast and slow varying standard deviations, $S_{sn}(t)$ / $S_{0y}(t)$, where $S_{sn}$ is computed by another running-mean standard deviation on a time window $W_{sn}$ much shorter than one year \citep[see][for more details]{Mentaschi2016}. As for the nonseasonal approach, the time window $W$ was set to 5 years to estimate the long-term components, while a time window $W_{sn}$ of 2 months was applied to evaluate the intra-annual variability (seasonal components). Note that the length of the seasonal window $W_{sn}$ is chosen to maximise accuracy and minimise noise. As an example, Fig. \ref{ts-kara-nons}b shows the seasonal components for the Kara sea. The resulting stationary time series $x(t)$ is analysed with an EVA approach to fit an extreme value distribution, which is then back-transformed to a time-dependent one (Fig. \ref{ts-kara-nons}d). The seasonal approach enables the extrapolation of partial extreme values such as the 100-year return period levels for each month. \section{Nonseasonal trends}\label{nses} \subsection{Wind extremes} Fig. \ref{ci-nonseasonal-u10} shows the regional distribution of the 100-year return period levels for wind speed $U_{10}^{100}$ and the 95\% confidence interval (CI95) width for the years 1993 and 2018; the regional distribution of the differences between the two years is also displayed in the figure. Extreme winds are estimated to reach approximately 25 m/s in the Baffin Bay, Greenland, Barents and Kara seas (i.e.
the Atlantic sector of the Arctic Ocean; see Fig. \ref{arctic-seas} for the geographical location of sub-regions), with peaks up to 40 m/s along the Eastern coast of Greenland. Extreme winds in the Pacific sector, i.e. the Beaufort, Chukchi, East Siberian and Laptev seas, recorded slightly lower $U_{10}^{100}$, reaching values up to 20 m/s. Confidence intervals were normally narrow, with extremes varying within the range of $\pm0.5$ m/s. The magnitude of extreme wind speeds predicted here is generally consistent with values determined with classical EVA methods in the Atlantic sector of the Arctic Ocean \citep{Breivik2014WindEnsembles,bitner2018climate}. The TS-EVA analysis, nevertheless, shows that extremes have been changing for the past three decades. The differences in the 100-year return period wind speeds between the years 1993 and 2018 are notable, as shown in Fig. \ref{ci-nonseasonal-u10}. More specifically, the long-term trends of $U_{10}^{100}$ are shown in Fig. \ref{area-hs}, which reports areal averages as a function of time for each sub-region. In the Atlantic sector, $U_{10}^{100}$ showed a weak drop in the Norwegian and Greenland seas, with a total decrease of about 3 m/s over the period 1993-2018 (a rate of -0.12 m/s per year). More significant drops were recorded along the Western coast of Greenland (i.e. Fram Strait, Eastern Greenland sea), where $U_{10}^{100}$ reduced at a rate of -0.24 m/s per year. The Baffin Bay and the Barents sea showed negligible changes, with $U_{10}^{100}$ remaining approximately constant. The opposite trend was reported on the Eastern side of the Atlantic sector (i.e. the Kara sea), where wind speed showed a weak increase with a rate of 0.04 m/s per year. The Pacific sector, on the contrary, was subjected to more consistent trends across the sub-regions. The East Siberian and Chukchi seas show weak positive trends of about 0.16 and 0.12 m/s per year, respectively. A similar increase was also observed in the Western part of the Beaufort sea. The Laptev sea recorded the lowest rate of increase in the Pacific sector, with $U_{10}^{100}$ increasing at a rate of 0.04 m/s per year. \begin{figure} \centering \includegraphics[scale=0.7]{5-nonseasonal_u10-2018-1993-errors-35.png} \caption{$U_{10}^{100}$ (m/s) obtained with a POT analysis ($90^{th}$ percentile threshold) and a GPD distribution in the TS-EVA nonseasonal approach for (a) 1993 and (b) 2018. (c) The difference between estimations for 2018 and 1993. Width of the 95\% confidence interval for $U_{10}^{100}$ for (d) 1993 and (e) 2018.} \label{ci-nonseasonal-u10} \end{figure} \begin{figure} \centering \includegraphics[scale=0.17]{6-nonseasonal-arealaveg2.png} \caption{Areal averages of $H_s^{100}$ (blue) in meters and $U_{10}^{100}$ (red) in m/s estimated by the nonseasonal TS-EVA approach for each sea in the Arctic Ocean.} \label{area-hs} \end{figure} \subsection{Wave extremes} Fig. \ref{ci-nonseasonal} shows the 100-year return levels for significant wave height ($H_s^{100}$), confidence intervals and differences for the years 1993 and 2018. It should be noted that regions covered by sea ice for most of the year are not considered in this analysis and thus they are colour-coded with white in the figure. The Atlantic sector experiences high $H_s^{100}$ ($>10$ m) due to the energetic North Atlantic swell penetrating the Arctic Ocean.
Likewise, the Pacific sector experiences significant values of $H_s^{100}$ ($>5$ m), despite a substantial sea ice cycle that limits fetch lengths for a large fraction of the year. The 95\% confidence intervals are typically $\pm0.5$ m (see panels d and e in Fig. \ref{ci-nonseasonal}). In more recent years (e.g. 2018), confidence intervals widen slightly in regions of significant sea ice decline, increasing to $\pm 0.6$ m. There is a clear difference in $H_s^{100}$ between 1993 and 2018, which appears consistent with the measured sea ice decline. There is a substantial increase of $H_s^{100}$ in the Pacific sector, with $H_s^{100}$ increasing by approximately 4 m in the Beaufort, Chukchi and East Siberian seas. In the Laptev and Kara seas, differences are typically smaller (the increment is approximately 2~m), even though $H_s^{100}$ reaches values of approximately 6 m near the sea ice margins. Note, however, that uncertainties related to the exact position of sea ice edges result in larger confidence intervals (up to $\pm2$~m) in these regions. Extremes in the Atlantic sector, surprisingly, show an overall decrease, with $H_s^{100}$ dropping by about 1-2~m. Note, however, that this is a region in which the sea ice extent has not changed dramatically over this period. Nevertheless, regions closer to sea ice, such as the Fram Strait and the Northern part of the Barents sea, experienced substantial growth, with $H_s^{100}$ increasing by up to 5~m between 1993 and 2018. \begin{figure} \centering \includegraphics[scale=0.7]{7-nonseasonal_hs-2018-1993-errors.png} \caption{$H_s^{100}$ (m) obtained with a POT analysis ($90^{th}$ percentile threshold) and a GPD distribution in the TS-EVA nonseasonal approach for (a) 1993 and (b) 2018. (c) The difference between estimations for 2018 and 1993. Width of the 95\% confidence interval for $H_s^{100}$ for (d) 1993 and (e) 2018.} \label{ci-nonseasonal} \end{figure} Trends in $H_s^{100}$ are reported in Fig. \ref{area-hs}. A consistent increase of $H_s^{100}$ is evident in the emerging open waters of the Beaufort, Chukchi, East Siberian, Laptev and Kara seas. Variations in the Beaufort and East Siberian seas are the largest, with an increase over the period 1993-2018 of approximately 16~cm per year. The Chukchi and Laptev seas also experienced a substantial growth of $H_s^{100}$, with an increase of 6~cm per year, while $H_s^{100}$ increased by approximately 4~cm per year in the Kara sea. In contrast, the Atlantic sector shows only weak upward trends, with the Baffin Bay and Greenland sea showing an increase of 1.6~cm per year. The Barents sea experienced no notable long-term variations, while the Norwegian sea reported a drop in $H_s^{100}$ of about 4~cm per year. As these latter regions are predominantly free from sea ice, the downward trends are associated with the decline of wind speeds over the North Atlantic \citep[results are consistent with findings in][]{Breivik2013WaveForecasts,bitner2018climate}. It is worth noting that the negative trends for the North Atlantic are expected to continue in the future, as indicated by projections based on RCP 4.5 and RCP 8.5 emission scenarios \citep{morim2019robustness,aarnes2017projected}. Wave height, however, is predicted to increase at high latitudes of the Norwegian and Barents seas over the next decades as a result of ice decline \citep{aarnes2017projected}, confirming the positive trend in wave extremes that is already arising close to the ice edge (see Fig. \ref{ci-nonseasonal}).
The contrast between an overall decrease of wave height as a result of wind speed decline and the increase of wave height in emerging open waters at high latitudes is also a distinct feature in the North Pacific \citep[cf.][]{shimura2016variability}. Areal averages for $H_s^{100}$, $U_{10}^{100}$ and sea ice extent across the entire Arctic Ocean are shown in Fig. \ref{comparehsu10ice} as a function of time. Fig. \ref{comparehsu10ice}a confirms that the weak trends in wind extremes do not fully explain the significant changes in wave extremes. However, the substantial contraction of sea ice cover (about $27\%$ in 25 years) exhibits a more robust correlation with trends of wave extremes, corroborating that the emergence of longer fetches, i.e. sea ice decline, contributes notably to the positive trends of $H_s^{100}$ in the Arctic Ocean (Fig. \ref{comparehsu10ice}b). \begin{figure} \centering \includegraphics[scale=0.5]{8-Hs100versusU10100andSEaIce4.png} \caption{(a) Areal average of $H_s^{100}$ in meters across the entire Arctic Ocean against the areal average of $U_{10}^{100}$ in m/s and (b) against sea ice extent in million km$^2$. Areal trend shown with the dashed lines.} \label{comparehsu10ice} \end{figure} \section{Seasonal variability}\label{ses} \subsection{Wind extremes} Figures \ref{u10-seasonal-1993} and \ref{u10-seasonal-2018} show the monthly values of $U_{10}^{100}$ for 1993 and 2018, respectively. Extreme winds are distributed rather uniformly over the Arctic Ocean. During the autumn and winter seasons, $U_{10}^{100}$ ranges between 20 and 30 m/s, with peaks along Greenland (Denmark and Fram Straits) of up to 50 m/s. In the spring and summer months, $U_{10}^{100}$ ranges between 10 and 30 m/s, again with the highest winds reported in the western Greenland sea. Note that the seasonal approach returns a geographical distribution of extremes that is similar to the one obtained with the nonseasonal approach, but it captures more extreme season-related events. The seasonal component tends to shift the tail of the time-varying extreme value distribution towards higher frequencies of occurrence, resulting in higher estimated extremes for all seasons. Differences between $U_{10}^{100}$ for 1993 and 2018 are reported in Fig. \ref{seasonal-u10-1993-2018}. Generally, differences range between 1 and 3 m/s and are quite consistent across all seasons. The Pacific sector experiences an increase, while the Atlantic sector and the central Arctic are subjected to a reduction of $U_{10}^{100}$. The most significant changes are observed in the western Greenland sea during the winter season, where reductions of up to -5 m/s were detected. It is interesting to note that the regional distribution of differences is similar in each month, denoting a homogeneous change of extreme winds across the Arctic Ocean throughout the year. Note also that differences obtained with the seasonal approach are consistent with those estimated with the nonseasonal method. \begin{figure} \centering \includegraphics[scale=0.7]{9-seasonal-u10-1993-35.png} \caption{$U_{10}^{100}$ (m/s) for 1993 obtained with a POT analysis and a GPD distribution in the TS-EVA seasonal approach. Data obtained from the ERA5 dataset.} \label{u10-seasonal-1993} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{10-seasonal-u10-2018-35.png} \caption{$U_{10}^{100}$ (m/s) for 2018 obtained with a POT analysis and a GPD distribution in the TS-EVA seasonal approach.
Data obtained from the ERA5 dataset.} \label{u10-seasonal-2018} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{11-U10-seasonal-2018-1993.png} \caption{Monthly differences in $U_{10}^{100}$ between estimates for 2018 and 1993.} \label{seasonal-u10-1993-2018} \end{figure} \subsection{Wave extremes} The seasonal variations of $H_s^{100}$ are presented in figures \ref{st6-seasonal-1993} and \ref{st6-seasonal-2018} for 1993 and 2018, respectively. The minimum sea ice coverage in 1991-1993 is shown as dashed lines in Fig. \ref{st6-seasonal-2018}. Extreme wave height, as expected, is subjected to a substantial seasonal variation. The highest values are found in the region encompassing the Greenland and Norwegian seas, where energetic swells coming from the North Atlantic Ocean propagate into the region \citep[cf.][]{Liu2016,Stopa2016a}. The highest $H_s^{100}$ in this region reaches values up to 18 m in the winter months, concomitantly with strong winds (Figs. \ref{u10-seasonal-1993} and \ref{u10-seasonal-2018}), and reduces to about 5\;m in the summer. Over the past three decades, however, the general trend shows a consistent reduction in this region at a rate of 4 cm per year regardless of the season (see maps of differences in Fig. \ref{seasonal-hs-1993-2018} and trends of areal averages in Fig. \ref{seasonal-area-hs}). These results are in agreement with the results obtained with the nonseasonal approach. Nevertheless, extreme waves penetrate further north in the emerging open waters of the Northern Greenland, Barents and Kara seas, especially during the autumn (September to November) and winter (December to February) seasons in recent years. Consequently, there is a dramatic increase of $H_s^{100}$ in these regions, with values up to 13 m in 2018. This corresponds to an average increasing rate of approximately 12 cm per year, with peaks of about 35 cm per year near the sea ice margins. Based on future projections, this positive trend is expected to continue \citep{aarnes2017projected}. In regions subjected to the sea ice cycle, wave extremes in 1993 used to build up in late spring or early summer (June) and reach their maximum of up to 12 m in a confined area of the Beaufort sea in October. In more recent years (2018), waves already have a significant presence earlier in spring (May), primarily in the coastal waters of the Beaufort sea and the East Siberian sea (see Fig. \ref{seasonal-hs-1993-2018}). From June to November, there is a rapid intensification of the sea state, and extremes span from a few metres in June to about 16 m in November, with an average growth rate of 12 cm per year, over a region encompassing the whole Beaufort, Chukchi and East Siberian seas. These secluded areas, which are the most prone to positive long-term variations of wind speed (Fig. \ref{seasonal-u10-1993-2018}) and sea ice retreat \citep{strong2013arctic}, are now experiencing sea state extremes comparable to those reported in the North Atlantic. It is also worth noting that significant changes are apparent for the western part of the East Siberian sea and the nearby Laptev sea at the end of autumn (November). These regions, which used to be entirely covered by sea ice by November in the earliest decade, are now still completely open, with $H_s^{100}$ recording changes up to 8 m (a rate of 32 cm per year since 1993).
\begin{figure} \centering \includegraphics[scale=0.7]{12-seasonal-hs-1993.png} \caption{$H_s^{100}$ (m) for 1993 obtained with a POT analysis and a GPD distribution in the TS-EVA seasonal approach. Data obtained from the 28-year wave hindcast with ERA5 wind forcing.} \label{st6-seasonal-1993} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{13-hs100-seasonal-2018-2.png} \caption{$H_s^{100}$ (m) for 2018 obtained with a POT analysis and a GPD distribution in the TS-EVA seasonal approach. Data obtained from the 28-year wave hindcast with ERA5 wind forcing. Dashed lines represent the minimum sea ice coverage in the period 1991-1993 for each month.} \label{st6-seasonal-2018} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{14-Hs-seasonal-2018-1993-2.png} \caption{Monthly differences in $H_s^{100}$ between estimates for 2018 and 1993.} \label{seasonal-hs-1993-2018} \end{figure} \begin{figure} \centering \includegraphics[scale=0.18]{15-seasonal-arealaveg.png} \caption{Areal averages of $H_s^{100}$ in meters estimated by the seasonal TS-EVA approach for each sea in the Arctic Ocean for winter (blue), spring (light green), summer (red), and autumn (light blue).} \label{seasonal-area-hs} \end{figure} \section{Conclusions} \label{conc} A non-stationary extreme value analysis \citep[TS-EVA,][]{Mentaschi2016} was applied to assess long-term and seasonal variability of wind and wave extremes (100-year return period levels) in the Arctic Ocean. This non-conventional approach is dictated by the highly dynamic nature of the Arctic, which has been undergoing profound changes over the past decades \citep{Liu2016,Stopa2016a}, invalidating the basic hypothesis of stationarity that is fundamental for classical extreme value analysis. Estimation of extremes was based on a 28-year (1991-2018) database of 10-metre wind speed and significant wave height, with a temporal resolution of three hours. Wind speed was obtained from the recently released ERA5 reanalysis database and subsequently used to force the WAVEWATCH III spectral wave model. An Arctic Polar Stereographic Projection grid with a horizontal resolution spanning from 9 to 22 km was applied. The model was calibrated and validated against satellite altimeter observations, producing good agreement with a correlation coefficient $R = 0.97$, scatter index $SI = 16\%$ and root mean squared error $RMSE = 0.36$ m. The TS-EVA extreme value analysis consisted of transforming the original non-stationary time series of wind speed and wave height into a stationary counterpart and then applying standard peaks-over-threshold methods to evaluate extreme values with a return period of 100 years over a running window of 5 years. Non-stationarity was then reinstated by back-transforming the resulting extreme value distribution. Two different approaches were applied to the data sets: a nonseasonal approach, which returns yearly estimates of extremes and enables evaluation of long-term variability; and a seasonal approach, which incorporates seasonal variability, enabling estimation of extremes for specific months. The nonseasonal approach showed a weak long-term variability for the 100-year return period values of wind speed. An increase of approximately 3 m/s from 1993 to 2018 (a rate of $\approx$ 0.12 m/s per year since 1993) was reported in the Pacific sector, especially in the regions of the Chukchi and East Siberian seas and, more marginally, in the Beaufort sea and part of the Laptev sea.
A decrease of roughly 3 m/s ($-0.12$ m/s per year), on the other hand, was found in most of the remaining regions of the Arctic, with peaks in the Eastern part of the Greenland sea ($\approx$ $-0.2$ m/s per year). Variability of wave extremes, in contrast, is more dramatic and primarily driven by the substantially longer fetches following sea ice retreat. Large changes, in this respect, were found in the Pacific sector encompassing the area between the Beaufort and East Siberian seas, where wave height extremes have been increasing at a rate of approximately 12 cm per year, which results in an overall increase of $\approx$ 60\% from 1993 to 2018. It is interesting to note that wind extremes in the Beaufort sea only increased marginally, reinforcing the role of sea ice decline in changing the wave climate. The Atlantic sector, on the contrary, experienced a notable decrease of wave extremes at a rate of -4 cm per year; this is consistent with a reduction of wind extremes and with general climate trends observed in \citet{Liu2016}. For regions closer to the sea ice edge, where emerging open waters have been replacing pack ice, the 100-year return period levels of wave height exhibit the opposite trend, with a sharp increase of wave extremes at an extremely large local rate of 35 cm per year. It should be noted, however, that estimates of long-term trends closer to the sea ice edge are more uncertain due to the lack of data in the earlier years, when sea ice covered the ocean more extensively. Nevertheless, it is worth reflecting on the consequences that a sharp upward trend of wave extremes can have on already weak sea ice. As extremes become more severe, there is a negative feedback accelerating sea ice dynamics \citep{vichi2019effects,alberello2020jgr}, breakup \citep{passerotti2020omae} and melting processes \citep{dolatshah2018hydroelastic}, further contributing to sea ice retreat. The seasonal approach provides a more detailed picture of the climate, combining seasonal and long-term variability. Wind extremes distribute uniformly over the Arctic, with peaks in the autumn and winter periods spanning from 20 m/s in the Pacific sector to 30 m/s in the North Atlantic. Spring and summer months still exhibit significant extremes up to 20 m/s, with a more homogeneous regional distribution. Over the entire 28-year period, trends are mild and stable through the seasons, consistent with those found with the nonseasonal approach. Variability of wave extremes is again more substantial than that of wind. In the Pacific sector, the decline of sea ice extent allows a rapid intensification of extremes in the spring (May and June); average growth rates span from 1 cm per year in spring to 12 cm per year in late summer and early autumn. In the Atlantic sector, in response to a notable drop of wind speed, a consistent decrease of wave extremes occurs all year round. Nevertheless, the emerging waters of the northern Greenland and Barents seas showed the opposite trend, with an increase of wave height at a very large rate of up to 32 cm per year close to the sea ice margin. {\bf Acknowledgments} This research was partially supported by the Victoria Latin America Doctoral Scholarship (VLADS) program. AT acknowledges support from the ACE Foundation--Ferring Pharmaceuticals and the Air-Sea-Lab Project initiative.
\section*{Appendix A: ERA5 wave data comparison} \label{A} \begin{figure} \centerline{\includegraphics[scale=0.49]{A1-ERA5xWW3-scatter.png}} \caption{Significant wave height from ERA5 reanalysis versus WW3 model results for 2015: (a) all data and (b) $90^{th}$ percentile and above. The black line represents the 1:1 agreement and the red lines are the linear regression.} \label{era5mod} \end{figure} In addition to the comparison of the model results against altimeter data described in section \ref{ww3}, the wave hindcast was also evaluated against the ERA5 wave data reanalysis. Although the ERA5 reanalysis has a coarser spatial resolution ($0.5^{\circ}$) than the hindcast performed in this study, it is a widely used international resource. The comparison between the WW3 wave hindcast and ERA5 data for 2015 (Fig. A1) shows excellent agreement, with a correlation coefficient $R = 0.99$, RMSE of 0.27 m and NBIAS of $2.3\%$ for all data, and $R = 0.95$, RMSE of 0.52 m and NBIAS of $1.4\%$ for the upper percentiles. \bibliographystyle{ametsoc2014}
\section{Introduction}\label{sec-intro} Optimal transport (OT) theory \citep{Villani03} plays an increasingly important role in machine learning to compare probability distributions, notably point clouds, discrete measures or histograms~\citep{COTFNT}. As a result, OT is now often used in graphics~\citep{2016-bonneel-barycoord,peyre2016gromov,NIPS2017_7095}, in neuroimaging~\cite{janati2020multi}, to align word embeddings~\citep{alvarez2018gromov,alaux2018unsupervised,grave2019unsupervised}, to reconstruct cell trajectories~\cite{hashimoto2016learning,schiebinger2019optimal,yang2020predicting}, for domain adaptation~\cite{courty2014domain,courty2017optimal} or for the estimation of generative models~\citep{WassersteinGAN,salimans2018improving,genevay2018sample}. Yet, in their original form, as proposed by Kantorovich~\cite{Kantorovich42}, OT distances are not a natural fit for applied problems: they minimize a network flow problem, with a supercubic complexity $\mathcal{O}(n^3 \log n)$~\cite{Tarjan1997}, and result in an output that is \textit{not} differentiable with respect to the measures' locations or weights~\cite[\S5]{bertsimas1997introduction}; they suffer from the curse of dimensionality~\cite{dudley1969speed,fournier2015rate} and are therefore likely to be meaningless when used on samples from high-dimensional densities. Because of these statistical and computational hurdles, all of the works quoted above rely on some form of regularization to smooth the OT problem, and more specifically on an entropic penalty, to recover so-called Sinkhorn divergences~\cite{CuturiSinkhorn}. These divergences are cheaper to compute than regular OT~\cite{ChizatPSV18,genevay2016stochastic}, smooth and programmatically differentiable in their inputs~\cite{2016-bonneel-barycoord,hashimoto2016learning}, and have a better sample complexity~\cite{genevay19} while still defining convex and definite pseudometrics~\cite{feydy2019interpolating}. While Sinkhorn divergences do lower OT costs from supercubic down to an embarrassingly parallel quadratic cost, using them to compare measures that have more than a few tens of thousands of points in forward mode (let alone when backward execution is also needed) remains a challenge. \textbf{Entropic regularization: starting from ground costs.} The definition of Sinkhorn divergences usually starts from that of the ground cost on observations. That cost is often chosen by default to be a $q$-norm between vectors, or a shortest-path distance on a graph when considering geometric domains~\cite{GramfortPC15,solomon2013dirichlet,SolomonEMDSurfaces2014,janati2020multi}. Given two measures supported respectively on $n$ and $m$ points, regularized OT first instantiates an $n\times m$ pairwise cost matrix $\mathbf{C}$, and then solves a linear program penalized by the coupling's entropy.
This can be rewritten as a Kullback-Leibler minimization: \begin{equation}\label{eq:sinkhornintro} \min_{\text{couplings } \mathbf{P}} \dotp{\mathbf{C}}{\mathbf{P}} - \varepsilon H(\mathbf{P}) = \varepsilon \min_{\text{couplings } \mathbf{P}} \kl(\mathbf{P}\|\mathbf{K})\,, \end{equation} where the matrix $\mathbf{K}$ appearing in Eq.~\eqref{eq:sinkhornintro} is defined as $\mathbf{K}:= \exp(-\mathbf{C}/\varepsilon)$, the elementwise neg-exponential of the rescaled cost $\mathbf{C}$. As described in more detail in \S\ref{sec-reminders}, this problem can then be solved using Sinkhorn's algorithm, which only requires repeatedly applying the kernel $\mathbf{K}$ to vectors. While faster optimization schemes to compute regularized OT have been investigated~\cite{altschuler2017near,dvurechensky2018computational,pmlr-v97-lin19a}, the Sinkhorn algorithm remains, because of its robustness and the simplicity of its parallel implementation, the workhorse of choice to solve entropic OT. Since the cost of Sinkhorn's algorithm is driven by the cost of applying $\mathbf{K}$ to a vector, speeding up that evaluation is the most impactful way to speed up Sinkhorn's algorithm. This is the case when using separable costs on grids (applying $\mathbf{K}$ boils down to carrying out a convolution at cost $\mathcal{O}(n^{1+1/d})$~\citep[Remark 4.17]{COTFNT}) or when using shortest-path metrics on graphs, in which case applying $\mathbf{K}$ can be approximated using a heat kernel~\cite{2015-solomon-siggraph}. While it is tempting to use low-rank matrix factorizations, using them within Sinkhorn iterations requires that the application of the approximated kernel guarantees the positiveness of the output. As shown by~\cite{altschuler2018massively}, this can only be guaranteed, when using the Nystr\"om method, when the regularization is high and the tolerance very low. \textbf{Starting instead from the Kernel.} Because regularized OT can be carried out using only the definition of a kernel $\mathbf{K}$, we focus instead on kernels $\mathbf{K}$ that are guaranteed to have positive entries by design. Indeed, rather than choosing a cost first and defining a kernel next, we consider instead ground costs of the form $c(x,y)=-\varepsilon\log\dotp{\varphi(x)}{\varphi(y)}$ where $\varphi$ is a map from the ground space onto the positive orthant in $\mathbb{R}^r$. This choice ensures that both the Sinkhorn algorithm itself (which can approximate optimal primal and dual variables for the OT problem) and the evaluation of Sinkhorn divergences can be computed exactly with an effort scaling linearly in $r$ and in the number of points, opening new perspectives to apply OT at scale. \textbf{Our contributions} are twofold: \textit{(i)} we show that kernels built from positive features can be used to approximate some usual cost functions, including the squared Euclidean distance, using random expansions. \textit{(ii)} We illustrate the versatility of our approach by extending previously proposed OT-GAN approaches~\citep{salimans2018improving,genevay19}, which focused on learning adversarially cost functions $c_\theta$ and incurred therefore a quadratic cost, to a new approach that learns instead adversarially a kernel $k_\theta$ induced from a positive feature map $\varphi_\theta$. We leverage here the fact that our approach is fully differentiable in the feature map to train a GAN at scale, with linear-time iterations.
\paragraph{Notations.} Let $\mathcal{X}$ be a compact space endowed with a cost function $c:\mathcal{X}\times \mathcal{X}\rightarrow \mathbb{R}$ and denote $D=\sup_{(x,y)\in\mathcal{X}\times\mathcal{X}}\Vert(x,y)\Vert_2$. We denote $\mathcal{P}(\mathcal{X})$ the set of probability measures on $\mathcal{X}$. For all $n\geq 1$, we denote by $\Delta_n$ the set of vectors in $\mathbb{R}^n_{+}$ with positive entries summing to 1. We denote $f\in\mathcal{O}(g)$ if $f\leq C g$ for a universal constant $C$ and $f\in\Omega(g)$ if $g\leq Q f$ for a universal constant $Q$. \section{Regularized Optimal Transport}\label{sec-reminders} \paragraph{Sinkhorn Divergence.} Let $\mu=\sum_{i=1}^n a_i\delta_{x_i}$ and $\nu=\sum_{j=1}^m b_j\delta_{y_j}$ be two discrete probability measures. The Sinkhorn divergence~\cite{ramdas2017wasserstein,2017-Genevay-AutoDiff,salimans2018improving} between $\mu$ and $\nu$ is, given a constant $\varepsilon>0$, equal to \begin{align}\label{eq:reg-sdiv}\overline{W}_{\varepsilon,c}(\mu,\nu) \defeq W_{\varepsilon,c}(\mu,\nu)-\frac{1}{2}\left(W_{\varepsilon,c}(\mu,\mu)+W_{\varepsilon,c}(\nu,\nu)\right), \text{ where}\\ W_{\varepsilon,c}(\mu,\nu) \defeq \min_{\substack{\mathbf{P}\in \mathbb{R}_{+}^{n\times m}\\ \mathbf{P}\mathbf{1}_m=a,\mathbf{P}^T\mathbf{1}_n=b}} \dotp{\mathbf{P}}{\mathbf{C}} \,-\varepsilon H(\mathbf{P})+\varepsilon.\label{eq:reg-ot} \end{align} Here $\mathbf{C}\defeq [c(x_i,y_j)]_{ij}$ and $H$ is the Shannon entropy, $H(\mathbf{P}) \defeq -\sum_{ij} P_{ij} (\log P_{ij} - 1)$. Because computing and differentiating $\overline{W}_{\varepsilon,c}$ is equivalent to doing so for three evaluations of $W_{\varepsilon,c}$ (neglecting the third term in the case where only $\mu$ is a variable)~\citep[\S4]{COTFNT}, we focus on $W_{\varepsilon,c}$ in what follows. \paragraph{Primal Formulation.} Problem~\eqref{eq:reg-ot} is $\varepsilon$-strongly convex and therefore admits a unique solution $\mathbf{P}^\star$ which, writing first-order conditions for problem~\eqref{eq:reg-ot}, admits the following factorization: \begin{equation}\label{eq:sol-reg-ot} \exists u^\star\in\mathbb{R}^{n}_{+}, v^\star \in \mathbb{R}^{m}_{+} \text{ s.t. } \mathbf{P}^\star = \text{diag}(u^\star) \mathbf{K} \text{diag}(v^\star), \text{ where } \mathbf{K}\defeq \exp(-\mathbf{C}/\varepsilon). \end{equation} These \emph{scalings} $u^\star,v^\star$ can be computed using Sinkhorn's algorithm, which consists in initializing $u$ to an arbitrary positive vector in $\mathbb{R}^n$, and then applying the fixed-point iterations described in Alg.~\ref{alg-sink}. \begin{wrapfigure}{r}{0.37\textwidth} \vskip-.4cm \begin{minipage}{0.37\textwidth} \begin{algorithm}[H] \SetAlgoLined \textbf{Inputs:} $\mathbf{K},a,b,\delta, u$ \Repeat{$\|v\circ \mathbf{K}^T u - b\|_1<\delta$}{ $v\gets b/\mathbf{K}^T u,\;u\gets a/\mathbf{K}v$ } \KwResult{$u,v$} \caption{Sinkhorn \label{alg-sink}} \end{algorithm} \end{minipage}\vskip-.6cm \end{wrapfigure} These two iterations together require $2nm$ operations if $\mathbf{K}$ is stored as a matrix and applied directly. The number of Sinkhorn iterations needed to converge to a precision $\delta$ (monitored by the difference between the column-sum of $\text{diag}(u)\mathbf{K}\text{diag}(v)$ and $b$) is controlled by the scale of the elements in $\mathbf{C}$ relative to $\varepsilon$~\cite{franklin1989scaling}. That convergence deteriorates with smaller $\varepsilon$, as studied in more detail in~\cite{weed2017sharp,pmlr-v80-dvurechensky18a}.
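For concreteness, Alg.~\ref{alg-sink} can be sketched in a few lines of NumPy (a minimal illustration; function and variable names are ours, not part of any library):
\begin{verbatim}
import numpy as np

def sinkhorn(K, a, b, delta=1e-6, max_iter=10_000):
    # K: (n, m) kernel matrix with positive entries, K = exp(-C / eps);
    # a, b: marginals. Any positive initialization of u works.
    u = np.ones_like(a)
    for _ in range(max_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
        if np.abs(v * (K.T @ u) - b).sum() < delta:  # stopping criterion
            break
    return u, v
\end{verbatim}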
\textbf{Dual Formulation.} The dual of~\eqref{eq:reg-ot} plays an important role in our analysis~\citep[\S4.4]{COTFNT}: \begin{equation} \label{eq:eval-dual} W_{\varepsilon,c}(\mu,\nu) =\!\!\!\!\! \max_{\alpha\in\mathbb{R}^n,\beta\in\mathbb{R}^m} a^T\alpha + b^T\beta -\varepsilon (e^{\alpha/\varepsilon})^T \mathbf{K} e^{\beta/\varepsilon}+\varepsilon = \varepsilon \left(a^T\log u^\star+b^T\log v^\star \right) \end{equation} where we have introduced, next to its definition, its evaluation using the optimal scalings $u^\star$ and $v^\star$ described above. This equality comes from the fact that \textit{(i)} one can show that $\alpha^\star = \varepsilon \log u^\star$ and $\beta^\star = \varepsilon \log v^\star$, and \textit{(ii)} the term $(e^{\alpha^\star/\varepsilon})^T \mathbf{K}e^{\beta^\star/\varepsilon}= (u^\star)^T \mathbf{K} v^\star$ is equal to $1$ whenever the Sinkhorn loop has been applied even just once, since this quantity is the total mass of a coupling (a probability distribution of size $n\times m$). As a result, given the outputs $u,v$ of Alg.~\ref{alg-sink}, we estimate \eqref{eq:reg-ot} using \begin{equation} \label{eq:dual-estimate} \widehat{W}_{\varepsilon,c}(\mu,\nu)\! =\! \varepsilon \left(a^T\log u+b^T\log v \right). \end{equation} Approximating $W_{\varepsilon,c}(\mu,\nu)$ can therefore be carried out using exclusively calls to the Sinkhorn algorithm, which requires instantiating the kernel $\mathbf{K}$, in addition to computing inner products between vectors, which can be done in $\mathcal{O}(n+m)$ algebraic operations; the instantiation of $\mathbf{C}$ is never needed, as long as $\mathbf{K}$ is given. Using this dual formulation~\eqref{eq:eval-dual}, we can now focus on kernels that can be evaluated with a linear cost, to achieve linear-time Sinkhorn divergences. \section{Approximation properties of Positive Features} Let us now consider kernels which can be written for all $x,y\in\mathcal{X}$ as \begin{align} \label{eq:general} k(x,y)=\int_{u\in\mathcal{U}} \varphi(x,u)^T\varphi(y,u) d\rho(u) \end{align} where $\mathcal{U}$ is a metric space, $\rho$ is a probability measure on $\mathcal{U}$ and $\varphi:\mathcal{X}\times\mathcal{U}\rightarrow (\mathbb{R_{+}^{*}})^p$ is such that for all $x\in\mathcal{X}$, $u\in\mathcal{U}\mapsto \Vert\varphi(x,u)\Vert_2$ is square integrable (for the measure $d\rho$). In fact, we will see in the following that for some usual cost functions $c$, the associated Gibbs kernel $k(x,y)=\exp(-\varepsilon^{-1} c(x,y))$ admits such a decomposition. To obtain a finite-dimensional representation, one can approximate the integral with a weighted finite sum. Let $r\geq 1$ and $\theta:=(u_1,...,u_r)\in\mathcal{U}^r$, from which we can define the following positive feature map $$\varphi_{\mathbf{\theta}}(x) \defeq\frac{1}{\sqrt{r}}\left(\varphi(x,u_1),...,\varphi(x,u_r)\right)\in\mathbb{R}^{p\times r}$$ and obtain an approximation of the kernel $k$, defined as $k_{\theta}(x,y)\defeq\langle \varphi_{\mathbf{\theta}}(x),\varphi_{\mathbf{\theta}}(y)\rangle$. In the next section, our goal is to control uniformly the relative error between these two kernels. \subsection{Kernel Approximation via Random Features} To simplify notations, assume that $n=m$. Here we show that, with high probability, for all $(x,y)\in\mathcal{X}\times \mathcal{X}$, \begin{align} \label{eq:goal-RFF} (1-\delta)k(x,y)\leq k_{\theta}(x,y)\leq (1+\delta)k(x,y) \end{align} for an arbitrary $\delta>0$, as soon as the number of random features $r$ is large enough. See Appendix~\ref{proof:ratio-RFF} for the proof.
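Before stating this result, here is a quick numerical sanity check of the construction above, instantiated with the degree-1 arc-cosine features discussed in the next subsection, for which a closed form of $k$ is known; these particular features may vanish, which is why a $\kappa$-perturbation is introduced below in Lemma~\ref{lem:decomp-arccos} to make them strictly positive. This is an illustration only, not part of our method:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, r = 3, 50_000
U = rng.standard_normal((r, d))   # u_1, ..., u_r drawn from rho = N(0, Id)
x, y = rng.standard_normal(d), rng.standard_normal(d)

# positive features phi(x, u) = sqrt(2) * max(0, u^T x), i.e. s = 1
fx = np.sqrt(2.0) * np.maximum(0.0, U @ x) / np.sqrt(r)
fy = np.sqrt(2.0) * np.maximum(0.0, U @ y) / np.sqrt(r)
k_theta = fx @ fy                 # <varphi_theta(x), varphi_theta(y)>

# closed form of the degree-1 arc-cosine kernel
t = np.arccos(np.clip(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)), -1, 1))
k = np.linalg.norm(x) * np.linalg.norm(y) \
    * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi
print(k_theta, k)                 # the two agree up to Monte Carlo error in r
\end{verbatim}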
The following proposition makes this precise. Controlling the ratio, rather than the difference, is what the analysis of the Sinkhorn iterations requires: the iterations are multiplicative in the kernel, so the error made by replacing $\mathbf{K}$ with $\mathbf{K}_\theta$ must be measured relative to $\mathbf{K}$. \begin{prop} \label{lem:ratio-RFF} Let $\mathcal{X}\subset \mathbb{R}^d$ be compact, $n\geq 1$, $\mathbf{X}= \{x_1,...,x_n\}$ and $\mathbf{Y}= \{y_1,...,y_n\}$ such that $\mathbf{X},\mathbf{Y}\subset\mathcal{X}$, $\delta>0$, and let $k$ be a kernel on $\mathcal{X}$ defined as in (\ref{eq:general}) such that for all $x,y\in\mathcal{X}$, $k(x,y)>0$. Let also $r\geq 1$, let $u_1,...,u_r$ be drawn randomly from $\rho$, and let us assume that there exists $K>0$ such that: \begin{align} \label{eq:assump:ratio} K:=\sup_{x,y\in\mathcal{X}}\sup_{u\in\mathcal{U}}\left|\frac{\varphi(x,u)^T\varphi(y,u)}{k(x,y)}\right|<+\infty. \end{align} First we have \begin{align*} \label{eq:finit-RFF} \mathbb{P}\left(\sup_{(x,y)\in\mathbf{X}\times\mathbf{Y}}\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)\leq 2 n^2\exp\left(-\frac{r \delta^2}{2 K^2}\right) \end{align*} and if we assume in addition that there exists $\kappa>0$ such that for all $x,y\in\mathcal{X}$, $k(x,y)\geq\kappa$, and that $\varphi$ is differentiable with \begin{align*} V:=\sup_{x\in\mathcal{X}}\mathbf{E}_{\rho}\left(\Vert \nabla_x\varphi(x,u)\Vert^2\right)<+\infty \end{align*} then we have \begin{align*} \mathbb{P}\left(\sup_{(x,y)\in\mathcal{X}\times\mathcal{X}}\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)\leq \frac{(\kappa^{-1}D)^2 C_{K,V,r}}{\delta^2}\exp\left(-\frac{r\delta^2}{2K^2(d+1)}\right) \end{align*} where $C_{K,V,r} = 2^9 K(4+K^2/r)V \sup\limits_{x\in\mathcal{X}}k(x,x)$, and $D=\sup\limits_{(x,y)\in\mathcal{X}\times\mathcal{X}}\Vert(x,y)\Vert_2$. \end{prop} \paragraph{Ratio Approximation.} The uniform bound obtained on the ratio between the approximation and the kernel naturally gives a control of the form Eq.(\ref{eq:goal-RFF}). In comparison, in \cite{rahimi2008random}, the authors obtain a uniform bound on the difference, which leads to a control in high probability of the form \begin{align} k(x,y)-\tau\leq k_{\theta}(x,y)\leq k(x,y)+\tau \end{align} where $\tau$ is a decreasing function of $r$. To recover Eq.(\ref{eq:goal-RFF}) from the above control, one may consider the case where $\tau=\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)\delta$, which in some cases may considerably increase the number of random features $r$ needed to ensure the result with at least the same probability. For example, if the kernel is the Gibbs kernel associated with a cost function $c$, then $\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)=\exp(-\Vert C\Vert_\infty/\varepsilon)$. \subsection{Examples} Here we provide examples of some usual kernels $k$ that admit a decomposition of the form Eq.(\ref{eq:general}), and we show that the assumptions of Proposition \ref{lem:ratio-RFF} are satisfied. Moreover, these kernels are often precisely the Gibbs kernels $k(x,y)=\exp(-\varepsilon^{-1} c(x,y))$ associated with some usual cost functions $c$. \textbf{Transport on the Positive Sphere.} Defining a cost as the log of a dot-product as described in \eqref{eq:cost} has already played a role in the recent OT literature.
\citet{OLIKER2007600} defines a cost $c$ on the sphere $\mathbb{S}^d$, as $c(x,y)=-\log x^Ty, \text{if } x^Ty>0$, and $\infty$ otherwise. The cost is therefore finite whenever two normal vectors share the same halfspace, and infinite otherwise; note that, when restricted to the positive sphere, the kernel associated with this cost is simply the linear kernel. A variant of that cost, $-\log(1-x^Ty)$, also plays an important role in the reflector problem~\citep{glimm2003optical}, whose goal is to design a reflector able to reflect an incoming source of light in such a way that it matches a desired illumination. In several concurrent works,~\citet{LieroMielkeSavareShort}, \citet{2017-chizat-focm} and~\citet{kondratyev2016new} have considered the cost $c(x,y)= - \log \cos (d(x,y)\wedge \pi/2)$ where $d$ is any metric. Note that when the metric is the geodesic distance on the sphere, $d(x,y)=\arccos(x^Ty)$, one recovers exactly~\citeauthor{OLIKER2007600}'s cost. These references do not, however, consider such cost functions for practical purposes, as we do in this paper. See Appendix \ref{sec:OT-sphere} for an illustration. \textbf{Arc-cosine Kernels.} Arc-cosine kernels have been considered in several works, starting notably from~\citep{NIPS2000_1790}, \citep{NIPS2009_3628} and~\citep{JMLR:v18:14-546}. The main idea behind arc-cosine kernels is to define positive maps for vectors $x,y$ in $\mathbb{R}^d$ using the signs (or higher powers) of random projections drawn from $\mu= \mathcal{N}(0,I_d)$, $$ k_s(x,y) = \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)d\mu(u) $$ where $\Theta_s(w) = \sqrt{2}\max(0,w)^s$ is a rectified polynomial function; these works show that such integrals coincide with kernels that only involve the norms of $x,y$ and the angle between them, $$k_s(x,y)= \frac{1}{\pi}\|x\|^s\|y\|^s J_s\left(\arccos\left(\frac{x^T y}{\|x\|\|y\|}\right)\right),$$ where $$J_s(u)=(-1)^s(\sin u)^{2s+1}\left(\frac{1}{\sin u}\tfrac{\partial}{\partial u}\right)^s\left(\frac{\pi-u}{\sin u}\right).$$ To be able to define a cost function from such kernels, we need to ensure that the kernels $k_s$ take values in $\mathbb{R}^{*}_+$. In the following lemma, we show that one can build a perturbed version of $k_s$ which admits a decomposition of the form Eq.(\ref{eq:general}) and satisfies the assumptions of Proposition \ref{lem:ratio-RFF}. See Appendix \ref{proof:lemma_arccos} for the proof. \begin{lemma} \label{lem:decomp-arccos} Let $d\geq 1$, $s\geq 0$, $\kappa>0$ and let $k_{s,\kappa}$ be the perturbed arc-cosine kernel on $\mathbb{R}^d$ defined for all $x,y\in\mathbb{R}^d$ by $ k_{s,\kappa}(x,y) = k_s(x,y) + \kappa$.
Let also $\sigma>1$, $\rho=\mathcal{N}\left(0,\sigma^2\text{Id}\right)$ and let us define for all $x,u\in\mathbb{R}^d$ the following map: \begin{align*} \varphi(x,u)=\left(\sigma^{d/2}\sqrt{2}\max(0,u^T x)^s\exp\left(-\frac{\Vert u\Vert^2}{4}\left[1-\frac{1}{\sigma^2}\right]\right),\sqrt{\kappa}\right)^T \end{align*} Then for any $x,y\in\mathbb{R}^d$ we have: \begin{align*} k_{s,\kappa}(x,y)&=\int_{u\in\mathbb{R}^d} \varphi(x,u)^T\varphi(y,u) d\rho(u) \end{align*} Moreover, we have $k_{s,\kappa}(x,y)\geq \kappa>0$ for all $x,y\in\mathbb{R}^d$, and for any compact $\mathcal{X}\subset\mathbb{R}^d$: \begin{align*} \sup_{u\in\mathbb{R}^d}\sup_{ (x,y)\in\mathcal{X}\times\mathcal{X}}\left|\frac{\varphi(x,u)^T\varphi(y,u)}{k_{s,\kappa}(x,y)}\right|<+\infty\quad \text{and} \quad \sup_{x\in\mathcal{X}}\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)<+\infty \end{align*} \end{lemma} \textbf{Gaussian kernel.} The Gaussian kernel is an important example, as it is both a very widely used kernel on its own and the kernel whose associated cost function is the squared Euclidean metric. In the following lemma, we show that the Gaussian kernel admits a decomposition of the form Eq.(\ref{eq:general}) that satisfies the assumptions of Proposition \ref{lem:ratio-RFF}. See Appendix \ref{proof:lemma_gaussian} for the proof. \begin{lemma} \label{lem:decomp-RBF} Let $d\geq 1$, $\varepsilon>0$ and let $k$ be the kernel on $\mathbb{R}^d$ such that for all $x,y\in\mathbb{R}^d$ \begin{align*} k(x,y)=e^{-\Vert x-y\Vert_2^{2}/\varepsilon}. \end{align*} Let $R>0$, $q=\frac{R^2}{2\varepsilon d W_0\left(R^2/\varepsilon d\right)}$ where $W_0$ is the Lambert function, $\sigma^2 = \frac{q\varepsilon}{4}$, $\rho=\mathcal{N}\left(0,\sigma^2\text{Id}\right)$ and let us define for all $x,u\in\mathbb{R}^d$ the following map: \begin{align*} \varphi(x,u)&=(2q)^{d/4}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right)\exp\left(\frac{\varepsilon^{-1}\Vert u\Vert_2^2}{\frac{1}{2}+\varepsilon^{-1} R^2}\right) \end{align*} Then for any $x,y\in\mathbb{R}^d$ we have: \begin{align*} k(x,y)&=\int_{u\in\mathbb{R}^d} \varphi(x,u)\varphi(y,u) d\rho(u) \end{align*} Moreover, for any $x,y\in \mathcal{B}(0,R)$ and for any $u\in\mathbb{R}^d$ we have $k(x,y)\geq \exp(-4\varepsilon^{-1} R^2)>0$ and \begin{align*} \left|\frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}\right|\leq 2^{d/2+1} q^{d/2}. \end{align*} Finally, we also have \begin{align*} \sup_{x\in\mathcal{B}(0,R)}\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)\leq 2^{d/2+3} q^{d/2} \left[(R/\varepsilon)^2+\frac{q}{4\varepsilon}\right] \end{align*} \end{lemma} \section{Approximation of the Regularized Optimal Transport} Here we present our main result about the approximation of the ROT problem. In the following, $k$ is a kernel which can be written as in Eq.(\ref{eq:general}) and satisfies the assumptions of Proposition \ref{lem:ratio-RFF}. Let $\varepsilon>0$. Thanks to the positivity of the kernel $k$, we can define the associated cost function $c(x,y)\defeq -\varepsilon\log k(x,y)$. Let us now show the following theorem. See Appendix~\ref{proof:thm_approx_sin} for the proof. \begin{thm} \label{thm:result-sinkhorn-pos} Let $\delta>0$ and $r\geq 1$.
Then the Sinkhorn algorithm \ref{alg-sink} with inputs $\mathbf{K}_\theta:=(k_\theta(x_i,y_j))_{i,j=1}^n$, $a$ and $b$ outputs $(u_{\theta},v_{\theta})$ such that \begin{align*} |W_{\varepsilon,c_{\theta}}-\widehat{W}_{\varepsilon,c_\theta}|\leq \frac{\delta}{2} \end{align*} in $\mathcal{O}\left(\frac{n\varepsilon r}{\delta}\left[\log\left(\frac{1}{c}\right)+Q_{\theta}\right]^2\right)$ algebraic operations, where \begin{align*} c=\min\limits_{i,j}(a_i,b_j) \text{\quad and\quad} Q_{\theta}=\log\left(\frac{1}{\min_{i,j} k_{\theta}(x_i,y_j)}\right) \end{align*} Moreover, let $\tau>0$, \begin{align} \label{eq:number-feature} r\in\Omega\left(\frac{K^2}{\delta^2}\left[\min\left(d\varepsilon^{-1} \Vert C\Vert_{\infty}^2+d\log\left(\frac{KVD}{\tau\delta}\right),\log\left(\frac{n}{\tau}\right)\right)\right]\right) \end{align} and $u_1,...,u_r$ drawn independently from $\rho$; then with a probability $1-\tau$, $Q_{\theta}\leq \varepsilon^{-1} \Vert C\Vert_{\infty}^2 +\log\left(2+\delta\varepsilon^{-1}\right)$ and it holds \begin{align} |W_{\varepsilon,c}-\widehat{W}_{\varepsilon,c_\theta}|\leq \delta \end{align} \end{thm} Therefore, with a probability $1-\tau$, the Sinkhorn algorithm \ref{alg-sink} with inputs $\mathbf{K}_\theta$, $a$ and $b$ outputs a $\delta$-approximation of the ROT distance in $\tilde{\mathcal{O}}\left(\frac{n}{\varepsilon\delta^3} \Vert C\Vert_{\infty}^4 K^2\right)$ algebraic operations, where the notation $\tilde{\mathcal{O}}(.)$ omits polylogarithmic factors depending on $R, D, \varepsilon, c, n$ and $\delta$. It is worth noting that for every $r\geq 1$ and $\theta$, the Sinkhorn algorithm \ref{alg-sink} using the kernel matrix $\mathbf{K}_{\theta}$ converges towards an approximate solution of the ROT problem associated with the cost function $c_\theta$ in linear time, thanks to the positivity of the feature maps used. Moreover, to ensure with high probability that the solutions obtained approximate an optimal solution of the ROT problem associated with the cost function $c$, we need, if the features are chosen randomly, to use a minimum number of them. In contrast, such a result may not be possible in \cite{altschuler2018massively}, as there $r$ is the rank obtained by the Nyström algorithm of \cite{musco2016recursive}, which must guarantee the positivity of the coefficients of the resulting kernel matrix and therefore needs a very high precision. \subsection{Accelerated methods} \cite{guminov2019accelerated} show that one can accelerate the Sinkhorn algorithm (see Appendix \ref{algo:acc-sinkhorn}) and obtain a $\delta$-approximation of the OT distance. For that purpose, \cite{guminov2019accelerated} introduce a reformulation of the dual problem (\ref{eq:dual-theta}) and obtain \begin{align} \label{eq:dual-smooth} W_{\varepsilon,c_{\theta}}=\sup_{\alpha,\beta} F_\theta(\alpha,\beta):=\varepsilon\left[\langle \alpha,a\rangle + \langle \beta,b\rangle -\log\left(\langle e^{\alpha},K_{\theta}e^{\beta}\rangle\right)\right] \end{align} which can be shown to be an $L$-smooth function (\cite{nesterov2005smooth}) with $L\leq 2\varepsilon^{-1}$. Applying our approximation scheme to their algorithm leads to the following result. See the proof in Appendix \ref{proof:thm-acc-sin}. \begin{thm} \label{thm:result-acc-sinkhorn} Let $\delta>0$ and $r\geq 1$.
Then the Accelerated Sinkhorn algorithm \ref{algo:acc-sinkhorn} with inputs $\mathbf{K}_\theta$, $a$ and $b$ outputs $(\alpha_{\theta},\beta_{\theta})$ such that \begin{align*} |W_{\varepsilon,c_{\theta}}-F_\theta(\alpha_{\theta},\beta_{\theta})|\leq \frac{\delta}{2} \end{align*} in $\mathcal{O}\left(\frac{nr}{\sqrt{\delta}} [\sqrt{\varepsilon^{-1}}A_\theta]\right)$ algebraic operations, where $A_\theta=\inf\limits_{(\alpha,\beta)\in\Theta_{\theta}} \Vert (\alpha,\beta)\Vert_2$ and $\Theta_{\theta}$ is the set of optimal dual solutions of (\ref{eq:dual-smooth}). Moreover, let $\tau>0$, \begin{align} \label{eq:number-feature-acc} r\in\Omega\left(\frac{K^2}{\delta^2}\left[\min\left(d\varepsilon^{-1} \Vert C\Vert_{\infty}^2+d\log\left(\frac{KVD}{\tau\delta}\right),\log\left(\frac{n}{\tau}\right)\right)\right]\right) \end{align} and $u_1,...,u_r$ drawn independently from $\rho$; then with a probability $1-\tau$ it holds \begin{align} |W_{\varepsilon,c}-F_\theta(\alpha_{\theta},\beta_{\theta})|\leq \delta \end{align} \end{thm} \textbf{Illustrations.} In this experiment, we draw 40000 samples from two normal distributions, respectively $\mu=\mathcal{N}(\mathbf{0},\sigma^2 \textbf{I}_d)$ where $d=2$ and $\sigma^2=10^{-1}$, and $\nu=\mathcal{N}(\mathbf{1}, \textbf{I}_d)$. Moreover, the samples from $\nu$ have been scaled by the maximum norm of the samples, so that they live in the unit ball. In Figure \ref{fig:result_acc_draft}, we plot the relative error $\text{RE}:=\frac{\widehat{\text{ROT}}-\text{ROT}}{\text{ROT}}$ with respect to the ground-truth ROT distance (in percent) for different regularizations, and compare the results obtained for our proposed method (\textbf{RF}) with the one proposed in \cite{altschuler2018massively} (\textbf{Nys}) and with the Sinkhorn algorithm (\textbf{Sin}) proposed in \cite{CuturiSinkhorn}. The number of random features (or rank) chosen varies from $100$ to $2000$. Given the fixed discrete distributions, we repeat the experiment 50 times for each regularization, with different seeds for both the \textbf{RF} and \textbf{Nys} methods, and we plot the average of the results obtained. We observe that there are three regimes of interest. The first regime is when the regularization is sufficiently large, which corresponds to the case $\varepsilon=0.5$. Both the \textbf{Nys} and \textbf{RF} methods obtain very high accuracy orders of magnitude faster than \textbf{Sin}. Note that the \textbf{Nys} method achieves even better accuracy than the \textbf{RF} method in this regime. Another regime of interest is when $\varepsilon=0.1$. In that case both \textbf{Nys} and \textbf{RF} reach the same very high accuracy at a given number of random features (or rank), orders of magnitude faster than \textbf{Sin}. However, it is important to note that the \textbf{Nys} method works only if the number of random features (or the rank) is sufficiently large, that is if $r\geq 1000$. Moreover, even if $r\geq 1000$, among the 50 trials done in this setting, only a few ($\simeq 5$) converge for the \textbf{Nys} method. This phenomenon is repeated and amplified when $\varepsilon=0.05$, where the \textbf{Nys} method fails entirely. The \textbf{RF} method, in contrast, still works well and provides a high-accuracy approximation of the ROT cost orders of magnitude faster than \textbf{Sin}. Finally, there is a regime, when $\varepsilon=0.01$, where all the methods fail.
Indeed, the accuracy of the \textbf{RF} method is then only of the order of $10\%$, the Nyström method cannot be computed, and the Sinkhorn algorithm may be too costly. \begin{figure}[h!] \centering \includegraphics[width=\linewidth, trim=0.5cm 1cm 0.5cm 1cm]{FastSinkhornGAN/img/plot_accuracy_ROT_relative_error_base_2_scale.pdf} \caption{Accuracy vs. time for the \textbf{RF}, \textbf{Nys} and \textbf{Sin} methods at different regularizations (see the description in the text).\label{fig:result_acc_draft}} \end{figure} \section{Experiments} \label{sec:exp} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth, trim=0.5cm 1cm 0.5cm 1cm]{img/plot_accuracy_ROT_vf.pdf} \caption{In this experiment, we draw 40000 samples from two normal distributions and plot the deviation from the ground truth for different regularizations. These two normal distributions are in $\mathbb{R}^2$: one has mean $(1,1)^T$ and identity covariance matrix $I_2$, the other has mean $0$ and covariance $0.1\times I_2$. We compare the results obtained for our proposed method (\textbf{RF}) with the one proposed in \cite{altschuler2018massively} (\textbf{Nys}) and with the Sinkhorn algorithm (\textbf{Sin}) proposed in \cite{CuturiSinkhorn}. The cost function considered here is the squared Euclidean metric and the feature map used is that presented in Lemma~\ref{lem:decomp-RBF}. The number of random features (or rank) varies from $100$ to $2000$. We repeat each experiment 50 times. Note that curves in the plot start at different points, corresponding to the time required for initialization. \emph{Right}: when the regularization is sufficiently large, both the \textbf{Nys} and \textbf{RF} methods obtain very high accuracy orders of magnitude faster than \textbf{Sin}. \emph{Middle right, middle left}: \textbf{Nys} fails to converge, while \textbf{RF} works for any number of random features and provides a very accurate approximation of the ROT cost orders of magnitude faster than \textbf{Sin}. \emph{Left}: when the regularization is too small, all the methods fail: the Nystr\"om method cannot be computed, the accuracy of the \textbf{RF} method is of the order of $10\%$, and the Sinkhorn algorithm may be too costly.\label{fig:result_acc}} \end{figure} \begin{figure} \begin{minipage}{0.25\textwidth} \hfill \includegraphics[width=1\linewidth, trim=0.5cm 1cm 0.5cm 1cm]{img/plot_sphere.jpg} \end{minipage} \hfill \hfill \begin{minipage}{0.65\textwidth} \caption{Here we show the two distributions considered in the experiment presented in Figure~\ref{fig:result_acc_sphere} to compare the time-accuracy tradeoff between the different methods. All the points are drawn on the unit sphere in $\mathbb{R}^3$, and uniform distributions are considered respectively on the red dots and on the blue dots. There are 10000 samples for each distribution.\label{fig:data-sphere}} \end{minipage} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=1\linewidth, trim=0.5cm 1cm 0.5cm 1cm]{img/plot_accuracy_ROT_sphere.jpg} \caption{In this experiment, we draw 20000 samples from two distributions on the sphere (see Figure~\ref{fig:data-sphere}) and plot the deviation from the ground truth for different regularizations. We compare the results obtained for our proposed method (\textbf{RF}) with the one proposed in \cite{altschuler2018massively} (\textbf{Nys}) and with the Sinkhorn algorithm (\textbf{Sin}) proposed in \cite{CuturiSinkhorn}. The cost function considered here is the squared Euclidean metric and the feature map used is that presented in Lemma~\ref{lem:decomp-RBF}. The number of random features (or rank) varies from $100$ to $2000$. We repeat each experiment 10 times. Note that curves in the plot start at different points, corresponding to the time required for initialization. \emph{Right}: when the regularization is sufficiently large, both the \textbf{Nys} and \textbf{RF} methods obtain very high accuracy orders of magnitude faster than \textbf{Sin}. \emph{Middle right, middle left, left}: \textbf{Nys} fails to converge, while \textbf{RF} works for any number of random features and provides a very accurate approximation of the ROT cost orders of magnitude faster than \textbf{Sin}.\label{fig:result_acc_sphere}} \end{figure} \paragraph{Efficiency vs. Approximation trade-off using positive features.} In Figures~\ref{fig:result_acc} and~\ref{fig:result_acc_sphere}, we plot the deviation from the ground truth, defined as $\text{D}:= 100\times\frac{\text{ROT}-\widehat{\text{ROT}}}{|\text{ROT}|} + 100$, and show the time-accuracy tradeoff for our proposed method \textbf{RF}, the Nystr\"om method \textbf{Nys}~\cite{altschuler2018massively} and the Sinkhorn algorithm \textbf{Sin}~\cite{CuturiSinkhorn}, for a range of regularization parameters $\varepsilon$ (each corresponding to a different ground truth $W_{\varepsilon,c}$) and approximations with $r$ random features, in two settings. In particular, we show that our method obtains very high accuracy orders of magnitude faster than \textbf{Sin}, over a wider range of regularizations than \textbf{Nys}. In Figure~\ref{fig:result_acc_higgs} in Appendix~\ref{sec:OT-sphere}, we also show the time-accuracy tradeoff in a high-dimensional setting. \paragraph{Using positive features to learn adversarial kernels in GANs.} Let $P_X$ be a given distribution on $\mathcal{X}\subset\mathbb{R}^D$, $(\mathcal{Z},\mathcal{A},\zeta)$ an arbitrary probability space and $g_{\rho}:\mathcal{Z}\rightarrow\mathcal{X}$ a parametric function whose parameter $\rho$ lives in a topological space $\mathcal{O}$. The function $g_{\rho}$ allows one to generate a distribution on $\mathcal{X}$ by considering the push-forward operation through $g_{\rho}$. Indeed, $g_{\rho_{\sharp}}\zeta$ is a distribution on $\mathcal{X}$, and if the function space $\mathcal{F}=\left\{g_{\rho}\text{: } \rho\in\mathcal{O}\right\}$ is large enough, we may be able to recover $P_X$ for a well-chosen $\rho$. The goal is to learn $\rho^*$ such that $g_{\rho^{*}_{\sharp}}\zeta$ is the closest possible to $P_X$ according to a specific metric on the space of distributions. Here we consider the Sinkhorn divergence introduced in Eq.(\ref{eq:reg-sdiv}). One difficulty when using such a metric is to define a well-behaved cost to measure the distance between distributions in the ground space.
We decide to learn an adversarial cost by embedding the native space $\mathcal{X}$ into a low-dimensional subspace of $\mathbb{R}^d$ thanks to a parametric function $f_{\gamma}$. Therefore, by defining $h_\gamma(x,y)\defeq(f_{\gamma}(x),f_{\gamma}(y))$ and given a fixed cost function $c$ on $\mathbb{R}^d$, we can define a parametric cost function on $\mathcal{X}$ as $c\circ h_\gamma(x,y)\defeq c(f_{\gamma}(x),f_{\gamma}(y))$. To train a Generative Adversarial Network (GAN), one may therefore optimize the following objective: \begin{align*} \min_{\rho}\max_{\gamma}\overline{W}_{\varepsilon,c\circ h_\gamma}(g_{\rho_{\#}}\zeta,P_X) \end{align*} Indeed, taking the $\max$ of the Sinkhorn divergence over $\gamma$ allows one to learn a discriminative cost $c\circ h_\gamma$ \cite{2017-Genevay-gan-vae,salimans2018improving}. However, in practice we do not have access to the distribution of the data $P_X$, but only to its empirical version $\widehat{P}_X\defeq\frac{1}{n}\sum_{i=1}^n \delta_{x_i}$, where $\mathbf{X}\defeq\{x_1,...,x_n\}$ are $n$ i.i.d. samples drawn from $P_X$. By sampling independently $n$ samples $\mathbf{Z}:=\{z_1,...,z_n\}$ from $\zeta$ and denoting $\widehat\zeta\defeq\frac{1}{n}\sum_{i=1}^n \delta_{z_i}$, we obtain the following approximation: \begin{align*} \min_{\rho}\max_{\gamma}\overline{W}_{\varepsilon,c\circ h_\gamma}(g_{\rho_{\#}}\widehat\zeta,\widehat{P}_X) \end{align*} However, as soon as $n$ gets too large, the above objective, using the classic Sinkhorn Alg.~\ref{alg-sink}, is very costly to compute, as each Sinkhorn iteration is quadratic in the number of samples. Therefore one may instead split the data and consider $B\geq 1$ mini-batches $\mathbf{Z} = (\mathbf{Z}^b)_{b=1}^B$ and $\mathbf{X} = (\mathbf{X}^b)_{b=1}^B$ of size $s=\frac{n}{B}$, and solve instead the following optimization problem: \begin{align*} \min_{\rho}\max_{\gamma}\frac{1}{B}\sum_{b=1}^B \overline{W}_{\varepsilon,c\circ h_\gamma}(g_{\rho_{\#}}\widehat{\zeta}^{b},\widehat{P}_X^{b}) \end{align*} where $\widehat{\zeta}^{b}:=\frac{1}{s}\sum_{i=1}^{s} \delta_{z_i^{b}}$ and $\widehat{P}_X^{b}:=\frac{1}{s}\sum_{i=1}^{s} \delta_{x_i^{b}}$. However, the smaller the batches are, the less precise the approximation of the objective is. To overcome this issue, we propose to apply our method and replace the cost function $c$ by an approximation defined as $c_{\theta}(x,y)=-\varepsilon \log \varphi_\theta(x)^T\varphi_\theta(y)$, and consider instead the following optimization problem: \begin{align*} \min_{\rho}\max_{\gamma}\frac{1}{B}\sum_{b=1}^B \overline{W}_{\varepsilon,c_\theta \circ h_\gamma}(g_{\rho_{\#}}\widehat{\zeta}^{b},\widehat{P}_X^{b}). \end{align*} Indeed, in that case, the Gibbs kernel associated with the cost function $c_\theta \circ h_\gamma$ is still factorizable, as we have $c_\theta \circ h_\gamma(x,y)= -\varepsilon \log \varphi_\theta(f_\gamma(x))^T\varphi_\theta(f_\gamma(y))$. Such a procedure allows us to compute the objective in linear time and therefore to largely increase the size of the batches; we keep the batch formulation only because of memory limitations on GPUs. Moreover, we may either consider a random approximation by drawing $\theta$ randomly from a well-chosen distribution, or learn the features $\theta$. In the following, we decide to learn the features $\theta$ in order to obtain an even more discriminative cost function $c_\theta\circ h_\gamma$; a sketch of the resulting per-batch computation is given below.
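As an illustration, the per-batch loss could be assembled along the following lines (a self-contained NumPy sketch with names of our own choosing; \texttt{feats} stands for the composition $\varphi_\theta\circ f_\gamma$, and in practice the whole pipeline is written in an automatic-differentiation framework so that gradients with respect to $\rho$, $\gamma$ and $\theta$ are available):
\begin{verbatim}
import numpy as np

def w_hat(Xi, Zeta, a, b, eps, n_iter=500):
    # dual estimate eps * (a^T log u + b^T log v), with the kernel
    # K = Xi^T Zeta applied right-to-left so that K is never materialized
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (Zeta.T @ (Xi @ u))
        u = a / (Xi.T @ (Zeta @ v))
    return eps * (a @ np.log(u) + b @ np.log(v))

def batch_divergence(Xi, Zeta, eps):
    # debiased Sinkhorn divergence on one mini-batch of size s
    s = Xi.shape[1]
    a = np.full(s, 1.0 / s)
    return w_hat(Xi, Zeta, a, a, eps) - 0.5 * (
        w_hat(Xi, Xi, a, a, eps) + w_hat(Zeta, Zeta, a, a, eps))

def loss(gen_batches, data_batches, feats, eps):
    # feats(x) = varphi_theta(f_gamma(x)), an (r, s) positive matrix
    return np.mean([batch_divergence(feats(Zb), feats(Xb), eps)
                    for Zb, Xb in zip(gen_batches, data_batches)])
\end{verbatim}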
Finally, our objective is: \begin{equation} \label{eq:final-obj} \min_{\rho}\max_{\gamma,\theta}\frac{1}{B}\sum_{b=1}^B \overline{W}_{\varepsilon,c_\theta \circ h_\gamma}(g_{\rho_{\#}}\widehat{\zeta}^{b},\widehat{P}_X^{b}) \end{equation} Therefore, we aim to learn an embedding from the input space into the feature space through two operations: the first takes a sample and embeds it into a latent space thanks to the mapping $f_{\gamma}$, and the second embeds this latent space into the feature space thanks to the feature map $\varphi_\theta$. From now on, we assume that $g_\rho$ and $f_\gamma$ are neural networks. More precisely, we take exactly the same architectures used in \cite{radford2015unsupervised,li2017mmd} to define $g_\rho$ and $f_\gamma$. Moreover, $\varphi_\theta$ is the feature map associated with the Gaussian kernel defined in Lemma \ref{lem:decomp-RBF}, where $\theta$ is initialized with a normal distribution. The number of random features is fixed to $r=600$ in the following. The training procedure is the same as in \cite{2017-Genevay-AutoDiff,li2017mmd} and consists in alternating $n_c$ optimization steps to train the cost function $c_\theta\circ h_\gamma$ with one optimization step to train the generator $g_{\rho}$. The code is available at \href{https://github.com/meyerscetbon/LinearSinkhorn}{github.com/meyerscetbon/LinearSinkhorn}. \begin{table}[!h] \centering \begin{tabular}{ccc} $k_{\theta}(f_\gamma(x),f_\gamma(z))$ & Image $x$ & Noise $z$ \\ \hline Image $x$ & $1802 \times 10^{12}$ & $2961 \times 10^{5}$ \\ \hline Noise $z$ & $2961 \times 10^{5}$ & 48.65 \end{tabular} \caption{Comparison of the learned kernel $k_{\theta}$, trained on CIFAR-10 by optimizing the objective~(\ref{eq:final-obj}), between images taken from CIFAR-10 and random noise sampled in the native space of images. The values shown are averages over 5 noise and/or image samples. As we can see, the learned cost has captured the structure of the image space. \label{table:1}} \end{table} \textbf{Optimization.} Thanks to Proposition \ref{prop:diff-gene}, the objective is differentiable with respect to $\theta$, $\gamma$ and $\rho$. We approximate its gradient using the approximate dual variables obtained by the Sinkhorn algorithm; we refer to Section \ref{sec:enveloppe-thm} for the expression of the gradient. This strategy has two benefits. First, it is memory efficient, as the computation of the gradient at this stage does not require keeping track of the computations involved in the Sinkhorn algorithm. Second, it allows, for a given regularization, computing the Sinkhorn divergence with very high accuracy; our method may therefore also be applied with small regularizations. \begin{figure*}[t] \begin{tabular}{c@{\hskip 0.1in}c@{\hskip 0.1in}} \includegraphics[width=0.48\textwidth]{img/cifar_image_vf.png}& \includegraphics[width=0.48\textwidth]{img/celebA_image_vf.png} \end{tabular} \caption{Images generated by two learned generative models trained by optimizing the objective~(\ref{eq:final-obj}), where we set the batch size $s=7000$, the regularization $\varepsilon=1$, and the number of features $r=600$. \emph{Left, right:} samples obtained from the proposed generative model trained on CIFAR-10~\cite{cifar} and celebA~\cite{liu2015faceattributes}, respectively.
\label{fig-GAN}} \end{figure*} \paragraph{Results.} We train our GAN models on a Tesla K80 GPU for 84 hours on two different datasets, namely the CIFAR-10 dataset~\citep{cifar} and the CelebA dataset~\citep{liu2015faceattributes}, and learn both the proposed generative model and the adversarial cost function $c_\theta$ derived from the adversarial kernel $k_\theta$. Figure~\ref{fig-GAN} illustrates the generated samples and Table~\ref{table:1} displays the geometry captured by the learned kernel. \paragraph{Discussion.} Our proposed method has two main advantages compared to the other Wasserstein GANs (W-GANs) proposed in the literature. First, the computation of the Sinkhorn divergence is linear with respect to the number of samples, which allows us to largely increase the batch size when training a W-GAN and to obtain a better approximation of the true Sinkhorn divergence. Second, our approach is fully differentiable, and therefore we can directly compute the gradient of the Sinkhorn divergence with respect to the parameters of the network. In \cite{salimans2018improving}, the authors do not differentiate through the Wasserstein cost to train their network. In \cite{2017-Genevay-gan-vae}, the authors do differentiate through the iterations of the Sinkhorn algorithm, but this strategy requires keeping track of the computations involved in the Sinkhorn algorithm and can be applied only with large regularizations, as the number of iterations cannot be too large. \section{Linear Sinkhorn with Positive Features} \label{sec-method} The usual flow in regularized transport is to first choose a cost $c(x,y)$, then to define a kernel $k(x,y)\defeq \exp(-c(x,y)/\varepsilon)$, and to adjust the temperature $\varepsilon$ depending on the level of regularization that is adequate for the task. We propose in this work to do exactly the opposite, by choosing instead parameterized feature maps $\varphi_{\theta} : \mathcal{X} \mapsto (\mathbb{R}^{*}_{+})^r$ which associate to any point in $\mathcal{X}$ a vector in the positive orthant. With such maps, we can build the corresponding positive-definite kernel $k_{\theta}$ as $k_{\theta}(x,y)\defeq\varphi_{\theta}(x)^T\varphi_\theta(y)$, which is a positive function. Therefore, as a by-product of the positivity of the feature map, we can define for all $(x,y)\in \mathcal{X}\times\mathcal{X}$ the following cost function \begin{equation}\label{eq:cost} c_{\theta}(x,y)\defeq-\varepsilon\log \varphi_\theta(x)^T\varphi_\theta(y). \end{equation} \begin{rmq}[Transport on the Positive Sphere.] Defining a cost as the log of a dot-product as described in \eqref{eq:cost} has already played a role in the recent OT literature. In~\cite{OLIKER2007600}, the author defines a cost $c$ on the sphere $\mathbb{S}^d$, as $c(x,y)=-\log x^Ty, \text{if } x^Ty>0$, and $\infty$ otherwise. The cost is therefore finite whenever two normal vectors share the same halfspace, and infinite otherwise. When restricted to the positive sphere, the kernel associated to this cost is the linear kernel. See App.~\ref{sec:OT-sphere} for an illustration.
\end{rmq} More generally, the above procedure allows us to build cost functions on any cartesian product of spaces $\mathcal{X}\times\mathcal{Y}$, by defining $c_{\theta,\gamma}(x,y)\defeq-\varepsilon\log \varphi_\theta(x)^T\psi_\gamma(y)$ where $\psi_\gamma: \mathcal{Y} \mapsto (\mathbb{R}^{*}_{+})^r$ is a parametrized function which associates to any point of $\mathcal{Y}$ a vector in the same positive orthant as the image space of $\varphi_\theta$; this is however out of the scope of this paper. \subsection{Achieving linear time Sinkhorn iterations with Positive Features} Choosing a cost function $c_\theta$ as in~\eqref{eq:cost} greatly simplifies computations, by design. Writing, for two sets of points $x_1,\dots,x_n$ and $y_1,\dots,y_m$, the matrices of features \begin{align*}\pmb{\xi}\defeq \begin{bmatrix}\varphi_\theta(x_1),\dots,\varphi_\theta(x_n)\end{bmatrix}\in(\mathbb{R}_{+}^{*})^{r\times n}&,& \pmb{\zeta}\defeq \begin{bmatrix}\varphi_\theta(y_1),\dots,\varphi_\theta(y_m)\end{bmatrix}\in(\mathbb{R}_{+}^{*})^{r\times m} , \end{align*} the resulting sample kernel matrix $\mathbf{K}_\theta$ corresponding to the cost $c_\theta$ is $\mathbf{K}_\theta = \begin{bmatrix}e^{-c_\theta(x_i,y_j)/\varepsilon}\end{bmatrix}_{i,j} = \pmb{\xi}^T\pmb{\zeta}$. Moreover, thanks to the positivity of the entries of the kernel matrix $\mathbf{K}_\theta$, there is no duality gap and we obtain that \begin{equation} \label{eq:dual-theta} W_{\varepsilon,c_\theta}(\mu,\nu) =\!\!\!\!\! \max_{\alpha\in\mathbb{R}^n,\beta\in\mathbb{R}^m} a^T\alpha + b^T\beta -\varepsilon (\pmb{\xi} e^{\alpha/\varepsilon})^T \pmb{\zeta} e^{\beta/\varepsilon}+\varepsilon. \end{equation} Therefore, each Sinkhorn iteration in Alg.~\ref{alg-sink} can be carried out in $\mathcal{O}(r(n+m))$ operations. The main question is how to choose the mapping $\varphi_\theta$. In the following, we show that, for some well-chosen mappings $\varphi_\theta$, we can approximate the ROT distance for some classical costs in linear time. \subsection{Approximation properties of Positive Features} Let $\mathcal{U}$ be a metric space and $\rho$ a probability measure on $\mathcal{U}$. We consider kernels on $\mathcal{X}$ of the form: \begin{align} \label{eq:general} \text{for } (x,y)\in\mathcal{X}^2, \,k(x,y)=\int_{u\in\mathcal{U}} \varphi(x,u)^T\varphi(y,u) d\rho(u). \end{align} Here $\varphi:\mathcal{X}\times\mathcal{U}\rightarrow (\mathbb{R_{+}^{*}})^p$ is such that for all $x\in\mathcal{X}$, $u\in\mathcal{U}\mapsto \Vert\varphi(x,u)\Vert_2$ is square integrable (for the measure $d\rho$). Given such a kernel and a regularization $\varepsilon$, we define the cost function $c(x,y)\defeq-\varepsilon\log(k(x,y)).$ In fact, we will see in the following that for some usual cost functions $\tilde{c}$, e.g. the squared Euclidean cost, the associated Gibbs kernel $\tilde{k}(x,y)=\exp(-\varepsilon^{-1} \tilde{c}(x,y))$ admits a decomposition of the form Eq.(\ref{eq:general}). To obtain a finite-dimensional representation, one can approximate the integral with a weighted finite sum. Let $r\geq 1$ and $\theta:=(u_1,...,u_r)\in\mathcal{U}^r$, from which we define the following positive feature map $$\varphi_{\mathbf{\theta}}(x) \defeq\frac{1}{\sqrt{r}}\left(\varphi(x,u_1),...,\varphi(x,u_r)\right)\in\mathbb{R}^{p\times r}$$ and a new kernel $k_{\theta}(x,y)\defeq\langle \varphi_{\mathbf{\theta}}(x),\varphi_{\mathbf{\theta}}(y)\rangle$.
When the $(u_i)_{1\leq i\leq r}$ are sampled independently from $\rho$, $k_\theta$ may approximate the kernel $k$ arbitrarily well if the number of random features $r$ is sufficiently large. For that purpose, let us now introduce some assumptions on the kernel $k$. \begin{assump} \label{assump-1} There exists a constant $\psi>0$ such that for all $x,y\in\mathcal{X}$ and $u\in\mathcal{U}$: \begin{align} \vert \varphi(x,u)^T\varphi(y,u)/k(x,y)\vert\leq \psi \end{align} \end{assump} \begin{assump} \label{assump-2} There exists $\kappa>0$ such that for all $x,y\in\mathcal{X}$, $k(x,y)\geq \kappa>0$; moreover, $\varphi$ is differentiable and there exists $V>0$ such that: \begin{align} \sup_{x\in\mathcal{X}}\mathbf{E}_{\rho}\left(\Vert \nabla_x\varphi(x,u)\Vert^2\right)\leq V \end{align} \end{assump} We can now present our main result on our proposed approximation scheme of $W_{\varepsilon,c}$, which is obtained in linear time with high probability. See Appendix~\ref{proof:thm_approx_sin} for the proof. \begin{thm} \label{thm:result-sinkhorn-pos} Let $\delta>0$ and $r\geq 1$. Then the Sinkhorn Alg. \ref{alg-sink} with inputs $\mathbf{K}_\theta$, $a$ and $b$ outputs $(u_{\theta},v_{\theta})$ such that $|W_{\varepsilon,c_{\theta}}-\widehat{W}_{\varepsilon,c_\theta}|\leq \frac{\delta}{2}$ in $\mathcal{O}\left(\frac{n\varepsilon r}{\delta}\left[Q_{\theta}- \log\min\limits_{i,j}(a_i,b_j)\right]^2\right)$ algebraic operations, where $Q_{\theta}= - \log \min\limits_{i,j} k_{\theta}(x_i,y_j) $. Moreover, if Assumptions \ref{assump-1} and \ref{assump-2} hold, then for $\tau>0$, \begin{align} \label{eq:number-feature} r\in\Omega\left(\frac{\psi^2}{\delta^2}\left[\min\left(d\varepsilon^{-1} \Vert \mathbf{C}\Vert_{\infty}^2+d\log\left(\frac{\psi VD}{\tau\delta}\right),\log\left(\frac{n}{\tau}\right)\right)\right]\right) \end{align} and $u_1,...,u_r$ drawn independently from $\rho$, with a probability $1-\tau$, $Q_{\theta}\leq \varepsilon^{-1} \Vert \mathbf{C}\Vert_{\infty}^2 +\log\left(2+\delta\varepsilon^{-1}\right)$ and it holds \begin{align} |W_{\varepsilon,c}-\widehat{W}_{\varepsilon,c_\theta}|\leq \delta \end{align} \end{thm} Therefore, with a probability $1-\tau$, Sinkhorn Alg. \ref{alg-sink} with inputs $\mathbf{K}_\theta$, $a$ and $b$ outputs a $\delta$-approximation of the ROT distance in $\tilde{\mathcal{O}}\left(\frac{n}{\varepsilon\delta^3} \Vert \mathbf{C}\Vert_{\infty}^4 \psi^2\right)$ algebraic operations, where the notation $\tilde{\mathcal{O}}(.)$ omits polylogarithmic factors depending on $R, D, \varepsilon, n$ and $\delta$. It is worth noting that for every $r\geq 1$ and $\theta$, Sinkhorn Alg. \ref{alg-sink} using the kernel matrix $\mathbf{K}_{\theta}$ converges towards an approximate solution of the ROT problem associated with the cost function $c_\theta$ in linear time, thanks to the positivity of the feature maps used. Moreover, to ensure with high probability that the solution obtained approximates an optimal solution of the ROT problem associated with the cost function $c$, we need, if the features are chosen randomly, to use a minimum number of them. In contrast, such a result is not possible in \cite{altschuler2018massively}. Indeed, in their work, the number of random features $r$ cannot be chosen arbitrarily, as they need to ensure the positivity of all the coefficients of the kernel matrix obtained by the Nyström algorithm of \cite{musco2016recursive} in order to run the Sinkhorn iterations, which requires a very high precision and hence a large rank $r$.
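The linear complexity claimed above comes from never materializing $\mathbf{K}_{\theta}$: every matrix-vector product in the Sinkhorn iterations is evaluated right-to-left, as in the following sketch (names are ours):
\begin{verbatim}
import numpy as np

n, m, r = 20_000, 20_000, 100
Xi = np.random.rand(r, n) + 0.1      # positive feature matrices
Zeta = np.random.rand(r, m) + 0.1
v = np.random.rand(m)

# K_theta v with K_theta = Xi^T Zeta, evaluated right-to-left:
# r*(n + m) multiplications instead of the n*m needed to form K_theta
Kv = Xi.T @ (Zeta @ v)
\end{verbatim}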
\begin{rmq}[Acceleration.] It is worth noting that our method can also be applied in combination with the accelerated version of the Sinkhorn algorithm proposed in \cite{guminov2019accelerated}. Indeed, for $\tau>0$, applying our approximation scheme to their algorithm leads, with a probability $1-\tau$, to a $\delta$-approximation of $W_{\varepsilon,c}$ in $\mathcal{O}\left(\frac{nr}{\sqrt{\delta}} [\sqrt{\varepsilon^{-1}}A_\theta]\right)$ algebraic operations, where $A_\theta=\inf\limits_{(\alpha,\beta)\in\Theta_{\theta}} \Vert (\alpha,\beta)\Vert_2$, $\Theta_{\theta}$ is the set of optimal dual solutions of (\ref{eq:dual-theta}) and $r$ satisfies Eq.(\ref{eq:number-feature}). See the full statement and the proof in Appendix \ref{proof:thm-acc-sin}. \end{rmq} The number of random features prescribed in Theorem~\ref{thm:result-sinkhorn-pos} ensures with high probability that $\widehat{W}_{\varepsilon,c_{\theta}}$ approximates $W_{\varepsilon,c}$ well when $u_1,\dots,u_r$ are drawn independently from $\rho$. Indeed, to control the error due to the approximation made through the Sinkhorn iterations, we need to control the error of the approximation of $\mathbf{K}$ by $\mathbf{K}_\theta$ relative to $\mathbf{K}$. In the next proposition we show that, with high probability, for all $(x,y)\in\mathcal{X}\times \mathcal{X}$, \begin{align} \label{eq:goal-RFF} (1-\delta)k(x,y)\leq k_{\theta}(x,y)\leq (1+\delta)k(x,y) \end{align} for an arbitrary $\delta>0$, as soon as the number of random features $r$ is large enough. See Appendix~\ref{proof:ratio-RFF} for the proof. \begin{prop} \label{lem:ratio-RFF} Let $\mathcal{X}\subset \mathbb{R}^d$ be compact, $n\geq 1$, $\mathbf{X}= \{x_1,...,x_n\}$ and $\mathbf{Y}= \{y_1,...,y_n\}$ such that $\mathbf{X},\mathbf{Y}\subset\mathcal{X}$, and $\delta>0$. If $u_1,...,u_r$ are drawn independently from $\rho$, then under Assumption~\ref{assump-1} we have \begin{align*} \label{eq:finit-RFF} \mathbb{P}\left(\sup_{(x,y)\in\mathbf{X}\times\mathbf{Y}}\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)\leq 2 n^2\exp\left(-\frac{r \delta^2}{2 \psi^2}\right) \end{align*} Moreover, if in addition Assumption~\ref{assump-2} holds, then we have \begin{align*} \mathbb{P}\left(\sup_{(x,y)\in\mathcal{X}\times\mathcal{X}}\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)\leq \frac{(\kappa^{-1}D)^2 C_{\psi,V,r}}{\delta^2}\exp\left(-\frac{r\delta^2}{2\psi^2(d+1)}\right) \end{align*} where $C_{\psi,V,r} = 2^9 \psi(4+\psi^2/r)V \sup\limits_{x\in\mathcal{X}}k(x,x)$ and $D=\sup\limits_{(x,y)\in\mathcal{X}\times\mathcal{X}}\Vert(x,y)\Vert_2$. \end{prop} \begin{rmq}[Ratio Approximation.] The uniform bound obtained here on the ratio naturally gives a control of the form Eq.(\ref{eq:goal-RFF}). In comparison, in \cite{rahimi2008random}, the authors obtain a uniform bound on the difference, which leads with high probability to a uniform control of the form \begin{align} k(x,y)-\tau\leq k_{\theta}(x,y)\leq k(x,y)+\tau \end{align} where $\tau$ is a decreasing function of $r$, the number of random features required. To recover Eq.(\ref{eq:goal-RFF}) from the above control, one may consider the case where $\tau=\inf_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)\delta$, which can considerably increase the number of random features $r$ needed to ensure the result with at least the same probability.
For example, if the kernel is the Gibbs kernel associated with a cost function $c$, then $\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)=\exp(-\Vert \mathbf{C}\Vert_\infty/\varepsilon)$. More details are left in Appendix~\ref{proof:ratio-RFF}.
\end{rmq}
In the following, we provide examples of some usual kernels $k$ that admit a decomposition of the form Eq.(\ref{eq:general}) and satisfy Assumptions \ref{assump-1} and \ref{assump-2}, and hence for which Theorem \ref{thm:result-sinkhorn-pos} can be applied.
\paragraph{Arc-cosine Kernels.} Arc-cosine kernels have been considered in several works, starting notably with~\citep{NIPS2000_1790}, \citep{NIPS2009_3628} and~\citep{JMLR:v18:14-546}. The main idea behind arc-cosine kernels is that they can be written using positive maps for vectors $x,y$ in $\mathbb{R}^d$ and the signs (or higher exponents) of random projections drawn from $\mu= \mathcal{N}(0,I_d)$:
$$
k_s(x,y) = \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)d\mu(u)
$$
where $\Theta_s(w) = \sqrt{2}\max(0,w)^s$ is a rectified polynomial function. From this formulation, we build a perturbed version of $k_s$ which admits a decomposition of the form Eq.(\ref{eq:general}) satisfying the required assumptions. See Appendix \ref{proof:lemma_arccos} for the full statement and the proof.
\paragraph{Gaussian kernel.} The Gaussian kernel is an important example, as it is both very widely used on its own and the cost function associated with it is the squared Euclidean metric. A decomposition of the form (\ref{eq:general}) has been obtained in \cite{NIPS2014_5348} for the Gaussian kernel, but it does not satisfy the required assumptions. In the following lemma, we build a feature map of the Gaussian kernel that satisfies them. See Appendix \ref{proof:lemma_gaussian} for the proof.
\begin{lemma}
\label{lem:decomp-RBF}
Let $d\geq 1$, $\varepsilon>0$ and let $k$ be the kernel on $\mathbb{R}^d$ such that for all $x,y\in\mathbb{R}^d$, $k(x,y)=e^{-\Vert x-y\Vert_2^{2}/\varepsilon}$. Let $R>0$, $q=\frac{R^2}{2\varepsilon d W_0\left(R^2/\varepsilon d\right)}$ where $W_0$ is the Lambert function, $\sigma^2 =q\varepsilon/4$, $\rho=\mathcal{N}\left(0,\sigma^2\text{Id}\right)$, and let us define for all $x,u\in\mathbb{R}^d$ the following map
\begin{align*}
\varphi(x,u)&=(2q)^{d/4}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right)\exp\left(\frac{\varepsilon^{-1}\Vert u\Vert_2^2}{q}\right)
\end{align*}
Then for any $x,y\in\mathbb{R}^d$ we have $k(x,y)=\int_{u\in\mathbb{R}^d} \varphi(x,u)\varphi(y,u) d\rho(u)$. Moreover, if $x,y\in \mathcal{B}(0,R)$ and $u\in\mathbb{R}^d$, we have $k(x,y)\geq \exp(-4\varepsilon^{-1} R^2)>0$,
\begin{align*}
\left|\varphi(x,u)\varphi(y,u)/k(x,y)\right|\leq 2^{d/2+1} q^{d/2} & \text{\quad and } & \sup_{x\in\mathcal{B}(0,R)}\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)\leq 2^{d/2+3} q^{d/2} \left[(R/\varepsilon)^2+\frac{q}{4\varepsilon}\right].
\end{align*}
\end{lemma}
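As a quick numerical sanity check of Lemma~\ref{lem:decomp-RBF}, the following hypothetical snippet (not part of the paper) verifies by Monte Carlo that the positive features average to the Gaussian kernel; note that the kernel identity itself holds for every $q>0$, the particular choice of $q$ in the lemma only controlling the constants $\psi$ and $V$:
\begin{verbatim}
# Minimal sketch (hypothetical): Monte Carlo check of the Gaussian
# positive feature map: E_{u ~ rho}[phi(x,u) phi(y,u)] = exp(-|x-y|^2/eps).
import numpy as np

rng = np.random.default_rng(0)
d, eps, q, r = 2, 1.0, 2.0, 200000
x, y = rng.normal(size=d), rng.normal(size=d)
U = rng.normal(scale=np.sqrt(q * eps / 4), size=(r, d))  # u_i ~ N(0, q*eps/4)

def phi(z):
    sq = ((U - z) ** 2).sum(axis=1)                      # |z - u_i|^2
    return ((2 * q) ** (d / 4) * np.exp(-2 * sq / eps)
            * np.exp((U ** 2).sum(axis=1) / (q * eps)))

k_hat = (phi(x) * phi(y)).mean()                         # feature average
k_true = np.exp(-((x - y) ** 2).sum() / eps)
print(k_hat, k_true)                                     # close agreement
\end{verbatim}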
\subsection{Constructive approach to Designing Positive Features: Differentiability}
\label{sec:enveloppe-thm}
In this section we consider a constructive way of building feature maps $\varphi_\theta$, which may be chosen arbitrarily or learned according to an objective defined as a function of the ROT distance, e.g. OT-GAN objectives \cite{salimans2018improving,2017-Genevay-gan-vae}. For that purpose, we want to be able to compute the gradient of $W_{\varepsilon,c_\theta}(\mu,\nu)$ with respect to the kernel $\mathbf{K}_{\theta}$, or more specifically with respect to the parameter $\theta$ and the locations of the input measures. In the next proposition we show that the ROT distance is differentiable with respect to the kernel matrix. See Appendix~\ref{sec:diff} for the proof.
\begin{prop}
\label{prop:diff-gene}
Let $\epsilon>0$, $(a,b) \in\Delta_n\times\Delta_m$ and let us also define for any $\mathbf{K}\in(\mathbb{R}_{+}^{*})^{n\times m}$ with positive entries the following function:
\begin{align}
\label{eq:dual-prob-K}
G(\mathbf{K})\defeq \sup\limits_{(\alpha,\beta)\in \mathbb{R}^{n}\times \mathbb{R}^{m}} \langle \alpha,a\rangle + \langle \beta,b\rangle -\varepsilon (e^{\alpha/\varepsilon})^T \mathbf{K} e^{\beta/\varepsilon}.
\end{align}
Then $G$ is differentiable on $(\mathbb{R}_{+}^{*})^{n\times m}$ and its gradient is given by
\begin{align}
\nabla G(\mathbf{K})=-\varepsilon e^{\alpha^*/\varepsilon} (e^{\beta^*/\varepsilon})^T
\end{align}
where $(\alpha^*,\beta^*)$ are optimal solutions of Eq.(\ref{eq:dual-prob-K}).
\end{prop}
Note that when $c$ is the squared Euclidean metric, the differentiability of the above objective has been obtained in~\citep{CuturiBarycenter}. We can now provide the formulas for the gradients of interest. For all $\mathbf{X} \defeq \begin{bmatrix}x_1,\dots,x_n\end{bmatrix}\in\mathbb{R}^{d\times n}$, we denote $\mu(\mathbf{X})=\sum_{i=1}^n a_i\delta_{x_i}$ and $W_{\varepsilon,c_\theta}=W_{\varepsilon,c_\theta}(\mu(\mathbf{X}),\nu)$. Assume for simplicity that $\theta$ is an $M$-dimensional vector and that $(x,\theta)\in\mathbb{R}^d\times\mathbb{R}^M\rightarrow\varphi_\theta(x)\in(\mathbb{R}_{+}^{*})^r$ is a differentiable map. Then from Proposition \ref{prop:diff-gene} and by applying the chain rule, we obtain that
$$
\begin{aligned}
\nabla_\theta W_{\varepsilon,c_\theta} = & - \varepsilon \left( \left(\frac{\partial\pmb{\xi}}{\partial\theta}\right)^T\!\!\! u_{\theta}^\star(\pmb{\zeta} v_{\theta}^{\star})^T +\,\left(\frac{\partial\pmb{\zeta}}{\partial\theta}\right)^T\!\!\! v_{\theta}^\star(\pmb{\xi} u_{\theta}^{\star})^T\right)\text{, } & \nabla_X W_{\varepsilon,c_\theta}= -\varepsilon\left(\frac{\partial\pmb{\xi}}{\partial X}\right)^T\!\!\! u_{\theta}^\star(\pmb{\zeta} v_{\theta}^{\star})^T
\end{aligned}
$$
where $(u^{*}_{\theta},v^{*}_{\theta})$ are optimal solutions of (\ref{eq:eval-dual}) associated with the kernel matrix $\mathbf{K}_{\theta}$. Note that $\left(\frac{\partial\pmb{\xi}}{\partial\theta}\right)^T,\left(\frac{\partial\pmb{\zeta}}{\partial\theta}\right)^T$ and $\left(\frac{\partial\pmb{\xi}}{\partial X}\right)^T$ can be evaluated using simple differentiation if $\varphi_\theta$ is a simple random feature or, more generally, using automatic differentiation if $\varphi_\theta$ is the output of a neural network.
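The gradient of Proposition~\ref{prop:diff-gene} is cheap to form once the Sinkhorn iterations have converged; a minimal sketch (hypothetical code) using the scaling vectors $u_\theta^\star=e^{\alpha^\star/\varepsilon}$ and $v_\theta^\star=e^{\beta^\star/\varepsilon}$, e.g. as returned by the factorized Sinkhorn sketch above:
\begin{verbatim}
# Minimal sketch (hypothetical): gradient of the dual ROT objective G with
# respect to the kernel matrix K, grad_K G = -eps * u* v*^T (rank one).
import numpy as np

def grad_rot_wrt_kernel(u_star, v_star, eps):
    return -eps * np.outer(u_star, v_star)
\end{verbatim}
Gradients with respect to $\theta$ or the locations $\mathbf{X}$ then follow by composing this rank-one matrix with the Jacobians of the feature maps, e.g. via automatic differentiation when $\varphi_\theta$ is the output of a neural network.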
\paragraph{Discussion.} Our proposed method defines a kernel matrix $\mathbf{K}_\theta$ and a parametrized ROT distance $W_{\varepsilon,c_{\theta}}$ which are differentiable with respect to the input measures and the parameter $\theta$. These properties are important and used in many applications, e.g. GANs. However, such operations may not be allowed when using a data-dependent method to approximate the kernel matrix, such as the Nyström method used in \cite{altschuler2018massively}. Indeed, in that case the approximated kernel $\widetilde{\mathbf{K}}$ and the associated ROT distance $W_{\varepsilon,\widetilde{c}}$ are not well defined on a neighbourhood of the locations of the input measures and therefore are not differentiable.
\section*{Acknowledgements}
This work was funded by a ``Chaire d'excellence de l'IDEX Paris Saclay''.
\section*{Supplementary materials}
\paragraph{Outline.} In Sec.~\ref{sec:approx} we provide the proofs related to the approximation properties of our proposed method. In Sec.~\ref{sec:diff} we show the differentiability of the constructive approach. Finally, in Sec.~\ref{sec:OT-sphere} we add more experiments and illustrations of our proposed method.
\section{Approximation via Random Fourier Features}
\label{sec:approx}
\subsection{Proof of Theorem \ref{thm:result-sinkhorn-pos}}
\label{proof:thm_approx_sin}
In the following we denote by $\mathbf{K}=(k(x_i,y_j))_{i,j=1}^n$ and $\mathbf{K}_{\theta}=(k_{\theta}(x_i,y_j))_{i,j=1}^n$ the two Gram matrices associated with $k$ and $k_{\theta}$ respectively. By duality, from these two matrices we can define the two objectives to maximize in order to obtain $W_{\varepsilon,c}$ and $W_{\varepsilon,c_{\theta}}$:
\begin{align*}
W_{\varepsilon,c}&=\max_{\alpha,\beta} f(\alpha,\beta):= \langle \alpha,a\rangle +\langle \beta,b\rangle -\varepsilon\langle e^{ \alpha/\varepsilon}, \mathbf{K} e^{\beta/ \varepsilon}\rangle\\
W_{\varepsilon,c_{\theta}}&=\max_{\alpha,\beta} f_{\theta}(\alpha,\beta):= \langle \alpha,a\rangle +\langle \beta,b\rangle -\varepsilon\langle e^{\alpha/\varepsilon}, \mathbf{K}_{\theta} e^{\beta/\varepsilon}\rangle
\end{align*}
Moreover, as $k$ and $\varphi$ are assumed to be positive, there exist unique (up to a scalar translation) $(\alpha^{*},\beta^{*})$ and $(\alpha^{*}_{\theta},\beta^{*}_{\theta})$, respectively solutions of $\max_{\alpha,\beta} f(\alpha,\beta)$ and $\max_{\alpha,\beta} f_{\theta}(\alpha,\beta)$.
\begin{prv*}
Let us first show the following proposition:
\begin{propo}
\label{prop:conv-sing}
Let $\delta>0$ and $r\geq 1$. Assume that for all $(x,y)\in\mathbf{X}\times\mathbf{Y}$,
\begin{align}
\label{eq:assump-complexity}
\left|\frac{k(x,y)-k_{\theta}(x,y)}{k(x,y)}\right|\leq \frac{\delta\varepsilon^{-1}}{2+\delta\varepsilon^{-1}}
\end{align}
Then the Sinkhorn Alg.~\ref{alg-sink} with inputs $a,b,\mathbf{K}_{\theta}$ outputs $(\alpha_{\theta},\beta_{\theta})$ in
$$\mathcal{O}\left(\frac{nr}{\delta\varepsilon^{-1}}\left[\log \left(\frac{1}{\iota}\right)+\log\left(2+\delta\varepsilon^{-1}\right)+\varepsilon^{-1} R^2\right]^2\right)$$
algebraic operations, where
\begin{align}
\iota=\min\limits_{i,j}(a_i,b_j)&\text{\quad and\quad} R=\max\limits_{(x,y)\in\mathbf{X}\times\mathbf{Y}}c(x,y),
\end{align}
such that:
\begin{align*}
|W_{\varepsilon,c}-f_{\theta}(\alpha_{\theta},\beta_{\theta})|&\leq \delta
\end{align*}
\end{propo}
\begin{prv*}
We remark that:
\begin{align*}
|f(\alpha^{*},\beta^{*})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|&\leq |f(\alpha^{*},\beta^{*})-f(\alpha^{*}_{\theta},\beta^{*}_{\theta})|\\
&+|f(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})|\\
&+|f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|
\end{align*}
Moreover, we have:
\begin{align*}
|f(\alpha^{*},\beta^{*})-f(\alpha^{*}_{\theta},\beta^{*}_{\theta})|&= f(\alpha^{*},\beta^{*})-f(\alpha^{*}_{\theta},\beta^{*}_{\theta})\\
&= f(\alpha^{*},\beta^{*}) - f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta}) + f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta}) - f(\alpha^{*}_{\theta},\beta^{*}_{\theta})\\
&\leq | f(\alpha^{*},\beta^{*}) - f_{\theta}(\alpha^{*},\beta^{*})| + | f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta}) - f(\alpha^{*}_{\theta},\beta^{*}_{\theta}) |
\end{align*}
Therefore, we obtain:
\begin{align*}
|f(\alpha^{*},\beta^{*})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|&\leq 2 |f(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})| + | f(\alpha^{*},\beta^{*}) - f_{\theta}(\alpha^{*},\beta^{*})| \\
&+ |f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|
\end{align*}
Let us now introduce the following lemma:
\begin{lemma}
\label{lem:apply-RFF-S}
Let $1>\tau>0$ and assume that for all $(x,y)\in\mathbf{X}\times\mathbf{Y}$,
$$\left|\frac{k(x,y)-k_{\theta}(x,y)}{k(x,y)}\right|\leq \tau $$
Then for any $\alpha,\beta\in\mathbb{R}^n$ it holds
\begin{align}
|f(\alpha,\beta)-f_{\theta}(\alpha,\beta)|\leq \varepsilon\tau[\langle e^{\varepsilon^{-1} \alpha}, \mathbf{K} e^{\varepsilon^{-1}\beta }\rangle ]
\end{align}
and
\begin{align}
|f(\alpha,\beta)-f_{\theta}(\alpha,\beta)|\leq \varepsilon\frac{\tau}{1-\tau}[\langle e^{\varepsilon^{-1} \alpha}, \mathbf{K}_{\theta} e^{\varepsilon^{-1}\beta}\rangle ]
\end{align}
\end{lemma}
\begin{prv*}
Let $\alpha,\beta\in\mathbb{R}^n$. We remark that:
\begin{align}
f(\alpha,\beta)-f_{\theta}(\alpha,\beta)=\varepsilon [\langle e^{\varepsilon^{-1} \alpha}, (\mathbf{K}_{\theta}-\mathbf{K}) e^{\varepsilon^{-1}\beta }\rangle ]
\end{align}
Therefore we obtain:
\begin{align}
|f(\alpha,\beta)-f_{\theta}(\alpha,\beta)|\leq \varepsilon\sum_{i,j=1}^n e^{\varepsilon^{-1} \alpha_i}e^{\varepsilon^{-1}\beta_j} |[\mathbf{K}_{\theta}]_{i,j}-\mathbf{K}_{i,j}|
\end{align}
The first inequality then follows from the fact that $|[\mathbf{K}_{\theta}]_{i,j}-\mathbf{K}_{i,j}|\leq \tau|\mathbf{K}_{i,j}|$ for all $i,j\in\{1,...,n\}$ and that $k$ is positive. Moreover, from the same inequality we obtain:
\begin{align*}
|[\mathbf{K}_{\theta}]_{i,j}-\mathbf{K}_{i,j}|\leq \frac{\tau}{1-\tau} [\mathbf{K}_{\theta}]_{i,j}
\end{align*}
from which the second inequality follows.
\end{prv*}
Therefore, thanks to Lemma \ref{lem:apply-RFF-S}, we obtain:
\begin{align}
|f(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})|\leq \varepsilon\frac{\tau}{1-\tau}[\langle e^{\varepsilon^{-1} \alpha^{*}_{\theta}}, \mathbf{K}_{\theta} e^{\varepsilon^{-1}\beta^{*}_{\theta} }\rangle ]
\end{align}
But as $(\alpha^{*}_{\theta},\beta^{*}_{\theta})$ is the optimum of $f_{\theta}$, the first-order conditions give us that $\langle e^{\varepsilon^{-1} \alpha^{*}_{\theta}}, \mathbf{K}_{\theta} e^{\varepsilon^{-1}\beta^{*}_{\theta} }\rangle=1$, and finally we have:
\begin{align}
|f(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})|\leq \varepsilon\frac{\tau}{1-\tau}
\end{align}
Thanks to Lemma \ref{lem:apply-RFF-S}, we also deduce that:
\begin{align}
|f(\alpha^{*},\beta^{*})-f_{\theta}(\alpha^{*},\beta^{*})|\leq \varepsilon\tau
\end{align}
Let us now introduce the following theorem:
\begin{thm}(\cite{dvurechensky2018computational})
\label{thm:cvg-sink}
Given $\mathbf{K}_\theta\in\mathbb{R}^{n\times n}$ with positive entries and $a,b\in\Delta_n$, the Sinkhorn Alg.~\ref{alg-sink} computes $(\alpha_{\theta},\beta_{\theta})$ such that
\begin{align*}
|f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|\leq \frac{\delta}{2}
\end{align*}
in $\mathcal{O}\left(\delta^{-1}\varepsilon\log\left(\frac{1}{ \iota\min_{i,j}[\mathbf{K}_{\theta}]_{i,j}}\right)^2\right)$ iterations, where $ \iota=\min\limits_{i,j}(a_i,b_j)$, each of which requires $\mathcal{O}(1)$ matrix-vector products with $\mathbf{K}_{\theta}$ and $\mathcal{O}(n)$ additional processing time.
\end{thm}
Moreover, from Eq.(\ref{eq:assump-complexity}) we have that
\begin{align}
[\mathbf{K}_{\theta}]_{i,j}\geq (1-\tau)\mathbf{K}_{i,j}
\end{align}
where $\tau = \frac{\delta\varepsilon^{-1}}{2+\delta\varepsilon^{-1}}$; therefore $\log\left(\frac{1}{\min_{i,j}[\mathbf{K}_{\theta}]_{i,j}}\right)\leq \log\left(\frac{1}{(1-\tau)\min_{i,j}\mathbf{K}_{i,j}}\right)\leq\log\left(\frac{1}{1-\tau}\right) + \varepsilon^{-1} R^2 $ where $R=\max\limits_{(x,y)\in\mathbf{X}\times\mathbf{Y}}c(x,y)$, and we obtain that
\begin{align}
|f(\alpha^{*},\beta^{*})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|&\leq 2 \varepsilon\frac{\tau}{1-\tau}+\varepsilon\tau + \frac{\delta}{2}
\end{align}
By replacing $\tau$ by its value, we obtain the desired result.
\end{prv*}
We are now ready to prove the theorem. Let $r\geq 1$. From Theorem \ref{thm:cvg-sink}, we obtain directly that:
\begin{align}
|f_{\theta}(\alpha^{*}_{\theta},\beta^{*}_{\theta})-f_{\theta}(\alpha_{\theta},\beta_{\theta})|&\leq \frac{\delta}{2}
\end{align}
in $\mathcal{O}\left(\frac{nr}{\delta}\left[\log\left(\frac{1}{\iota}\right)+Q_{\theta}\right]^2\right)$ algebraic operations. Moreover, let $\tau>0$,
$$r\in\Omega\left(\frac{\psi^2}{\delta^2}\left[\min\left(d\varepsilon^{-1} R^2+d\log\left(\frac{\psi VD}{\tau\delta}\right),\log\left(\frac{n}{\tau}\right)\right)\right]\right)$$
and $u_1,...,u_r$ drawn independently from $\rho$. Then from Proposition \ref{lem:ratio-RFF} we obtain that with a probability of at least $1-\tau$ it holds, for all $(x,y)\in\mathbf{X}\times\mathbf{Y}$,
\begin{align}
\left|\frac{k(x,y)-k_{\theta}(x,y)}{k(x,y)}\right|\leq \frac{\delta\varepsilon^{-1}}{2+\delta\varepsilon^{-1}}
\end{align}
and the result follows from Proposition \ref{prop:conv-sing}.
\end{prv*}
\subsection{Accelerated Version}
\label{proof:thm-acc-sin}
\cite{guminov2019accelerated} show that one can accelerate the Sinkhorn algorithm (see Alg.~\ref{algo:acc-sinkhorn}) and obtain a $\delta$-approximation of the ROT distance. For that purpose, \cite{guminov2019accelerated} introduce a reformulation of the dual problem (\ref{eq:dual-theta}) and obtain
\begin{align}
\label{eq:dual-smooth}
W_{\varepsilon,c_{\theta}}=\sup_{\eta_1,\eta_2} F_\theta(\eta_1,\eta_2):=\varepsilon\left[\langle \eta_1,a\rangle + \langle \eta_2,b\rangle -\log\left(\langle e^{\eta_1},\mathbf{K}_{\theta}e^{\eta_2}\rangle\right)\right]
\end{align}
which can be shown to be an $L$-smooth function (\cite{nesterov2005smooth}) with $L\leq 2\varepsilon^{-1}$. Let us now present our result using the accelerated Sinkhorn algorithm.
\begin{algorithm}[h]
\SetAlgoLined
\textbf{Input:} Initial estimate of the Lipschitz constant $L_0$, $a$, $b$, and $\mathbf{K}$ \\
\textbf{Init:} $a_0 = 0$, $\eta^0=\zeta^0=\lambda^0=0$. \\
\For{$k\geq 0$}
{Set $L_{k+1}= L_k/2$\\
\While{True}
{
Set $a_{k+1}=\frac{1}{2L_{k+1}}+\sqrt{\frac{1}{4L_{k+1}^2}+a_k^2\frac{L_k}{L_{k+1}}}$\\
Set $\tau_k = \frac{1}{a_{k+1}L_{k+1}}$\\
Set $\lambda^k=\tau_k\zeta^k+(1-\tau_k)\eta^k$\\
Choose $i_k=\argmaxB\limits_{i\in\{1,2\}}\Vert \nabla_i F_\theta(\lambda^k)\Vert_2$\\
\If{$i_k=1$}
{$\eta^{k+1}_1=\lambda^k_1+\log(a)-\log(e^{\lambda^k_1}\circ \mathbf{K} e^{\lambda^k_2})$\\
$\eta_2^{k+1}=\lambda_2^{k}$ \\
\Else{$\eta_1^{k+1}=\lambda_1^{k}$\\
$\eta_2^{k+1} = \lambda^k_2+\log(b)-\log(e^{\lambda^k_2}\circ \mathbf{K}^T e^{\lambda^k_1})$
}
}
Set $\zeta^{k+1}=\zeta^k+a_{k+1}\nabla F_{\theta}(\lambda^k)$\\
\If {$F_\theta(\eta^{k+1})\geq F_\theta(\lambda^{k})+\frac{\Vert \nabla F_\theta(\lambda^k)\Vert^2}{2 L_{k+1}}$}
{Set $z = \text{Diag}(e^{\lambda^k_1})\,\mathbf{K}\,\text{Diag}(e^{\lambda^k_2})$\\
Set $c= \langle e^{\lambda^k_1},\mathbf{K}e^{\lambda^k_2}\rangle $\\
Set $\hat{x}^{k+1}=\frac{a_{k+1}c^{-1}z+L_k a_k^2\hat{x}^k}{L_{k+1}a_{k+1}^2}$\\
\textbf{Break}}
Set $L_{k+1} = 2 L_{k+1}$\\
}
}
\caption{Accelerated Sinkhorn Algorithm.~\label{algo:acc-sinkhorn}}
\textbf{Result}: Transport plan $\hat{x}^{k+1}$ and dual points $\eta^{k+1}=(\eta_1^{k+1},\eta_2^{k+1})^T$
\end{algorithm}
\begin{thm}
\label{thm:result-acc-sinkhorn}
Let $\delta>0$ and $r\geq 1$. Then the Accelerated Sinkhorn Alg.~\ref{algo:acc-sinkhorn} with inputs $\mathbf{K}_\theta$, $a$ and $b$ outputs $(\alpha_{\theta},\beta_{\theta})$ such that
\begin{align*}
|W_{\varepsilon,c_{\theta}}-F_\theta(\alpha_{\theta},\beta_{\theta})|\leq \frac{\delta}{2}
\end{align*}
in $\mathcal{O}\left(\frac{nr}{\sqrt{\delta}} [\sqrt{\varepsilon^{-1}}A_\theta]\right)$ algebraic operations, where $A_\theta=\inf\limits_{(\alpha,\beta)\in\Theta_{\theta}} \Vert (\alpha,\beta)\Vert_2$ and $\Theta_{\theta}$ is the set of optimal dual solutions of (\ref{eq:dual-theta}).
Moreover, let $\tau>0$,
\begin{align}
r\in\Omega\left(\frac{\psi^2}{\delta^2}\left[\min\left(d\varepsilon^{-1} \Vert \mathbf{C}\Vert_{\infty}^2+d\log\left(\frac{\psi VD}{\tau\delta}\right),\log\left(\frac{n}{\tau}\right)\right)\right]\right)
\end{align}
and $u_1,...,u_r$ drawn independently from $\rho$; then with a probability $1-\tau$ it holds
\begin{align}
|W_{\varepsilon,c}-F_\theta(\alpha_{\theta},\beta_{\theta})|\leq \delta
\end{align}
\end{thm}
\begin{prv*}
Let us first recall the theorem presented in \cite{guminov2019accelerated}:
\begin{thm}
Given $\mathbf{K}_{\theta}\in\mathbb{R}^{n\times n}$ with positive entries and $a,b\in\Delta_n$, the Accelerated Sinkhorn Alg.~(\ref{algo:acc-sinkhorn}) computes $(\alpha_{\theta},\beta_{\theta})$ such that
\begin{align*}
|W_{\varepsilon,c_{\theta}}-F_\theta(\alpha_{\theta},\beta_{\theta})|\leq \delta
\end{align*}
in $\mathcal{O}\left(\sqrt{\frac{L}{\delta}}A_\theta\right)$ iterations, where $A_\theta=\inf\limits_{(\alpha_\theta^*,\beta_\theta^*)\in\Theta^*} \Vert (\alpha_\theta^*,\beta_\theta^*)\Vert_2$ and $\Theta^*$ is the set of optimal dual solutions. Moreover, each iteration requires $\mathcal{O}(1)$ matrix-vector products with $\mathbf{K}_{\theta}$ and $\mathcal{O}(n)$ additional processing time.
\end{thm}
From the above result, and by applying an argument analogous to the proof of Theorem \ref{thm:result-sinkhorn-pos}, we obtain the desired result.
\end{prv*}
\subsection{Proof of Proposition \ref{lem:ratio-RFF}}
\label{proof:ratio-RFF}
\begin{prv*}
The proof is given for $p=1$, but it also holds for any $p\geq 1$ after some simple modifications. Throughout the proof we write $K$ for the constant $\psi$ of Assumption~\ref{assump-1}, and $R$ for the diameter $D$ of the statement. To obtain the first inequality, we remark that
\begin{align}
\mathbb{P}\left(\sup_{(x,y)\in\mathbf{X}\times\mathbf{Y}}\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)\leq \sum_{(x,y)\in\mathbf{X}\times\mathbf{Y}}\mathbb{P}\left(\left |\frac{k_{\theta}(x,y)}{k(x,y)}-1\right|\geq \delta\right)
\end{align}
Moreover, as $\mathbf{E}_{\rho}\left(\frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}\right)=1$, the result follows by applying Hoeffding's inequality.
\medbreak
To show the second inequality, we follow the same strategy as in \cite{rahimi2008random}. Let us denote $f(x,y)=\frac{k_{\theta}(x,y)}{k(x,y)}-1$ and $\mathcal{M}:= \mathcal{X}\times\mathcal{X}$. First we remark that $|f(x,y)|\leq K + 1$ and $\mathbf{E}_{\rho}(f)=0$. As $\mathcal{M}$ is compact, we can find a $\mu$-net that covers $\mathcal{M}$ with $\mathcal{N}(\mathcal{M},\mu)=\left( \frac{4 R}{\mu}\right)^{2d}$ balls of radius $\mu$, where $R=\sup_{(x,y)\in\mathcal{M}}\Vert(x,y)\Vert_2$. Let us denote by $z_1,...,z_{\mathcal{N}(\mathcal{M},\mu)}\in\mathcal{M}$ the centers of these balls, and let $L_f$ denote the Lipschitz constant of $f$. As $f$ is differentiable, we have $L_f = \sup\limits_{z\in\mathcal{M}}\Vert\nabla f(z)\Vert_2$. Moreover we have:
\begin{align}
\nabla f(z)&=\frac{\nabla k_{\theta}(z)}{k(z)}-\frac{k_{\theta}(z)}{k(z)^2}\nabla k(z)\\
&=\frac{1}{k(z)}\left[(\nabla k_{\theta}(z)-\nabla k(z))+\nabla k(z)\left(1-\frac{k_{\theta}(z)}{k(z)}\right)\right]
\end{align}
Therefore we have
\begin{align}
\mathbf{E}(\Vert\nabla f(z)\Vert^2)\leq \frac{2}{k(z)^2}\left[\mathbf{E}(\Vert\nabla k_{\theta}(z)-\nabla k(z)\Vert^2)+ \Vert\nabla k(z)\Vert^2\mathbf{E}\left(1-\frac{k_{\theta}(z)}{k(z)}\right)^2\right]
\end{align}
But for any $z\in\mathcal{M}$ we have, from Eq.~(\ref{eq:finit-RFF}):
\begin{align}
\mathbf{E}\left(1-\frac{k_{\theta}(z)}{k(z)}\right)^2&=\int_{t\geq 0}\mathbb{P}\left(\left(1-\frac{k_{\theta}(z)}{k(z)}\right)^2\geq t\right)dt\\
&\leq\frac{K^2}{r}
\end{align}
Moreover, we have:
\begin{align}
\nabla k_{\theta}(z)=\frac{1}{r}\sum_{i=1}^r\nabla_x \varphi(x,u_i) \varphi(y,u_i) + \varphi(x,u_i) \nabla_y\varphi(y,u_i)
\end{align}
Therefore we have:
\begin{align*}
\Vert\nabla k_{\theta}(z)\Vert^2&=\frac{1}{r^2}\sum_{i,j=1}^r \langle \nabla_x \varphi(x,u_i), \nabla_x \varphi(x,u_j)\rangle \varphi(y,u_i) \varphi(y,u_j)\\
& +\frac{1}{r^2}\sum_{i,j=1}^r \langle \nabla_y \varphi(y,u_i), \nabla_y \varphi(y,u_j)\rangle \varphi(x,u_i) \varphi(x,u_j) \\
&+ \frac{2}{r^2}\sum_{i,j=1}^r \langle \nabla_x \varphi(x,u_i), \nabla_y \varphi(y,u_j)\rangle \varphi(y,u_i) \varphi(x,u_j)
\end{align*}
Moreover, as
\begin{align}
|\varphi(y,u_i) \varphi(x,u_j)|&\leq \frac{\varphi(y,u_i)^2+\varphi(x,u_j)^2}{2}\\
&\leq K \sup_{x\in\mathcal{X}}k(x,x)
\end{align}
and
\begin{align}
|\langle \nabla_x \varphi(x,u_i), \nabla_y \varphi(y,u_j)\rangle|&\leq \Vert \nabla_x \varphi(x,u_i)\Vert \Vert \nabla_y \varphi(y,u_j)\Vert \\
&\leq \frac{\Vert \nabla_x \varphi(x,u_i)\Vert^2+ \Vert \nabla_y \varphi(y,u_j)\Vert^2}{2}
\end{align}
and by denoting
\begin{align}
V:= \sup_{x\in\mathcal{X}}\mathbf{E}_{\rho}\left(\Vert \nabla_x\varphi(x,u)\Vert^2\right)
\end{align}
we have:
\begin{align}
\mathbf{E}\left(|\langle \nabla_x \varphi(x,u_i), \nabla_y \varphi(y,u_j)\rangle|\right)&\leq V
\end{align}
We can now derive the following upper bound:
\begin{align}
\mathbf{E}(\Vert\nabla k_{\theta}(z)-\nabla k(z)\Vert^2)&= \mathbf{E}(\Vert\nabla k_{\theta}(z)\Vert^2) - \Vert\nabla k(z)\Vert^2\leq 4 V K \sup_{x\in\mathcal{X}}k(x,x)
\end{align}
Moreover, by convexity of the squared $\ell_2$ norm, we also obtain:
\begin{align}
\Vert\nabla k(z)\Vert^2\leq V K \sup_{x\in\mathcal{X}}k(x,x)
\end{align}
Therefore we have
\begin{align}
\mathbf{E}(\Vert\nabla f(z)\Vert^2)&\leq 2\kappa^{-2} VK \sup_{x\in\mathcal{X}}k(x,x)\left[4+\frac{K^2}{r}\right]
\end{align}
Then by applying Markov's inequality we obtain:
\begin{align}
\label{eq:lipchtiz}
\mathbb{P}\left(L_f\geq \frac{\delta}{2\mu}\right)\leq 2\kappa^{-2} VK\sup_{x\in\mathcal{X}}k(x,x)\left[4+\frac{K^2}{r}\right]\left(\frac{2\mu}{\delta}\right)^2
\end{align}
Moreover, the union bound followed by Hoeffding's inequality applied to the anchors of the $\mu$-net gives
\begin{align}
\label{eq:union}
\mathbb{P}\left(\cup_{i=1}^{\mathcal{N}(\mathcal{M},\mu)}|f(z_i)|\geq \delta\right)\leq 2\mathcal{N}(\mathcal{M},\mu)\exp\left(-\frac{r\delta^2}{2K^2}\right)
\end{align}
Then, by combining Eq.~(\ref{eq:lipchtiz}) and Eq.~(\ref{eq:union}),
we obtain that:
\begin{align*}
\mathbb{P}\left(\sup_{z\in\mathcal{M}}|f(z)|\geq \delta\right)\leq 2\left( \frac{4 R}{\mu}\right)^{2d}\exp\left(-\frac{r\delta^2}{2K^2}\right)+2\kappa^{-2} VK\sup_{x\in\mathcal{X}}k(x,x)\left[4+\frac{K^2}{r}\right]\left(\frac{2\mu}{\delta}\right)^2
\end{align*}
Therefore, by denoting
\begin{align}
A_1 &:= 2\left(4 R\right)^{2d}\exp\left(-\frac{r\delta^2}{2K^2}\right)\\
A_2 &:= 2\kappa^{-2} VK\sup_{x\in\mathcal{X}}k(x,x)\left[4+\frac{K^2}{r}\right]\left(\frac{2}{\delta}\right)^2
\end{align}
and by choosing $\mu= \left(\frac{A_1}{A_2}\right)^{\frac{1}{2d+2}}$, we obtain that:
\begin{align*}
\mathbb{P}\left(\sup_{z\in\mathcal{M}}|f(z)|\geq \delta\right)\leq 2^9\left[\frac{\kappa^{-2} K V \sup_{x\in\mathcal{X}}k(x,x)\left[4+\frac{K^2}{r}\right]R^2}{\delta^2}\right]\exp\left(-\frac{r\delta^2}{2K^2(d+1)}\right)
\end{align*}
\end{prv*}
\paragraph{Ratio Approximation.} Let us assume here that $p=1$ for simplicity. The uniform bound obtained on the ratio naturally gives a control of the form Eq.(\ref{eq:goal-RFF}) with a prescribed number of random features $r$. This result allows us to control the error made when using the kernel matrix $\mathbf{K}_{\theta}$ instead of the true kernel matrix $\mathbf{K}$ in the Sinkhorn iterations. In the proposition above, we obtain such a result with a probability of at least $1-2 n^2\exp\left(-\frac{r \delta^2}{2 \psi^2}\right)$, where $r$ is the number of random features and $\psi$ is defined as
\begin{align*}
\psi:=\sup_{u\in\mathcal{U}}\sup_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\left|\frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}\right|.
\end{align*}
In comparison, in \cite{rahimi2008random} the authors obtain a uniform bound on the difference of the two kernels; denoting
\begin{align*}
\phi=\sup_{u\in\mathcal{U}}\sup_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\left|\varphi(x,u)\varphi(y,u)\right|,
\end{align*}
one obtains, with a probability of at least $1-2 n^2\exp\left(-\frac{r \tau^2}{2 \phi^2}\right)$, that for all $(x,y)\in\mathbf{X}\times\mathbf{Y}$
\begin{align}
k(x,y)-\tau\leq k_{\theta}(x,y)\leq k(x,y)+\tau
\end{align}
To recover Eq.(\ref{eq:goal-RFF}) from the above control, we need to take $\tau=\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)\delta$, and by denoting $\phi'=\frac{\phi}{\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)}$ we obtain, with a probability of at least $1-2 n^2\exp\left(-\frac{r \delta^2}{2 \phi'^2}\right)$, that for all $(x,y)\in\mathbf{X}\times\mathbf{Y}$
\begin{align*}
(1-\delta) k(x,y) \leq k_{\theta}(x,y)\leq (1+\delta)k(x,y)
\end{align*}
Therefore, the number of random features needed to guarantee Eq.(\ref{eq:goal-RFF}) from a control of the difference of the two kernels has to be larger than $\left(\frac{\phi'}{\psi}\right)^2$ times the number of random features needed from the control of Proposition \ref{lem:ratio-RFF} to guarantee Eq.(\ref{eq:goal-RFF}) with at least the same probability. But we always have
\begin{align*}
\psi=\sup_{u\in\mathcal{U}}\sup_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\left|\frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}\right|\leq \frac{\sup\limits_{u\in\mathcal{U}}\sup\limits_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\left|\varphi(x,u)\varphi(y,u)\right|}{\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)}=\phi'
\end{align*}
and in some cases the ratio $\left(\frac{\phi'}{\psi}\right)^2$ can be huge.
Indeed, as we will see in the following, for the Gaussian kernel
$$k(x,y)=\exp(-\varepsilon^{-1}\Vert x-y\Vert_2^2)$$
there exist $\varphi$ and $\mathcal{U}$ such that for all $x,y$ and $u\in\mathcal{U}$:
\begin{align*}
\varphi(x,u)\varphi(y,u)=k(x,y)h(u,x,y)
\end{align*}
where for all $(x_0,y_0)\in\mathbf{X}\times\mathbf{Y}$,
$$\sup\limits_{u\in\mathcal{U}}|h(u,x_0,y_0)|=\sup\limits_{u\in\mathcal{U}}\sup\limits_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}|h(u,x,y)|.$$
Therefore, by denoting $M=\sup\limits_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\Vert x-y\Vert_2$ and $m=\inf\limits_{ (x,y)\in\mathbf{X}\times\mathbf{Y}}\Vert x-y\Vert_2$, we obtain that
\begin{align*}
\left(\frac{\phi'}{\psi}\right)^2=\left(\frac{\sup\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)}{\inf\limits_{x,y\in\mathbf{X}\times\mathbf{Y}}k(x,y)}\right)^2=\exp\left(2\varepsilon^{-1}[M^2-m^2]\right)
\end{align*}
\subsection{Proof of Lemma \ref{lem:decomp-RBF}}
\label{proof:lemma_gaussian}
\begin{prv*}
Let $\varepsilon>0$ and $x,y\in\mathbb{R}^d$. We have:
\begin{align}
\label{eq:relation-to-k}
\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right)= \exp\left(-\varepsilon^{-1} \Vert x-y\Vert^2_2\right) \exp\left(-4\varepsilon^{-1} \left\Vert u-\left(\frac{x+y}{2}\right)\right\Vert_2^2\right)
\end{align}
And as the LHS is integrable we have:
\begin{align*}
\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right)du=\int_{u\in\mathbb{R}^d} e^{-\varepsilon^{-1} \Vert x-y\Vert^2_2}\exp\left(-4\varepsilon^{-1} \left\Vert u-\left(\frac{x+y}{2}\right)\right\Vert_2^2\right)du
\end{align*}
Therefore we obtain that:
\begin{align}
\label{eq:rewrite-int}
e^{-\varepsilon^{-1} \Vert x-y\Vert^2_2} = \left(\frac{4}{\pi\varepsilon}\right)^{d/2}\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right)du
\end{align}
Now we want to transform the above expression into the form stated in Eq.(\ref{eq:general}). To do so, let $q>0$ and let us denote by $f_{q}$ the probability density function associated with the multivariate Gaussian distribution $\rho_q=\mathcal{N}\left(0,\frac{q\varepsilon}{4}\text{Id}\right)$. We can rewrite the RHS of Eq.(\ref{eq:rewrite-int}) as follows:
\begin{align*}
&\left(\frac{4}{\pi\varepsilon}\right)^{d/2}\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right)du \\
& =\left(\frac{4}{\pi\varepsilon}\right)^{d/2}\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right)\frac{f_q(u)}{f_q(u)}du \\
&=\left(\frac{4}{\pi\varepsilon}\right)^{d/2}\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right) \left[\left(2\pi\frac{q\varepsilon}{4}\right)^{d/2} e^{\frac{2\varepsilon^{-1}\Vert u\Vert_2^2}{q}}\right] d\rho_q(u) \\
&=(2q)^{d/2}\int_{u\in\mathbb{R}^d}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right) \exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right) e^{\frac{2\varepsilon^{-1}\Vert u\Vert_2^2}{q}}d\rho_q(u)
\end{align*}
Therefore, for each $q>0$, we obtain a feature map of $k$ in $L^2(d\rho_q)$, defined as:
$$\varphi(x,u)= (2q)^{d/4}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right)e^{\frac{\varepsilon^{-1}\Vert u\Vert_2^2}{q}}. $$
Moreover, thanks to Eq.~(\ref{eq:relation-to-k}), we also have:
\begin{align*}
\varphi(x,u)\varphi(y,u)&=(2q)^{d/2}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right)\exp\left(-2\varepsilon^{-1} \Vert y-u\Vert^2_2\right) e^{\frac{2\varepsilon^{-1}\Vert u\Vert_2^2}{q}}\\
&=(2q)^{d/2} \exp\left(-\varepsilon^{-1} \Vert x-y\Vert^2_2\right) \exp\left(-4\varepsilon^{-1} \left\Vert u-\left(\frac{x+y}{2}\right)\right\Vert_2^2\right) e^{\frac{2\varepsilon^{-1}\Vert u\Vert_2^2}{q}}
\end{align*}
Therefore we have:
\begin{align*}
\frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}&= (2q)^{d/2} \exp\left(-4\varepsilon^{-1} \left\Vert u-\left(\frac{x+y}{2}\right)\right\Vert_2^2\right) e^{\frac{2\varepsilon^{-1}\Vert u\Vert_2^2}{q}}\\
& = (2q)^{d/2} \exp\left(-4\varepsilon^{-1} \left(1-\frac{1}{2q}\right)\left\Vert u-\left(1-\frac{1}{2q}\right)^{-1} \left(\frac{x+y}{2}\right)\right\Vert_2^2 \right)\\
&\quad\exp\left( \frac{4\varepsilon^{-1}}{2q-1}\left\Vert\left(\frac{x+y}{2}\right)\right\Vert_2^2 \right)
\end{align*}
Finally, by choosing
$$q=\frac{\varepsilon^{-1} R^2}{2 d W\left(\frac{\varepsilon^{-1} R^2}{d}\right)}$$
where $W$ is the positive real branch of the Lambert function, we obtain for any $x,y\in\mathcal{B}(0,R)$:
\begin{align}
0 \leq \frac{\varphi(x,u)\varphi(y,u)}{k(x,y)}\leq 2 \times (2q)^{d/2}
\end{align}
Moreover we have:
\begin{align*}
\varphi(x,u)&= (2q)^{d/4}\exp\left(-2\varepsilon^{-1} \Vert x-u\Vert^2_2\right)e^{\frac{\varepsilon^{-1}\Vert u\Vert_2^2}{q}}
\end{align*}
Therefore $\varphi$ is differentiable with respect to $x$ and we have:
\begin{align}
\Vert \nabla_x\varphi\Vert_2^2&= 4\varepsilon^{-2} \Vert x-u\Vert_2^2 \varphi(x,u)^2\\
&\leq 4\varepsilon^{-2} \psi \sup_{x\in\mathcal{X}}k(x,x) \Vert x-u\Vert_2^2
\end{align}
where $\psi= 2\times (2q)^{d/2}$. But by definition of the kernel we have $\sup_{x\in\mathcal{B}(0,R)}k(x,x)=1$, and finally we have that for all $x\in\mathcal{B}(0,R)$:
\begin{align}
\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)\leq 4\varepsilon^{-2} \psi \left[R^2+\frac{q\varepsilon}{4}\right]
\end{align}
\end{prv*}
\subsection{Another example: Arc-cosine kernels}
\begin{lemma}
\label{lem:decomp-arccos}
Let $d\geq 1$, $s\geq 0$, $\kappa>0$ and let $k_{s,\kappa}$ be the perturbed arc-cosine kernel on $\mathbb{R}^d$ defined for all $x,y\in\mathbb{R}^d$ as $ k_{s,\kappa}(x,y) = k_s(x,y) + \kappa$. Let also $\sigma>1$, $\rho=\mathcal{N}\left(0,\sigma^2\text{Id}\right)$, and let us define for all $x,u\in\mathbb{R}^d$ the following map:
\begin{align*}
\varphi(x,u)=\left(\sigma^{d/2}\sqrt{2}\max(0,u^T x)^s\exp\left(-\frac{\Vert u\Vert^2}{4}\left[1-\frac{1}{\sigma^2}\right]\right),\sqrt{\kappa}\right)^T
\end{align*}
Then for any $x,y\in\mathbb{R}^d$ we have:
\begin{align*}
k_{s,\kappa}(x,y)&=\int_{u\in\mathbb{R}^d} \varphi(x,u)^T\varphi(y,u) d\rho(u)
\end{align*}
Moreover, we have $k_{s,\kappa}(x,y)\geq \kappa>0$ for all $x,y\in\mathbb{R}^d$, and for any compact $\mathcal{X}\subset\mathbb{R}^d$ we have:
\begin{align*}
\sup_{u\in\mathbb{R}^d}\sup_{ (x,y)\in\mathcal{X}\times\mathcal{X}}\left|\frac{\varphi(x,u)^T\varphi(y,u)}{k_{s,\kappa}(x,y)}\right|<+\infty\quad \text{and} \quad \sup_{x\in\mathcal{X}}\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)<+\infty
\end{align*}
\end{lemma}
\label{proof:lemma_arccos}
\begin{prv*}
Let $s\geq 0$. From \cite{NIPS2009_3628}, we have:
$$ k_s(x,y) = \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)\frac{e^{-\frac{\Vert u \Vert_2^2}{2}} }{(2\pi)^{d/2}}du $$
where $\Theta_s(w)=\sqrt{2}\max(0,w)^s$, as in the main text.
Let $\sigma>1$ and let $f_\sigma$ be the probability density function associated with the distribution $\mathcal{N}(0,\sigma^2\text{Id})$. Therefore we have
\begin{align}
k_s(x,y) &= \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)\frac{e^{-\frac{\Vert u \Vert_2^2}{2}} }{(2\pi)^{d/2}}\frac{f_\sigma(u)}{f_\sigma(u)} du\\
&= \sigma^d \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)\exp\left(-\frac{\Vert u\Vert^2}{2}\left[1-\frac{1}{\sigma^2}\right]\right) d\rho(u)
\end{align}
where $\rho = \mathcal{N}(0,\sigma^2\text{Id})$. By defining for all $x,u\in\mathbb{R}^d$ the map
\begin{align*}
\varphi(x,u)=\left(\sigma^{d/2}\sqrt{2}\max(0,u^T x)^s\exp\left(-\frac{\Vert u\Vert^2}{4}\left[1-\frac{1}{\sigma^2}\right]\right),\sqrt{\kappa}\right)^T
\end{align*}
we obtain for any $x,y\in\mathbb{R}^d$:
\begin{align*}
\int_{u\in\mathbb{R}^d} \varphi(x,u)^T\varphi(y,u) d\rho(u)&= \kappa + \sigma^d \int_{\mathbb{R}^d}\Theta_s(u^T x) \Theta_s(u^T y)\exp\left(-\frac{\Vert u\Vert^2}{2}\left[1-\frac{1}{\sigma^2}\right]\right) d\rho(u)\\
&=\kappa + k_s(x,y)\\
&= k_{s,\kappa}(x,y)
\end{align*}
Moreover, from the definition of the feature map $\varphi$, it is clear that $k_{s,\kappa}\geq \kappa>0$,
\begin{align*}
\sup_{u\in\mathbb{R}^d}\sup_{ (x,y)\in\mathcal{X}\times\mathcal{X}}\left|\frac{\varphi(x,u)^T\varphi(y,u)}{k_{s,\kappa}(x,y)}\right|<+\infty\quad \text{and} \quad \sup_{x\in\mathcal{X}}\mathbf{E}(\Vert \nabla_x\varphi\Vert_2^2)<+\infty.
\end{align*}
\end{prv*}
\section{Constructive Method: Differentiability}
\label{sec:diff}
\subsection{Proof of Proposition \ref{prop:diff-gene}}
\label{sec:diff-proof}
\begin{prv*}
Let us first introduce the following lemma:
\begin{lemma}
\label{lem:gene_control_dual}
Let $(\alpha^*,\beta^*)$ be a solution of (\ref{eq:eval-dual}); then we have
\begin{align*}
\max_{i} \alpha^{*}_i - \min_{i} \alpha^{*}_i &\leq \varepsilon R(\mathbf{K})\\
\max_{j} \beta^{*}_j - \min_{j} \beta^{*}_j &\leq \varepsilon R(\mathbf{K})
\end{align*}
where $R(\mathbf{K})=-\log\left(\iota\frac{\min\limits_{i,j}\mathbf{K}_{i,j}}{\max\limits_{i,j}\mathbf{K}_{i,j}}\right)$ with $\iota:= \min\limits_{i,j}(a_i,b_j)$.
\end{lemma}
\begin{prv}
At optimality, the primal-dual relationship between optimal variables gives us that for all $i=1,...,n$:
\begin{align*}
e^{\alpha^{*}_i/\varepsilon}\langle \mathbf{K}_{i,:},e^{\beta^*/\varepsilon}\rangle = a_i\leq 1
\end{align*}
Moreover, we have
\begin{align*}
\min\limits_{i,j}\mathbf{K}_{i,j}\langle \mathbf{1},e^{\beta^*/\varepsilon}\rangle \leq \langle \mathbf{K}_{i,:},e^{\beta^*/\varepsilon}\rangle \leq \max\limits_{i,j}\mathbf{K}_{i,j}\langle \mathbf{1},e^{\beta^*/\varepsilon}\rangle
\end{align*}
Therefore we obtain
\begin{align*}
\max_{i} \alpha^{*}_i \leq \varepsilon\log\left(\frac{1}{\min\limits_{i,j}\mathbf{K}_{i,j}\langle\mathbf{1},e^{\beta^*/\varepsilon}\rangle}\right)
\end{align*}
and, since $a_i\geq\iota$,
\begin{align*}
\min_{i} \alpha^{*}_i \geq \varepsilon\log\left(\frac{\iota}{\langle\mathbf{1},e^{\beta^*/\varepsilon}\rangle\max\limits_{i,j}\mathbf{K}_{i,j}}\right)
\end{align*}
Therefore we obtain
\begin{align*}
\max_{i} \alpha^{*}_i - \min_{i} \alpha^{*}_i \leq -\varepsilon\log\left(\iota\frac{\min\limits_{i,j}\mathbf{K}_{i,j}}{\max\limits_{i,j}\mathbf{K}_{i,j}}\right)
\end{align*}
An analogous proof for $\beta^*$ leads to a similar result.
\end{prv}
Let us now define for any $\mathbf{K}\in(\mathbb{R}_{+}^{*})^{n\times m}$ with positive entries the following objective function:
\begin{align*}
F(\mathbf{K},\alpha,\beta)\defeq \langle \alpha,a\rangle + \langle \beta,b\rangle -\varepsilon (e^{\alpha/\varepsilon})^T \mathbf{K} e^{\beta/\varepsilon}.
\end{align*}
Let us first show that
\begin{align}
\label{eq:dual_with_K}
G(\mathbf{K})\defeq \sup\limits_{(\alpha,\beta)\in \mathbb{R}^{n}\times \mathbb{R}^{m}}F(\mathbf{K},\alpha,\beta)
\end{align}
is differentiable on $(\mathbb{R}_{+}^{*})^{n\times m}$. For that purpose, let us introduce for any $\gamma_1,\gamma_2>0$ the following objective function:
\begin{align*}
G_{\gamma_1,\gamma_2}(\mathbf{K})\defeq \sup_{\substack{(\alpha,\beta)\in B_{\infty}^n(0,\gamma_1)\times B_{\infty}^m(0,\gamma_2)\\ \alpha^Te_1=0}} F(\mathbf{K},\alpha,\beta)
\end{align*}
where $B_{\infty}^n(0,\gamma)$ denotes the ball of radius $\gamma$ for the infinity norm and $e_1 = (1,0,\dots,0)^T\in\mathbb{R}^n$. In the following we denote
$$S_{\gamma_1,\gamma_2}\defeq \left\{(\alpha,\beta)\in B_{\infty}^n(0,\gamma_1)\times B_{\infty}^m(0,\gamma_2)\text{\quad:\quad} \alpha^Te_1=0 \right\}.$$
Let us now introduce the following lemma:
\begin{lemma}
\label{lem:unique_sol}
Let $\varepsilon>0$, $(a,b) \in\Delta_n\times\Delta_m$ and $\mathbf{K}\in(\mathbb{R}_{+}^{*})^{n\times m}$ with positive entries. Then
\begin{equation*}
\max_{\alpha\in\mathbb{R}^n,\beta\in\mathbb{R}^m} a^T\alpha + b^T\beta -\varepsilon (e^{\alpha/\varepsilon})^T \mathbf{K} e^{\beta/\varepsilon}
\end{equation*}
admits a unique solution $(\alpha^{*},\beta^{*})$ such that $(\alpha^{*})^Te_1=0$, $\Vert \alpha^{*}\Vert_{\infty}\leq \varepsilon R_1(\mathbf{K})$ and $\Vert \beta^{*}\Vert_{\infty} \leq \varepsilon[R_1(\mathbf{K})+ R_2(\mathbf{K})]$, where $R_1(\mathbf{K})=-\log\left(\iota\frac{\min\limits_{i,j}\mathbf{K}_{i,j}}{\max\limits_{i,j}\mathbf{K}_{i,j}}\right)$, $R_2(\mathbf{K})=\log\left(n\frac{\max\limits_{i,j}\mathbf{K}_{i,j}}{\iota}\right)$ and $\iota:= \min\limits_{i,j}(a_i,b_j)$.
\end{lemma}
\begin{prv}
The existence and uniqueness up to a scalar translation is a well-known result; see for example \cite{CuturiSinkhorn}. Therefore there is a unique solution $(\alpha^0,\beta^0)$ such that $(\alpha^0)^Te_1=0$. Moreover, thanks to Lemma \ref{lem:gene_control_dual}, we have for any optimal solution $(\alpha^{*},\beta^{*})$ that
\begin{align}
\label{eq:lhs-bounded}
\max_{i} \alpha^{*}_i - \min_{i} \alpha^{*}_i &\leq \varepsilon R_1(\mathbf{K})\\
\max_{j} \beta^{*}_j - \min_{j} \beta^{*}_j &\leq \varepsilon R_1(\mathbf{K})
\end{align}
Therefore, since $\alpha^0_1=0$, we have $\Vert \alpha^0\Vert_{\infty}\leq \max_{i} \alpha^0_i - \min_{i} \alpha^0_i \leq \varepsilon R_1(\mathbf{K})$. Moreover, the first-order optimality conditions for the dual variables $(\alpha,\beta)$ imply that for all $j=1,..,m$
$$\beta^0_j = -\varepsilon\log\left(\sum_{i=1}^n\frac{\mathbf{K}_{i,j}}{b_j} \exp\left(\frac{\alpha_i^0}{\varepsilon}\right)\right) $$
Therefore we have:
\begin{align*}
\Vert \beta^0 \Vert_{\infty}\leq \Vert \alpha^0 \Vert_{\infty} + \varepsilon\log\left(n\frac{\max\limits_{i,j}\mathbf{K}_{i,j}}{\iota}\right)
\end{align*}
and the result follows.
\end{prv}
Let $\mathbf{K}_0\in(\mathbb{R}_{+}^{*})^{n\times m}$, and let us denote $M_0=\max\limits_{i,j}\mathbf{K}_0[i,j]$, $m_0=\min\limits_{i,j}\mathbf{K}_0[i,j]$ and
$$ A_\omega\defeq\left\{\mathbf{K}\in(\mathbb{R}_{+}^{*})^{n\times m}\text{\quad such that\quad } \Vert \mathbf{K}-\mathbf{K}_0\Vert_{\infty}< \omega \right\}$$
By considering $\omega_0=\frac{m_0}{2}$, we obtain that for any $\mathbf{K}\in A_{\omega_0}$,
\begin{align*}
R_1(\mathbf{K})&\leq \log\left(\frac{1}{\iota}\frac{2M_0+m_0}{m_0}\right)\\
R_2(\mathbf{K})&\leq \log\left(n\frac{2M_0+m_0}{2\iota}\right)
\end{align*}
Let us then denote
\begin{align*}
\gamma_1^0&=\varepsilon\log\left(\frac{1}{\iota}\frac{2M_0+m_0}{m_0}\right)\\
\gamma_2^0&=\varepsilon\left[\log\left(\frac{1}{\iota}\frac{2M_0+m_0}{m_0}\right)+\log\left(n\frac{2M_0+m_0}{2\iota}\right)\right]
\end{align*}
From Lemma \ref{lem:unique_sol}, we then have that for all $\mathbf{K}\in A_{\omega_0}$ there exists a unique optimal solution $(\alpha,\beta)\in B_{\infty}^n(0,\gamma_1^0)\times B_{\infty}^m(0,\gamma_2^0)$ satisfying $\alpha^Te_1=0$. Therefore we have first that for all $\mathbf{K}\in A_{\omega_0}$
\begin{align}
\label{eq:equality}
G_{\gamma_1^0,\gamma_2^0}(\mathbf{K})=G(\mathbf{K})
\end{align}
and moreover, for all $\mathbf{K}\in A_{\omega_0}$, the set
$$Z_\mathbf{K}\defeq\left\{(\alpha,\beta)\in S_{\gamma_1^0,\gamma_2^0}\text{\quad such that\quad}F(\mathbf{K},\alpha,\beta)= \sup\limits_{(\alpha,\beta)\in S_{\gamma_1^0,\gamma_2^0}}F(\mathbf{K},\alpha,\beta) \right\}$$
is a singleton. Let us now consider the restriction of $F$ to $A_{\omega_0}\times S_{\gamma_1^0,\gamma_2^0}$, denoted $F_0$. It is clear from their definitions that $A_{\omega_0}$ is an open convex set and $S_{\gamma_1^0,\gamma_2^0}$ is compact. Moreover, $F_0$ is clearly continuous and, for any $(\alpha,\beta)\in S_{\gamma_1^0,\gamma_2^0}$, $F_0(\cdot,\alpha,\beta)$ is convex. Since for any $\mathbf{K}\in A_{\omega_0}$ the set $Z_\mathbf{K}$ is a singleton, from Danskin's theorem \cite{bacsar2008h} we deduce that $G_{\gamma_1^0,\gamma_2^0}$ is convex and differentiable on $A_{\omega_0}$, and we have for all $\mathbf{K}\in A_{\omega_0}$
\begin{align}
\nabla G_{\gamma_1^0,\gamma_2^0}(\mathbf{K})=-\varepsilon e^{\alpha^*/\varepsilon} (e^{\beta^*/\varepsilon})^T
\end{align}
where $(\alpha^*,\beta^*)\in Z_\mathbf{K}$. Note that any solution of Eq.(\ref{eq:dual_with_K}) can be used to evaluate $\nabla G_{\gamma_1^0,\gamma_2^0}(\mathbf{K})$. Moreover, thanks to Eq.(\ref{eq:equality}), we deduce that $G$ is also differentiable on $A_{\omega_0}$. Finally, the reasoning holds for any $\mathbf{K}_0\in(\mathbb{R}_{+}^{*})^{n\times m}$; therefore $G$ is differentiable and we have:
\begin{align}
\nabla G(\mathbf{K})=-\varepsilon e^{\alpha^*/\varepsilon} (e^{\beta^*/\varepsilon})^T
\end{align}
\end{prv*}
\section{Illustrations and Experiments}
\label{sec:OT-sphere}
In Figure~\ref{fig:result_acc_higgs}, we show the time-accuracy tradeoff in the high-dimensional setting. Here the samples are taken from the higgs dataset\footnote{https://archive.ics.uci.edu/ml/datasets/HIGGS}~\cite{higgs_2}, where the samples live in $\mathbb{R}^{28}$. This dataset contains two classes of signals: a signal process which produces Higgs bosons and a background process which does not. We randomly take 5000 samples from each of these two distributions.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth, trim=0.5cm 1cm 0.5cm 1cm]{img/plot_accuracy_higgs_5000.jpg}
\caption{In this experiment, we randomly take 10000 samples from the two distributions of the higgs dataset and plot the deviation from the ground truth for different regularizations. We compare the results obtained with our proposed method (\textbf{RF}) to the one proposed in \cite{altschuler2018massively} (\textbf{Nys}) and to the Sinkhorn algorithm (\textbf{Sin}) proposed in \cite{CuturiSinkhorn}. The cost function considered here is the squared Euclidean metric and the feature map used is the one presented in Lemma~\ref{lem:decomp-RBF}. The number of random features (or the rank) varies from $100$ to $2000$. For each problem we repeat the experiment 10 times. Note that the curves in the plot start at different points, corresponding to the time required for initialization. \emph{Right, middle right}: when the regularization is sufficiently large, both the \textbf{Nys} and \textbf{RF} methods obtain very high accuracy orders of magnitude faster than \textbf{Sin}. \emph{Middle left}: both methods manage to obtain a high-accuracy approximation of the ROT orders of magnitude faster than \textbf{Sin}. Note that \textbf{Nys} performs better than our proposed method in this setting. \emph{Left}: both methods fail to obtain a good approximation of the ROT.\label{fig:result_acc_higgs}}
\end{figure}
In Figure~\ref{fig:spheres3d}, we consider a discretization of the positive sphere using $50^2=2,500$ points and generate three simple histograms of blurred pixels located in the three corners of the simplex.
\begin{figure*}[h!]
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.17\textwidth]{./img/1.png}&
\includegraphics[width=0.17\textwidth]{./img/2.png}&
\includegraphics[width=0.17\textwidth]{./img/3.png}&
\includegraphics[width=0.17\textwidth]{./img/4.png}&
\includegraphics[width=0.17\textwidth]{./img/5.png}&
\includegraphics[width=0.17\textwidth]{./img/grid.png}\\[-.15cm]
(a)&(b)&(c)&(d)&(e)& $K= X^TX$\end{tabular}
\caption{Using a discretization of the positive sphere with $50^2=2,500$ points, we generate three simple histograms (a,b,c) located in the three corners of the simplex. (d) Wasserstein barycenter with the cost $c(x,y)=-\log(x^Ty)$, using the method of~\cite{2015-benamou-cisc}. (e) Soft-max with temperature 1000 of that barycenter (strongly increasing the relative influence of peaks), revealing that mass is concentrated in areas that would make sense for the more usual $c(x,y)=\arccos x^Ty$ distance on the sphere. The kernel corresponding to that cost is here the simple outer product $K=X^TX$ of a matrix $X$ of dimension $3\times 2500$.\label{fig:spheres3d}}
\end{figure*}
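For completeness, here is a minimal sketch (hypothetical code, not the script used for the figures) of the kernel construction underlying Figure~\ref{fig:spheres3d}: for the cost $c(x,y)=-\log(x^Ty)$ with $\varepsilon=1$, the Gibbs kernel $e^{-c}$ reduces to the plain outer product $K=X^TX$, which has rank at most $3$ and therefore supports linear-time Sinkhorn iterations:
\begin{verbatim}
# Minimal sketch (hypothetical): low-rank Gibbs kernel on the positive sphere.
import numpy as np

n_grid = 50
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, n_grid)   # stay strictly inside
alpha = np.linspace(1e-3, np.pi / 2 - 1e-3, n_grid)   # the positive octant
T, A = np.meshgrid(theta, alpha)
X = np.stack([np.sin(T) * np.cos(A),
              np.sin(T) * np.sin(A),
              np.cos(T)]).reshape(3, -1)              # 3 x 2500 points
K = X.T @ X                                           # 2500 x 2500, rank <= 3
print(K.shape, K.min() > 0)                           # positive entries
\end{verbatim}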
\section{Introduction}
Quantum decoherence is important both in foundational studies, such as quantum measurements, and in practical studies, such as open quantum systems relevant to modern quantum technologies. An important recent development in quantum decoherence, known as Quantum Darwinism \cite{Z03,OPZ1,OPZ2,BZ05,BZ06,BZ08,QD,ZQZ09,ZQZ10,RZ10,RZ11,RZZ12,ZRZ14,ZRZ16,ZZ17,ULZZJ19}, explains how classical objectivity emerges from the quantum formalism. Basically, in Quantum Darwinism one treats the information transmission from the system $S$ to a fragment $F$ of the environment $E$ as a quantum channel, so that the accessible classical information, as bounded by the Holevo information, can be compared across different fragments $F_i$ of the environment $E$. If these fragments agree upon the accessible information to some reasonable extent, then we say that the classical information thus obtained is objective. In this sense, Quantum Darwinism tells us how the environment witnesses the system in the process of quantum decoherence, understood as an effective measurement channel.

As the Holevo bound involves an optimization procedure, it is difficult to calculate. To circumvent this computational difficulty, an alternative formulation of Quantum Darwinism is proposed in the recent works \cite{22a,22b,22c}, where one makes measurements on the fragments $F_i$ of $E$ and asks for the maximal information about $S$ that one can extract from these measurements. In contrast, in the original framework of Quantum Darwinism one makes measurements on the system $S$ and considers the Holevo information defined by the conditional entropy conditioned on $S$. Such a change of measurements is reasonable in the sense that the two settings lead to the same scaling behavior of the Holevo information. It turns out that such a change of measurements not only makes the computations easier, but also reveals deep relations between the correlations in the environment $E$ and the information extracted from measuring the $F_i$. For example, the Koashi-Winter monogamy relation can be invoked in this new framework to show the trade-off relation between the quantum correlations between $S$ and $F_i$ and the classical information of the $F_i$'s.

Interestingly, in this new framework of Quantum Darwinism, one can also consider the scaling of the conditional mutual information $I(S:F_l|F_k)$ \cite{22b}. This form of conditional mutual information also makes an appearance in our recent work on quantum non-Markovianity \cite{HG21,HG22}. We therefore expect to study the connections between Quantum Darwinism and quantum (non-)Markovianity from a new perspective, in light of these new results.

Previous studies on the connection between Quantum Darwinism and quantum (non-)Markovianity rely on specific models. In these model studies \cite{DM1,DM2,DM3,DM4,DM5,MS21}, there is strong evidence showing that the presence of non-Markovianity would suppress Quantum Darwinism. However, there are some models \cite{DM7, OdD19,DM9} in which the non-Markovian nature of the environment does not prevent the appearance of Quantum Darwinism. That the presence of non-Markovianity would hinder Quantum Darwinism is expected from the general understanding that quantum non-Markovianity is related to the information backflow from environment to system, but the absence of such a relation in several models calls for further study in this direction.
In this note, we consider the relations between Quantum Darwinism and approximate quantum Markovianity in a general setting. This is operationally meaningful since the Holevo information decreases under quantum operations, and hence it can only be recovered approximately by further quantum operations \cite{Cai03}. We first use straightforward arguments based on approximately completely positive maps \cite{BDW16} to explain in \S\ref{S3} that for approximately Markovian processes the scaling property of Quantum Darwinism still holds, meaning that for quantum non-Markovianity to hinder Quantum Darwinism, a large non-Markovianity quantifier is required. In \S\ref{S4}, we study the backflow of correlation as quantified by the conditional mutual information, and we find that such backflow of correlation is upper bounded by both the classical information and the quantum discord. If the suppressions as in \cite{22b} hold, then the backflow of information, which is one way of characterizing quantum non-Markovianity, is also greatly suppressed, thereby corroborating the approximate-Markovianity results discussed in \S\ref{S3}.

\section{Quantum Darwinism: Decohering the system or eavesdropping the environment}\label{S2}

We start by briefly reviewing both the standard formulation of Quantum Darwinism \cite{Z03,OPZ1,OPZ2,BZ05,BZ06,BZ08,QD,ZQZ09,ZQZ10,RZ10,RZ11,RZZ12,ZRZ14,ZRZ16,ZZ17} and the recent formulation given in \cite{22a,22b,22c}. In Quantum Darwinism one considers the setting of an open quantum system $S$ interacting with an environment $E$ which consists of $N$ fragments (or subenvironments) $F_i$. The interaction between $S$ and $E$, or the quantum decoherence of $S$ by $E$, is considered as a quantum communication channel $\Lambda_{S\rightarrow F_i}$ such that information is transmitted from $S$ to $F_i$. Namely, correlation is created between $S$ and $F_i$, as the quantum mutual information
\begin{equation}
I(S:F_i)=H_S+H_{F_i}-H_{SF_i}\neq0
\end{equation}
where $H_S=-\text{tr}\rho_S\log_2\rho_S$ is the von Neumann entropy. To extract classical information about $S$ from $F_i$, one considers the quantum decoherence of $S$ by projecting it onto the pointer basis $\{\ket{\hat{s}}\}$ \cite{Z03,PBasis}, and hence one obtains the Holevo information
\begin{equation}\label{2}
\chi(\hat{\Pi}_S:F_i)=H\Bigl(\sum_{\hat{s}}p_{\hat{s}}\rho_{F_i|\hat{s}}\Bigr)-\sum_{\hat{s}}p_{\hat{s}}H(\rho_{F_i|\hat{s}}),
\end{equation}
where $\hat{\Pi}_S=\sum_{\hat{s}}\pi_{\hat{s}}\ket{\hat{s}}\bra{\hat{s}}$ and $\rho_{F_i|\hat{s}}=\braket{\hat{s}|\rho_{SF_i}|\hat{s}}/p_{\hat{s}}$ is the fragment state conditioned on the system's pointer state, with $p_{\hat{s}}$ the probabilities of projecting onto the pointer basis. The Holevo information is an upper bound on the classical information transmittable through the quantum channel $\Lambda_{S\rightarrow F_i}$.
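The Holevo information in \eqref{2} is straightforward to evaluate numerically for small ensembles; a minimal sketch (hypothetical code, not from the cited works):
\begin{verbatim}
# Minimal sketch (hypothetical): Holevo information of an ensemble.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits of a density matrix."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def holevo(probs, states):
    """probs: pointer probabilities p_s; states: conditional rho_{F|s}."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))

# Perfect decoherence onto a qubit fragment: chi = 1 bit
rho0, rho1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
print(holevo([0.5, 0.5], [rho0, rho1]))   # 1.0
\end{verbatim}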
For the successful extraction of information about $S$, one sets the condition
\begin{equation}\label{3}
\braket{\chi(\hat{\Pi}_S:F_i)}_{\# F_\delta}\approx(1-\delta) H_S
\end{equation}
where $\delta$ is a small number called the information deficit, and $\braket{}_{\# F_\delta}$ denotes the average over the fragments of size ${\# F_\delta}$. Here, $H_S=H(\hat{\Pi}_S)$ quantifies the missing information about $S$ computed in the pointer basis, i.e. the classical information content of the pointer state when the quantum state is decohered. The condition \eqref{3} says that an observer only needs a number ${\# F_\delta}$ of fragments $F_i$ to retrieve $(1-\delta) H_S$ classical bits of information about $S$, and therefore the rest of the environment is redundant. Let $\# E=N$ be the size of the environment $E$; then we define the redundancy as
\begin{equation}
R_\delta=\frac{\# E}{\# F_\delta}.
\end{equation}
When the fragments $F_i$ are of the same size, we have $R_\delta=1/f_\delta$ with $f_\delta$ the fraction of the relevant fragments. By using the complementarity relation between the Holevo information and the quantum discord \cite{ZZ13}, we can rewrite the condition \eqref{3} as
\begin{equation}\label{5}
I(S:F_i)\approx(1-\delta) H_S.
\end{equation}
This is because the Holevo information is the upper bound on the transmittable classical information in the pointer state,
\begin{equation}\label{6}
\chi(\hat{\Pi}_S:F_i)=\max_{\Pi_S}I({\Pi}_S:F_i)=J(\hat{\Pi}_S:F_i)
\end{equation}
where the maximum is taken over all possible eigenbases, and $J(\hat{\Pi}_S:F_i)$ is the classical correlation used in the definition of quantum discord. When the quantum correlation is small, we obtain \eqref{5}, with the understanding that it has been averaged, i.e. $I(S:F_i)=\braket{I(S:F_i)}_{\# F_\delta}$.

The most important property of $I(S:F_i)$ for Quantum Darwinism is its {\it scaling property} as $\# F_\delta$ increases. The study of various models confirms the general behavior of $I(S:F_i)$: at the initial stage, $I(S:F_i)$ follows a steep rise with increasing $\# F_\delta$ as the classical correlation increases; then it saturates at a long, flat plateau, meaning that at this stage the added fragments are redundant, i.e. when
\begin{equation}\label{7}
I(S:F_i)\geqslant(1-\delta) H_S;
\end{equation}
finally, $I(S:F_i)$ follows again a steep rise to reach its maximum as the quantum correlation increases.
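A toy illustration (hypothetical code) of how the redundancy $R_\delta$ is read off from such a curve: $\# F_\delta$ is the smallest fragment size whose mutual information reaches $(1-\delta)H_S$, and $R_\delta=\# E/\# F_\delta$:
\begin{verbatim}
# Minimal sketch (hypothetical): redundancy R_delta from an I(S:F) curve.
import numpy as np

def redundancy(I_curve, H_S, delta=0.1):
    """I_curve[m-1] = average I(S:F) over fragments of size m = 1..N."""
    N = len(I_curve)
    for m, I in enumerate(I_curve, start=1):
        if I >= (1 - delta) * H_S:
            return N / m            # R_delta = #E / #F_delta
    return 1.0                      # no plateau reached: no redundancy

# Toy curve: steep rise, then a plateau at H_S = 1 bit (N = 100 fragments)
I_curve = np.minimum(1.0, 0.2 * np.arange(1, 101))
print(redundancy(I_curve, H_S=1.0, delta=0.1))   # 20.0
\end{verbatim}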
Returning to the bound \eqref{10}, for $\# F_\delta\leqslant\frac{N}{2}$ fragments it can be rewritten as \begin{equation}\label{11} \bar{D}(S:\check{F}_i)\leqslant\bigl[1-(1-\delta)\frac{R_\delta}{N}\bigr] H_S. \end{equation} Clearly, since the information deficit is chosen to be small, the quantum correlation as quantified by the quantum discord is greatly suppressed. Importantly, the plateau condition \eqref{9} implies \begin{equation}\label{12} I(S:F_l|F_k)\leqslant2\delta H_S, \end{equation} for $k\geqslant\#F_\delta,~k+l\leqslant N-\# F_\delta$. With \eqref{12}, one can directly test Quantum Darwinism by inspecting the scaling of the conditional mutual information $I(S:F_l|F_k)$, without facing the problem of optimization.

\section{Relation to approximately quantum Markovian processes}\label{S3}

We consider in this section the case in which the quantum channels involved deviate a little from complete positivity. The complete positivity of the open quantum system dynamics is related to the quantum data processing inequality \cite{Bus14}, and the deviation from the quantum data processing inequality indicates quantum non-Markovianity. A small deviation from the quantum data processing inequality can give rise to an approximately completely positive map, and the converse is also true \cite{BDW16}. To apply these deviation results to Quantum Darwinism, we consider the following novel scenario: instead of focusing on the system $S$, we take the measurement made on the fragment $F_k$ as a quantum channel $\Lambda_{F_k\rightarrow F_k'}$. After the system $S$ has decohered and the information about $S$ has been transmitted to $F_k$, we assume that the fragment $F_k$ no longer interacts with $S$ but the interactions between different $F_i$'s are still possible. With this assumption, the quantum channel $\Lambda_{F_k\rightarrow F_k'}$ can be considered as an $F_k$-$F_l$ open system where $F_k$ is the new open ``system'' of an observer's concern and another fragment $F_l$ with $l\neq k$ is the new environment for $F_k$. On the other hand, the system $S$ now becomes a {\it steerable} extension of the $F_k$-$F_l$ open system, because the states $\rho_{F_kF_l}$ can be engineered according to the pointer basis of $S$. This way, we have adapted the new setting of Quantum Darwinism to that of \cite{Bus14}. Now suppose that the quantum channel $\Lambda_{F_k\rightarrow F_k'}$ is approximately CPTP, in the sense that \begin{equation} \frac{1}{2}\norm{\sigma_{SF'_k}-\Lambda_{F_k\rightarrow F_k'}(\rho_{SF_k})}_1\leqslant\epsilon \end{equation} where $\epsilon\in[0,1]$ and $\sigma_{SF'_k}$ is the reduction of the partially evolved state $\sigma_{SF_kF_l}=U_{F_kF_l}(\rho_{SF_kF_l})U^\dag_{F_kF_l}$. Then by the Alicki-Fannes-Winter inequality \cite{AF04,W16}, we have \begin{equation} I(S:F'_k)_\sigma\leqslant I(S:F_k)+2\epsilon\log\abs{S}+(1+\epsilon)h[\frac{\epsilon}{1+\epsilon}] \end{equation} where $h[x]=-x\log x-(1-x)\log(1-x)$ is the binary entropy. Furthermore, by Theorem 8 of \cite{BDW16}, we also have the bound on the conditional mutual information \begin{equation}\label{15} I(S:F_l|F_k)\leqslant2\epsilon\log\abs{S}+(1+\epsilon)h[\frac{\epsilon}{1+\epsilon}].
\end{equation} The bound in \eqref{15} has the same qualitative structure as \eqref{12}: both bounds depend on a small parameter and on the ``entropy'' of $S$.\footnote{In \eqref{15}, the term $\log\abs{S}$ can be roughly understood as the statistical-mechanical entropy in the Boltzmann sense.} We therefore see that, with a small deviation from complete positivity, the scaling behavior \eqref{12} of conditional mutual information for Quantum Darwinism {\it can still hold}, if we choose proper parameters $\epsilon,\delta$, and assume that the fragment $F_k$ no longer interacts with the system $S$. In other words, approximately Markovian quantum processes will not hinder the presence of Quantum Darwinism. This understanding is consistent with the strict bounds on the deviation from quantum Markovianity \cite{FR15,FMP19}. Next, let us consider the Holevo information of the ensemble of states $\mathcal{S}=\{p_{\hat{s}},\rho_{\hat{s}}\}$, \begin{equation} \chi(\mathcal{S})=H\Bigl(\sum_{\hat{s}}p_{\hat{s}}\rho_{\hat{s}}\Bigr)-\sum_{\hat{s}}p_{\hat{s}}H(\rho_{\hat{s}}). \end{equation} Then the Holevo information \eqref{2} of the quantum channel $\Lambda_{S\rightarrow F_i}$ is the Holevo information $\chi(\mathcal{F})$ of the output ensemble $\mathcal{F}=\{p_{\hat{s}},\rho_{F_i|\hat{s}}\}$. The monotonicity of the Holevo information under quantum channels gives $\chi(\mathcal{S})\geqslant\chi(\mathcal{F})$, and when the equality holds there exists a (Petz) recovery channel $\Phi_{F_i\rightarrow S}$ such that $\Phi_{F_i\rightarrow S}\circ\Lambda_{S\rightarrow F_i}(\rho_{\hat{s}})=\rho_{\hat{s}}$ \cite{HJPW04}. The existence of a recovery channel is equivalent to the backflow of {\it all} information from $F_i$ to $S$, and this requires that the Holevo information of the ensemble of states remain intact under the quantum channel $\Lambda_{S\rightarrow F_i}$, i.e. \begin{equation} \chi(\mathcal{S})=\chi(\mathcal{F})=\chi(\hat{\Pi}_S:F_i).\end{equation} A small deviation from this equality is allowable, if the following bound holds (cf. Theorem 6 of \cite{BDW16}) \begin{equation}\label{18} \chi(\mathcal{S})-\chi(\mathcal{F})\geqslant-2\log\sum_{\hat{s}}p_{\hat{s}}\sqrt{F}(\rho_{\hat{s}},\Phi\circ\Lambda(\rho_{\hat{s}})) \end{equation} where $\sqrt{F}(\rho,\sigma)=\norm{\sqrt{\rho}\sqrt{\sigma}}_1$ is the square root of the fidelity. The inequality \eqref{18} is the constraint for the existence of an approximate recovery channel, i.e. an approximately full backflow of information. As far as the quantum decoherence or emergence of classicality is concerned, we expect a large violation of the condition \eqref{18}, so that the information transmission becomes irreversible. However, when strong non-Markovianity is present, it is still possible for such an approximate recovery channel to exist.

\section{Relation to backflow of correlation}\label{S4}

It is well-known by now that the quantum non-Markovianity of open quantum dynamics can be quantified by the information backflow from environment to system, or the increase in distinguishability of the system's states \cite{BLP09}. Recently, we have shown in \cite{HG21} that the backflow of quantum mutual information can be equivalently reformulated as the backflow of conditional mutual information in a system-environment-ancilla setting. Here, we shall consider the backflow of conditional mutual information in the new setting of Quantum Darwinism. We consider first the pure state $\rho_{SE}$ for the system-environment total system.
Since for pure states both the quantum discord and the classical correlation take the maximal value of $H_S$, we have \begin{equation} I(S:E)=D(S:\check{E})+J(S:\check{E})=2H_S. \end{equation} In the system-environment-ancilla setting of \cite{HG21}, we require the initial ancilla-system state to be maximally entangled, i.e. $I(A:S)_{\rho_i}=2H_A=2\log_2 d_A$. In other words, initially the ancilla $A$ possesses all the information about $S$ (and vice versa). After the information of $S$ is spread into $E$, we have in general $I(A:S)\neq2H_A$ and $I(A:SE)=2H_A$. The memory effect is reflected by the non-monotonicity of the conditional mutual information $I(A:E|S)=I(A:SE)-I(A:S)$. Notice that the ancilla $A$ can be considered either as a fragment $F_i$ of the environment or as a special observer who makes measurements on $S$. We first consider the former interpretation of $A$ as a fragment of $E$. To avoid confusion, let us label the $(S^*,E^*,A^*)$ of the system-environment-ancilla setting with a $*$. We can take $A^*$ as the system $S$ of the Quantum Darwinism setting and $S^*$ as a marked fragment $F_1$ of the Quantum Darwinism setting. Let us consider the following \begin{equation} I(A^*:E^*_{sub}|S^*)\equiv I(S:{F}_l|{F}_1), \end{equation} where the RHS is understood as the information transmission from $S$ to the fragment ${F}_l$ (given $F_1$) in the setting of Quantum Darwinism, which the LHS alternatively casts as an information flow from $A^*$ to $S^*$. If we adopt the interpretation of $A^*$ as a fragment of the environment, this information transmission is the backflow of information. So we have \begin{align} &I(A^*:E^*_{sub}|S^*)= I(S:{F}_l|{F}_1)\notag \\ =&I(S:F_{l+1})-I(S:F_1)\notag \\ =&I(S:F_{\#F_\delta})-I(S:F_1)+I(S:F_{l+1})-I(S:F_{\#F_\delta})\notag \\ =&I(S:{F}_{\#F_\delta-1}|{F}_1)+I(S:{F}_{l-\#F_\delta+1}|{F}_{\#F_\delta})\notag \\ \leqslant&I(S:F_{\#F_\delta})-I(S:F_1)+2\delta H_S \label{20}\\ \approx&J(S:F_{\#F_\delta})+D(S:F_{\#F_\delta})-(1-2\delta-\delta')H_S\label{21} \end{align} where in \eqref{20} we have used the scaling \eqref{12}, and in \eqref{21} we have used \eqref{5}, with $\delta'$ the information deficit for this particular case, i.e. $I(S:F_1)\approx(1-\delta')H_S$. By expressing \eqref{5} in terms of $J$, we obtain \begin{equation} I(A^*:E^*_{sub}|S^*)\lessapprox(\delta'+\delta)H_S +D(S:F_{\#F_\delta})\label{22}. \end{equation} This result \eqref{22} shows that the conditional mutual information $I(A^*:E^*_{sub}|S^*)$ is bounded by both the classical information and the quantum correlation, and hence that the backflow of information contains both classical and quantum correlations. Now, in the parameter range $ l\leqslant N-2\#F_\delta$, by further using the bound \eqref{11}, we see that the conditional mutual information that can flow back from $A^*$ to $S^*$ is greatly suppressed. Notice that the backflow of information from $A^*$ to $S^*$ is understood, in the standard setting of Quantum Darwinism, as the channel from $S$ to $F_i$. Although somewhat unusual, this is permitted by the general results on the emergence of Quantum Darwinism in generic many-body quantum systems \cite{BPH15,KTPA18,CLAT21,QR21,RZZ16,Rie17,Oll22}. In a sense, we have modeled the emergence of Quantum Darwinism from $A^*$ to $\{E_{sub}^*\}_{i}\cup\{S^*\}$. Next, we consider an alternative setting by taking $A$ as an observer who measures $S$.
Suppose the initial system-environment state is \begin{equation}\label{SODAW0} \ket{\psi_{SE}(0)}=\Bigl(\sum_{{s}} \ket{{s}}_S\Bigr)\otimes \ket{\phi}_E \end{equation} where $\{\ket{s}\}$ is an eigenbasis of $S$ and normalization factors are omitted. After $A$ measures $S$ via the projections $\ket{\hat{s}}\bra{\hat{s}}$ onto the pointer basis $\ket{\hat{s}}$, the joint state is then \begin{equation}\label{SOH0} \ket{\psi'_{ASE}(0)}=\Bigl(\sum_{\hat{s}} \sqrt{P_{\hat{s}}}\ket{\hat{s}}_A\otimes\ket{\hat{s}}_S\Bigr)\otimes \ket{\phi}_E \end{equation} where $\sqrt{P_{\hat{s}}}=\braket{\hat{s}|s}$. In this case, the $AS$ state is entangled, and the observer $A$ keeps all the classical information about $S$. We have in fact returned to the system-environment-ancilla setting of \cite{HG21}. After the time evolution of $SE$ by $U^{SE}$ (or under the quantum channel $\Lambda_{S\rightarrow E}$), we have \begin{equation}\label{SOH} \ket{\psi'_{ASE}}=U^{SE} \ket{\psi'_{ASE}(0)}=\sum_{\hat{s}} \sqrt{P_{\hat{s}}}\ket{\hat{s}}_A\otimes\ket{\hat{s}}_S\otimes \ket{\phi_{\hat{s}}}_E. \end{equation} The $SE$-part is now a branching state \begin{equation}\label{SOH2} \ket{\psi_{SE}}=U^{SE} \ket{\psi_{SE}(0)}=\sum_{\hat{s}} \sqrt{P_{\hat{s}}}\ket{\hat{s}}_S\otimes \ket{\phi_{\hat{s}}}_E. \end{equation} To proceed, we observe that the joint state $\ket{\psi'_{ASE}}$ can be obtained from the state $\ket{0}_A\otimes\ket{\psi_{SE}}$, with $\ket{0}_A$ a pure state, by a unitary $U^{AS}: \ket{\hat{s}0}\to \ket{\hat{s}\hat{s}}$, \begin{equation} \ket{\psi'_{ASE}}=U^{AS} \ket{0}_A\otimes \ket{\psi_{SE}}. \end{equation} Since the pure state $\ket{0}_A$ has zero von Neumann entropy and the unitary $U^{AS}$ leaves the entropies involved unchanged, we have in this case, \begin{equation}\label{DCMI1} I(AS:E_{sub})_{\ket{\psi'_{ASE}}}= I(S:E_{sub})_{ \ket{\psi_{SE}}}. \end{equation} Let us take the partial trace of \eqref{SOH}, \begin{equation}\label{27} \text{Tr}_{A\overline{E}_{sub}} \Pi_{\psi'}^{ASE}=\sum_{\hat{s}} P_{\hat{s}} \Pi_{\hat{s}}^S \otimes ( \text{Tr}_{\overline{E}_{sub}} \Pi_{\phi_{\hat{s}}}^{E}), \end{equation} where $\overline{E}_{sub}E_{sub}=E$. Now we can assume the condition of good decoherence \cite{ZQZ10}, namely that the off-diagonal terms of $\rho_{S}$ are negligible, so that \eqref{27} is a classical-quantum state with zero quantum discord, i.e. \begin{equation}\label{DCMI2} I(S:E_{sub})_{\ket{\psi'_{ASE}}}= \chi(\check{S}:E_{sub})_{ \ket{\psi_{SE}}}. \end{equation} Finally, combining \eqref{DCMI1} and \eqref{DCMI2}, we obtain \begin{align} I(A:E_{sub}|S)_{\ket{\psi'}}&=I(AS:E_{sub})_{\ket{\psi'}}-I(S:E_{sub})_{\ket{\psi'}}\notag \\ & =I(S:E_{sub})- \chi(\check{S}:E_{sub})=D(\check{S}:E_{sub}).\label{29} \end{align} Since we have assumed good decoherence, we have indeed $D(\check{S}:E_{sub})\approx0$ \cite{22a}. In the present case, the $I(A:E_{sub}|S)_{\ket{\psi'}}$ contains the part that can possibly flow back, and any backflow of conditional mutual information is also bounded as above. Note that, in comparison to \eqref{22}, the classical information is not present in the bound \eqref{29}; this is because the interpretations of $A$ differ in the two cases. Here, $A$ contains all the classical information (i.e. the pointer observables) of $S$, while for \eqref{22} this classical information needs to be transmitted.

\section{Conclusion}\label{IV}

We have discussed the relation between Quantum Darwinism and approximate quantum non-Markovianity.
In \S\ref{S3}, we have shown that, given an approximately Markovian quantum process with an approximately CPTP map, the conditional mutual information still satisfies the scaling property for Quantum Darwinism recently given in \cite{22b}, while in \S\ref{S4} we have proved the converse, namely that the presence of Quantum Darwinism greatly suppresses the backflow of correlation. In this sense, we have actually proved, under some physically reasonable assumptions, the following statement: \begin{quote} {\it Quantum Darwinism is present in an open quantum system if and only if the open system is Markovian or approximately Markovian. } \end{quote} This result is consistent with the numerical works \cite{DM1,DM2,DM3,DM4,DM5,MS21}, where the highest redundancy is reached when the dynamics is fully Markovian, while the redundancy tends to zero when the dynamics is non-Markovian. (In particular, \cite{MS21} also considers the information flow from the environment's point of view.) With the explicit bounds obtained in this paper, we can further understand the boundary between these two regimes. However, the numerical results supporting the irrelevance of non-Markovianity to Quantum Darwinism \cite{DM7,OdD19,DM9} cannot be understood easily from the point of view of the present work. These results involve other aspects of classical objectivity, such as the spectrum broadcast structures \cite{SBS}, which are worthy of further investigation. \begin{acknowledgments} We would like to thank Diogo O. Soares-Pinto for helpful comments. ZH is supported by the National Natural Science Foundation of China under Grant Nos. 12047556, 11725524 and the Hubei Provincial Natural Science Foundation of China under Grant No. 2019CFA003. \end{acknowledgments}
\section{Introduction}

Active Galactic Nuclei (AGN) are the most luminous persistent sources in the Universe ($\gtrsim10^{42}$ erg s$^{-1}$) \citep{Lyden69}. They represent the active stage of supermassive black hole growth \citep{Kormendy13} at the cores of most galaxies, as material reaches their vicinity, often through an accretion disc \citep{SS76}. Detailed characterisation of these regions is challenging due to their small angular size, which makes them inaccessible to direct imaging \citep[except for the notable example of M87$^{*}$,][]{EHT19}. AGN are variable sources over a wide energy range and on different timescales \citep{Matthews63, Smith63}. This variability and the associated timescales carry information about the sizes of the emitting regions and provide a window to measure fundamental properties of the supermassive black hole at the centre \citep{Shen15a,Shen16b,Yue18}, through a method known as reverberation mapping \citep[RM;][]{Blandford82, Peterson14}. Continuum variability spanning X-ray through optical wavelengths provides important clues to the size and structure of the accretion disc in AGN. In the standard lamp-post reprocessing model \citep[e.g.,][]{Frank02}, photons originating in the X-ray emitting corona located above the central region of the accretion disc travel to the outer parts of the accretion disc and are locally reprocessed into photons of longer wavelength, with a characteristic time delay, $\tau$, that depends on the light-travel time from the corona to the disc. For the temperature profile of a geometrically thin disc, this model predicts that the continuum lags from UV through optical wavelengths should follow $\tau\propto\lambda^{4/3}$ \citep{SS73}. Thus, by measuring the delay times of the different continuum bands (which probe different regions of the disc) it is possible to test this model and to map the size and temperature profile of the disc. Previous continuum RM studies \citep[e.g.,][]{Cackett2007,Edelson15,Edelson17,Edelson19,Cackett2020,hernandez2020} have shown that wavelength-dependent lag measurements are broadly consistent with the prediction of the \citet{SS73} model that the average delay scales as $\tau\propto \lambda^{4/3}$. However, these studies have also found that disc size estimates are $\sim 3-4$ times larger than expected \citep{Edelson19}, in agreement with microlensing observations \citep[e.g.,][]{Morgan2010}.

{PKS 0558$-$504}\ (\emph{z}=0.1372) is a quasar that is variable on different time scales and over a wide energy range, from the X-rays to the near-infrared, with frequent flares \citep{Gli07,Gli10}. Different studies have estimated its black hole mass through multiple methods, obtaining values of $\sim (2-4) \times 10^8 M_{\odot}$, and the source accretes at a super-Eddington luminosity, $L/L_{\rm Edd} = 1.7$ \citep{Gli10}. In addition, radio observations revealed the existence of an extended and aligned structure characteristic of bipolar jets \citep{Gli10}, with properties analogous to Galactic stellar-mass black holes. The first continuum RM study of {PKS 0558$-$504}\ was carried out by \citet{Gli13} using data from the \textit{Neil Gehrels Swift Observatory} \citep[hereafter \textit{Swift},][]{Gehrels04} employing simultaneous UVOT \citep{Roming05} and XRT \citep{Burrows05} observations.
While the X-ray, UV, and optical light curves show a strong correlation with one another, \citet{Gli13} found that variations in the optical bands led (rather than lagged behind) the corresponding UV variations, and the UV led the X-ray variations. This result suggested that the optical bands are responsible for the variations observed in X-rays, a behaviour very different from that predicted by the reprocessing model and in contrast to most continuum RM studies. \citet{Gli13} interpreted this result as due to fluctuations in the disc that drive the variability and propagate from the outer disc (optical) to the inner regions of the disc (UV) and corona (X-rays) \citep{Lyubarskii97,Arevalo08}. Given this puzzling behaviour, which exhibited the opposite of the lag-wavelength trend seen in other AGN, we revisited the measurement of the {PKS 0558$-$504}\ reverberation lags with the goal of better understanding the origin of the inverted lag-wavelength relationship found by \citet{Gli13}. In this work, we present a new analysis and new results for the multi-band continuum RM of {PKS 0558$-$504}. The paper is organised as follows: in Section~\ref{sec:observations} we describe the observations and data reduction procedures. We present a revised time-series analysis of the \textit{Swift} data and new lag measurements in Sections~\ref{sec:TSA} and \ref{sec:cream}. We present a discussion of the revised analysis in Section~\ref{sec:discussion} and conclusions in Section~\ref{sec:conclusion}.

\section{Observations and data} \label{sec:observations}

A long-duration multi-wavelength monitoring campaign of {PKS 0558$-$504}\ was carried out with {\em Swift} between September 9, 2008, and March 30, 2010, comprising a total of 90 visits, with $\sim 2$ ks of exposure time per visit at an approximately weekly cadence. Simultaneous observations were made at each visit with UVOT's 6 filters (W2, M2, W1, U, B, and V), and with XRT, which was operated in windowed timing mode. We obtained the light curves of the X-ray, UV, and optical bands from the online data tables of \citet{Gli13}, where further details on the data collection and reduction process are described. These data include a correction for line-of-sight extinction of $E(B - V) = 0.044$ mag \citep{Schlegel98}. The UVOT light curve data provided by \citet{Gli13} are given in magnitudes, and we converted the data to flux densities ($f_\lambda$) to carry out lag measurements.

\section{Time-series analysis} \label{sec:TSA}

We revisited the time-series analysis of {PKS 0558$-$504}\ to determine the time delays between the X-ray, UV, and optical bands, using multiple methods as consistency checks. In all cases, lags are measured relative to the W2\,$(\lambda1928$\,\AA) band as the reference or driving band. \label{sec:measurements}

\subsection{Discrete Correlation Function}

The lag measurements presented by \cite{Gli13} were made using the Discrete Correlation Function (DCF) method of \cite{Edelson88}. This method determines the cross-correlation function (CCF) between two unevenly sampled light curves by binning the CCF into regularly spaced temporal bins. With the goal of replicating the \cite{Gli13} lag measurements, we applied the DCF method to the {PKS 0558$-$504}\ light curve data, measuring the CCF of each band relative to the W2\, band as the driving light curve. We followed the earlier measurement by using a 7-day bin size for the DCF. The resulting CCFs are shown in Figure \ref{fig:dcf}, and can be directly compared with the CCFs in Figure 5 of \citet{Gli13}.
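For reference, the estimator can be written compactly. The following {\sc numpy} sketch (our own minimal illustration; it omits the per-pair noise-variance correction of the published algorithm) implements the binned DCF, with the convention that a positive lag means the second band responds after the first:
\begin{verbatim}
import numpy as np

def dcf(t1, y1, t2, y2, lag_bins):
    # Discrete correlation function (Edelson & Krolik 1988, simplified).
    z1 = (y1 - y1.mean()) / y1.std()
    z2 = (y2 - y2.mean()) / y2.std()
    dt = t2[None, :] - t1[:, None]     # pairwise time differences
    udcf = np.outer(z1, z2)            # unbinned correlation coefficients
    lags, values = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        mask = (dt >= lo) & (dt < hi)
        if mask.any():
            lags.append(0.5 * (lo + hi))
            values.append(udcf[mask].mean())
    return np.array(lags), np.array(values)

# 7-day bins, W2 as the driving band, e.g.:
# lags, ccf = dcf(t_w2, f_w2, t_v, f_v, np.arange(-250, 251, 7))
\end{verbatim}
Here \texttt{t\_w2, f\_w2, t\_v, f\_v} are placeholder names for the observation times and fluxes of the W2 and V bands.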
\begin{figure*} \includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip, width=15cm]{images/dcf.pdf} \caption{Cross-correlation functions measured with the DCF method for the V, B, U, W1, M2, and X-ray bands. In each case, the W2\, band was used as the driving band. The asymmetric structure in these CCFs, particularly for the V and X-ray bands, is the mirror image of the shape seen in the CCFs in Figure 5 of \citet{Gli13}, indicating that their measurements must have inadvertently been carried out using the W2\, band as the responding rather than the driving band. The shape of these CCFs clearly illustrates that the V-band variations lag behind W2, and W2\, lags behind the X-ray band.} \label{fig:dcf} \end{figure*}

There are small differences in the point-to-point scatter between our CCFs and those of \citet{Gli13}. These may be attributed to using different implementations of the DCF algorithm, although we are unable to point to a specific cause. Aside from these minor details, the shapes of our CCFs are largely the same as those presented by \citet{Gli13}, with one major overall difference: comparing our DCFs with those presented in Figure 5 of \citet{Gli13}, it is immediately clear that their CCFs are time-reversed versions of ours, that is, mirror images reflected about $\tau=0$. This is most obvious in the V band, where the \citet{Gli13} CCF shows a clear negative lag while ours shows a positive lag, and in the X-ray band, where the \citet{Gli13} CCF indicates a positive lag while ours is negative. In the U and B bands, the CCFs are peaked near $\tau=0$ but slightly asymmetric, and our CCFs show the opposite sense of asymmetry from those of \citet{Gli13}. The CCFs for the W1 and M2 bands are peaked at zero lag and nearly symmetric. Since our CCFs were measured using W2 as the driving continuum band, we conclude that the inverted lag-wavelength relationship found by \citet{Gli13} must have been the result of their having interchanged the order of the driving and responding bands when applying the DCF method. That is, they must have inadvertently used the W2 band as the responding band rather than the driving band when calculating the DCF, which would have the effect of time-reversing the CCF structure. Our updated DCF measurements demonstrate that the continuum lags in {PKS 0558$-$504}\ actually behave in the usual, expected manner, with shorter-wavelength variations occurring first and longer-wavelength variations responding at later times. \citet{Gli13} performed a Monte Carlo bootstrapping analysis, generating multiple randomised versions of the light curves and re-running the DCF to produce an ensemble of results to evaluate the lag of the CCF peak and its uncertainty. However, the DCF method can have difficulty in identifying correlations when applied to sparsely sampled light curves \citep{Peterson93}, and the 7-day observed cadence does not optimally sample the time delays. Except for the V band, the DCF peaks for the UV and optical bands occur at the $\tau=0$ bin, indicating that the lags are not well resolved by the discrete sampling. In order to obtain more accurate determinations of the lags and uncertainties, we re-measured the lags using three additional methods, each of which has better sensitivity than the DCF method for detection of the short lags in the {PKS 0558$-$504}\ light curves.
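Before turning to these methods, the sign-reversal effect described above is easy to verify numerically. Reusing the \texttt{dcf} sketch from the previous subsection on a synthetic toy signal (a 10-day echo of a sinusoid, our own construction, not the real data), swapping the driving and responding inputs mirrors the recovered lag about $\tau=0$:
\begin{verbatim}
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 560.0, 90))        # ~weekly sampling
drive = np.sin(2 * np.pi * t / 80.0)            # toy driving band
echo = np.sin(2 * np.pi * (t - 10.0) / 80.0)    # responds 10 d later
bins = np.arange(-50.0, 51.0, 7.0)              # 7-day DCF bins
for a, b in [(drive, echo), (echo, drive)]:
    lags, ccf = dcf(t, a, t, b, bins)
    print(lags[np.argmax(ccf)])   # ~ +10 d, then ~ -10 d when swapped
\end{verbatim}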
\subsection{Interpolation Cross-correlation Function}

One of the most common methods for RM measurements is the Interpolation Cross-Correlation Function (ICCF), which employs a linear interpolation between successive observations \citep{Peterson04}. The ICCF quantifies the amount of similarity between two time series as a function of the time shift, or lag, between them. We measure ICCF lags using the {\sc pyccf} code\footnote{\url{http://ascl.net/code/v/1868}} \citep{Sun18}. Since the {PKS 0558$-$504}\ observations consist of $\sim$300 days with an average cadence of 3 days, we calculated the ICCF over a lag range of $\pm$250 days with a 0.2-day grid spacing. The resulting CCF has peak amplitude $r_\mathrm{max}$ at lag $\tau_\mathrm{peak}$. The centroid lag $\tau_\mathrm{cen}$ is determined as the centroid of all points in the CCF above $0.8r_\mathrm{max}$. Uncertainties were determined using the Monte Carlo flux randomisation/random subset sampling \citep[FR/RSS;][]{Peterson98, Peterson04} method, with 10,000 realisations. From the $\tau_\mathrm{peak}$ and $\tau_\mathrm{cen}$ values of each Monte Carlo realisation of the data, we obtain the cross-correlation peak distribution (CCPD) and the cross-correlation centroid distribution (CCCD), and the final values of $\tau_\mathrm{peak}$ and $\tau_\mathrm{cen}$ and their uncertainties are taken to be the median and 68\% confidence intervals of these distributions. The ICCF lags are listed in Table~\ref{tab:lagTime}, and the CCCD is displayed in the third panel of Fig.~\ref{fig:LagJavelin}, for each pair formed by W2\ and the X-ray/UV/optical light curves. The CCCDs show narrow peaks at regularly spaced intervals corresponding to $0.5$ times the sampling interval due to aliasing, but the overall widths of these distributions are much broader than these narrow aliasing peaks, resulting in large uncertainty ranges on $\tau_\mathrm{peak}$ and $\tau_\mathrm{cen}$. We also find a trend of decreasing $r_{\rm max}$ as a function of wavelength, similar to that observed in other \emph{Swift} RM studies \citep[e.g.,][]{Edelson19}. This can be attributed to both the lower S/N in longer wavelength bands and the increasing dilution of the AGN variability by host galaxy starlight.

\subsection{JAVELIN}\label{sec:javelin}

We also used {\sc javelin}\footnote{{\sc javelin}, Just Another Vehicle for Estimating Lags In Nuclei Code, \url{https://bitbucket.org/nye17/javelin}} \citep{Zu11, Zu13, Zu16}, which models the continuum light curve variability as an auto-regressive process, using a damped random walk (DRW) model to interpolate the reference light curve. It implements a Markov Chain Monte Carlo (MCMC) procedure via {\sc emcee} \citep{Foreman13} to sample the posterior distributions of the optimal time delay (assuming a top-hat response function with a central value at $\tau_{\textrm{jav}}$) and of the scaling parameters needed for the driving light curve to match each continuum light curve. We ran {\sc javelin} on the full light curves, with MCMC settings of 200 walkers, a 1000-step chain, and a burn-in of 500 steps. We show the light curve together with the best fit and its $1\sigma$ confidence envelope in the left panel of Fig.~\ref{fig:LagJavelin} for each band. The marginalised posterior distributions of $\tau_{\textrm{jav}}$ are shown in the middle panel of Fig.~\ref{fig:LagJavelin}, and a summary of their median values and 68\% confidence intervals is given in Table~\ref{tab:lagTime}.
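For completeness, the ICCF and its FR/RSS error analysis described above reduce to a few lines of {\sc numpy}. The sketch below (our own condensed rendering; {\sc pyccf} implements the full algorithm, including the two-way interpolation average that we omit here) reuses the imports from the previous listings:
\begin{verbatim}
def iccf(t1, y1, t2, y2, lags):
    # One-way ICCF: shift band 2 by each trial lag, interpolate band 1
    # onto the shifted epochs, and correlate (positive = band 2 lags).
    r = np.empty(len(lags))
    for k, lag in enumerate(lags):
        ts = t2 - lag
        ok = (ts >= t1[0]) & (ts <= t1[-1])
        r[k] = np.corrcoef(np.interp(ts[ok], t1, y1), y2[ok])[0, 1]
    return r

def frrss(t1, y1, e1, t2, y2, e2, lags, n=10000, seed=0):
    # FR/RSS Monte Carlo: random subset sampling of epochs plus flux
    # randomisation by the quoted errors; returns centroid percentiles.
    rng = np.random.default_rng(seed)
    cen = np.empty(n)
    for m in range(n):
        i = np.unique(rng.integers(0, len(t1), len(t1)))
        j = np.unique(rng.integers(0, len(t2), len(t2)))
        r = iccf(t1[i], y1[i] + e1[i] * rng.standard_normal(i.size),
                 t2[j], y2[j] + e2[j] * rng.standard_normal(j.size),
                 lags)
        top = r > 0.8 * r.max()
        cen[m] = np.sum(lags[top] * r[top]) / np.sum(r[top])
    return np.percentile(cen, [16, 50, 84])

# lags = np.arange(-250.0, 250.2, 0.2)   # the grid used above
\end{verbatim}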
\begin{figure*} \includegraphics[trim=0.2cm 3.3cm 2.0cm 3.7cm, clip, width=17cm]{images/lag_Flux4.pdf} \caption{Inter-band lag measurements of {PKS 0558$-$504}. {\em Left panels:} Best-fit DRW model obtained with {\sc javelin} for each light curve relative to the W2 band. The contours show the 68\% confidence interval. {\em Middle panels:} Marginalised posterior distributions of the lag measurements with {\sc javelin}. The median value and its corresponding 68\% confidence interval are shown as \emph{$\tau_{\rm jav}$\,}. $\tau=0$ is shown for reference as the red vertical line. {\em Right panels:} CCCD histograms measured with {\sc pyccf}; the black lines show the 0.8$r_\mathrm{max}$ threshold used to calculate $\tau_\mathrm{cen}$. The vertical red line corresponds to $\tau=0$, for reference. } \label{fig:LagJavelin} \end{figure*}

\begin{table*} \centering \caption{Lag measurements for {PKS 0558$-$504}\ obtained with four different methods (see Sec.~\ref{sec:measurements}) for every filter with respect to the W2 band. Columns 2 and 3 give the observed and rest-frame wavelength of each filter. Column 4 lists the ICCF maximum correlation coefficient $r_\mathrm{max}$, where the uncertainty is obtained as the standard deviation of the $r_\mathrm{max}$ values from the 10,000 FR/RSS iterations. Columns 5 and 6 give the ICCF peak and centroid lags, respectively. Column 7 gives the lags measured by {\sc javelin}, column 8 the lags obtained with {\sc mica2}, and column 9 the lag estimates from PyceCREAM.} \label{tab:lagTime} \begin{tabular}{lcccccccc} \hline\hline Band & $\lambda_\mathrm{observed}$ & $\lambda_\mathrm{rest}$ & ICCF & CCPD & CCCD & {\sc javelin} & {\sc mica2} & PyceCREAM \\ & & & $r_\mathrm{max}$ & $\tau_{\rm peak}$& $\tau_{\rm cent}$ &$\tau_{\rm JAV}$& $\tau_{\rm MICA2}$ &$\tau$\\ &[\AA] & [\AA] & & [days] & [days] & [days] & [days] &[days] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline \hline HX & 3 & 2.63& 0.52$\pm 0.06$ & $-6.75^{+11.06}_{-8.68}$ & $-7.56^{+6.10}_{-8.20}$ &$-3.07^{+0.3}_{-0.4}$ & $-14.0^{+6.54}_{-6.55}$ & $\cdots$ \\[.15cm] UVW2 & 1928 & 1695 &1.00$\pm0.03$ &$0.00^{+0.80}_{-0.79}$ & $0.00^{+0.80}_{-0.79}$ & $0.00^{+0.11}_{-0.12}$ & $0.00^{+0.06}_{-0.05}$ & $0.00\pm0.49$ \\[.15cm] UVM2 & 2246 & 1975 & 0.93$\pm0.04$ & $0.80^{+4.23}_{-4.43}$ & $-0.01^{+4.23}_{-4.43}$ & $0.05^{+0.53}_{-0.53}$ & $0.05^{+0.49}_{-0.49}$ & $0.31\pm 0.61$\\[.15cm] UVW1 & 2600 & 2286 & 0.95$\pm0.03$ & $1.25^{+3.50}_{-3.56}$ & $2.40^{+3.50}_{-3.56}$ & $0.91^{+0.42}_{-0.43}$ & $0.82^{+0.47}_{-0.51}$ & $0.66\pm 0.74$\\[.15cm] U & 3467 & 3047 & 0.86$\pm0.04$ & $2.60^{+3.19}_{-3.61}$ & $4.34^{+3.19}_{-3.61}$ & $2.82^{+0.47}_{-0.46}$ & $2.21^{+0.94}_{-0.99}$ & $1.64\pm 1.10$ \\[.15cm] B & 4392 & 3862 & 0.80$\pm0.05$ & $2.60^{+3.90}_{-3.65}$ & $3.76^{+3.90}_{-3.65}$ & $2.86^{+0.55}_{-0.54}$ & $2.18^{+1.14}_{-1.34}$ & $2.75\pm 1.48$ \\[.15cm] V & 5468 & 4808 & 0.68$\pm0.07$ & $5.98^{+5.48}_{-6.51}$ & $11.21^{+5.48}_{-6.51}$ & $10.38^{+0.82}_{-0.74}$ & $7.73^{+3.32}_{-3.28}$ & $4.06\pm1.81$ \\[.15cm] \hline \end{tabular} \end{table*}

\subsection{MICA2} \label{subsec:mica}

Finally, we use {\sc mica2} \citep{Li2016}\footnote{{\sc mica2} is available at \url{https://github.com/LiyrAstroph/MICA2}}, which is similar to {\sc javelin} in that the variability of the driving light curve, used for interpolation, is modelled as a DRW process. A main difference is that {\sc mica2} employs a family of relatively displaced Gaussian functions, rather than a top-hat, to model the response function.
Thus, the time lag ($\tau_{\textrm{MICA2}}$) is set to be the variance-weighted centre of the Gaussian. For the sake of simplicity, we use only one Gaussian; we also tested multiple Gaussians and found that a single Gaussian is preferred in terms of Bayes factors. An MCMC procedure with the diffusive nested sampling algorithm (\citealt{Brewer11}) is adopted to optimise the posterior probability and determine the best estimates and uncertainties for the model parameters. In Fig.~\ref{fig:Lagmica2}, we show the best fit to the light curves (right panel) and its associated response functions (left panel). The obtained time lags are presented in Table~\ref{tab:lagTime}.

\subsection{Method comparison}

Overall, the lags derived from the four methods display an increasing trend with wavelength, with values consistent across methods for the UV and optical bands, albeit with differing uncertainties. These values show that the variations of {PKS 0558$-$504}\ follow the expected trend of larger lags at longer wavelengths, in stark contrast to the results reported by \citet[][see Section~\ref{sec:discussion} for further details]{Gli13}. In Fig.~\ref{fig:comparison}, we show a comparison between the lag measurements: {\sc javelin} against {\sc mica2} (purple) and against {\sc pyccf} (black). These lag measurements are consistent with each other within 1$\sigma$ for all bands (all results are presented in Table~\ref{tab:lagTime}). The median lag values obtained with {\sc pyccf} (using the ICCF method) are similar to those obtained with {\sc javelin} and {\sc mica2}, albeit with larger uncertainties (see Fig.~\ref{fig:comparison}), because the ICCF method is less sensitive when detecting lags in low-cadence, lower-S/N light curves \citep{Li19}. Therefore, we have adopted the results obtained with {\sc javelin} and {\sc mica2} as the values used in the analysis in the following sections.

\begin{figure} \includegraphics[trim=1.cm 1.2cm 1.2cm 2cm,clip, width=\columnwidth,angle=0]{images/comparison.pdf} \caption{Lag measurement comparison between methods: {\sc javelin} vs {\sc pyccf} (black), and {\sc javelin} vs {\sc mica2} (purple). The solid line shows the one-to-one relation for reference. } \label{fig:comparison} \end{figure}

\begin{figure} \includegraphics[trim=0cm 0cm 0cm 0cm,clip, width=\columnwidth,angle=0]{images/mdot_faceon.pdf} \caption{The marginalised posterior distribution for the product $M\dot{M}$ as inferred by PyceCREAM\ (see Sec.~\ref{sec:cream}), with the median and 1$\sigma$ interval marked as solid and dashed lines, respectively. We calculate the Eddington ratio shown on the top axis assuming a BH mass of $2.5\times10^8$ M$_{\odot}$, a fixed inclination angle of $i=0^\circ$, and a temperature profile $T\propto R^{-3/4}$ within the PyceCREAM fit.
} \label{fig:mdotcream} \end{figure}

\section{Accretion disc modelling}\label{sec:cream}

We also analyse the light curves using the Continuum REprocessed AGN Markov Chain Monte Carlo code (PyceCREAM) described in \citet{Starkey16}\footnote{We used the {\sc python} wrapper PyceCREAM, \url{https://github.com/dstarkey23/pycecream}.} to infer the properties of the accretion flow in {PKS 0558$-$504}. Here, we present a brief description of the model and refer to \citet{Starkey16,Starkey:2017} for further details of the algorithm. PyceCREAM\ assumes that the variability of the continuum light curves is described by the reprocessing model. A variable X-ray source (corona), located at some height above the supermassive black hole and the accretion disc, acts as a ``lamp-post'' that illuminates and heats the disc \citep{Cackett2007}. This additional energy input on the viscously heated disc is in turn thermalised and re-emitted at longer wavelengths. This X-ray reprocessing produces correlated variations in the emission of the disc that propagate radially outwards, producing ``light echoes'' that excite first the variations of the hot inner disc and then those of the cooler outer disc. The wavelength-dependent variations probe different regions of the disc and allow one to measure the temperature profile $T(R)$, where $T(R)\propto R^{-3/4}$ corresponds to a standard accretion disc. Therefore, the expected time delay at a given observed wavelength (which probes a characteristic temperature/radius in the disc) scales as $\tau=R/c\propto(M_{BH}\dot{M})^{1/3}T^{-4/3}\propto(M_{BH}\dot{M})^{1/3}\lambda^{4/3} $, where $M_{BH}$ is the mass of the black hole and $\dot{M}$ is the mass accretion rate. Thus, the delay distribution at different wavelengths, inferred from the light curves, carries information about the product $M_{BH}\dot{M}$. Here, we have assumed the quasar to be face-on, i.e. $i=0$, and set the temperature profile index to $-3/4$. Fixing the inclination has a negligible effect on the measured average delay, as shown by \citet{Starkey16}. As for the driving light curve, PyceCREAM models it as a Fourier time series, using a random-walk prior to constrain the power density spectrum. We used a fixed high-frequency cutoff for the power spectrum of 0.5 cycles d$^{-1}$. Each light curve is then shifted and stretched to match the flux levels at each wavelength (after the convolution with the delay distribution). We used uniform priors for all parameters except for the driving light curve power spectrum. We ran the MCMC sampling for $10^{5}$ iterations, discarding the first third of the chains as the burn-in phase. In Figure~\ref{fig:creamJuan}, we show the best fit to the UV and optical light curves, as well as the delay distribution for each band. The driving light curve, shown in the topmost panel, provides a good description of the variability at all wavelengths. The mean delays for each band are consistent with those obtained from the other methods (see Table~\ref{tab:lagTime}), following, by construction, the $\tau\propto\lambda^{4/3}$ relation. The uncertainties in the mean lags are larger than those obtained by the other methods. This likely arises from the flexibility of {\sc mica2} and {\sc javelin}, which fit every band independently; PyceCREAM, by contrast, fits all bands simultaneously and restricts the lag measurements to follow the $\tau\propto\lambda^{4/3}$ relationship.
As a consequence, any scatter around this relation introduces additional variance in the physical parameters inferred by PyceCREAM and thus in the mean lags. Furthermore, we find a value for the product $\log(M_{BH}\dot{M}/{\rm M}_{\odot}^{2}\,{\rm yr}^{-1}) = 9.4\pm0.4$; the marginalised posterior distribution is shown in Fig.~\ref{fig:mdotcream}. Using a BH mass of $2.5\times10^8$ M$_{\odot}$, we estimate a mass accretion rate, in units of the Eddington rate, of $\dot{m}_{\textrm{Edd}}=1.87^{+2.79}_{-1.15}$. This value is above the Eddington limit, consistent with the previous measurement of $\dot{m}_{\textrm{Edd}}=1.7$ \citep{Gli10}.

\section{Discussion} \label{sec:discussion}

\citet{Gli13} presented the results of the first continuum RM study of {PKS 0558$-$504}\, with {\em Swift}. Their inter-band lag measurements, made using the DCF method, showed a trend of negative lag times towards the UV and optical bands, while positive values were found for the X-ray band. This result was interpreted in terms of accretion-induced fluctuations moving inwards through the disc \citep{Lyubarskii97, Arevalo08}, where the optical bands drive the observed changes in the UV bands and, in turn, those observed in the X-rays. This trend is the opposite of that expected in the disc reprocessing, or ``lamp-post'', model.
This model postulates that the corona and/or the inner part of the disc illuminates and heats the outer regions of the disc, perturbing the local temperature and thus generating wavelength-dependent delays that increase at longer wavelengths \citep{SS73, Frank02, Cackett2007}. In this work, we revisited the lag measurements of the \textit{Swift}/UVOT light curves taken from \citet{Gli13}. We recreated the DCF analysis presented in \citet{Gli13} and found that their measurements must have inadvertently used the reference W2 band as the responding light curve rather than the driving one. This was reflected in a mirror-image cross-correlation function and hence an opposite sign for the inter-band delays (see Fig.~\ref{fig:dcf}). Further analysis with the {\sc javelin}, {\sc mica2} and ICCF/{\sc pyccf} codes (described in Section~\ref{sec:measurements}) confirms the wavelength-dependent trend in which the longer-wavelength light curves lag the shorter-wavelength ones (see Fig.~\ref{fig:LagJavelin} and Table~\ref{tab:lagTime}), in line with the disc reprocessing model and most continuum reverberation studies. We show that the UV and optical light curves are well correlated and that the lag measurements obtained with four different codes/methods are broadly consistent with each other.

\subsection{Lag spectrum analysis} \label{sec:fit}

We can use the measured delay spectrum of {PKS 0558$-$504}\ (see Table~\ref{tab:lagTime}) to test the predictions of accretion disc theory. We transformed the delay measurements and wavelengths to the AGN rest frame. We then proceeded to fit the delay spectrum with the functional form: \begin{equation} \label{eqn:cont_lag} \tau = \tau_{0}\left(\frac{\lambda}{\lambda_{0}}\right)^{\beta} - y_{0}\,, \end{equation} where $\lambda_{0}$ is the reference wavelength corresponding to the rest-frame W2 band ($\lambda 1695.4$ \AA), $\tau_{0}$ is the reference time which measures the radius of the disc emitting at the reference wavelength $\lambda_{0}$, $\beta$ is the power-law index which reflects the disc temperature profile, and $y_0$ allows the model to pass through zero lag at the reference wavelength $\lambda_{0}$. We fitted the lag data obtained with {\sc javelin} and {\sc mica2} (see Fig.~\ref{fig:Lagfit}) using Eq.~\ref{eqn:cont_lag}, with the $\beta$ parameter fixed at 4/3 as predicted by standard thin-disc theory \citep{SS73} and with $\tau_0$ left free. The fit was applied twice: first to the set of all lag measurements from the X-rays to the V band, and then to the UVOT data only. For {\sc javelin}, we find that the two fits are very similar, with $\tau_{0}=2.69\pm0.40$ days and $\chi^2_\nu=4.75$ when considering all points, while excluding the X-ray data gives $\tau_{0}=2.58\pm0.50$ days and $\chi^2_\nu=5.63$. For {\sc mica2}, the values obtained for $\tau_{0}$ are $1.66\pm0.44$ days for the X-ray+UVOT data and $1.61\pm0.29$ days for the UVOT data, with $\chi^2_\nu$ of 0.86 and 0.37, respectively. A summary of the results is listed in Table~\ref{tab:lagfitinfo}, and Figure~\ref{fig:Lagfit} shows the lag times for each band (purple circles) together with the fits to the UVOT data (black line) and to the X-ray+UVOT data (red line). The $\beta=4/3$ model follows the general trend of the data, although the B- and V-band points scatter significantly below and above the best-fit model, respectively.
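Since $\beta$ is held at $4/3$, the fit has only two free parameters and is easy to reproduce. The following {\sc scipy} sketch (our own illustration, using the {\sc javelin} UVOT lags of Table~\ref{tab:lagTime} with symmetrised uncertainties) implements Eq.~\ref{eqn:cont_lag}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

lam0 = 1695.4                         # rest-frame W2, in Angstrom

def lag_model(lam, tau0, y0, beta=4.0 / 3.0):
    # tau = tau0 (lam / lam0)^beta - y0, with beta fixed at 4/3
    return tau0 * (lam / lam0) ** beta - y0

lam = np.array([1695.0, 1975.0, 2286.0, 3047.0, 3862.0, 4808.0])
tau = np.array([0.00, 0.05, 0.91, 2.82, 2.86, 10.38])
err = np.array([0.12, 0.53, 0.43, 0.47, 0.55, 0.82])

p, cov = curve_fit(lag_model, lam, tau, p0=[2.0, 1.0],
                   sigma=err, absolute_sigma=True)
resid = (tau - lag_model(lam, *p)) / err
print(p, np.sum(resid ** 2) / (lam.size - 2))   # tau0, y0, chi^2_nu
\end{verbatim}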
The corresponding high $\chi^2_\nu$ values obtained with {\sc javelin} indicate that the power-law model for the lag spectrum is not formally a good fit for these data. The $\chi^2_\nu$ value obtained with {\sc mica2} is more reasonable, indicating that the reprocessing model is an acceptable match to the data. However, we note that the {\sc javelin} measurements for the B and V bands carry substantially smaller uncertainties than the lags measured using the other methods ({\sc pyccf}, {\sc mica2}, and PyceCREAM), and the interpretation of any model fit is subject to considerable ambiguity given the different results obtained from different lag measurement techniques.

\begin{table*} \centering \caption{Parameters for the lag-wavelength fits. Column 2 lists the data used in the fit: X-ray + UVOT corresponds to the X-ray data up to the V band, and UVOT corresponds to the W2\, to V bands. Columns 3--5 give the parameters obtained from the fit of Eq.~\ref{eqn:cont_lag}. Column 6 gives $\chi^2$ per degree of freedom.} \label{tab:lagfitinfo} \begin{tabular}{llcccc} \hline\hline Code &Data & $\tau_0$ & $\beta$ & $y_{0}$ & $\chi_\nu^2$\\ & & (days) & & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline {\sc JAVELIN}& & & & & \\ &X-ray + UVOT & 2.69$\pm$0.40 & 4/3 & 0.88$\pm$0.08 & 4.75 \\ &UVOT & 2.58$\pm$0.50 & 4/3 & 0.86$\pm$0.09 & 5.63\\ \hline {\sc MICA2}& & & & & \\ & X-ray + UVOT & 1.66$\pm$0.44 & 4/3 & 0.84$\pm$0.03 & 0.86\\ & UVOT & 1.61$\pm$0.29 & 4/3 & 0.84$\pm$0.02 & 0.37 \\ \hline\hline \end{tabular} \end{table*}

\begin{figure*} \includegraphics[trim=0cm 1.6cm 0.5cm 1cm, clip,width=18cm,height=3in]{images/lagtime_B.pdf} \caption{Delay spectrum of {PKS 0558$-$504}\ using the W2\ band as reference. The {\sc javelin} (left panel) and {\sc mica2} (right panel) measurements (purple circles, transformed to the AGN rest frame) show an increasing trend with wavelength. The best fits (black and red lines, with shaded envelopes illustrating the uncertainty range) follow the reprocessing-model relation $\tau\propto\lambda^{\beta}$ with a fixed power-law index $\beta=4/3$. For the {\sc javelin} data, the X-ray point was not included in the fit (black line); for the fit to the {\sc mica2} data (red line), all points were included. The blue and red dotted lines correspond to the predictions of a thin-disc model using Eq.~\ref{eq:3} with $X=2.49$ (blue) and $X=4.97$ (red). } \label{fig:Lagfit} \end{figure*}

We also compare the observed lag-wavelength behaviour against theoretical expectations to probe the size of the accretion disc. We followed the methodology proposed by \citet{Fausnaugh16} and \citet{Edelson17} to estimate the expected photon travel time $r(\lambda)/c$ (where $r$ is the disc radius emitting at wavelength $\lambda$) from the inner part of the accretion disc to the outer region, using the following equation: \begin{equation} r(\lambda) = 0.09\left(X\frac{\lambda}{\lambda_{0}} \right)^{4/3} M_{8}^{2/3} \left( \frac{\dot{m}_\mathrm{Edd}}{0.10}\right)^{1/3} \mathrm{lt-days}\,, \label{eq:3} \end{equation} where $\lambda_0$ corresponds to the rest-frame wavelength of the driving light curve (W2), $M_{8}$ is the black hole mass in units of $10^8\,M_{\sun}$, and $\dot{m}_{\rm Edd}$ is the Eddington ratio $L_{\rm bol}/L_{\rm Edd}$. The multiplicative factor $X$ incorporates the geometry of the delay distribution. Following \citet{Edelson19}, we consider two cases.
If we simply assume that at radius $r$ the disc emits at a wavelength corresponding to the local temperature via Wien's law, then the scaling factor is $X=4.97$. For a model in which $r$ corresponds to the flux-weighted radius of a disc emitting locally as a blackbody, $X=2.49$. To obtain estimates of $r(\lambda)$ across the wavelength range of the data, we have considered both $X$ values, a black hole mass of $2.5\times10^{8}~M_{\sun}$, and an accretion rate of $\dot{m}_{\rm Edd}=1.7$ \citep{Gli10}. In Figure \ref{fig:Lagfit} we plot curves of $[r(\lambda) - r(\mathrm{W2})]/c$ for these two values of $X$, to compare with the observed lag-wavelength curve. We find that for the {\sc javelin} data the best-fitting 4/3 model lies between the two model curves (black line, left panel), while the best fit for the {\sc mica2} data (red line, right panel) falls on the flux-weighted model ($X=2.49$). Taking the adopted black hole mass and Eddington ratio, this might naively be interpreted as an indication that the disc in {PKS 0558$-$504}\ is $\sim50\%$ larger than expected according to the {\sc javelin} results, or that it has a size of $1.66\pm0.44$ lt-days according to the {\sc mica2} results. However, the data points from the UV through B bands are largely compatible with the ($X=2.49$) model within their uncertainties, and the discrepancy between this model and the data is almost entirely the result of the long lag measured for the V band. In some other objects having high-quality UV-optical monitoring data, the disc radius exceeds model predictions by a factor of $\sim3$ \citep[e.g.,][]{Fausnaugh16}. While it is possible that the disc radius of {PKS 0558$-$504}\ is also in excess of model predictions, albeit by a smaller factor, the data quality in this case is not sufficient to draw unambiguous conclusions. Furthermore, any comparison between the disc model predictions and data will be subject to the usual uncertainties in the estimated black hole mass and Eddington ratio, which may be of order $\sim0.5$ dex and certainly exceed the observed discrepancies between the model predictions and the data. Another question of interest is whether the data show an excess lag in the U band that would indicate a substantial contribution of Balmer continuum emission from the broad-line region \citep{Korista01,Chelouche:2019,Netzer:2020}. While definite excess U-band lags have been observed in \emph{Swift} data of some other sources \citep[e.g.,][]{Fausnaugh16, Edelson19}, the data in this case do not give a clear result. The U-band lag as measured by the ICCF method exceeds the B-band lag by $\sim0.6$ days, but the difference is much smaller than the measurement uncertainties, and {\sc javelin} and {\sc mica2} each obtain nearly equal lags in the U and B bands with substantial uncertainties. In this case, the weekly observing cadence is not optimal for discerning the details of wavelength-dependent lag behaviour on timescales less than several days. Despite these limitations, the reverberation measurements nevertheless lead to the clear conclusion that {PKS 0558$-$504}\ does follow an increasing trend of lag against wavelength, similar to that observed in other intensively monitored AGN. Further monitoring of this object, at a more rapid cadence and ideally extending to wavelengths longer than the V band, would provide a basis for more detailed and rigorous comparison with model predictions.
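For reference, the model curves in Fig.~\ref{fig:Lagfit} can be generated directly from Eq.~\ref{eq:3}. A short sketch (our own, adopting the mass and Eddington ratio quoted above) evaluates the predicted inter-band lags relative to W2:
\begin{verbatim}
import numpy as np

def r_lt_days(lam, lam0, X, M8=2.5, mdot_edd=1.7):
    # Eq. (3): disc radius, in light-days, emitting at wavelength lam
    return (0.09 * (X * lam / lam0) ** (4.0 / 3.0)
            * M8 ** (2.0 / 3.0) * (mdot_edd / 0.10) ** (1.0 / 3.0))

lam0 = 1695.0                                  # rest-frame W2
lam = np.array([1695.0, 1975.0, 2286.0, 3047.0, 3862.0, 4808.0])
for X in (2.49, 4.97):                         # flux-weighted / Wien
    pred = r_lt_days(lam, lam0, X) - r_lt_days(lam0, lam0, X)
    print(X, np.round(pred, 2))                # lags (days) w.r.t. W2
\end{verbatim}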
\section{Conclusions} \label{sec:conclusion}

In this paper, we present the results of a revision to measurements of continuum reverberation lags for the AGN {PKS 0558$-$504}\, based on \emph{Swift} observations that were carried out during 2008--2010. This object had previously been an outlier among AGN having \emph{Swift} UV-optical continuum reverberation data, in that \citet{Gli13} found that the UV variations lagged behind the optical variations, and the X-rays lagged behind the UV. This trend is opposite to the behaviour found in other AGN with intensive monitoring data from \emph{Swift}: in other Seyferts, the lags are observed to increase as a function of wavelength. Based on their result, \citet{Gli13} argued against the disc reprocessing model for the variability. The puzzling behaviour found for {PKS 0558$-$504}\ prompted us to review the lag measurements for this object. Applying the same DCF method to reproduce the \citet{Gli13} measurement, we found that the data actually follow the more canonical trend of optical wavelengths lagging behind the UV, and the UV lagging behind the X-rays. To examine the reverberation lags more closely, we carried out new measurements with four codes: {\sc pyccf}, {\sc javelin}, {\sc mica2}, and PyceCREAM, taking the W2\ band ($\lambda_\mathrm{obs} = 1928$~\AA) light curve as the driving band. We find that the variations of the UV/optical light curves are strongly correlated, but that they correlate poorly with the X-ray band (see Fig.~\ref{fig:LagJavelin}). The results obtained with all four codes show similar delay spectra, with a clear trend of increasing lags as a function of increasing wavelength. All methods recover the opposite lag-wavelength trend from that reported by \citet{Gli13}. Our results demonstrate that \citet{Gli13} must have inadvertently swapped the ordering of the driving and responding light curves when calculating the DCF. We present fits to the delay spectrum (Fig.~\ref{fig:Lagfit}) using the relation $\tau\propto\lambda^{\beta}$ for $\beta$ fixed at 4/3 (Eq.~\ref{eqn:cont_lag}), as expected for an optically-thick and geometrically-thin accretion disc \citep{Cackett2007}. While the $\beta=4/3$ model appears to be compatible with the lag-wavelength trend in the data, the large uncertainties in the data preclude us from exploring deviations from the canonical temperature--radius relation (which sets $\beta$). We also compared the data with the standard model prediction for the disc radius as a function of emitting wavelength, to test whether the disc size is too large, as seen in other continuum reverberation mapping experiments \citep[e.g.,][]{Edelson19}. Using a model for the flux-weighted radius as a function of wavelength, we find that the observed lags indicate a continuum emission region $\sim50\%$ larger than predicted by the disc model if we consider the {\sc javelin} results, whereas the {PKS 0558$-$504}\ disc has a size similar to the model prediction if we consider the {\sc mica2} results; in both cases we adopted the published values of the black hole mass and Eddington ratio. However, given the anticipated uncertainties in these parameters as well as the scatter in the measured lags, the apparent discrepancy between the disc model and the data is not significant.
Nevertheless, the data indicate that {PKS 0558$-$504}\ is a promising subject for additional monitoring observations, and further UV through optical monitoring at a daily cadence would be able to resolve the optical lags well and provide a definitive measurement of the wavelength-dependent lag behaviour in this object.

\section*{Acknowledgements}

DHGB acknowledges CONACYT support \#319800 and the researchers program for Mexico. Research by AJB is supported by National Science Foundation grant AST-1907290. JVHS acknowledges funds from a Science and Technology Facilities Council research fellowship, grant ST/R000824/1. YRL acknowledges financial support from the National Natural Science Foundation of China through grant No. 11922304 and from the Youth Innovation Promotion Association CAS.

\section*{Data availability}

All of the data used in this work are available in the Supplementary Data section of \citet{Gli13} at \url{https://academic.oup.com/mnras/article/433/2/1709/1751179}.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} \label{sec:intro} Convolutional neural networks (CNNs) and their variants are widely used with state-of-the-art results in many Computer Vision tasks. However, it is notably hard to ascribe reasoning properties to a CNN based solely on its performance, such as the capability for spatial reasoning in a structured scene. Despite the development of explainable artificial intelligence (XAI), most approaches trying to explain the predictions of CNNs focus on local information only (regions or features involved in a decision)~\cite{Zhang2018-FITEE-visualinterpretabilitycnn} and not on the structure of the scene. However, reasoning capabilities would intuitively help CNNs avoid common pitfalls that hurt their generalization, such as some forms of dataset bias~\cite{Tommasi2017-BOOK-datasetbias} or their propensity to learn spurious correlations in the dataset while ignoring cues that are obvious to humans~\cite{Das2017-CVIU-networkattention}, such as structure in a scene. Spatial relations have proved useful to assess the structure of a scene and to recognize the objects it contains~(see e.g.~\cite{Santoro2017-NIPS-relationalreasoning, Shaban2020-TMI-contextcnn},~\cite{IB:FSS-15} and the references therein). In this work, we focus on \textit{directional relationships}, where objects in a scene are distributed in specific directions and/or distances from others (e.g., ``the circle is 20 pixels to the left of the square, at the same height''). It is often assumed that CNNs have the inherent capacity for learning relevant relationships as long as they fit inside the receptive field~\cite{Shaban2020-TMI-contextcnn,Redmon2016-CVPR-YOLO, Yang2019-TGRS-roadcontextcnn,Mohseni2017-TMI-contextcnnbrain}. Other works assume that this capacity is not always guaranteed, and force or emphasise relationships using techniques external to the CNN~\cite{Santoro2017-NIPS-relationalreasoning, Zhou2018-ECCV-relationalreasoning}. Additionally, common performance measures do not put into evidence the reasoning process behind a decision. For all these reasons, it is hard to say if, when, or how a given CNN learns a particular object relationship. Differently from the aforementioned techniques, our work aims to explore, in a controlled manner, the implicit assumption that a CNN can reason on relationships between objects in its receptive field. The objective of this paper is to determine whether a basic U-Net, trained for a multi-object segmentation task with common loss functions, is capable of learning and using directional relationships between distinct objects to aid in their segmentation. To the best of the authors' knowledge, this scientific question has never been explored in depth. We train the popular U-Net~\cite{ronneberger_u-net:_2015}, using commonly used hyperparameters, in a context where information on directional relations is key for a perfect segmentation of the objects of interest\Riva{; this experimental protocol, as well as the synthetic dataset used in its elaboration, are also both novel contributions}. Finally, we contribute to the growing field of neural network explainability by showcasing the performance of this network in such a context. Our code is publicly available at \url{https://github.com/mateusriva/satann_synth}, \Riva{and supplementary experiments are available at \url{https://mateusriva.github.io}.} \section{Related Work} \label{sec:related_work} Some recent works implicitly assume that CNNs inherently have relational reasoning capabilities.
For instance, in their seminal paper YOLO, Redmon et al.~\cite{Redmon2016-CVPR-YOLO} mention that ``YOLO sees the entire image during training and test time so it implicitly encodes contextual information about classes''. Similar assertions are implicit in papers that link CNNs with larger receptive fields to usage of contextual information~\cite{Shaban2020-TMI-contextcnn, Yang2019-TGRS-roadcontextcnn, Mohseni2017-TMI-contextcnnbrain}. However, to the authors' knowledge, the extent of this implicit encoding has never been explored in full. We are particularly interested in directional relationships, which attach semantics to the context involved (i.e., named relationships). Recent relational reasoning works focus on explicit modeling. Some examples follow: Kamnitsas et al.~\cite{Kamnitsas2017-MIA-CRF} augment a 3D CNN with a Conditional Random Field to integrate local context during post-processing. Santoro et al.~\cite{Santoro2017-NIPS-relationalreasoning} and follow-up work by Zhou et al.~\cite{Zhou2018-ECCV-relationalreasoning} propose an extra MLP-based network module to improve CNN relational reasoning capabilities via self-attention. Similarly, LSTMs are widely used as additional networks in many works on image captioning and visual question answering. Janner et al.~\cite{Janner2018-TACL-spatialreasoning} mix text and visual information for solving relational reasoning based tasks, with the visual encoding being CNN-based, in a reinforcement learning scenario. Si et al.~\cite{Si2018-ECCV-skeletonactionspatial} perform skeleton-based action recognition with relational reasoning based on a graph neural network. Krishnaswamy et al.~\cite{Krishnaswamy2019-complexstructures} operate on the creation of a sequence of relational operations based on out-of-network search heuristics. However, by augmenting CNNs with extra modules or replacing them entirely, these works do not analyse the inherent relational reasoning capacity of CNNs themselves. \section{Methods} \label{sec:materials} In this section, we present experimental methods for assessing the directional reasoning capabilities of the U-Net, by training on a pretext segmentation task that requires directional spatial reasoning for a correct answer. To this end, we present the synthetic Cloud of Structured Objects (CSO) dataset. \subsection{The Cloud of Structured Objects Dataset} \label{ssec:cso} The proposed Cloud of Structured Objects (CSO) dataset uses simple image datasets (such as Fashion-MNIST~\cite{xiao_fashion_2017}) to generate a structured scene. A CSO data item is an image containing objects of interest (OIs) of specific classes distributed in a structured way, along with several randomly distributed instances of a specified set of classes, called noise objects. The OIs (and only the OIs) are the segmentation targets, and are always at the foreground (i.e. they are never occluded by noise objects). OIs have a bounding box of size $28\times28$ pixels. We use a configuration (named ``T'') composed of three objects of interest, each belonging to a different class (specifically, ``shirts'', ``pants'', and ``bags'' from Fashion-MNIST). These objects form the vertices of a $48\times64\times80$ right-angled triangle, with its long leg lying horizontally, included in $160\times160$ 2D images (see Figure~\ref{fig:cso_examples}), thus determining the directional relationships between the objects.
The entire OIs structure is translated by a random amount of pixels, drawn independently from a uniform distribution for each axis in the range of $[-32,32]$ pixels. We use the following noise distribution configurations: \textbf{Easy}: three noise elements are added to the image, belonging to a different class from those of the objects of interest (``shoes'' in our experiments). Each individual OI is independently translated by a uniform random draw in the range of $[-16,16]$ pixels, resulting in a slightly imperfect triangle and adding noise to the directional relations. \textbf{Hard}: similar to Easy, but the noise elements belong to the same class as one of the objects of interest (specifically, ``shirts''). Intuitively, the recognition and segmentation of the ``shirt'' OI must rely on its (imperfect) relationship with the other objects. \textbf{Strict}: similar to Hard, but with no individual element positional noise (that is, the triangle is always perfect). Additionally, the noise elements are distributed only in the bottom-left region of the figure (inside an $80\times80$ square), and the triangle can be translated in the range of $[-40,40]$ pixels, so absolute position information is useless in segmenting the OIs. The correct segmentation is only possible if the directional relationships between the objects are learned. Finally, only the class with noise (``shirts'') is considered a segmentation target. Examples of Fashion-MNIST-based CSO images of different configurations are displayed in Figure~\ref{fig:cso_examples}. \begin{figure}[tb] \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=3.0cm]{figures/cso/cso_easy_detailed.png}} \centerline{(a) T-Easy}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=3.0cm]{figures/cso/cso_hard_detailed.png}} \centerline{(b) T-Hard}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=3.0cm]{figures/cso/cso_strict_detailed.png}} \centerline{(c) T-Strict}\medskip \end{minipage} \caption{Examples of Cloud of Structured Objects images, for different configurations. The segmentation targets are highlighted. The directional relationships are represented by the dotted triangle.} \label{fig:cso_examples} \end{figure} The harder CSO configurations present a joint segmentation and detection problem. Networks must learn to correctly detect and segment objects (a simple task), but must also learn to reason on which object is the correct one. A good segmentation of the correct object implies a high ratio of true positives (TP) to false negatives (FN), i.e.~high recall. However, segmentation results that point to incorrect objects will result in a low ratio of true positives (TP) to false positives (FP), i.e.~low precision. \subsection{U-Net Training} \label{ssec:u-net_training} The model training begins by choosing a CSO configuration and setting the size $D$ of the training and validation dataset, from which $70\%$ is used for training, and the remaining $30\%$ for validation. We utilise a standard U-Net~\cite{ronneberger_u-net:_2015} with 4 levels. The receptive field at the bottleneck (respectively at the output) is $61\times61$ pixels (respectively $101 \times 101$ pixels)\footnote{Calculated using the \textit{receptivefield} library, available at \url{https://github.com/shelfwise/receptivefield}}, and thus can fit all OIs. We randomly initialise the models following He's initialisation~\cite{He2015-ICCV-init} with $5$ distinct seeds.
The training/validation split is repeated randomly $5$ times. For each CSO configuration, we train a network for $100$ epochs using an ADAM optimiser and a cross-entropy loss function. To evaluate the models, we generate a test set containing $100$ new images of the same CSO configuration as the model, and use two measures: precision, defined as the per-pixel positive predictive value $\frac{TP}{TP+FP}$, and recall, defined as the per-pixel true positive rate $\frac{TP}{TP+FN}$. We compute the average test precision and recall for class ``shirt'', over all initialisations where the model converged (defined as both precision and recall being above $0.5$). We also report how many of the trained models converged. Results are available in Table~\ref{tab:proof_of_learning}. Sample outputs are shown in Figures~\ref{fig:test_images_easy_hard} and~\ref{fig:test_images_strict}. \begin{table}[tb] \centering \begin{tabular}{c|c|cc|c} \multirow{2}{*}{Config.} & \multirow{2}{*}{$D$} & \multicolumn{2}{c|}{Class ``shirt''} & Conver-\\ & & Precision & Recall & gences \\ \hline \multirow{3}{*}{T-Easy} & 100 & $0.97 \pm 0.09$ & $0.95 \pm 0.11$ & 25/25\\ & 1000 & $1.00 \pm 0.01$ & $1.00 \pm 0.03$ & 25/25\\ & 10000 & $0.99 \pm 0.03$ & $0.99 \pm 0.04$ & 25/25\\ \hline \multirow{3}{*}{T-Hard} & 100 & $0.83 \pm 0.23$ & $0.82 \pm 0.23$ & 24/25\\ & 1000 & $0.95 \pm 0.14$ & $0.94 \pm 0.16$ & 24/25\\ & 10000 & $0.98 \pm 0.11$ & $0.98 \pm 0.10$ & 25/25\\ \hline \multirow{4}{*}{T-Strict} & 1000 & $0.65 \pm 0.32$ & $0.71 \pm 0.33$ & 6/25\\ & 5000 & $0.79 \pm 0.29$ & $0.79 \pm 0.30$ & 14/25\\ & 10000 & $0.87 \pm 0.19$ & $0.86 \pm 0.21$ & 21/25\\ & 50000 & $0.91 \pm 0.14$ & $0.90 \pm 0.15$ & 22/25\\ \end{tabular} \caption{Average precision and recall for the class ``shirt'', and standard deviation, for different dataset sizes and configurations, when the models converge.} \label{tab:proof_of_learning} \end{table} \begin{figure*}[htbp] \centering \begin{tabular}{c|ccc|c} Easy & \multicolumn{3}{c|}{Hard, Converging} & Hard, Non-Converging \\ \includegraphics[width=0.17\linewidth]{figures/tests/test_easy_100.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_100_1.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_1000_1.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_10000_1.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardNC_100.pdf} \\ $D=100$ & $D=100$ & $D=1000$ & $D=1000$ & $D=100$ \\ \includegraphics[width=0.17\linewidth]{figures/tests/test_easy_1000.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_100_2.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_1000_2.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardC_10000_2.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_hardNC_1000.pdf} \\ $D=1000$ & $D=100$ & $D=1000$ & $D=1000$ & $D=1000$ \\ \end{tabular} \caption{Sample results of some of the trained models for the easier tasks. Green regions indicate true positives; blue regions indicate false negatives; yellow regions indicate false positives of the ``shirt'' class; magenta regions indicate false positives of the other classes.
Noise around the OIs is inherited from the Fashion-MNIST dataset.} \label{fig:test_images_easy_hard} \end{figure*} \begin{figure*}[htbp] \centering \begin{tabular}{ccc|cc} \multicolumn{3}{c|}{Strict, Converging} & \multicolumn{2}{c}{Strict, Non-Converging} \\ \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_1000_1.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_5000_1.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_10000.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictNC_1000.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictNC_5000.pdf} \\ $D=1000$ & $D=5000$ & $D=10000$ & $D=1000$ & $D=5000$ \\ \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_1000_2.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_5000_2.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictC_50000.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictNC_10000.pdf} & \includegraphics[width=0.17\linewidth]{figures/tests/test_strictNC_50000.pdf} \\ $D=1000$ & $D=5000$ & $D=50000$ & $D=10000$ & $D=50000$ \\ \end{tabular} \caption{Sample results of some of the trained models for the strict task. The color code is the same as in Figure~\ref{fig:test_images_easy_hard}. } \label{fig:test_images_strict} \end{figure*} \subsection{Discussion} We can see that properly segmenting and recognizing objects in the ``hard'' and ``strict'' cases is difficult with a small $D$. However, with enough data, the model learns to recognize the OIs. Lower precision scores in harder and/or small-dataset configurations point to the network being unable to completely avoid noise elements. The number of converging models shows that there is little guarantee of succeeding in the ``strict'' task without much more data than for the ``easy'' and ``hard'' tasks. Analysing the example outputs of the networks, in Figures~\ref{fig:test_images_easy_hard} and~\ref{fig:test_images_strict}, sheds more light on the measures in Table~\ref{tab:proof_of_learning}. In Figure~\ref{fig:test_images_easy_hard}, for the ``Easy'' configuration, the network performs perfectly, which indicates that in a scenario without confusing noise objects (unlike the ``Hard'' and ``Strict'' configurations), the segmentation of the OIs is a simple task. For the converging models in the ``Hard'' configuration, most of the objects are correctly segmented, as can be seen by the high recall (and, correspondingly, true positives). However, scenarios where $D$ is smaller also run the risk of predicting noise objects. Finally, when the model fails to converge (rightmost column of Figure~\ref{fig:test_images_easy_hard}), we can see that it is still capable of predicting the ``bag'' and ``pants'' OIs, and simply omits all predictions of the class ``shirt''. In Figure~\ref{fig:test_images_strict}, in the ``Strict'' scenario, non-converging models (on the two rightmost columns) still output some predictions, as the network had only a single segmentation target. However, they fail to properly detect and fully segment the correct shirt. In the converging cases (three leftmost columns), we can see the same expected tendency towards better segmentations when increasing data; it is clear that with enough data, the network can satisfy this task -- and thus, it must be capable of reasoning on directional relations.
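To make the data-generation protocol of Section~\ref{ssec:cso} concrete, the following minimal sketch reproduces the ``T'' configuration with the translation ranges described above. This is a hypothetical reimplementation in Python with function names of our choosing; the authors' actual code is available in the repository linked in Section~\ref{sec:intro}.
\begin{verbatim}
import numpy as np

TRIANGLE = np.array([(0, 0), (0, 64), (48, 0)])  # legs 48 & 64, hypotenuse 80

def paste(img, mask, crop, y, x, label=0):
    h, w = crop.shape
    img[y:y+h, x:x+w] = np.maximum(img[y:y+h, x:x+w], crop)
    if label:                          # only OIs are segmentation targets
        mask[y:y+h, x:x+w][crop > 0] = label

def make_cso_scene(ois, noise_crops, rng, size=160,
                   global_shift=32, local_shift=16):
    """ois: three 28x28 crops (shirt, pants, bag); noise_crops: noise objects."""
    img = np.zeros((size, size), dtype=np.float32)
    mask = np.zeros((size, size), dtype=np.int64)
    for crop in noise_crops:           # noise objects pasted first (background)
        y, x = rng.integers(0, size - 28, size=2)
        paste(img, mask, crop, y, x)
    # global translation of the whole triangle
    anchor = np.array([42, 34]) + rng.integers(-global_shift, global_shift + 1, 2)
    for label, (crop, off) in enumerate(zip(ois, TRIANGLE), start=1):
        jitter = rng.integers(-local_shift, local_shift + 1, 2)  # Easy/Hard only
        y, x = np.clip(anchor + off + jitter, 0, size - 28)
        paste(img, mask, crop, y, x, label)   # OIs pasted last (foreground)
    return img, mask
\end{verbatim}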
\begin{figure*}[htpb] \centering \begin{tabular}{cccc|cccc} & \multicolumn{3}{c|}{``Hard'', $D=10000$} & \multicolumn{3}{c}{``Strict'', $D=50000$} & \\ \rotatebox[origin=l]{90}{\textbf{Recall}} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_1on1_recall_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_2on1_recall_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_3on1_recall_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_1on1_recall_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_2on1_recall_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_3on1_recall_overlay.png} & \multirow{2}{*}[1.4cm]{\includegraphics[scale=0.12]{figures/heatmaps/colorbar.png}} \\ \rotatebox[origin=l]{90}{\textbf{Precision}} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_1on1_precision_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_2on1_precision_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/seg_10000_T_hard_noise_3on1_precision_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_1on1_precision_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_2on1_precision_overlay.png} & \includegraphics[width=0.125\linewidth]{figures/heatmaps/strict_50000_T_strict_noise_3on1_precision_overlay.png} & \\ & Ref.: ``shirt'' & Ref.: ``pants'' & Ref.: ``bag'' & Ref.: ``shirt'' & Ref.: ``pants'' & Ref.: ``bag'' & \end{tabular} \caption{Precision and recall heatmaps of the class ``shirt'' when reference objects are slid across the image, for different configurations. Border effects are due to the sliding window limits.} \label{fig:heatmaps} \end{figure*} \section{Measuring Directional Relationship Awareness} \label{sec:tests} If the model learns to segment one OI by using another as a reference, we can expect that moving the reference around will affect the segmentation. To demonstrate this, we generate test images where each of the OIs, one at a time, is slid across the image using a stride of 20 pixels, while the other OIs remain fixed. The sliding OI is called the ``reference''. The ``reference'' is always at the foreground of the image. We compute the recall and precision of the segmentation of the ``shirt'' OI (even when it is used as ``reference''). For each position of the reference, $20$ images are generated with the triangle perfectly centered and noise distributed according to the considered configuration. We then build a heatmap whose value at a location $(x,y)$ is the average evaluation measure (either precision or recall) of the class ``shirt'' when the reference object is at position $(x,y)$. In the ``hard'' and ``strict'' configurations, if the network has learned to use other classes for the segmentation of the OI, we expect to see poor performance when the references are not positioned at their expected places. Figure \ref{fig:heatmaps} shows the resulting heatmaps on the two largest datasets for the ``Hard'' and ``Strict'' configurations. To facilitate interpretation, the heatmaps are overlaid on a dummy image showing the centered OI structure, and the reference is not displayed.
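A sketch of this sliding-reference evaluation is given below (hypothetical code, not the authors' implementation; \texttt{segment} stands for the trained network's prediction function and \texttt{make\_scene} for a generator placing the reference OI at a given position):
\begin{verbatim}
import numpy as np

def precision_recall(pred, target, cls=1):
    tp = np.sum((pred == cls) & (target == cls))
    fp = np.sum((pred == cls) & (target != cls))
    fn = np.sum((pred != cls) & (target == cls))
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)

def reference_heatmaps(segment, make_scene, size=160, stride=20, n_images=20):
    """segment(img) -> predicted mask; make_scene((y, x)) -> (img, target)."""
    ys = xs = range(0, size - 28 + 1, stride)
    prec = np.zeros((len(ys), len(xs)))
    rec = np.zeros_like(prec)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            ps, rs = [], []
            for _ in range(n_images):
                img, target = make_scene((y, x))   # reference OI at (y, x)
                p, r = precision_recall(segment(img), target)
                ps.append(p)
                rs.append(r)
            prec[i, j], rec[i, j] = np.mean(ps), np.mean(rs)
    return prec, rec
\end{verbatim}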
In the first and fourth columns, where the ``shirt'' itself is slid across the image, we can see that its segmentation can only happen in a specific region of the image. This may be due to the network needing the other OIs to segment the ``shirt'', learning the absolute positions where the ``shirt'' can be found, or a combination of both. In the second and fifth columns, we can see that the position of the ``pants'' does not affect the recall of the ``shirt'' (except when the ``pants'' occlude the ``shirt''); the precision of the ``shirt'', however, benefits from the proper positioning of the ``pants'' (highest values of precision in the heatmap), implying that it plays some role in allowing the network to avoid segmenting the wrong ``shirts''. Finally, in the third and sixth columns, we see that the same observations made for the ``pants'' as the reference are true for the ``bag'', with the notable exception of the recall in the ``strict'' case (sixth column, top image). In that case, the recall is remarkably diminished when the bag is not perfectly placed. All of this is further evidence that the U-Net has learned to use other objects when reasoning about the segmentation of the shirt OI. \section{Conclusions} From the experiments shown, it can be reasonably concluded that the U-Net is indeed capable of reasoning about relationships between different objects in its receptive field, and of using directional relationships to ensure proper segmentation. When trained on a task requiring directional relational reasoning, a simple U-Net trained with a cross-entropy loss function was capable of attaining satisfactory results, when enough data were supplied. Our tests also show that disturbing the directional relationships in test data directly results in underperformance, helping to explain the nature of the relationships learned by the network. This work is but a first step towards improving CNN explainability by better understanding how basic CNNs can reason about relationships between objects contained in their receptive fields. We have demonstrated that a CNN \textbf{can} learn to contextualise objects~-- specifically, it can learn directional spatial relationships --~in its receptive field, while also highlighting the data hunger inherent to complicated reasoning tasks. Further works will aim at exploring this question in different directions: (i) what are the details of the relationship learning process? (ii) can relationship learning be accelerated? (iii) will accelerating relationship learning result in better-performing networks or lessen training data hunger? (iv) what are the limits of relational reasoning (such as behavior when facing overly narrow or sparse receptive fields)? \bibliographystyle{IEEEbib}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Motivated by the great success of deep neural networks (DNNs) in several applications such as pattern recognition and natural language processing, there has been increasing interest in revealing why DNNs work well from the statistical point of view. In the past few years, many researchers have contributed to understanding the theoretical advantages of DNN estimates for nonparametric regression models. See, for example, \cite{BaKo19}, \cite{ImFu19}, \cite{Sc19, Sc20}, \cite{Su19}, \cite{HaSu20}, \cite{NaIm20}, \cite{KoLa21}, \cite{SuNi21}, \cite{TsSu21}, and references therein. In contrast to the recent progress of DNNs, theoretical results on statistical properties of DNN methods for stochastic processes are scarce. As exceptional studies, we refer to \cite{KoKr20}, \cite{Og21}, and \cite{OgKo21}. \cite{KoKr20} consider a time series prediction problem and investigate the convergence rate of a deep recurrent neural network estimate. \cite{Og21} considers DNN estimation for the diffusion matrices and studies their estimation errors as misspecified parametric models. \cite{OgKo21} investigate nonparametric drift estimation of a multivariate diffusion process. Notably, there seem to be no theoretical results on the statistical properties of feed-forward DNN estimators for nonparametric estimation of the mean function of a nonlinear and possibly nonstationary time series. The goal of this paper is to develop a general theory for adaptive nonparametric estimation of the mean function of a nonlinear time series using DNNs. The contributions of this paper are as follows. First, we provide bounds on (i) the generalization error (Lemma \ref{lem: error-gap-bound}) and (ii) the expected empirical error (Lemma \ref{lem: emp-error-bound}) of general estimators of the mean function of a nonlinear and possibly nonstationary time series. These results are of independent theoretical interest since they can be useful for investigating asymptotic properties of nonparametric estimators including DNNs. Building upon these results, we establish a generalization error bound for DNN estimators (Theorem \ref{thm: pred-error-bound}). Second, we consider a sparse-penalized DNN estimator, defined as a minimizer of an empirical risk penalized by the clipped $L_1$-norm, and develop its asymptotic properties. In particular, we establish a generalization error bound for the sparse-penalized DNN estimator (Theorem \ref{thm: pred-error-bound2}). This enables us to estimate mean functions of nonlinear time series models adaptively. Our work can be viewed as an extension of the results in \cite{Sc20} and \cite{OhKi22} for independent observations to nonstationary time series. From the technical point of view, our analysis is related to the strategy in \cite{Sc20}. Due to the existence of temporal dependence, the extensions are nontrivial, and we achieve them by developing a new strategy for obtaining generalization error bounds for dependent data, using a blocking technique for $\beta$-mixing processes and exponential inequalities for self-normalized martingale difference sequences. It should be noted that our approach is also quite different from that of \cite{OhKi22}, since their approach strongly depends on the independence of observations; moreover, our generalization error bounds for the sparse-penalized DNN estimator improve the power of the logarithmic factors in their bounds. More detailed differences are discussed in Section \ref{subsec:pdnn}.
Our approach to deriving generalization error bounds paves the way for new techniques for studying statistical properties of machine learning methods for richer classes of models for dependent data, including time series and spatial data. Third, we establish that the sparse-penalized DNN estimators achieve minimax rates of convergence, up to a poly-logarithmic factor, over a wide class of nonlinear AR($d$) processes, including generalized additive AR models and the functional coefficient AR models introduced in \cite{ChTs93}, that allow discontinuous mean functions. When the mean function belongs to a class of suitably smooth functions (e.g., a H\"older space), one can use other nonparametric estimators for adaptively estimating the mean function (see \cite{Ho99} for example). Similar assumptions on the smoothness of the mean functions have been made in most papers that investigate nonparametric estimation of the mean function of a nonlinear time series regression model (\cite{Ro83}, \cite{Tr93}, \cite{Tr94}, \cite{Ma96a, Ma96b}, \cite{Ho99}, \cite{FaYa03}, \cite{ZhWu08}, \cite{Ha08}, and \cite{LiWu10}). However, the methods in those papers cannot be applied to estimate nonlinear time series models with possibly discontinuous mean functions. Our results show that sparse-penalized DNN estimation is a unified method for adaptively estimating both smooth and discontinuous mean functions of time series regression models. Further, we note that the sparse-penalized DNN estimators attain the parametric rate of convergence, up to a logarithmic factor, when the mean functions belong to an $\ell^0$-bounded affine class that includes (multi-regime) threshold AR processes (Theorems \ref{thm:minimax-l0} and \ref{thm:rate-l0}). In addition to the theoretical results, we also conduct simulation studies to investigate the finite sample performance of the DNN estimators. We find that the DNN methods work well for models with (i) intrinsic low-dimensional structures and (ii) discontinuous or rough mean functions. These findings are consistent with our theoretical results. To summarize, this paper contributes to the literature on nonparametric estimation of nonlinear and nonstationary time series by establishing (i) the theoretical validity of non-penalized and sparse-penalized DNN estimators for adaptive nonparametric estimation of the mean function of a nonlinear time series regression and (ii) the optimality of the sparse-penalized DNN estimator for a wide class of nonlinear AR processes. The rest of the paper is organized as follows. In Section \ref{sec:setting}, we introduce the nonparametric regression models considered in this paper. In Section \ref{sec:main}, we provide generalization error bounds for (i) the non-penalized and (ii) the sparse-penalized DNN estimators. In Section \ref{sec:ar-model}, we present the minimax optimality of the sparse-penalized DNN estimators and show that they achieve the minimax optimal convergence rate up to a logarithmic factor over (i) composition structured functions and (ii) $\ell^0$-bounded affine classes. In Section \ref{sec:simulation}, we investigate finite sample properties of the DNN estimators and compare their performance with other estimators (kernel ridge regression, $k$-nearest neighbors, and random forest) via numerical simulations. Section \ref{sec: conclusion} concludes and discusses possible extensions. All the proofs are included in the Appendix.
\subsection{Notations} For any $a,b \in \mathbb{R}$, we write $a \vee b=\max\{a,b\}$ and $a \wedge b=\min\{a,b\}$. For $x \in \mathbb{R}$, $\lfloor x \rfloor$ denotes the largest integer $\leq x$. Given a function $f$ defined on a subset of $\mathbb R^d$ containing $[0,1]^d$, we denote by $f|_{[0,1]^d}$ the restriction of $f$ to $[0,1]^d$. When $f$ is real-valued, we write $\|f\|_{\infty}:=\sup_{x \in [0,1]^{d}}|f(x)|$ for the supremum on the compact set $[0,1]^{d}$. Also, let $\supp(f)$ denote the support of the function $f$. For a vector or matrix $W$, we write $|W|$ for the Frobenius norm (i.e. the Euclidean norm for a vector), $|W|_{\infty}$ for the maximum-entry norm, and $|W|_0$ for the number of non-zero entries. For any positive sequences $a_n, b_n$, we write $a_n \lesssim b_n$ if there is a positive constant $C>0$ independent of $n$ such that $a_n \leq Cb_n$ for all $n$, and $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$. \section{Settings}\label{sec:setting} Let $(\Omega, \mathcal{G}, \{\mathcal{G}_{t}\}_{t\geq 0},\Prob)$ be a filtered probability space. Consider the following nonparametric time series regression model: \begin{align}\label{NLAR-model} Y_t &= m(X_t) + \eta(X_t)v_t,\ t=1,\dots, T, \end{align} where $T \geq 3$, $(Y_t, X_t) \in \mathbb{R} \times \mathbb{R}^{d}$, and $\{X_t, v_t\}_{t=1}^{T}$ is a sequence of random vectors adapted to the filtration $\{\mathcal{G}_t\}_{t=1}^{T}$. We assume $C_\eta := \sup_{x \in [0,1]^{d}}|\eta(x)|<\infty$. In this paper we investigate nonparametric estimation of the mean function $m$ on the compact set $[0,1]^{d}$, that is, we estimate $f_0 := m\mathbf{1}_{[0,1]^{d}}$. The model (\ref{NLAR-model}) covers a range of nonlinear time series models. \begin{example}[Nonlinear AR($p$)-ARCH($q$) model]\label{Ex:ar-arch} Consider a nonlinear AR model \begin{align*} Y_t &= \tilde m(Y_{t-1},\dots, Y_{t-p}) + (\gamma_0 + \gamma_1Y_{t-1}^2+\dots+\gamma_qY_{t-q}^2)^{1/2}v_t, \end{align*} where $\gamma_0>0$, $\gamma_i \geq 0$, $i=1,\dots,q$ with $1 \leq p,q \leq d$. This example corresponds to the model (\ref{NLAR-model}) with $X_t = (Y_{t-1},\dots, Y_{t-d})'$, $m(x_1,\dots,x_d)=\tilde m(x_1,\dots, x_p)$ and $\eta(x_1,\dots,x_d) = (\gamma_0 + \gamma_1 x_1^2 + \dots + \gamma_q x_q^2)^{1/2}$. \end{example} \begin{example}[Multivariate nonlinear time series]\label{Ex:multi-ar} Consider the case that we observe multivariate time series $\{\mathbf Y_t = (Y_{1,t},\dots, Y_{p,t})'\}_{t = 1}^{T}$ and $\{\mathbf X_t = (X_{1,t},\dots, X_{d,t})'\}_{t = 1}^{T}$ such that \begin{align}\label{multi-ar-reg} Y_{j,t} &= m_{j}(\mathbf X_t) + \eta_j(\mathbf X_t)v_{j,t},\ j=1,\dots, p. \end{align} The model (\ref{multi-ar-reg}) corresponds to (i) a multivariate nonlinear AR model when $\mathbf X_t = (\mathbf Y'_{t-1}, \dots, \mathbf Y'_{t-q})'$ for some $q \geq 1$ and (ii) a multivariate nonlinear time series regression with exogenous variables when $\eta_j(\cdot) = 1$ and $\{\mathbf X_t\}_{t=1}^{T}$ is uncorrelated with $\{\mathbf v_t = (v_{1,t},\dots,v_{p,t})'\}_{t=1}^{T}$. If one is interested in estimating the mean function $\mathbf m = (m_1,\dots,m_p)': \mathbb{R}^{d} \to \mathbb{R}^{p}$, then it is enough to estimate each component $m_j$. In this case, the problem of estimating $m_j$ is reduced to that of estimating the mean function $m$ of the model (\ref{NLAR-model}).
\end{example} \begin{example}[Time-varying nonlinear models]\label{Ex:tv-model} Consider a nonlinear time-varying model \begin{align}\label{tv-model} Y_t &= m\left({t \over T}, Y_{t-1},\dots, Y_{t-p}\right) + \eta\left({t \over T}, Y_{t-1},\dots, Y_{t-q}\right)v_t, \end{align} where $1 \leq p,q \leq d-1$. This example corresponds to the model (\ref{NLAR-model}) with $X_t = (t/T,Y_{t-1},\dots, Y_{t-(d-1)})'$ as well as $m$ and $\eta$ regarded as functions on $\mathbb R^d$ in the canonical way. The model (\ref{tv-model}) covers, for instance, time-varying AR($p$)-ARCH($q$) models when $m(u,x_1,\dots,x_{p}) = m_0(u)+\sum_{j=1}^{p}m_j(u)x_j$ and $\eta(u,x_1,\dots,x_{q})=(\eta_0(u)+\sum_{j=1}^{q}\eta_j(u)x_j^2)^{1/2}$ with some functions $m_j:[0,1] \to \mathbb{R}$, $\eta_j:[0,1] \to [0,\infty)$. \end{example} See also Section \ref{sec:ar-model} for other examples of the model (\ref{NLAR-model}). \section{Main results}\label{sec:main} In this section, we provide generalization error bounds of (i) the non-penalized and (ii) the sparse-penalized DNN estimators. We assume the following conditions. \begin{assumption}\label{Ass: model} \begin{itemize} \item[(i)] The random variables $v_t$ are conditionally centered and sub-Gaussian, that is, $\mathrm{E}[v_t\mid\mathcal G_{t-1}]=0$ and $\mathrm{E}[\exp(v_t^2/K_t^2)|\mathcal{G}_{t-1}] \leq 2$ for some constant $K_t>0$. Moreover, $\mathrm{E}[v_t^2|\mathcal{G}_{t-1}]=1$. Define $K = \max_{1 \leq t \leq T}K_t$. \item[(ii)] The process $X=\{X_t\}_{t = 1}^{T}$ is exponentially $\beta$-mixing, i.e. the $\beta$-mixing coefficient $\beta_X(t)$ of $X$ satisfies $\beta_X(t) \leq C_{1,\beta}\exp(-C_{2,\beta}t)$ with some constants $C_{1,\beta}$ and $C_{2,\beta}$ for all $t \geq 1$. \item[(iii)] The process $X$ is predictable, that is, $X_t$ is measurable with respect to $\mathcal{G}_{t-1}$. \end{itemize} \end{assumption} Condition (i) is used to apply exponential inequalities for self-normalized processes presented in \cite{deKlLa04}. Since $\mathrm{E}[\mathrm{E}[\exp(v_t^2/K_t^2)|\mathcal{G}_{t-1}]]=\mathrm{E}[\exp(v_t^2/K_t^2)]$, Condition (i) also implies that each $v_t$ is sub-Gaussian. Condition (ii) is satisfied for a wide class of nonlinear time series. In particular, the process $X = \{X_t\}_{t = 1}^{T}$ can be nonstationary. When $X_t = (Y_{t-1},\dots,Y_{t-d})'$, \cite{ChCh00} provide a set of sufficient conditions for the process $X$ to be strictly stationary and exponentially $\beta$-mixing (Theorem 1 in \cite{ChCh00}): \begin{itemize} \item[(i)] $\{v_t\}$ is a sequence of i.i.d. random variables and has an everywhere positive and continuous density function, $E[v_t]=0$, and $v_t$ is independent of $X_{t-s}$ for all $s \geq 1$. \item[(ii)] The function $m$ is bounded on every bounded set, that is, for every $\Gamma \geq 0$, \[ \sup_{|x| \leq \Gamma}|m(x)| < \infty. \] \item[(iii)] The function $\eta$ satisfies, for every $\Gamma \geq 0$, \[ 0<\eta_1 \leq \inf_{|x|\leq \Gamma}\eta(x)\leq \sup_{|x|\leq \Gamma}\eta(x) <\infty, \] where $\eta_1$ is a constant. \item[(iv)] There exist constants $c_{m,i} \geq 0$, $c_{\eta,i} \geq 0$ ($i=0,\dots,d$) and $M > 0$ such that \begin{align*} |m(x)| &\leq c_{m,0} + \sum_{i=1}^{d}c_{m,i}|x_i|,\ \text{for $|x| \geq M$,}\\ \eta(x) &\leq c_{\eta,0} + \sum_{i=1}^{d}c_{\eta,i}|x_i|,\ \text{for $|x| \geq M$, and}\ \sum_{i=1}^{d}(c_{m,i} + c_{\eta,i}E[|v_1|])<1.
\end{align*} \end{itemize} We also refer to \cite{Tj90}, \cite{BaLe95}, \cite{LuJi01}, \cite{ClPu04} and \cite{Vo12} for other sufficient conditions for the process $X$ to be strictly or locally stationary and exponentially $\beta$-mixing. \subsection{Deep neural networks} To estimate the mean function $m$ of the model (\ref{NLAR-model}), we fit a deep neural network (DNN) with a nonlinear activation function $\sigma : \mathbb{R} \to \mathbb{R}$. The network architecture $(L,\mathbf{p})$ consists of a positive integer $L$ called the \textit{number of hidden layers} or \textit{depth} and a \textit{width vector} $\mathbf{p}=(p_0,\dots,p_{L+1}) \in \mathbb{N}^{L+2}$. A DNN with network architecture $(L, \mathbf{p})$ is then any function of the form \begin{align}\label{DNN-func} f:\mathbb{R}^{p_0} \to \mathbb{R}^{p_{L+1}},\ x \mapsto f(x) = A_{L+1} \circ \sigma_{L} \circ A_{L} \circ \sigma_{L-1} \circ \dots \circ \sigma_{1} \circ A_{1}(x), \end{align} where $A_\ell :\mathbb{R}^{p_{\ell-1}} \to \mathbb{R}^{p_{\ell}}$ is an affine linear map defined by $A_{\ell}(x) := W_{\ell}x + \mathbf{b}_\ell$ for a given $p_{\ell} \times p_{\ell-1}$ weight matrix $W_\ell$ and a shift vector $\mathbf{b}_\ell \in \mathbb{R}^{p_\ell}$, and $\sigma_{\ell}: \mathbb{R}^{p_\ell} \to \mathbb{R}^{p_\ell}$ is an element-wise nonlinear activation map defined as $\sigma_{\ell}(z):=(\sigma(z_1),\dots,\sigma(z_{p_{\ell}}))'$. We assume that the activation function $\sigma$ is $C$-Lipschitz, that is, there exists a constant $C>0$ such that $|\sigma(x_1) - \sigma(x_2)| \leq C|x_1 - x_2|$ for any $x_1,x_2 \in \mathbb{R}$. Examples of $C$-Lipschitz activation functions include the rectified linear unit (ReLU) activation function $x \mapsto \max\{x,0\}$ and the sigmoid activation function $x \mapsto 1/(1+e^{-x})$. For a neural network of the form (\ref{DNN-func}), we define \begin{align*} \theta(f) := (\text{vec}(W_1)',\mathbf{b}'_1,\dots,\text{vec}(W_{L+1})',\mathbf{b}'_{L+1})' \end{align*} where $\text{vec}(W)$ transforms the matrix $W$ into the corresponding vector by concatenating the column vectors. We let $\mathcal{F}_{\sigma,p_0,p_{L+1}}$ be the class of DNNs which take $p_0$-dimensional input to produce $p_{L+1}$-dimensional output and use the activation function $\sigma:\mathbb{R}\to \mathbb{R}$. Since we are interested in real-valued functions on $\mathbb{R}^d$, we always assume that $p_0=d$ and $p_{L+1}=1$ in the following. For a given DNN $f$, we let $\text{depth}(f)$ denote the depth and $\text{width}(f)$ denote the width of $f$ (i.e. $\text{width}(f)=\max_{1 \leq \ell \leq L}p_{\ell}$). For positive constants $L, N, B$, and $F$, we set \[ \mathcal{F}_{\sigma}(L,N,B):=\{f\in \mathcal{F}_{\sigma,d,1}: \text{depth}(f) \leq L, \text{width}(f) \leq N,|\theta(f)|_{\infty}\leq B\} \] and \ben{\label{DNN-func-class} \mathcal{F}_{\sigma}(L,N,B,F) :=\left\{f\mathbf{1}_{[0,1]^{d}}:f\in\mathcal{F}_{\sigma}(L,N,B),\|f\|_{\infty} \leq F\right\}. } Moreover, we define a class of sparsity constrained DNNs with sparsity level $S>0$ by \begin{align}\label{DNN-func-class-sparse} \mathcal{F}_{\sigma}(L,N,B,F,S) &:= \left\{f \in \mathcal{F}_{\sigma}(L,N,B,F): |\theta(f)|_0 \leq S \right\}.
\end{align} \subsection{Non-penalized DNN estimator} Let $\hat{f}_{T}$ be an estimator which is a real-valued random function on $\mathbb{R}^{d}$ such that the map $(\omega, x) \mapsto \hat{f}_{T}(\omega,x)$ is measurable with respect to the product of the $\sigma$-field generated by $\{Y_t,X_t\}_{t=1}^{T}$ and the Borel $\sigma$-field of $\mathbb{R}^{d}$. In this section, we provide finite sample properties of a DNN estimator $\hat{f}_T \in \mathcal{F}_{\sigma}(L,N,B,F,S)$ of $f_0$. In particular, we provide bounds for the generalization error \begin{align*} R(\hat{f}_T,f_0)= \mathrm{E}\left[{1 \over T}\sum_{t=1}^{T}(\hat{f}_{T}(X_t^{\ast}) - f_0(X_t^{\ast}))^2\right], \end{align*} where $\{X_t^{\ast}\}_{t =1}^{T}$ is an independent copy of $X$. Let $\mathcal{F}$ be a pointwise measurable class of real-valued functions on $\mathbb{R}^{d}$ (cf. Example 2.3.4 in \cite{vaWe96}). Define \begin{align*} \Psi_T^{\mathcal{F}}(\hat{f}_T) &:= \mathrm{E}\left[Q_T(\hat{f}_T) - \inf_{\bar{f} \in \mathcal{F}}Q_T(\bar{f})\right], \end{align*} where $Q_T(f)$ is the empirical risk of $f$ defined by $Q_T(f) := {1 \over T}\sum_{t=1}^{T}(Y_t - f(X_t))^2$. The function $\Psi_T^{\mathcal{F}}(\hat{f}_T)$ measures the gap between $\hat{f}_T$ and an exact minimizer of $Q_T(f)$ subject to $f \in \mathcal{F}$. Define \begin{align*} \hat{f}_{T,np} \in \mathop{\rm arg~min}\limits_{f \in \mathcal{F}_{\sigma}(L,N,B,F,S)}Q_T(f) \end{align*} and call $\hat{f}_{T,np}$ the non-penalized DNN estimator. The next result gives a generalization bound for $\hat{f}_{T,np}$. \begin{theorem}\label{thm: pred-error-bound} Suppose that Assumption \ref{Ass: model} is satisfied. Consider the nonparametric time series regression model (\ref{NLAR-model}) with unknown regression function $m$ satisfying $\|f_0\|_{\infty} \leq F$ where $f_0 = m\mathbf{1}_{[0,1]^d}$ for some $F \geq 1$. Let $\hat{f}_T$ be any estimator taking values in the class $\mathcal{F}=\mathcal{F}_{\sigma}(L,N,B,F,S)$ with $B \geq 1$. Then for any $\rho >1$, there exists a constant $C_\rho$, only depending on $(C_\eta, C_{1,\beta}, C_{2,\beta}, K, \rho)$, such that \begin{align*} R(\hat{f}_T,f_0) &\leq \rho\left(\Psi_T^{\mathcal{F}}(\hat{f}_T) + \inf_{f \in \mathcal{F}}R(f, f_0)\right) + C_\rho F^2{S(L+1)\log \left((L+1)(N+1)BT\right)(\log T) \over T}. \end{align*} \end{theorem} Theorem \ref{thm: pred-error-bound} is an extension of Theorem 2 in \cite{Sc20} to possibly nonstationary $\beta$-mixing sequences; moreover, the process $\{v_t\}$ can be non-Gaussian and dependent. The result follows from Lemmas \ref{lem: error-gap-bound} and \ref{lem: emp-error-bound} in Appendix \ref{Appendix: lemmas}. Note that Lemmas \ref{lem: error-gap-bound} and \ref{lem: emp-error-bound} are of independent interest since they are general results in which the estimator $\hat{f}_{T}$ does not need to take values in $\mathcal{F}_{\sigma}(L,N,B,F,S)$. Hence the results would be useful for investigating generalization error bounds of other nonparametric estimators. Let $f_0=m\mathbf{1}_{[0,1]^d}$ belong to the class of composition structured functions $\mcl F_0 = \mcl G\big(q, \mathbf d, \mathbf t, \bol\beta, A\big)$ for example (see Section \ref{subsec:comp} for the definition).
By choosing $\sigma(x)=\max\{x,0\}$ and the parameters of $\mathcal{F}_{\sigma}(L_T,N_T,B_T,F,S_T)$ as $L_T\asymp \log T$, $N_T\asymp T$, $B_T \geq 1$, $F > \|f_0\|_{\infty}$, and $S_T\asymp T^{\kappa/(\kappa+1)}\log T$ with $\kappa = \max_{i=0,\dots,q}t_i/(2\beta_i^*)$, one can show that the non-penalized DNN estimator achieves the minimax convergence rate over $\mathcal{F}_0$ up to a logarithmic factor. However, the sparsity level $S_T$ depends on the characteristics $\mathbf t$ and $\bol\beta$ of $f_0$. Therefore, the non-penalized DNN estimator is not adaptive since these characteristics are unknown in practice. In the next subsection, we provide a generalization error bound for sparse-penalized DNN estimators, which plays an important role in showing that the sparse-penalized DNN estimators can estimate $f_0$ adaptively. \subsection{Sparse-penalized DNN estimator}\label{subsec:pdnn} Define $\bar{Q}_T(f)$ as a penalized version of the empirical risk: \begin{align*} \bar{Q}_T(f) &:= {1 \over T}\sum_{t=1}^{T}(Y_t - f(X_t))^2 + J_T(f), \end{align*} where $J_T(f)$ is the sparse penalty given by \begin{align*} J_T(f) &:= J_{\lambda_T,\tau_T}(f) := \lambda_T\|\theta(f)\|_{\text{clip}, \tau_T} \end{align*} for tuning parameters $\lambda_T>0$ and $\tau_T>0$. Here, $\|\cdot \|_{\text{clip},\tau}$ denotes the clipped $L_1$ norm with a clipping threshold $\tau>0$ (\cite{Zh10}) defined as \begin{align*} \|\theta\|_{\text{clip},\tau} &:= \sum_{j=1}^{p}\left({|\theta_j| \over \tau} \wedge 1\right) \end{align*} for a $p$-dimensional vector $\theta = (\theta_1,\dots, \theta_p)'$. In this section, we provide finite sample properties of the sparse-penalized DNN estimator defined as \begin{align*} \hat{f}_{T, sp} \in \mathop{\rm arg~min}\limits_{f \in \mathcal{F}_{\sigma}(L,N,B,F)}\bar{Q}_T(f). \end{align*} Further, for any estimator $\hat{f}_{T} \in \mathcal{F}=\mathcal{F}_{\sigma}(L,N,B,F)$ of $f_0$, we define \begin{align*} \bar{\Psi}_T^{\mathcal{F}}(\hat{f}_T) &:= \mathrm{E}\left[\bar{Q}_T(\hat{f}_T) - \inf_{\bar{f} \in \mathcal{F}}\bar{Q}_T(\bar{f})\right]. \end{align*} The next result provides a generalization error bound for the sparse-penalized DNN estimator. \begin{theorem}\label{thm: pred-error-bound2} Suppose that Assumption \ref{Ass: model} is satisfied. Consider the nonparametric time series regression model (\ref{NLAR-model}) with unknown regression function $m$ satisfying $\|f_0\|_{\infty} \leq F$ where $f_0 = m\mathbf{1}_{[0,1]^d}$ for some $F \geq 1$. Let $\hat{f}_T$ be any estimator taking values in the class $\mathcal{F}=\mathcal{F}_{\sigma}(L_T,N_T,B_T,F)$ where $L_T, N_T$, and $B_T$ are positive values such that $L_T \leq C_L\log^{\nu_0} T$, $N_T \leq C_N T^{\nu_1}$, $1 \leq B_T \leq C_B T^{\nu_2}$ for some positive constants $C_L, C_N, C_B, \nu_0,\nu_1$, and $\nu_2$. Moreover, we assume that the tuning parameters $\lambda_T$ and $\tau_T$ of the sparse penalty function $J_T(f)$ satisfy $\lambda_T = (F^2\iota_{\lambda}(T)\log^{2+\nu_0} T)/T$ with a strictly increasing function $\iota_{\lambda}(x)$ such that $\iota_{\lambda}(x)/\log x \to \infty$ as $x \to \infty$ and $\tau_T(L_T+1)((N_T+1)B_T)^{L_T+1}\leq C_\tau T^{-1}$ with some positive constant $C_\tau$ for any $T$.
Then, \begin{align*} R(\hat{f}_T,f_0) &\leq 6\left(\bar{\Psi}_T^{\mathcal{F}}(\hat{f}_T) + \inf_{f \in \mathcal{F}}\left(R(f,f_0) + J_T(f)\right)\right) + CF^2 \left({1+ \log T \over T}\right), \end{align*} where $C$ is a positive constant only depending on $(C_\eta, C_{1,\beta}, C_{2,\beta}, C_L, C_N, C_B, C_\tau,\nu_0,\nu_1, \nu_2, K, \iota_\lambda)$. \end{theorem} Theorem \ref{thm: pred-error-bound2} is an extension of Theorem 1 in \cite{OhKi22}, which considers i.i.d.~observations. Here we explain some differences between their result and ours. First, Theorem \ref{thm: pred-error-bound2} can be applied to nonstationary time series since we only assume the process $X$ to be $\beta$-mixing. Second, our approach to proving Theorem \ref{thm: pred-error-bound2} is different from that of \cite{OhKi22}. Their proofs heavily depend on the theory for i.i.d.~data in \cite{GKKW02}, so extending their approach to our framework seems to require substantial work. In contrast, our approach is based on other technical tools such as the blocking technique of $\beta$-mixing processes in \cite{Ri13} and exponential inequalities for self-normalized martingale difference sequences. In particular, considering a continuous time embedding of a martingale difference sequence and applying the results on (super-) martingales in \cite{BJY86}, we can allow the process $\{v_t\}_{t=1}^{T}$ to be conditionally centered and circumvent additional conditions on its distribution such as conditional Gaussianity or symmetry (see also Lemma \ref{lem: sym-bound} and the proof of Lemma \ref{lem: mar-bound} in Appendix). As a result, our result improves the power of the logarithmic factors in their generalization error bound. Third, (a) the upper bound $L_T$ of the depth of the sparse-penalized DNN estimator can grow by a power of $\log T$, and (b) we take the tuning parameter $\lambda_T$ to depend on $F^2$. In particular, (a) enables us to estimate $f_0$ adaptively when $f_0$ belongs to an $\ell^0$-bounded affine class as well as composition structured functions (see Sections \ref{subsec:comp} and \ref{subsec:ell0-bound} for details), and (b) enables $\hat{f}_{T,sp}$ to be adaptive with respect to the constraint $\|f_0\|_{\infty} \leq F$. See also the comments on Proposition \ref{prop:pdnn} on the improvement of the upper bound. \section{Minimax optimality in nonlinear AR models}\label{sec:ar-model} In this section, we show the minimax optimality of the sparse-penalized DNN estimator $\hat{f}_{T,sp}$. In particular, we show that $\hat{f}_{T,sp}$ achieves the minimax convergence rate over (i) composition structured functions and (ii) an $\ell^0$-bounded affine class. We note that these classes of functions include many nonlinear AR models such as (generalized) additive AR models, single-index models, (multi-regime) threshold AR models, and exponential AR models. We consider the observation $\{Y_t\}_{t=1}^T$ generated by the following nonlinear AR($d$) model: \ben{\label{ar-model} \left\{\begin{array}{l} Y_t=m(Y_{t-1},\dots,Y_{t-d})+v_t,\qquad t=1,\dots,T,\\ (Y_0,Y_{-1},\dots,Y_{-d+1})'\sim\nu,\\ v_t\overset{i.i.d.}{\sim}N(0,1). \end{array}\right. } Here, $\nu$ is a (fixed) probability measure on $\mathbb R^d$ such that $\int_{\mathbb R^d}|x|\nu(dx)<\infty$, and $m:\mathbb R^d\to\mathbb R$ is an unknown function to be estimated. Our results in this section can be extended to a more general AR($d$) model that allows a non-Gaussian distribution of $v_t$ and a non-constant volatility function $\eta(\cdot)$. To simplify our argument, we focus on the model (\ref{ar-model}).
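To fix ideas, the following minimal sketch (illustrative only, and not the simulation design of Section \ref{sec:simulation}) generates a trajectory from the model \eqref{ar-model} for a given mean function; the two-regime threshold mean used below anticipates Example \ref{ex:tar} and lies in $\mcl M_0(\mathbf c)$ when $|a_1|,|a_2|<1$.
\begin{verbatim}
# Simulate Y_t = m(Y_{t-1},...,Y_{t-d}) + v_t with v_t ~ N(0,1).
import numpy as np

def simulate_ar(m, d, T, rng, y_init=None):
    y = np.zeros(T + d)
    y[:d] = np.zeros(d) if y_init is None else y_init  # draw from nu in general
    for t in range(d, T + d):
        y[t] = m(y[t - d:t][::-1]) + rng.standard_normal()
    return y[d:]

a1, a2, r = 0.5, -0.4, 0.0
m_tar = lambda x: (a1 if x[0] <= r else a2) * x[0]     # two-regime TAR(1) mean
rng = np.random.default_rng(0)
Y = simulate_ar(m_tar, d=1, T=500, rng=rng)
\end{verbatim}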
Let $\mathbf c=(c_0,c_1,\dots,c_d)\in(0,\infty)^{d+1}$ satisfy $\sum_{i=1}^dc_i<1$. We denote by $\mcl M_0(\mathbf c)$ the set of measurable functions $m:\mathbb R^d\to\mathbb R$ satisfying $|m(x)|\leq c_0+\sum_{i=1}^dc_i|x_i|$ for all $x\in\mathbb R^d$. The following lemma shows that the process $Y=\{Y_t\}_{t=1}^T$ is exponentially $\beta$-mixing ``uniformly'' over $m\in\mcl M_0(\mathbf c)$. \begin{lemma}\label{lemma:beta} Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mathcal M_0(\mathbf c)$. There are positive constants $C_\beta$ and $C_\beta'$ depending only on $\mathbf c, d$ and $\nu$ such that \begin{equation}\label{eq:minimax-beta} \beta_Y(t)\leq C'_\beta e^{-C_\beta t}\qquad\text{for all }t\geq1. \end{equation} \end{lemma} The next result gives a generalization error bound for a family of functions that can be approximated with a certain degree of accuracy by DNNs. \begin{proposition}\label{prop:pdnn} Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mathcal M_0(\mathbf c)$. Let $F$, $\hat f_T$, $\mcl F$, $L_T$, $N_T$, $B_T$, $\lambda_T$ and $\tau_T$ be as in Theorem \ref{thm: pred-error-bound2}. Suppose that there are constants $\kappa,r\geq0$, $C_0>0$ and $C_S>0$ such that \[ \inf_{f\in\mathcal{F}_{\sigma}(L_T,N_T,B_T,F,S_{T})}\|f-f_0\|_{L^2([0,1]^d)}^2\leq\frac{C_0}{T^{1/(\kappa+1)}} \] with $S_{T}:=C_ST^{\kappa/(\kappa+1)}\log^r T$. Then, \[ R(\hat{f}_T,f_0)\leq 6\bar{\Psi}_T^{\mathcal{F}}(\hat{f}_T)+C'F^2\frac{\iota_\lambda(T)\log^{2+\nu_0+r} T}{T^{1/(\kappa+1)}}, \] where $C'$ is a positive constant only depending on $(\mathbf c, d, \nu, C_L, C_N, C_B,C_\tau, \nu_0,\nu_1, \nu_2, K, \iota_\lambda,\kappa,r,C_0,C_S)$. \end{proposition} If $\hat{f}_T = \hat{f}_{T,sp}$, then the generalization error bound in Proposition \ref{prop:pdnn} reduces to \[ R(\hat{f}_T,f_0) \leq C'F^2\frac{\iota_\lambda(T)\log^{2+\nu_0+r} T}{T^{1/(\kappa+1)}}. \] When $\nu_0=1$ and $\iota_\lambda(T)=\log^{\nu_3} T$ with $\nu_3 \in (1,2)$, one can see that our result improves the power of the logarithmic factors in the generalization error bound in Theorem 2 in \cite{OhKi22}. Moreover, our result allows the generalization error bound to depend explicitly on $F$. Combining this with the results in the following sections implies that the sparse-penalized DNN estimator can be adaptive with respect to the upper bound on $\|f_0\|_{\infty}$ (by taking $F \asymp \log^{\nu_4}T$ with $\nu_4>0$ for example), and hence Proposition \ref{prop:pdnn} is useful for the computation of $\hat{f}_{T,sp}$, since the upper bound $F$, like other information about the shape of $f_0$, is unknown in practice. \subsection{Composition structured functions}\label{subsec:comp} In this subsection, we consider nonparametric estimation of the mean function $f_0$ when it belongs to a class of composition structured functions, which is defined as follows. For $p, r\in\mathbb N$ with $p\geq r$, $\beta,A>0$ and $l<u$, we denote by $C_r^\beta([l,u]^p, A)$ the set of functions $f:[l,u]^p\to\mathbb R$ satisfying the following conditions: \begin{enumerate}[label=(\roman*)] \item $f$ depends on at most $r$ coordinates.
\item $f$ is of class $C^{\lfloor\beta\rfloor}$ and satisfies \[ \sum_{\bol\alpha : |\bol\alpha|_1 < \beta}\|\partial^{\bol\alpha} f\|_\infty + \sum_{\bol\alpha : |\bol\alpha |_1= \lfloor \beta \rfloor } \, \sup_{x, y \in [l,u]^p:x \neq y} \frac{|\partial^{\bol\alpha} f(x) - \partial^{\bol\alpha} f(y)|}{|x-y|_\infty^{\beta-\lfloor \beta \rfloor}} \leq A, \] where we used multi-index notation, that is, $\partial^{\bol\alpha}= \partial^{\alpha_1}\cdots \partial^{\alpha_p}$ with $\bol\alpha = (\alpha_1, \ldots, \alpha_p) \in \mathbb{Z}_{\geq 0}^p$ and $|\bol\alpha|_1:=\sum_{j=1}^p\alpha_j.$ \end{enumerate} Let $\mathbf d=(d_0,\ldots,d_{q+1})\in\mathbb N^{q+2}$ with $d_0=d$ and $d_{q+1}=1$, $\mathbf t=(t_0, \ldots, t_q)\in\mathbb N^{q+1}$ with $t_i\leq d_i$ for all $i$ and $\bol\beta=(\beta_0, \ldots, \beta_q)\in(0,\infty)^{q+1}$. We define $\mcl G\big(q, \mathbf d, \mathbf t, \bol\beta, A\big)$ as the class of functions $f:[0,1]^d\to\mathbb R$ of the form \begin{equation}\label{eq.mult_composite_regression} f= g_q \circ \cdots \circ g_0, \end{equation} where $g_i=(g_{ij})_j : [l_i, u_i]^{d_i}\rightarrow [l_{i+1},u_{i+1}]^{d_{i+1}}$ with $g_{ij} \in C_{t_i}^{\beta_i}\big([l_i,u_i]^{d_i}, A\big)$ for some $|l_{i+1}|, |u_{i+1}| \leq A$, $i=0,\dots,q$. Denote by $\mathcal M\big(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A\big)$ the class of functions in $\mcl M_0(\mathbf c)$ whose restrictions to $[0,1]^d$ belong to $\mathcal G\big(q, \mathbf d, \mathbf t, \bol\beta, A\big)$. Also, define \begin{align*} \beta_i^* &:= \beta_i \prod_{\ell=i+1}^q (\beta_{\ell}\wedge 1),\qquad \phi_T := \max_{i=0, \ldots, q } T^{-\frac{2\beta_i^*}{2\beta_i^*+ t_i}}. \end{align*} \begin{example}[Nonlinear additive AR model]\label{Ex:add-ar} Consider a nonlinear AR model: \begin{align*} Y_t &= m_1(Y_{t-1}) + \dots + m_d(Y_{t-d}) + v_t, \end{align*} where $m_1,\dots,m_d$ are univariate measurable functions. In this case, the mean function can be written as a composition of functions $m = g_1 \circ g_0$ with $g_0(x_1,\dots, x_d) = (m_1(x_1),\dots, m_d(x_d))'$ and $g_1(x_1,\dots, x_d) = \sum_{j=1}^{d}x_j$. Suppose that $m_j|_{[0,1]} \in C_1^{\beta}([0,1],A)$ for $j = 1,\dots, d$. Note that $g_1 \in C_d^{\gamma}([-A,A]^d, (A+1)d)$ for any $\gamma>1$. Then we can see that $m|_{[0,1]^d}: [0,1]^{d} \to [-Ad,Ad]$ and \[ m|_{[0,1]^d} \in \mathcal G\big(1, (d,d,1), (1,d), (\beta, (\beta \vee 2)d), (A+1)d\big). \] Hence $\phi_T=T^{-\frac{2\beta}{2\beta+1}}$ in this case. \end{example} \begin{example}[Nonlinear generalized additive AR model]\label{Ex:g-add-ar} Consider a nonlinear AR model: \begin{align*} Y_t &= \phi(m_1(Y_{t-1}) + \dots + m_d(Y_{t-d})) +v_t, \end{align*} where $\phi: \mathbb{R} \to \mathbb{R}$ is some unknown link function. In this case, the mean function can be written as a composition of functions $m = g_2 \circ g_1 \circ g_0$ with $g_0$ and $g_1$ as in Example \ref{Ex:add-ar} and $g_2 = \phi$. Suppose that $\phi \in C_1^{\gamma}([-Ad,Ad],A)$ and take $m_j$ and $g_1$ as in Example \ref{Ex:add-ar}. Then we can see that $m|_{[0,1]^d}: [0,1]^{d} \to [-A,A]$ and \[ m|_{[0,1]^d} \in \mathcal G\big(2, (d,d,1,1), (1,d,1), (\beta, (\beta \vee 2)d,\gamma), (A+1)d\big). \] Hence $\phi_T=T^{-\frac{2\beta(\gamma\wedge1)}{2\beta(\gamma\wedge1)+1}}\vee T^{-\frac{2\gamma}{2\gamma+1}}$ in this case. 
\end{example} \begin{example}[Single-index model]\label{ex:simod} Consider a nonlinear AR model: \begin{align*} Y_t &= \phi_0(Z_t)+\phi_1(Z_t)Y_{t-1}+\cdots+\phi_d(Z_t)Y_{t-d} +v_t,& Z_t&=b_0+b_1Y_{t-1}+\cdots+b_dY_{t-d}, \end{align*} where, for $j=0,1,\dots,d$, $\phi_j: \mathbb{R} \to \mathbb{R}$ is an unknown function and $b_j$ is an unknown constant. In this case, the mean function can be written as a composition of functions $m = g_2 \circ g_1 \circ g_0$ with $g_0(x_1,\dots,x_d)=(b_0+b_1x_1+\cdots+b_dx_d,x_1,\dots,x_d)'$, $g_1(z,x_1,\dots,x_d)=(\phi_0(z),\dots,\phi_d(z),x_1,\dots,x_d)'$, and $g_2(w_0,w_1,\dots,w_d,x_1,\dots,x_d)=w_0+w_1x_1+\cdots+w_dx_d$. Suppose that $\phi_0,\dots,\phi_d \in C_1^\beta([-A,A],A)$ for some constants $\beta\geq1$ and $A\geq1\vee\sum_{j=0}^d|b_j|$. Then we have \[ m|_{[0,1]^d} \in \mathcal G\big(2, (d,d+1,2d+1,1), (d,1,2d+1), (\beta d,\beta, \beta (2d+1)), (A+1)(1+d+dA)\big). \] Hence $\phi_T=T^{-\frac{2\beta}{2\beta+1}}$ in this case. \end{example} Below we show the minimax lower bound for estimating $f_0 \in \mathcal{M}(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A)$. \begin{theorem}\label{thm:minimax} Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mathcal{M}(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A)$. Suppose that $c_0\geq A$ and $t_j\leq\min\{d_0,\dots,d_{j-1}\}$ for all $j$. Then, for sufficiently large $A$, \[ \liminf_{T\to\infty}\phi_T^{-1}\inf_{\hat f_T}\sup_{m\in\mathcal{M}(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A)}R(\hat f_T,f_0)>0, \] where the infimum is taken over all estimators $\hat f_T$. \end{theorem} Theorem \ref{thm:minimax} and the next result imply that the sparse-penalized DNN estimator $\hat{f}_{T,sp}$ is rate optimal since it attains the minimax lower bound up to a poly-logarithmic factor. We write $\relu$ for the ReLU activation function, i.e.~$\relu(x)=\max\{x,0\}$. \begin{theorem}\label{thm:rate-comp} Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mathcal{M}(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A)$. Let $F\geq1\vee A$ be a constant, $L_T\asymp\log^{r}T$ for some $r>1$, $N_T\asymp T$, $B_T,\lambda_T$ and $\tau_T$ as in Theorem \ref{thm: pred-error-bound2} with $\nu_0=r$, and $\hat f_T$ a minimizer of $\bar Q_T(f)$ subject to $f\in \mathcal{F}_{\relu}(L_T,N_T,B_T,F)$. Then \[ \sup_{m\in\mathcal{M}(\mathbf c, q, \mathbf d, \mathbf t, \bol\beta, A)}R(\hat f_T,f_0)=O\left(\phi_T\iota_\lambda(T)\log^{3+r}T\right)\qquad\text{as}~T\to\infty. \] \end{theorem} \subsection{$\ell^0$-bounded affine class}\label{subsec:ell0-bound} In this subsection, we consider nonparametric estimation of the mean function $f_0$ when it belongs to an $\ell^0$-bounded affine class $\mcl I^0_\Phi$. This class was introduced in \cite{HaSu20} and is defined as follows. \begin{definition} Given a set $\Phi$ of real-valued functions on $\mathbb R^d$ with $\|\varphi\|_{L^2([0,1]^d)}=1$ for each $\varphi\in\Phi$ along with constants $n_s\in\mathbb N$ and $C>0$, we define an \textit{$\ell^0$-bounded affine class $\mcl I^0_\Phi$} as \bm{ \mcl I^0_\Phi(n_s,C):=\left\{\sum_{i=1}^{n_s}\theta_i\varphi_i(A_i\cdot-b_i):A_i\in\mathbb R^{d\times d},b_i\in\mathbb R^d,\theta_i\in\mathbb R,\varphi_i\in\Phi,\right.\\ \left.|\det A_i|^{-1}\vee|A_i|_\infty\vee|b_i|_\infty\vee|\theta_i|\leq C,~i=1,\dots,n_s\right\}. 
}
\end{definition}
By taking the set $\Phi$ suitably, the class of functions $\mcl I^0_\Phi$ includes many nonlinear AR models such as threshold AR (TAR) models, and we can show that the sparse-penalized DNN estimator attains the convergence rate $O(T^{-1})$ up to a poly-logarithmic factor (Theorem \ref{thm:rate-l0}).
\begin{example}[Threshold AR model]\label{ex:tar}
Consider a two-regime TAR(1) model:
\[
Y_t=\begin{cases}
a_1Y_{t-1}+v_t & \text{if }Y_{t-1}\leq r,\\
a_2Y_{t-1}+v_t & \text{if }Y_{t-1}> r,
\end{cases}
\]
where $a_1,a_2,r$ are some constants. This model corresponds to \eqref{ar-model} with $d=1$ and $m(y)=(a_1\mathbf{1}_{(-\infty,r]}(y)+a_2\mathbf{1}_{(r,\infty)}(y))y$. Note that the mean function $m$ can be discontinuous, and this $m$ can be rewritten as
\[
m(y)=-a_1\relu(r-y)+a_1r\mathbf{1}_{[0,\infty)}(r-y)+a_2\relu(y-r)+a_2r\mathbf{1}_{[0,\infty)}(y-r).
\]
Hence $m\in\mcl I^0_\Phi(n_s,C)$ with $\Phi=\{\sqrt 3\relu,\mathbf{1}_{[0,\infty)}\}$, $n_s\geq4$ and $C\geq\max\{|a_1|,|a_2|,|r|\}$. This argument can be extended to a multi-regime (self-exciting) TAR model of any order in an obvious manner.
\end{example}
We set $\mcl{M}_\Phi^0(\mathbf c,n_s,C):=\mcl{M}_0(\mathbf c)\cap\mcl I^0_\Phi(n_s,C)$. Below we show the minimax lower bound for estimating $f_0 \in \mcl{M}_\Phi^0(\mathbf c,n_s,C)$.
\begin{theorem}\label{thm:minimax-l0}
Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mcl{M}_\Phi^0(\mathbf c,n_s,C)$. Suppose that there is a function $\varphi\in\Phi$ such that $\supp(\varphi)\subset[0,1]^d$ and $\|\varphi\|_\infty\leq c_0$. Then,
\[
\liminf_{T\to\infty}T\inf_{\hat f_T}\sup_{m\in\mcl{M}_\Phi^0(\mathbf c,n_s,C)}R(\hat f_T,f_0)>0,
\]
where the infimum is taken over all estimators $\hat f_T$.
\end{theorem}
Now we extend the argument in Example \ref{ex:tar}. For this, we introduce the class $\mathrm{AP}_{\sigma,d}(C_1,C_2,D,r)$ of functions that can be approximated by ``light'' networks.
\begin{definition}\label{defi:AP}
For $C_1,C_2,D>0$ and $r\geq0$, we denote by $\mathrm{AP}_{\sigma,d}(C_1,C_2,D,r)$ the set of functions $\varphi:\mathbb R^d\to\mathbb R$ satisfying that, for each $\varepsilon\in(0,1/2)$, there exist parameters $L_\varepsilon,N_\varepsilon,B_\varepsilon,S_\varepsilon>0$ such that
\begin{itemize}
\item $L_\varepsilon\vee N_\varepsilon\vee S_\varepsilon\leq C_1\{\log_2(1/\varepsilon)\}^{r}$ and $B_\varepsilon\leq C_2/\varepsilon$ hold;
\item there exists an $f\in\mathcal{F}_{\sigma}(L_\varepsilon,N_\varepsilon,B_\varepsilon)$ such that $|\theta(f)|_0\leq S_\varepsilon$ and $\|f-\varphi\|_{L^2([-D,D]^d)}^2\leq\varepsilon$.
\end{itemize}
\end{definition}
Depending on the value of $r$, $\mathrm{AP}_{\sigma,d}(C_1,C_2,D,r)$ contains various functions such as step functions (any $r\geq0$), polynomials ($r=1$), and very smooth functions ($r=2$).
\begin{example}[Piecewise linear functions]\label{ex:plin}
For $\sigma=\relu$, we evidently have $\relu\in\mathrm{AP}_{\sigma,1}(C_1,C_2,D,r)$ for any $C_1,C_2,D\geq2$ and $r\geq0$. In this case we also have $\mathbf{1}_{[0,\infty)}\in\mathrm{AP}_{\sigma,1}(C_1,C_2,D,r)$ if $C_1,C_2\geq7$. In fact, for any $\varepsilon\in(0,1/2)$, the function
\[
f_\varepsilon(x)=\sigma\left(\sigma(x+1)-\sigma(x)-\frac{1}{\varepsilon}\sigma(-x)\right),\qquad x\in\mathbb R,
\]
satisfies $\|f_\varepsilon-\mathbf{1}_{[0,\infty)}\|_{L^2([-D,D])}^2\leq\varepsilon$.
\end{example}
\begin{example}[Polynomial functions]\label{ex:poly}
Take $\sigma=\relu$ and consider a polynomial function $\varphi(x)=\sum_{i=0}^pa_ix^i$ for some constants $a_0,\dots,a_p\in\mathbb R$. Then, given $D>0$, we have $\varphi\in\mathrm{AP}_{\sigma,1}(C_1,1/2,D,1)$ for some constant $C_1>0$ depending only on $\max_{i=0,\dots,p}|a_i|$, $p$ and $D$ by Proposition III.5 in \cite{EPGB21}.
\end{example}
\begin{example}[Very smooth functions]\label{ex:holo}
Take $\sigma=\relu$ again. Let $\varphi:\mathbb R\to\mathbb R$ be a $C^\infty$ function such that there are constants $A\geq1$ and $D>0$ satisfying $\sup_{x\in[-D,D]}|\varphi^{(n)}(x)|\leq n!A$ for all $n\in\mathbb Z_{\geq0}$. Then, by Lemma A.6 in \cite{EPGB21}, $A^{-1}\varphi\in\mathrm{AP}_{\sigma,1}(C_1,1,D,2)$ for some constant $C_1>0$ depending only on $D$. Hence $\varphi\in\mathrm{AP}_{\sigma,1}(C_1,A,D,2)$. The condition on $\varphi$ is satisfied e.g.~when there is a holomorphic function $\Psi$ on $\{z\in\mathbb C:|z|<D+1\}$ such that $|\Psi|\leq A$ and $\Psi(x)=\varphi(x)$ for all $x\in[-D,D]$. This follows from Cauchy's estimates (cf.~Theorem 10.26 in \cite{Ru87}).
\end{example}
\begin{example}[Product with an indicator function]\label{ex:prod}
Again consider the ReLU activation function $\sigma=\relu$. Let $\varphi\in\mathrm{AP}_{\sigma,1}(C_1,C_2,D,r)$ for some constants $C_1,C_2,D>0$ and $r\geq1$, and assume $\sup_{x\in[-D,D]}|\varphi(x)|\leq A$ for some constant $A\geq1$. Then $\varphi\mathbf1_{[0,\infty)}\in\mathrm{AP}_{\sigma,1}(C_3,C_3,D,r)$ for some constant $C_3$ depending only on $C_1,C_2,D,A$. To see this, fix $\varepsilon\in(0,1/2)$ arbitrarily and take $L_\varepsilon,N_\varepsilon,B_\varepsilon,S_\varepsilon$ and $f$ as in Definition \ref{defi:AP}. Also, let $f_\varepsilon$ be defined as in Example \ref{ex:plin}. By Proposition III.3 in \cite{EPGB21}, there is an $f_1\in\mcl F_\sigma(C_4\log(1/\varepsilon),5,1)$ with $C_4>0$ depending only on $A$ such that $\sup_{x,y\in[-A,A]}|f_1(x,y)-xy|\leq\varepsilon$. Then, by Lemmas II.3--II.4 and A.7 in \cite{EPGB21}, there is an $f_2\in\mcl F_\sigma(C_5\{\log(1/\varepsilon)\}^r,C_5\{\log(1/\varepsilon)\}^r,C_5/\varepsilon)$ with $C_5>0$ depending only on $C_1,C_2,A$ such that $f_2(x)=f_1(f(x),f_\varepsilon(x))$ for all $x\in\mathbb R$ and $|\theta(f_2)|_0\leq C_5\{\log(1/\varepsilon)\}^r$. For this $f_2$, we have
\ba{
&\|f_2-\varphi\mathbf1_{[0,\infty)}\|_{L^2([-D,D])}\\
&\leq\|f_2-ff_\varepsilon\|_{L^2([-D,D])}+\|(f-\varphi)f_\varepsilon\|_{L^2([-D,D])}
+\|\varphi(f_\varepsilon-\mathbf1_{[0,\infty)})\|_{L^2([-D,D])}\\
&\leq (D+1+A)\sqrt{\varepsilon}.
}
Applying this argument to $\varepsilon/(D+1+A)^2$ instead of $\varepsilon$, we obtain the desired result.
\end{example}
Theorem \ref{thm:minimax-l0} and the next result imply that the sparse-penalized DNN estimator $\hat{f}_{T,sp}$ attains the minimax optimal rate over $\mcl{M}_\Phi^0(\mathbf c,n_s,C)$ up to a poly-logarithmic factor.
\begin{theorem}\label{thm:rate-l0}
Consider the nonlinear AR($d$) model \eqref{ar-model} with $m\in\mcl{M}_\Phi^0(\mathbf c,n_s,C)$. Suppose that $\Phi\subset\mathrm{AP}_{\relu,d}(C_1,C_2,D,r)$ for some constants $C_1,C_2>0$, $D\geq(d+1)C$ and $r\geq0$. Let $F\geq1+c_0$ be a constant, $L_T\asymp\log^{r'}T$ for some $r'>r$, $N_T\asymp T$, $B_T\asymp T^{\nu}$ for some $\nu>1$, $\lambda_T$ and $\tau_T$ as in Theorem \ref{thm: pred-error-bound2} with $\nu_0=r'$, and $\hat f_T$ a minimizer of $\bar Q_T(f)$ subject to $f\in \mathcal{F}_{\relu}(L_T,N_T,B_T,F)$.
Then
\[
\sup_{m\in\mcl{M}_\Phi^0(\mathbf c,n_s,C)}R(\hat f_T,f_0)=O\left(\frac{\iota_\lambda(T)\log^{2+r+r'}T}{T}\right)\qquad\text{as}~T\to\infty.
\]
\end{theorem}
\begin{example}
By Examples \ref{ex:tar} and \ref{ex:plin}, the sparse-penalized DNN estimator adaptively achieves the minimax rate of convergence up to a logarithmic factor for threshold AR models. Thanks to Examples \ref{ex:poly}--\ref{ex:prod}, this result can be extended to some threshold AR models with nonlinear coefficients.
\end{example}
\begin{example}[Functional coefficient AR model]\label{ex:far}
Examples \ref{ex:holo} and \ref{ex:prod} also imply that Theorem \ref{thm:rate-l0} can be extended to some functional coefficient AR (FAR) models introduced in \cite{ChTs93}:
\[
Y_t = f_1(\mathbf Y_{t-1}^*)Y_{t-1} + \dots + f_d(\mathbf Y_{t-1}^*)Y_{t-d} + v_t
\]
where $\mathbf Y_{t-1}^* = (Y_{t-1},\dots, Y_{t-d})'$ and $f_j : \mathbb{R}^{d} \to \mathbb{R}$ are measurable functions. This model includes many nonlinear AR models such as (1) TAR models (when $f_j$ are step functions), (2) exponential AR (EXPAR) models proposed in \cite{HaOz81} (when $f_j$ are exponential functions), and (3) smooth transition AR (STAR) models (e.g. \cite{GrTe93} and \cite{Te94}). Note that some classes of FAR models such as EXPAR and STAR models can be written as a composition of functions, so Theorem \ref{thm:rate-comp} can be applied to those examples.
\end{example}
\section{Simulation results}\label{sec:simulation}
In this section, we conduct a simulation experiment to assess the finite sample performance of DNN estimators for the mean function of nonlinear time series. Following \cite{OhKi22}, we compare the following five estimators in our experiment: kernel ridge regression (KRR), $k$-nearest neighbors (kNN), random forest (RF), the non-penalized DNN estimator (NPDNN), and the sparse-penalized DNN estimator (SPDNN). For kernel ridge regression, we used a Gaussian radial basis function kernel and selected the tuning parameters by 5-fold cross-validation as in \cite{OhKi22}. We determined the search grids for selection of the tuning parameters following \cite{Ex13}. The tuning parameter $k$ for $k$-nearest neighbors was also selected by 5-fold cross-validation with the search grid $\{5,7,\dots,41,43\}$. For random forest, unlike \cite{OhKi22}, we did not tune the number of trees but fixed it at 500 following the discussion in \cite[Section 15.3.4]{HTF09} as well as the analysis of \cite{PrBo18}. Instead, we tuned the number of variables randomly sampled as candidates at each split. This was done by the R function \fun{tuneRF} of the package \pck{randomForest}. For the DNN-based estimators, we set the network architecture $(L,\mathbf p)$ as $L=3$ and $p_1=p_2=p_3=128$ along with the ReLU activation function $\sigma(x)=\max\{x,0\}$. Supposing that the data were appropriately scaled, we ignored the restriction of observations to $[0,1]^d$ when constructing (and evaluating) the DNN-based estimators. The network weights were trained by Adam \citep{KiBa15} with learning rate $10^{-3}$ and minibatch size 64. To avoid overfitting, we determined the number of epochs by the following early stopping rule: First, we train the network weights using the first half of the observation data and evaluate the mean squared error (MSE) on the second half of the data at each epoch. We stop the training when the MSE has not improved for 5 consecutive epochs. After determining the number of epochs by this rule, we retrained the network weights using the full sample.
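To make the tuning procedure concrete, the following is a minimal sketch (in Python with PyTorch) of the early stopping rule described above, applied to a feed-forward network with the architecture used in our experiments. The lagged-design construction and all variable names are illustrative assumptions; data loading, the sparse penalty and the $\lambda_T$ grid search are omitted.
\begin{verbatim}
import numpy as np
import torch
from torch import nn

def make_design(y, d):
    # Row t of X holds (Y_{t-1}, ..., Y_{t-d}); the target is Y_t.
    X = np.stack([y[d - 1 - j:len(y) - 1 - j] for j in range(d)], axis=1)
    return X.astype(np.float32), y[d:].astype(np.float32)

def mlp(d, width=128, depth=3):
    layers, in_dim = [], d
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    return nn.Sequential(*layers, nn.Linear(in_dim, 1))

def run_epoch(net, opt, X, y, batch=64):
    perm = torch.randperm(len(y))
    for i in range(0, len(y), batch):
        idx = perm[i:i + batch]
        opt.zero_grad()
        loss = ((net(X[idx]).squeeze(-1) - y[idx]) ** 2).mean()
        loss.backward()
        opt.step()

def select_num_epochs(X, y, max_epochs=500, patience=5):
    # Train on the first half, monitor the MSE on the second half.
    h = len(y) // 2
    net = mlp(X.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    best, best_epoch = np.inf, 0
    for epoch in range(1, max_epochs + 1):
        run_epoch(net, opt, X[:h], y[:h])
        with torch.no_grad():
            mse = ((net(X[h:]).squeeze(-1) - y[h:]) ** 2).mean().item()
        if mse < best:
            best, best_epoch = mse, epoch
        elif epoch - best_epoch >= patience:
            break   # no improvement within `patience` epochs
    return best_epoch

# Final fit: retrain on the full sample for the selected number of epochs.
y_obs = np.random.randn(400)            # placeholder for observed data
X, y = map(torch.from_numpy, make_design(y_obs, d=2))
net = mlp(X.shape[1])
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(select_num_epochs(X, y)):
    run_epoch(net, opt, X, y)
\end{verbatim}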
For the sparse-penalized DNN estimator, we also need to select the parameters $\lambda_T$ and $\tau_T$. We set $\tau_T=10^{-9}$. $\lambda_T$ was selected from $\{\frac{S_y\log_{10}^3T}{8T},\frac{S_y\log_{10}^3T}{4T},\frac{S_y\log_{10}^3T}{2T},\frac{S_y\log_{10}^3T}{T},\frac{2S_y\log_{10}^3T}{T}\}$ to minimize the MSE in the above early stopping rule. Here, $S_y$ is the sample variance of $\{Y_t\}_{t=1}^T$.

We consider the following nonlinear AR models as data-generating processes. Throughout this section, $\{\varepsilon_t\}_{t=1}^T$ denote i.i.d.~standard normal variables.
\begin{description}
\item[EXPAR] $Y_t=a_1(Y_{t-1})Y_{t-1}+a_2(Y_{t-1})Y_{t-2}+0.2\varepsilon_t$ with
\ba{
a_1(y)&=0.138+(0.316+0.982y)e^{-3.89y^2},\\
a_2(y)&=-0.437-(0.659+1.260y)e^{-3.89y^2}.
}
\item[TAR] $Y_t=b_1(Y_{t-1})Y_{t-1}+b_2(Y_{t-1})Y_{t-2}+\varepsilon_t$ with
\ba{
b_1(y)&=0.4\cdot\mathbf1_{(-\infty,1]}(y)-0.8\cdot\mathbf1_{(1,\infty)}(y),\\
b_2(y)&=-0.6\cdot\mathbf1_{(-\infty,1]}(y)+0.2\cdot\mathbf1_{(1,\infty)}(y).
}
\item[FAR]
\[
Y_t=-Y_{t-2}\exp(-Y_{t-2}^2/2)+\frac{1}{1+Y_{t-2}^2}\cos(1.5Y_{t-2})Y_{t-1}+0.5\varepsilon_t.
\]
\item[AAR]
\ba{
Y_t=4\frac{Y_{t-1}}{1+0.8Y_{t-1}^2}+\frac{\exp\{3(Y_{t-2}-2)\}}{1+\exp\{3(Y_{t-2}-2)\}}+\varepsilon_t.
}
\item[SIM]
\ba{
Y_t=\exp(-8Z_t^2)+0.5\sin(2\pi Z_t)Y_{t-1}+0.1\varepsilon_t,\quad Z_t=0.8Y_{t-1}+0.6Y_{t-2}-0.6.
}
\item[SIM$_v$] For $v\in\{0.5,1.0,5.0\}$,
\ba{
Y_t&=\{\Phi(-vZ_t)-0.5\}Y_{t-1}+\{\Phi(2vZ_t)-0.6\}Y_{t-2}+\varepsilon_t,\\
Z_t&=Y_{t-1}+Y_{t-2}-Y_{t-3}-Y_{t-4},
}
where $\Phi$ is the standard normal distribution function.
\end{description}
The first four models, EXPAR, TAR, FAR and AAR, are taken from Chapter 8 of \cite{FaYa03}; see Examples 8.3.7, 8.4.7 and 8.5.6 ibidem. The models SIM and SIM$_v$ are respectively taken from \cite[Example 1]{XiLi99} and \cite[Example 3.2]{XLT07} to cover the single-index model (cf.~Example \ref{ex:simod}). Since the model SIM$_v$ has a parameter $v$ varying over $\{0.5,1,5\}$, we consider eight models in total. We generated observation data $\{Y_t\}_{t=1}^T$ with $T=400$ and a burn-in period of 100 observations. As in \cite{OhKi22}, we evaluate the performance of each estimator by the empirical $L_2$ error computed from $10^5$ newly simulated observations.

Figure \ref{fig:sim} shows the boxplots of the empirical $L_2$ errors of the five estimators over 500 Monte Carlo replications for the eight models. As the figure reveals, the performances of KRR, NPDNN and SPDNN are superior to those of kNN and RF. Moreover, except for FAR, the DNN-based estimators are comparable to or better than KRR: For models with intrinsic low-dimensional structures such as AAR, SIM and SIM$_{0.5}$, the DNN-based estimators perform slightly better than KRR. For models with discontinuous or rough mean functions such as TAR, SIM$_{1}$ and SIM$_5$, the performances of the DNN-based estimators dominate that of KRR. These observations are in line with the theoretical results developed in this paper.
\begin{figure}[h]
\caption{Boxplots of empirical $L_2$ errors}\label{fig:sim}
\begin{center}
\includegraphics[scale=1]{nlts-boxplot.pdf}
\end{center}
\end{figure}
\section{Concluding remarks}\label{sec: conclusion}
In this paper, we have advanced the statistical theory of feed-forward deep neural networks (DNN) for dependent data. For this, we investigated the statistical properties of DNN estimators for nonparametric estimation of the mean function of a nonstationary and nonlinear time series.
We established generalization bounds for both the non-penalized and sparse-penalized DNN estimators and showed that the sparse-penalized DNN estimator can adaptively estimate the mean functions of a wide class of nonlinear autoregressive (AR) models and attain the minimax optimal convergence rates up to a logarithmic factor. The class of nonlinear AR models covers nonlinear generalized additive AR models, single-index models, and popular nonlinear AR models with discontinuous mean functions such as multi-regime threshold AR models. It would be possible to extend the results in Section \ref{sec:ar-model} to other function classes such as piecewise smooth functions (\cite{ImFu19}), functions with low intrinsic dimensions (\cite{Sc19} and \cite{NaIm20}), and functions with varying smoothness (\cite{Su19} and \cite{SuNi21}). We leave such extensions as future research.
\newpage
\section{Introduction}
Video Captioning (VC) is an important research branch of video understanding. The task of VC aims to generate a natural sentence to describe the content of a video. The VC task only deals with ideal situations where the provided video is short and the generated sentence only describes one main event in the video. However, for most natural videos composed of multiple events, a single sentence cannot cover the content of the video. To tackle this issue, the task of Dense Video Captioning (DVC) has been developed for temporally localizing and generating descriptions for multiple events in one video.

Intuitively, DVC can be divided into two sub-tasks: event localization and event captioning. The localization sub-task aims to predict the timestamps of each event. This requires the DVC model to decide temporal boundaries between event and non-event segments, and to discriminate one event from another. For the captioning sub-task, the model needs to generate a natural sentence to describe each corresponding event. Recent works \cite{wang2021end, deng2021sketch} have proposed models that can achieve good performance under DVC metrics. However, semantic information, which has proved useful in VC tasks \cite{gan2017semantic, perez2021attentive}, has not yet been used in DVC tasks. As shown in Figure \ref{fig1}, we notice that there are different concepts (i.e. actions and object tags) in different segments of one video. This can help the DVC model decide temporal boundaries between different segments. Introducing high-level semantic concepts also helps to bridge the semantic gap between video and text.

To make full use of semantic information, we introduce \textbf{semantic assistance} to our model, in both the encoding and decoding stages. We use PDVC, which stands for \emph{end-to-end dense \textbf{V}ideo \textbf{C}aptioning with \textbf{P}arallel \textbf{D}ecoding} \cite{wang2021end}, as our baseline model. PDVC is a transformer-based framework with parallel sub-tasks. In the encoding stage, a \textbf{concept detector} is designed to extract frame-level semantic information. We design a \textbf{fusion module} to integrate all the features. In the decoding stage, a \textbf{classification sub-task} is added in parallel with the localization and captioning sub-tasks. By predicting attributes for events, the classification sub-task can provide event-level semantic supervision. Experimental results show that our strategy of using semantic information achieves significant improvement on the YouMakeup dataset \cite{wang2019youmakeup} under DVC evaluation metrics.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig1}
\caption{In the YouMakeup dataset \cite{wang2019youmakeup}, different segments have different concepts. Segments with events (top and bottom rows) have concepts including makeup actions, products, tools, and face areas affected by makeup actions. Non-event segments may lack concepts of makeup actions and affected face areas.}
\Description{Different segments have different concepts.}
\label{fig1}
\end{figure}
\section{Related Works}
DVC models often follow the encoder-decoder framework. The encoder extracts visual features from the raw video and gives a general representation of the video. Off-the-shelf models, such as C3D \cite{ji20123d}, I3D \cite{carreira2017quo}, and ResNet \cite{he2016deep}, can be used as the backbone of the encoder.
The decoder takes the encoded visual representation as input and performs two tasks: event localization and event captioning. Krishna et al. \cite{krishna2017dense} propose the first DVC model with a two-stage framework. The decoder combines a proposal module and a captioning module. The proposal module performs the localization sub-task by selecting numerous video segments as event proposals, and the captioning module then generates captions for each proposal. Motivated by transformer-based end-to-end object detection methods \cite{carion2020end, zhu2020deformable}, Wang et al. \cite{wang2021end} propose a parallel decoding method in which the DVC task is cast as a set prediction problem. An event set with temporal locations and captions is directly predicted by applying the localization and captioning sub-tasks in parallel. Deng et al. \cite{deng2021sketch}, in contrast, reverse the ``localize-then-caption'' fashion and propose a top-down scheme. In their method, a paragraph is first generated to describe the input video from a global view. Each sentence of the paragraph is treated as an event and then temporally grounded to a video segment for fine-grained refinement.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{fig2}
\caption{Overview of our proposed DVC model. $M$ multi-scale feature extractors and a multi-scale concept detector are respectively used to extract frame-level multi-modal visual features and the concept feature from video frame sequences, which are then fused by the multi-modal feature fusion module. The transformer encoder is used to obtain the final representation of the video. The transformer decoder and four parallel heads are proposed to predict the labels, locations, captions, and the number of events.}
\Description{Overview of the proposed DVC model}
\label{fig2}
\end{figure*}
\section{Methods}
\subsection{Overview}
In the DVC task, given an input video sequence $\{ \boldsymbol{x}_t \}^T_{t=1}$, the model needs to predict all the $K$ events $\{ \boldsymbol{\tilde{E}}_i|\boldsymbol{\tilde{E}}_i=(\boldsymbol{l}_i, \boldsymbol{s}_i) \}^K_{i=1}$, where $\boldsymbol{l}_i$ and $\boldsymbol{s}_i$ stand for the timestamps and the caption sentence of the $i$-th event, respectively. In our work, PDVC \cite{wang2021end} is used as the baseline model. We further add a semantic concept detector, a multi-modal feature fusion module and a classification head on the basis of PDVC.

Here we present an overview of our model. As shown in Figure \ref{fig2}, our model follows the encoder-decoder pipeline. In the encoding stage, a video frame sequence is fed into $M$ multi-scale feature extractors and a multi-scale concept detector. The multi-modal feature fusion module is employed to fuse all the extracted features. The transformer encoder takes the fused feature sequence with positional embedding to produce the final visual representation. In the decoding stage, the transformer decoder takes event queries and the encoded features as input, followed by four parallel heads. The localization and captioning heads predict the timestamps and captions for each query, respectively. The classification head performs a multi-label classification task to assign each event to predefined classes. The event counter predicts the actual number of events in the video.
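To make the decoding stage concrete, the following is a minimal, self-contained sketch (in Python with PyTorch) of the four parallel heads acting on the event-level query representations. All layer sizes and names are illustrative assumptions, and the captioning head is reduced to a single LSTM step for brevity; the detailed head designs are described below.
\begin{verbatim}
import torch
from torch import nn

class ParallelHeads(nn.Module):
    def __init__(self, dim=256, vocab=1000, n_labels=25, max_events=10):
        super().__init__()
        # Localization head: MLP predicting normalized (start, end).
        self.loc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 2))
        # Classification head: MLP for multi-label probabilities.
        self.cls = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, n_labels))
        # Captioning head: LSTM fed with the query at every time step.
        self.lstm = nn.LSTMCell(dim, dim)
        self.word = nn.Linear(dim, vocab)
        # Event counter: max-pooling + fully-connected layer.
        self.counter = nn.Linear(dim, max_events + 1)

    def forward(self, q):                  # q: (N, dim) event queries
        loc = self.loc(q).sigmoid()        # (N, 2) normalized timestamps
        labels = self.cls(q).sigmoid()     # (N, n_labels)
        h, _ = self.lstm(q)                # one captioning step only
        words = self.word(h)               # (N, vocab) word logits
        k_num = self.counter(q.max(dim=0).values)
        K = int(k_num.argmax())            # predicted number of events
        return loc, labels, words, K

heads = ParallelHeads()
queries = torch.randn(35, 256)             # N = 35 queries, cf. Sec. 4
loc, labels, words, K = heads(queries)
\end{verbatim}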
\subsection{Feature Encoding}
\subsubsection{Multi-scale feature extractor}
The $M$ multi-scale feature extractors take the video frame sequence $\{\boldsymbol {x}_t \}^T_{t=1}$ and extract features of $M$ modalities $\{ \boldsymbol{v}_j^m \}^{T'}_{j=1}$, where $m = 1,...,M$. Each multi-scale feature extractor is composed of an off-the-shelf pretrained feature extractor (e.g. Swin Transformer \cite{liu2021swin}, I3D \cite{carreira2017quo}) and a following temporal convolution network with $L$ layers. Multi-scale features are obtained by temporally concatenating the raw features with the outputs of the $L$ 1D temporal convolution layers (stride=2). Thus the output sequence length $T'$ can be calculated as:
\begin{equation}
T' = \sum_{l=0}^{L} \frac{T}{2^l}
\end{equation}
\subsubsection{Concept Detector}
The concept detector is a pretrained module that predicts concept vectors $\{ \boldsymbol{c}_t \}^{T}_{t=1}$, i.e. the probabilities of concepts appearing in each video frame. The concept detection approach is defined as follows. We first use the NLTK toolkit \cite{bird2009natural} to apply part-of-speech tagging to each word in the training corpus. We choose nouns and verbs of high word frequency as the $N_c$ concepts. For the $t$-th frame with a caption, its ground truth concept vector $\boldsymbol{c}_t = [c^1_t, ..., c^{N_c}_t] $ is assigned by:
\begin{equation}
c^i_t = \left\{
\begin{array}{ll}
1 & \text{if the $i$-th concept appears in the caption}\\
0 & \text{otherwise}\\
\end{array}
\right. , i = 1,2,...N_c
\end{equation}
The concept detector contains a pretrained feature extractor and a trainable multi-layer perceptron. Frames without captions (i.e. non-event frames) are not taken into consideration at the training stage. In the whole DVC pipeline, the pretrained concept detector serves as a feature extractor for frames both with and without captions. A temporal convolution network also follows to produce the multi-scale concept feature $\{ \boldsymbol{v}^C_j \}^{T'}_{j=1}$ from the concept vectors $\{ \boldsymbol{c}_t \}^{T}_{t=1}$.
\subsubsection{Multi-Modal Feature Fusion Module}
The multi-modal feature fusion module fuses the features from all modalities, as well as the concept feature. Features are projected into an embedding space and then concatenated frame by frame. The fused feature is denoted as $\{\boldsymbol{f}_j\}^{T'}_{j=1}$.
\subsubsection{Transformer Encoder}
The transformer encoder takes the fused feature sequence $\{\boldsymbol{f}_j\}^{T'}_{j=1}$ with positional embedding to produce the final visual representation $\{\boldsymbol{\tilde{f}}_j\}^{T'}_{j=1}$ by applying multi-scale deformable attention (MSDatt) \cite{zhu2020deformable}. MSDatt helps to capture multi-scale inter-frame interactions.
\subsection{Parallel Decoding}
The decoding part of the model contains a transformer decoder and four parallel heads. The transformer decoder takes $N$ event queries $\{\boldsymbol{q}_i\}^{N}_{i=1}$ and the encoded frame-level features $\{\boldsymbol{\tilde{f}}_j\}^{T'}_{j=1}$. Each event query corresponds to a video segment. The transformer decoder also applies MSDatt to capture frame-event and inter-event interactions. The four heads make predictions based on the output event-level representations $\{\boldsymbol{\tilde{q}}_i\}^{N}_{i=1}$ of the transformer decoder.
\subsubsection{Localization head}
The localization head predicts the timestamps $\{\boldsymbol{l}_i\}^N_{i=1}$ of each query using a multi-layer perceptron. Each timestamp contains the normalized starting and ending times.
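Returning to the encoding stage for a moment, the construction of the concept vocabulary and the frame-level binary targets described in the concept detector module above can be sketched in a few lines of Python. NLTK's tokenizer and tagger models are assumed to be downloaded, and the function and variable names are illustrative.
\begin{verbatim}
import collections
import nltk  # assumes 'punkt' and the POS tagger models are available

def build_concepts_and_targets(captions, n_c=100):
    # captions: list of caption strings from the training corpus
    counts, caption_words = collections.Counter(), []
    for cap in captions:
        tokens = nltk.word_tokenize(cap.lower())
        # Keep nouns (NN*) and verbs (VB*) as concept candidates.
        words = {w for w, tag in nltk.pos_tag(tokens)
                 if tag.startswith('NN') or tag.startswith('VB')}
        caption_words.append(words)
        counts.update(words)
    # The N_c most frequent nouns/verbs form the concept vocabulary.
    concepts = [w for w, _ in counts.most_common(n_c)]
    index = {w: i for i, w in enumerate(concepts)}
    # Binary target c_t, shared by all frames covered by the caption.
    targets = []
    for words in caption_words:
        c = [0] * n_c
        for w in words & index.keys():
            c[index[w]] = 1
        targets.append(c)
    return concepts, targets
\end{verbatim}
The multi-layer perceptron on top of the frame features is then trained against these binary targets with the focal loss, as described in the training section below.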
\subsubsection{Captioning head}
The captioning head employs an LSTM network to predict the caption sentences $\{\boldsymbol{s}_i\}^{N}_{i=1}$ of each query. For the $i$-th query, the event-level representation $\boldsymbol{\tilde{q}}_i$ is fed into the LSTM at every time step, and a fully-connected layer takes the hidden state of the LSTM to predict words.
\subsubsection{Classification head}
Each ground truth event is assigned labels that indicate certain attributes of the event. The classification head predicts the label vector $\{\boldsymbol{y}_i\}^{N}_{i=1}$. The head is composed of a multi-layer perceptron. Each value of the vector $\boldsymbol{y}_i$ indicates the probability of a certain label in the event. The classification sub-task, which brings semantic supervision to the model, serves as an auxiliary task for DVC.
\subsubsection{Event counter}
The event counter predicts the actual number of events in the video by performing a multi-class classification. The counter contains a max-pooling layer and a fully-connected layer, taking $\boldsymbol{\tilde{q}}_i$ and predicting a vector $k_{num}$ of probabilities over the possible event numbers. The length of $k_{num}$ is set to the expected maximum number of events plus 1. The actual event number is obtained by $K = \text{argmax} (k_{num})$.
\subsection{Training and Inference}
\subsubsection{Training}
In the training stage, we fix the parameters of the pretrained feature extractors and the concept detector. The feature extractors are directly loaded with off-the-shelf pretrained parameters. The concept detector is trained offline using the focal loss \cite{lin2017focal} to alleviate the problem of unbalanced samples.

When training the whole DVC model, the predicted event set $\{E_i\}^N_{i=1}$ has to be matched with the ground truths. We use the Hungarian algorithm to find the best matching, following \cite{wang2021end}. The captioning loss $L_c$ and localization loss $L_l$ are calculated only using the matched queries. $L_c$ is the cross-entropy between the ground truth and the predicted probabilities of words. $L_l$ is the gIoU loss \cite{rezatofighi2019generalized} between matched prediction and ground truth pairs. The classification loss $L_{cls}$ is calculated using the focal loss between all predicted labels and their targets. For the matched queries, the label target is equal to the matched ground truth. For the unmatched queries, the label target is set to an all-zero vector. The counter loss is the cross-entropy between the predicted result and the ground truth. The DVC loss is the weighted sum of the four losses above.
\subsubsection{Inference}
In the inference stage, the predicted $N$ event proposals $\{E_i\}^N_{i=1}$ are ranked by confidence. Following \cite{wang2021end}, the confidence is the sum of the classification confidence and the captioning confidence. The top $K$ events are chosen as the final DVC result $\{ \tilde{E_i}\}^K_{i=1}$.
\section{Experiments}
\subsection{Settings}
\subsubsection{Dataset}
We conduct experiments on the YouMakeup dataset \cite{wang2019youmakeup}. The YouMakeup dataset contains 2800 makeup instructional videos whose lengths vary from 15 s to 1 h. There are a total of 30,626 events, with 10.9 events per video on average. Each event is annotated with a caption, a timestamp, and grounded facial area labels from 25 classes. We follow the official split with 1680 videos for training, 280 for validation, and 840 for test.
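The matching step described above can be illustrated with a short sketch using SciPy's Hungarian solver. For brevity, the matching cost here uses only the 1D gIoU between predicted and ground truth segments; the full cost used in training also involves the classification and captioning terms, and all names below are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def giou_1d(p, g, eps=1e-8):
    # p, g: (start, end) segments with start < end
    inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
    union = (p[1] - p[0]) + (g[1] - g[0]) - inter
    hull = max(p[1], g[1]) - min(p[0], g[0])
    iou = inter / (union + eps)
    return iou - (hull - union) / (hull + eps)

def hungarian_match(pred_segs, gt_segs):
    # Cost = negative gIoU; the Hungarian algorithm returns the
    # one-to-one assignment minimizing the total cost.
    cost = np.array([[-giou_1d(p, g) for g in gt_segs]
                     for p in pred_segs])
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx, gt_idx))

preds = [(0.10, 0.30), (0.55, 0.90), (0.00, 0.05)]
gts = [(0.50, 0.85), (0.12, 0.28)]
print(hungarian_match(preds, gts))   # [(0, 1), (1, 0)]
\end{verbatim}
The localization loss $L_l$ is then computed from the gIoU of the matched pairs (e.g. $1-\mathrm{gIoU}$ accumulated over them).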
\subsubsection{Evaluation Metrics}
We evaluate our method using the evaluation tool provided by the 2018 ActivityNet Captions Challenge in terms of localization and captioning. For the localization performance, we compute the average precision (P) and recall (R) across tIoU thresholds of 0.3/0.5/0.7/0.9. For the captioning performance, we calculate BLEU4 (B4), METEOR (M), and CIDEr (C) of the matched pairs between generated captions and the ground truth across tIoU thresholds of 0.3/0.5/0.7/0.9.
\subsubsection{Implementation details}
We use PDVC \cite{wang2021end} as our baseline model. Pretrained I3D \cite{carreira2017quo} and Swin Transformer (Base) \cite{liu2021swin} are used to extract frame-level motion and appearance features. The concept detection is performed on the Swin Transformer feature of every frame, and the concept number $N_c$ is set to 100. For parallel computing, all the feature sequences are temporally resized to the same length. Sequences longer than 1024 are temporally interpolated to a length of 1024. Those shorter than 1024 are padded to 1024. In the decoding stage, the grounded facial area labels are predicted by the classification head. The number of queries $N$ and the length of $k_{num}$ are set to 35 and 11, respectively. Other settings follow the baseline model PDVC.
\begin{table}
\begin{tabular}{ccccccc}
\toprule
Method & Dataset & P & R & B4 & M & C \\
\midrule
PDVC \cite{wang2021end} & \multirow{2}{*}{val} & 31.47 & 23.76 & 6.30 & 12.49 & 68.18 \\
ours & & \textbf{48.80} & \textbf{29.28} & \textbf{14.24} & \textbf{22.01} & \textbf{137.23} \\
\hline
PDVC \cite{wang2021end} & \multirow{2}{*}{test} & 32.23 & 24.82 & 5.72 & 12.25 & 65.54 \\
ours & & \textbf{48.20} & \textbf{28.04} & \textbf{13.91} & \textbf{21.56} & \textbf{135.45} \\
\bottomrule
\end{tabular}
\caption{Evaluation results on the validation and test sets compared with the baseline}
\label{tab:base}
\end{table}
\begin{table}
\begin{tabular}{ccccccc}
\toprule
Feature & Fusion & P & R & B4 & M & C \\
\midrule
i3d & - & 43.05 & 30.05 & 10.20 & 17.89 & 111.23 \\
swin & - & 43.34 & 29.51 & 10.98 & 15.78 & 108.76 \\
i3d+swin & early & 47.70 & \textbf{32.40} & 13.25 & 20.25 & 122.97\\
i3d+swin & late & \textbf{48.26} & 32.12 & \textbf{13.75} & \textbf{20.55} & \textbf{130.14} \\
\bottomrule
\end{tabular}
\caption{Ablation study: feature fusion}
\label{tab:features}
\end{table}
\begin{table}
\begin{tabular}{ccccccc}
\toprule
\makecell[c]{Concept \\ detector} & \makecell[c]{Classification \\ head} & \makebox[0.015\textwidth][c]{P} & \makebox[0.015\textwidth][c]{R} & \makebox[0.015\textwidth][c]{B4} & \makebox[0.015\textwidth][c]{M} & \makebox[0.015\textwidth][c]{C} \\
\midrule
- & - & 48.26 & 32.12 & 13.75 & 20.55 & 130.14 \\
\checkmark & - & 47.71 & \textbf{32.62} & 14.10 & 21.64 & 132.50 \\
- & \checkmark & 45.13 & 27.08 & 13.75 & 21.01 & 128.51 \\
\checkmark & \checkmark & \textbf{48.80} & 29.28 & \textbf{14.24} & \textbf{22.01} & \textbf{137.23}\\
\bottomrule
\end{tabular}
\caption{Ablation study: semantic assistance}
\label{tab:semantic}
\end{table}
\begin{table}
\begin{tabular}{ccccccc}
\toprule
\makecell[c]{Max event \\ number} & \makebox[0.01\textwidth][c]{Data split} & \makebox[0.01\textwidth][c]{P} & \makebox[0.01\textwidth][c]{R} & \makebox[0.01\textwidth][c]{B4} & \makebox[0.01\textwidth][c]{M} & \makebox[0.01\textwidth][c]{C} \\
\midrule
10 & all & 48.80 & \textbf{29.28} & 14.24 & 22.01 & 137.23 \\
7 & all & 48.50 & 23.57 & 14.18 & 22.71 & 144.80 \\
5 & all & \textbf{49.51} & 20.50 & \textbf{14.85} & 23.69 & \textbf{157.57} \\
3 & all & 48.16 & 14.50 & 13.47 & \textbf{24.21} & 151.67 \\
\hline
3 & num>3 & 51.24 & 12.28 & 13.64 & 24.77 & 165.76 \\
3 & num<=3 & 40.12 & 13.30 & 12.41 & 20.65 & 61.92 \\
\bottomrule
\end{tabular}
\caption{Ablation study: different max event number}
\label{tab:counter}
\end{table}
\subsection{Comparison with baseline}
Table \ref{tab:base} shows the DVC metrics on the validation and test sets. Our method achieves a 55.07\%/23.23\%/126.03\%/76.22\%/101.28\% relative gain on the validation set and a 49.55\%/12.97\%/143.18\%/76.00\%/106.67\% relative gain on the test set under the metrics of P/R/B4/M/C compared with the baseline model.
\subsection{Ablation study}
\subsubsection{Feature fusion}
We evaluate the effectiveness of using multi-modal features on the validation set. We also tried early feature fusion, where, instead of fusing the multi-scale features, the features are fused before the temporal convolution network. As shown in Table \ref{tab:features}, using multi-modal features helps to improve all five DVC metrics in comparison with using features of a single modality. Compared with early fusion, the late fusion method has higher precision and captioning scores but slightly lower recall. The results demonstrate that: 1) Using multi-modal features helps to improve model performance. 2) Details can be better captured by applying late fusion on multi-scale features.
\subsubsection{Semantic assistance}
We evaluate the effectiveness of the two semantic assistance modules on the validation set. Table \ref{tab:semantic} shows that: 1) Adding the concept detector increases recall and captioning scores; 2) The classification sub-task cannot bring a performance gain alone; 3) Better precision and captioning scores can be obtained by applying the concept detector and classification head together.
\subsubsection{Expected max event number}
We try different settings of the expected max event number, which is the upper bound of the event counter output $K$. Table \ref{tab:counter} shows that as the max event number decreases, the captioning scores tend to increase while recall decreases. We also split the validation set into 2 parts by event number. When the max event number is set to 3, the model attains higher precision and captioning scores on videos containing more than 3 events, and much lower scores on videos with no more than 3 events. These results can be explained by the trade-off between precision and recall. Since BLEU4/METEOR/CIDEr are only computed on events tIoU-matched with the ground truths, the captioning scores are positively correlated with the precision score.
\section{Conclusion}
In this paper, we present a semantic-assisted dense video captioning model with multi-modal feature fusion. The concept detector extracts a semantic feature that is fused with the other multi-modal visual features. The classification sub-task provides semantic supervision. Experiments show that our method achieves significant improvement on DVC tasks.
\bibliographystyle{ACM-Reference-Format}
\section{\label{sec:level1}INTRODUCTION}
Over the past few decades, domain wall dynamics has attracted much attention because of fundamental interest and potential applications in future memory and logic devices \cite{DW1,DW2,DW3}. Domain walls are transition regions between two differently oriented magnetic domains. The most traditional way to drive domain wall motion is to apply an external field along one of the domains \cite{Field1,Field2,Field3,Field4}. Later, spin-polarized electric currents \cite{Current1,Current2,Current3,Current4}, temperature gradients \cite{TG1,TG2,TG3,TG4}, and spin waves \cite{SW1,SW2,SW3} were introduced as control knobs of domain wall dynamics. Although field-driven domain wall dynamics has been studied for the longest time, the understanding is still incomplete due to the nonlinear nature of magnetization dynamics. On one hand, most analytical models are one-dimensional (1D) models that are only applicable to very thin and narrow lines or strips, or to very large bulk systems which are uniform in the other two dimensions \cite{Field1,Tatara04,Shibata2011}. For generic systems, especially magnetic strips whose width is much larger than the domain wall width, the 1D model fails due to the variation in the width direction \cite{Yuan2014,Laurson20151,Laurson20152}. On the other hand, most studies focus on in-plane head-to-head (or tail-to-tail) domain walls \cite{SW1,TG4} or perpendicular magnetic anisotropy (PMA) domain walls \cite{Miron2011, Emori2013, Laurson20191}. This is mainly because these two types of domain walls are energetically preferable in magnetic nanostrips, which are the platform of domain wall racetrack memory \cite{DW2}. For soft magnetic materials, the domains align along the strip due to shape anisotropy, forming head-to-head (or tail-to-tail, HtH/TtT for short) domain walls. For materials with strong PMA, the domains are oriented out-of-plane, and the domain walls between them are PMA domain walls. However, there is another type of 180\textdegree\ domain wall which is usually ignored, i.e. the ``side-by-side'' domain wall \cite{FE2020}, as schematically depicted in Fig. \ref{fig1}. The magnetization directions of the two domains are in-plane and perpendicular to the strip. This type of domain wall may exist in wide strips made of materials with in-plane magnetocrystalline anisotropy, such as cobalt or permalloy grown on some designed substrates \cite{Cobalt2000,Py2020}.

There have been many studies on the field-driven dynamics and internal structures of head-to-head (tail-to-tail) domain walls and PMA domain walls \cite{Yuan2014,Laurson20151,Laurson20152}. It is well known that in a biaxial system with an easy anisotropy axis and a hard anisotropy axis, the domain wall moves under a longitudinal magnetic field in a rigid-body manner and the velocity increases with the field strength. Beyond a critical field, the rigid-body motion breaks down, and the velocity drops with increasing field. This behavior is called ``Walker breakdown'' and the critical field is called the ``Walker breakdown field''. For thin strips, the breakdown occurs when the domain wall rotates around the external field \cite{Field1}. For wide and thick strips, the dynamics is much more complicated. It has been shown that HtH/TtT or PMA domain walls undergo periodic transformations through the generation and annihilation of vortices \cite{Yuan2014,Laurson20151} or Bloch lines \cite{Laurson20152,Ono2016}.
In this paper, we theoretically investigate the field-driven dynamics of the third type of domain wall, i.e. the in-plane side-by-side domain wall \cite{Filippov2004}. By micromagnetic simulations, we find a normal Walker breakdown in thin strips, which compares well with the 1D analytical model, and a multi-step Walker breakdown behavior in wide strips. The multi-step breakdown occurs due to the generation of vortices and the generation and annihilation of vortex-antivortex pairs. Furthermore, we study the influence of the Dzyaloshinskii-Moriya interaction (DMI) on the domain wall dynamics. For bulk DMI (BDMI), the Walker breakdown field increases with the BDMI strength. For interfacial DMI (IDMI), the Walker breakdown field first decreases and then increases with increasing IDMI strength. These behaviors are different from those of HtH/TtT domain walls \cite{Zhuo2016} and PMA domain walls \cite{Thiaville2012} due to the different domain wall geometry. Our findings fill the gap in side-by-side domain wall dynamics and complement the understanding of 180\textdegree\ domain wall dynamics.

\section{MODEL AND STATIC SIDE-BY-SIDE DOMAIN WALL}
We consider a wide ferromagnetic strip along the $x$ direction with saturation magnetization $M_s$ and an easy axis along the $y$ direction of strength $K_u$, as shown in Fig. \ref{fig1}. The length, width and thickness of the strip are $l$, $w$ and $d$, respectively. For convenience, we define a polar coordinate system with respect to the $y$ axis.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{Fig1.pdf}
\caption{Schematic illustration of an in-plane side-by-side magnetic domain wall. In the absence of DMI, the static domain wall is of N\'{e}el type, in which all the spins are in-plane due to the shape anisotropy. The external field $\mathbf{B}$ is applied along the $y$ direction. Inset: comparison of the $m_y$ and $m_x$ components at the edge and the centerline of the strip.}
\label{fig1}
\end{figure}
The magnetization dynamics is governed by the Landau-Lifshitz-Gilbert (LLG) equation \cite{LLGGilbert},
\begin{equation}
\frac{\partial \mathbf{m}}{\partial t}=-\gamma\mathbf{m}\times\mathbf{B}_{\mathrm{eff}} +\alpha\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t},
\label{LLGeq}
\end{equation}
where $\mathbf{m}$ is the unit vector of the magnetization direction, $\gamma$ is the gyromagnetic ratio, and $\alpha$ is the Gilbert damping. The effective field $\mathbf{B}_\text{eff}$ is given by the variation of the total energy density $E$, $\mathbf{B}_\text{eff}= -\frac{1}{M_s}\frac{\delta E}{\delta \mathbf{m}}$. We do not consider the DMI at this stage. The total energy density $E$ is
\begin{equation}
E=A|\nabla \mathbf{m}|^2-K_u m_y^2-M_sBm_y+E_d,
\label{totenergy}
\end{equation}
where $A\left|\nabla \mathbf{m}\right|^2$ is the exchange energy density with exchange constant $A$, $-K_u m_y^2$ is the easy-axis anisotropy energy density, $-M_sBm_y$ is the Zeeman energy density with $B$ the field strength along the $y$ direction, and $E_d$ is the demagnetization energy density.

As a demonstration of the concept, we consider ferromagnetic strips of thickness $d = 3$ nm with widths $w$ ranging from 48 to 2100 nm, and use a moving simulation window of length $l = 3072$ nm centered around the domain wall. The material parameters are $K_u = 2\times10^5\ \rm J/m^3$, $A =10^{-11}$ J/m, $\alpha=0.1$, $M_s=3\times10^5$ A/m. The exchange length is $L_\text{ex}=\sqrt{A/(\mu_0M_s^2)}\approx13.3$ nm.
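As an illustration of Eq. \eqref{LLGeq}, a minimal macrospin integration (a single spin with only the easy-axis anisotropy and the applied field in the effective field, i.e. no exchange or demagnetization) might look as follows in Python. This is a toy sketch of the dynamics, not the micromagnetic solver used below; the time step and the field value are illustrative.
\begin{verbatim}
import numpy as np

gamma = 1.76e11          # gyromagnetic ratio (rad s^-1 T^-1)
alpha = 0.1              # Gilbert damping
Ms, Ku, B = 3e5, 2e5, 0.015   # A/m, J/m^3, T (parameters of the text)

def b_eff(m):
    # B_eff = -(1/Ms) dE/dm for E = -Ku*m_y^2 - Ms*B*m_y
    return np.array([0.0, 2.0 * Ku * m[1] / Ms + B, 0.0])

def llg_rhs(m):
    # Landau-Lifshitz form, mathematically equivalent to Eq. (1)
    b = b_eff(m)
    mxb = np.cross(m, b)
    return -gamma / (1.0 + alpha**2) * (mxb + alpha * np.cross(m, mxb))

def heun_step(m, dt=1e-14):
    # Predictor-corrector step followed by renormalization |m| = 1
    k1 = llg_rhs(m)
    k2 = llg_rhs(m + dt * k1)
    m = m + 0.5 * dt * (k1 + k2)
    return m / np.linalg.norm(m)

m = np.array([1.0, 0.1, 0.0])
m /= np.linalg.norm(m)
for _ in range(100000):
    m = heun_step(m)     # m precesses and relaxes toward +y
\end{verbatim}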
In our numerical simulations, we use the Mumax3 \cite{mumax} package to numerically solve the LLG equation \eqref{LLGeq} with a mesh size of 3 nm $\times$ 3 nm $\times$ 3 nm, which is smaller than the exchange length. Due to the thin-film shape anisotropy, the domain wall center should be in-plane, resulting in a N\'{e}el wall. This can be easily verified by numerical simulation. We set up an $\mathbf{m}\parallel +y$ ($-y$) domain at the left (right) half, and a region $5L_\text{ex}$ wide with random magnetization in between the two domains as the initial state. After relaxation, the domain wall is of N\'{e}el type, as shown in Fig. \ref{fig1}. For $w=48$ nm, the three components $m_{x,y,z}$ along the strip edge ($y=-w/2$) and the centerline ($y=0$) are plotted in the inset (the origin is set at the center of the strip). It can be observed that the magnetization is almost uniform in the transverse ($y$) direction, which is different from the strawberry-shape head-to-head walls \cite{Yuan2014} but similar to the PMA domain walls \cite{Emori2013}. The domain wall center can point along either the $+x$ or $-x$ direction with degenerate energy in the absence of DMI. The domain wall profile can be well fitted by the Walker solution \cite{Field1},
\begin{equation}
\theta(x) = 2\arctan (e^{\frac{x-X}{\Delta}}),\quad \phi(x)=\frac{\pi}{2},
\end{equation}
where $X$ is the domain wall center position and $\Delta$ is the domain wall width. Here, $X$ is set to 0 and $\Delta=6.86$ nm from the fitting. In Cartesian coordinates, the solution is $m_y=-\tanh \frac{x-X}{\Delta}$, $m_x=\mathrm{sech} \frac{x-X}{\Delta}$, and $m_z=0$, as shown by the solid lines in the inset of Fig. \ref{fig1}, in good agreement with the numerical data points.

The domain wall width $\Delta$ is related to $A$ and the effective easy-axis ($y$ axis) anisotropy $K_y$ by $\Delta=\sqrt{\frac{A}{K_y}}$, where $K_y$ includes the magnetocrystalline anisotropy $K_u$ and the shape anisotropy. The shape anisotropy is an approximation of the demagnetization effects which only considers the homogeneous part of the demagnetization fields and ignores the inhomogeneous part. The effective anisotropy coefficients along the $y$ and $z$ axes are $-\frac{N_y-N_x}{2}\mu_0M_s^2$ and $-\frac{N_z-N_x}{2}\mu_0M_s^2$, respectively, where $N_{x,y,z}$ are demagnetization factors related to the geometry \cite{demagfactor}. In the N\'{e}el wall configuration, there are finite bulk magnetic charges $\rho=-M_s\nabla\cdot\mathbf{m}$ at the two sides of the domain wall, as schematically labeled in Fig. \ref{fig1}. Thus, the shape anisotropy is approximately that of a prism with dimensions $\Delta$, $w$, and $d$ \cite{demagfactor,Mougin2007}. The total effective easy-axis anisotropy is $K_y=K_u-[N_y(\Delta,w,d)-N_x(\Delta,w,d)]\mu_0M_s^2/2$, and the total effective hard-axis anisotropy is $K_z=-\frac{N_z-N_x}{2}\mu_0M_s^2$, which keeps the static domain wall in-plane. The domain wall width $\Delta$ satisfies
\begin{equation}
\Delta=\sqrt{\frac{A}{K_u-[N_y(\Delta,w,d)-N_x(\Delta,w,d)]\mu_0M_s^2/2}},
\end{equation}
which can be solved self-consistently for $\Delta$. With our parameters, we have $\Delta=6.82$ nm, which is quite close to the numerically fitted value of 6.86 nm. The difference of demagnetization factors $N_x-N_y$ increases with the strip width $w$. Thus, $K_y$ becomes larger and $\Delta$ becomes smaller for wider strips, although the change is insignificant because $K_u$ dominates $K_y$.
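The self-consistent equation for $\Delta$ can be solved by a simple fixed-point iteration. The sketch below assumes a helper \texttt{demag\_factors(a, b, c)} returning $(N_x, N_y, N_z)$ for a rectangular prism with dimensions $a\times b\times c$ (for example, implementing the analytical expressions of Ref. \cite{demagfactor}); that helper is hypothetical and not shown here.
\begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi
A, Ku, Ms = 1e-11, 2e5, 3e5     # J/m, J/m^3, A/m (values of the text)
w, d = 48e-9, 3e-9              # strip width and thickness (m)

def solve_delta(demag_factors, tol=1e-15, max_iter=100):
    delta = np.sqrt(A / Ku)     # initial guess without shape anisotropy
    for _ in range(max_iter):
        # demag_factors is an assumed helper for a prism delta x w x d
        Nx, Ny, Nz = demag_factors(delta, w, d)
        Ky = Ku - 0.5 * (Ny - Nx) * mu0 * Ms**2
        new = np.sqrt(A / Ky)
        if abs(new - delta) < tol:
            break
        delta = new
    return delta
\end{verbatim}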
\section{\label{MTWB}MAGNETIC FIELD DRIVEN DOMAIN WALL DYNAMICS AND MULTI-STEP WALKER BREAKDOWN}
We then apply an external field $\mathbf{B}$ along the $y$ direction to investigate the field-driven dynamics of the side-by-side wall. Figure \ref{fig2}(a) shows the domain wall speed $v$ versus the applied field for different strip widths $w$. For thin strips ($w=24$ nm and 48 nm in the figure), the speed shows a Walker-like behavior \cite{Field1}: below the Walker breakdown field, the domain wall propagates like a rigid body; above the Walker breakdown field, the domain wall rotates and oscillates. The average speed in the oscillation regime can be obtained using a well-known 1D collective-coordinate model (CCM) \cite{Shibata2011,Thiaville2006}. Because of the non-uniform out-of-plane component, the shape anisotropy is no longer a satisfactory approximation. By fitting the $w=48$ nm curve with the Walker formula and the collective coordinate results \cite{Thiaville2006}, we obtain the effective easy-axis anisotropy $K_y=2.09\times10^5$ J/m$^3$ and hard-axis anisotropy $K_z=4.337\times 10^4$ J/m$^3$, and the fitted curve shows good agreement with the numerical data. The inset shows the time evolution of the average domain wall azimuthal angle $\Phi=\arctan {\frac{\langle m_x \rangle}{\langle m_z\rangle}}$ for $B=15$ mT (marked by the arrow in the main figure). The good agreement with the 1D collective coordinate model means that the side-by-side domain wall dynamics is still quasi-1D at strip width $w=48$ nm.
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth]{Fig2.pdf}
\caption{Domain wall velocity $v$ versus applied field $B$ for different strip widths $w$. The black solid line is the result of the collective coordinate model with fitting parameters $K_y=2.09\times10^5$ J/m$^3$ and $K_z=4.337\times 10^4$ J/m$^3$ for $w=48$ nm. The inset shows the time evolution of the azimuthal angle $\Phi$ of the domain wall plane for $B=15$ mT. The numerical data (thick blue symbols) compare well with the collective coordinate model results (red dashed line).}
\label{fig2}
\end{figure}
As the strip becomes wider, the dynamics in the transverse direction becomes more and more inhomogeneous. The way that the domain wall chirality periodically flips changes gradually from coherent rotation to vortex generation, and at the same time the velocity drop after the Walker breakdown becomes sharper, similar to the well-studied PMA domain walls and HtH/TtT domain walls \cite{Yuan2014,Laurson20152,Laurson20191}. The reason is that the domain wall propagation velocity is proportional to the dissipation rate of the Zeeman energy $E_\text{Zee}$, $\frac{d E_\text{Zee}}{dt}\propto M_sBv$ \cite{WXR2009}. When the domain wall moves rigidly, the Zeeman energy dissipation is the only channel of energy dissipation. Beyond the Walker breakdown, when the internal dynamics of the domain wall becomes more complex, the Zeeman energy can be temporarily stored in the tilting or deformation of the domain wall. So the average Zeeman energy dissipation rate becomes lower, resulting in a drop in velocity. However, different from the PMA domain walls \cite{Laurson20191}, the Walker breakdown field does not change much with width. That is because, although the dynamics is not quasi-1D, the Walker breakdown field $B_w$ is still close to the 1D-model value $B_w=\frac{\alpha K_z}{M_s}$. In the side-by-side configuration, the hard axis is dominated by the shape anisotropy $-\frac{N_z(\Delta,w,d)-N_x(\Delta,w,d)}{2}\mu_0M_s^2$.
Changing the strip width $w$ (the dimension along $y$) does not affect this value much.
\begin{figure*}[!htp]
\includegraphics[width=0.96\textwidth]{Fig3.pdf}
\caption{Upper panel: The average domain wall position $\langle X\rangle$ against time for $w=1536$ nm and (a) $B=13.8$ mT, (b) $B=15.3$ mT. Lower panel: The snapshots of the magnetization texture near the domain wall corresponding to the moments marked by red dots in the upper panel. The in-plane angle of the magnetization is encoded in the color ring shown in the inset. The winding numbers are labelled by circles or semicircles for vortices and edge defects. The moving directions of the vortices are indicated by arrows. Movies for the domain wall dynamics are shown in the Supplemental Material \cite{SM}. (c) Close-up textures of the (anti)vortices labelled by frames of different colors in (b). The corresponding spin configuration is schematically illustrated at the right-hand side of each texture.}
\label{fig3}
\end{figure*}
We can find more interesting and sophisticated behaviors by observing the details of the dynamics. Figure \ref{fig2}(b) shows the domain wall speed at $B=14.6$ mT (just beyond the Walker breakdown field) for different strip widths $w$. There are roughly three regions, separated by the red and blue dashed lines. For strips no wider than 48 nm (at the left-hand side of the red dashed line), the 1D CCM is approximately valid. Thus, according to the 1D CCM and the effective anisotropy discussed above, the wider the strip, the larger the Walker breakdown field. So the domain wall velocity increases with the width for a fixed field just above the breakdown field. No vortex is generated, and the domain wall motion is periodic due to the almost synchronized rotation of the domain wall center. The inset of Fig. \ref{fig2}(b) shows how the average domain wall position $\langle X\rangle$, calculated from $\langle X\rangle=\langle m_x\rangle L/2$ \cite{DW2004,mumax}, moves with time. The black line is for $w=48$ nm. In each period, there is a long fast-moving stage in which the velocity equals the maximum rigid-body velocity at the Walker breakdown. At this stage, the domain wall nearly maintains a rigid-body motion with $\Phi\approx \pi/4$, where $\Phi$ is the azimuthal angle of the domain wall plane (see Appendix \ref{AA}). There is also a short slow-down stage due to the rotation of the domain wall center. $\Phi$ quickly rotates from $\Phi\approx \pi/4$ to $\Phi\approx -\pi/4$ at this stage, and the domain wall moves fast again in a new period.

For widths between the red dashed line and the blue dashed line, there is clear vortex generation and propagation, similar to that observed in HtH/TtT \cite{Yuan2014,Laurson20152} and PMA domain walls \cite{Laurson20191}. As shown in the inset of Fig. \ref{fig2}(b) by the red line, starting from a N\'{e}el wall pointing to $+x$ as shown in Fig. \ref{fig1}, the domain wall still tilts out of plane, shrinks, accelerates, and moves fast with an almost uniform $\Phi$ angle at the beginning. But soon, a vortex whose center points to $-z$ (polarity $-1$) appears at the bottom edge, and the domain wall decelerates. The reason why the vortex appears at the bottom edge is that the dipolar fields stabilize (destabilize) the $+x$ domain wall center at the top (bottom) edge. So for a domain wall center pointing to $-x$, the vortex is generated at the top edge. The Zeeman energy is converted into the energy of the vortex, and at the same time the longitudinal motion slows down and wanders around.
After the vortex annihilates at the other side, the domain wall moves fast again in the absence of vortices. After a while, a vortex (of opposite polarity) is generated at the top edge, and the motion slows down again. For widths larger than the value indicated by the blue dashed line, there is also vortex generation, and the starting stage is similar to the previous case. However, after the vortex annihilates, a new vortex \textit{immediately} appears at almost the same place. There is no fast-moving stage. Thus, the average speed is significantly reduced, as shown in the inset of Fig. \ref{fig2}(b) by the blue line for $w=288$ nm.

This behavior can also be observed in wider strips, but the domain wall acceleration and deceleration become insignificant. This is because the energy cost of the vortex generation does not scale with the width, since it is a local process, while the Zeeman energy is proportional to the width. So the impact of vortex generation on the rate of Zeeman energy change becomes weaker. Figure \ref{fig3}(a) shows the magnetization snapshots of a $w=1536$ nm strip under $B=13.8$ mT together with the average domain wall position $\langle X\rangle$ calculated from $\langle X\rangle=\langle m_x\rangle L/2$ \cite{DW2004,mumax}. The above-mentioned single vortex generation (annihilation) processes associated with polarity flipping at the strip edges can be clearly seen. For clarity of narration, we call such processes ``single-vortex processes''.

For strips wider than 300 nm, a multi-step Walker breakdown can be observed, similar to that in PMA strips \cite{Laurson20191} [see data for $w=768$, 1536, and 2100 nm in Fig. \ref{fig2}(a)]. When the applied field is further increased after the first breakdown, a second breakdown occurs with a velocity drop. To see what happens, we plot in Fig. \ref{fig3}(b) the dynamics of the 1536 nm wide strip under $B=15.3$ mT, just beyond the second breakdown, together with several typical snapshots of the magnetic texture near the domain wall. During the first $\sim 10$ ns, the domain wall still undergoes a single-vortex process as described above (see the 10 ns snapshot). However, after 10 ns, a vortex-antivortex pair of polarity $-1$ emerges inside the domain wall (see the 11 ns snapshot). The vortex (antivortex) has a winding number of $+1$ ($-1$) \cite{Oleg2005}. The winding number of a vortex is also called ``vorticity''. Due to the opposite vorticity and same polarity, the vortex and the antivortex have opposite gyrovectors, so they move along $+y$ and $-y$, respectively, according to the Thiele equation \cite{Thiele,Yuan2015}. Also, due to the finite gyrovector, the longitudinal speed of the vortex (antivortex) is smaller than that of the transverse domain wall, so the domain wall is slowed down. Then the vortex hits the top edge, reverses its polarity and moves down. The antivortex annihilates with the other vortex coming up from the bottom edge, and a vortex-antivortex pair of polarity $+1$ emerges at the same place (see the 12 ns and 12.5 ns snapshots). The vortex (antivortex) moves along $-y$ ($+y$), and then hits the bottom edge (annihilates with another vortex), finishing a period. The vortex-antivortex generation and annihilation occur in the interior of the domain wall, and we call them ``two-vortex processes''. Figure \ref{fig3}(c) shows the zoom-in textures near the (anti)vortices labelled by frames of different colors in (b).
To see the vorticity and the polarity more clearly, the spin configuration of each texture is schematically illustrated. The red, blue, yellow, and black frames enclose an antivortex of polarity $-1$, a vortex of polarity $-1$, a vortex of polarity $+1$, and an antivortex of polarity $+1$, respectively. Notice that the two-vortex processes in the interior and the single-vortex processes at the edges are asynchronous. As time goes on, the vortex-antivortex pairs may appear at different positions inside the domain wall, and the dynamics may become more and more complicated. We also label the winding numbers of all the vortices ($+1$ for vortices and $-1$ for antivortices) and edge defects. No matter how complicated the domain wall transformation is, the total winding number remains zero. When $B$ further increases, there can be more pairs of vortices and antivortices: for larger applied fields, more two-vortex processes occur at the same time, resulting in further breakdowns. Note that, different from the PMA domain walls \cite{Laurson20191,tilting1, tilting2, tilting3} and HtH/TtT domain walls \cite{Yuan2014}, there is no global, directional in-plane tilting of the domain wall centerline {(the domain wall centerline is the contour line of $m_y=0$, i.e., the domain wall center)}. Of course, transient, local tilting near the vortices is ubiquitous, as shown in Fig. \ref{fig3}(a) and (b). This is because in the side-by-side geometry, there are no magnetic charges at the two {ends of the domain wall} like those in PMA and HtH/TtT domain walls, so the domain wall width $\Delta$ is almost constant along the $y$ direction. For the snapshots shown in Fig. \ref{fig3}(a) and (b), the difference between the smallest and largest $\Delta$ is less than $3\%$. \section{Influence of Dzyaloshinskii-Moriya interaction} As an antisymmetric exchange interaction, the DMI has been demonstrated to have a significant influence on HtH/TtT \cite{Zhuo2016} and PMA \cite{Thiaville2012,Emori2013} domain wall dynamics. The two most widely studied types of DMI are the interfacial DMI (IDMI) and the bulk DMI (BDMI). The interfacial DMI exists in inversion-symmetry-breaking systems. The DMI vector direction $\hat{\mathbf{d}}_{12}$ from spin 1 to spin 2 is parallel to $\mathbf{r}_{12}\times \hat{\mathbf{z}}$, where $\mathbf{r}_{12}$ is the spatial vector from 1 to 2 and $\hat{\mathbf{z}}$ is the inversion symmetry breaking direction \cite{Thiaville2012}. The bulk DMI exists in noncentrosymmetric systems, where $\hat{\mathbf{d}}_{12}$ is parallel to $\mathbf{r}_{12}$ \cite{Nagaosa2013}. The DMI vector directions are schematically illustrated in Fig. \ref{fig4}. For static domain wall configurations, it is enough to use the simplest three-spin model to decide the energetically preferred configuration in quasi-1D. Figure \ref{fig4} summarizes the influence of the two types of DMI on different types of domain walls. The side-by-side walls are different from the HtH/TtT walls and PMA walls. The IDMI does not break the degeneracy of the N\'{e}el type (domain wall center in plane) and the Bloch type (domain wall center out of plane). The BDMI prefers the Bloch type, which competes with the shape anisotropy. In the continuous model, the energy densities of the IDMI and BDMI are, respectively, \begin{gather} E_\mathbf{IDMI}=D_i\left[m_z\nabla\cdot\mathbf{m}-(\mathbf{m}\cdot\nabla)m_z\right],\\ E_\mathbf{BDMI}=D_b \mathbf{m}\cdot(\nabla\times \mathbf{m}), \end{gather} where $D_i$ and $D_b$ are the IDMI and BDMI strengths in units of $\text{J/m}^2$.
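As a concrete illustration of these two expressions, the following minimal sketch (ours, for illustration only; it is not the micromagnetic code used in this work, and all names are assumptions) evaluates the pointwise IDMI and BDMI energy densities on a discretized thin-film magnetization $\mathbf{m}(x,y)$ with central finite differences, taking $\partial/\partial z=0$ for a film that is uniform across its thickness:
\begin{verbatim}
import numpy as np

def dmi_energy_densities(mx, my, mz, dx, dy, Di, Db):
    """Pointwise IDMI and BDMI energy densities (J/m^3) on a 2D grid.

    mx, my, mz: unit-magnetization components, shape (Ny, Nx);
    axis 0 -> y, axis 1 -> x. Di, Db in J/m^2; dx, dy in m.
    """
    dmx_dx = np.gradient(mx, dx, axis=1)
    dmy_dy = np.gradient(my, dy, axis=0)
    dmy_dx = np.gradient(my, dx, axis=1)
    dmx_dy = np.gradient(mx, dy, axis=0)
    dmz_dx = np.gradient(mz, dx, axis=1)
    dmz_dy = np.gradient(mz, dy, axis=0)

    # E_IDMI = D_i [ m_z (div m) - (m . grad) m_z ], with d/dz = 0
    e_idmi = Di * (mz * (dmx_dx + dmy_dy) - (mx * dmz_dx + my * dmz_dy))

    # E_BDMI = D_b m . (curl m); with d/dz = 0:
    # curl m = (dmz/dy, -dmz/dx, dmy/dx - dmx/dy)
    e_bdmi = Db * (mx * dmz_dy - my * dmz_dx + mz * (dmy_dx - dmx_dy))
    return e_idmi, e_bdmi
\end{verbatim}
Summing either density over the cells (times the cell volume) gives the corresponding total DMI energy entering the energetics discussed below.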
Applying the quasi-1D $X-\Phi$ collective coordinate model \eqref{ccm}, we find the total energy $\mathcal{E}$ of the side-by-side domain wall in the presence of IDMI or BDMI (see Appendix), \begin{gather} \mathcal{E}_\mathbf{IDMI}=4dw\sqrt{A\left(K_y+K_z\cos^2{\Phi}\right)}, \label{IDMI1D}\\ \mathcal{E}_\mathbf{BDMI}=dw\left[4\sqrt{A\left(K_y+K_z\cos^2{\Phi}\right)}-\pi D_b \cos\Phi\right]. \end{gather} We first focus on the influence of the BDMI. For the static domain wall configuration, minimizing $\mathcal{E}_\mathbf{BDMI}$ with respect to $\Phi$, we find \begin{equation} \cos \Phi=\left\{ \begin{array}{ll} \pi D_b \sqrt{\frac{K_y}{K_z(16AK_z-\pi^2D_b^2)}}& |D_b|<D_c\\ \mathrm{sign}(D_b) & |D_b|\geq D_c \end{array}\right. \label{CCMBDMI} \end{equation} where $D_c=\frac{4K_z}{\pi}\sqrt{\frac{A}{K_y+K_z}}$. For $w=48$ nm, with $K_y=2.09\times10^5$ J/m$^3$ and $K_z=4.337\times 10^4$ J/m$^3$ obtained in the previous section, we have $D_c= 0.348$ $\text{mJ/m}^2$. From $D_b=0$ to $D_b=D_c$, the static domain wall gradually rotates from the N\'{e}el type ($\Phi=\pm\pi/2$) to the Bloch type ($\Phi=0$ for positive $D_b$ and $\Phi=\pi$ for negative $D_b$). Figure \ref{fig5}(a) shows the quasi-1D result of Eq. \eqref{CCMBDMI} together with the numerical results, with reasonably good agreement. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Fig4.pdf} \caption{Schematic diagrams of interfacial and bulk DMI and summary of energetically preferred static domain wall configurations. The check mark (cross mark) means the configuration is preferable (not preferable). The circle means that the DMI has no influence on the static domain wall configuration. } \label{fig4} \end{figure} Then we investigate the field-driven dynamics of side-by-side domain walls in the presence of BDMI. We have discussed that the BDMI tends to lock the domain wall in the Bloch type [$D_b>0$ ($<0$) for the domain wall center pointing to $+z$ ($-z$)]. Thus, when an external field is applied, the domain wall rotation is suppressed so that the Walker breakdown is postponed. Figure \ref{fig5}(b) shows how the average domain wall velocity changes with the external field for the 48 nm strip. Typical Walker breakdown behaviors are observed. The breakdown field $B_W$ increases with $D_b$. The breakdown fields $B_W$ for different $D_b$ are plotted in the inset. The black symbols are the numerical results extracted from the main figure, and the red line is the result of the 1D CCM (see Appendix). The numerical results agree qualitatively with the 1D CCM, but the $B_W$ values are smaller, mainly due to the 2D nature and the complicated demagnetization field in the numerical model. Both the CCM and the numerical data show that the domain wall velocity is symmetric for positive and negative $D_b$. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Fig5.pdf} \caption{(a) Influence of BDMI strength $D_b$ on domain wall azimuthal angle $\Phi$. The symbols are numerical data and the solid line is the collective coordinate model result. (b) The simulation results of field-driven domain wall velocity for $w=48$ nm and different $D_b$. The inset shows the Walker breakdown field $B_W$ versus $D_b$. The symbols are numerical data and the solid line is the collective coordinate model result.} \label{fig5} \end{figure} We now consider the IDMI. According to Eq. \eqref{IDMI1D}, the IDMI does not affect the static domain wall configuration in the 1D model. For a 2D strip of $w=48$ nm, this is still true for weak IDMI such as $D_i=0.5$ $\text{mJ/m}^2$.
However, the numerical relaxation shows that the static domain wall centerline is tilted in-plane, and the magnetization is tilted out-of-plane, as shown in Fig. \ref{fig6}(a) for $D_i=1$ $\text{mJ/m}^2$. The tilting direction of the domain wall centerline is correlated with the tilting direction of the magnetization. For $D_i>0$, when the domain wall centerline lies in the first and third (second and fourth) quadrants, the domain wall magnetization is tilted to $-z$ ($+z$). The two tilting directions occur with equal probability for different random initial states. For some initial states, it is also possible to have more complicated domain wall textures, such as that shown in the third panel of Fig. \ref{fig6}(a), where different segments of the domain wall tilt in different directions. Such DMI-induced tilting has been observed in PMA domain walls \cite{tilting1, tilting2, tilting3}. To explain the tilting, we have to introduce a 2D collective coordinate model, allowing $X$ to depend on $y$, $X=X(y,t)$. We assume the tilting is linear, so that the slope $\frac{\partial X}{\partial y}=c$ is constant. The static domain wall energy is, \begin{multline} \mathcal{E}_\mathbf{IDMI}\\=dw\left[2(2+c^2)\sqrt{A\left(K_y+K_z\cos^2{\Phi}\right)}+\pi c D_i \cos\Phi\right]. \label{dwenergyi} \end{multline} The first term in the bracket is related to the balance between the exchange energy and the anisotropy energy. The larger the tilting, the longer the domain wall, so the first term prefers smaller $|c|$. The second term is related to the IDMI. Only when this term is negative can the tilted domain wall possibly be preferred. $\mathcal{E}_\mathbf{IDMI}$ can be minimized with respect to $c$ and $\Phi$. For $D_i>0$, there are two degenerate minima, corresponding to $c>0$, $\Phi>\pi/2$ and $c<0$, $\Phi<\pi/2$, respectively, which is consistent with the numerical results. Figure \ref{fig6}(b) plots the cosine of the domain wall azimuthal angle, $\cos\Phi$ (left axis), and the tilting slope $c$ (right axis), showing the comparison between the numerical data and the CCM. The solid lines and the dashed lines represent the two ways of tilting with the same energy. The numerical data almost fall on either the solid lines or the dashed lines. \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Fig6.pdf} \caption{(a) Static domain wall configuration for $w=48$ nm and $D_i=0.5$ (first row) and 1 $\text{mJ/m}^2$ (second row) numerically relaxed from different random initial states. (b) $\cos\Phi$ of domain wall plane (left axis) and tilting slope $c$ of domain wall centerline (right axis). The symbols are numerical data and the lines are 2D CCM results. The solid lines and dashed lines are two possible combinations of $\cos\Phi$ and $c$. (c) Field-driven domain wall velocity for different $D_i$. Inset: Walker breakdown field $B_W$ versus $D_i$. Symbols are numerical data and the solid line comes from the 2D CCM.} \label{fig6} \end{figure} When applying an external field, the side-by-side domain wall dynamics shows more interesting behaviors in the presence of IDMI. Figure \ref{fig6}(c) exhibits the domain wall velocity versus the applied field for different $D_i$. The Walker breakdown field $B_W$ first decreases, then increases with $D_i$, which is different from the HtH/TtT and PMA domain walls. This phenomenon can also be understood using the 2D CCM. For small $D_i$, the straight N\'{e}el domain wall is still the ground state, and the tilted domain wall only has slightly higher energy. Their energy difference plays the role of a low energy barrier.
When $D_i$ increases, the energy of the tilted domain wall becomes lower, so that the domain wall flips more easily between the two N\'{e}el configurations, leading to a lower $B_W$. Beyond a certain value of $D_i$, the tilted domain wall becomes the ground state, and the energy barrier becomes higher when further increasing $D_i$. Thus, the flipping of the domain wall center becomes more difficult and $B_W$ increases. The inset of Fig. \ref{fig6}(c) shows $B_W$ from the numerical data, and the CCM result is plotted as a solid line for comparison. The simulation and the CCM agree well with each other. However, the CCM indicates a symmetric $B_W$ for positive and negative $D_i$, but the numerical results are asymmetric. $B_W$ for $-|D_i|$ is always slightly smaller than that for $+|D_i|$. This qualitative discrepancy is mainly due to the IDMI-induced tilting of the magnetization at the two side edges of the strip. We recall the finding in Section \ref{MTWB} that for $B>0$, the inhomogeneous flipping of the domain wall center always starts at the bottom edge. This is also true in the presence of DMI. For positive $D_i$, at the bottom edge, the magnetization in the left (right) domain is tilted towards $-z$ ($+z$) to minimize the IDMI energy [this can be observed in Fig. \ref{fig6}(a)]. This tilting is clockwise with respect to the $+y$ direction, which is opposite to the counterclockwise torque induced by $B$. Thus, the bottom edge is relatively robust, so that a larger field is required to flip the magnetization inside the domain wall. On the contrary, for negative $D_i$, the tilting is along the same direction as the torque of $B$, so the breakdown field is smaller. Since the tilting at the edge is small, this difference in $B_W$ is subtle. For larger $|D_i|$, the difference becomes more significant. We also perform simulations for wider strips in the presence of BDMI and IDMI. Figure \ref{fig7} shows the field-driven domain wall velocity for $w=1536$ nm. Before the Walker breakdown, the dynamics does not differ too much from that of the 48 nm strip, except that the breakdown field becomes smaller. {After the Walker breakdown, the dynamics affected by BDMI and IDMI are distinct.} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Fig7.pdf} \caption{Domain wall velocity $v$ versus external field $B$ for the 1536 nm strip in the presence of different (a) BDMI, (b) IDMI. Typical domain wall texture after Walker breakdown for (c) BDMI, $D_b=0.8$ mJ/m$^2$ and $B=150$ mT, (d) IDMI, $D_i=1.3$ mJ/m$^2$ and $B=15.2$ mT. } \label{fig7} \end{figure} {In the presence of BDMI, the energy degeneracy of the two chiralities is broken, and the system can be mapped to the well-studied PMA domain walls by a $\pi/2$ rotation of $\mathbf{m}$ around $x$. Thus, we also observe soliton-like domain wall motion similar to that of the PMA domain wall described in Ref. \cite{Ono2016}. A typical domain wall texture is shown in Fig. \ref{fig7}(c) for $w=1536$ nm, $D_b=0.8$ mJ/m$^2$ and $B=150$ mT. The snapshot is taken at $t=1$ ns. A tortuous domain wall centerline with many vortices can be observed. The vortex generation and annihilation are similar to the Bloch line in the PMA domain wall \cite{Ono2016}. The suppression of the Walker breakdown is also present for $D_b$ larger than 1 mJ/m$^2$. More details are discussed in Appendix \ref{AC}. } {The IDMI has a totally different effect on the domain wall dynamics. Figure \ref{fig7}(d) shows a typical domain wall texture for $w=1536$ nm, $D_i=1.3$ mJ/m$^2$ and $B=15.2$ mT.
Different from the curled, tortuous domain wall in the BDMI case, the domain wall centerline is zigzag, with each segment straight and the magnetization out of plane within each segment. The tilting direction follows the same rule as in the static case discussed above. Compared to the fast ($\sim 0.1$ ns) vortex dynamics in the BDMI case, the zigzag domain wall deforms slowly. New zigzags appear at the edges and gradually annihilate at the middle. Since the dynamics in the presence of DMI is quite complicated, we will study it in a statistical way in the future.} \section{Discussion} In the calculations above, we use a large anisotropy $K_u$ so that the domain wall is thin ($\sim 7$ nm) to avoid the influence of the finite strip length as much as possible. The validity of the continuous model can be demonstrated by comparing the numerical domain wall profile with the Walker solution [the inset of Fig. \ref{fig1}]. Furthermore, the threshold domain wall width between a continuous domain wall and an abrupt domain wall is $\Delta=\sqrt{3}a/2$, where $a$ is the lattice constant (the mesh size in our case) \cite{JMMM1994,Yanpeng2012}. Our domain wall width is above this threshold. For wider domain walls, our results are still qualitatively correct. We have explained the observed dynamics from the energy point of view. The Zeeman energy is dissipated via Gilbert damping, leading to propagation of the domain wall, and the transient vortex generation processes temporarily store the Zeeman energy, leading to the changes in domain wall speed. We should note that the energy argument can only give an overall understanding, but cannot give the detailed dynamics. For weak fields below the Walker breakdown, the Zeeman energy can be dissipated solely through Gilbert damping, so the magnetic texture is able to remain unchanged, resulting in the rigid-body motion. When the field is close to or slightly larger than the Walker breakdown, spin wave emission may occur to dissipate more energy \cite{SW2010,Field4}. For larger fields, the domain wall starts to rotate (in thin strips) or to nucleate vortices (in wider strips). The Zeeman energy is periodically stored and released by the domain wall, as we have discussed above. For even larger fields (larger than the effective field of the easy-axis anisotropy), the domain opposite to the field becomes unstable and more complicated textures emerge, such as another pair of domain walls. In different situations the Zeeman energy dissipates and converts in different ways. Analyses based on the LLG equation are still necessary to determine the specific dynamics. Materials with in-plane uniaxial anisotropy are necessary to experimentally observe and investigate the side-by-side domain walls. The anisotropy $K_u$ should overcome the in-plane shape anisotropy $(N_y-N_x)\mu_0M_s^2$. It has been observed that cobalt can possess such $K_u$ \cite{Cobalt2000}. The growth conditions or external strain can also induce an in-plane uniaxial anisotropy \cite{Py2020,Gilbert2017}. The IDMI may be introduced in such materials by designing inversion-symmetry-breaking structures. It is also possible to induce in-plane uniaxial anisotropy by strain in BDMI materials \cite{Shibata2015}. Of course, we have studied an ideal theoretical model here. Finite temperature, geometrical defects like edge roughness and surface roughness, and material defects (including inhomogeneity of material parameters) would exist in reality. They will be the topics of further theoretical studies.
In the Appendix, we briefly discuss the influence of inhomogeneous $K_u$, which is expected to be ubiquitous in imperfect materials. \section{Summary} We investigate the static and dynamic properties of side-by-side domain walls. Although the observed side-by-side domain wall dynamics has many similarities to the HtH/TtT and PMA domain walls, there still exist important differences due to the different geometries. In the absence of DMI, the domain wall is in the N\'{e}el configuration due to the shape anisotropy, and the domain wall width $\Delta$ can be estimated self-consistently using the shape anisotropy of a prism of dimensions $(\Delta, w,d)$. The field-driven domain wall dynamics in thin strips can still be described by the 1D collective coordinate model. In wide strips, complicated multi-step breakdown behavior occurs via generation, propagation and annihilation of (anti)vortices. Due to the absence of magnetic charges at the ends of the domain wall, the side-by-side domain wall width is more homogeneous than in the other two kinds of domain walls, and there is no directional tilting. In the presence of BDMI, the Walker breakdown field increases with the BDMI strength, so that the fast rigid-body domain wall motion is enhanced. The simulation results for thin strips can be well reproduced by the 1D collective coordinate model. In the presence of IDMI, domain wall tilting occurs for strong IDMI. The Walker breakdown field first decreases and then increases with the IDMI strength. The non-monotonic dependence of the breakdown field on the IDMI, as well as the domain wall tilting, can be explained by the 2D collective coordinate model. Furthermore, the breakdown field shows a subtle asymmetry between positive and negative IDMI, mainly due to the IDMI-induced magnetization tilting at the strip edges. For wider strips, in the presence of BDMI, the domain wall is tortuous with plenty of vortex generation and annihilation. Soliton-like dynamics similar to the PMA domain wall case is observed. In the presence of IDMI, the domain wall becomes zigzag. Our results provide a more comprehensive understanding of the properties of domain walls. \begin{acknowledgments} This work is supported by the Fundamental Research Funds for the Central Universities. X. S. W. acknowledges the support from the Natural Science Foundation of China (NSFC) (Grant No. 11804045 and No. 12174093). F. X. L. acknowledges the support from the Natural Science Foundation of China (NSFC) (Grant No. 11905054). \end{acknowledgments}
\section{Introduction} The estimates of the $\bar\partial$-equation in various function spaces are among the most important problems in several complex variables and partial differential equations, and they have invaluable applications in differential geometry, algebraic geometry and other subjects (cf. \cite{BS, ChS, FoK, H, O, R1, St2}). In this paper, we aim to answer the following fundamental question on the uniform estimates of the $\bar\partial$-equation raised by Kerzman in 1971 (cf. \cite{K1} pp. 311-312). This question was recently recalled in \cite{CM1} (cf. lines 22-24 on page 409) and has attracted substantial attention. \medskip {\bf Remark} in \cite{K1} (cf. pp.311-312): We do not know whether Grauert-Lieb's and Henkin's theorem holds in polydiscs, i.e., whether there exists a bounded solution $u$ to $\bar\partial u=f$ on $\Delta^n$ whenever $f$ is bounded in $\Delta^n$, $\bar\partial f = 0$. \medskip On bounded strictly pseudoconvex domains in $\mathbb{C}^n$, the uniform estimate for the $\bar\partial$-equation was obtained by Grauert-Lieb \cite{GL} and Henkin \cite{H2} in 1970. Sibony later constructed a smooth bounded weakly pseudoconvex domain in $\mathbb{C}^3$ and a $\bar\partial$-closed $(0,1)$-form $f$, continuous on the closure of the domain, such that every solution of $\bar\partial u=f$ is unbounded \cite{S} (cf. \cite{B, FS} for examples in $\mathbb{C}^2$). In 1986 Forn{\ae}ss proved uniform estimates for a class of pseudoconvex domains in $\mathbb{C}^2$, which includes the Kohn-Nirenberg example \cite{F}. The uniform estimates for finite type domains in $\mathbb{C}^2$ and for convex, finite type domains in $\mathbb{C}^n$ were established by Fefferman-Kohn \cite{FeK} (cf. \cite{R1} as well) and by Diederich-Fischer-Forn{\ae}ss \cite{DFF}, respectively. More recently, Grundmeier-Simon-Stens{\o}nes proved uniform estimates for a wide class of finite type pseudoconvex domains in $\mathbb{C}^n$, including the bounded pseudoconvex domains with real-analytic boundary \cite{GSS}. Uniform estimates for the $\bar\partial$-equation have been an attractive problem for many authors, and we refer the interested readers to \cite{FLZ, H1, K1, R2, RS} and the references therein for a detailed account of the subject and related problems. \medskip On the other hand, when the domain is not smooth, in particular, a product domain, the problem becomes somewhat different. In 1971, Henkin obtained the uniform estimates for $\bar\partial u=f$ on the bidisc provided that $f$ is $C^1$ up to the boundary \cite{H3}. Landucci in 1975 proved the uniform estimates for the canonical solution on the bidisc provided that $f$ is $C^2$ up to the boundary \cite{L}. More recently, a very useful new solution integral operator was used by Chen-McNeal to prove many interesting results on product domains, including the $L^p$ estimates for the $\bar\partial$-equation \cite{CM1, CM2}, and also by Fassina-Pan to prove uniform estimates for the $\bar\partial$-equation on high-dimensional products of planar domains \cite{FP}. Dong-Pan-Zhang later also applied the integral operator to further obtain the uniform estimates for the canonical solution to $\bar\partial u=f$ on products of planar domains by assuming $f$ is merely continuous up to the boundary \cite{DPZ}. This is not only a deep result, but its proof also contains fascinating ideas by combining the above-mentioned new integral operator and Kerzman's celebrated estimates of the Green function \cite{K2}, as observed in \cite{BL}.
For more related studies of the $\bar\partial$-equation on product domains, the interested readers may refer to \cite{CS, JY, PZ, YZZ, Z} and the references therein. \medskip Heavily relying on the ideas developed in \cite{DPZ}, we are able to answer Kerzman's original question. The key difference is that in \cite{DPZ}, the canonical solution is re-written using the integral formula against $f$, which contains certain boundary integrals. This requires $f$ to be at least continuous to make sense of the boundary integrals. However, we observe that the boundary integrals actually do not appear, because the integral kernels vanish on the boundary. Combined with Kerzman's deep estimates of the Green function, this already provides $L^p$ estimates when $f$ is sufficiently smooth. When $f$ is merely $L^p$, the estimates are achieved by approximation as in \cite{DPZ}. \medskip The main theorem of the paper is the following $L^p$ estimate for $\bar\partial$. \begin{thm}\label{main} Let $\Omega = D_1 \times \cdots \times D_n$, where, for each $1 \leq j \leq n$, $D_j$ is a $C^2$-smooth bounded planar domain. For any $p \in [1, \infty]$, assume $f \in L^p_{(0, 1)}(\Omega)$. Then there exists a constant $C_{\Omega}>0$ (independent of $p$) such that the canonical solution to $\bar\partial u = f$ satisfies $\|{\bf T} f\|_{L^p} \leq C_{\Omega} \|f\|_{L^p}$. \end{thm} In particular, when $\Omega = \Delta^n$ is the polydisc, this answers Kerzman's question. \begin{cor}\label{cor} There exists a constant $C>0$ such that for any $f \in L^\infty_{(0, 1)}(\Delta^n)$ with $\bar\partial f =0$, the canonical solution to $\bar\partial u = f$ satisfies $\|u\|_\infty \leq C \|f\|_\infty$. \end{cor} \section{Proof of the Theorem} The majority of the proof was already carried out by Dong-Pan-Zhang in \cite{DPZ}. We try to make the argument here as self-contained as possible. \subsection{One dimensional case} We follow the argument by Barletta-Landucci \cite{BL} and Dong-Pan-Zhang \cite{DPZ}. Let $D \subset \mathbb{C}$ be a bounded planar domain with $C^2$ boundary and let $H(w, z) = \frac{1}{2\pi i (z-w)}$ be the Cauchy kernel on $D$. The following kernel is defined by Barletta-Landucci \cite{BL}, $$S(w, z) = L(w, z) - H(w, z),$$ where for any $w \in D$, $L(w, z)$ solves the Dirichlet problem \begin{equation*} \left\{ \begin{aligned} \Delta L(w, z) &=0, \quad z\in D; \\ L(w, z) &=H(w, z), \quad z \in \partial D. \end{aligned} \right. \end{equation*} It is known that for fixed $w \in D$, $L(w, z) \in C^{1, \alpha}(\overline{D})$ for any $\alpha \in (0, 1)$ (cf. \cite{DPZ}). Define ${\bf T} f (w) = \int_D S(w, z)f(z) d\bar z \wedge d z$. Then ${\bf T} $ is the canonical solution operator for $\bar\partial u = f d\bar z$ on $D$ (cf. the Theorem in \cite{BL} or Proposition 2.3 in \cite{DPZ}). \medskip Here is the {\it key observation}: $S(w, z) = 0$ for any $w \in D, z \in \partial D$. \begin{eg} Let $D=\Delta$ be the unit disc in $\mathbb{C}$. Then it is very easy to verify that $L(w, z)= \frac{\bar z}{2\pi i(1-w \bar z)}$ and thus $S(w, z) = \frac{1-|z|^2}{2\pi i (1-w \bar z)(w-z)}$ vanishes for $z \in \partial \Delta, w \not= z $. \end{eg} \subsection{High dimensional case when $f$ is sufficiently smooth} Let $\Omega = D_1 \times \cdots \times D_n \subset \mathbb{C}^n$ be a bounded domain, where each $D_j$ is a bounded planar domain with $C^2$ boundary, $1 \leq j \leq n$. We first handle the case when $f$ is sufficiently smooth.
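Before proceeding, we note as an aside (ours, not part of the argument) that the example above can also be checked numerically; the following sketch evaluates $S(w,z)$ on the unit disc and confirms that it decays like $1-|z|^2$ as $z$ approaches the boundary for a fixed interior $w$:
\begin{verbatim}
import numpy as np

# S(w, z) on the unit disc, from the example above.
def S_disc(w, z):
    return (1.0 - abs(z) ** 2) / (2j * np.pi * (1.0 - w * np.conj(z)) * (w - z))

w = 0.3 + 0.4j                        # a fixed interior point
for r in (0.9, 0.99, 0.999, 0.9999):  # |z| -> 1 along a fixed ray
    z = r * np.exp(1j)
    print(r, abs(S_disc(w, z)))       # |S| shrinks like 1 - r^2
\end{verbatim}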
Assume $f = \sum_{j=1}^n f_j d\bar z_j$ to be a $\bar\partial$-closed $(0, 1)$-form on $\Omega$ with $f_j \in C^{n-1}(\overline{\Omega})$, $1 \leq j \leq n$, and define \begin{equation} {\bf T} f = \sum_{s=1}^n (-1)^{s-1} \sum_{1 \leq i_1 < \cdots < i_s \leq n} {\bf T}_{i_1} \cdots {\bf T}_{i_s} \left( \frac{\partial^{s-1} f_{i_s}}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{s-1}}} \right), \end{equation} where ${\bf T}_j$ is the canonical solution operator on $D_j $. Let $$B_{1, 2, \cdots, k}(w, z) = \sum_{j=1}^k \prod_{m\not=j, m=1}^k |w_m - z_m|^2, $$ $$e^{1, 2, \cdots, k}_j(w, z)= \left( \prod_{l=1}^k S_l(w_l, z_l)\right)\frac{\prod_{m\not=j, m=1}^{k} |w_m - z_m|^2}{B_{1, 2, \cdots, k}(w, z)} .$$ Then $ \sum_{j=1}^k e^{1, 2, \cdots, k}_j = \prod_{l=1}^k S_l(w_l, z_l)$. Here is the key lemma. \begin{lem}\label{ibp} With the notation above, \begin{equation} \int_{D_1 \times \cdots \times D_k} e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-1} f_k(z)}{\partial \bar z_1 \cdots \partial \bar z_{k-1}} dV(z) = \int_{D_1 \times \cdots \times D_k} (-1)^{k-1} f_k(z) \frac{\partial^{k-1} e^{1, 2, \cdots, k}_k(w, z)}{\partial \bar z_1 \cdots \partial \bar z_{k-1}} dV(z). \end{equation} \end{lem} \begin{proof} Fix $w \in D_1 \times \cdots \times D_k$. Let $A_j^\epsilon = D_j \setminus \Delta_\epsilon(w_j)$ for $0 < \epsilon \ll 1$. Note that for any $0 \leq m \leq k-2$, $1\leq i_1 < \cdots < i_m \leq k-2$ and $l \not \in \{i_1 , \cdots, i_m, k\}$, $\frac{\partial^m e^{1, 2, \cdots, k}_k(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_m}} = 0 $ on $ D_1 \times \cdots \times \partial D_l \times \cdots \times D_k$ and $\frac{\partial^m e^{1, 2, \cdots, k}_k(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_m}} \rightarrow 0$ uniformly for $z \in D_1 \times \cdots \times \partial \Delta_\epsilon(w_l) \times \cdots \times D_k$ as $\epsilon \rightarrow 0$. Applying the Stokes formula to $$e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-2} f_k(z)}{\partial \bar z_2 \cdots \partial \bar z_{k-1}} dz_1 \wedge d \bar z_2 \wedge d z_2 \wedge \cdots \wedge d \bar z_{k} \wedge dz_k$$ on $A_1^\epsilon \times D_2 \times \cdots \times D_k$, we have \begin{equation*} \begin{split} &~~~\int_{A_1^\epsilon \times \cdots \times D_k} \left( e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-1} f_k(z)}{\partial \bar z_1 \cdots \partial \bar z_{k-1}} + \frac{\partial e^{1, 2, \cdots, k}_k(w, z)}{\partial \bar z_1} \frac{\partial^{k-2} f_k(z)}{\partial \bar z_2 \cdots \partial \bar z_{k-1} } \right) d \bar z_1 \wedge d z_1 \wedge \cdots \wedge d \bar z_{k} \wedge dz_k \\ &= \left( \int_{\partial D_1 \times D_2 \times\cdots\times D_k} - \int_{\partial\Delta_\epsilon(w_1) \times D_2 \times\cdots\times D_k} \right) e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-2} f_k(z)}{\partial \bar z_2 \cdots \partial \bar z_{k-1}} dz_1 \wedge d \bar z_2 \wedge d z_2 \wedge \cdots \wedge d \bar z_{k} \wedge dz_k \\ &= - \int_{\partial\Delta_\epsilon(w_1) \times D_2 \times\cdots\times D_k} e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-2} f_k(z)}{\partial \bar z_2 \cdots \partial \bar z_{k-1}} dz_1 \wedge d \bar z_2 \wedge d z_2 \wedge \cdots \wedge d \bar z_{k} \wedge dz_k, \end{split} \end{equation*} as $e^{1, 2, \cdots, k}_k(w, z) =0 $ for $z_1 \in \partial D_1$. As $\epsilon \rightarrow 0$, $e^{1, 2, \cdots, k}_k(w, z) \rightarrow 0$ uniformly for $z \in \partial\Delta_\epsilon(w_1) \times D_2 \times\cdots\times D_k$.
It follows that $$\int_{D_1 \times \cdots \times D_k} e^{1, 2, \cdots, k}_k(w, z) \frac{\partial^{k-1} f_k(z)}{\partial \bar z_1 \cdots \partial \bar z_{k-1}} dV(z)= - \int_{D_1 \times \cdots \times D_k} \frac{\partial e^{1, 2, \cdots, k}_k(w, z)}{\partial \bar z_1} \frac{\partial^{k-2} f_k(z)}{\partial \bar z_2 \cdots \partial \bar z_{k-1} } dV(z) .$$ The lemma then follows by applying the argument repeatedly in all the other variables $z_2, \cdots, z_{k-1}$. \end{proof} The next result provides $L^p$-estimates for $\bar\partial u =f$ when $f$ is smooth, where the key is an important application of Kerzman's estimates of the Green function established in \cite{DPZ} (cf. Proposition 4.5 there). Since the integrals here do not involve boundary terms, the corresponding estimates become simpler. \begin{pro}\label{smooth} For $f \in C^{n-1}_{(0, 1)}(\overline{\Omega})$ with $\bar\partial f =0$, let $${\bf K} f(w)= \sum_{s=1}^n \sum_{1 \leq i_1 < \cdots < i_s \leq n} \sum_{j=1}^{s} \int_{D_{i_1} \times \cdots \times D_{i_s}} f_{i_j}(z) \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} d\bar z_{i_1} \wedge \cdots \wedge dz_{i_s} .$$ Then ${\bf T} f = {\bf K} f .$ Moreover, for any $p \in [1, \infty]$, there exists a universal constant $C_{\Omega}>0$ (independent of $p$) such that $u={\bf T} f$ is the canonical solution to $\bar\partial u = f$ and satisfies $\|{\bf T} f\|_{L^p} \leq C_{\Omega} \|f\|_{L^p}$. \end{pro} \begin{proof} Since $f$ is $\bar\partial$-closed, $\frac{\partial^{s-1} f_{i_1}}{\partial \bar z_{i_2} \cdots \partial \bar z_{i_s}} = \frac{\partial^{s-1} f_{i_2}}{\partial \bar z_{i_1} \partial \bar z_{i_3} \cdots \partial \bar z_{i_s}} = \cdots = \frac{\partial^{s-1} f_{i_s}}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{s-1}}}$. It follows that $$\left( \frac{\partial^{s-1} f_{i_s}(z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{s-1}}} \right) \prod_{l=1}^{s} S_{i_l}(w_{i_l}, z_{i_l}) = \sum_{j=1}^{s} e^{i_1, \cdots, i_s}_{i_j}(w, z) \frac{\partial^{s-1} f_{i_j}(z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}}$$ and thus by Lemma \ref{ibp}, $${\bf T}_{i_1} \cdots {\bf T}_{i_s} \left( \frac{\partial^{s-1} f_{i_s}}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{s-1}}} \right)= (-1)^{s-1} \sum_{j=1}^{s} \int_{D_{i_1} \times \cdots \times D_{i_s}} f_{i_j}(z) \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} d\bar z_{i_1} \wedge \cdots \wedge dz_{i_s} . $$ Moreover, we have $${\bf T} f = \sum_{s=1}^n \sum_{1 \leq i_1 < \cdots < i_s \leq n} \sum_{j=1}^{s} \int_{D_{i_1} \times \cdots \times D_{i_s}} f_{i_j}(z) \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} d\bar z_{i_1} \wedge \cdots \wedge dz_{i_s} .$$ By Proposition 4.5 in \cite{DPZ}, $ \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} \in L^1(D_{i_1} \times \cdots \times D_{i_s})$ and the $L^1$ norm is independent of $w$. Therefore, the $L^p$ estimate $\|{\bf T} f\|_{L^p} \leq C_{\Omega} \|f\|_{L^p}$ follows from Young's convolution inequality.
Furthermore, it follows from Theorem 4.3 in \cite{DPZ} that ${\bf T} f$ is the canonical solution. \end{proof} \subsection{High dimensional case when $f$ is $L^p$} In this section, we will use approximation to handle the case when $f \in L^p_{(0, 1)}(\Omega)$ for $p \in [1, \infty]$. In fact, we will show the following general $L^p$ estimates. \begin{thm} For any $p \in [1, \infty]$, assume $f \in L^p_{(0, 1)}(\Omega)$. Then $u = {\bf K} f$ is the canonical solution to $\bar\partial u = f$ and satisfies $\|{\bf K} f\|_{L^p} \leq C_{\Omega} \|f\|_{L^p}$. \end{thm} \begin{proof} We note by the same estimates that ${\bf K}$ is bounded from $L^p$ to $L^p$. It suffices to show that $u = {\bf K} f$ is the canonical solution. This is proved in Proposition 5.1 and Theorem 1.1 in \cite{DPZ}. For completeness, we sketch the proof here. First, let $\{\Omega^{(l)}\}$ be an increasing sequence of relatively compact subdomains of $\Omega$, with each $\Omega^{(l)}$ being a product of $C^2$ planar domains, such that $\cup_l \Omega^{(l)} = \Omega$. Let $e^{(l)}, {\bf K}^{(l)}$ be defined accordingly on $\Omega^{(l)}$. Then it was shown in \cite{DPZ} (cf. equation (5.8)), using Kerzman's deep estimates of the Green function, that \begin{equation}\label{der} \frac{\partial^{s-1} \left(e^{(l)}\right)^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} \rightarrow \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} \end{equation} as $l \rightarrow \infty$. Moreover, write $\Omega_\epsilon = \{z \in \Omega: \text{dist}(z, \partial \Omega) > \epsilon \}$. Since $f \in L^p_{(0, 1)}(\Omega)$, by the standard mollification argument, there exists a sequence of $\bar\partial$-closed smooth $(0, 1)$-forms $f^\epsilon$ in $\Omega_\epsilon$, such that $f^\epsilon \rightarrow f$ in $L^p_{(0, 1)}(\Omega)$. Second, let $\chi $ be an $(n, n-1)$-form in $\Omega$ with compact support. Choosing $l$ sufficiently large and $\epsilon$ sufficiently small, such that $\mathrm{supp}(\chi) \subset \Omega^{(l)} \subset \overline{\Omega^{(l)}} \subset \Omega_\epsilon$, then $f^\epsilon \in C^\infty_{(0, 1)}(\overline{\Omega^{(l)}})$ and thus $u={\bf K}^{(l)} f^\epsilon$ is the canonical solution to $ \bar\partial u = f^\epsilon$ on $\Omega^{(l)}$ by Proposition \ref{smooth}. It follows that \begin{equation*} \begin{split} \int_{\Omega} \bar\partial {\bf K}f\wedge \chi &= - \int_{\Omega^{(l)}} {\bf K} f \wedge \bar\partial \chi = - \lim_{l \rightarrow \infty} \lim_{\epsilon \rightarrow 0} \int_{\Omega^{(l)}} {\bf K}^{(l)} f^\epsilon \wedge \bar\partial \chi \\ &= \lim_{l \rightarrow \infty} \lim_{\epsilon \rightarrow 0} \int_{\Omega^{(l)}} f^\epsilon \wedge \chi = \lim_{\epsilon \rightarrow 0} \int_{\Omega} f^\epsilon \wedge \chi = \int_\Omega f\wedge \chi, \end{split} \end{equation*} where the second equality follows from (\ref{der}). This implies that $\bar\partial {\bf K}f = f$ weakly. Lastly, let $g$ be an $L^2$ holomorphic function in $\Omega $. Then similarly $$\int_\Omega {\bf K}f(z) \overline{g(z)} dV(z) = \lim_{l \rightarrow \infty} \lim_{\epsilon \rightarrow 0} \int_{\Omega^{(l)}} {\bf K}^{(l)} f^\epsilon(z) \overline{g(z)} dV(z) = \lim_{l \rightarrow \infty} \lim_{\epsilon \rightarrow 0} 0 =0.$$ This means that $u = {\bf K} f$ is the canonical solution to $\bar\partial u = f$.
\end{proof} \begin{rem} When $\Omega$ is a product of star-shaped planar domains, for instance $\Omega=\Delta^n$, there is no need to approximate $\Omega$ by $\Omega^{(l)}$ as above. By the standard mollification argument, any $L^p$ integrable $\bar\partial$-closed $(0, 1)$-form $f$ on $\Omega$ can be approximated in $L^p_{(0, 1)}(\Omega)$ by a sequence of $\bar\partial$-closed, smooth $(0, 1)$-forms $\{f^\epsilon\}$ on $\Omega$. Therefore, the fact that ${\bf K} f = \lim_{\epsilon \rightarrow 0} {\bf K} f^\epsilon$ is the canonical solution is a direct consequence of the boundedness of ${\bf K}$ from $L^p$ to $L^p$. On the other hand, in the case of $\Omega=\Delta^n$, it can be verified directly, without using Kerzman's estimates, that $ \frac{\partial^{s-1} e^{i_1, \cdots, i_s}_{i_j}(w, z)}{\partial \bar z_{i_1} \cdots \partial \bar z_{i_{j-1}} \partial \bar z_{i_{j+1}} \cdots \partial \bar z_{i_s}} \in L^1(\Delta_{i_1} \times \cdots \times \Delta_{i_s})$ and that the $L^1$ norm is also independent of $w$; therefore ${\bf K}$ is bounded from $L^p$ to $L^p$ on $\Delta^n$. \end{rem}
{\bf Acknowledgement:} The author would like to thank Zhenghui Huo for helpful discussions.
The work was done when the author was visiting BICMR in Spring 2022. He thanks the center for providing a wonderful research environment.
\section{Introduction} In scientific computing, there is a constant need for solving larger and computationally more demanding problems with increased accuracy and improved numerical performance. This holds particularly true in multi-query scenarios such as optimization, uncertainty quantification, inverse problems and optimal control, where the problems under investigation need to be solved for numerous different parameter instances with high accuracy and efficiency. In this regard, constructing efficient numerical solvers for complex systems described by partial differential equations is crucial for many scientific disciplines. The preconditioned conjugate gradient method (PCG) \cite{Barrett1994,Benzi2000, Lin2014,Herzog2010} and the preconditioned generalised minimal residual method (PGMRES) \cite{Saad1986,Shakib1989, Baglama1998} are amongst the most powerful and versatile approaches to treat such problems. The choice of a suitable preconditioner plays a major role in the convergence and scalability of these solvers, and notable examples include the incomplete Cholesky factorization \cite{Concus1985} and domain decomposition methods \cite{Toselli2005, TALLEC1991}, such as the popular FETI methods \cite{Farhat1991, FARHAT1994, Fragakis2003} and the additive Schwarz methods \cite{Cai1999, DAAS2021}. In a similar fashion, Algebraic and Geometric Multigrid (AMG, GMG, resp.) \cite{Trottenberg2000} are equally well-established methods that are commonly employed for accelerating standard iterative solvers, and they may also serve as highly efficient preconditioners for PCG \cite{Iwamura2003, Heys2005,Langer2003} or PGMRES \cite{RAMAGE1999, Wienands2000, Vakili2009}. Nevertheless, optimizing the aforementioned solvers so as to attain a uniformly fast convergence for multiple parameter instances, as required in multi-query problems such as uncertainty quantification, sensitivity analysis and optimization, remains a challenging task to this day. To tackle this problem, several works suggest the use of interpolation methods tasked with constructing approximations of the system's inverse operator for different parameter values \cite{Zahm2016,Bergamaschi2020, Carr2021}, which can then be used as preconditioners. Another approach can be found in \cite{STAVROULAKIS2014}, where primal and dual FETI decomposition methods with customized preconditioners are developed in order to accelerate the solution of stochastic problems in the context of Monte Carlo simulation, as well as of intrusive Galerkin methods. Augmented Krylov subspace methods showed great promise in handling sequences of linear systems \cite{Saad1997}, such as those arising in parametrized PDEs. However, the augmentation of the usual Krylov subspace with data from multiple previous solves led in certain cases to disproportionate computational and memory requirements. To alleviate this cost, optimal truncation strategies have been proposed in \cite{Sturler1999}, as well as deflation techniques \cite{Chapman1997,Saad2000, Daas2021a}. In recent years, the rapid advancements in the field of machine learning have offered researchers new tools to tackle challenging problems in multi-query scenarios. For instance, deep feedforward neural networks (FFNNs) have been successfully employed to construct response surfaces of quantities of interest in complex problems \cite{Papadrakakis1996,PAPADRAKAKIS2002, SEYHAN2005, HOSNIELHEWY2006,Chojaczyk2015}.
Convolutional neural networks (CNNs) in conjunction with FFNNs have been employed to predict the high-dimensional system response at different parameter instances \cite{NIKOLOPOULOS2021, NIKOLOPOULOS2022,XU2020}. In addition, recurrent neural networks demonstrated great potential in transient problems for propagating the state of the system forward in time without the need to solve systems of equations \cite{Yu2019,Zhou2019}. All these non-intrusive approaches utilize a reduced set of system responses to build an emulator of the system's input-output relation for different parameter values. As such, they are particularly cheap to evaluate and can be very accurate in certain cases. However, they can be characterized as physics-agnostic, in the sense that the derived solutions do not satisfy any physical laws. This problem is remedied to some extent by intrusive approaches based on reduced basis methods, such as Proper Orthogonal Decomposition (POD) \cite{Carlberg2011,Zahr2017,Agathos2020} and Proper Generalized Decomposition \cite{Chinesta2010,Ladeveze2010,Ladeveze2011}. These methods rely on the premise that a small set of appropriately selected basis vectors suffices to construct a low-dimensional subspace of the system's high-dimensional solution space, and that the projection of the governing equations onto this subspace comes at a minimal error. In addition, several recent works have investigated the combination of either linear or nonlinear dimensionality reduction algorithms and non-intrusive interpolation schemes to construct cheap emulators of complex systems \cite{DALSANTO2020, SALVADOR2021,KALOGERIS2021, dosSantos2022, Kadeethum2022,VLACHAS2021, Heaney, LAZZARA2022107629}. Nevertheless, none of these surrogate modelling schemes can guarantee convergence to the exact solution of the problem. In an effort to combine the best of both worlds, a newly emergent research direction is that of enhancing linear algebra solvers with machine-learning algorithms. For instance, POD has been successfully employed to truncate the augmented Krylov subspace and retain only the high-energy modes \cite{Carlberg2016} for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric positive definite matrices. In \cite{Heinlein2019}, neural networks were trained for predicting the geometric location of constraints in the context of domain decomposition methods, leading to enhanced algorithm robustness. Moreover, the close connection between multigrid methods and CNNs has been studied in several recent works, which managed to accelerate their convergence by providing data-driven smoothers \cite{CHEN2022} and data-driven prolongation and restriction operators \cite{luz2020}. The present work focuses on the problem of developing an ML-enhanced linear algebra solver to combat the excessive computational demands of detailed finite element models in multi-query scenarios. A novel strategy is proposed to utilize ML tools in order to obtain system solutions within a prescribed accuracy threshold, with faster convergence rates than conventional solvers. Specifically, the proposed approach consists of two steps. Initially, a reduced set of model evaluations is performed and the corresponding solutions are used to establish an approximate mapping from the problem's parametric space to its solution space using deep FFNNs and CAEs.
This mapping serves as a means of acquiring very accurate initial predictions of the system's response to new query points at negligible computational cost. The error in these predictions, however, may or may not satisfy the prescribed tolerance threshold. Therefore, a second step is proposed herein, which further utilizes the knowledge from the already available system solutions in order to construct a data-driven iterative solver. This solver, termed POD-2G, is inspired by the idea of combining the Algebraic Multigrid method with Proper Orthogonal Decomposition, and it successively refines the initial predictions towards the exact system solutions with significantly faster convergence rates. The paper is organised as follows. In Section \ref{sec2} the basic principles of the PCG and AMG iterative solvers are illustrated. In Section \ref{sec3}, the elaborated methodology for developing an ML-enhanced linear algebra solver is presented. Section \ref{sec4} presents a series of numerical examples that showcase the performance of the method compared to conventional iterative solvers. Section \ref{sec5} summarizes the outcomes of this work and discusses possible extensions. \section{Iterative solvers for FE systems} \label{sec2} \subsection{The Finite Element Method} \label{sec2.0} This work focuses on linear elliptic PDEs defined on a domain $\Omega \subseteq \mathbb{R}^{dim}$, $dim=1,2,3$, which are parametrized by a vector of parameters $\boldsymbol{\theta}\in \boldsymbol{\Theta}$, with $\boldsymbol{\Theta}\subseteq\mathbb{R}^n$ being the parameter space. The variational formulation of the PDE can be stated as: given $\boldsymbol{\theta}\in \boldsymbol{\Theta}$, find the solution $v=v(\boldsymbol{\theta})$ from the Hilbert space $\mathcal{V}=\mathcal{V}(\Omega)$ such that \begin{equation} \label{eq:weakForm} \kappa\left(v ,w; \boldsymbol{\theta}\right)=f \left(w;\boldsymbol{\theta}\right) \end{equation} \noindent for every $w\in \mathcal{V}(\Omega)$ with compact support in $\Omega$. The Lax-Milgram lemma proves that eq. \eqref{eq:weakForm} has a unique solution for every $\boldsymbol{\theta}$, provided that the bilinear form $\kappa(\cdot,\cdot;\boldsymbol{\theta})$ is continuous and coercive and that $f \left(\cdot;\boldsymbol{\theta} \right)$ is a continuous linear form. In practice, however, obtaining an exact solution $v$ is not feasible for most applications of interest and, instead, an approximate solution is sought using numerical techniques, such as the finite element method considered in this work. In this case, a finite-dimensional subspace $\mathcal{V}_h \subseteq \mathcal{V}$ is considered, spanned by a finite number of basis vectors $\lbrace N_i \rbrace_{i=1}^{\bar{N}}$ consisting of polynomials. These polynomials are compactly supported on a set of small polyhedra (finite elements) that partition the domain $\Omega$, and within each element $e$ the approximate displacement vector field $v_h\in\mathcal{V}_h$ and the test functions $w_h$ can be expressed as: \begin{align} v_h^e &=\sum_{i=1}^{\bar{N}} \mathrm{u}_i^e N_i^e \\ w_h^e &=\sum_{i=1}^{\bar{N}} \mathrm{w}_i^e N_i^e \end{align} \noindent The Galerkin method relies on the linearity of the forms $\kappa,f$ and the orthogonality of the polynomial basis vectors, in order to obtain the coefficients $\boldsymbol{u}^e=[\mathrm{u}_1^e,\cdots,\mathrm{u}_{\bar{N}}^e]^T\in \mathbb{R}^{\bar{N}}$ in the expansion of the unknown field approximation. In particular, since eq.
\eqref{eq:weakForm} must hold within each finite element $e$ and for any test function $w$, the following system of linear equations is obtained: \begin{equation} \kappa\left(\sum_{j=1}^{\bar{N}} \mathrm{u}_j^e N_j, N_i; \boldsymbol{\theta}\right)=f \left(N_i;\boldsymbol{\theta}\right), \ \ \text{for} \ i=1,...,\bar{N} \end{equation} or, by linearity, \begin{equation}\label{linearSystemInElement} \sum_{j=1}^{\bar{N}}\kappa\left(N_j, N_i; \boldsymbol{\theta}\right) \mathrm{u}_j^e=f \left(N_i;\boldsymbol{\theta}\right), \ \ \text{for} \ i=1,...,\bar{N} \end{equation} Equation \eqref{linearSystemInElement} describes an $\bar{N} \times \bar{N}$ linear system of equations to be satisfied within the $e$-th element. Repeating this procedure for all elements and appropriately assembling the respective equations will result in the following $d\times d$ linear system \begin{equation} \label{eq:fineProblem} \boldsymbol{K}(\boldsymbol{\theta})\boldsymbol{u}(\boldsymbol{\theta})=\boldsymbol{f}(\boldsymbol{\theta}) \end{equation} \noindent with $d$ being the total number of unknowns in the system, $\boldsymbol{K}\in\mathbb{R}^{d\times d}$ is a real symmetric positive definite matrix, $\boldsymbol{u}\in\mathbb{R}^d$ is the unknown solution vector and $\boldsymbol{f}\in\mathbb{R}^d$ the force vector. Solving such a linear system for a detailed discretization ($d \gg 1$) can be computationally intensive. This holds particularly true for multi-query problems that require numerous system evaluations for various instances of the parameters $\boldsymbol{\theta}$, such as optimization, parameter inference, uncertainty propagation and sensitivity analysis. Based on this, it becomes evident that efficient numerical solvers for large-scale linear systems play a crucial role in science and engineering. This section revisits the basic ideas behind two of the fastest methods for solving such systems, namely, the PCG and the AMG methods. \subsection{Preconditioned conjugate gradient method} \label{sec2.1} The Conjugate Gradient method was originally proposed by Hestenes and Stiefel as a direct method \cite{Hestenes1952} for solving linear systems, but its full potential is realized as an iterative solver for large-scale sparse systems of the form $\boldsymbol{K}\boldsymbol{u}=\boldsymbol{f}$, with $\boldsymbol{K}$ being a symmetric positive definite matrix. The goal of CG is to minimize the quadratic form \begin{equation} Q(\boldsymbol{u})=\frac{1}{2}\boldsymbol{u}^T\boldsymbol{K}\boldsymbol{u}-\boldsymbol{f}^T\boldsymbol{u} \end{equation} \noindent which is equivalent to setting the residual $\boldsymbol{r}=-\nabla Q(\boldsymbol{u})=\boldsymbol{f}-\boldsymbol{K}\boldsymbol{u}$ to zero. Let us consider the Krylov subspaces, \begin{equation} \mathcal{K}_0=\lbrace 0 \rbrace, \ \ \ \mathcal{K}_k=span\lbrace \boldsymbol{f},\boldsymbol{K}\boldsymbol{f},\dots,\boldsymbol{K}^{k-1}\boldsymbol{f} \rbrace, \ \ \text{for } k\geq 1 \end{equation} \noindent These subspaces are nested, $\mathcal{K}_0\subseteq\mathcal{K}_1\subseteq \dots$, and have the key property that $\boldsymbol{K}^{-1}\boldsymbol{f} \in \mathcal{K}_d$. Then, a Krylov sequence consists of the vectors $\lbrace \boldsymbol{u}^{(k)} \rbrace$ such that \begin{equation} \boldsymbol{u}^{(k)}=\argmin_{\boldsymbol{u}\in\mathcal{K}_k} Q(\boldsymbol{u}), \ \ k=0,1,\dots \end{equation} \noindent and based on the previous property, it follows that $\boldsymbol{u}^{(d)}=\boldsymbol{K}^{-1}\boldsymbol{f}$.
In this regard, CG is a recursive method for computing the Krylov sequence $\lbrace \boldsymbol{u}^{(0)},\boldsymbol{u}^{(1)},\dots \rbrace$. It can be proven that the corresponding (nonzero) residuals $\boldsymbol{r}^{(k)}$ form an orthogonal basis for the Krylov subspaces, that is \begin{equation} \mathcal{K}_k=span\lbrace \boldsymbol{r}^{(0)},\boldsymbol{r}^{(1)},\dots,\boldsymbol{r}^{(k-1)} \rbrace, \ \ \ \left(\boldsymbol{r}^{(j)}\right)^T\boldsymbol{r}^{(i)}=0, \text{ for } i\neq j \end{equation} \noindent and that the `steps' $\boldsymbol{q}_i=\boldsymbol{u}^{(i)}-\boldsymbol{u}^{(i-1)}$ are conjugate ($\boldsymbol{K}$-orthogonal): \begin{equation} \boldsymbol{q}_i^T\boldsymbol{K}\boldsymbol{q}_j=0, \text{ for } i\neq j, \ \ \ \boldsymbol{q}_i^T\boldsymbol{K}\boldsymbol{q}_i=\boldsymbol{q}_i^T\boldsymbol{r}^{(i-1)} \end{equation} \noindent Therefore, the vectors $\boldsymbol{q}_i$ form a conjugate basis for the Krylov subspaces \begin{equation} \mathcal{K}_k=span\lbrace \boldsymbol{q}_1,\boldsymbol{q}_2,\dots,\boldsymbol{q}_{k} \rbrace \end{equation} Introducing the coefficients \begin{equation} \alpha_k=\frac{\boldsymbol{p}_k^T\boldsymbol{r}^{(k-1)}}{\boldsymbol{p}_k^T\boldsymbol{K}\boldsymbol{p}_k} \end{equation} \noindent with $\boldsymbol{p}_k$ being the scaled versions of $\boldsymbol{q}_k$ given by the recursion: \begin{equation} \boldsymbol{p}_1=\boldsymbol{r}^{(0)}, \ \ \ \boldsymbol{p}_{k+1}=\boldsymbol{r}^{(k)}-\frac{\boldsymbol{p}_k^T\boldsymbol{K}\boldsymbol{r}^{(k)}}{\boldsymbol{p}_k^T\boldsymbol{K}\boldsymbol{p}_k}\boldsymbol{p}_k, \ k=1,2,\dots \end{equation} \noindent then, the Krylov sequence and the corresponding residuals are given by the relations: \begin{align} \boldsymbol{u}^{(k)} &= \boldsymbol{u}^{(k-1)}+\alpha_k\boldsymbol{p}_k \label{eq:CG_u} \\ \boldsymbol{r}^{(k)} &= \boldsymbol{r}^{(k-1)}-\alpha_k\boldsymbol{K}\boldsymbol{p}_k \label{eq:CG_r} \end{align} The conjugate gradient algorithm starts from an initial guess $\boldsymbol{u}^{(0)}$ with corresponding residual $\boldsymbol{r}^{(0)}=\boldsymbol{f}-\boldsymbol{K}\boldsymbol{u}^{(0)}$ and updates this guess according to equations \eqref{eq:CG_u}-\eqref{eq:CG_r} for $k=0,1,\dots$, until $\boldsymbol{r}^{(k)}$ is sufficiently small. In theory, CG terminates in at most $d$ steps; in practice, however, due to rounding errors it may take more than $d$ steps or even fail. Also, the improvement in the approximations $\boldsymbol{u}^{(k)}$ is determined by the condition number $c(\boldsymbol{K})$ of the system matrix $\boldsymbol{K}$; the larger $c(\boldsymbol{K})$ is, the slower the improvement. A standard approach to enhance the convergence of the CG method is preconditioning, namely the application of a linear transformation to the system with a matrix $\boldsymbol{T}$, called the preconditioner, in order to reduce the condition number of the problem. Thus, the original system $\boldsymbol{K}\boldsymbol{u}-\boldsymbol{f}=0$ is replaced with $\boldsymbol{T}\left(\boldsymbol{K}\boldsymbol{u}-\boldsymbol{f}\right)=0$, such that $c(\boldsymbol{T}\boldsymbol{K})$ is smaller than $c(\boldsymbol{K})$.
The preconditioned CG consists of the following steps: \begin{algorithm} \setstretch{1.0} \caption{PCG algorithm}\label{alg:PCGalgorithm} \begin{algorithmic}[1] \State \textbf{Input:} $\boldsymbol{K}\in\mathbb{R}^{d\times d}$, rhs $\boldsymbol{f}\in\mathbb{R}^d$, preconditioner $\boldsymbol{T}\in\mathbb{R}^{d \times d}$, residual tolerance $\delta$ and an initial approximation $\boldsymbol{u}^{(0)}$ \State set $k=0$, initial residual $\boldsymbol{r}^{(0)}=\boldsymbol{f}-\boldsymbol{K}\boldsymbol{u}^{(0)}$ \State $\boldsymbol{s}_0=\boldsymbol{T}\boldsymbol{r}^{(0)}$ \State $\boldsymbol{p}_0=\boldsymbol{s}_0$ \While{$\Vert \boldsymbol{r}^{(k)}\Vert \geq \delta$ } \State $\alpha_k=\frac{\left(\boldsymbol{r}^{(k)}\right)^T\boldsymbol{s}_k}{\boldsymbol{p}_k^T\boldsymbol{K}\boldsymbol{p}_k}$ \State $\boldsymbol{u}^{(k+1)}=\boldsymbol{u}^{(k)}+\alpha_k \boldsymbol{p}_k$ \State $\boldsymbol{r}^{(k+1)}=\boldsymbol{r}^{(k)}-\alpha_k \boldsymbol{K}\boldsymbol{p}_k$ \State $\boldsymbol{s}_{k+1}=\boldsymbol{T}\boldsymbol{r}^{(k+1)}$ \State $\beta_k=\frac{\left(\boldsymbol{r}^{(k+1)}\right)^T\boldsymbol{s}_{k+1}}{\left(\boldsymbol{r}^{(k)}\right)^T\boldsymbol{s}_{k}}$ \State $\boldsymbol{p}_{k+1}=\boldsymbol{s}_{k+1}+\beta_k\boldsymbol{p}_k$ \State $k=k+1$ \EndWhile \end{algorithmic} \end{algorithm} The choice of the preconditioner $\boldsymbol{T}$ in PCG plays a crucial role in the fast convergence of the algorithm. Some generic choices include the Jacobi (diagonal) preconditioner $\boldsymbol{T}=diag(1/K_{11},\cdots,1/K_{dd})$, the incomplete Cholesky factorization $\boldsymbol{T}=\boldsymbol{\hat{K}}^{-1}$, where $\boldsymbol{\hat{K}}=\boldsymbol{\hat{L}}\boldsymbol{\hat{L}}^T$ is an approximation of $\boldsymbol{K}$ with a cheap Cholesky factorization, and the incomplete LU factorization. Moreover, multigrid methods such as the AMG, elaborated in the next section, apart from being standalone iterative schemes, are also very effective as preconditioners to the CG method.
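For concreteness, a minimal NumPy sketch of Algorithm \ref{alg:PCGalgorithm} is given below. This is an illustrative implementation rather than the code used in this work; the SPD test matrix in the usage example is arbitrary, and the Jacobi preconditioner is one of the generic choices listed above.

\begin{verbatim}
import numpy as np

def pcg(K, f, T, u0, delta=1e-8, max_iter=1000):
    # Preconditioned conjugate gradient (cf. Algorithm 1)
    u = u0.copy()
    r = f - K @ u                  # initial residual
    s = T @ r                      # preconditioned residual
    p = s.copy()                   # initial search direction
    k = 0
    while np.linalg.norm(r) >= delta and k < max_iter:
        Kp = K @ p
        alpha = (r @ s) / (p @ Kp)
        u = u + alpha * p
        r_new = r - alpha * Kp
        s_new = T @ r_new
        beta = (r_new @ s_new) / (r @ s)
        p = s_new + beta * p
        r, s = r_new, s_new
        k += 1
    return u, k

# Usage with the Jacobi (diagonal) preconditioner:
rng = np.random.default_rng(0)
d = 200
A = rng.standard_normal((d, d))
K = A @ A.T + d * np.eye(d)        # SPD test matrix
f = rng.standard_normal(d)
T = np.diag(1.0 / np.diag(K))      # T = diag(1/K_11, ..., 1/K_dd)
u, iters = pcg(K, f, T, np.zeros(d))
print(iters, np.linalg.norm(f - K @ u))
\end{verbatim}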
\subsection{Algebraic Multigrid Method} \label{sec2.2} AMG was originally introduced in the 1980s \cite{Ruge1987} as an efficient numerical approach for solving large ill-conditioned sparse linear systems and eigenproblems. Its main difference from the (geometric) multigrid method lies in the method of coarsening: while multigrid methods require knowledge of the mesh, AMG methods extract all the needed information from the system matrix. AMG methods have been successfully applied to numerous problems including PDEs, sparse Markov chains and problems involving graph Laplacians (e.g. \cite{Stuben2001, Brezina2001, Treister2010,Napov2016, Facca2021}). The key idea in AMG algorithms is to employ a hierarchy of progressively coarser approximations to the linear system under consideration in order to accelerate the convergence of classical, simple and cheap iterative processes, such as the damped Jacobi or Gauss-Seidel, commonly referred to as relaxation or smoothing. Relaxation is very efficient in eliminating the high-frequency error modes, but inefficient in reducing the low-frequency ones. AMG overcomes this problem through the coarse-level correction, elaborated below. Let us consider the linear system of eq. \eqref{eq:fineProblem}, which describes the fine problem, and let $\boldsymbol{u}^{(0)}$ be an initial solution to it. The two-level AMG defines a prolongation operator $ \boldsymbol{P}$, which is a full-column-rank matrix in $\mathbb{R}^{d\times d_c}$, $d_c < d$, and a relaxation scheme such as Gauss-Seidel (GS). Then, the two-level AMG algorithm consists of the following steps: \begin{algorithm} \setstretch{1.0} \caption{Two-level AMG algorithm}\label{alg:AMGalgorithm} \begin{algorithmic}[1] \State \textbf{Input:} $\boldsymbol{K}\in\mathbb{R}^{d\times d}$, rhs $\boldsymbol{f}\in\mathbb{R}^d$, prolongation operator $\boldsymbol{P}\in\mathbb{R}^{d \times d_c}$, a relaxation scheme denoted as $\mathcal{G}$, residual tolerance $\delta$ and an initial approximation $\boldsymbol{u}^{(0)}$ \State set $k=0$, initial residual $\boldsymbol{r}^{(0)}=\boldsymbol{f}-\boldsymbol{K}\boldsymbol{u}^{(0)}$ \While{$\Vert \boldsymbol{r}^{(k)}\Vert \geq \delta$ } \State Pre-relaxation: Perform $r_1$ iterations of the relaxation scheme on the current approximation and obtain $\tilde{\boldsymbol{u}}^{(k)}$ as: $\tilde{\boldsymbol{u}}^{(k)} \gets \mathcal{G}\left(\boldsymbol{u}^{(k)};r_1 \right)$ \State Compute the residual: $\boldsymbol{r}^{(k)}=\boldsymbol{f}-\boldsymbol{K}\tilde{\boldsymbol{u}}^{(k)}$ \State Restrict the residual to the coarser level and solve the coarse level system $\boldsymbol{K}_c\boldsymbol{e}_c^{(k)}=\boldsymbol{P}^T\boldsymbol{r}^{(k)}$, where $\boldsymbol{K}_c=\boldsymbol{P}^T \boldsymbol{K} \boldsymbol{P} \in \mathbb{R}^{d_c \times d_c}$ \State Prolongate the coarse grid error $\boldsymbol{e}^{(k)}=\boldsymbol{P}\boldsymbol{e}_c^{(k)}$ \State Correct the fine grid solution: $\tilde{\boldsymbol{u}}^{(k)}=\tilde{\boldsymbol{u}}^{(k)}+\boldsymbol{e}^{(k)}$ \State Post-relaxation: Perform additional $r_2$ relaxation iterations and obtain $\boldsymbol{u}^{(k)} \gets \mathcal{G}\left(\tilde{\boldsymbol{u}}^{(k)};r_2 \right)$ \State $k=k+1$ \EndWhile \end{algorithmic} \end{algorithm} In the above algorithm, lines 4-10 describe what is known as a $V$-cycle, which is schematically depicted in figure \ref{fig:2levelAMG}. The multi-level version of the above algorithm is easily obtained as the result of recursively applying the two-level algorithm, as shown in fig. \ref{fig:3levelAMG} for the 3-level setting. \begin{figure}[H] \centering \begin{subfigure}{0.42\textwidth} \includegraphics[width=\textwidth]{2levelAMG.png} \subcaption{} \label{fig:2levelAMG} \end{subfigure} \begin{subfigure}{0.42\textwidth} \includegraphics[width=\textwidth]{3levelAMG.png} \subcaption{} \label{fig:3levelAMG} \end{subfigure} \caption{Multigrid V-cycles in a (a) 2-level and a (b) 3-level setting} \label{fig:1d_solutions} \end{figure} To better illustrate algorithm \ref{alg:AMGalgorithm} and its convergence properties, let us consider the GS algorithm as the relaxation scheme, where the matrix $\boldsymbol{K}$ is split into $\boldsymbol{K}=\boldsymbol{L}+\boldsymbol{V}$, with $\boldsymbol{L}$ being the lower triangular part of $\boldsymbol{K}$ that includes the diagonal elements and $\boldsymbol{V}$ the strictly upper triangular part of $\boldsymbol{K}$. The iterative scheme of the GS method is as follows: \begin{align} \label{eq:GaussSeidel} \boldsymbol{u}_{m+1}&=\boldsymbol{L}^{-1}\left(\boldsymbol{f}-\boldsymbol{V} \boldsymbol{u}_m\right) \nonumber \\ &= \boldsymbol{L}^{-1}\boldsymbol{f}-\boldsymbol{L}^{-1}\left(\boldsymbol{K}-\boldsymbol{L} \right) \boldsymbol{u}_m \nonumber\\ &= \boldsymbol{u}_{m}+\boldsymbol{L}^{-1}\left(\boldsymbol{f}-\boldsymbol{K}\boldsymbol{u}_m\right) \nonumber \\ &= \boldsymbol{u}_{m}+\boldsymbol{L}^{-1}\boldsymbol{r}_m \end{align} \noindent where the subscripts $m,m+1$ in the above equation denote the iteration number of the GS algorithm.
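Before analyzing the convergence of this scheme, the following sketch illustrates one possible realization of Algorithm \ref{alg:AMGalgorithm}, with Gauss-Seidel as the relaxation scheme $\mathcal{G}$ and the prolongation operator $\boldsymbol{P}$ taken as given (its construction is precisely what distinguishes the AMG variants discussed in the sequel); the 1D Poisson matrix and the aggregation-type prolongation in the usage example are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

def gauss_seidel(K, f, u, iters):
    # Gauss-Seidel relaxation u <- u + L^{-1} r, with K = L + V;
    # the dense solve stands in for forward substitution in practice
    L = np.tril(K)
    for _ in range(iters):
        u = u + np.linalg.solve(L, f - K @ u)
    return u

def two_level_amg(K, f, P, u0, delta=1e-8, r1=2, r2=2, max_cycles=200):
    # Two-level AMG V-cycles (cf. Algorithm 2)
    Kc = P.T @ K @ P                           # Galerkin coarse operator
    u = u0.copy()
    cycles = 0
    while np.linalg.norm(f - K @ u) >= delta and cycles < max_cycles:
        u = gauss_seidel(K, f, u, r1)          # pre-relaxation
        r = f - K @ u                          # fine-level residual
        ec = np.linalg.solve(Kc, P.T @ r)      # coarse-level correction
        u = u + P @ ec                         # prolongate and correct
        u = gauss_seidel(K, f, u, r2)          # post-relaxation
        cycles += 1
    return u, cycles

# Usage on a 1D Poisson system, two fine dofs per coarse dof:
d = 64
K = 2 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)
f = np.ones(d)
P = np.kron(np.eye(d // 2), np.ones((2, 1)))
u, cycles = two_level_amg(K, f, P, np.zeros(d))
print(cycles, np.linalg.norm(f - K @ u))
\end{verbatim}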
If $\boldsymbol{u}^\star$ is the exact solution to the system and $\boldsymbol{e}_m=\boldsymbol{u}^\star-\boldsymbol{u}_m$ the error after the $m$-th iteration, then \begin{align} \label{eq:GaussSeidel_error} \boldsymbol{e}_{m+1}&=\boldsymbol{u}^{\star}-\boldsymbol{u}_{m+1} \nonumber \\ &= \boldsymbol{e}_{m}+\boldsymbol{u}_{m}-\left(\boldsymbol{u}_{m}+\boldsymbol{L}^{-1}\boldsymbol{r}_m\right) \nonumber\\ &= \boldsymbol{e}_{m}-\boldsymbol{L}^{-1}\left(\boldsymbol{K}\boldsymbol{e}_m\right) \nonumber\\ &= \left(\boldsymbol{I}-\boldsymbol{L}^{-1}\boldsymbol{K} \right) \boldsymbol{e}_m \end{align} \noindent where $\boldsymbol{I}$ is the $d\times d$ identity matrix. Setting $\boldsymbol{M}=\boldsymbol{I}-\boldsymbol{L}^{-1}\boldsymbol{K}$, it is straightforward to show that \begin{equation} \boldsymbol{e}_{m+1}=\boldsymbol{M}\boldsymbol{e}_{m}=\boldsymbol{M}^2\boldsymbol{e}_{m-1}=\dots=\boldsymbol{M}^{m+1}\boldsymbol{e}_{0} \end{equation} \noindent Returning to alg. \ref{alg:AMGalgorithm}, and denoting by $\hat{\boldsymbol{e}}^{(k)}=\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)}$ the error after the pre-relaxation and by $\tilde{\boldsymbol{e}}^{(k)}$ the error after the coarse-level correction, the error at the end of the $k$-th cycle of the two-level AMG can be computed as: \begin{align} \boldsymbol{e}^{(k)}&= \boldsymbol{M}^{r_2}\tilde{\boldsymbol{e}}^{(k)} \ \ \ \ \ \ \ \ \ \ \text{(post-relaxation)}\nonumber \\ &= \boldsymbol{M}^{r_2}\left( \boldsymbol{I}-\boldsymbol{P}\left( \boldsymbol{P}^T \boldsymbol{K} \boldsymbol{P} \right)^{-1}\boldsymbol{P}^T\boldsymbol{K} \right)\hat{\boldsymbol{e}}^{(k)} \ \ \ \ \ \ \ \ \ \ \ \text{(coarse-level correction)} \nonumber \\ &= \boldsymbol{M}^{r_2}\left( \boldsymbol{I}-\boldsymbol{P}\left( \boldsymbol{P}^T \boldsymbol{K} \boldsymbol{P} \right)^{-1}\boldsymbol{P}^T\boldsymbol{K} \right)\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)} \ \ \ \ \ \ \ \ \ \ \ \text{(pre-relaxation)} \nonumber \\ &= \boldsymbol{M}^{r_2}\boldsymbol{C}\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)} \end{align} \noindent In the above equation, the matrix $\boldsymbol{M}^{r_2}\boldsymbol{C}\boldsymbol{M}^{r_1}$ determines the convergence behavior of the two-level cycle. The relaxation matrix $\boldsymbol{M}$ plays a role; in practice, however, the selection of the prolongation operator $\boldsymbol{P}$ is the key to designing an efficient algorithm. In this regard, the most popular variations of AMG include the Ruge-St{\"u}ben method \cite{Ruge1987} and the smoothed aggregation based (SA) AMG \cite{Vanek1996}. Lastly, another factor that affects the number of iterations AMG needs to reach the prescribed accuracy threshold is the choice of the initial solution. In the absence of other information, $\boldsymbol{u}^{(0)}=\boldsymbol{0}$ is usually considered. \section{Machine learning accelerated iterative solvers} \label{sec3} \subsection{Problem statement} \label{sec3.1} The aim in this section is to develop an efficient data-driven solver for the parametrized system of eq. \eqref{eq:fineProblem}, by combining linear algebra-based solvers with machine learning algorithms. More specifically, the idea proposed herein is to utilize a reduced set of high-fidelity system solutions, obtained after solving eq. \eqref{eq:fineProblem} for specified parameter instances, in two different yet complementary ways. First, a surrogate model will be established in the form of a `cheap-to-evaluate' nonlinear mapping from the problem's parameter space to its solution space using convolutional neural networks (CNNs) and feedforward neural networks (FFNNs).
Even though CNNs and FFNNs have been shown to produce astonishing results even for challenging applications \cite{Mo2019, XU2020, NIKOLOPOULOS2022}, their black-box and physics-agnostic nature does not provide any means to improve the solutions they produce. To combat this problem, POD is performed on this data set of solutions and an efficient iterative solver is developed based on the idea of AMG, where in this case the prolongation operator is substituted by the projection matrix to the POD reduced space. \subsection{Surrogate model}\label{sec3.2} A surrogate model is an imitation of the original high-fidelity model and serves as a `cheap' mapping from the parametric space $\boldsymbol{\theta} \in \mathbb{R}^n$ to the solution space $\boldsymbol{u} \in \mathbb{R}^d$. In general, it is built upon an initial data set $\lbrace\boldsymbol{u}_i\rbrace_{i=1}^{N}$, which is created by solving the problem for a small, yet sufficient, number $N$ of parameter values. It is essential to span the problem's parametric space effectively; thus, sophisticated sampling methods are often utilized, such as the Latin Hypercube \cite{olsson2002latin}. Many surrogate modeling techniques have been introduced over the past years, including linear \cite{Ladeveze2011,Zahr2017,Agathos2020} and nonlinear \cite{NIKOLOPOULOS2021, NIKOLOPOULOS2022, KALOGERIS2021} dimensionality reduction methods. The selection of the appropriate method is problem dependent. In the present work, a surrogate modeling scheme based on convolutional autoencoders (CAEs) and feedforward neural networks (FFNNs) that was introduced in \cite{NIKOLOPOULOS2021} for parametrized time-dependent PDEs is employed. It consists of two phases, namely the offline and the online phase. The offline phase begins with the training of a CAE that consists of an encoder and a decoder, in order to obtain low-dimensional latent representations $\boldsymbol{z}_{i} \in \mathbb{R}^l$ through the encoder, with $l\ll d$, and a reconstruction map through the decoder. It is trained over the initial data set $\lbrace\boldsymbol{u}_i\rbrace_{i=1}^{N}$ to minimize the objective function: \begin{equation}\label{CAE} \mathcal{L}_{CAE} = \frac{1}{N}\sum_{i=1}^{N}||\boldsymbol{u}_{i} - \tilde{\boldsymbol{u}}_{i}||_{2}^{2} \end{equation} where $\tilde{\boldsymbol{u}}_{i}$ is the reconstructed input. After the training is completed, the latent space data set $\lbrace\boldsymbol{z}_i\rbrace_{i=1}^{N}$ is obtained. The second step of the offline phase is the training of the FFNN, which is used to establish a nonlinear mapping from the parametric space $\boldsymbol{\theta} \in \mathbb{R}^n$ to the latent space $\boldsymbol{z} \in \mathbb{R}^l$. Again, the aim of the training is the minimization of the loss function: \begin{equation}\label{FFNN} \mathcal{L}_{FFNN} = \frac{1}{N}\sum_{i=1}^{N}||\boldsymbol{z}_{i} - \tilde{\boldsymbol{z}}_{i}||_{2}^{2} \end{equation} where $\tilde{\boldsymbol{z}}_{i}$ is the network's output.
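A compact PyTorch sketch of this offline phase is given below; the architecture and the hyperparameters are placeholders chosen for illustration and are not the ones employed in the numerical examples of Section \ref{sec4}.

\begin{verbatim}
import torch
import torch.nn as nn

l, n, d_side = 8, 2, 64   # latent dim, parameter dim, field resolution

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * (d_side // 4) ** 2, l))
        self.dec = nn.Sequential(
            nn.Linear(l, 16 * (d_side // 4) ** 2),
            nn.Unflatten(1, (16, d_side // 4, d_side // 4)),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2))
    def forward(self, u):
        return self.dec(self.enc(u))

ffnn = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                     nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, l))

def train_offline(cae, ffnn, U, Theta, epochs_cae=40, epochs_ffnn=3000):
    # U: (N, 1, d_side, d_side) solution snapshots; Theta: (N, n) parameters
    mse = nn.MSELoss()
    opt = torch.optim.Adam(cae.parameters(), lr=5e-4)
    for _ in range(epochs_cae):          # minimize L_CAE
        opt.zero_grad(); loss = mse(cae(U), U); loss.backward(); opt.step()
    with torch.no_grad():
        Z = cae.enc(U)                   # latent data set {z_i}
    opt = torch.optim.Adam(ffnn.parameters(), lr=1e-4)
    for _ in range(epochs_ffnn):         # minimize L_FFNN
        opt.zero_grad(); loss = mse(ffnn(Theta), Z); loss.backward(); opt.step()
\end{verbatim}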
Consequently, the online phase utilizes the fully trained surrogate model, which is now capable of delivering accurate predictions of the system's response for new parameter values $\boldsymbol{\theta}_{j}$ as follows: \begin{equation} \boldsymbol{u}_{j} = decoder(FFNN(\boldsymbol{\theta}_{j})):=\mathcal{F}^{sur}(\boldsymbol{\theta}_j) \end{equation} \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{Surrogate.png} \caption{Schematic representation of the surrogate model} \label{fig:Surrogate} \end{figure} A schematic representation of the surrogate model is presented in figure \ref{fig:Surrogate}. \subsection{Multigrid-inspired POD solver}\label{sec3.3} POD, also known as Principal Component Analysis, is a powerful and effective approach for data analysis and dimensionality reduction, aimed at identifying the low-order modes of a system. In conjunction with the Galerkin projection procedure it is commonly utilized as an efficient method to reduce the dimensionality of large linear systems of equations \cite{Rathinam2003, LIEU2006, RAPUN2010}. The theory and application of POD is covered in many publications; however, to keep this paper as self-contained as possible, the POD procedure used within this framework is summarized below. Let us denote with $\boldsymbol{U}\in \mathbb{R}^{d\times N}$ the matrix consisting of $N$ solution vectors $\left[\boldsymbol{u}_1,...,\boldsymbol{u}_N \right]$ for different parameter values $\lbrace \boldsymbol{\theta}_i\rbrace_{i=1}^N$ and with $\boldsymbol{R}=\boldsymbol{U}\boldsymbol{U}^T\in \mathbb{R}^{d \times d}$ the correlation matrix. Then POD consists of the following steps. \begin{enumerate} \item Compute the eigenvalues and eigenvectors of $\boldsymbol{R}$ that satisfy $\boldsymbol{R} \boldsymbol{\Phi}=\boldsymbol{\Phi}\boldsymbol{\Lambda}$. This step can be very demanding when $d\gg 1$; however, in practice $N\ll d$ and, since $\boldsymbol{U}\boldsymbol{U}^T$ and $\boldsymbol{U}^T\boldsymbol{U}$ have the same non-zero eigenvalues, it is computationally more convenient to solve instead the eigenvalue problem $\boldsymbol{U}^T\boldsymbol{U} \boldsymbol{\Psi}=\boldsymbol{\Psi}\boldsymbol{\Lambda}$. Then, the eigenvectors $\boldsymbol{\Phi}$ and $\boldsymbol{\Psi}$ are linked according to the formula: \begin{equation} \label{eq:PODrelationEigenvectors} \boldsymbol{\Phi}=\boldsymbol{U}\boldsymbol{\Psi}\boldsymbol{\Lambda}^{-1/2} \end{equation} \item Form the reduced basis $\boldsymbol{\Phi}_r$ by retaining only the first $r$ columns of $\boldsymbol{\Phi}$, corresponding to the largest eigenvalues. \item Under the assumption that each solution to eq. \eqref{eq:fineProblem} can be approximated as \begin{equation} \boldsymbol{u}\approx \boldsymbol{\Phi}_r \boldsymbol{u}_r \end{equation} \noindent with $\boldsymbol{u}_r\in \mathbb{R}^r$ being the unknown coefficients of the projection on the truncated POD basis, the reduced-order linear system becomes: \begin{align} \label{eq:POD_ROM} &\boldsymbol{K}\boldsymbol{u}=\boldsymbol{f} \nonumber \\ &\boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r \boldsymbol{u}_r=\boldsymbol{\Phi}_r^T \boldsymbol{f} \nonumber \\ &\boldsymbol{K}_r \boldsymbol{u}_r=\boldsymbol{f}_r \end{align} \noindent Solving equation \eqref{eq:POD_ROM} for $\boldsymbol{u}_r$ is significantly easier since $\boldsymbol{K}_r \in \mathbb{R}^{r \times r}$, with $r$ small.
\item Retrieve the solution to the original problem: \begin{equation} \boldsymbol{u}=\boldsymbol{\Phi}_r\boldsymbol{u}_r \end{equation} \end{enumerate} Based on the above, a similarity between the 2-level AMG method and POD can be observed, under the identification of $\boldsymbol{\Phi}_r$ as the prolongation operator and $\boldsymbol{\Phi}_r^T$ as the corresponding restriction. With this identification, algorithm \ref{alg:AMGalgorithm} remains practically the same, and the error propagation of the resulting two-level iterative scheme becomes \begin{equation} \label{eq:FinalScheme} \boldsymbol{e}^{(k)}=\boldsymbol{M}^{r_2}\left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)} \end{equation} \subsection{Proposed data-driven framework for parameterized linear systems} \label{sec3.4} The final step is to combine the surrogate model of section \ref{sec3.2} and the multigrid-inspired POD solver of the previous section into a unified methodological framework for solving efficiently large-scale parametrized linear systems. In particular, an initial data set of system solutions $\lbrace\boldsymbol{u}_i\rbrace_{i=1}^{N}$ is generated for specified instances of the parameter vector $\lbrace\boldsymbol{\theta}_i\rbrace_{i=1}^{N}$. Then, these solution vectors are utilized as training data for the CAE and the FFNN, and the surrogate model is established. The error between the exact solution and the surrogate's prediction for a given $\boldsymbol{\theta}$ can be given as: \begin{equation} \boldsymbol{e}^{sur}=\boldsymbol{u}^{\star} -\mathcal{F}^{sur}(\boldsymbol{\theta}) \end{equation} Despite one's best efforts, however, $\Vert \boldsymbol{e}^{sur}\Vert \neq 0$ and the surrogate's predictions will not satisfy exactly equation \eqref{eq:fineProblem}. At this point, instead of simply performing iterations of PCG or AMG to improve the surrogate's predictions, we propose to further utilize the knowledge available to us from the data set of solution vectors, in order to enhance the performance of these iterative solvers. In particular, we perform POD on the solution matrix $\boldsymbol{U}=[\boldsymbol{u}_1,...,\boldsymbol{u}_{N}]$ to obtain the reduced basis $\boldsymbol{\Phi}_r$, and apply the two-level iterative scheme corresponding to equation \eqref{eq:FinalScheme} either directly, or as a preconditioner $\boldsymbol{T}=\boldsymbol{M}^{r_2}\left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{M}^{r_1}$ in the PCG algorithm.
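To make the above concrete, a minimal NumPy sketch of the POD basis construction (via the method of snapshots) and of the resulting POD-2G cycle with Gauss-Seidel relaxation is given below; in the proposed framework, the initial guess \texttt{u0} would be the surrogate prediction $\mathcal{F}^{sur}(\boldsymbol{\theta})$.

\begin{verbatim}
import numpy as np

def pod_basis(U, r):
    # Truncated POD basis from the snapshot matrix U (d x N), using the
    # small N x N eigenproblem U^T U Psi = Psi Lambda and
    # Phi = U Psi Lambda^{-1/2} (cf. eq. above)
    evals, Psi = np.linalg.eigh(U.T @ U)
    evals, Psi = evals[::-1], Psi[:, ::-1]       # descending order
    return U @ Psi[:, :r] / np.sqrt(evals[:r])

def pod_2g(K, f, Phi, u0, delta=1e-8, r1=2, r2=2, max_cycles=200):
    # POD-2G: two-level cycle with Phi_r as the prolongation operator
    L = np.tril(K)                               # Gauss-Seidel splitting
    Kr = Phi.T @ K @ Phi                         # reduced operator K_r
    u = u0.copy()                                # e.g. surrogate prediction
    cycles = 0
    while np.linalg.norm(f - K @ u) >= delta and cycles < max_cycles:
        for _ in range(r1):                      # pre-relaxation
            u += np.linalg.solve(L, f - K @ u)
        ur = np.linalg.solve(Kr, Phi.T @ (f - K @ u))   # coarse correction
        u += Phi @ ur
        for _ in range(r2):                      # post-relaxation
            u += np.linalg.solve(L, f - K @ u)
        cycles += 1
    return u, cycles
\end{verbatim}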
The proposed methodology is data-driven and, as such, it is not possible to provide a priori estimates of the error for general systems. Nevertheless, under the assumption that the training data set $\boldsymbol{U}$ is `large' enough to contain almost all possible variations of the solution vector, an estimate for the error can be provided as follows: \begin{align} \label{ineq:Main} \boldsymbol{e}^{(k)}&=\boldsymbol{M}^{r_2}\left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)} \Rightarrow \nonumber \\ \Vert \boldsymbol{e}^{(k)} \Vert &=\Vert \boldsymbol{M}^{r_2}\left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{M}^{r_1}\boldsymbol{e}^{(k-1)} \Vert \nonumber \\ &\leq \Vert \boldsymbol{M}^{r_2} \Vert \Bigg{\Vert} \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \Bigg{\Vert} \Vert \boldsymbol{M}^{r_1}\Vert \Vert \boldsymbol{e}^{(k-1)} \Vert \end{align} \noindent In the above, $\Vert \cdot \Vert$ denotes the $l_2$ vector norm when the input is a vector, and the induced operator norm (spectral norm) when the input is a matrix. We know, by construction, that $\boldsymbol{M}$ is a stable matrix and that its spectral radius $\rho(\boldsymbol{M})$ is less than 1. Therefore, there exist $\gamma\in (0,1)$ and $L_1>0$ such that: \begin{equation} \label{ineq:sideTerms} \Vert \boldsymbol{M}^q \Vert \leq L_1 \gamma^q, \ \ \ \forall q \geq 0 \end{equation} \noindent and hence $\Vert \boldsymbol{M}^{r_1} \Vert,\Vert \boldsymbol{M}^{r_2} \Vert \leq 1$ for $r_1, r_2$ large enough. Now, focusing on the term $\Bigg{\Vert} \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \Bigg{\Vert}$, by definition: \begin{equation} \Bigg{\Vert} \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \Bigg{\Vert}=\sup_{\boldsymbol{u}\in\mathbb{R}^d : \Vert \boldsymbol{u} \Vert = 1} \Bigg{\Vert} \left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{u} \Bigg{\Vert} \end{equation} \noindent Decomposing a given $\boldsymbol{u} \in \mathbb{R}^d$ as $\boldsymbol{u}=\boldsymbol{\Phi}_r\boldsymbol{u}_r + \boldsymbol{u}^\perp$, with $\boldsymbol{\Phi}_r\boldsymbol{u}_r\in \Phi$ and $\boldsymbol{u}^\perp\in \Phi^\perp$, where $\Phi=\overline{\text{span}\lbrace \boldsymbol{\phi}_1,...,\boldsymbol{\phi}_r \rbrace}$ and $\Phi^\perp$ is its orthogonal complement in $\mathbb{R}^d$, we obtain \begin{align} \left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{u} &= \boldsymbol{u}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \left(\boldsymbol{\Phi}_r\boldsymbol{u}_r+\boldsymbol{u}^\perp\right) \nonumber \\ &= \boldsymbol{\Phi}_r\boldsymbol{u}_r+\boldsymbol{u}^\perp- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \left(\boldsymbol{\Phi}_r\boldsymbol{u}_r+\boldsymbol{u}^\perp\right) \nonumber \\
&= \boldsymbol{\Phi}_r\boldsymbol{u}_r+\boldsymbol{u}^\perp- \boldsymbol{\Phi}_r\boldsymbol{u}_r-\boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \boldsymbol{u}^\perp \nonumber \\ &= \boldsymbol{u}^\perp-\boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \boldsymbol{u}^\perp \end{align} \noindent thus, \begin{equation} \label{ineq:middleTerm} \Bigg{\Vert} \left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{u}\Bigg{\Vert} \leq \Bigg{\Vert} \boldsymbol{I}-\boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K}\Bigg{\Vert} \Vert \boldsymbol{u}^\perp \Vert \leq L_2\Vert \boldsymbol{u}^\perp \Vert \end{equation} \noindent for some $L_2 > 0$. Due to the orthogonality of $\boldsymbol{\Phi}_r\boldsymbol{u}_r$ and $\boldsymbol{u}^\perp $, and since $\Vert \boldsymbol{u} \Vert = 1$, it follows that \begin{align} &\Vert \boldsymbol{u}^\perp \Vert ^2 = \Vert \boldsymbol{u} \Vert ^2 - \Vert \boldsymbol{\Phi}_r\boldsymbol{u}_r \Vert ^2 \Rightarrow \nonumber \\ &\Vert \boldsymbol{u}^\perp \Vert = \sqrt{ 1 - \Vert \boldsymbol{\Phi}_r\boldsymbol{u}_r \Vert ^2} \leq 1 \end{align} \noindent In fact, by choosing an appropriate number of eigenvectors $r$ in POD, we can obtain $\Vert \boldsymbol{u}^\perp \Vert \leq \frac{1}{L_2}$ and then, inequality \eqref{ineq:middleTerm} becomes \begin{equation} \label{ineq:middleTermUpdate} \Bigg{\Vert} \left( \boldsymbol{I}- \boldsymbol{\Phi}_r\left( \boldsymbol{\Phi}_r^T\boldsymbol{K}\boldsymbol{\Phi}_r\right)^{-1} \boldsymbol{\Phi}_r^T\boldsymbol{K} \right)\boldsymbol{u}\Bigg{\Vert} \leq \mathcal{C} \end{equation} \noindent with $\mathcal{C}\coloneqq \mathcal{C}(\boldsymbol{u}^\perp)$ and $\mathcal{C}\in (0,1)$. Inserting the inequalities \eqref{ineq:sideTerms} and \eqref{ineq:middleTermUpdate} into \eqref{ineq:Main}, we have: \begin{equation} \Vert \boldsymbol{e}^{(k)} \Vert \leq L_1 \gamma^{r_2} \mathcal{C} L_1 \gamma^{r_1} \Vert \boldsymbol{e}^{(k-1)} \Vert \end{equation} \noindent Applying the above inequality recursively, we conclude: \begin{align} \Vert \boldsymbol{e}^{(k)} \Vert &\leq (L_1 \gamma^{r_2})^k \mathcal{C}^k (L_1 \gamma^{r_1})^k \Vert \boldsymbol{e}^{(0)} \Vert \nonumber \\ &=(L_1 \gamma^{r_2})^k\mathcal{C}^k (L_1 \gamma^{r_1})^k \Vert \boldsymbol{e}^{sur} \Vert \end{align} The above inequality provides us with some valuable insight regarding the performance of the proposed data-driven solver. Most importantly, we notice the critical role that the surrogate's predictions play in the convergence, since the error is bounded by the surrogate's error $\Vert \boldsymbol{e}^{sur} \Vert$. Even though this result agrees with common intuition, having it rigorously proven rules out the possibility that a good initial prediction could require more iterations for the solution to converge. Secondly, retaining more eigenvectors to construct the reduced space $\Phi$ reduces the norm of $\boldsymbol{u}^\perp\in \Phi^\perp $, resulting in faster convergence. In the following section, we test the solver on numerical applications of scientific interest and assess its performance in comparison with conventional solvers. \section{Numerical applications} \label{sec4} The proposed methodology is tested on two parametrized systems.
The first case is the indirect tensile strength (ITS) test, which is treated with the theory of 2D linear elasticity, while the second one is a 3D deformable porous medium problem, also known as the Biot problem. \subsection{Indirect tensile strength test} A popular test to measure the tensile strength of concrete or asphalt materials is the ITS test. It consists of a cylindrical specimen loaded across its diameter to failure. The specimen is usually loaded at a constant deformation rate while the load response is measured. When the tensile stress developed in the specimen exceeds its tensile strength, the specimen fails. In this application, we restrict our analysis to the linear regime and model the cylinder as a 2D disk under plane strain assumptions, as shown in figure \ref{fig:ex1}. In this case, the weak form of the problem reads: Find $\boldsymbol{v}\in\mathcal{V}(\Omega)$ such that \begin{equation} \begin{aligned} \int_{\Omega}& \boldsymbol{\sigma}(\boldsymbol{v}): \boldsymbol{\epsilon}(\boldsymbol{w})d\Omega = \int_{\Omega}\boldsymbol{f}\cdot \boldsymbol{w} \, d\Omega, \quad \forall \boldsymbol{w}\in\mathcal{V}_c(\Omega) \\ \boldsymbol{\sigma} &= \lambda tr\left(\boldsymbol{\epsilon} \right)\mathbb{I}+2\mu\boldsymbol{\epsilon} \end{aligned} \label{eq:ITS} \end{equation} where \begin{equation} \boldsymbol{\epsilon}=\begin{bmatrix} \epsilon_{xx} & \epsilon_{xy} & 0 \\ \epsilon_{xy} & \epsilon_{yy} & 0 \\0 & 0 & 0 \end{bmatrix} \end{equation} \noindent is the strain tensor and $\boldsymbol{f}$ the loading. Also, $\mu$ and $\lambda$ are the Lamé constants, which are linked to the Young modulus $E$ and the Poisson ratio $\nu$ according to equations \eqref{eq:Lame}: \begin{equation} \begin{aligned} \mu &= \frac{E}{2(1 + \nu)} \\ \lambda &= \frac{E\nu}{(1 + \nu)(1 - 2\nu)} \end{aligned} \label{eq:Lame} \end{equation} \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{ex1} \caption{Diametrically point loaded disk} \label{fig:ex1} \end{figure} In this example, the specimen has a diameter of $150 \ mm$ and, due to symmetry in geometry and loading, we only need to model one quarter of the disk, as illustrated in figure \ref{fig:snapshot}. The approximate solution $\boldsymbol{u}\in\mathbb{R}^d$ to eq. \eqref{eq:ITS} is obtained through the finite element method using a mesh that consists of triangular plane-strain finite elements with a total of $d = 5656$ dofs. The Young modulus $E$ and the load $P$ are considered uncorrelated random variables following the lognormal distribution as described in table \ref{table:RandomParameters}. The Poisson ratio is considered a constant parameter $\nu = 0.3$. Figure \ref{fig:snapshot} also displays a contour plot of the displacement magnitude $ \Vert \boldsymbol{u} \Vert$ for $E = 2000 \ MPa$ and $P = -1000 \ N$. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Parameter & Distribution & Mean & Standard deviation \\ \hline $E(MPa)$ & Lognormal & $2000$ & $600$\\ \hline $P(N)$ & Lognormal & $-1000$ & $300$\\ \hline \end{tabular} \caption{Random parameters of the ITS test} \label{table:RandomParameters} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.65\textwidth]{snapshot.png} \caption{Displacement magnitude $||\boldsymbol{u}||$ for $E = 2000 \ MPa$ and $P = -1000 \ N$} \label{fig:snapshot} \end{figure} The first step of the proposed solver is to generate a sufficient number of samples. To this purpose, the Latin Hypercube sampling method was utilized to generate $N = 200$ parameter samples $\{[E_i, P_i]\}_{i=1}^{N}$.
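The sampling step can be implemented, for instance, with SciPy's quasi-Monte Carlo module, as sketched below; the conversion from the target mean and standard deviation of table \ref{table:RandomParameters} to the parameters of the underlying normal distribution, as well as the handling of the negative load by sampling its magnitude and flipping the sign, are assumptions made for illustration.

\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

def lognormal_params(mean, std):
    # Underlying normal (mu, sigma) for a lognormal with given mean/std
    sigma2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

N = 200
uq = qmc.LatinHypercube(d=2, seed=0).random(N)   # N x 2 points in (0,1)^2

muE, sE = lognormal_params(2000.0, 600.0)        # E ~ Lognormal, in MPa
muP, sP = lognormal_params(1000.0, 300.0)        # |P| ~ Lognormal, in N
E = np.exp(muE + sE * norm.ppf(uq[:, 0]))
P = -np.exp(muP + sP * norm.ppf(uq[:, 1]))       # load acts downwards
samples = np.column_stack([E, P])                # {[E_i, P_i]}, i = 1..N
\end{verbatim}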
Subsequently, the corresponding problems are solved with the finite element method and the solution vectors obtained, $\{\boldsymbol{u}_i\}_{i=1}^{N}$, are regarded as `exact' solutions. Next, a surrogate model is trained over these solutions in order to establish a `cheap' mapping from the parametric to the solution space, following the methodology described in section \ref{sec3.2}. The selected CAE and FFNN architectures are presented in figure \ref{fig:surrogate_1}. To tackle the problem of overfitting, the standard hold-out approach was employed. In particular, the data set was randomly divided into train and validation subsets with a ratio of 70\%-30\%, and each network's performance on the validation data set was assessed in order to avoid overfitting. The CAE is trained for 40 epochs with a batch size of 10 and a learning rate of $5e-4$, while the FFNN is trained for 3000 epochs with a batch size of 20 and a learning rate of $1e-4$. The average normalized $l_2$ norm error of the surrogate model on the test data set is 0.54\%. \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{surrogate_1.png} \caption{Surrogate model architecture} \label{fig:surrogate_1} \end{figure} The second step is to form the POD basis $\boldsymbol{\Phi}_r$ by performing an eigendecomposition of the correlation matrix $\boldsymbol{U}\boldsymbol{U}^T$ (or, equivalently, of $\boldsymbol{U}^T\boldsymbol{U}$), with $\boldsymbol{U} = [\boldsymbol{u}_1,\dots,\boldsymbol{u}_N]$ being the solution matrix. In this case, the number of eigenvectors kept is $r=8$, which corresponds to over 99.99\% of the variance in the training data. Consequently, once all components of the proposed POD-2G solver are defined and fully trained, the methodology described in section \ref{sec3.4} can be applied to obtain new system solutions for different parameter values. In order to test the proposed POD-inspired solver, a number of $N_{test} = 500$ test parameter samples $\{[E_j, P_j]\}_{j=1}^{N_{test}}$ were generated according to their distribution. The corresponding problems were solved with the Ruge-St{\"u}ben AMG solver for 2, 3 and 5 grids and with the proposed POD-2G solver, for different values of the tolerance. The mean value of the CPU time and the number of cycles required for convergence to the desired tolerance are displayed in figure \ref{fig:Results_AMG_1}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_AMG_time.png} \caption{Comparison of mean CPU time} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_AMG_cycles.png} \caption{Comparison of mean number of cycles} \end{subfigure} \caption{Comparison of mean CPU time and mean number of cycles over 500 analyses for different multigrid solvers} \label{fig:Results_AMG_1} \end{figure} The results indicate that the proposed POD-based solver significantly surpasses traditional AMG solvers in computational cost. In particular, for $\varepsilon = 1e-5$, which is a typical target tolerance in most scientific applications, and $\boldsymbol{u}^{(0)} = \boldsymbol{0}$, a reduction of computational cost of $\times 2.53$ is observed between the proposed solver and the 3-grid AMG solver. A key component of the proposed methodology is to obtain a close estimate of the solution from the surrogate model, $\boldsymbol{u}^{(0)} = \boldsymbol{u}_{sur}$.
As observed from the convergence analysis, an initial solution $\boldsymbol{u}^{(0)}$ with a normalized error of the order of 1.00\% is capable of drastically reducing the computational cost. Specifically, for $\varepsilon = 1e-5$, the decrease in CPU time is $\times 13.89$ when compared with the case of $\boldsymbol{u}^{(0)} = \boldsymbol{0}$. Furthermore, the convergence behaviour of the proposed method when used as a preconditioner in the context of the PCG method is presented in figure \ref{fig:Results_PCG_1}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_PCG_time.png} \caption{Comparison of mean CPU time} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_PCG_cycles.png} \caption{Comparison of mean PCG iterations} \end{subfigure} \caption{Comparison of mean CPU time and mean number of PCG iterations over 500 analyses for different preconditioners} \label{fig:Results_PCG_1} \end{figure} \noindent Again, the results obtained prove that the proposed methodology is superior to classic AMG preconditioners. In particular, for $\varepsilon = 1e-5$ and $\boldsymbol{u}^{(0)} = \boldsymbol{0}$, a reduction of computational cost of $\times 1.53$ is observed between the proposed method and the 3-grid AMG. In addition, the initial solution delivered by the surrogate model, $\boldsymbol{u}^{(0)} = \boldsymbol{u}_{sur}$, is again a crucial factor for fast convergence. In that case, the reduction of computational time is $\times 4.00$ when compared with using $\boldsymbol{u}^{(0)} = \boldsymbol{0}$. Finally, in order to highlight the computational gain of the proposed framework in the context of the Monte Carlo method, $N_{MC} = 1e+05$ simulations are performed to determine the probability density function (PDF) of the vertical displacement $u_{y}^{top}$ of the top node (where the load $P$ is applied). The calculated PDF is presented in figure \ref{fig:ex1_PDF}. Each simulation is solved with PCG and two different preconditioners, namely the proposed POD-2G method and a standard 3-grid Ruge-St{\"u}ben AMG preconditioner. The results are displayed in figure \ref{fig:ex1_cost} and prove that the proposed method is superior to classic AMG when dealing with parametrized systems. In particular, the conventional method needed 21109 $s$ to complete the $10^5$ simulations, while the proposed data-driven solver required 4013 $s$ for the same task, including the offline cost (initial simulations and training of the surrogate model). This translates to a remarkable decrease in CPU time of $\times 5.26 $. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_PDF.png} \caption{PDF of $u_{y}^{top}$} \label{fig:ex1_PDF} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex1_cost.png} \caption{Comparison of computational cost} \label{fig:ex1_cost} \end{subfigure} \caption{PDF of $u_{y}^{top}$ for $1e+05$ MC simulations and comparison of computational cost.} \end{figure} \subsection{Biot problem - deformable porous medium} Biot's theory describes wave propagation in a porous saturated medium, i.e., a medium made of a solid matrix fully saturated with a fluid. It does not take into account the microscopic level and assumes that continuum mechanics can be applied to measurable macroscopic quantities \cite{2001219}.
The Biot problem in weak form can be stated as: Find $\boldsymbol{v}\in\mathcal{V}(\Omega;\mathbb{R}^3)$ and $p\in\mathcal{V}(\Omega;\mathbb{R})$ such that \begin{equation} \begin{aligned} \int_{\Omega} \boldsymbol{\sigma}(\boldsymbol{v}): \boldsymbol{\epsilon}(\boldsymbol{w})d\Omega - \int_{\Omega} p\boldsymbol{A}: \boldsymbol{\epsilon}(\boldsymbol{w})d\Omega = 0 &, \quad \forall \boldsymbol{w}\in\mathcal{V}_c(\Omega;\mathbb{R}^3) \\ \int_{\Omega} q\boldsymbol{A}: \boldsymbol{\epsilon}(\boldsymbol{v})d\Omega + \int_{\Omega} \nabla q \cdot \boldsymbol{D}\ (\nabla p)^T d\Omega = 0 &, \quad \forall q\in\mathcal{V}_c(\Omega;\mathbb{R}) \\ \boldsymbol{\sigma} = \lambda tr\left(\boldsymbol{\epsilon} \right)\mathbb{I}+2\mu\boldsymbol{\epsilon}& \end{aligned} \label{eq:Biot} \end{equation} \noindent with $\boldsymbol{A},\boldsymbol{D}$ being the Biot coefficient tensor and the diffusion tensor, respectively. In this test case, the domain $\Omega$ is a cube whose sides have a length of $L = 1.00 \ mm$. Regarding the boundary conditions, a pressure distribution $p^{left}\coloneqq p|_{x=0} = 1.0 \ MPa$ is applied on the left face of the cube, along with a displacement load $u_y^{top}\coloneqq u_y|_{z=1} = 0.20 \ mm$ on the top face, while all displacements $u_x, u_y$ and $u_z$ are restrained on the bottom face ($z=0$). The problem definition is presented in figure \ref{fig:biot_bc}. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{biot_bc.png} \caption{Geometry and boundary conditions of Biot problem} \label{fig:biot_bc} \end{figure} The finite element mesh consists of 3D hexahedral elements and the solution vector $\boldsymbol{u}\in{\mathbb{R}^d}$ consists of the nodal values of the displacements and the pressure, the total number of dofs being $d = 34839$. The Lamé constants $\mu$ and $\lambda$ are considered uncorrelated random variables following the lognormal distribution as described in table \ref{table:RandomParametersBiot}. The Poisson ratio $\nu$ is determined by: \begin{equation} \nu = \frac{\lambda}{2(\lambda + \mu)} < 0.5 \end{equation} \noindent We further assume that the Biot coefficient tensor $\boldsymbol{A}$ and the diffusion tensor $\boldsymbol{D}$ are constant, taking the values: \begin{equation} \boldsymbol{A}=\begin{bmatrix} 0.13 & 0.13 & 0.13 \\ 0.09 & 0.09 & 0.09 \\ 0 & 0 & 0 \end{bmatrix}, \quad \boldsymbol{D}=\begin{bmatrix} 2.0 & 0.2 & 0 \\ 0.2 & 2.0 & 0 \\ 0 & 0 & 0.5 \end{bmatrix} \end{equation} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Parameter & Distribution & Mean & Standard deviation \\ \hline $\mu(MPa)$ & Lognormal & $0.30$ & $0.09$\\ \hline $\lambda(MPa)$ & Lognormal & $1.70$ & $0.51$\\ \hline \end{tabular} \caption{Random parameters of the Biot problem} \label{table:RandomParametersBiot} \end{table} \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{biot_disp.png} \caption{Displacement magnitude $\Vert\boldsymbol{u}\Vert$} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{biot_pressure.png} \caption{Pressure distribution $p$} \end{subfigure} \caption{Displacement magnitude $\Vert \boldsymbol{u}\Vert$ and pressure distribution $p$ for $\lambda = 1.70 \ MPa$ and $\mu = 0.30 \ MPa$ } \label{fig:biot} \end{figure} \noindent Figure \ref{fig:biot} displays contour plots of the magnitude of $\boldsymbol{u}$ and of the pressure distribution $p$ for $\mu = 0.30 \ MPa$ and $\lambda = 1.70\ MPa$.
The first step of the proposed methodology is to create an initial solution space. To this purpose, the Latin Hypercube sampling method was utilized to generate $N = 300$ parameter samples $\{[\mu_i, \lambda_i]\}_{i=1}^{N}$. The next steps are similar to those of the previous numerical example. The surrogate's architecture is presented in figure \ref{fig:surrogate_2}. The CAE is trained for 100 epochs with a batch size of 10 and a learning rate of $1e-3$, while the FFNN is trained for 5000 epochs with a batch size of 20 and a learning rate of $1e-4$. The average normalized $l_2$ norm error of the surrogate model on the test data set is 0.68\%. \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{surrogate_2.png} \caption{Surrogate model architecture} \label{fig:surrogate_2} \end{figure} As in the previous numerical example, a number of $N_{test} = 500$ parameter vectors $\{[\mu_j, \lambda_j]\}_{j=1}^{N_{test}}$ were generated according to their distribution and the corresponding problems were solved with the proposed POD-inspired solver and different Ruge-St{\"u}ben AMG solvers. The results are very promising in terms of computational cost. In particular, for $\varepsilon = 1e-5$ and $\boldsymbol{u}^{(0)} = \boldsymbol{0}$, a reduction of computational cost of $\times 7.65$ is achieved when comparing the proposed solver with the 5-grid AMG solver. Furthermore, obtaining an accurate initial solution $\boldsymbol{u}^{(0)}$ is again a very important component of the proposed framework. Specifically, for $\varepsilon = 1e-5$, the decrease in CPU time is $\times 5.94$ when compared with the case of $\boldsymbol{u}^{(0)} = \boldsymbol{0}$. The mean value of the CPU time and the number of cycles required for convergence to the desired tolerance are displayed in figure \ref{fig:Results_AMG_2}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_AMG_time.png} \caption{Comparison of CPU time} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_AMG_cycles.png} \caption{Comparison of cycles} \end{subfigure} \caption{Comparison of CPU time and number of cycles for different multigrid solvers} \label{fig:Results_AMG_2} \end{figure} Furthermore, the convergence behaviour of the proposed method when used as a preconditioner in the context of the PCG method is presented in figure \ref{fig:Results_PCG_2}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_PCG_time.png} \caption{Comparison of CPU time} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_PCG_iter.png} \caption{Comparison of PCG iterations} \end{subfigure} \caption{Comparison of CPU time and number of PCG iterations for different preconditioners} \label{fig:Results_PCG_2} \end{figure} \noindent Again, the results delivered by the proposed methodology showed its superiority not only over AMG preconditioners, but also over ILU and Jacobi preconditioners. In particular, for $\varepsilon = 1e-5$ and $\boldsymbol{u}^{(0)} = \boldsymbol{0}$, a reduction of computational cost of $\times 2.37$ is observed between the proposed method and the 5-grid AMG, of $\times 1.63$ with respect to the ILU, and of $\times 1.16$ with respect to the Jacobi preconditioner. Last but not least, the initial solution delivered by the surrogate model, $\boldsymbol{u}^{(0)} = \boldsymbol{u}_{sur}$, is again a crucial factor for fast convergence.
In that case, the reduction of computational time is $\times 2.12$ when compared with using $\boldsymbol{u}^{(0)} = \boldsymbol{0}$. Finally, in order to highlight the computational gain of the proposed framework in the context of the Monte Carlo method, $N_{MC} = 2e+05$ simulations are performed to determine the probability density function (PDF) of the displacement magnitude $||u||$ of the monitored node (see figure \ref{fig:biot_bc}). The calculated PDF is presented in figure \ref{fig:ex2_PDF}. As in the previous example, each simulation is solved with PCG and different preconditioners, namely the proposed POD-2G, a standard 3-grid Ruge-St{\"u}ben AMG and the Jacobi preconditioner. Again, the results obtained by the proposed method demonstrate a significant computational advantage over conventional preconditioners. In particular, the Jacobi preconditioner needed $4.23e+05$ $s$ to complete the $2e+05$ simulations, while the proposed data-driven solver required $1.75e+05$ $s$ for the same task, including the offline cost (initial simulations and training of the surrogate model). This translates to a decrease in CPU time of $\times 2.42$. \begin{figure}[H] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_PDF.png} \caption{PDF of $||u||$} \label{fig:ex2_PDF} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{ex2_cost.png} \caption{Comparison of computational cost} \label{fig:ex2_cost} \end{subfigure} \caption{PDF of $||u||$ for $2e+05$ MC simulations and comparison of computational cost.} \end{figure} \section{Conclusions} \label{sec5} The present work introduced a framework for accelerating the solution of parametrized problems that require multiple model evaluations. The proposed framework consisted of two distinct yet complementary steps. The first step in the methodology was the construction of a `cheap-to-evaluate' metamodel using FFNNs and CAEs, trained over a reduced set of high-fidelity system solutions. Despite giving very accurate predictions at new parameter instances, these predictions were bound to exhibit some discrepancy with respect to the actual system solutions, since they are not constrained by any physical laws. The second step in the methodology aimed precisely at remedying this, by introducing a data-driven iterative solver, inspired by the AMG method, which refines the metamodel's predictions until a prescribed level of accuracy has been attained. In particular, using again the already available set of high-fidelity system solutions, POD was performed on this set to identify the subspace that captures most of the variation in the system responses. Next, a 2-level multigrid scheme was developed, termed POD-2G, using the projection operator from POD as the prolongation operator. This scheme was tested on numerical applications as a standalone solver, as well as a preconditioner to PCG, and in both cases its superior performance with respect to conventional iterative solvers was demonstrated. \bibliographystyle{elsarticle-num}
\section{Introduction} Several decades of independent observations \citep{Perlmutter_1997, riess1998observational,bamba2012dark} confirm that our Universe is currently in a phase of accelerated expansion. The cause of this cosmic acceleration is attributed to the so-called ``dark energy'' \citep{sahni2000case,RevModPhys.75.559,copeland2006dynamics,amendola_tsujikawa_2010}, a fluid that violates the strong energy condition. Einstein's cosmological constant ($\Lambda$) with an effective fluid equation of state (EoS) $ P/\rho = w(z) = -1$ provides the simplest explanation for the cosmic acceleration. While several cosmological observations are consistent with the concordance LCDM model, there are several inconsistencies arising from both theoretical considerations (like the smallness of $\Lambda$, the `fine tuning problem') and observations (like the low-redshift measurements of $H_0$ \citep{Riess_2016}). This has led to many significant efforts in developing alternative scenarios that model dark energy and thereby explain the cosmic acceleration without requiring a cosmological constant. Generally speaking, there are two ways to tackle the problem. One approach involves modifying the gravity theory itself on large scales \citep{amendola_tsujikawa_2010}; $f(R)$ modifications to the Einstein action \citep{Khoury_2004,Starobinsky_2007,Hu_2007,nojiri2007introduction} belong to this approach of modeling cosmic acceleration. In a second approach, the matter sector of Einstein's field equations is modified by considering a dark energy fluid with some nontrivial dynamics. In both approaches one may find an effective dark energy EoS which varies dynamically as a function of redshift and can, in principle, be distinguished from the cosmological constant ($\Lambda$). There are many models for dark energy that predict a dynamical equation of state. For example, in the quintessence models, dark energy arises from a time-dependent scalar field $\phi$ \citep{Ratra-Peebles_1988,Steinhardt_1998,PhysRevD.59.123504,PhysRevLett.82.896,scherrer2008thawing}. However, these models still require fine tuning for consistency with observations. A wide variety of phenomenological potentials have been explored for the quintessence field to achieve $w \approx -1$. In all these models, the minimally coupled scalar field is expected to roll slowly in the present epoch. However, other than for a few restricted classes of potentials, it is difficult to prevent corrections from various symmetry-breaking mechanisms, which tend to spoil the slow-roll condition \citep{panda2011axions}. Weak gravitational lensing by the intervening large scale structure distorts the images of distant background galaxies. This is attributed to the deflection of light by the fluctuating gravitational field created by the intervening mass distribution and is quantified using the shear and convergence of photon geodesics. The statistical properties of these distortion fields are quantified using the shear/convergence power spectrum. These carry the imprint of the power spectrum of the intervening matter field as well as of the cosmological evolution, and thereby bear the signatures of structure formation. Dark energy affects the growth of cosmic structures and geometric distances, which crucially affects the power spectrum of the lensing distortion fields. Thus, weak lensing has become one of the important cosmological probes.
Several weak lensing experiments are either ongoing or upcoming, such as the Dark Energy Survey \citep{abbott2016dark}, the Hyper Suprime-Cam survey \citep{aihara2018hyper}, the Large Synoptic Survey Telescope \citep{ivezic2008large}, the Wide-Field Infrared Survey Telescope \citep{wright2010wide,spergel2015wide}, and Euclid \citep{laureijs2011euclid}. The 3D tomographic imaging of the neutral hydrogen (HI) distribution is one of the promising tools for understanding large scale structure formation and the nature of dark energy \citep{poreion1, poreion0}. The dominant part of the low-density hydrogen gets completely ionized by the end of reionization around $z \sim 6$ \citep{Gallerani_2006}. However, a small fraction of HI survives the complex processes of reionization and is believed to remain housed in the over-dense regions of the IGM. These clumpy HI clouds remain neutral amidst the radiation field of background ionizing sources, as they are self-shielded, and are the dominant source of the 21-cm radiation in the post-reionization epoch. Intensity mapping of such redshifted 21-cm radiation aims to map out the large scale HI distribution without resolving the individual DLA sources and promises to be a powerful probe of large scale structure and background cosmological evolution \citep{param1, param2, param3, param4}. Several radio telescopes like the GMRT\footnote{http://gmrt.ncra.tifr.res.in/}, OWFA\footnote{https://arxiv.org/abs/1703.00621}, MeerKAT\footnote{http://www.ska.ac.za/meerkat/}, MWA\footnote{https://www.mwatelescope.org/}, CHIME\footnote{http://chime.phas.ubc.ca/}, and SKA\footnote{https://www.skatelescope.org/} are pursuing a detection of the cosmological 21-cm signal for tomographic imaging \citep{Mao_2008}. We consider the cross-correlation of the HI 21-cm signal with the galaxy weak lensing convergence field. It is known \citep{fonseca2017probing} that cross-correlations of individual tracers of the IGM often offer crucial advantages over auto-correlations. The systematic noise that arises in the individual surveys poses less of a threat to the cross-correlation signal, as it only appears in the variance. Further, the foregrounds and contaminants of individual surveys are, in most cases, uncorrelated and hence do not bias the cross-correlation signal \citep{GSarkar_2010,Vallinotto_2009}. The cross-correlation of the post-reionization HI 21-cm signal has been extensively studied \citep{Sarkar_2009,Guha_Sarkar_2010,GSarkar_2010,Sarkar_2019,Dash_2021}. The acoustic waves in the primordial baryon-photon plasma are frozen once recombination takes place at $z \sim 1000$. The sound horizon at the epoch of recombination provides a standard ruler which can then be used to calibrate cosmological distances. Baryons imprint a distinctive oscillatory signature on the cosmological power spectrum \citep{White_2005, Hu-eisen}. The BAO imprint on the 21-cm signal has been studied \citep{sarkar2013predictions,sarkar2011imprint}. The baryon acoustic oscillation (BAO) is an important probe of cosmology \citep{Eisenstein_2005, Percival_2007, Anderson_2012, shoji2009extracting, sarkar2013predictions} as it allows us to measure the angular diameter distance $D_A(z)$ and the Hubble parameter $H(z)$ using the transverse and the longitudinal oscillatory features respectively, thereby allowing us to put stringent constraints on dark energy models. We propose the BAO imprint on the cross-correlation of the 21-cm signal and weak lensing convergence as a probe of quintessence dark energy.
The paper is organized as follows. In Section 2 we discuss the cross-correlation of the weak lensing shear/convergence and the HI excess brightness temperature. We also discuss the BAO imprint and the estimation of errors on the BAO parameters, namely the expansion rate $H(z)$, the angular diameter distance $D_A(z)$, and the dilation factor $D_V(z)$, from the tomographic measurement of the cross-correlation power spectrum using the Fisher formalism. In Section 3 we discuss the background evolution and structure formation in quintessence dark energy models and constrain the model parameters using Markov Chain Monte Carlo (MCMC) simulation. We discuss our results and other pertinent observational issues in the concluding section. \section{The cross-correlation signal} Weak gravitational lensing \citep{Bartelmann_2001} by the intervening large scale structure distorts the images of distant background galaxies. This is caused by the deflection of light by the fluctuating gravitational field created by the intervening mass distribution \citep{takada2004cosmological}. Weak lensing is a powerful cosmological probe as galaxy shear is sensitive to both the spacetime geometry and the growth of structures. The weak-lensing convergence field on the sky is given by a weighted line-of-sight integral \citep{waerbeke2003} of the matter overdensity field $\delta$ as \begin{equation} \kappa({\vec \theta}) = \int_0^{\chi_s} ~ \mathcal {A}_{\kappa} (\chi) \delta(\chi \vec \theta, \chi) d\chi \end{equation} where $\chi_s$ is the maximum distance to which the sources are distributed and the cosmology-dependent function $\mathcal {A}_{\kappa}(\chi)$ is given by \begin{equation} \mathcal {A}_{\kappa} (\chi) = \frac{3}{2} \Omega_{m0}H_0^{2} \frac{\chi}{{a(\chi)}} \int_\chi^{\chi_{_{s}}} n_{s}(z) \frac{dz}{d\chi '} \frac{\chi ' - \chi}{\chi '} d\chi ' \end{equation} where $\chi$ denotes the comoving distance and $a(\chi)$ the cosmological scale factor. The redshift selection function of source galaxies, $n_s(z)$, tends to zero at both low and high redshifts. It is typically modeled as a peaked function \citep{takada2004cosmological}, parametrized by ($\alpha$, $\beta$, $z_0$), of the form \begin{equation} n_s(z) = {N_0} z^{\alpha} e^{- \left( \frac{z}{z_0} \right)^{\beta}} \end{equation} and satisfies the normalization condition \begin{equation} \int_0^\infty n_s(z)\, dz = \bar{n}_g \end{equation} where $\bar{n}_g$ is the average number density of galaxies per unit steradian. On large scales, the redshifted HI 21-cm signal from the post-reionization epoch ($z<6$) is known to be a biased tracer of the underlying dark matter distribution \citep{Bagla_2010, Guha_Sarkar_2012, Sarkar_2016}. We use $\delta_T$ to denote the redshifted 21-cm brightness temperature fluctuations. The post-reionization HI signal has been studied extensively \citep{poreion0, poreion1, poreion2, poreion3, poreion4, poreion6, poreion7, poreion12, poreion8}. We follow the general formalism for the cross-correlation of the 21-cm signal with other cosmological fields given in \citet{Dash_2021}. Usually, for investigations involving the 21-cm signal, the radial information is retained for a tomographic study. The weak-lensing signal, on the contrary, consists of a line-of-sight integral whereby the redshift information is lost. We consider an average over the 21-cm signals from redshift slices and thus lose the individual redshift information, but improve the signal-to-noise ratio when cross-correlating with the weak-lensing field.
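For illustration, the lensing kernel $\mathcal{A}_\kappa(\chi)$ and the source distribution $n_s(z)$ defined above can be evaluated numerically as in the following Python sketch. The flat-LCDM background, the parameter values, and the normalization $\bar{n}_g = 1$ are assumptions made purely for the example.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, beta, z0 = 1.28, 0.97, 0.41        # illustrative (alpha, beta, z0)
Om0, H0 = 0.3, 70.0 / 2.998e5             # H0 in Mpc^-1 (c = 1 units)

def n_s(z, nbar=1.0):
    """Source distribution n_s(z) ~ z^alpha exp[-(z/z0)^beta], unit-normalized."""
    norm, _ = quad(lambda x: x**alpha * np.exp(-(x / z0)**beta), 0.0, 20.0)
    return nbar * z**alpha * np.exp(-(z / z0)**beta) / norm

def chi(z):
    """Comoving distance (Mpc) in an assumed flat LCDM background."""
    E = lambda x: np.sqrt(Om0 * (1 + x)**3 + 1.0 - Om0)
    val, _ = quad(lambda x: 1.0 / E(x), 0.0, z)
    return val / H0

def A_kappa(z_l, z_max=5.0):
    """Lensing kernel at lens redshift z_l, integrating n_s(z) over sources."""
    chi_l = chi(z_l)
    w, _ = quad(lambda z: n_s(z) * (chi(z) - chi_l) / chi(z), z_l, z_max)
    return 1.5 * Om0 * H0**2 * chi_l * (1.0 + z_l) * w   # chi/a = chi (1+z)
\end{verbatim}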
We define a brightness temperature field on the sky by integrating $\delta_T (\chi {\bf \hat{n}},\chi)$ along the radial direction as \begin{equation} \label{eq:summing-of-signal} T(\hat{n}) = {\frac{1}{\chi_2 -\chi_1} \sum_{\chi_1}^{\chi_2 }\delta_T (\chi {\bf \hat{n}},\chi)} \Delta \chi \end{equation} where $\chi_1$ and $\chi_2$ are the comoving distances corresponding to the redshift slices of the 21-cm observation over which the signal is averaged. Radio-interferometric observations of the redshifted 21-cm signal directly measure the complex visibilities, which are the Fourier components of the intensity distribution on the sky. The radio telescope typically has a finite beam, which allows us to use the \lq flat-sky' approximation. Ideally, the fields $\kappa$ and $\delta_T$ are expanded in the basis of spherical harmonics. For convenience, we use a simplified expression for the angular power spectrum by considering the flat-sky approximation, whereby we can use the Fourier basis. Using this simplifying assumption, we may approximately write the cross-correlation angular power spectrum as \citep{Dash_2021} \begin{equation} \label{eq:crosssignal} C^{ T \kappa}_\ell = \frac{1 }{\pi(\chi_2- \chi_1)} \sum_{\chi_1}^{\chi_2} \frac{\Delta \chi}{\chi^2} ~ \mathcal {A}_T (\chi) \mathcal {A}_\kappa (\chi) D_{+}^2 (\chi) \int_0^{\infty} dk_{\parallel} \left [ 1 + \beta_T(\chi) \frac{k_{\parallel}^2}{k^2} \right ] P (k) \end{equation} where $k = \sqrt{k_{\parallel}^2 + \left ( \frac{\ell}{\chi} \right )^2 } $, $ D_{+}$ is the growing mode of density fluctuations, and $\beta_{T} = f/b_T$ is the redshift distortion factor, the ratio of the logarithmic growth rate $f$ and the bias function $b_T(k,z)$. The redshift-dependent function $\mathcal {A}_{T}$ is given by \citep{bali,datta2007multifrequency,Guha_Sarkar_2012} \begin{equation} \mathcal {A}_{T} = 4.0 \, {\rm {mK}} \, b_{T} \, {\bar{x}_{\rm HI}}(1 + z)^2\left ( \frac{\Omega_{b0} h^2}{0.02} \right ) \left ( \frac{0.7}{h} \right) \left ( \frac{H_0}{H(z)} \right) \label{eq:21cmkernel} \end{equation} The quantity $b_T(k, z)$ is the bias function, defined through the ratio of the HI 21-cm power spectrum to the dark matter power spectrum, $b_T^2 = P_{HI}(k,z)/P(k,z)$. In the post-reionization epoch $z<6$, the neutral hydrogen fraction remains approximately constant with a value $ {\bar{x}_{\rm HI}} = 2.45 \times 10^{-2}$ (adopted from \citet{ Noterdaeme_2009,Zafar_2013}). The clustering of the post-reionization HI is quantified using $b_T$. On scales below the Jeans length, the bias is scale dependent \citep{fang}. However, on large scales the bias is known to be scale-independent. The scale above which the bias is linear is, however, sensitive to the redshift. The post-reionization HI bias has been studied extensively using N-body simulations \citep{Bagla_2010, Guha_Sarkar_2012, Sarkar_2016, Carucci_2017}. These simulations demonstrate that the large scale linear bias increases with redshift for $1< z< 4$ \citep{Mar_n_2010}. We have adopted the fitting formula for the bias $b_T(k, z)$ of the post-reionization signal as a function of both redshift $z$ and scale $k$ \citep{Guha_Sarkar_2012, Sarkar_2016}, \begin{equation} \label{eqn:bias} b_{T}(k,z) = \sum_{m=0}^{4} \sum_{n=0}^{2} c(m,n) k^{m}z^{n} \end{equation} The coefficients $c(m,n)$ in the fit function are adopted from \citet{Sarkar_2016}. The angular power spectrum for two redshifts is known to decorrelate very fast in the radial direction \citep{poreion7}.
We consider the summation in Eq. (\ref{eq:summing-of-signal}) to extend over redshift slices whose separation is larger than the typical decorrelation length. This ensures that, in the computation of the noise, each term in the summation may be thought of as an independent measurement and the mutual covariances between the slices may be ignored. \subsection{The Baryon acoustic oscillation in the angular power spectrum} The sound horizon at the epoch of recombination is given by \begin{equation} s(z_{d}) = \int_{0}^{a_r} \frac{c_s da}{a^2 H(a)} \end{equation} where $a_r$ is the scale factor at the epoch of recombination (redshift $z_d$) and $c_s$ is the sound speed given by $c_s(a) = c/ \sqrt{3(1+3\rho_b/4\rho_\gamma)}$, where $\rho_b$ and $\rho_\gamma$ denote the baryonic and photon densities respectively. The WMAP 5-year data constrain the values of $z_{d}$ and $s(z_d)$ to be $z_{d} = 1020.5 \pm 1.6$ and $s (z_{d}) = 153.3 \pm 2.0$ Mpc \citep{Komatsu_2009}. We shall use these as the fiducial values in our subsequent analysis. The standard ruler \lq$s$' defines a transverse angular scale and a redshift interval in the radial direction as \begin{equation} \theta_s (z) =\frac{s(z_d)} {(1+z) D_A (z)} ~~~~~~~ \delta z_s = \frac{s (z_d) H(z)}{c} \end{equation} Measurements of $\theta_s$ and $\delta z_s$ allow the independent determination of $D_A (z)$ and $H(z)$. The BAO feature comes from the baryonic part of $P(k)$. Hence we isolate the BAO power spectrum from the cold dark matter power spectrum through $P_b (k) = P(k) - P_c(k)$. The baryonic power spectrum can be written as \citep{hu1996small, seo2007improved} \begin{equation} \label{eq:baops} P_b (k) = A \frac{\sin x}{x} e^{-(k\Sigma_s)^{1.4}}e^{-k^2 \Sigma_{nl}^2/2} \end{equation} where $A$ is a normalization, and $\Sigma_s = 1/k_{silk}$ and $\Sigma_{nl} = 1/k_{nl}$ denote the inverse scales of \lq Silk damping' and \lq non-linearity', respectively. In our analysis we have used $k_{nl} = (3.07 \,h^{-1}{\rm Mpc})^{-1}$ and $k_{silk} = (8.38 \,h^{-1}{\rm Mpc})^{-1}$ from \citet{seo2007improved}, and $x = \sqrt{k_\perp^2 s_\perp^2 + k_\parallel^2 s_\parallel^2}$. We also use the combined effective distance $D_V(z)$, defined as \citep{Eisenstein_2005} \begin{equation} D_V(z) \equiv \left[ (1+z)^2 D_A^2(z) \frac{c z}{H(z)} \right]^{1/3} \end{equation} The changes in $D_A$ and $H(z)$ are reflected as changes in the values of $s_\perp$ and $s_\parallel$ respectively, and the errors in $s_\perp$ and $s_\parallel$ correspond to fractional errors in $D_A$ and $H(z)$ respectively. We use $p_1 = \ln (s^{-1}_{\perp})$ and $p_2 = \ln (s_{\parallel})$ as parameters in our analysis. The Fisher matrix is given by \begin{equation} F_{ij} = \sum_\ell \frac{1}{\sigma_{_{T \kappa}}^2} \frac{1 }{\pi(\chi_2- \chi_1)} \sum_{\chi_1}^{\chi_2} \frac{\Delta \chi}{\chi^2} ~ \mathcal {A}_T (\chi) \mathcal {A}_\kappa (\chi) D_{+}^2 (\chi) \int_0^{\infty} dk_{\parallel} \left [ 1 + \beta_T(\chi) \frac{k_{\parallel}^2}{k^2} \right ] \frac{\partial P_b (k)}{\partial p_i} \frac{\partial P_b (k)}{\partial p_j} \end{equation} \begin{equation} = \sum_\ell \frac{1}{\sigma_{_{T \kappa}}^2} \frac{ \mathcal {A}_T (\chi) \mathcal {A}_\kappa (\chi)}{\pi(\chi_2- \chi_1)} \frac{\Delta \chi}{\chi^2} ~ D_{+}^2 (\chi) \int_0^{\infty} dk_{\parallel} \left [ 1 + \beta_T\frac{k_{\parallel}^2}{k^2} \right ] \left( \cos x - \frac{\sin x}{x} \right) f_i f_j ~A e^{-(k\Sigma_s)^{1.4}}e^{-k^2 \Sigma_{nl}^2/2} \end{equation} where $f_1 = k_\parallel^2 / k^2 -1$, $f_2 = k_\parallel^2 / k^2 $ and $k^2 = k_\parallel^2 + \ell^2 /\chi^2$.
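As a concrete reference, the wiggle-only template of Eq. (\ref{eq:baops}) can be coded in a few lines. The sketch below assumes isotropic fiducial distances ($s_\perp = s_\parallel = s$) and $h = 0.7$ when converting the fiducial $s(z_d) = 153.3$ Mpc to $h^{-1}$ Mpc; these are choices made for illustration only.
\begin{verbatim}
import numpy as np

k_nl, k_silk = 1.0 / 3.07, 1.0 / 8.38   # h Mpc^-1, from Seo & Eisenstein (2007)
s = 153.3 * 0.7                         # sound horizon in h^-1 Mpc, assuming h = 0.7

def P_b(k, A=1.0):
    """Wiggle-only BAO spectrum: A sin(x)/x times Silk and nonlinear damping."""
    x = k * s
    return (A * np.sinc(x / np.pi)                 # np.sinc(t) = sin(pi t)/(pi t)
              * np.exp(-(k / k_silk)**1.4)         # Silk damping, (k Sigma_s)^1.4
              * np.exp(-k**2 / (2.0 * k_nl**2)))   # nonlinear damping, k^2 Sigma_nl^2/2
\end{verbatim}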
The variance $\sigma_{_{T \kappa}}$ is given by \begin{equation} \label{eq:variance} \sigma_{_{T \kappa}}= \sqrt{ \frac {{(C^{\kappa}_\ell + N^{\kappa}_\ell )( C^{T}_\ell + N^T_\ell )}}{ ({2\ell + 1}) f_{sky} }} \end{equation} where $C^{\kappa}_\ell$ and $C^{T}_\ell $ are the convergence and 21-cm auto-correlation angular power spectra respectively, and $N^{\kappa}_\ell $ and $N^T_\ell$ are the corresponding noise power spectra. The auto-correlation power spectra are given by \citep{Dash_2021} \begin{equation} \label{eq:autoT} C^{ T}_\ell = \frac{1 }{\pi(\chi_2- \chi_1)^2} \sum_{\chi_1}^{\chi_2} \frac{\Delta \chi}{\chi^2} ~ \mathcal {A}_T (\chi) ^2 D_{+}^2 (\chi) \int_0^{\infty} dk_{\parallel} \left [ 1 + \beta_T(\chi) \frac{k_{\parallel}^2}{k^2} \right ]^2 P (k) \end{equation} \begin{equation} \label{eq:autok} C^{ \kappa}_\ell = \frac{1 }{\pi} \int_0^{\chi_s} \frac{d\chi}{\chi^2} ~ \mathcal {A}_\kappa (\chi)^2 D_{+}^2 (\chi) \int_0^{\infty} dk_{\parallel} P (k) \end{equation} The noise in the convergence power spectrum is dominated by Poisson noise. Thus $N^\kappa_\ell = \sigma^2_\epsilon / \bar{n}_g$, where $\sigma_{\epsilon}$ is the galaxy-intrinsic rms shear \citep{Hu_1999}. The source galaxy distribution is modeled using $( \alpha, \beta, z_0 ) = (1.28,~ 0.97,~ 0.41 )$, which we have adopted from \citet{chang2013effective}. For the survey under consideration, we have taken $\sigma_{\epsilon} = 0.4$ \citep{takada2004cosmological}. We use a visibility correlation approach to estimate the noise power spectrum $N^T_\ell$ for the 21-cm signal \citep{geil2011polarized,villaescusa2014modeling,Sarkar_2015}: \begin{equation} N^{T}_\ell = \left(\frac{ T_{sys} \lambda^2}{A_e}\right)^2 \frac{B}{T_{o}N_b(U,\nu)} \end{equation} where $T_{sys}$ is the system temperature, $B$ is the total frequency bandwidth, $U = \ell / 2 \pi$, $T_{o}$ is the total observation time, and $\lambda$ is the observed wavelength corresponding to the observed frequency $\nu$ of the 21-cm signal. The quantity $A_e$ is the effective collecting area of an individual antenna, which can be written as $A_e=\epsilon \pi (D_{d}/2)^2$, where $\epsilon$ is the antenna efficiency and $D_{d}$ is the diameter of the dish. The quantity $N_b (U,\nu)$ is the number density of baselines at $U$ and can be expressed as \begin{equation} N_b (U,\nu) = \frac{N_{ant}(N_{ant}-1)}{2}\rho_{_{2D}}(U,\nu) \Delta U \end{equation} where $N_{ant}$ is the total number of antennae in the radio array and $\rho_{_{2D}}(U,\nu)$ is the normalized baseline distribution function, which follows the normalization condition $\int d^2U \rho_{_{2D}} (U,\nu)=1$. The system temperature $T_{sys}$ can be written as a sum of contributions from the sky and the instrument as \begin{equation} T_{sys} = T_{inst} + T_{sky} \end{equation} where \begin{equation} T_{sky} = 60\,{\rm K} \left ( \frac{\nu}{300\, \rm MHz} \right)^{-2.5} \end{equation} We consider a radio telescope with an operational frequency range of $400-950$ MHz. We consider $200$ dish antennae in a radio interferometer roughly mimicking SKA1-Mid. The telescope parameters are summarized in table (\ref{tab:noise-params}). The full frequency range is divided into $4$ bins centered at $916$ MHz, $650$ MHz, $520$ MHz and $430$ MHz, with a $32$ MHz bandwidth each. To calculate the normalized baseline distribution function we have assumed that the baselines are distributed such that the antenna distribution falls off as $ 1/r^2$. We also assume that there is no baseline coverage below $30$ m. We have also assumed $\Delta U = A_e / \lambda^2$.
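The noise model above can be assembled as in the following sketch. The instrument temperature $T_{inst} = 30$ K, the maximum baseline of $1$ km, and the analytic normalization of a simple $1/U^2$ baseline density are assumptions introduced only to make the sketch self-contained; they are not the exact choices used in our forecasts.
\begin{verbatim}
import numpy as np

def N_T_ell(ell, nu_MHz, T_inst=30.0, B=32e6, T_obs=600 * 3600.0,
            N_ant=200, D_d=15.0, eff=0.7):
    """Schematic 21-cm noise power spectrum N^T_ell (K^2 units)."""
    lam = 2.998e8 / (nu_MHz * 1e6)                    # wavelength (m)
    A_e = eff * np.pi * (D_d / 2.0)**2                # effective area (m^2)
    T_sys = T_inst + 60.0 * (nu_MHz / 300.0)**-2.5    # T_inst + T_sky (K)
    U = ell / (2.0 * np.pi)
    U_min, U_max = 30.0 / lam, 1000.0 / lam           # assumed 30 m - 1 km baselines
    rho_2d = 1.0 / (2.0 * np.pi * np.log(U_max / U_min) * U**2)  # int d^2U rho = 1
    N_b = 0.5 * N_ant * (N_ant - 1) * rho_2d * (A_e / lam**2)    # Delta U = A_e/lam^2
    return (T_sys * lam**2 / A_e)**2 * B / (T_obs * N_b)
\end{verbatim}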
\begin{figure} \centering \includegraphics[height=5.0cm]{BAO_wiggle.pdf} \caption{The BAO imprint on the cross-correlation angular power spectrum $C^{ T \kappa}_\ell$. To highlight the BAO, we divide by the no-wiggle power spectrum $C^{ T \kappa}_{nw}$, which corresponds to the power spectrum without the baryonic feature. This is shown for three redshifts $z = 1.0, ~1.5, ~2.0 $.} \label{fig:wiggles} \end{figure} \begin{figure} \centering \includegraphics[height=3.8cm]{Hz.pdf} \includegraphics[height=3.8cm]{chiz.pdf} \includegraphics[height=3.8cm]{Dvz.pdf} \caption{The projected $1-\sigma$ error bars on $H(z)$, $D_A(z)$ and $D_{V}(z)$ at the $4$ redshift bins where the galaxy lensing and HI 21-cm cross-correlation signal is observed. The fiducial cosmology is chosen to be LCDM.} \label{fig:errorestimate} \end{figure} \begin{table} \begin{minipage}{.8\linewidth} \centering \begin{tabular}{|c |c | c | c | c |} \hline \hline $N_{ant}$ &Freq. range & Efficiency & $D_{d}$ & $T_{o}$ \\ [0.5ex] \hline $200$ &$400-950$ MHz & 0.7 & $15$m & $600$hrs \\ \hline \hline \end{tabular} \captionsetup{justification=centering} \caption{Parameters of the radio interferometer used for making error projections.} \label{tab:noise-params} \end{minipage} \end{table} \begin{table} \begin{minipage}{.8\linewidth} \centering \begin{tabular}{| c | c | c | c |} \hline \hline Redshift$(z)$ & $(\delta H/H) \%$ &$( \delta D_A/D_A) \%$ & $(\delta D_V/D_V) \%$ \\ [0.5ex] \hline \hline $0.55~$& $4.09~$ &$ 2.02~$ & $2.24~$ \\ \hline $1.16~$& $6.23~$ &$ 2.30~$ & $2.79~$ \\ \hline $1.74$~& $10.90~$ &$ 4.035~$ & $4.62~$ \\ \hline $2.28$& $17.00$ &$ 6.40$ & $6.97$ \\ \hline \end{tabular} \captionsetup{justification=centering} \caption{Percentage $1-\sigma$ errors on $D_A$, $H(z)$ and $D_V$.} \label{tab:errors} \end{minipage} \end{table} The BAO feature manifests itself as oscillations in the linear matter power spectrum \citep{Hu-eisen}. The first BAO peak has the largest amplitude and is a $\sim 10\%$ feature in the matter power spectrum $P(k)$ at $k \approx 0.045 {\rm Mpc}^{-1}$. Figure (\ref{fig:wiggles}) shows the BAO feature in the cross-correlation angular power spectrum $C^{ T \kappa}_\ell$. The BAO, here seen projected onto a plane, appear as a series of oscillations in $C^{ T \kappa}_\ell$, and the positions of the peaks scale as $\ell \sim k \chi $. The amplitude of the first oscillation in $C^{ T \kappa}_\ell$ is the largest, at about $ 1\%$, in contrast to the $\sim 10 \%$ feature seen in $P (k)$. This reduction in amplitude arises from the projection onto a plane, whereby several 3D Fourier modes that do not carry the BAO feature also contribute to the $\ell$ where the BAO peak is seen. For $z = 1.0$ the first peak occurs at $\ell \sim 170$ and has a full width of $\Delta \ell \sim 75$. If the redshift is changed, the position $\ell $ and the width $\Delta \ell$ of the peak both scale as $\chi $. We have made error estimates by considering four redshift bins, corresponding to four $32\, \rm MHz$ bandwidth radio observations of the 21-cm signal at four central observing frequencies. The total observing time of $2400$ hrs is divided into four $600$ hrs observations at each frequency. Figure (\ref{fig:errorestimate}) shows the projected errors on $H(z)$ and $D_A (z)$ for the fiducial LCDM cosmology. We find that $D_A(z)$ can be measured at a higher level of precision compared to $D_V(z)$ and $H(z)$.
This is because the weak lensing kernel is sensitive to $D_A (z)$, and the integration over $\chi (z)$ in the lensing signal leads to stronger constraints on it. The percentage $1-\sigma$ errors are summarized in table (\ref {tab:errors}). We find that $H(z)$ is quite poorly constrained, especially at higher redshifts. \section{Quintessence cosmology} We investigate spatially flat, homogeneous, and isotropic cosmological models filled with three non-interacting components: dark matter, baryons, and a scalar field $\phi$ minimally coupled to gravity. The Lagrangian density for the quintessence field is given by \begin{equation} \mathcal{L}_{\phi} = \frac{1}{2} (\partial^\mu \phi \partial_\mu \phi) - V(\phi) \end{equation} where $V(\phi)$ is the quintessence potential. The Klein-Gordon equation for the quintessence field, obtained by varying the action with respect to $\phi$, is \begin{equation} \ddot{\phi} + 3H\dot{\phi} +V_{,\phi} = 0 \end{equation} where $V_{,\phi}$ denotes differentiation with respect to $\phi$, and the Friedmann equation for $H$ is given by \begin{equation} H^2 = \frac{1}{3}(\rho_m + \rho_b + \rho_\phi) \end{equation} In order to study the dynamics of the background quintessence model, let us define the following dimensionless quantities \citep{scherrer2008thawing,amendola_tsujikawa_2010} \begin{equation} x = \frac{{\phi'}}{\sqrt{6}}, ~~ y = \frac{\sqrt{V}}{\sqrt{3}H}, ~~ \lambda = - \frac{ V_{,\phi}}{ V} , ~ ~\Gamma = V \frac{V_{,\phi \phi}} { V_{,\phi}^2} , ~~ b = \frac{\sqrt{\rho_b}}{\sqrt{3} H} \end{equation} where we use units $ 8\pi G = c = 1$ and the prime ($'$) denotes the derivative with respect to the number of e-foldings $N = \log(a)$. Using the above quantities we can define the density parameter ($\Omega_\phi$) and the EoS ($w_\phi = p_\phi/\rho_\phi$) of the scalar field as follows \begin{equation} \Omega_{\phi} = x^2 + y^2,~ ~~\gamma = 1+w_{\phi} = \frac{2x^2}{x^2 + y^2} \end{equation} The dynamics of the background cosmological evolution is obtained by solving an autonomous system of first-order equations \citep{scherrer2008thawing,amendola_tsujikawa_2010}: \begin{eqnarray} \gamma ' = 3 \gamma (\gamma - 2 ) + \sqrt{3 \gamma \Omega_\phi}(2 - \gamma)\lambda , \nonumber \\ \Omega'_{\phi} = 3(1 - \gamma) \Omega_{\phi}(1 - \Omega_{\phi}), \nonumber \\ \lambda ' = \sqrt{3\gamma \Omega_{\phi}}\lambda ^2 (1 - \Gamma),\nonumber \\ b'= - \frac{3}{2} b \Omega_\phi (1- \gamma) \label{eq:AutoODE} \end{eqnarray} In order to solve the above set of first-order ODEs numerically, we fix the initial conditions for $\gamma$, $\Omega_{\phi}$, and $\lambda$ at the decoupling epoch. For thawing models, the scalar field is initially frozen due to the large Hubble damping, which fixes the initial condition $\gamma_i \approx 0$. The quantity $\Gamma$, which quantifies the shape of the potential, is a constant for power-law potentials. The parameter $\lambda_i$ is the initial slope of the scalar field potential and measures the deviation from the LCDM model. For smaller $\lambda_i$, the EoS ($w_{\phi}$) of the scalar field remains close to that of a cosmological constant, whereas larger values of $\lambda_i$ lead to a significant deviation from LCDM. Assuming the contribution of the scalar field to the total energy density is negligibly small in the early universe, we fix the present value of $\Omega_{\phi}$. Similarly, we fix the initial value of $b$ (related to the density parameter for baryons) so that one obtains the right value of $\Omega_{b0} = 0.049$ \citep{Planck2018} at the present epoch.
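For concreteness, the integration of this autonomous system can be carried out with a standard ODE solver, as in the Python sketch below. The initial values of $\Omega_{\phi}$ and $b$ in the sketch are illustrative guesses that would, in practice, be tuned by shooting so as to reproduce the present-day $\Omega_{\phi 0}$ and $\Omega_{b0}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(N, y, Gamma):
    """Right-hand side of Eq. (\ref{eq:AutoODE}); y = (gamma, Omega_phi, lam, b)."""
    g, Om, lam, b = y
    root = np.sqrt(3.0 * g * Om)
    return [3.0 * g * (g - 2.0) + root * (2.0 - g) * lam,
            3.0 * (1.0 - g) * Om * (1.0 - Om),
            root * lam**2 * (1.0 - Gamma),
            -1.5 * b * Om * (1.0 - g)]

# Thawing initial conditions at decoupling (a_i ~ 1e-3): gamma_i ~ 0 and
# lambda_i = 0.7; Omega_phi,i and b_i are illustrative guesses, to be tuned
# by shooting so that Omega_phi0 ~ 0.7 and Omega_b0 = 0.049 today.
y0 = [1e-8, 1e-9, 0.7, 0.02]
sol = solve_ivp(rhs, (np.log(1e-3), 0.0), y0, args=(0.0,), rtol=1e-8)
# Gamma = 0 corresponds to the linear potential V(phi) ~ phi.
w_phi_today = sol.y[0, -1] - 1.0   # present-day EoS, w_phi = gamma - 1
\end{verbatim}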
\begin{figure} \centering \includegraphics[height=5.0cm]{scalar_wphi.pdf} \caption{The EoS ($w_{\phi}$) as a function of redshift $z$ for different quintessence field models, obtained by solving the autonomous ODEs in Eq. (\ref{eq:AutoODE}). The initial slope of the field is kept at $\lambda_i = 0.7$ in all cases.} \label{fig:wphi} \end{figure} Figure (\ref{fig:wphi}) shows the dynamical evolution of the EoS of the quintessence field for three models. We note that there is no departure from LCDM at large redshifts, but a prominent, model-sensitive departure at small redshifts. At $z\sim 0.5$ there is almost a $\sim 5\%$ departure of the EoS parameter $w_{\phi}$ from that of the non-dynamical cosmological constant. The departure of $w_{\phi}$ from its LCDM value of $-1$ imprints itself on the growing mode of density perturbations by virtue of the changes that it brings to the Hubble parameter $H(z)$. The growth of matter fluctuations in the linear regime provides a powerful complementary observation to put tighter constraints on cosmological parameters and to break possible degeneracies among diverse dark energy models. We have assumed a spatially flat cosmology in our entire analysis and have not constrained the radiation density, as only dark matter and dark energy are dominant in the late universe. The full relativistic treatment of perturbations for quintessence dark energy has been studied in \citet{hussain2016prospects}. On sub-horizon scales, ignoring the clustering of the quintessence field, the linearized equation governing the growth of matter fluctuations is given by the ODE \citep{amendola2000coupled,amendola2004linear} \begin{equation} D_{+} '' + \left( 1 + \frac{\mathcal{H}'(a)}{\mathcal{H}(a)} \right) D_{+}' - \frac{3}{2} \Omega_{m}(a) D_{+} = 0. \end{equation} Here, the prime denotes differentiation with respect to \lq $\log a$', $\mathcal{H}$ is the conformal Hubble parameter defined as $\mathcal{H} = aH$, and $D_{+}$ is the growing mode of the linear density contrast $\delta_m$ of the dark matter. In order to solve the above ODE, we fix the initial conditions such that $D_{+}$ grows linearly with $a$, with $\frac{d D_{+} }{d a} =1 $, at an early matter-dominated epoch ($a= 0.001$). We now consider the BAO imprint on the cross-correlation angular power spectrum to make error predictions on the quintessence dark energy parameters, which affect both the background evolution and structure formation. \subsection{Statistical analysis and constraints on model parameters} We choose the parameters $( h, \Gamma, \lambda_i, \Omega_{\phi 0})$ to quantify the quintessence dark energy. The Hubble parameter at present ($z = 0$) in our subsequent calculations is assumed to be $H_0 = 100\,h~{\rm km\,s^{-1}\,Mpc^{-1}}$, which defines the dimensionless parameter $h$. We perform a Markov Chain Monte Carlo (MCMC) analysis using the observational data to constrain the model parameters and the evolution of cosmological quantities. The analysis is carried out using the Python implementation of the MCMC sampler introduced by \citet{foreman2013emcee}. We take flat priors for these parameters with ranges $h \in [0.5,0.9]$, $\Gamma \in [-1.5,1.5]$, $\lambda_i \in [0.5,0.8]$, and $\Omega_{\phi 0} \in [0.5,0.8]$. We first perform the MCMC analysis using the error bars obtained on the binned $H(z)$ and $D_A(z)$ from the proposed 21-cm weak-lensing cross-correlation.
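A skeleton of such an MCMC analysis with the \texttt{emcee} sampler is sketched below. The Gaussian likelihood, the constant-$\Omega_\phi$ stand-in for $H(z)$, and the synthetic data built from the fractional errors of table (\ref{tab:errors}) are simplifying assumptions made to keep the sketch self-contained; in the actual analysis, $H(z)$ follows from integrating the background equations above.
\begin{verbatim}
import numpy as np
import emcee

BOUNDS = [(0.5, 0.9), (-1.5, 1.5), (0.5, 0.8), (0.5, 0.8)]  # h, Gamma, lam_i, Om_phi0

def model_H(z, theta):
    # Toy stand-in: flat background with Omega_phi ~ const; the real H(z)
    # comes from solving the autonomous system of the previous section.
    h, Gamma, lam_i, Om_phi0 = theta
    return 100.0 * h * np.sqrt((1.0 - Om_phi0) * (1.0 + z)**3 + Om_phi0)

def log_prob(theta, z, H_obs, sig_H):
    if not all(lo < t < hi for t, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf                       # flat priors of the text
    return -0.5 * np.sum(((H_obs - model_H(z, theta)) / sig_H)**2)

z = np.array([0.55, 1.16, 1.74, 2.28])       # bin centres of the error table
truth = np.array([0.70, 0.0, 0.65, 0.65])    # illustrative fiducial point
H_obs = model_H(z, truth)
sig_H = H_obs * np.array([0.0409, 0.0623, 0.109, 0.170])  # forecast errors

ndim, nwalkers = 4, 32
p0 = truth + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(z, H_obs, sig_H))
sampler.run_mcmc(p0, 5000, progress=True)
\end{verbatim}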
Figure (\ref{fig:mcmc}) shows the marginalized posterior distributions of the set of parameters $( h, \Gamma, \lambda_i, \Omega_{\phi 0})$ and the corresponding 2D confidence contours, obtained for the model $V(\phi) \sim \phi$. The results are summarized in table (\ref{tab:MCMC-constraints}). For a joint analysis, we employ three mainstream cosmological probes, namely cosmic chronometers (CC), Supernovae Ia (SN), and $f\sigma_8$. We have used the observational measurements of the Hubble expansion rate as a function of redshift from cosmic chronometers (CC) as compiled by \citet{gomez2018h0}. The distance modulus measurements of type Ia supernovae (SN) are adopted from the Joint Light-curve Analysis sample of \citet{betoule2014improved}. We also incorporate the linear growth rate data, namely $f\sigma_8(z) (\equiv f(z)\sigma_8 D_m (z))$, from the measurements by various galaxy surveys as compiled by \citet{nesseris2017tension}. \begin{figure*} \centering \includegraphics[height=8cm]{mcmcf4.pdf}\quad \includegraphics[height=8cm]{mcmcf1.pdf} \caption{Marginalized posterior distributions of the set of parameters ($\Omega_{ri},\Omega_{\phi i},\lambda_i, h$) and the corresponding 2D confidence contours obtained from the MCMC analysis for the model $V(\phi) \sim \phi$. Left panel: using the information from the Fisher matrix only. Right panel: using all the data sets mentioned in the text in addition to the Fisher information.} \label{fig:mcmc} \end{figure*} The posterior probability distributions of the parameters and the corresponding 2D confidence contours are shown in figure (\ref{fig:mcmc}). The constraints obtained for the different parameters are shown in table (\ref{tab:MCMC-constraints}). The joint analysis gives improved constraints compared to those obtained from the analysis of only our projected BAO results. These constraints are also competitive with other probes \citep{gupta2012constraining,sangwan2018observational,yang2019constraints}. \begin{table} \centering \begin{tabular}{p{0.15\linewidth}p{0.10\linewidth}p{0.10\linewidth}p{0.10\linewidth}p{0.10\linewidth}} \hline \hline Parameters &$ \Omega_{\phi 0}$ & $ \Gamma$ & $\lambda_i$ & $h$ \\ [0.5ex] \hline\hline Constraints \\( BAO only) &~~ $0.660^{+0.064}_{-0.049} $ & $0.091^{+0.784}_{-1.080}$ & $0.575^{+0.067}_{-0.050}$ & $0.723^{+0.038}_{-0.036}$ \\ \hline Constraints \\ (BAO+CC+$f\sigma_8$+SN) & ~~~$0.616^{+0.034}_{-0.020} $ & $0.157^{+0.895}_{-0.956}$ & $0.548^{+0.049}_{-0.036}$ & $0.701^{+0.016}_{-0.015}$ \\ \hline \hline \end{tabular} \caption{The parameter values obtained in the MCMC analysis, for the BAO forecast alone and combining all the data sets, are tabulated along with the $1-\sigma$ uncertainties.} \label{tab:MCMC-constraints} \end{table} \section{Conclusion} In this paper, we have explored the cross-correlation signal of weak galaxy lensing and the HI 21-cm signal. From the tomographic study we estimated the projected errors on $H(z)$, $D_A (z)$ and $D_V(z)$ over the redshift range $z \sim 0-3$. The quantities of interest, namely $H(z)$ and $D_A(z)$, explicitly appear in the lensing kernel and also in the BAO feature of the power spectrum. The cross-correlation angular power spectrum involves a radial integral and hence loses the redshift information. We have obtained tomographic information by locating the 21-cm slice at different redshift bins before cross-correlating. Several observational challenges stand in the way of measuring the cosmological 21-cm signal. The 21-cm signal is buried deep under galactic and extra-galactic foregrounds \citep{2011MNRAS.418.2584G}.
We have assumed that this key challenge is addressed. Even after significant foreground removal, the cosmological origin of the 21-cm signal can only be ascertained through a cross-correlation \citep{Guha_Sarkar_2010,Carucci_2017,Sarkar_2019}. The foregrounds for the two individual probes are expected to be significantly uncorrelated and hence lead to negligible effects in the observed cross-correlation power spectrum. We have not considered systematic errors arising from photometric redshift (so-called photo-z) errors, which may significantly degrade the cosmological information in the context of the lensing auto-correlation \citep{Takada_2009}. The BAO estimates of $H(z)$ and $D_A (z)$ allow us to probe dark energy models. We have considered the quintessence scalar field as a potential dark energy candidate and studied the background dynamics as well as the growth of perturbations in the linear regime in such a paradigm. A Bayesian parameter estimation using our BAO forecasts indicates the possibility of good constraints on scalar field models. The constraints improve further when a joint analysis with other probes is undertaken, reaching precision levels competitive with the existing literature. \bibliographystyle{mn2e}
\subsection{Web Search Engine} While product search engines have a specific mission, the challenges they face are similar to those of web search engines. Traditional methods have modeled queries and items using sparse document retrieval models such as BM25~\cite{Robertson94} for efficient retrieval. These models assume that terms are mutually independent, so that the relevance score of a query-document pair is a weighted combination of the matching score of each query term with respect to the document. As term-matching models have obvious limitations, such as the inability to handle synonyms, various techniques have been developed for semantic matching (Hang Li and Jun Xu. 2014. Semantic Matching in Search. Now Publishers Inc., Hanover, MA, USA). Wu et al. (Neural News Recommendation with Multi-Head Self-Attention) proposed to use a multi-head self-attention network to learn news representations from news titles. However, it is difficult for these models to understand the deep semantic information in news texts~\cite{NIPS2017_3f5ee243}. In addition, their models are only learned from the supervision in the training data, which may not be optimal for capturing the semantic information. \subsection{Prompt Tuning} The main component of our model is that we access the knowledge stored in a pre-trained language model (PLM) to `answer' queries in a zero-shot manner. One method to utilize such a knowledge base is through prompt tuning (p-tuning)~\cite{liu2021gpt, lester2021power, liu2021p}. Given a pre-trained language model $M$, a sequence of discrete input tokens $x_{0:n} = \{x_0, x_1, ..., x_n\}$ will be mapped to input embeddings $\{e(x_0), e(x_1), ..., e(x_n)\}$ by the pretrained embedding layer $e \in M$. Conditioned on the context $x$, we use the output embeddings of a set of target tokens $y$ for downstream processing. For instance, during training, $x$ refers to the unmasked tokens of the query, while $y$ refers to the [MASK] tokens representing the item categories. The function of a prompt $p$ is to organize the context $x$, the target $y$, and itself into a template $T$. For example, in the task of predicting the target product category for ``What should I get my son for a holiday gift?'', a template may be ``What should I get my son for a holiday gift? [MASK]'', in which ``[MASK]'' is the target. Prompts can be so flexible that we may even insert them into the context or target. Recently, \citet{liu2021gpt} and \citet{liu2021p} empirically demonstrated that pretrained language models with properly optimized p-tuning capture far more knowledge than with fine-tuning. Specifically, they show that p-tuning significantly surpasses fine-tuning across various model scales for NLU tasks. Thus, this work applies the p-tuning-based approach in the proposed framework for effective QA-based knowledge retrieval. \subsection{Cold-start in product retrieval} MORE ON ZERO-SHOT. \subsection{Retrieval model} Given a query $\hat{q}_i$, the goal of our retrieval model is to select the top-$K$ relevant product categories $\hat{C}$, thus \textit{effectively} reducing the search space for the subsequent ranking model. We choose the products' categorical information as the target label for the retrieval stage because using categories as answers makes the template closer to natural language as a human would write it, which improves the performance of GPT-3. \vspace{0.05in} \noindent\textbf{Training method.} To optimize GPT-3 for our downstream task, we use the p-tuning method~\cite{liu2021gpt}.
We formulate $\tilde{q}_i$ as a concatenation, "[PROMPT$_{1:d}$] [$\hat{q}_i$] [MASK]", in which [PROMPT$_{1:d}$] are the trainable continuous prompt tokens, [$\hat{q}_i$] is the context, and [MASK] is the target. $d$ is the hyperparameter determining the number of prompt tokens. In the p-tuning method, only the embeddings of the trainable continuous prompt tokens are updated, with the cross-entropy loss \begin{equation} L = - \sum_{i=1}^{N}y_i^{\top}\log\mathcal{M}(\tilde{q}_i), \end{equation} where $\mathcal{M}$ refers to the GPT-3 model, $N$ is the number of training examples, and $y_i$ is the one-hot vector of the target category token in the vocabulary of the language model $\mathcal{M}$. In practice, fine-tuning could also be adopted, but many works have observed that GPT-style models perform poorly on NLU tasks with fine-tuning \cite{liu2021gpt}. Thus, we use p-tuning to utilize GPT-3 as an implicit KB for the desired knowledge, in other words, to find a relevant product category for a given query. The comparison between the performances of fine-tuning and p-tuning is discussed in Section~\ref{sec_tuning}. \vspace{0.05in} \noindent\textbf{Inference.} To select the top-$K$ relevant product categories $\hat{C}$, we obtain the category score $s_i$ for each category. Suppose the category $c_i$ is `baby product' and its tokens are $T = [\text{`baby'}, \text{`pro-'}, \text{`-duct'}]$. Then $s_i$ is formulated as \begin{equation} s_i = \sum_{j=1}^{|T|}{\alpha_{j}\mathcal{M}(t_j | \tilde{q})}, \end{equation} where $t_j$ denotes the $j$-th token in $T$. Here $\alpha_j$ is the hyperparameter weighting each token probability, with $\sum_j \alpha_j=1$; $s_i$ is thus calculated as the weighted average of the logit scores of the tokens in $c_{i} \in C$, conditioned on the query. In our experiments, we heuristically set $\alpha_{1}$ to $0.8$ and let the rest share the remaining weight $\sum_{j=2}^{|T|}\alpha_j=0.2$ equally. We give the highest weight to the first token because it is the most important in decoding the answer from GPT-3. Finally, the category set $C$ is sorted by $s_i$ and the top-$K$ categories $\hat{C}$ are used in the ranking stage. \subsection{Ranking Model} We first use the category-to-product mapping table (\figureautorefname~\ref{fig:main_fig}-(b)) to prepare the candidate product set. The candidate products are then ranked using the ranking model, which can be any model that leverages embedding-based similarity methods. In this paper, we use BERT~\cite{devlin2019bert} with multi-layer perceptron (MLP) layers as a simple embedding method, allowing flexibility in the architecture. Learning latent representations of queries and products with this embedding method and then calculating a similarity score is shown in \figureautorefname~\ref{fig:main_fig}-(c). Specifically, given the query $\hat{q}$ and the candidate product $\hat{p}_i$, the similarity score is calculated as \begin{equation} \begin{aligned} S(\hat{q}, \hat{p}_i) = f(E_{\hat{q}}; \theta) \cdot f(E_{\hat{p}_i}; w), \end{aligned} \end{equation} where $E_{\hat{q}}$ and $E_{\hat{p}_i}$ represent the BERT embeddings of $\hat{q}$ and $\hat{p}_i$, and $f$ denotes the MLP layers parameterized by $\theta$ and $w$. We use the Binary Cross-Entropy (BCE) loss between the predicted score $S(\hat{q},\hat{p}_i)$ and the ground truth label $\hat{y}$, which is defined as \begin{equation} \hat{y} = \begin{cases} 1, & (\hat{q}, \hat{p}_i) \in D^{tr}\\ 0, & \textrm{otherwise} \end{cases} \end{equation} where $D^{tr}$ is the set of query-product pairs $(\hat{q}, \hat{p}_i)$ in the data log.
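The similarity scorer just described can be sketched in a few lines of Python. The sketch below assumes the BERT embeddings are precomputed; the MLP width, depth, and the positive-class weight (foreshadowing the class-imbalance handling discussed next) are illustrative choices, not the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class RankingScorer(nn.Module):
    """Similarity score S(q, p) = f(E_q; theta) . f(E_p; w) -- a sketch."""
    def __init__(self, dim=768, hidden=256):
        super().__init__()
        self.f_q = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.f_p = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, E_q, E_p):
        # Dot product between the two projected embeddings.
        return (self.f_q(E_q) * self.f_p(E_p)).sum(-1)

# BCE training against the 0/1 labels defined above; pos_weight is an
# illustrative value for weighting the rarer positive class.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(10.0))
\end{verbatim}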
We also use a weighted loss to handle the class imbalance between positive and negative samples. Both the BERT parameters and the MLP parameters are updated during training. \subsection{Datasets and Evaluation Metrics} In this section, we conduct experiments on our e-commerce datasets, where the purchase log is transformed into query-product pairs. These queries do not describe the products but instead contain search intents that require the system to exhibit natural language understanding (NLU) abilities. We additionally test our method on a public dataset. Detailed statistics of the datasets are described in~\tableautorefname~\ref{tab:dataset}. \vspace{0.05in} \noindent{\textbf{Gift dataset.}} We retrieved reviews that contain the word `gift' from one year of shopping review logs on our e-commerce platform, spanning from May 20, 2020, to May 25, 2021. We subsampled a total of $55,217$ review logs, then employed human annotators to form natural language queries from the review logs, producing query-product pairs. Since the log contains user information, we could compute TopPop (by age or gender) baselines for the users issuing the queries. \vspace{0.05in} \noindent{\textbf{Co-purchase dataset.}} We sampled $45,234$ purchase logs from our e-commerce platform, spanning from September 1, 2021, to September 5, 2021. For each purchase log, we randomly picked a product as an anchor and formed a query with its category information, \textit{i.e.}, if the anchor product is a type of vitamin, the query is `What can be co-purchased with vitamins?' The positive sample is a randomly picked product from the same purchase log. Additionally, to formulate it into a more complex NLU task, we asked the 82B GPT-3~\cite{kim2021changes} for the intention of the co-purchase and included it in the query. \footnote{We exclude TopPop for Co-purchase since this dataset is collected without demographic information.} \vspace{0.05in} \noindent{\textbf{Google LLC.}} To verify the generality of our proposed method and allow others to reproduce our results, we additionally test on a public dataset shared by Google LLC. This dataset consists of $6,555$ English question-answer pairs, which are categorized as one of `stackoverflow', `culture', `technology', `science', or `life arts' based on the nature of the question. In this paper, we view questions as queries, answers as products, and categories as product categories. \vspace{0.05in} \noindent{\textbf{Evaluation metrics.}} We evaluate model performance on the Gift, Co-purchase, and Google LLC datasets in terms of HR@$K$, a simple yes/no metric that looks at whether any of the top-$K$ recommended products include the ground truth product, \begin{equation} \textrm{HR}@K \textrm{ for a query} = \max_{i=1,\ldots,K} \begin{cases} 1, & r \in \mathcal{T}_i\\ 0, & \textrm{otherwise} \end{cases} \end{equation} where $r$ is the ground truth product and $\mathcal{T}_i$ is the set of recommended products up to the $i$-th rank. We compute HR@$K$ for each query and report the average. \subsection{Methods Compared} We compare against a conventional baseline (TopPop), a traditional web retrieval baseline (BM25~\cite{thakur2021beir}), and a transformer-based baseline that is widely used for NLP modeling (BERT~\cite{devlin2019bert}). All these baselines are formed as a 2-stage retrieval system, where the retrieval model follows each baseline method but the same ranking model as ours is shared across all baselines.
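For completeness, the HR@$K$ metric defined above amounts to the following minimal sketch (function and variable names are ours):
\begin{verbatim}
def hr_at_k(recommended, ground_truth, k):
    """HR@K for one query: 1 if the ground-truth product is in the top-k."""
    return int(ground_truth in recommended[:k])

def mean_hr_at_k(recommendations, ground_truths, k):
    """Average HR@K across all queries."""
    hits = [hr_at_k(rec, gt, k)
            for rec, gt in zip(recommendations, ground_truths)]
    return sum(hits) / len(hits)
\end{verbatim}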
Note that the top 10 categories were retrieved for the Gift and Co-purchase datasets, whereas the top one category was retrieved for the Google LLC dataset. \vspace{0.05in} \noindent{\textbf{TopPop.}} The TopPop baseline retrieves categories according to category popularity. We test TopPop at two levels: the age and the gender of the user issuing the query. \vspace{0.05in} \noindent{\textbf{BM25.}} Overall, BM25 remains a strong baseline for zero-shot text retrieval~\cite{thakur2021beir}. BM25 is a bag-of-words (BOW) information retrieval model that relies on an exact lexical match between a query and documents (categories). \vspace{0.05in} \noindent{\textbf{BERT-based similarity search.}} Current effective approaches integrate BERT~\cite{devlin2019bert} as an embedding-generation component in the retrieval model, with the input being a concatenated string of the query and candidate texts. BERT and a simple nonlinear model are then trained with the BCE loss, which penalizes incorrect pairs. \input{tab_all} \input{tab_information_retrieval} \subsection{Retrieval Performance}\label{sec_main_result} \subsubsection{Product Retrieval} Our main comparison is presented in \tableautorefname~\ref{tab:all}. We see superior performance of our proposed method on both datasets, across all metrics. The BM25~\cite{thakur2021beir} retrieval model showed the worst performance: since the tasks are formulated as QA tasks, the lexical matching method is significantly suboptimal. The TopPop baselines also failed to retrieve categories relevant to the query. BERT~\cite{devlin2019bert}-based similarity search was the most comparable, considering it is a transformer-based pre-trained language model; however, our proposed retrieval system showed superior performance of 15.0\% and 31.6\% compared to BERT. The higher performance of our method comes from its ability to carefully consider the scores of each token of the category. \subsubsection{Public Dataset} Our method on the Google LLC dataset showed superior performance compared to BERT~\cite{devlin2019bert} and BM25~\cite{thakur2021beir}, showing that our method generalizes not only to product retrieval but also to general information retrieval. The more conspicuous performance increase results from the nature of the dataset: the product retrieval dataset has many possible answers for one ground truth product, whereas in the question-answering scenario the right answer is apparent, so the ranking model could perform better. \input{tab_tuning} \input{tab_cold} \subsection{Effective Tuning Method for Knowledge Retrieval}\label{sec_tuning} Interestingly, fine-tuning GPT-3 shows only a slight performance improvement compared to the zero-shot approach (\tableautorefname~\ref{tab:tuning}). We conjecture that fine-tuning the model parameters triggered catastrophic forgetting, which degraded the knowledge GPT-3 gained from pre-training. \citet{liu2021gpt} empirically demonstrated that pre-trained language models with properly optimized p-tuning can capture far more knowledge than with fine-tuning, and showed that this holds across various model scales for NLU tasks. We observe the same trend in our KB-based product retrieval system. \tableautorefname~\ref{tab:tuning} shows that our proposed method with $137$ million parameters significantly outperforms the other tuning methods, namely the $1.3$ billion zero-shot and $1.3$ billion fine-tuned models.
\subsection{Influence of Language Model Size}\label{sec_LM_size} Recent studies have shown that training deep neural networks with large numbers of parameters leads to significant performance gains in a wide range of tasks~\cite{NEURIPS2020_1457c0d6, shin2021scaling}. Likewise, we found that scaling up the size of GPT-3 strengthens its ability to solve QA tasks. As presented in~\tableautorefname~\ref{tab:tuning}, the 13-billion-parameter model surpasses the other models by a very large margin. It is worth noting that the performance differs even more significantly when the model size varies from 1.3 billion to 13 billion than from 137 million to 1.3 billion. This implies that increasing the model size can dramatically increase the knowledge-probing ability of the language model. \subsection{Performance on Cold-Start Problem}\label{sec_cold} We evaluate the cold-start performance against two other baselines, BM25 and BERT-based search. To properly compare the performance, we take the same training dataset as before but prepare a separate test dataset consisting of search-intent and product pairs not seen during training. Our method achieves an increase of 85.8\% in the HR@300 metric compared to BERT-based search (\tableautorefname~\ref{tab:cold}). We conclude that the knowledge already encoded in GPT-3 helps retrieve the right products, even when a particular query's or product's semantic information has not been learned during training. Our product retrieval system overcomes the vocabulary upper-bound problem and allows flexibility in query formation. \section{Introduction} \input{01.Introduction} \section{Methodology} \input{03.Model} \section{Experiments} \input{04.Experiments} \section{Results}\label{result_sec} \input{05.ComparsionResults} \section{Conclusion} \input{06.Conclusion} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} In the context of cosmological studies, the concept of counts-in-cells statistics has been put forward for a long time as a unique way to quantify the statistical properties of the cosmological fields \citep{1979MNRAS.186..145W,1995ApJS...96..401C,1989A&A...220....1B,1999A&A...349..697B}. It was then shown in particular that counts-in-cells statistics, which provide a discrete representation of the local density probability distribution function (PDF), could be directly related to the correlation hierarchy of the density field. Interest in these types of observables was recently renewed for several reasons. The size of the surveys makes accurately measuring these quantities more realistic. This is already the case for surveys such as the Dark Energy Survey \citep[DES collaboration;][]{2018PhRvD..98d3526A}, the Kilo-Degree Survey \citep[KIDS;][]{2021A&A...646A.140H}, and the Hyper Suprime Cam \citep[HSC;][]{2019PASJ...71...43H}. The future promises even larger and more powerful surveys such as Euclid \citep{2011arXiv1110.3193L,2018LRR....21....2A} and the Rubin Observatory \citep{2019ApJ...873..111I}. Moreover, the theoretical foundations for these constructions (at least in the cosmological context) have been considerably strengthened with the realization that the large-deviation theory \citep[LDT; for a general review, see][]{2011arXiv1106.4146T} could successfully be invoked, as shown in \cite{2016PhRvD..94f3520B}. It clarifies the applicability of the theory to the cosmological density field and places previous works on a much more solid foundation \citep{2002A&A...382..412V,2014PhRvD..90j3519B}. The ability of the density PDF to constrain cosmology was emphasized in \cite{2016MNRAS.460.1549C} and completed in \cite{2020MNRAS.498..464F} and in \cite{2020MNRAS.495.4006U}, who showed that these observables could efficiently constrain the neutrino mass or primordial non-Gaussianities. Finally, although the matter PDF, like the matter density itself, is not a direct observable, it can be closely approached with the help of luminous tracer statistics \citep{2020MNRAS.498L.125R}, more convincingly in weak-lensing fields, as advocated in numerous recent papers \citep{2021MNRAS.503.5204B,2000A&A...364....1B}, or with combined approaches such as density-split statistics \citep{2018PhRvD..98b3507G,2018PhRvD..98b3508F,2018MNRAS.481.5189B}, which proved to be particularly promising. The construction of a full theory of these observables requires a detailed analysis of their global error budget, however, owing to finite-size surveys, imperfect tracers, and so on. Some of these aspects have been explored in early studies such as \citet{1996ApJ...470..131S} and \citet{1999MNRAS.310..428S}, but a full theory is still lacking. The developments presented in this paper are made in this context. More precisely, the purpose of this study is to explore what determines the expression of the covariance of data vectors whose elements are local quantities, such as the density contrast and density profiles, in cosmological contexts, that is, in classical random fields with long-range correlations. Furthermore, derivations are made assuming statistical homogeneity and isotropy. The domain of application encompasses both counts-in-cells statistics, basically 2D or 3D counts of density tracers, and proxies of projected densities, such as mass maps for weak-lensing tomographic observations.
In order to gain insight into the different contributions and effects that might contribute to the covariance, we rely on the use of hierarchical models to derive results that we believe are rather general. The immense advantage of using such models is that they naturally incorporate many of the features expected in cosmological density fields (e.g., the magnitude of the high-order correlation functions), and they include models for which many exact results can be obtained, in particular for counts-in-cells statistics. The goal of these constructions is to eventually infer precisely what the performance of PDF measurements would be on the determination of cosmological parameters, taking advantage of results such as those found in \cite{2021MNRAS.505.2886B}, which give the response functions of these observables to various cosmological parameters. Section 2 is devoted to the presentation of the general framework. The subsequent section explores different contributions, from large-scale effects, with the derivation of several bias functions, to short-distance contributions. Results are derived in a framework as general as possible, including the discrete noise associated with the use of a finite number of tracers. Section 4 presents the general hierarchical models and, more specifically, the Rayleigh-Levy flight model that we use as a toy model to evaluate the performance of approximate schemes. In Section 5, simplified models for the covariance matrix are presented and evaluated with the help of a set of numerical experiments. Section 6 summarizes the results that have been found and specifies their expected range of application. The text is complemented by appendices that contain a large amount of material. They present the hierarchical models, their mathematical description, and the mean-field approximation that is used throughout for explicit computations. Appendix C is more specifically devoted to the minimal tree model and the construction of the exact mean-field covariance matrix. \section{General framework. Construction of covariance matrices} The purpose of this section is to show how the elements of the covariance matrix are related to the joint density PDFs within a given survey. We first formalize this relation in a general framework before we explore its consequences in the context we are interested in. We assume we are interested in the PDF of some local quantity, $\mu$, that can be evaluated within a survey, thus defining a field $\mu({\bf x})$ throughout the survey. The a priori typical example of this quantity is the density (see below for a more precise illustration of what this quantity could be). The value of $\mu$ is assumed to lie in some ensemble ${\cal M}$ (which can simply be the real numbers), and the data vector we are interested in consists of the probabilities $p_{i}$ that $\mu$ lies within the subsets ${\cal M}_{i}$ (which are a priori nonzero within ${\cal M}$). The one-point PDF of $\mu$ is then given by \begin{equation} p_{i}({\bf x})=\int_{{\cal M}_{i}} {\rm d} \mu\ {\cal P}(\mu,{\bf x}), \end{equation} where ${\cal P}(\mu,{\bf x}){\rm d}\mu$ is the PDF of $\mu$ at location ${\bf x}$. In the context we are interested in, where statistical homogeneity is assumed, $p_{i}({\bf x})$ is independent of ${\bf x}$. More formally, we can define the characteristic function $\chi_{i}({\bf x})$, which takes the value $1$ where $\mu({\bf x})\in{\cal M}_{i}$ and $0$ otherwise.
An estimation of $p_{i}$ would then be given by the volume fraction of the survey where $\mu({\bf x})\in{\cal M}_{i}$. We denote this estimate $P_{i}$\footnote{This is an ideal estimate in the sense that $\mu$ is evaluated at an infinite number of locations. We therefore neglect here the impact of measuring $\mu$ at a finite number of locations when evaluating $P_{i}$. Regarding this aspect, a specific derivation that takes a finite number of measurements into account can be found in \citet{2016MNRAS.460.1598C}.}, \begin{equation} P_{i} =\frac{1}{V} \int{\rm d} {\bf x}\ \chi_{i}({\bf x}), \end{equation} which is then itself a random variable, the properties of which we are interested in. More precisely, we would like to derive an operational form for the likelihood function of a set of $P_{i}$ variables. We limit our investigation here to the construction of the likelihood from the covariance matrix, assuming that the likelihood of $P_{i}$ is close enough to a Gaussian distribution\footnote{Whether this is a correct assumption is difficult to assess in general. It probably depends on the detailed properties of the setting. The Conclusion section contains further comments on this aspect.}. The ensemble average of $P_{i}$ is \begin{equation} \langle P_{i} \rangle =\frac{1}{V} \int{\rm d}{\bf x}\ \langle \chi_{i}({\bf x})\rangle = \frac{1}{V} \int{\rm d}{\bf x}\ p_{i}({\bf x})=p_{i}. \end{equation} We can further define a joint PDF of the same field, ${\cal P}(\mu,{\bf x};\mu',{\bf x}')$, which is the joint PDF of $\mu$ and $\mu'$ at locations ${\bf x}$ and ${\bf x}'$. Defining $p_{ij}({\bf x},{\bf x}')$ as the joint ensemble average of ${\cal P}(\mu,{\bf x};\mu',{\bf x}')$, we have \begin{equation} p_{ij}({\bf x},{\bf x}')=\int_{{\cal M}_{i}} {\rm d} \mu \int_{{\cal M}_{j}} {\rm d} \mu'\ {\cal P}(\mu,{\bf x};\mu',{\bf x}'). \end{equation} The elements of the covariance matrix of $P_{i}$ are then formally \begin{eqnarray} \langle P_{i} P_{j} \rangle &=& \frac{1}{V^{2}} \int{\rm d}{\bf x} \int{\rm d}{\bf x}'\ \langle \chi_{i}({\bf x})\chi_{j}({\bf x}')\rangle \nonumber \\ &=& \frac{1}{V^{2}} \int{\rm d}{\bf x} \int{\rm d}{\bf x}'\ p_{ij}({\bf x},{\bf x}')\equiv \overline{p}_{ij}\label{basicrelation}. \end{eqnarray} This gives the relation between the covariances and the joint PDF. If $p_{ij}({\bf x},{\bf x}')$ depends only on the relative distance between ${\bf x}$ and ${\bf x}'$, this expression can be recast in terms of the distribution of such distances, $P_{s}(r_d)$, in the form \begin{eqnarray} \langle P_{i} P_{j} \rangle &=&\int{\rm d} r_d\,P_{s}(r_d) p_{ij}(r_d). \label{keyrelationCov} \end{eqnarray} The precise form of $P_{s}(r_d)$ depends on the details of the survey. Explicit forms can be given for simple regular surveys such as square surveys\footnote{For a square survey of unit size (with nonperiodic boundary conditions), the distance distribution function $P_{s}(r_{d})$ is given by \begin{equation} P_{s}(r_{d})= \begin{cases} 2 r_{d} ((r_{d}-4) r_{d}+\pi ) & 0<r_{d}<1 \\ -2 r_{d} \Big(2+r_{d}^2-4 \sqrt{r_{d}^2-1} & \nonumber\\ \hspace{.1cm}-2 \sec ^{-1}\left(\frac{r_{d}}{\sqrt{r_{d}^2-1}}\right)+2 \sec ^{-1}(r_{d})\Big) & 1<r_{d}<\sqrt{2} \end{cases}, \label{PdExpression} \end{equation} as can be obtained after integrating over three of the four position coordinates.}. In the context of statistically homogeneous and isotropic random fields, this latter expression is used. In particular, we wish to determine the configurations that contribute most to $\overline{p}_{ij}$.
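The analytic distance distribution of Eq. (\ref{PdExpression}) is easily validated by a short Monte Carlo experiment, as in the following Python sketch for the unit square (with $\sec^{-1}$ expressed through $\arccos$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
p, q = rng.random((n, 2)), rng.random((n, 2))   # random point pairs, unit square
r = np.linalg.norm(p - q, axis=1)

def P_s(rd):
    """Analytic pair-distance PDF for the unit square (nonperiodic)."""
    rd = np.asarray(rd, dtype=float)
    sq = np.sqrt(np.maximum(rd**2 - 1.0, 0.0))   # guard for the rd < 1 branch
    rc = np.maximum(rd, 1.0)
    return np.where(rd < 1.0,
                    2 * rd * ((rd - 4) * rd + np.pi),
                    -2 * rd * (2 + rd**2 - 4 * sq
                               - 2 * np.arccos(sq / rc)     # sec^-1(rd/sqrt(rd^2-1))
                               + 2 * np.arccos(1.0 / rc)))  # sec^-1(rd)

hist, edges = np.histogram(r, bins=200, range=(0.0, np.sqrt(2.0)), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
# hist agrees with P_s(centers) to Monte Carlo accuracy.
\end{verbatim}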
These configurations obviously depend on both the random processes we consider and on the definition of ${\cal M}_{i}$ and ${\cal M}_{j}$. In order to be more specific, we assume in the following that $\mu$ is a local density assigned to bins $(i)$ centered on $\rho_{i}$ and with width ${\rm d}\rho_{i}$ (assumed a priori to be arbitrarily small), so that \begin{equation} P_{i}=P(\rho_{i}){\rm d}\rho_{i}. \end{equation} If necessary, local densities can be obtained after the field $\mu({\bf x})$ has been convolved with a window function $W_{R}({\bf x})$ associated with a scale $R$, that is, \begin{equation} \rho({\bf x})=\int{\rm d}{\bf x}'\,\mu({\bf x}-{\bf x}')W_{R}({\bf x}'). \end{equation} It is then assumed that $R$ is small compared to the sample size in order to identify what the leading contributions to the joint PDFs might be. In practice, $W_{R}$ might be a simple top-hat window function, but this is not necessarily the case. It could be a more elaborate filter, such as a compensated filter (of zero average) like those introduced for cosmic shear analysis \citep{1996MNRAS.283..837S,1998ApJ...498...26K,2000A&A...364....1B}. We furthermore allow the estimated densities $\rho_{i}$ to be defined with respect to the overall density of the sample $\rho_{s}$, \begin{equation} \rho_{s}=\frac{1}{V}\int{\rm d}{\bf x}\ \mu({\bf x}). \end{equation} For instance, we could be interested in $\hat{\rho}_{i}\equiv\rho_{i}/\rho_{s}$ or $\overline{\rho}_{i}\equiv\rho_{i}-(\rho_s-1),$ which are frequently encountered situations in practice. Then $\rho_{s}$ is itself a random variable whose correlation with $\rho({\bf x})$ ought to be taken into account. We then need to explore the properties of either $P(\rho_{i},\rho_{j};{\bf x},{\bf x}')$ or $P(\rho_{s},\rho_{i},\rho_{j};{\bf x},{\bf x}')$ from which the functions of interest can be built, that is, \begin{eqnarray} P(\hat{\rho},\hat{\rho}')&=&\int{\rm d}\rho_{s}\,\rho_{s}^{2}\,P(\rho_{s},\hat{\rho}\rho_{s},\hat{\rho}'\rho_{s};{\bf x},{\bf x}')\\ P(\overline{\rho},\overline{\rho}')&=&\int{\rm d}\rho_{s}\,P(\rho_{s},\overline{\rho}+\rho_{s}-1,\overline{\rho}'+\rho_{s}-1;{\bf x},{\bf x}'), \end{eqnarray} from which covariance elements such as \begin{eqnarray} {\rm Cov}(\rho_{i},\rho_{j}){\rm d}\rho_{i}{\rm d}\rho_{j}&=& \int{\rm d} r_{d}\,P_{s}(r_d) P(\rho_{i},\rho_{j};r_d){\rm d}\rho_{i}{\rm d}\rho_{j}\nonumber\\ &&-P(\rho_{i})P(\rho_{j}){\rm d}\rho_{i}{\rm d}\rho_{j} \end{eqnarray} can be derived and whose properties we wish to explore. We wish in particular to build a model of the likelihood function from such a covariance, which requires full knowledge of its eigenvalues and eigendirections. In this respect, it is implicit that the number of bins $(i)$ to be used is finite. We nonetheless present the results of this first section in the continuous limit for $\rho_{i}$. It is finally to be noted that, as stated before, we restrict our analysis to covariance matrices, but higher-order correlators might also be considered by generalizing the relation (\ref{basicrelation}) to a higher number of variables. \section{PDF covariances in the context of cosmological models} \label{sec:PDFcovariances} \subsection{Modeling the joint PDF} To make progress, we need to make further assumptions about the mathematical structure of the joint PDF.
In the following, we assume in particular that joint PDFs can be obtained from their cumulant generating functions (CGF)\footnote{This is not necessarily so, as exemplified in \citet{2011ApJ...738...86C,2012ApJ...750...28C}.}, $\varphi(\lambda_{i},\lambda_{j};r_{d})$. The latter is defined as \begin{eqnarray} \exp\left(\varphi(\lambda_{i},\lambda_{j};r_{d})\right)&=&\langle \exp\left({\lambda_{i}\rho_{i}+\lambda_{j}\rho_{j}}\right)\rangle\nonumber\\ &&\hspace{-2.5cm} =\int {\rm d} \rho_{i}{\rm d}\rho_{j}\,P(\rho_{i},\rho_{j};r_d)\,\exp\left({\lambda_{i}\rho_{i}+\lambda_{j}\rho_{j}}\right), \end{eqnarray} and it is assumed that this relation can be inverted to give the joint PDF through inverse Laplace transformations, \begin{equation} P(\rho_{i},\rho_{j};r_{d})=\int\frac{{\rm d}\lambda_{i}}{2\pi{\rm i}}\frac{{\rm d}\lambda_{j}}{2\pi{\rm i}}e^{-\lambda_{i}\rho_{i}-\lambda_{j}\rho_{j}+\varphi(\lambda_{i},\lambda_{j};r_{d})}, \end{equation} where the integrations are made a priori along the imaginary axis. The CGFs themselves are closely related to the averaged correlation functions of the underlying field. In the following, we develop models for which these correlation functions can be computed precisely. \subsection{Large-distance contributions to the joint density PDF} We start by assuming that covariances are dominated by long-range correlations and not by proximity effects (e.g., densities taken in nearby regions). Whether this assumption is correct obviously depends on the particular model and setting we consider, as we detail below. There is a large set of models for which general expressions can be given in this regime. They are the so-called hierarchical models, originally introduced in \citet{1980lssu.book.....P}, discussed in more detail in \citet{1984ApJ...279..499F,1984ApJ...277L...5F,1989A&A...220....1B,1992A&A...255....1B}, and further formalized in \citet{1999A&A...349..697B} as described below; this is also true in the quasilinear regime, as originally pointed out in \citet{1996A&A...312...11B} and derived in more detail in \cite{2016MNRAS.460.1598C}. In these regimes, we obtain the following functional form (see the previous references and the detailed derivation in Appendix B): \begin{eqnarray} \varphi(\lambda_{s},\lambda_{i},\lambda_{j})&=& \lambda_{s}+\varphi_{0}(\lambda_{i})+\varphi_{0}(\lambda_{j}) \nonumber\\ && \hspace{-2.5cm}+\frac{\lambda_{s}^{2}}{2}\int{\rm d}{\bf x}_{s}{\rm d}{\bf x}'_{s}\,\xi({\bf x}_{s},{\bf x}'_{s}) +\lambda_{s}\int{\rm d}{\bf x}_{s}\,\xi({\bf x}_{s},{\bf x}_{1})\,\varphi_{1}(\lambda_{i}) \nonumber\\ &&\hspace{-2.5cm}+\lambda_{s}\int{\rm d}{\bf x}_{s}\,\xi({\bf x}_{s},{\bf x}_{2})\,\varphi_{1}(\lambda_{j}) +\varphi_{1}(\lambda_{i})\,\xi({\bf x}_{1},{\bf x}_{2})\,\varphi_{1}(\lambda_{j})\label{gen3ptbias} ,\end{eqnarray} where $\xi({\bf x},{\bf x}')$ is the two-point correlation function of the density field at positions ${\bf x}$ and ${\bf x}'$, and $\varphi_{0}(\lambda)$ and $\varphi_{1}(\lambda)$ are specific functions of $\lambda$ that depend on the details of the model. Then, setting $\lambda_{s}$ to zero, we can easily obtain the expression of the joint PDF at leading order in $\xi(r_{d})$, \begin{equation} P(\rho_{i},\rho_{j};r_{d})=P(\rho_{i})P(\rho_{j})\left(1+b(\rho_{i})\xi(r_d)\,b(\rho_{j})\right).\label{jointpdflinear} \end{equation} Here $P(\rho_{i})$ is the one-point density PDF (i.e., implicitly at scale $R$), and $b(\rho_{i})$ is the density-bias function (to be distinguished from the standard halo-bias function).
It also depends on $\rho_{i}$ (and on the scale $R$), so that in the previous expression the dependences on $\rho_{i}$, $\rho_{j}$, and $r_{d}$ can be factorized out. To be more precise, $P(\rho_{i})$ is given by the inverse Laplace transform of the CGF (see, e.g., \cite{1989A&A...220....1B} and \cite{2013arXiv1311.2724B} for a detailed derivation of this inversion), \begin{equation} P(\rho_{i})=\int\frac{{\rm d} \lambda}{2\pi {\rm i}}\exp\left(-\lambda\rho_{i}+\varphi_{0}(\lambda)\right),\label{Prho} \end{equation} where $\varphi_{0}(\lambda)$ is the CGF of the density taken at scale $R$ (i.e., for the filter $W_{R}$). The function $b(\rho_{i})$ is defined through a similar relation, \begin{equation} b(\rho_{i})P(\rho_{i})=\int\frac{{\rm d} \lambda}{2\pi {\rm i}}\,\varphi_{1}(\lambda)\,\exp\left(-\lambda\rho_{i}+\varphi_{0}(\lambda)\right).\label{biasdefinition} \end{equation} The function $\varphi_{1}(\lambda)$ can be explicitly computed in the context of perturbation theory calculations \citep{2016MNRAS.460.1598C}. This is also the case for the so-called hierarchical models (see the appendices). In the latter case, these calculations can be extended to higher order, as we describe below, providing ways to better assess the domain of validity of this expansion. This form then translates into an expression for the covariance coefficients of the density PDF. More precisely, we expect \begin{equation} {\rm Cov}(\rho_{i},\rho_{j})=b(\rho_{i})P(\rho_{i})\,\overline{\xi}_{s}\,b(\rho_{j})P(\rho_{j})\label{NaiveCov} ,\end{equation} where $\overline{\xi}_{s}$ is the average value of the two-point correlation function $\xi(r_{d})$ within the sample. It is to be noted, however, that this holds only if \begin{itemize} \item the term in $\overline{\xi}_{s}$ is indeed the leading contribution of the expansion (\ref{gen3ptbias}). This is obviously not the case for samples with periodic boundary conditions, for which $\overline{\xi}_{s}$ vanishes by construction; \item the density is defined regardless of the density of the sample. Its expectation value therefore does not coincide with $\rho_{s}$ for a given sample. \end{itemize} It can also be noted that in the Gaussian limit, we have $b(\rho_{i})=\delta_{i}/\xi$. Applying the relation (\ref{jointpdflinear}) to the density within one cell and to the sample density $\rho_{s}=1+\delta_{s}$ then leads to the following expression for the conditional density PDF, \begin{equation} P(\rho_{i}\vert \rho_{s})=P(\rho_{i})\left(1+\delta_{s}\,b(\rho_{i})\right). \end{equation} This leads to the interpretation of the function $b(\rho_{i})$ as the response function of the density PDF to the sample density. This means that although the density-bias function cannot be derived from the density PDF alone, we should be able to derive it if we are in possession of an operational method to compute the density PDF for arbitrary cosmological parameters (in a way similar to the derivation of the halo-bias function, as pioneered in \citet{1996MNRAS.282..347M}). Undoubtedly, this result is closely related to the so-called supersample effects \citep[as described for the power spectrum covariance in][]{2013PhRvD..87l3504T}, that is, the impact of modes of scale comparable to or larger than the sample size.
This is not necessarily their only contribution (subdominant large-scale contributions can also contribute to the covariance), but it is likely to be the most important one, as described below. The density-bias function obeys the following consistency relations: \begin{eqnarray} \int{\rm d} \rho\, b(\rho)P(\rho)&=&0,\label{bconsistency1}\\ \int{\rm d} \rho\, \rho\ b(\rho)P(\rho)&=&1,\label{bconsistency2} \end{eqnarray} as initially pointed out in \cite{1992A&A...255....1B}. \subsection{Case of relative densities} The previous formula applies to the local densities, evaluated regardless of the sample density. It does not apply in particular to standard settings (e.g., densities measured out of galaxy counts) where the density is defined with respect to the mean density of the sample. To address this case in particular, we should consider \begin{equation} \hat{\rho}_{i}=\frac{\rho_{i}}{\rho_{\rm s}} \end{equation} as the observable for which the covariance is to be computed. In this case, the formal derivation of the PDFs is presented in the appendix, and it leads to \begin{eqnarray} P(\hat{\rho}_{i})&=&\int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}} \left[ \frac{\partial\varphi}{\partial\lambda_{s}} \right]_{\lambda_{s}=-\lambda_{i}\hat{\rho}_{i}} \exp\left[\varphi(-\lambda_{i}\hat{\rho}_{i},\lambda_{i})\right]\\ P(\hat{\rho}_{i},\hat{\rho}_{j})&=&\int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}}\, \frac{{\rm d}\lambda_{j}}{2\pi{\rm i}} \left[ \left( \frac{\partial\varphi}{\partial\lambda_{s}} \right)^{2}+\frac{\partial^{2}\varphi}{\partial\lambda_{s}^{2}} \right]_{\lambda_{s}=-\lambda_{i}\hat{\rho}_{i}-\lambda_{j}\hat{\rho}_{j}} \nonumber\\ &&\times\exp\left[\varphi(-\lambda_{i}\hat{\rho}_{i}-\lambda_{j}\hat{\rho}_{j},\lambda_{i},\lambda_{j})\right]. \label{joinhrhoPDF} \end{eqnarray} We then use relation (\ref{gen3ptbias}) to compute the form of this function. At this stage, it is to be noted that the expressions $\int{\rm d}{\bf x}_{s}{\rm d}{\bf x}'_{s}\,\xi({\bf x}_{s},{\bf x}'_{s})$, $\int{\rm d}{\bf x}_{s}\,\xi({\bf x}_{s},{\bf x}_{1})$, and $\xi_{12}$ all take the same averaged value when integrated over the sample. We denote this common value $\overline{\xi}_{s}$. Inserting the resulting expressions of the CGF in both the expressions of $P(\hat{\rho}_{i})$ and $P(\hat{\rho}_{i}, \hat{\rho}_{j})$ and expanding all terms at linear order in $\overline{\xi}_{s}$, we obtain \begin{eqnarray} P(\hat{\rho}_{i},\hat{\rho}_{j})-P(\hat{\rho}_{i})\,P(\hat{\rho}_{j})&=& \overline{\xi}_{s}\,\int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}}\, \frac{{\rm d}\lambda_{j}}{2\pi{\rm i}} \nonumber \\ && \hspace{-3.5cm}\times \left( 1+\varphi_{1}(\lambda_{i})-\lambda_{i} \hat{\rho}_{i} \right)\left( 1+\varphi_{1}(\lambda_{j})-\lambda_{j}\hat{\rho}_{j} \right)\ \nonumber \\ && \hspace{-3.5cm}\times \exp\left[-\lambda_{i} \hat{\rho}_{i}-\lambda_{j} \hat{\rho}_{j}+\varphi_{0}(\lambda_{i})+\varphi_{0}(\lambda_{j})\right] .\end{eqnarray} This leads to the definition of the first sample-bias function, \begin{equation} b_{\rm s1}(\hat{\rho}_{i})=\frac{1}{P(\hat{\rho}_{i})} \int \frac{{\rm d}\lambda}{2\pi{\rm i}} \left( 1+\varphi_{1}(\lambda)-\lambda \hat{\rho}_{i} \right) \exp\left[-\lambda \hat{\rho}_{i}+\varphi_{0}(\lambda)\right] ,\end{equation} which can be re-expressed in terms of the density-bias function defined in Eq.
(\ref{biasdefinition}) and the derivative of $P(\hat{\rho}_{i})$ with respect to $\hat{\rho}_{i}$, \begin{equation} b_{\rm s1}(\hat{\rho}_{i})=b(\hat{\rho}_{i})+1+\frac{\partial\log(P(\hat{\rho}_{i}))}{\partial\log \hat{\rho}_{i}}. \end{equation} In this case, the covariance matrix elements are then expected to be given by \begin{equation} {\rm Cov}(\hat{\rho}_{i},\hat{\rho}_{j})=b_{\rm s1}(\hat{\rho}_{i})P(\hat{\rho}_{i})\,\overline{\xi}_{s}\,b_{\rm s1}(\hat{\rho}_{j})P(\hat{\rho}_{j}). \label{NaiveCovs1} \end{equation} Remarkably, $b_{\rm s1}(\rho)$ can be entirely expressed in terms of $b(\rho)$ and of the density PDF itself. For the sake of completeness, we also consider the case of $\overline{\rho}_{i}=\rho_{i}-(\rho_{s}-1)$. In this case, it is easy to show that \begin{eqnarray} P(\overline{\rho}_{i})&=& \int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}} \exp\left[-\lambda_{i}\overline{\rho}_{i}+\varphi(-\lambda_{i},\lambda_{i})\right]\\ P(\overline{\rho}_{i},\overline{\rho}_{j})&=&\int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}} \frac{{\rm d}\lambda_{j}}{2\pi{\rm i}}\ \nonumber \\ && \hspace{-1.0cm}\times \exp\left[-\lambda_{i}\overline{\rho}_{i}-\lambda_{j}\overline{\rho}_{j}+\varphi(-\lambda_{i}-\lambda_{j},\lambda_{i},\lambda_{j})\right] \label{joinrhobPDF} .\end{eqnarray} Following the same approach as in the previous case, the leading-order expression in $\overline{\xi}_{s}$ of the connected joint PDF is \begin{eqnarray} P(\overline{\rho}_{i},\overline{\rho}_{j})-P(\overline{\rho}_{i})\,P(\overline{\rho}_{j})&=& \overline{\xi}_{s}\,\int \frac{{\rm d}\lambda_{i}}{2\pi{\rm i}}\, \frac{{\rm d}\lambda_{j}}{2\pi{\rm i}} \nonumber \\ && \hspace{-3.5cm}\times \left( \varphi_{1}(\lambda_{i})-\lambda_{i} \right)\left( \varphi_{1}(\lambda_{j})-\lambda_{j} \right)\ \nonumber \\ && \hspace{-3.5cm}\times \exp\left[-\lambda_{i} \overline{\rho}_{i}-\lambda_{j} \overline{\rho}_{j}+\varphi_{0}(\lambda_{i})+\varphi_{0}(\lambda_{j})\right]. \end{eqnarray} It leads to the definition of the second sample-bias function, \begin{equation} b_{\rm s2}(\overline{\rho}_{i})=b(\overline{\rho}_{i})+\frac{\partial\log(P(\overline{\rho}_{i}))}{\partial\overline{\rho}_{i}}, \end{equation} so that \begin{equation} {\rm Cov}(\overline{\rho}_{i},\overline{\rho}_{j})=b_{\rm s2}(\overline{\rho}_{i})P(\overline{\rho}_{i})\,\overline{\xi}_{s}\,b_{\rm s2}(\overline{\rho}_{j})P(\overline{\rho}_{j}). \label{NaiveCovs2} \end{equation} The three bias functions are therefore closely related. Although the density-bias function $b(\rho)$ cannot be derived from the shape of $P(\rho)$ alone, as mentioned before, the relations between $b(\rho)$ and either $b_{s1}(\rho)$ or $b_{s2}(\rho)$ depend on the PDF alone. Furthermore, the two relative density-bias functions obey the following consistency relations: \begin{eqnarray} \int{\rm d} \rho_{i}\, b_{s\#}(\rho_{i})P(\rho_{i})&=&0,\\ \int{\rm d} \rho_{i}\, \rho_{i}\ b_{s\#}(\rho_{i})P(\rho_{i})&=&0. \end{eqnarray} The second relation is at variance with the corresponding relation (\ref{bconsistency2}) for the density-bias function. It indicates that over the range of typical values of $\rho$ (where $\rho P(\rho)$ is significant), the sample-bias functions $b_{s\#}(\rho)$ are likely to be smaller than the density-bias function $b(\rho)$. \subsection{Structure of the covariance matrix} The consequences of these formulae for the structure of the covariance matrix are illustrated below with the help of the Rayleigh-Levy flight model. Fig. \ref{DPDF_SetA_0p5} compares the results of exact derivations of the covariance matrix with these prescriptions.
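To make the construction concrete, the following minimal sketch (our own illustration; the function name is hypothetical, and it assumes $P(\rho)$ and $b(\rho)$ are available as arrays on a density grid) assembles the sample-bias functions and the factorized covariance models of Eqs. (\ref{NaiveCov}), (\ref{NaiveCovs1}), and (\ref{NaiveCovs2}):
\begin{verbatim}
import numpy as np

def rank_one_covariances(rho, P, b, xi_s):
    """Build b_s1 and b_s2 from b(rho) and P(rho), then the rank-one
    covariance models of Eqs. (NaiveCov), (NaiveCovs1), (NaiveCovs2)."""
    dlogP_drho = np.gradient(np.log(P), rho)   # d log P / d rho
    b_s1 = b + 1.0 + rho * dlogP_drho          # dlogP/dlog rho = rho dlogP/drho
    b_s2 = b + dlogP_drho
    covs = {}
    for name, bias in (("rho", b), ("rho_hat", b_s1), ("rho_bar", b_s2)):
        v = bias * P                           # single eigendirection b_# P
        covs[name] = xi_s * np.outer(v, v)     # factorized (rank-one) covariance
    return b_s1, b_s2, covs
\end{verbatim}
As stressed below, each of these model matrices has a single nonvanishing eigenvalue, which anticipates the structure discussed in the following subsection.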
The diagonal parts of the covariance matrices are well accounted for by these formulae. In particular, the root mean square of the measured local density PDF exhibits the expected density dependence, at least for mild values of the density. In all the formulae (\ref{NaiveCov}), (\ref{NaiveCovs1}), and (\ref{NaiveCovs2}), the expression of the covariance exhibits a simple structure, as it is factorizable in the two densities. This implies, for instance, that the reduced covariance matrix \begin{equation} {\rm Cov}_{\rm reduced}(\rho_{i},\rho_{j})=\frac{{\rm Cov}(\rho_{i},\rho_{j})}{\sqrt{{\rm Cov}(\rho_{i},\rho_{i}) {\rm Cov}(\rho_{j},\rho_{j})}} \end{equation} has an extremely simple structure: it is given by the sign of the product of the bias functions (i.e., ${\rm sign}[b(\rho_{i})b(\rho_{j})]$, ${\rm sign}[b_{s1}(\rho_{i})b_{s1}(\rho_{j})]$, and ${\rm sign}[b_{s2}(\rho_{i})b_{s2}(\rho_{j})]$ for the three different measurement strategies). This leads to the butterfly-like structure of the plotted matrices, as illustrated in Fig. \ref{NumRedCovariance}. This simple form betrays the fact that the density covariance is then only poorly known. To be more specific, the formulae (\ref{NaiveCov}), (\ref{NaiveCovs1}), and (\ref{NaiveCovs2}) give only a single eigendirection of the covariance matrix (namely $b(\rho_{i})P(\rho_{i})$) and the amplitude of the single eigenvalue associated with it. The numerical calculations suggest that it is the leading one when $\overline{\xi}_{s}$ does not vanish, as illustrated in Fig. \ref{FirstEigenvect}. These formulae do not offer any indication of the amplitude of the covariance in orthogonal directions, however. Taken at face value, they imply that the other eigenvalues all vanish, which prevents the covariance matrix from being invertible. These formulae therefore cannot be used alone to model the covariance for practical purposes, and complementary contributions have to be derived from other (and a priori subdominant) effects. \subsection{Beyond leading-order effects} In the previous subsection, we identified the leading long-distance contributions. As mentioned before, this yields only limited information on the covariance structure. This difficulty is even more acute for covariances evaluated in numerical experiments consisting of a collection of independent samples, each of them with periodic boundary conditions (this does not have to be so, but it is often the case in practice). By construction, the mean correlation function within the sample then vanishes, $\overline{\xi}_{s}\to 0,$ making the term we have computed identically zero. All these considerations indicate that further contributions need to be identified. The identification of the next-to-leading-order effects in Eq. (\ref{jointpdflinear}) is difficult to do a priori, however: \begin{itemize} \item One natural next-to-leading contribution is obtained by taking into account second-order terms in $\xi(d)$ in Eq. (\ref{jointpdflinear}), that is, by considering double lines between cells in a diagrammatic representation. This would induce a term of order $\xi(d)^{2}$, whose average never vanishes\footnote{In the minimal tree model, it is possible to compute these terms in the so-called mean-field approximation (see appendix), but they do not lead to a positive definite covariance matrix and therefore cannot be the sole, or dominant, contribution to the covariances.}. As shown in the appendix, these contributions can be formally derived in the context of the hierarchical models.
This leads to correction terms that can be organized in a sum of factorized terms. Therefore, although it can indeed provide corrective terms to the covariance matrix, only a limited number of eigendirections can be generated. \item Other contributions naturally come from proximity effects due to the fact that cells are of finite size and can even overlap, which makes the expansion in $\xi(d)$ ineffective. From a diagrammatic point of view, they are due to the fact that many more diagrams contribute when cells are too close. This has dramatic effects for overlapping cells. For hierarchical models, an approximate form can be used to help model these effects, which we use below. \item Finally, effects due to the fact that discrete tracers are used in count-in-cells statistics might also play a role at short distances. They are also tentatively modeled below. \end{itemize} In the following, we propose some modeling of these effects and explore how they depend on the properties of the survey. \subsubsection{Joint PDF at short distances} There is no general form for the joint PDF at short distances. The hierarchical models suggest the following form (derived from the saddle-point approximation, which is valid for moderate values of $\overline{\xi}$ and of the density contrast), however: \begin{eqnarray} P_{{\rm short\ dist.}}(\rho_{i},\rho_{j}){\rm d}\rho_{i}{\rm d}\rho_{j}&=&P(\rho_{m}) \nonumber \\ && \hspace{-3.5cm}\times \exp\left[-\frac{\delta_{\rho}^{2}}{\rho_{m}^{\alpha}\Delta_{\xi}(d)}\right]\frac{{\rm d}\rho_{m}{\rm d}\delta_{\rho}}{\sqrt{\pi\rho_{m}^{\alpha}\Delta_{\xi}(d)}} \label{shortdistjPDF} ,\end{eqnarray} where $\rho_{m}=(\rho_{i}+\rho_{j})/2$ and $\delta_{\rho}=(\rho_{i}-\rho_{j})/2,$ and where $\alpha$ is a model-dependent parameter. In other words, the PDF of the difference between $\rho_{i}$ and $\rho_{j}$ can be described by a simple Gaussian with a known width driven by the expression of $\Delta_{\xi}(d)\equiv \overline{\xi}-\xi_{12}(d),$ provided it is small compared to $\overline{\xi}$. We note that $\Delta_{\xi}(d)$ obviously vanishes at $d=0$, so that the joint PDF exhibits the expected Dirac $\delta$ function at zero separation, and it generically scales like $d^{2}$ at short distances\footnote{This limited form would induce a minimum contribution to ${\rm Cov}(\rho_{i},\rho_{i})$ given by $\Delta_{\rho_{i}}/{\cal P}(\rho_{i}),$ where $\Delta_{\rho_{i}}$ is the bin size in density.}. Interestingly, for the minimal tree model, the form (\ref{shortdistjPDF}) is exact for $\alpha=1$ (see appendix). In general, this is also the expected form based on the saddle-point approximation (valid when $\overline{\xi}$ is small) for generic hierarchical tree models. The value of $\alpha$ can be identified from low-order cumulants, \begin{equation} \alpha=\frac{2}{3}S_{3} ,\end{equation} where $S_{3}$ is the reduced third-order cumulant, \begin{equation} S_{3}=\frac{\left<\delta^{3}\right>}{\left<\delta^{2}\right>^{2}}. \end{equation} This form is probably not very accurate in general. It can be used to model the impact of short distances on the covariance matrix, however, as shown below. \subsubsection{Poisson noise and minimum separation} A further contribution to this joint PDF can come from discrete effects that arise because the density is evaluated from the counting of discrete tracers (as explored in \cite{1996ApJ...470..131S} or more recently in \cite{2021MNRAS.500.3631R}).
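Before specifying these discrete contributions, we note that the short-distance form (\ref{shortdistjPDF}) is straightforward to evaluate. The minimal sketch below (our own illustration, with hypothetical function names) makes the Gaussian width explicit, so that the Poisson contribution derived next can simply be added to it:
\begin{verbatim}
import numpy as np

def joint_pdf_short_dist(rho_i, rho_j, pdf_one_point, delta_xi_d,
                         alpha=1.0, extra_var=0.0):
    """Short-distance joint PDF of Eq. (shortdistjPDF), written in the
    (rho_m, delta_rho) variables (a Jacobian factor of 1/2 applies when
    expressed in (rho_i, rho_j)). extra_var allows additional scatter,
    e.g., Poisson noise, to be added to the Gaussian variance."""
    rho_m = 0.5 * (rho_i + rho_j)
    delta_rho = 0.5 * (rho_i - rho_j)
    # The denominator of the exponent is twice the variance of delta_rho
    width = rho_m**alpha * delta_xi_d + 2.0 * extra_var
    return (pdf_one_point(rho_m)
            * np.exp(-delta_rho**2 / width) / np.sqrt(np.pi * width))
\end{verbatim}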
In this subsection, we assume that the density corresponds to the density obtained after application of a top-hat filter and that the tracers are Poisson realizations of continuous fields \citep[although it is possible to encounter sub- or super-Poissonian noise;][]{2018PhRvD..98b3508F}. The use of other filters can be explored but would require specific developments that we do not pursue here. Within these hypotheses, the joint distribution of counts-in-cells $N_{i}$ is given by the joint density PDF in the continuous limit, ${\cal P}(\left\{\rho_{i}\right\})$, convolved with Poisson counts-in-cells as \begin{equation} {\cal P}(\{N_{i}\})=\int\Pi_{i} {\rm d} \rho_{i}\,P_{{\rm Poisson}}(N_{i};\overline{N}_{i}\rho_{i})\,{\cal P}(\left\{\rho_{i}\right\}) ,\end{equation} where $P_{{\rm Poisson}}(N;\overline{N})$ is the probability of having $N$ tracers in a cell where the mean number of tracers is $\overline{N}$. In practice, $\overline{N}_{i}$ is given by $nV_{i}$, where $n$ is the number density of tracers and $V_{i}$ is the volume of cell $i$. Discrete effects then induce further scatter between the estimated values of $\rho_{i}$ and $\rho_{j}$, given by the Poisson noise induced by the nonoverlapping parts of the cells, as shown in \cite{1996ApJ...470..131S}. The scatter in the difference between the two densities is \begin{equation} \sigma^{2}_{\rho_{i}-\rho_{j}}=\frac{2}{\overline{N}}\rho_{m}\,f_{e}(d) ,\end{equation} and it can be incorporated as a contribution to the variance of the PDF of $\delta_{\rho}$ of the form \begin{equation} \sigma^{2}_{{\rm Poisson}}=\frac{1}{2 \overline{N}}\rho_{m}\,f_{e}(d)\label{PoissContr} ,\end{equation} where $f_{e}(d)$ is the fraction of the volume of each cell that does not overlap with the other as a function of the cell distance. For short distances (i.e., for about $d\lesssim R$), it is given in the 2D case by \begin{equation} f_{e}(d)=\frac{2d}{\pi\,R}. \end{equation} The expression (\ref{PoissContr}) is then a priori to be added to the variance term that appears in Eq. (\ref{shortdistjPDF}) so that the total variance for the density difference reads \begin{equation} \sigma_{\delta_{\rho}}^{2}(d)=\frac{1}{2}\rho^{\alpha}_{m}\Delta_{\xi}(d)+\frac{1}{2\overline{N}}\rho_{m}\,f_{e}(d)\label{FulldeltaVariance} .\end{equation} Equation (\ref{shortdistjPDF}) then fully encodes the fact that nearby cells are likely to have similar densities; it encodes, for instance, the fact that nearby cells lie within the same haloes. This contribution is expected to enhance the covariance terms. It shows that the amount of information is limited at small scales: there is therefore a minimum separation between cells below which no gain in precision is expected for PDF measurements. This minimum distance depends on the bin size: $d_{\min}$ is the distance such that the densities in two cells separated by less than $d_{\min}$ are almost certainly in the same density bin. $d_{\min}$ therefore depends on the bin width $\Delta_{i}$. From Eq. (\ref{FulldeltaVariance}), it is possible to infer this value. We require \begin{equation} \sigma_{\delta_{\rho}}^{2}(d_{\min})\ll \Delta_{i}^{2}. \end{equation} The Poisson term alone then suggests that a minimum distance between cells can be adopted, given by \begin{equation} d_{\min {\rm Poisson}}=\pi\,{\overline{N}}\,\Delta_{i}^{2}\,R. \end{equation} The other upper limit on $d$ comes from the expression of $\Delta_{\xi}$ as a function of $d$.
The latter depends on both the shape of the power spectrum and on the filter that is used. In general (e.g., for a Gaussian filter), $\Delta_{\xi}(d)$ scales like $d^{2}/R^{2}$, where $R$ is the filtering radius, with a coefficient $c_{n_{s}}$ that depends on the power spectrum index $n_{s}$ and is proportional to its amplitude. Top-hat filters have different analytical properties. We give here the formal expression of $\Delta_{\xi}(d)$ in 2D for a power-law spectrum of index $n_{s}$, \begin{eqnarray} \frac{\Delta_{\xi}(d)}{\overline{\xi}}&\!\!\!\!\!=&\!\!\!\!\!-\frac{2^{n_s\!-\!1} \Gamma \left(1\!-\!\frac{n_s}{2}\right) \Gamma \left(2\!-\!\frac{n_s}{2}\right) \Gamma \left(\frac{1}{2} \left(n_s\!-\!1\right)\right) }{\sqrt{\pi } \Gamma \left(\frac{1}{2}\!-\!\frac{n_s}{2}\right) \Gamma \left(\frac{3}{2}\!-\!\frac{n_s}{2}\right) \Gamma \left(\frac{n_s}{2}+1\right)} \left(\frac{d}{R}\right)^{1\!-\!n_s}\nonumber\\ &\!\!\!\!\!\approx&\!\!\!\! 0.72\,\frac{d^{3/2}}{R^{3/2}} \hbox{ for }n_{s}=-1/2. \end{eqnarray} This is the situation we encounter below in the numerical experiments we perform. This leads to the following form: \begin{equation} d_{\min {\rm halo}}=\left(\frac{\Delta_{i}^{2}}{0.72\,\overline{\xi}}\right)^{2/3}\,R\label{dminhalo}. \end{equation} It is to be noted that in practice this can be a rather short distance, shorter than the filtering scale $R$. For instance, for a bin width of $1/4$ and a variance of about unity, $d_{\min {\rm halo}}$ is about $R/5$. Equation (\ref{shortdistjPDF}), together with the expressions of the bias functions described above, constitutes the main result of this paper. We illustrate below how they can be used to compute covariance matrices. \section{Hierarchical models} In order to illustrate the previous findings, we make use of toy models for which explicit computations can be made. \subsection{General formalism} Hierarchical models are a class of non-Gaussian fields whose correlation functions follow the so-called hierarchical ansatz, \begin{equation} \xi_{p}({\bf r}_{1},\dots,{\bf r}_{p})=\sum_{t\in{\rm trees}}Q_{p}(t)\,\prod_{{\rm lines}\in t}\xi({\bf r}_{i},{\bf r}_{j}) ,\end{equation} where the sum is made over all possible trees that join the $p$ points (diagrams without loops), and the value of a tree is obtained as the product of a fixed weight (that depends only on the tree topology) with the product of the two-point correlation functions of all pairs that are connected together in the given tree. This construction ensures that the average $p$-point function, $\overline{\xi_{p}}$, scales like $\overline{\xi}^{p-1}$, where $\overline{\xi}$ is the averaged two-point function. More precisely, there are $S_{p}$ parameters such that \begin{equation} \overline{\xi_{p}}=S_{p}\,\overline{\xi}^{p-1}. \end{equation} The precise values of the $S_{p}$ parameters depend on the $Q_{p}$ parameters and on the averages of the products of the $\xi(r_{ij})$ functions. A very good approximation is to assume that the average of a product of these functions is given by the product of their averages. The $S_{p}$ coefficients then depend solely on $Q_{p}$, \begin{equation} S_{p}=\sum_{t}Q_{p}(t). \end{equation} \subsection{The (minimal) tree model} The tree models are based on a further assumption on the $Q_{p}$ parameters. It is basically assumed that tree expressions can be computed locally\footnote{Perturbation theory results do not exactly follow this construction, as the vertices are then dependent on the geometry of the incoming lines.
However, in this case, the $Q_{p}$ values are indeed obtained from a product of vertices.}, that is, \begin{equation} Q(t)=\prod_{{\rm vertices}\in t} \nu_{p} ,\end{equation} where $\nu_{p}$ is a weight attributed to all vertices with $p$ incoming lines ($\nu_{0}=\nu_{1}=1$ for completeness). In this formalism, the vertex-generating function is generally introduced, \begin{equation} \zeta(\tau)=\sum_{p}\frac{\nu_{p}}{p!}\tau^{p} .\end{equation} The minimal tree model is a model in which only $\nu_{2}$ does not vanish. In the minimal model\footnote{It is minimal in the sense that it can be shown that $\nu_{2}$ cannot be smaller than $1/2$ in the strongly nonlinear regime \citep{1980lssu.book.....P}.}, its value is fixed and is given by $\nu_{2}=1/2,$ so that \begin{equation} \zeta_{{\rm RL}}(\tau)=(1+\tau/2)^{2}. \end{equation} Together with the Gaussian case (which corresponds to $\zeta(\tau)=1+\tau$), this is the only case for which we are sure that it can be effectively built (in the sense that other models may be unphysical). In this model, it is possible to build the cumulant generating function of the local density. For the one-point case, assuming the mean-field approximation, the CGF is given by \begin{equation} \varphi(\lambda)=\lambda\left[\zeta(\tau)-\frac{1}{2}\tau\zeta'(\tau)\right] \end{equation} with \begin{equation} \tau=\lambda\,\overline{\xi}\,\zeta'(\tau).\label{taueq} \end{equation} This is not the result of large-deviation-principle calculations, but of mere combinatorics, although it leads to the same formal transformation between the CGF and the vertex-generating function. In the case of the minimal model, Eq. (\ref{taueq}) takes a simple form that can easily be solved. We finally have \begin{equation} \varphi(\lambda)=\frac{\tau(\lambda)}{\overline{\xi}},\ \ \ \tau(\lambda)=\frac{\lambda\overline{\xi}}{1-\lambda\overline{\xi}/2}. \end{equation} The one-point PDF of the density can then be computed explicitly (see appendix), \begin{equation} P(\rho)=\frac{4}{\overline{\xi}^{2}}\exp\left[-\frac{2}{\overline{\xi}}(1+\rho)\right] \ _{0}F_{1}\left(2,\frac{4\rho}{\overline{\xi}}\right) \label{LFpdf} ,\end{equation} as can the density-bias function, \begin{equation} b(\rho)=\frac{\ _{0}F_{1}\left(1,\frac{4\rho}{\overline{\xi}}\right)}{\ _{0}F_{1}\left(2,\frac{4\rho}{\overline{\xi}}\right)}-\frac{2}{\overline{\xi}},\end{equation} where $\overline{\xi}$ is the averaged two-point correlation function within the cell. \subsection{Rayleigh-Levy flight model} \begin{figure} \centering \includegraphics[width=7cm]{samplepoints.pdf} \caption{Example of a realization of a Rayleigh-Levy walk. Points mark the end point of each displacement. They are clearly correlated.} \label{samplepoints} \end{figure} The minimal tree model can be implemented with Rayleigh-Levy random walks \citep[or rather Rayleigh-Levy flights, as described in][]{1980lssu.book.....P}. This is a Markov random walk in which the step length $\ell$ follows a power-law distribution, \begin{equation} P(\ell)\sim\frac{1}{\ell^{\alpha}} ,\end{equation} with a regularizing cutoff at small separation, and where $\alpha$ satisfies \begin{equation} 0 < \alpha < 2. \end{equation} The sample points are then all the step points reached by the walker. More precisely, the complementary cumulative distribution function of the step length is \begin{eqnarray} P(>\ell_{0})&=&1,\\ P(>\ell)&=&\left(\frac{\ell_{0}}{\ell}\right)^{\alpha}\ \ \hbox{for}\ \ \ell>\ell_{0} ,\end{eqnarray} where $\ell_{0}$ is a small-scale parameter.
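Such a flight is straightforward to generate numerically. The following minimal sketch (our own illustration; the default parameter values are chosen to mimic the 2D experiments described below) samples the step lengths by inverting the cumulative distribution above:
\begin{verbatim}
import numpy as np

def rayleigh_levy_flight_2d(n_points, alpha=0.5, ell0=0.003, L=200.0, seed=1):
    """Generate a 2D Rayleigh-Levy flight: isotropic steps whose lengths
    follow P(>ell) = (ell0/ell)^alpha, in a periodic box of size L."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(n_points)          # uniform in (0, 1]
    ell = ell0 * u**(-1.0 / alpha)          # inverse-CDF sampling of lengths
    theta = rng.uniform(0.0, 2.0 * np.pi, n_points)
    steps = np.column_stack((ell * np.cos(theta), ell * np.sin(theta)))
    return np.cumsum(steps, axis=0) % L     # positions, wrapped periodically
\end{verbatim}
The two parameters $\alpha$ and $\ell_{0}$ control the clustering of the resulting point set, as illustrated in Fig. \ref{samplepoints}.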
The two- and higher-order correlation functions can then be explicitly computed. Starting with a first point at position ${\bf r}_{0}$, the density of the subsequent point (first descendant) at position ${\bf r}$ is given by \begin{eqnarray} f_{1}({\bf r})&=&\frac{\alpha}{2\pi}\frac{\ell_{0}^{\alpha}}{\vert{\bf r}-{\bf r}_{0}\vert^{2+\alpha}}\ \ \hbox{in 2D space};\\ f_{1}({\bf r})&=&\frac{\alpha}{4\pi}\frac{\ell_{0}^{\alpha}}{\vert{\bf r}-{\bf r}_{0}\vert^{3+\alpha}}\ \ \hbox{in 3D space}.\end{eqnarray} In the following, the dimension of space is denoted $D$. The density of the descendants of a point at position ${\bf r}_{0}$ (assuming there is an infinite number of them) is then given by a series of convolutions, \begin{equation} f({\bf r}_{0},{\bf r})=f_{1}({\bf r})+\int{\rm d}^{D}{\bf r}_{1}\ f_{1}({\bf r}-{\bf r}_{1})f_{1}({\bf r}_{1}-{\bf r}_{0})+\dots ,\end{equation} with subsequent convolutions, where the integrals are taken over the whole space. Defining the Fourier transform of $f_{1}({\bf r})$ as $\psi(k)$, \begin{equation} \psi(k)=\int{\rm d}^{D}{\bf r}\ f_{1}({\bf r})\,e^{-{\rm i}{\bf k}.{\bf r}}, \end{equation} which is then a function of $k$ only, it is easy to see that \begin{eqnarray} f({\bf r}_{0},{\bf r})&=&\int\frac{{\rm d}^{D}{\bf k}}{(2\pi)^{D}}\ e^{{\rm i}{\bf k}.({\bf r}-{\bf r}_{0})}\left[\psi(k)+\psi^{2}(k)+\dots\right]\nonumber\\ &=&\int\frac{{\rm d}^{D}{\bf k}}{(2\pi)^{D}}\ e^{{\rm i}{\bf k}.({\bf r}-{\bf r}_{0})}\,\frac{1}{1-\psi(k)} ,\end{eqnarray} where we take advantage of the expression of convolutions in Fourier space and of their resummation. The two-point correlation function is then given by two possible configurations: a neighbor can either be an ascendant or a descendant, so that the two-point correlation function between positions ${\bf r}_{1}$ and ${\bf r}_{2}$ is given by \begin{eqnarray} \xi_{2}({\bf r}_{1},{\bf r}_{2})&=&\frac{1}{n}\left[f({\bf r}_{1},{\bf r}_{2})+f({\bf r}_{2},{\bf r}_{1})\right]\nonumber\\ &=&\frac{1}{n} \int\frac{{\rm d}^{D}{\bf k}}{(2\pi)^{D}}\ e^{{\rm i}{\bf k}.({\bf r}_{2}-{\bf r}_{1})}\,\frac{2}{1-\psi(k)}\label{exactxiRL} ,\end{eqnarray} where $n$ is the number density of points in the sample, which can be associated with a typical length $\ell_{n}$, \begin{equation} n=\frac{1}{\ell_{n}^{D}}. \end{equation} At large scales, the power spectrum is a power law scaling like $k^{-\alpha}$, and the resulting two-point correlation function takes the following form in the large-separation limit: \begin{eqnarray} \xi_{\alpha,\,2D}(r)&=&\frac{\alpha}{\pi }\ r^{\alpha -2} \ell_{0}^{-\alpha }\ell_{n}^{2}\label{axiRL2D}\\ \xi_{\alpha,\,3D}(r)&=&\frac{\left(1-\alpha ^2\right) \tan \left(\frac{\pi \alpha }{2}\right)}{\pi ^2}\ r^{\alpha -3} \ell_{0}^{-\alpha}\ell_{n}^{3} \label{axiRL3D} .\end{eqnarray} It is to be noted, however, that this expression does not take the boundary conditions into account, in particular when they are assumed to be periodic. This case is examined in some detail below; the function $\xi(r)$ then has a more complex form and is in particular no longer isotropic. Higher-order correlation functions can also be computed in this model: $p$ points are correlated when they are embedded in a chronological sequence that can be run in one direction or the other.
Thus the three-point function is simply given by \begin{equation} \xi_{\alpha}({\bf r}_{1},{\bf r}_{2},{\bf r}_{3})=\frac{1}{n^{2}}\left[f({\bf r}_{1},{\bf r}_{2})f({\bf r}_{2},{\bf r}_{3})+\dots\right] \label{xi3expression1} ,\end{equation} with five other terms obtained from all combinations of the indices. Expressing the result in terms of the two-point function, we have \begin{eqnarray} \xi_{\alpha}({\bf r}_{1},{\bf r}_{2},{\bf r}_{3})&=&\frac{1}{2}\left[\xi_{\alpha}({\bf r}_{1},{\bf r}_{2})\xi_{\alpha}({\bf r}_{2},{\bf r}_{3})+\right.\nonumber\\ &&\hspace{-2cm}\left.\xi_{\alpha}({\bf r}_{2},{\bf r}_{3})\xi_{\alpha}({\bf r}_{3},{\bf r}_{1})+\xi_{\alpha}({\bf r}_{3},{\bf r}_{1})\xi_{\alpha}({\bf r}_{1},{\bf r}_{2}) \right] \label{xi3expression2} ,\end{eqnarray} corresponding to a tree structure with $\nu_{2}=1/2$. Higher-order correlation functions can be computed similarly. They follow a tree structure in the sense above, with $\nu_{2}=1/2$ and $\nu_{p}=0$ for $p\ge 3$. \subsection{Periodic boundary conditions} We briefly explore here the case of periodic boundary conditions. The multipoint density field $g^{{\rm PBC}}(\{{\bf r}_{i}\})$ for periodic boundary conditions can then be expressed in terms of the former density field $g(\{{\bf r}_{i}\})$ as a sum over all copies of the sample, that is, \begin{equation} g^{{\rm PBC}}(\{{\bf r}_{i}\})=\sum_{{\bf n}_{i}}g(\{{\bf r}_{i}+{\bf n}_{i}L\}) ,\end{equation} where the ${\bf n}_{i}$ are vectors whose components are integers, ${\bf n}_{i}=(n_{i}^{x},n_{i}^{y},n_{i}^{z})$, the sums run over all integer values for all $i$, and $L$ is the size of the sample (assumed to be the same in all directions). In this context, we can construct the $n$-point density functions out of the density function $f$ computed previously. Thus the two-point density function is given by \begin{equation} g^{{\rm PBC}}({\bf r}_{1},{\bf r}_{2})=n^{{\rm PBC}}\sum_{{\bf n}_{12}}f({\bf r}_{1},{\bf r}_{2}-{\bf r}_{1}+{\bf n}_{12}L) ,\end{equation} where ${\bf n}_{12}={\bf n}_{2}-{\bf n}_{1}$ and $n^{{\rm PBC}}$ is the resulting one-point (and therefore homogeneous) density in the sample. This expression is therefore written in terms of the function \begin{equation} f^{{\rm PBC}}({\bf r}_{0},{\bf r})= \sum_{{\bf n}}f({\bf r}_{0},{\bf r}-{\bf r}_{0}+{\bf n} L). \end{equation} We can now compute its expression in terms of the power spectrum, or more specifically, the function $\psi(k)$ defined previously. We have \begin{equation} f^{{\rm PBC}}({\bf r}_{0},{\bf r})= \int\frac{{\rm d}^{D}{\bf k}}{(2\pi)^{D}}\ e^{{\rm i}{\bf k}.({\bf r}-{\bf r}_{0})}\,\sum_{{\bf n}}e^{{\rm i}{\bf n}.{\bf k}\,L}\frac{1}{1-\psi(k)}, \end{equation} and the latter sum ensures that the contributing wave modes ${\bf k}$ are only those that are periodic in all directions, that is, those whose components are multiples of $2\pi/L$, so that \begin{equation} f^{{\rm PBC}}({\bf r}_{0},{\bf r})=\sum_{{\bf n}^{*}}\frac{1}{L^{D}}\ e^{2\pi {\rm i}\ {\bf n}.({\bf r}-{\bf r}_{0})/L}\,\frac{1}{1-\psi(k_{n})} ,\end{equation} with \begin{equation} k_{n}=({\bf n}.{\bf n})^{1/2}\frac{2\pi}{L} ,\end{equation} and where the sum runs over all possible integer vectors ${\bf n}$ except ${\bf n}=(0,\dots,0)$.
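In practice, this mode sum can be truncated and evaluated directly. A minimal 2D sketch (our own illustration; it assumes $\psi$ is available as a callable, and truncates the sum at $\vert n_{x}\vert,\vert n_{y}\vert\leq n_{\max}$) reads:
\begin{verbatim}
import numpy as np

def f_pbc_2d(r, r0, psi, L, nmax=64):
    """Truncated mode sum for f^PBC in 2D: sum over integer wavevectors
    n != 0 of exp(2 pi i n.(r - r0)/L) / (1 - psi(k_n)), divided by L^2."""
    n = np.arange(-nmax, nmax + 1)
    nx, ny = np.meshgrid(n, n, indexing="ij")
    mask = (nx != 0) | (ny != 0)              # exclude the n = 0 mode
    k = 2.0 * np.pi * np.hypot(nx[mask], ny[mask]) / L
    dx, dy = r[0] - r0[0], r[1] - r0[1]
    phase = np.exp(2j * np.pi * (nx[mask] * dx + ny[mask] * dy) / L)
    return np.real(np.sum(phase / (1.0 - psi(k)))) / L**2
\end{verbatim}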
The two-point correlation function is now given by \begin{eqnarray} \xi_{\alpha}^{{\rm PBC}}({\bf r}_{1},{\bf r}_{2})=\frac{1}{n^{{\rm PBC}}}\left[f^{{\rm PBC}}({\bf r}_{1},{\bf r}_{2})+f^{{\rm PBC}}({\bf r}_{2},{\bf r}_{1})\right] .\end{eqnarray} A similar result can be obtained for the three-point correlation function with \begin{eqnarray} \xi_{\alpha}^{{\rm PBC}}({\bf r}_{1},{\bf r}_{2},{\bf r}_{{3}})&=&\nonumber\\ &&\hspace{-2cm}\frac{1}{(n^{{\rm PBC}})^{2}}\left[f^{{\rm PBC}}({\bf r}_{1},{\bf r}_{2})\ f^{{\rm PBC}}({\bf r}_{2},{\bf r}_{3})+\dots\right]. \end{eqnarray} As a consequence, the functional relation between the three-point correlation function and the two-point correlation function is left unchanged. This is also the case at all orders. \subsection{Covariance matrix of the minimal tree model} Remarkably, in the case of the minimal tree model, the derivation of the CGF can also be made for multiple cells, and in particular, for two cells. Its expression is derived in the appendix. It takes the form \begin{equation} \varphi(\lambda_{1},\lambda_{2})= \frac{\lambda_{1}+\lambda_{2}+(\xi_{12}-\overline{\xi})\lambda_{1}\lambda_{2}}{1-(\lambda_{1}+\lambda_{2})\,\overline{\xi}/2-\lambda_{1}\lambda_{2}\,(\xi_{12}^{2}-\overline{\xi}^{2})/4}.\label{MeanFPhi21} \end{equation} In this case, it is then possible to expand its expression in powers of $\xi_{12}$ for distant cells or in powers of $(\overline{\xi}-\xi_{12})$ for close cells, and in both cases, closed forms can be obtained to any order. This makes it possible to compute the joint PDF for any configuration (see the appendix for details) and finally to evaluate the covariance matrix directly. This is even possible for any of the three sets of variables we consider, $\{\rho_{i}\}$, $\{\hat{\rho}_{i}\}$, or $\{\overline{\rho}_{i}\}$. We performed these computations for the minimal tree model with a power-law behavior $\xi(r)\sim r^{-1.5}$ ($\alpha=0.5$), a 2D survey with a size of $200^{2}$ pixels, and a top-hat smoothing radius of $4.25$ pixels. The amplitude of the correlation function was fixed to give $\overline{\xi}=1.09$ at the smoothing scale. This precisely corresponds to the setting of the numerical simulations of Rayleigh-Levy flights we also performed, as described in the next section, which allows us to compare the two approaches. These analytic results have two limitations: they are based on the mean-field approximation for the computation of the two-variable CGF, and the covariance elements are computed ignoring the bin sizes (i.e., by evaluating the expression of the covariance at the bin central values). Although in most cases this should not be an issue, it might still have a non-negligible impact when the PDF varies rapidly with the density. \section{Simplified models of the covariance matrix} The purpose of this section is to propose two levels of modeling of the covariance matrix based on the previous results and to compare these propositions with either the full analytic results presented before or the results of numerical experiments based on Rayleigh-Levy flights. \subsection{Modeling the covariance matrix} More specifically, we considered two approximate forms for the full covariance. The first approximation is fully analytic. It makes use of the large-scale contributions and those from the short-distance expression (\ref{shortdistjPDF}).
It reads as the sum of the two contributions \begin{eqnarray} {\rm Cov}(\rho_{i},\rho_{j})&=&b_{\#}(\rho_{i})P(\rho_{i})\overline{\xi}_{s} b_{\#}(\rho_{j})P(\rho_{j}) \nonumber \\ && \hspace{-.5cm}+ \int_{0}^{r_{\max}} {\rm d} r_{d}\,P_{s}(r_{d})\, P_{{\rm short\ dist.}}(\rho_{i},\rho_{j},r_{d}). \label{CovExpAp1} \end{eqnarray} In this expression, the only free parameter is $r_{\max}$. This is indeed a crucial parameter as it determines to a large extent the amplitude of the short-distance effects. In the following, we take $r_{\max}=R$, that is, the filtering scale. It is found to give a good result for the 2D case and for $n_{s}=-1/2,$ but this choice is likely to depend on the shape of the power spectrum. This formula is intended to give a good account of the general properties of the covariance matrix, but it cannot a priori provide reliable quantitative results. The other form we propose is intended to be much more precise quantitatively. It is given by the following expression: \begin{equation} {\rm Cov}(\rho_{i},\rho_{j})=b_{\#}(\rho_{i})P(\rho_{i})\overline{\xi}_{s} b_{\#}(\rho_{j})P(\rho_{j}) +{\rm Cov}^{{\rm PBC}}(\rho_{i},\rho_{j}) \label{CovExpAp2} ,\end{equation} where ${\rm Cov}^{{\rm PBC}}(\rho_{i},\rho_{j})$ is the expression of the covariance matrix for periodic boundary conditions. It is obtained here simply by replacing $P(\rho_{i},\rho_{j},\overline{\xi},\xi_{12}(r_d))$ by $P(\rho_{i},\rho_{j},\overline{\xi}-\overline{\xi}_{s},\xi_{12}(r_d)-\overline{\xi}_{s})$ before integrating over $r_{d}$ so that the averaged joint correlations vanish identically. The rationale for this proposition is that ${\rm Cov}^{{\rm PBC}}(\rho_{i},\rho_{j})$ could be more easily estimated from specific numerical experiments. In both cases, the short-distance contributions are the same for the three types of observables $\rho_{i}$, $\hat{\rho}_{i}$, and $\overline{\rho}_{i}$. These forms are then compared to numerical results. \subsection{Numerical experiments with the Rayleigh-Levy flight model} \begin{figure} \centering \includegraphics[width=7cm]{densityPDF-TH.pdf} \hspace{.5cm} \includegraphics[width=7cm]{reduced-densityPDF-TH.pdf} \caption{One-point density PDF obtained with top-hat filters compared with the theoretical predictions, Eq. (\ref{LFpdf}). The values of $\overline{\xi}$ are 0.8 and 1.09 for the blue and red curves, respectively, corresponding to two different values of $\ell_{0}$. The bottom panel shows the residuals. Departures from theory might be due to binning and/or to the finite number of samples.} \label{densityPDF-TH} \end{figure} \begin{figure*}[tbp] \centering \includegraphics[width=5.5cm]{DPDF_alpha0p5_RawDensity.pdf} \includegraphics[width=5.5cm]{DPDF_alpha0p5_S1Density.pdf} \includegraphics[width=5.5cm]{DPDF_alpha0p5_S2Density.pdf} \caption{Measured variance of the density PDF, i.e., diagonal elements of the covariance matrix, in set ${\cal A}$ for $\alpha=0.5$ and different prescriptions of the measured density. From left to right: raw density $\rho_{i}$, scaled density $\hat{\rho}_{i}$, and scaled density $\overline{\rho}_{i}$. The blue dots and solid lines are from the mean-field analytical expressions, and the large gold symbols are from the numerical simulations. The dashed black lines are what is expected from the large-scale leading contribution.
The variance at cell scale is about $1.09,$ and the variance at sample scale, $\overline{\xi}_{s}$, is about $0.09$.} \label{DPDF_SetA_0p5} \end{figure*} \begin{figure*}[tbp] \centering \includegraphics[width=5.5cm]{RedCovMat_alpha0p5_RawDensity.pdf} \includegraphics[width=5.5cm]{RedCovMat_alpha0p5_S1Density.pdf} \includegraphics[width=5.5cm]{RedCovMat_alpha0p5_S2Density.pdf} \caption{Resulting reduced covariance matrix for the three types of observables for set ${\cal A}$. The covariance matrix is dominated by its leading eigenvalue and direction, leading to this typical butterfly shape of the reduced covariance matrix. \label{NumRedCovariance}} \end{figure*} \begin{figure} \centering \includegraphics[width=7cm]{DPDF_alpha0p5_SetB.pdf} \caption{Measured variance of the density PDF obtained for set ${\cal B}$. Symbols are the same as in Fig. \ref{DPDF_SetA_0p5}.} \label{DPDF_SetB_0p5} \end{figure} A series of experiments of 2D walks with a large number of samples was performed as described below. We restricted our analysis to $\alpha=0.5$ with $\ell_{0}=0.003$ in pixel units (the dependence on $\ell_{0}$ was tested, as illustrated in Fig. \ref{densityPDF-TH}, where $\ell_{0}=0.006$ was also used, but the analyses were made for a fixed value of $\ell_{0}$). Fig. \ref{samplepoints} illustrates how points are distributed in these samples. The point distribution does not show the filamentary structure of realistic cosmological simulations. It exhibits the presence of concentrated haloes surrounded by empty regions, however, which are reminiscent of the structure of the largest matter concentrations of the cosmic web. Two different settings were employed to explore different aspects of the results: \begin{itemize} \item Set ${\cal A}$: 1600 samples extracted from a single numerical realization (with periodic boundary conditions) with a size of $8000 \times 8000$ pixels$^{2}$ containing $64 \times 10^{6}$ points. Each sample then has $200 \times 200$ pixels$^{2}$ containing an average of $200^{2}$ points each. For this set of samples, the average and covariance of the PDF were extracted following the three procedures mentioned before: the density was taken either with respect to the mean density of the realization, with respect to the density of each sample, or by subtracting the sample density. This therefore corresponds to an evaluation of the mean and covariance of the PDF of $\rho_{i}$, $\hat{\rho}_{i}$, and $\overline{\rho}_{i}$, respectively. \item Set ${\cal B}$: 1600 samples, each with periodic boundary conditions, with a size of $200 \times 200$ pixels$^{2}$ containing $200^{2}$ points each. By construction, the average two-point function in the sample, $\overline{\xi}_{s}$, vanishes in this case, and the covariance is entirely due to proximity effects. \end{itemize} In each case, the local density was obtained after a filtering procedure, sketched below. The point positions were first pixelized, that is, each point was attributed to a pixel so that the mean number of points per pixel was one. The field was then filtered by a (quasi-)circular top-hat function. In practice, the number of pixels in the window function was 57, which makes the effective smoothing radius about 4.25 in pixel units. The resulting density was then measured at each pixel location, and histograms were computed after density binning.
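For reference, the following minimal sketch (our own illustration; the function name and default values are ours, chosen to mimic the setting above) implements this pixelization, top-hat filtering, and binning chain with periodic boundary conditions:
\begin{verbatim}
import numpy as np

def measure_density_pdf(pos, npix=200, R=4.25, bin_width=14/57, rho_max=8.0):
    """Pixelize point positions, smooth with a (quasi-)circular top-hat of
    radius R (pixel units; 57 pixels for R = 4.25), histogram the densities."""
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                                  bins=npix, range=[[0, npix], [0, npix]])
    # Disk kernel, normalized to unit sum
    y, x = np.indices((npix, npix))
    c = npix // 2
    disk = ((x - c)**2 + (y - c)**2 <= R**2).astype(float)
    disk /= disk.sum()
    # Periodic (circular) convolution via FFT
    rho = np.real(np.fft.ifft2(np.fft.fft2(counts) *
                               np.fft.fft2(np.fft.ifftshift(disk))))
    rho /= rho.mean()                        # density in units of the mean
    edges = np.arange(0.0, rho_max + bin_width, bin_width)
    P, _ = np.histogram(rho.ravel(), bins=edges, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), P
\end{verbatim}
Applied to the positions returned by the flight generator sketched above, this yields a binned estimate of $P(\rho_{i})$ for one sample; averages and covariances over many samples then give the quantities studied here.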
To avoid large undue discrete effects, the bin width was chosen to be a multiple of $1/57,$ and in order to ensure that the requirement (\ref{dminhalo}) was met at the pixel distance, we chose a bin width of about $1/4$, more precisely, of $14/57$. Fig. \ref{densityPDF-TH} shows the resulting PDF as measured in the simulations and how it compares to the theoretical prediction, Eq. (\ref{LFpdf}), for two different choices of $\ell_{0}$. The expected scaling with $\overline{\xi}$ is recovered. The measured PDF also follows the theoretical predictions remarkably well for a wide range of probabilities. This gives us confidence in the whole procedure and in the approaches used to compute PDFs in this model. The detailed comparisons were made for $\ell_{0}=0.003,$ leading to $\overline{\xi}=1.09,$ and a sample density variance in set ${\cal A}$ given by $\overline{\xi}_{s}=0.09$. The measured variance of the density PDF was obtained from 1600 samples in each case. The resulting shapes are presented in Figs. \ref{DPDF_SetA_0p5} and \ref{DPDF_SetB_0p5} for the different cases, densities in a supersample realization and in samples with periodic boundary conditions. The figures compare the results obtained in the numerical simulations (gold symbols) with those derived from the analytic prescriptions based on the mean-field approximation (blue dots). The agreement between the two is very good. The overall shape of the variance and its dependence on the density is well reproduced. Discrepancies can be observed for densities above 4 or 5, however, where the theoretical predictions are seen to underestimate the results. The reasons for these discrepancies are not clear at this stage. A possible explanation might be the finite number of samples that is used to infer the variances\footnote{Although the number of samples is large, the number of haloes contained in each sample is finite, leading to discretization errors in the estimate of the covariance. Estimating the minimal number of realizations required to make such estimates is beyond the scope of this paper.}. The variance of the density PDF is also compared with the large-scale contributions (\ref{NaiveCov}), (\ref{NaiveCovs1}), and (\ref{NaiveCovs2}) for set ${\cal A}$, depending on the case (at this order, the covariance vanishes for set ${\cal B}$). This shows that these formulae capture some features of the variance (especially at low and moderate densities), but do not account for all of it. This is also illustrated in Fig. \ref{NumRedCovariance}, which shows the reduced covariance. The fact that the covariance is determined to a large extent by its leading large-scale contribution leads to values of the reduced covariance close to 1 or $-1$, hence the butterfly patterns. Proximity effects, not captured in these forms, also contribute to the covariances at a significant level, however. This is already apparent in Fig. \ref{DPDF_SetA_0p5}. \begin{figure*}[tbp] \centering \includegraphics[width=5.5cm]{DPDF_alpha0p5_ApForm_RawDensity.pdf} \includegraphics[width=5.5cm]{DPDF_alpha0p5_ApForm_S1Density.pdf} \includegraphics[width=5.5cm]{DPDF_alpha0p5_ApForm_S2Density.pdf} \caption{Measured variance of the density PDF, i.e., diagonal elements of the covariance matrix, in set ${\cal A}$ and comparisons with the proposed approximate forms. The yellow line and symbols are the results obtained in the numerical experiments.
The dot-dashed line is the prediction derived from relation (\ref{CovExpAp2}), and the dashed gray line shows the prediction from Eq. (\ref{CovExpAp1}). The dot-dashed black lines correspond to the large-scale contributions.} \label{DPDF_ApForm_0p5} \end{figure*} \begin{figure*}[tbp] \centering \includegraphics[width=5.5cm]{FirstEigenvect_0p5.pdf} \includegraphics[width=5.5cm]{FirstEigenvect_s1_0p5.pdf} \includegraphics[width=5.5cm]{FirstEigenvect_s2_0p5.pdf} \caption{Behavior of the first eigenvector with the same color-coding as in Fig. \ref{DPDF_ApForm_0p5}. The dashed black lines are the large-scale prediction, $b_{\#}(\rho_{i}) P(\rho_{i})$, appropriately normalized. The size of the data vector is 30.} \label{FirstEigenvect} \end{figure*} \subsection{Testing models of covariance matrices} Expressions (\ref{CovExpAp1}) and (\ref{CovExpAp2}) are precise propositions for how the large-scale contributions can be completed to account for the full form of the covariance. The comparisons between the predicted forms and those obtained from the numerical experiments are explored in detail at different levels, using the following criteria: \begin{itemize} \item amplitude of the PDF variance, \item density dependence of the first eigenvalue of the covariance matrix, \item amplitude of the eigenvalues of the covariance matrix, and \item resulting $\chi^{2}$ distribution of a set of data vectors drawn from the original covariance. \end{itemize} These comparisons are shown in Figs. \ref{DPDF_ApForm_0p5} to \ref{ChiSquareTests}. For model (\ref{CovExpAp2}), the term ${\rm Cov}^{{\rm PBC}}(\rho_{i},\rho_{j})$ is taken from the measured covariance of set ${\cal B}$. Figs. \ref{DPDF_ApForm_0p5} and \ref{FirstEigenvect} show that these two prescriptions give a good account of the leading behavior of the covariance matrix. The conclusion is quite sensitive to the choice of $r_{\max}$ for prescription (\ref{CovExpAp1}). On the other hand, there is no free parameter that can be adjusted in model (\ref{CovExpAp2}). Interestingly, Fig. \ref{FirstEigenvect} shows that the PDF variance also departs significantly from the large-scale term. The first eigenvector reproduces the functional form of the large-scale density-bias functions very faithfully. The last two criteria are designed to verify that the reconstructed covariances also capture the subleading behavior of the matrix and can eventually be safely inverted and used as a model of the likelihood. To avoid numerical uncertainties and make the comparison tractable, we chose to reduce the binning to six bins (through a rebinning of the histograms, with densities ranging from $0.5$ to $6.5$). The resulting eigenvalues are shown in the top panels of Fig. \ref{ChiSquareTests}. They decrease rapidly in amplitude, suggesting that the eigendirections are well sequenced, and the approximate forms capture their values rather accurately. Form (\ref{CovExpAp2}) in particular reproduces all six eigenvalues almost exactly. Finally, $\chi^{2}$ distributions were computed from a set of random data vectors $P_{i} ^{{\rm ex}}$ drawn in each case from a Gaussian likelihood built from the measured covariance (with six bins).
The values of $\chi^{2}(P_{i} ^{{\rm ex}})$ were then computed for each data vector, and their histogram was computed for each of the proposed models (including the original model for reference), \begin{equation} \chi_{{\rm model}}^{2}(P_{i} ^{{\rm ex}})=\frac{1}{2}\sum_{ij}{\cal N}_{{\rm model}}^{ij}P_{i} ^{{\rm ex}}P_{j} ^{{\rm ex}} ,\end{equation} where ${\cal N}_{{\rm model}}^{ij}$ is the inverse of the covariance matrix, either computed from Eq. (\ref{CovExpAp1}) or from Eq. (\ref{CovExpAp2}). For the original model, the distribution of the $\chi^{2}$ values is then expected to be precisely that of a $\chi^{2}$ distribution with six degrees of freedom. This is indeed what is almost exactly obtained for model (\ref{CovExpAp2}). Results obtained from prescription (\ref{CovExpAp1}) are not quite as good. This is expected, as the short-distance effects are estimated rather crudely in Eq. (\ref{CovExpAp1}). The performance of this prescription deteriorates when the dimension of the data vector (i.e., the number of bins) increases. \begin{figure*}[tbp] \centering {\includegraphics[width=5.5cm]{Eigenvalues_0p5.pdf} \includegraphics[width=5.5cm]{Eigenvalues_s1_0p5.pdf} \includegraphics[width=5.5cm]{Eigenvalues_s2_0p5.pdf}\vspace{.3cm}} \includegraphics[width=5.5cm]{ChiSquare_0p5.pdf} \includegraphics[width=5.5cm]{ChiSquare_s1_0p5.pdf} \includegraphics[width=5.5cm]{ChiSquare_s2_0p5.pdf} \caption{ Performance of the approximate forms of the covariance matrix in terms of eigenvalues and $\chi^{2}$ distributions. Top panel: Eigenvalues of the covariance matrices (rebinned into six bins) compared to what can be obtained from the proposed approximate forms; same color-coding as for Fig. \ref{DPDF_ApForm_0p5}. The $\chi^{2}$ distributions are shown in the bottom panel. Model (\ref{CovExpAp2}) reproduces the same $\chi^{2}$ distribution as the original covariance. Model (\ref{CovExpAp1}), in gray, is not as accurate and tends to slightly overestimate the $\chi^{2}$. This latter behavior is amplified when a larger number of bins is used. \label{ChiSquareTests}} \end{figure*} \section{Conclusions and lessons} \begin{figure} \centering \includegraphics[width=6.5cm]{xi3der_moments.pdf} \includegraphics[width=6.5cm]{xi2der_moments.pdf} \caption{Scale dependence of the matter correlation functions for a realistic cosmological model \citep[cosmological parameters derived from Planck,][]{2020A&A...641A...6P} for the 3D density and the projected density (for a uniformly sampled survey with a depth of about 800 $h^{-1}$Mpc between $z=0.75$ and $z=1.25$). The top panel shows $r_{d}^{3}\xi(r_{d})$ (solid blue line) and $r_{d}^{3}\xi^{2}(r_{d})$ (dashed red line) for the 3D density field, and the bottom panel shows $\theta^{2}_{d}\xi(\theta_{d})$ and $\theta^{2}_{d}\xi^{2}(\theta_{d})$ for the projected density. In both cases, the average value of the first moment of the two-point correlation function is dominated by large-distance contributions, whereas short-distance contributions dominate the second moment, assuming survey sizes of about 100 $h^{-1}$Mpc or above. \label{xider_moments}} \end{figure} We presented key relations that give the large-scale behavior of the joint PDF, and hence the leading behavior of the covariance matrix of the density PDF. These contributing terms do not by themselves yield a covariance matrix that can be used to build a likelihood function, however, as the resulting matrix is not invertible. Further significant contributions are found to be due to small-separation effects, and an approximate form is proposed in Eq.
(\ref{shortdistjPDF}). The latter is found to encapsulate most of the proximity effects, that is, it captures the fact that nearby regions are likely to be correlated. It also gives an indication of the minimal grid size that can be used without information loss for a given bin width. We then used a toy model for which numerical experiments can easily be performed and for which the exact PDF and large-scale covariance can be derived. It allows us to evaluate the efficiency of approximate schemes precisely. The conclusions of these comparisons are listed below.\begin{itemize} \item The theoretical forms Eqs. (\ref{NaiveCov}, \ref{NaiveCovs1}, and \ref{NaiveCovs2}) give the leading-order expression of the covariance elements when supersample effects are taken into account. They give an accurate prediction of the leading eigenvalue and eigendirection of the covariance matrix. \item Whether subdominant effects can be accounted for by subsequent terms depends on the behavior of the two-point function: if the r.m.s. of the two-point function is dominated by large separations, then next-to-leading-order effects need to be taken into account; otherwise, short-distance effects will be the dominant contributor. \item When short-distance effects dominate, the covariance matrix can be obtained from small simulations, provided the relevant dominant large-scale contributions are added. \item This suggests that in realistic situations, the supersample effects, that is, the effects due to modes whose wavelength is larger than the size of the survey, have limited impact on the structure of the covariance matrix, and that they can be captured by the leading large-scale contribution alone. This is supported by a further analysis of the behavior of $\xi(r_{d})$ in realistic cosmological settings. For the standard model of cosmology \citep[as derived from cosmic microwave background observations,][]{2020A&A...641A...6P}, the behavior of the matter correlation function can be derived. This is illustrated in Fig. \ref{xider_moments}, which shows the scales that are the main contributors to the first two moments of the two-point correlation function. Whether in 2D or in 3D, the first moment is dominated by large-scale contributions, whereas the second moment is dominated by small-scale contributions. \item In the context of this study, we assumed that the measured $P_{i}$ were Gaussian distributed. Although it is difficult to assess the accuracy of this hypothesis, the structure uncovered in section \ref{sec:PDFcovariances} can be used to make such an attempt. In tree models, higher-order expressions of the joint density PDFs are expected to preserve the tree structure; see \citet{1999A&A...349..697B}. The connected part of the three-point joint density PDF is then expected to take the form \begin{eqnarray} &&{\rm Cov}(\rho_{i},\rho_{j},\rho_{k})=\nonumber\\ &&b_{2}(\rho_{i})P(\rho_{i})\,\overline{\xi}_{s}\,b(\rho_{j})P(\rho_{j})\,\overline{\xi}_{s}\,b(\rho_{k})P(\rho_{k})+{\rm sym.},\end{eqnarray} where $b_{2}(\rho)$ is the two-line bias function, of amplitude similar to $b^{2}(\rho)$. This implies in particular that the third-order cumulant is about $b(\rho)^{4}P(\rho)^{3}\overline{\xi}_{s}^{2}$, much smaller than $\left[b(\rho)^{2}P(\rho)^{2}\overline{\xi}_{s}\right]^{3/2}$, making the distribution of the measured values of $P(\rho)$ quasi-Gaussian.
There might be some combinations of $\rho_{i}$ and values of $\overline{\xi}_{s}$, however, for which a higher-order term could play a role in the expression of the likelihood function. \end{itemize} For the application of these formulae in practical cases, some limitations have to be noted. We list them below. \begin{itemize} \item The proposed form does not take into account the fact that in practice, PDFs are generally measured on a grid, that is, on a finite set of locations. For instance, the exclusion of nonoverlapping cells is not considered. This is expected to introduce additional noise in the PDF estimates. The covariance matrix for these constructions cannot then be derived from the general formula (\ref{keyrelationCov}), even when the integral in $r_{d}$ is restricted to values above a given threshold. \item Relation (\ref{shortdistjPDF}) has been derived in a specific regime (using saddle-point approximations) for tree hierarchical models. It is expected to capture the phenomenon at play for ``typical'' values of the densities, but it may not perform so well in the rare-event tails (the exception being the minimum model, for which it is exact). Further checks of the validity of (\ref{shortdistjPDF}) should therefore be performed. \item The general formulae (\ref{NaiveCov}, \ref{NaiveCovs1}, and \ref{NaiveCovs2}) are valid for any type of filtering scheme, even for a compensated filter. This is not the case for relation (\ref{shortdistjPDF}). The proximity effects for compensated filters ought to be considered specifically. \item Prescription (\ref{CovExpAp2}) is found to give a very precise account of the properties of the covariance matrix. It is based on the proposition that large-scale (supersample) effects can be added separately from the proximity effects, and that the latter can be evaluated with small-scale mocks in which supersample effects are absent (with periodic boundary conditions). This is not an exact result, however. It relies in particular on the fact that the r.m.s. of $\xi_{s}$ is dominated by scales much smaller than the sample size. \item Prescription (\ref{CovExpAp1}) is less solid. It can be used for a quick assessment of the different contributing terms, or to build fully invertible covariance matrices, but it is unlikely to give reliable predictions at the $\chi^{2}$ level. \end{itemize} In all cases, prescriptions (\ref{CovExpAp1}) and (\ref{CovExpAp2}) can serve as starting points for a more precise evaluation of the covariance from dedicated numerical experiments, complementing, for instance, the approach presented in \cite{2018MNRAS.473.4150F}. The authors of that work also showed that some strategies could be adopted to limit the number of realizations required to reach a specific precision. This point is not discussed here. \begin{acknowledgements} The author of this article is indebted to Cora Uhlemann, Alex Gough, Oliver Friedrich, Sandrine Codis, Aoife Boyle and Alexandre Barthelemy for many comments and careful examination of the preparatory notes of this manuscript. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} There are two families of competitive algorithms for arbitrary-precision computation of elementary functions: the first uses Taylor series together with argument reduction and needs $O(\mathsf{M}(B) \log^2 B)$ time for $B$-bit precision~\cite{brent1976complexity}, while the second is based on the arithmetic-geometric mean (AGM) iteration for elliptic integrals and achieves complexity $O(\mathsf{M}(B) \log B)$~\cite{Brent1976}.\footnote{$\mathsf{M}(B)$ is the complexity of $B$-bit multiplication. We can take $\mathsf{M}(B) = O(B \log B)$~\cite{Harvey2021}.} Due to constant-factor overheads, optimized implementations of Taylor series tend to perform better than the AGM for practical sizes of $B$, possibly even for $B$ in the billions. The degree of argument reduction is a crucial tuning parameter in Taylor series methods. For example, the standard algorithm for the exponential function\footnote{The logarithmic and trigonometric functions have analogous algorithms; alternatively, they can be computed from the exponential via connection formulas and root-finding for inverses. For a more comprehensive overview of techniques for elementary function evaluation, see Smith~\cite{Smith1989}, Muller~\cite{Muller2016}, Brent and Zimmermann~\cite{mca}, and Arndt~\cite{arndt2010matters}.} amounts to choosing a reduction parameter $r \ge 0$ and evaluating \begin{equation} \exp(x) = (\exp(t))^{2^r}, \quad t = x/2^r. \label{eq:expred} \end{equation} If $|x| < 1$, this costs~$r$ squarings plus the summation of $N \le B / r$ terms of~the series for $\exp(t)$, or better $N/2$ terms for $s = \sinh(t)$ (with $\exp(t) = s + \smash{\sqrt{s^2 + 1}}$). In moderate precision (up to around $B = 10^4$) the series evaluation costs $O(\sqrt{N})$ multiplications; for quasilinear complexity as $B \to \infty$, we use the ``bit-burst algorithm'': we write $\exp(t) = \exp(t_1) \cdot \exp(t_2) \cdots$ where $t_j$ extracts~$2^j$ bits in the binary expansion of $t$ and evaluate each $\exp(t_j)$ series using binary splitting. Asymptotically, $r$ should grow at most logarithmically with $B$, or the $O(r \mathsf{M}(B))$ time spent on squarings will dominate. In practice, the best $r$ will be of order 10 to 100 (varying with $B$) and these $r$ squarings may account for a large fraction of the work to evaluate the function. This prompts the question: can we reduce the argument to size $2^{-r}$ \emph{without} the cost of $r$ squarings? The only known solution relies on precomputation. For example, we need only a single multiplication for $r$-bit reduction if we have a precomputed table of $\exp(j/2^r)$, $0 \le j < 2^r$, or $m$ multiplications with an $m$-partite table of $m 2^{r/m}$ entries. Tables of this kind are useful up to a few thousand bits~\cite{Johansson2015elementary}, but they are rarely used at higher precision since they yield diminishing returns as the space and precomputation time increases linearly with $B$ and exponentially with $r$. Most commonly, arbitrary-precision software will only cache higher-precision values of the constants $\pi$ and $\log(2)$ computed at runtime, used for an initial reduction to ensure $|x| < 1$. \subsection*{Sch\"{o}nhage's method} In 2006, Sch\"{o}nhage \cite{schonhage2006,schonhage2011} proposed a method to compute elementary functions using ``diophantine combinations of incommensurable logarithms'' which avoids the problem with large tables. 
The idea is as follows: given a real number $x$, we determine integers $c, d$ such that \begin{equation} x \,\approx\, c \log(2) + d \log(3) \label{eq:xapprox23} \end{equation} within some tolerance $2^{-r}$ (it is a standard result in Diophantine approximation that such $c, d$ exist for any $r$). We can then use the argument reduction formula \begin{equation} \exp(x) = \exp(t) \, 2^c 3^d, \quad t = x - c \log(2) - d \log(3). \label{eq:scho23} \end{equation} There is an analogous formula for complex $x$ and for trigonometric functions using Gaussian primes. The advantage of Sch\"{o}nhage's method is that we only need to precompute or cache the two constants $\log(2)$ and $\log(3)$ to high precision while the rational power product $2^c 3^d$ can be computed on the fly using binary exponentiation. If $3^{|d|} < 2^B$, this step costs $O(\mathsf{M}(B))$.\footnote{In binary arithmetic, we only need to evaluate $3^d$ since multiplying by a power of two is free. This optimization is not a vital ingredient of the algorithm, however.} Sch\"{o}nhage seems to have considered this method useful only for $B$ in the range from around 50 to 3000 bits (in his words, ``medium precision''). The problem is that the coefficients $c, d$ in~\eqref{eq:scho23} grow exponentially with the desired amount of reduction. Indeed, solutions with $|t| < 2^{-r}$ will generally have $c, d = O(2^{r/2})$. It is also not obvious how to compute the coefficients $c$ and $d$ for a given $x$; we can use a lookup table for small $r$, but this retains the exponential scaling problem. \subsection*{Our contribution} In this work, we describe a version of Sch\"{o}nhage's algorithm in which we perform reduction using a basis of $n$ primes, where~$n$ is arbitrary and in practice may be 10 or more. The coefficients (power-product exponents) will then only have magnitude around $O(2^{r/n})$, allowing much greater reduction than with a single pair of primes.\footnote{Unfortunately, the only published records of Sch\"{o}nhage's algorithm are two seminar talk abstracts which are light on details. The abstracts do mention the possibility of combining three primes instead of a single pair ``for an improved design'', but there is no hint of a practical algorithm working with arbitrarily large $n$, $r$ and $B$, which will be presented here.} Section \ref{sect:reduction} presents an algorithm for quickly finding an approximating linear combination of several logarithms, which is a prerequisite for making the method practical. Section~\ref{sect:algorithm} describes the main algorithm for elementary functions in more detail. Section \ref{sect:machin} discusses the use of Machin-like formulas for fast precomputation of logarithms or arctangents, where we tabulate new optimized multi-evaluation formulas for special sets of values. Our implementation results presented in section~\ref{sect:implementation} show that the new version of Sch\"{o}nhage's algorithm scales remarkably well: we can quickly reduce the argument to magnitude $2^{-r}$ where we may have $r \ge 100$ at moderately high precision (a few thousand bits) and perhaps $r \ge 500$ at millions of bits. When $n$ is chosen optimally, the new algorithm runs roughly twice as fast as the best previous elementary function implementations (both Taylor and AGM-based) for bit precisions $B$ from a few thousand up to millions.
The storage requirements ($nB$ bits) and precomputation time (on par with one or a few extra function evaluations) are modest enough that the method is ideal as a default algorithm in arbitrary-precision software over a large range of precisions. \subsection*{Historical note} With the exception of Sch\"{o}nhage's work, we are not aware of any previous investigations into algorithms of this kind for arbitrary-precision computation of elementary functions of real and complex arguments. However, the underlying idea of exploiting differences between logarithms of prime numbers in a computational setting goes back at least to Briggs' 1624~\emph{Arithmetica logarithmica}~\cite{briggs1624,roegel2010reconstruction}. Briggs used a version of this trick when extending tables of logarithms of integers. We revisit this topic in section~\ref{sect:machin}. \section{Integer relations} \label{sect:reduction} We consider the following \emph{inhomogeneous integer relation problem}: given real numbers $x$ and $\alpha_1, \ldots, \alpha_n$ and a tolerance $2^{-r}$, find a vector $(c_1, \ldots, c_n) \in \ensuremath{\mathbb{Z}}^n$ with small coefficients such that \begin{equation} x \,\approx\, c_1 \alpha_1 + \ldots + c_n \alpha_n \label{eq:imhrel} \end{equation} with error at most $2^{-r}$. We assume that the equation $c_1 \alpha_1 + \ldots + c_n \alpha_n = 0$ has no solution over the integers. In the special case where $P = \{p_1, \ldots, p_n\}$ is a set of prime numbers and $\alpha_i = \log(p_i)$, solving \eqref{eq:imhrel} will find a $P$-smooth rational approximation \begin{equation} \exp(x) \,\approx\, p_1^{c_1} \cdots p_n^{c_n} \in \ensuremath{\mathbb{Q}} \label{eq:imhprod} \end{equation} with small numerator and denominator. Integer relation problems can be solved using lattice reduction algorithms like LLL~\cite{lenstra1982factoring,Coh1996}. However, directly solving \begin{equation} c_0 x + c_1 \alpha_1 + \ldots + c_n \alpha_n \approx 0 \end{equation} will generally introduce a denominator $c_0 \ne 1$, requiring a $c_0$-th root extraction on the right-hand side of \eqref{eq:imhprod}. In any case, running LLL each time we want to evaluate an elementary function will be too slow. Algorithm~\ref{alg:linred} solves these issues by precomputing solutions to the homogeneous equation $c_1 \alpha_1 + \ldots + c_n \alpha_n \approx 0$ and using these relations to solve the inhomogeneous version \eqref{eq:imhrel} through iterated reduction. \begin{algorithm} \caption{Approximate $x \in \ensuremath{\mathbb{R}}$ to within $2^{-r}$ by a linear combination $$x \,\approx\, c_1 \alpha_1 + \ldots + c_n \alpha_n, \quad c_i \in \ensuremath{\mathbb{Z}}$$ given $\alpha_i \in \ensuremath{\mathbb{R}}$ which are linearly independent over $\ensuremath{\mathbb{Q}}$. Alternatively, find a good approximation subject to some size constraint $f(c_1, \ldots, c_n) \le M$.} \label{alg:linred} \begin{enumerate}[leftmargin=0.65cm] \item Precomputation (independent of $x$): choose a real convergence factor $C > 1$. For $i = 1, 2, \ldots$, LLL-reduce $$ \renewcommand\arraystretch{1.1} \begin{pmatrix} 1 & 0 & \ldots & 0 & \lfloor C^i \alpha_1 + \tfrac{1}{2} \rfloor \\ 0 & 1 & \ldots & 0 & \lfloor C^i \alpha_2 + \tfrac{1}{2} \rfloor \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & \lfloor C^i \alpha_n + \tfrac{1}{2} \rfloor \end{pmatrix}.$$ This yields an approximate integer relation \begin{equation} \varepsilon_i = d_{i,1} \alpha_1 + \ldots + d_{i,n} \alpha_n, \quad \varepsilon_i = O(C^{-i}).
\label{eq:approxrel} \end{equation} (In fact, it yields $n$ such relations; we can choose any one of them.) We store tables of the coefficients $d_{i,j}$ and floating-point approximations of the errors $\varepsilon_i$. We stop after the first $i$ where $|\varepsilon_i| < 2^{-r}$. \vskip5pt \item Reduction (given $x$). \begin{itemize} \item Let $(c_1, \ldots, c_n) = (0, \ldots, 0)$. \item For $i = 1, 2, \ldots$, compute $m_i = \lfloor x / \varepsilon_i + 1/2 \rfloor$ and update: $$(c_1, \ldots, c_n) \; \gets \; (c_1 + m_i d_{i,1}, \ldots, c_n + m_i d_{i,n}),$$ $$x \; \gets \; x - m_i \varepsilon_i.$$ Stop and return the relation $(c_1, \ldots, c_n)$ when $|x| < 2^{-r}$ or when the next update would give $f(c_1, \ldots, c_n) > M$. \end{itemize} \end{enumerate} \end{algorithm} \subsection*{Analysis of Algorithm~\ref{alg:linred}} We assume heuristically that each step in the precomputation phase (1) succeeds in finding a relation~\eqref{eq:approxrel} with $\varepsilon_i$ within a small factor of $\pm C^{-i}$ and with coefficients $(d_{i,1}, \ldots, d_{i,n})$ of magnitude $O(C^{i/n})$. We will simply observe that this always seems to be the case in practice; a rigorous justification would require further analysis. It can happen that picking the first integer relation computed by LLL yields the same relation consecutively ($\varepsilon_i = \varepsilon_{i+1}$). In that case, we can just pick a different relation (while keeping the $\varepsilon_i$ sorted) or skip the duplicate relation. However, a decrease by much more than a factor $C$ between successive steps should be avoided, as it will result in larger output coefficients. Phase (1) terminates when $i = N \approx r \log(2)/\log(C)$. The multiplier $m_i$ computed in each step of the phase (2) reduction has magnitude around $C$. The coefficients $(c_1, \ldots, c_n)$ at the end of phase (2) will therefore have magnitude around \begin{equation} \sum_{i=1}^{N} C^{i/n+1} = \frac{C^{1/n+1}}{C^{1/n}-1} \left(2^{r/n}-1\right) \approx \frac{C n}{\log(C)} 2^{r/n}, \end{equation} or perhaps a bit smaller than this since on average there can be some cancellation. The prefactor $C/\log(C)$ is minimized when $C = e$, or in other words it is theoretically optimal to force $\varepsilon_i = \Theta(\exp(-i))$. However, this prefactor does not vary strongly with $C$, and a choice like $C = 2$ (one bit per step) or $C = 10$ (one decimal per step) may be convenient. Step $i$ of phase (1) requires LLL-reducing a matrix with $\beta$-bit entries where $\beta = O(i)$. The standard complexity bound for LLL is $O(n^{5+\varepsilon} \beta^{2+\varepsilon})$, so phase (1) costs $O(n^{5+\varepsilon} r^{3+\varepsilon})$.\footnote{The factor $r^{3+\varepsilon}$ can be improved to $r^{2+\varepsilon}$ using a quasilinear version of LLL~\cite{novocin2011lll}.} In our application, the tables generated in phase (1) are small (a few kilobytes) and do not need to be generated at runtime, so it suffices to note that the computations are feasible for ranges of $n$ and $r$ of interest; for empirical results, see section~\ref{sect:implementation}. Phase (2) requires $O(n r)$ arithmetic operations with $r$-bit numbers, for a running time of $O(n r^{2+\varepsilon})$. It is convenient to treat $x$ and $\alpha_i$ as fixed-point numbers with an $r$-bit fractional part. As an optimization, we can work with machine-precision (53-bit) floating-point approximations of $x'$ and the errors $\varepsilon_i$.
We periodically recompute $$x' = x - (c_1 \alpha_1 + \ldots + c_n \alpha_n)$$ accurately from the full-precision values only when this approximation runs out of precision, essentially every $53/\log_2(C)$ steps. The resulting algorithm has very low overhead. We will not consider asymptotic complexity improvements since $r$ will be moderate (a small multiple of the word size) in our application. \section{Computation of elementary functions} \label{sect:algorithm} Given $x \in \ensuremath{\mathbb{R}}$ and a set of prime numbers $P = \{p_1, p_2, \ldots, p_n\}$, the algorithm described in the previous section allows us to find integers $c_1, \ldots, c_n$ such that \begin{equation} t = x - (c_1 \log(p_1) + \ldots + c_n \log(p_n)) \end{equation} is small, after which we can evaluate the real exponential function as \begin{equation} \exp(x) = \exp(t) \, p_1^{c_1} \cdots p_n^{c_n}. \end{equation} Algorithm~\ref{alg:exp} describes the procedure in some more detail. \begin{algorithm} \caption{Computation of $\exp(x)$ for $x \in \mathbb{R}$ to $B$-bit precision using argument reduction by precomputed logarithms of primes.} \label{alg:exp} \begin{enumerate}[leftmargin=0.65cm] \item Precomputation (independent of $x$): select a set of prime numbers $P = \{p_1, p_2, \ldots, p_n\}$ with $p_1 = 2$. Compute $\log(p_1), \ldots, \log(p_n)$ to $B$-bit precision. \item Using Algorithm~\ref{alg:linred}, find an integer relation $x \approx c_1 \log(p_1) + \ldots + c_n \log(p_n)$, attempting to make the error as small as possible subject to $\| c_1, \ldots, c_n \|_P \le B$. This step can use low precision (about $r$ bits where $2^{-r}$ is the target reduction, in practice no more than a few machine words). \item Compute the power product $v/w = p_2^{c_2} \cdots p_n^{c_n}$ as an exact fraction, using binary splitting to recursively split the set of primes in half and using binary exponentiation to compute the individual powers. \item Calculate $t = x - (c_1 \log(p_1) + \ldots + c_n \log(p_n))$ using the precomputed logarithms. \item Compute $u = \exp(t)$ using Taylor series: depending on $B$, either use rectangular splitting for the sinh series or use the bit-burst decomposition $t = t_1 + t_2 + \ldots$ with binary splitting (see~e.g.\ \cite{mca} for details). \item Return $2^{c_1} u v / w$. \end{enumerate} \end{algorithm} \subsubsection*{Remarks} The bottleneck in the argument reduction is the cost of evaluating the power product $p_1^{c_1} \cdots p_n^{c_n} \in \ensuremath{\mathbb{Q}}$. How large coefficients (exponents) should we allow? A reasonable heuristic, implemented in~Algorithm~\ref{alg:exp}, is to choose coefficients such that the weighted norm \begin{equation} \nu = \| c_1, \ldots, c_n \|_P = \sum_{\substack{i=1 \\ p_i \ne 2}}^n |c_i| \log_2(p_i) \end{equation} is smaller than $B$: this ensures that the rational power product $p_1^{c_1} \cdots p_n^{c_n}$ has numerator and denominator bounded by $B$ bits. We discount the prime 2 in the norm with the assumption that we factor out powers of two when performing binary arithmetic. If $|x| > 1$, we should use $\log(2)$ alone for the first reduction in Algorithm~\ref{alg:linred} so that the corresponding exponentiation is free. We note that when computing the power product, there is no need to compute GCDs since the numerator and denominator are coprime by construction.
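To make the interplay between the two phases of Algorithm~\ref{alg:linred} and Algorithm~\ref{alg:exp} concrete, the following Python sketch (our own illustrative code, not the library implementation) runs the whole pipeline with mpmath: \texttt{mpmath.pslq} serves as a stand-in for the LLL step of phase (1), and the norm constraint $\| c_1, \ldots, c_n \|_P \le B$, the binary splitting of step (3) and guard-bit management are all omitted for brevity.

\begin{small}
\begin{verbatim}
# Illustrative sketch only: PSLQ replaces LLL in phase (1); the norm
# constraint and binary splitting of Algorithm 2 are omitted.
from mpmath import mp, mpf, log, pslq

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]   # n = 13

def relation_table(levels=32):
    # Phase (1) with C = 10: one approximate relation per decimal digit.
    table = []
    for i in range(1, levels + 1):
        mp.dps = i + 15
        logs = [log(mpf(p)) for p in PRIMES]
        d = pslq(logs, tol=mpf(10)**(-i), maxcoeff=10**6, maxsteps=10**5)
        if d is None:
            continue
        eps = sum(di * li for di, li in zip(d, logs))
        if eps and (not table or abs(eps) < abs(table[-1][1])):
            table.append((d, eps))       # keep strictly decreasing errors
    return table

def exp_reduced(x, table, prec=3000):
    mp.prec = prec + 64
    logs = [log(mpf(p)) for p in PRIMES]             # normally cached
    c, t = [0]*len(PRIMES), mpf(x)
    for d, eps in table:                             # phase (2) reduction
        m = int(round(float(t / eps)))
        c = [ci + m*di for ci, di in zip(c, d)]
        t -= m*eps
    num = den = 1                                    # step (3): exact product
    for p, ci in zip(PRIMES, c):
        if ci >= 0: num *= p**ci
        else:       den *= p**(-ci)
    t = mpf(x) - sum(ci*li for ci, li in zip(c, logs))    # step (4)
    return mp.exp(t)*num/den                         # steps (5)-(6)
\end{verbatim}
\end{small}

The output of \texttt{exp\_reduced(mpf(1)/3, relation\_table())} can be checked against mpmath's built-in exponential; note that the sketch delegates the final $\exp(t)$ to the library, whereas the actual algorithm evaluates it with rectangular splitting or the bit-burst method.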
There is not much to say about numerical issues; essentially, we need about $\log_2 (\sum_i |c_i| \log(p_i))$ guard bits to compensate for cancellation in the subtraction, which in practice will always be less than one extra machine word. If $|x| \gg 1$, we need an additional $\log_2 |x|$ guard bits for the accurate removal of $\log(2)$. \subsection{Numerical example} We illustrate computing $\exp(x)$ to 10000 digits (or $B = 33220$ bits) where $x = \sqrt{2} - 1$, using $n = 13$ primes. The following Pari/GP output effectively shows the precomputations of phase~(1) of Algorithm~\ref{alg:linred} with convergence rate $C = 10$. Since $2^{-100} \approx 7.9 \cdot 10^{-31}$, reducing by 32 relations with $C = 10$ is equivalent to $r = 100$ squarings in \eqref{eq:expred}. \begin{small} \begin{verbatim}
? n=13; for(i=1, 32, localprec(i+10); P=vector(n,k,log(prime(k))); d=lindep(P,i)~; printf("%s %.5g\n", d, d*P~))
[0, 0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0] 0.16705
[0, 0, 1, 0, -1, 0, -1, 0, 0, 0, 0, 1, 0] -0.010753
[-1, 0, 0, 0, 0, -1, 1, -1, 0, 1, 0, 0, 0] -0.0020263
[-1, 0, 0, 0, -1, 0, 1, -1, 1, -1, 1, 0, 0] -8.2498e-5
[1, 0, 1, -1, 0, 1, -1, 1, -1, 0, 0, -1, 1] 9.8746e-6
[0, 1, 0, -1, -1, 0, 2, -1, 0, -1, -1, 1, 1] 1.5206e-6
[1, -1, 0, 1, 1, 2, -1, 0, -2, 1, -1, -1, 1] 3.2315e-8
[1, -1, 0, 1, 1, 2, -1, 0, -2, 1, -1, -1, 1] 3.2315e-8
[1, 0, 4, -1, -2, 0, 0, 2, 0, -2, -2, 1, 1] 4.3825e-9
[0, -2, 0, 0, -2, 0, 0, 2, -4, 4, -1, 1, 0] -2.1170e-10
[1, 1, 4, 1, -1, 1, -2, -3, 0, -4, 3, 1, 1] -7.0743e-11
[0, -2, -1, 0, 2, 4, 4, 0, 3, 1, -6, -1, -3] 3.3304e-12
[3, 2, -1, -6, 2, 3, -2, -2, 3, 1, 5, -4, -2] 2.5427e-13
[-4, -2, 4, -4, 3, 1, 7, 0, -3, -4, 4, -7, 3] -9.9309e-14
[1, -1, -7, -2, 5, 5, -6, 2, 0, -10, 5, 2, 3] -9.5171e-15
[3, -2, -7, -9, 6, 6, 3, 9, 1, 8, -15, -4, 0] 6.8069e-16
[-1, 13, -5, -7, -3, -3, -13, 3, 0, -1, 6, -3, 12] -7.1895e-17
[-2, 3, -2, 2, -15, 16, 4, -7, 11, -15, 0, 9, -4] 8.1931e-18
[2, 0, -9, -11, -5, -11, 21, 9, -9, -4, -1, -4, 13] 5.6466e-19
[6, -9, 0, 9, 9, -2, -4, -22, 4, -7, 0, 5, 11] 4.6712e-19
[1, -27, 22, -14, -2, 0, 0, -27, -3, -5, 18, 10, 9] -1.0084e-20
[1, 41, -2, 5, -42, 6, -2, 13, 5, 3, -5, 7, -9] -1.3284e-21
[4, -5, 8, -8, 6, -25, -38, -16, 24, 13, -10, 10, 24] -8.5139e-23
[4, -5, 8, -8, 6, -25, -38, -16, 24, 13, -10, 10, 24] -8.5139e-23
[-43, -2, 4, 9, 19, -26, 92, -30, -6, -24, 11, -4, -18] -4.8807e-24
[8, 38, -4, 34, -31, 60, -75, 31, 44, -32, -1, -43, 17] 2.7073e-25
[48, -31, 21, -27, 34, -23, -29, 41, -50, -65, 33, 20, 40] 5.2061e-26
[-41, 8, 67, -84, 7, -22, -58, -35, 17, 58, -18, 13, 40] -7.9680e-27
[20, 15, 50, -1, 48, 72, -67, -96, 75, 48, -38, -126, 68] 2.7161e-28
[26, 20, -35, 16, -1, 75, -13, 2, -128, -100, 130, 46, -13] -3.3314e-29
[-26, -20, 35, -16, 1, -75, 13, -2, 128, 100, -130, -46, 13] 3.3314e-29
[137, -26, 127, 45, -14, -73, -66, -166, 71, 76, 122, -154, 53] -1.4227e-31
\end{verbatim} \end{small} We prepend the relation $[1, 0, \ldots]$ for an initial reduction by $\log(2)$, and we can eliminate the duplicate entries. The phase (2) reduction in Algorithm~\ref{alg:linred} with $x = \sqrt{2}-1$ now yields the relation $$[-274, -414, -187, -314, -211, 651, -392, 463, -36, -369, -231, 634, 0]$$ or $$\exp(x) \approx \frac{2^{c_1} v}{w} = \frac{13^{651} \cdot 19^{463} \cdot 37^{634}}{2^{274} \cdot 3^{414} \cdot 5^{187} \cdot 7^{314} \cdot 11^{211} \cdot 17^{392} \cdot 23^{36} \cdot 29^{369} \cdot 31^{231}}$$ where the numerator and denominator have 7679 and 7678 bits, comfortably smaller than $B$.
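This relation is straightforward to check independently; for instance, the following Python/mpmath snippet (the exponent vector is copied from the output above) recovers the size of the reduced argument:

\begin{small}
\begin{verbatim}
from mpmath import mp, mpf, sqrt, log, fsum
mp.dps = 50
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]
c = [-274, -414, -187, -314, -211, 651, -392, 463,
     -36, -369, -231, 634, 0]
t = sqrt(mpf(2)) - 1 - fsum(ci*log(mpf(p)) for ci, p in zip(c, primes))
print(t)   # approximately -1.57e-32
\end{verbatim}
\end{small}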
We compute the reduced argument $t = x - \log(2^{c_1} v/w) \approx -1.57 \cdot 10^{-32}$ by subtracting a linear combination of precomputed logarithms. Now taking 148 terms of the Taylor series for $\sinh(t)$ yields an error smaller than $10^{-10000}$. Evaluating this Taylor series using rectangular splitting costs roughly $2 \sqrt{148} \approx 24$ full 10000-digit multiplications, and this makes up the bulk of the time in the $\exp(x)$ evaluation. For comparison, computing $\exp(x)$ using \eqref{eq:expred} without precomputation, it is optimal to perform $r \approx 20$ squarings after which we need 555 terms of the sinh series, for a cost of $r + 2 \sqrt{555} \approx 67$ multiplications.\footnote{This estimate is not completely accurate because a squaring is somewhat cheaper than a multiplication (theoretically requiring 2/3 as much work). The same remark also concerns series evaluation, where some operations are squarings. We also mention that computing $\exp(x/2^r)$ with the bit-burst algorithm might be faster than using the sinh series at this level of precision, though probably not by much; we use the sinh series here for the purposes of illustration since the analysis is simpler.} Alternatively, computing $\log(x)$ using the AGM requires 25 iterations, where each iteration $a_{n+1}, b_{n+1} = (a_n+b_n)/2, \sqrt{a_n b_n}$ costs at least as much as two multiplications. Counting arithmetic operations alone, we can thus expect Algorithm~\ref{alg:exp} to be at least twice as fast as either method in this example. As we will see in section~\ref{sect:implementation}, this back-of-the-envelope estimate is quite accurate. \subsection{Trigonometric functions} We can compute the real trigonometric functions via the exponential function of a pure imaginary argument, using Gaussian primes $a + bi \in \mathbb{Z}[i]$ for reduction. Enumerated in order of norm $a^2+b^2$, the nonreal Gaussian primes are \begin{equation} \label{eq:gaussprimes} 1+i, \, 2+i, \, 3+2i, \, 4+i, \, 5+2i, \, 6+i, \, 5+4i, \, 7+2i, \, 6+5i, \ldots \end{equation} where we have discarded entries that are equivalent under conjugation, negation or transposition of real and imaginary parts (we choose here, arbitrarily, the representatives in the first quadrant and with $a \ge b$). The role of the logarithms $\log(p)$ is now assumed by the \emph{irreducible angles} \begin{equation} \alpha = \frac{1}{i} \left[ \log(a + bi) - \log(a - bi) \right] = 2 \operatorname{atan}\!\left(\frac{b}{a}\right) \end{equation} which define rotations by $e^{i\alpha} = (a+bi)/(a-bi)$ on the unit circle. We have the argument reduction formula \begin{equation} \cos(x) + i \sin(x) = \exp(i x) = \exp(i (x - c \alpha)) \frac{(a+bi)^c}{(a-bi)^c}, \quad c \in \ensuremath{\mathbb{Z}} \end{equation} which can be iterated over a combination of Gaussian primes. Algorithm~\ref{alg:expi} computes $\cos(x)$ and $\sin(x)$ together using this method. \begin{algorithm} \caption{Computation of $\cos(x) + i \sin(x) = \exp(ix)$ for $x \in \mathbb{R}$ to $B$-bit precision using argument reduction by precomputed irreducible angles.} \label{alg:expi} \begin{enumerate}[leftmargin=0.65cm] \item Precomputation (independent of $x$): select a set of Gaussian prime numbers $Q = \{a_1 + b_1 i, \ldots, a_n + b_n i\}$ from \eqref{eq:gaussprimes} with $a_1+b_1 i = 1+i$. Compute $2 \operatorname{atan}(b_1/a_1), \ldots, 2 \operatorname{atan}(b_n/a_n)$ to $B$-bit precision.
\item Using Algorithm~\ref{alg:linred}, find an integer relation $x \approx c_1 2 \operatorname{atan}(b_1/a_1) + \ldots + c_n 2 \operatorname{atan}(b_n/a_n)$, attempting to make the error as small as possible subject to $\| c_1, \ldots, c_n \|_Q \le B$. This step can use low precision (about $r$ bits where $2^{-r}$ is the target reduction, in practice no more than a few machine words). \item Compute the power product \begin{equation} \frac{v}{w} = \frac{(a_2+b_2 i)^{c_2} \cdots (a_n+b_n i)^{c_n}}{(a_2-b_2 i)^{c_2} \cdots (a_n-b_n i)^{c_n}} \in \mathbb{Q}(i) \label{eq:gaussprod} \end{equation} as an exact fraction, using binary splitting to recursively split the set of primes in half and using binary exponentiation to compute the individual powers. \item Calculate $t = x - (c_1 2 \operatorname{atan}(b_1/a_1) + \ldots + c_n 2 \operatorname{atan}(b_n/a_n))$ using the precomputed arctangents. \item Compute $u = \exp(i t)$ using Taylor series (depending on $B$, either using rectangular splitting for the sin series or using the bit-burst decomposition $t = t_1 + t_2 + \ldots$ with binary splitting). \item Return $i^{c_1} u v / w$. \end{enumerate} \end{algorithm} \subsubsection*{Remarks} Here, a suitable norm is \begin{equation} \nu = \| c_1, \ldots, c_n \|_Q = \sum_{\substack{j=1 \\ p_j \ne 1+i}}^n |c_j| \log_2(a_j^2+b_j^2). \end{equation} The special prime 2 in the argument reduction for the real exponential is here replaced by the Gaussian prime $1+i$, for which \begin{equation} \frac{(1+i)^c}{(1-i)^c} = i^c \end{equation} can be evaluated in constant time; the angle reduction corresponds to removal of multiples of $\pi/2$. We only need to compute the factors in the numerator of the right-hand side of \eqref{eq:gaussprod} since the remaining product can be obtained via complex conjugation. As in the real case, all factors are coprime so we can multiply numerators and denominators using arithmetic in $\ensuremath{\mathbb{Z}}[i]$ without the need for GCDs. We can save a marginal amount of work (essentially in the last division) if we want either the sine or the cosine alone, or if we want $\tan(x)$. \subsection{Inverse functions} The formulas above can be transposed to compute the inverse functions. For example, \begin{equation} \log(x) = \log\left(\frac{x}{p_1^{c_1} \cdots p_n^{c_n}}\right) + (c_1 \log(p_1) + \ldots + c_n \log(p_n)). \end{equation} For the complex logarithm or arctangent, we need to be careful about selecting the correct branches. As an alternative, we recall the standard method of implementing the inverse functions using Newton iteration, starting from a low-precision approximation obtained with any other algorithm. The constant-factor overhead of Newton iteration can be reduced with an $m$-th order method derived from the addition formula for the exponential function~\cite[section 32.1]{arndt2010matters}. If $y = \log(x) + \varepsilon$, then \begin{equation} \log(x) = y + \log(1+\delta), \quad \delta = x \exp(-y) - 1. \label{eq:logy} \end{equation} We first compute $y \approx \log(x)$ at precision $B/m$ (calling the same algorithm recursively until we hit the basecase range) so that the unknown error $\varepsilon$ is $O(2^{-B/m})$. Then, we evaluate \eqref{eq:logy} at precision $B$ using the Taylor series for $\log(1+\delta)$ truncated to order $O(\delta^m)$. This gives us $\log(x)$ with error $O(2^{-B})$.
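As an illustration, here is a minimal Python/mpmath sketch of this $m$-th order iteration for the real logarithm (our own code: mpmath's \texttt{log} serves as an arbitrary basecase algorithm, its \texttt{exp} stands in for Algorithm~\ref{alg:exp}, and the thresholds and guard bits are ad hoc):

\begin{small}
\begin{verbatim}
from mpmath import mp, mpf, exp, log

def log_via_exp(x, prec, m=8, basecase=256):
    if prec <= basecase:
        mp.prec = prec + 10
        return log(mpf(x))               # basecase: any algorithm
    y = log_via_exp(x, prec // m + 10, m, basecase)  # error O(2^(-prec/m))
    mp.prec = prec + 20
    delta = mpf(x)*exp(-y) - 1           # delta = exp(-eps) - 1, tiny
    corr, term = mpf(0), mpf(1)
    for k in range(1, m + 1):            # log(1+delta), truncated at order m
        term *= delta
        corr += term/k if k % 2 else -term/k
    return y + corr
\end{verbatim}
\end{small}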
The inverse trigonometric functions can be computed analogously via the arc\-tangent: if $y = \operatorname{atan}(x) + \varepsilon$, then \begin{equation} \operatorname{atan}(x) = y + \operatorname{atan}(\delta), \quad \delta = \frac{x-t}{1 + t x} = \frac{c x - s}{c + s x}, \quad t = \tan(y) = \frac{s}{c} = \frac{\sin(y)}{\cos(y)}. \label{eq:atany} \end{equation} With a suitably chosen $m$ (between 5 and 15, say) and rectangular splitting for the short Taylor series evaluation, the inverse functions are perhaps 10\%-30\% more expensive than the forward functions with this method. \section{Precomputation of logarithms and arctangents} \label{sect:machin} The precomputation of logarithms and arctangents of small integer or rational arguments is best done using binary splitting evaluation of trigonometric and hyperbolic arctangent series \begin{equation} \operatorname{atan}\!\left(\frac{1}{x}\right) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)} \frac{1}{x^{2k+1}}, \quad \operatorname{atanh}\!\left(\frac{1}{x}\right) = \sum_{k=0}^{\infty} \frac{1}{(2k+1)} \frac{1}{x^{2k+1}}. \label{eq:atanseries} \end{equation} We want the arguments $x$ in \eqref{eq:atanseries} to be integers, and ideally large integers so that the series converge rapidly. It is not a good idea to use the primes~$p$ or Gaussian integer tangents $b/a$ directly as input since convergence will be slow; it is better to recycle values and evaluate differences of arguments (Briggs' method). For example, if we have already computed $\log(2)$, we can compute logarithms of successive primes using~\cite{gourdon2004logarithmic} \begin{equation} \log(p) = \log(2) + \frac{1}{2} \left( \log\!\left(\frac{p-1}{2}\right) + \log\!\left(\frac{p+1}{2}\right)\right) + \operatorname{atanh}\!\left(\frac{1}{2p^2-1}\right). \label{eq:logpdiff} \end{equation} Methods to reduce arctangents to sums of more rapidly convergent arctangent series have been studied by Gauss, Lehmer, Todd and others~\cite{lehmer1938arccotangent,todd1949problem,wetherfield1996enhancement}. The prototype is Machin's formula \begin{equation} \frac{\pi}{4} = \operatorname{atan}(1) = 4 \operatorname{atan}\!\left(\frac{1}{5}\right) - \operatorname{atan}\!\left(\frac{1}{239}\right). \label{eq:machin} \end{equation} \subsection{Simultaneous Machin-like formulas} If we have the option of computing the set of values $\log(p_1), \ldots, \log(p_n)$ or $\operatorname{atan}(b_1/a_1), \ldots, \operatorname{atan}(b_n/a_n)$ in any order (not necessarily one by one), then we can try to look for optimized simultaneous Machin-like formulas~\cite{arndt2010matters}. Given the first $n$ primes, we will thus look for a set of integers $X = \{x_1, x_2, \ldots, x_n\}$, as large as possible, such that there is an integer relation \begin{equation} \begin{pmatrix} \log(p_1) \\ \vdots \\ \log(p_n) \end{pmatrix} = M \begin{pmatrix} 2 \operatorname{atanh}(1/x_1) \\ \vdots \\ 2 \operatorname{atanh}(1/x_n) \end{pmatrix}, \quad M \in \ensuremath{\mathbb{Q}}_{n \times n} \label{eq:intrel1} \end{equation} or similarly (with different $X$ and $M$) for Gaussian primes \begin{equation} \begin{pmatrix} \operatorname{atan}(b_1/a_1) \\ \vdots \\ \operatorname{atan}(b_n/a_n) \end{pmatrix} = M \begin{pmatrix} \operatorname{atan}(1/x_1) \\ \vdots \\ \operatorname{atan}(1/x_n) \end{pmatrix}, \quad M \in \ensuremath{\mathbb{Q}}_{n \times n}. 
\label{eq:intrel2} \end{equation} For example, the primes $P = \{2, 3\}$ admit the simultaneous Machin-like formulas $\log(2) = 4 \operatorname{atanh}(1/7) + 2 \operatorname{atanh}(1/17)$, $\log(3) = 6 \operatorname{atanh}(1/7) + 4 \operatorname{atanh}(1/17)$, i.e.\ $$X = \{7, 17\}, \quad M = \small \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix}.$$ The following method to find relations goes back to Gauss, who used it to search for generalizations of Machin's formula. Arndt~\cite[section 32.4]{arndt2010matters} also discusses the application to simultaneous computation of logarithms of several primes. The search space for candidate sets $X$ in \eqref{eq:intrel1} and \eqref{eq:intrel2} is a priori infinite, but it can be narrowed down as follows. Let $P = \{p_1, \ldots, p_n\}$. Since $$2 \operatorname{atanh}(1/x) = \log(x+1) - \log(x-1) = \log\!\left(\frac{x+1}{x-1}\right),$$ we try to write each $p \in P$ as a power-product of $P$-smooth rational numbers of the form $(x+1)/(x-1)$. We will thus look for solutions $X$ of \eqref{eq:intrel1} of the form \begin{equation} X \subseteq Y, \quad Y = \{ x: x^2 -1 \text{ is } P\text{-smooth}\}, \label{eq:setY} \end{equation} i.e.\ such that both $x+1$ and $x-1$ are $P$-smooth. Similarly, we look for solutions of \eqref{eq:intrel2} of the form \begin{equation} X \subseteq Z, \quad Z = \{ x: x^2 + 1 \text{ is } Q\text{-smooth}\} \label{eq:setZ} \end{equation} where $Q$ is the set of norms $\{a_1^2+b_1^2, \ldots, a_n^2+b_n^2\}$. It is a nontrivial fact that the sets $Y$ and $Z$ are finite for each fixed set of primes $P$ or $Q$. For the first 25 primes $p < 100$, the set $Y$ has 16223 elements which have been tabulated by Luca and Najman~\cite{Luca2010,Luca2013}; the largest element\footnote{Knowing this upper bound, the Luca-Najman table can be reproduced with a brute force enumeration of 97-smooth numbers $x-1$ with $x \le 19182937474703818751$, during which one saves the values $x$ for which trial division shows that $x+1$ is 97-smooth. This computation takes two hours on a 2022-era laptop. Reproducing the table $Z$ takes one minute.} is $x = 19182937474703818751$ with $$x - 1 = 2 \cdot 5^{5} \cdot 11 \cdot 19 \cdot 23^{2} \cdot 29 \cdot 59^{4} \cdot 79,$$ $$x + 1 = 2^{22} \cdot 3 \cdot 17^{3} \cdot 37 \cdot 41 \cdot 43 \cdot 67 \cdot 71.$$ For the first 22 Gaussian primes, having norms $a^2+b^2 < 100$, the set $Z$ has 811 elements which have been tabulated by Najman~\cite{najman2010smooth}; the largest element is $x = 69971515635443$ with $$x^2 + 1 = 2 \cdot 5^{5} \cdot 17 \cdot 37 \cdot 41^{2} \cdot 53^{2} \cdot 89 \cdot 97^{3} \cdot 137^{2} \cdot 173.$$ Given a candidate superset $Y = \{ y_1, \ldots, y_r \}$ or $Z = \{ z_1, \ldots, z_s \}$, we can find a formula $X$ with large entries using linear algebra: \begin{itemize} \item Let $X = \{\}$, and let $R$ be an initially empty ($0 \times n$) matrix. \item For $x = y_r, y_{r-1}$, $\ldots$ or $x = z_s, z_{s-1}$, $\ldots$ in order of decreasing magnitude, let $E = (e_1, \ldots, e_n)$ be the vector of exponents in the factorization of the rational number $$(x+1)/(x-1) = p_1^{e_1} \cdots p_n^{e_n},$$ respectively, $$x^2+1 = (a_1^2+b_1^2)^{e_1} \cdots (a_n^2+b_n^2)^{e_n}.$$ \item If $E$ is linearly independent of the rows of $R$, add $x$ to $X$ and adjoin the row $E$ to the top of $R$; otherwise continue with the next candidate $x$. \item When $R$ has $n$ linearly independent rows, we have found a complete basis~$X$ and the relation matrix is given by $M = R^{-1}$.
\end{itemize} Tables~\ref{tab:logrelations} and~\ref{tab:atanrelations} give the Machin-like formulas found with this method using the exhaustive Luca-Najman tables for $Y$ and $Z$. We list only the set~$X$ since the matrix~$M$ is easy to recover with linear algebra (in fact, we can recover it using LLL without performing any factorization). The corresponding \emph{Lehmer measure} $\mu(X) = \sum_{x \in X} 1 / \log_{10}(|x|)$ gives an estimate of efficiency (lower is better). \subsection{Remarks about the tables} We conjecture that the formulas in Tables~\ref{tab:logrelations} and~\ref{tab:atanrelations} are the best possible (in the Lehmer sense) $n$-term formulas for the respective sets of $n$ primes or Gaussian primes. Apart from the first few entries which are well known, we are not aware of a previous tabulation of this kind. There is an extensive literature about Machin-like formulas for computing $\pi$ alone, but little about computing several arctangents simultaneously. There are some preexisting tables for logarithms, but they are not optimal. Arndt~\cite{arndt2010matters} gives a slightly less efficient formula for the 13 primes up to 41 with $\mu(X) = 1.48450$, which appears to have been chosen subject to the constraint $\max(X) < 2^{32}$. Gourdon and Sebah~\cite{gourdon2004logarithmic} give a much less efficient formula for the first 25 primes derived from \eqref{eq:logpdiff}, with $\mu(X) > 7.45186$. The claim that the formulas in Tables~\ref{tab:logrelations} and~\ref{tab:atanrelations} are optimal comes with several caveats. We can achieve lower Lehmer measures if we add more arctangents. Indeed, the formula for $P = \{2, 3, 5, 7\}$ has a lower Lehmer measure than the formulas for $\{2\}$, $\{2, 3\}$ and $\{2, 3, 5\}$, so we may just as well compute four logarithms even if we only want the first one, two, or three. A more efficient formula for $\log(2)$ alone is the three-term $X = \{26, 4801, 8749\}$ with $\mu(X) = 1.23205$, which however cannot be used to compute $\log(3)$, $\log(5)$ or $\log(7)$ (the set $X^2-1$ is 7-smooth but does not yield a relation for either 3, 5 or 7). The 1-term formula for $\operatorname{atan}(1) = \pi/4$ has infinite Lehmer measure while Machin's formula \eqref{eq:machin}, which follows from the 13-smooth factorizations $5^2 + 1 = 2 \cdot 13$ and $239^2+1 = 2 \cdot 13^4$, achieves $\mu(X) = 1.85112$. In practice $\mu(X)$ is not necessarily an accurate measure of efficiency: it overestimates the benefits of increasing $x$, essentially because the running time in binary splitting tends to be dominated by the top-level multiplications, which are independent of the number of leaf nodes. It is therefore likely an advantage to keep the number of arctangents close to $n$. A curiosity is that in the logarithm relations, we have $\det(R) = \pm 1$ and therefore $M \in \ensuremath{\mathbb{Z}}^{n \times n}$ for the first 21 sets of primes $P$, but for $P$ containing the primes up to 79, 83, 89 and 97 respectively, the determinants are $-2$, $-6$, $-4$ and $-4$.
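As a quick numerical sanity check of the two-term simultaneous formula for $P = \{2, 3\}$ quoted earlier (Python/mpmath, at 50 digits):

\begin{small}
\begin{verbatim}
from mpmath import mp, mpf, atanh, log
mp.dps = 50
u, v = 2*atanh(mpf(1)/7), 2*atanh(mpf(1)/17)
print(2*u + 1*v - log(2))   # ~ 1e-50 (rounding noise)
print(3*u + 2*v - log(3))   # ~ 1e-50
\end{verbatim}
\end{small}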
\begin{table} \centering \caption{\small $n$-term Machin formulas $\{\operatorname{atanh}(1/x) : x \in X\}$ for simultaneous computation of $\log(p)$ for the first $n$ primes $p \in P$.} \label{tab:logrelations} \tiny \renewcommand{\arraystretch}{1.2} \begin{tabular}{ r | l | >{\raggedright}p{9cm} | l } $n$ & $P$ & $X$ & $\mu(X)$ \\ \hline 1 & 2 & 3 & 2.09590 \\ 2 & 2, 3 & 7, 17 & 1.99601 \\ 3 & 2, 3, 5 & 31, 49, 161 & 1.71531 \\ 4 & 2 \ldots 7 & 251, 449, 4801, 8749 & 1.31908 \\ 5 & 2 \ldots 11 & 351, 1079, 4801, 8749, 19601 & 1.48088 \\ 6 & 2 \ldots 13 & 1574, 4801, 8749, 13311, 21295, 246401 & 1.49710 \\ 7 & 2 \ldots 17 & 8749, 21295, 24751, 28799, 74359, 388961, 672281 & 1.49235 \\ 8 & 2 \ldots 19 & 57799, 74359, 87361, 388961, 672281, 1419263, 11819521, 23718421 & 1.40768 \\ 9 & 2 \ldots 23 & 143749, 672281, 1419263, 1447874, 4046849, 8193151, 10285001, 11819521, 23718421 & 1.40594 \\ 10 & 2 \ldots 29 & 1419263, 1447874, 11819521, 12901780, 16537599, 23718421, 26578124, 36171409, 192119201, 354365441 & 1.38570 \\ 11 & 2 \ldots 31 & 1447874, 11819521, 12901780, 16537599, 23718421, 36171409, 287080366, 354365441, 362074049, 740512499, 3222617399 & 1.42073 \\ 12 & 2 \ldots 37 & 36171409, 42772001, 55989361, 100962049, 143687501, 287080366, 362074049, 617831551, 740512499, 3222617399, 6926399999, 9447152318 & 1.40854 \\ 13 & 2 \ldots 41 & 51744295, 170918749, 265326335, 287080366, 362074049, 587270881, 831409151, 2470954914, 3222617399, 6926399999, 9447152318, 90211378321, 127855050751 & 1.42585 \\ 14 & 2 \ldots 43 & 287080366, 975061723, 980291467, 1181631186, 1317662501, 2470954914, 3222617399, 6926399999, 9447152318, 22429958849, 36368505601, 90211378321, 127855050751, 842277599279 & 1.43055 \\ 15 & 2 \ldots 47 & 2470954914, 2473686799, 3222617399, 4768304960, 6926399999, 9447152318, 22429958849, 36974504449, 74120970241, 90211378321, 127855050751, 384918250001, 569165414399, 842277599279, 2218993446251 & 1.42407 \\ 16 & 2 \ldots 53 & 9943658495, 15913962107, 19030755899, 22429958849, 22623739319, 36974504449, 90211378321, 123679505951, 127855050751, 187753824257, 384918250001, 569165414399, 842277599279, 1068652740673, 2218993446251, 2907159732049 & 1.44292 \\ 17 & 2 \ldots 59 & 22429958849, 56136455649, 92736533231, 122187528126, 123679505951, 127855050751, 134500454243, 187753824257, 384918250001, 569165414399, 842277599279, 1829589379201, 2218993446251, 2569459276099, 2907159732049, 22518692773919, 41257182408961 & 1.45670 \\ 18 & 2 \ldots 61 & 123679505951, 210531506249, 367668121249, 384918250001, 711571138431, 842277599279, 1191139875199, 1233008445689, 1829589379201, 2218993446251, 2569459276099, 2907159732049, 3706030044289, 7233275252995, 9164582675249, 22518692773919, 41257182408961, 63774701665793 & 1.46360 \\ 19 & 2 \ldots 67 & 664954699135, 842277599279, 932784765626, 1191139875199, 1233008445689, 1726341174999, 1829589379201, 2198699269535, 2218993446251, 2569459276099, 2907159732049, 3706030044289, 7233275252995, 8152552404881, 9164582675249, 22518692773919, 25640240468751, 41257182408961, 63774701665793 & 1.51088 \\ 20 & 2 \ldots 71 & 932784765626, 1986251708497, 2200009162625, 2218993446251, 2907159732049, 5175027061249, 7233275252995, 8152552404881, 8949772845287, 9164582675249, 12066279000049, 13055714577751, 22518692773919, 25640240468751, 31041668486401, 41257182408961, 63774701665793, 115445619421397, 121336489966251, 238178082107393 & 1.52917 \\ 21 & 2 \ldots 73 & 7233275252995, 8152552404881, 8949772845287, 9164582675249, 10644673332721, 13055714577751, 
21691443063179, 22518692773919, 25640240468751, 25729909301249, 41257182408961, 54372220771987, 63774701665793, 103901723427151, 106078311729181, 114060765404951, 115445619421397, 121336489966251, 238178082107393, 1796745215731101, 4573663454608289 & 1.53515 \\ 22 & 2 \ldots 79 & 38879778893521, 41257182408961, 44299089391103, 62678512919879, 63774701665793, 69319674756179, 70937717129551, 103901723427151, 106078311729181, 114060765404951, 115445619421397, 117774370786951, 121336489966251, 217172824950401, 238178082107393, 259476225058051, 386624124661501, 478877529936961, 1796745215731101, 2767427997467797, 4573663454608289, 19182937474703818751 & 1.52802 \\ 23 & 2 \ldots 83 & 103901723427151, 112877019076249, 114060765404951, 115445619421397, 117774370786951, 121336489966251, 134543112911873, 148569359956291, 201842423186689, 206315395261249, 217172824950401, 238178082107393, 259476225058051, 386624124661501, 473599589105798, 478877529936961, 1796745215731101, 1814660314218751, 2767427997467797, 4573663454608289, 17431549081705001, 34903240221563713, 19182937474703818751 & 1.55501 \\ 24 & 2 \ldots 89 & 134543112911873, 148569359956291, 166019820559361, 201842423186689, 206315395261249, 211089142289024, 217172824950401, 238178082107393, 259476225058051, 330190746672799, 386624124661501, 473599589105798, 478877529936961, 1796745215731101, 1814660314218751, 2767427997467797, 2838712971108351, 4573663454608289, 9747977591754401, 11305332448031249, 17431549081705001, 34903240221563713, 332110803172167361, 19182937474703818751 & 1.58381 \\ 25 & 2 \ldots 97 & 373632043520429, 386624124661501, 473599589105798, 478877529936961, 523367485875499, 543267330048757, 666173153712219, 1433006524150291, 1447605165402271, 1744315135589377, 1796745215731101, 1814660314218751, 2236100361188849, 2767427997467797, 2838712971108351, 3729784979457601, 4573663454608289, 9747977591754401, 11305332448031249, 17431549081705001, 21866103101518721, 34903240221563713, 99913980938200001, 332110803172167361, 19182937474703818751 & 1.60385 \\ \end{tabular} \end{table} \begin{table} \centering \caption{\small $n$-term Machin formulas $\{\operatorname{atan}(1/x) : x \in X\}$ for simultaneous computation of the irreducible angles $\operatorname{atan}(b/a)$ for the first $n$ nonreal Gaussian primes $a+bi$, having norms $a^2+b^2 \in Q$.} \label{tab:atanrelations} \tiny \renewcommand{\arraystretch}{1.2} \begin{tabular}{ r | l | >{\raggedright}p{9cm} | l } $n$ & $Q$ & $X$ & $\mu(X)$ \\ \hline 1 & 2 & 1 & $\infty$ \\ 2 & 2, 5 & 3, 7 & 3.27920 \\ 3 & 2, 5, 13 & 18, 57, 239 & 1.78661 \\ 4 & 2 \ldots 17 & 38, 57, 239, 268 & 2.03480 \\ 5 & 2 \ldots 29 & 38, 157, 239, 268, 307 & 2.32275 \\ 6 & 2 \ldots 37 & 239, 268, 307, 327, 882, 18543 & 2.20584 \\ 7 & 2 \ldots 41 & 268, 378, 829, 882, 993, 2943, 18543 & 2.33820 \\ 8 & 2 \ldots 53 & 931, 1772, 2943, 6118, 34208, 44179, 85353, 485298 & 2.01152 \\ 9 & 2 \ldots 61 & 5257, 9466, 12943, 34208, 44179, 85353, 114669, 330182, 485298 & 1.95679 \\ 10 & 2 \ldots 73 & 9466, 34208, 44179, 48737, 72662, 85353, 114669, 330182, 478707, 485298 & 2.03991 \\ 11 & 2 \ldots 89 & 51387, 72662, 85353, 99557, 114669, 157318, 260359, 330182, 478707, 485298, 24208144 & 2.06413 \\ 12 & 2 \ldots 97 & 157318, 330182, 390112, 478707, 485298, 617427, 1984933, 2343692, 3449051, 6225244, 22709274, 24208144 & 1.96439 \\ 13 & 2 \ldots 101 & 683982, 1984933, 2343692, 2809305, 3014557, 6225244, 6367252, 18975991, 22709274, 24208144, 193788912, 201229582, 2189376182 & 1.84765 \\ 14 & 2 \ldots 109 & 2298668, 2343692, 
2809305, 3014557, 6225244, 6367252, 18975991, 22709274, 24208144, 168623905, 193788912, 201229582, 284862638, 2189376182 & 1.91451 \\ 15 & 2 \ldots 113 & 2343692, 2809305, 3801448, 6225244, 6367252, 7691443, 18975991, 22709274, 24208144, 168623905, 193788912, 201229582, 284862638, 599832943, 2189376182 & 2.01409 \\ 16 & 2 \ldots 137 & 4079486, 6367252, 7691443, 8296072, 9639557, 10292025, 18975991, 19696179, 22709274, 24208144, 168623905, 193788912, 201229582, 284862638, 599832943, 2189376182 & 2.12155 \\ 17 & 2 \ldots 149 & 9689961, 10292025, 13850847, 18975991, 19696179, 22709274, 24208144, 32944452, 58305593, 60033932, 168623905, 193788912, 201229582, 284862638, 314198789, 599832943, 2189376182 & 2.18157 \\ 18 & 2 \ldots 157 & 22709274, 32944452, 58305593, 60033932, 127832882, 160007778, 168623905, 193788912, 201229582, 284862638, 299252491, 314198789, 361632045, 599832943, 851387893, 2189376182, 2701984943, 3558066693 & 2.14866 \\ 19 & 2 \ldots 173 & 127832882, 160007778, 168623905, 193788912, 201229582, 299252491, 314198789, 327012132, 361632045, 599832943, 851387893, 1117839407, 2189376182, 2701984943, 3558066693, 12139595709, 12957904393, 120563046313, 69971515635443 & 2.09258 \\ 20 & 2 \ldots 181 & 299252491, 314198789, 327012132, 361632045, 599832943, 851387893, 1112115023, 1117839407, 1892369318, 2189376182, 2701984943, 2971354082, 3558066693, 5271470807, 12139595709, 12957904393, 14033378718, 18986886768, 120563046313, 69971515635443 & 2.10729 \\ 21 & 2 \ldots 193 & 1112115023, 1117839407, 1479406293, 1696770582, 1892369318, 2112819717, 2189376182, 2701984943, 2971354082, 3558066693, 4038832337, 5271470807, 7959681215, 8193535810, 12139595709, 12957904393, 14033378718, 18710140581, 18986886768, 120563046313, 69971515635443 & 2.13939 \\ 22 & 2 \ldots 197 & 1479406293, 1892369318, 2112819717, 2189376182, 2701984943, 2971354082, 3558066693, 4038832337, 5271470807, 6829998457, 7959681215, 8193535810, 12139595709, 12185104420, 12957904393, 14033378718, 18710140581, 18986886768, 20746901917, 104279454193, 120563046313, 69971515635443 & 2.19850 \\ \end{tabular} \end{table} \section{Implementation results} \label{sect:implementation} The algorithms have been implemented in Arb~\cite{Joh2017} version 2.23. The following results were obtained with Arb~2.23 linked against GMP~6.2.1~\cite{GMP}, MPFR~4.1.0~\cite{Fousse2007}, and FLINT~2.9~\cite{Hart2010}, running on an AMD Ryzen 7 PRO 5850U (Zen3). \subsection{Default implementations with fixed $n$} Previously, all elementary functions in Arb used Taylor series with precomputed lookup tables up to $B = 4608$ bits. The tables are $m$-partite giving $r$-bit reduction with $r \le 14$ and $m \le 2$, requiring 236 KB of fixed storage~\cite{Johansson2015elementary}. At higher precision, the previous implementations used argument reduction based on repeated argument-halving (requiring squaring or square roots) together with rectangular splitting or bit-burst evaluation of Taylor series, with the exception of log which wrapped the AGM-based logarithm in MPFR. To the author's knowledge, these were the fastest arbitrary-precision implementations of elementary functions available in public software libraries prior to this work. In Arb 2.23, all the elementary functions were rewritten to use the new algorithm with the fixed number $n = 13$ of primes, starting from a precision between $B = 2240$ bits (for exp) and $B = 3400$ bits (for atan) up to $B = 4000000$ bits (just over one million digits). 
The Newton iterations~\eqref{eq:logy} and \eqref{eq:atany} are used to reduce log and atan to the exponential and trigonometric functions. The $B$-bit precomputations of logarithms and arctangents are done at runtime using the $n = 13$ Machin-like formulas of Table~\ref{tab:logrelations} and Table~\ref{tab:atanrelations}. We compare timings for the old and new implementations in~Table \ref{tab:oldnew13}. \begin{table} \centering \caption{\small Time to compute elementary functions to $D$ decimal digits ($B \approx 3.32 D$) with Arb 2.23. \emph{Old} is the time in seconds with the new algorithm disabled. \emph{New} is the time in seconds with the new algorithm enabled, using the fixed default number $n = 13$ of primes. \emph{First} is the time for a first function call, and \emph{Repeat} is the time for repeated calls (with logarithms and other data already cached). We show average timings for 100 uniformly random input $x \in (0, 2)$.} \label{tab:oldnew13} \scriptsize \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.4} \begin{tabular}{ c c | c c | c c | c c | c c } \multicolumn{2}{c}{} & \multicolumn{2}{c}{$\exp(x)$} & \multicolumn{2}{c}{$\log(x)$} & \multicolumn{2}{c}{$(\cos(x), \sin(x))$} & \multicolumn{2}{c}{$\operatorname{atan}(x)$} \\ \hline $D$ & & First & Repeat & First & Repeat & First & Repeat & First & Repeat \\ \hline 1000 & Old & 2.92e-05 & 2.91e-05 & 0.000145 & 3.69e-05 & 3.49e-05 & 3.49e-05 & 3.52e-05 & 3.52e-05 \\ & New & 0.000182 & 2.04e-05 & 0.000188 & 2.58e-05 & 0.00019 & 2.84e-05 & 3.52e-05 & 3.52e-05 \\ & Speedup & 0.16$\times$ & 1.43$\times$ & 0.77$\times$ & 1.43$\times$ & 0.18$\times$ & 1.23$\times$ & 1.00$\times$ & 1.00$\times$ \\ \hline 2000 & Old & 0.000103 & 0.000101 & 0.000367 & 0.000110 & 0.000217 & 9.92e-05 & 0.000423 & 0.000217 \\ & New & 0.000480 & 4.9e-05 & 0.000500 & 6.07e-05 & 0.000542 & 7.92e-05 & 0.000564 & 9.83e-05 \\ & Speedup & 0.22$\times$ & 2.06$\times$ & 0.73$\times$ & 1.81$\times$ & 0.40$\times$ & 1.25$\times$ & 0.75$\times$ & 2.21$\times$ \\ \hline 4000 & Old & 0.000355 & 0.000353 & 0.00103 & 0.000348 & 0.000511 & 0.000341 & 0.000915 & 0.000660 \\ & New & 0.00107 & 0.000149 & 0.00111 & 0.000187 & 0.00119 & 0.000211 & 0.00124 & 0.000269 \\ & Speedup & 0.33$\times$ & 2.37$\times$ & 0.93$\times$ & 1.86$\times$ & 0.43$\times$ & 1.62$\times$ & 0.74$\times$ & 2.45$\times$ \\ \hline 10000 & Old & 0.00185 & 0.00168 & 0.00439 & 0.00166 & 0.0022 & 0.00177 & 0.00323 & 0.00272 \\ & New & 0.00384 & 0.000826 & 0.00418 & 0.000977 & 0.00417 & 0.000935 & 0.00461 & 0.00122 \\ & Speedup & 0.48$\times$ & 2.03$\times$ & 1.05$\times$ & 1.70$\times$ & 0.53$\times$ & 1.89$\times$ & 0.70$\times$ & 2.23$\times$ \\ \hline 100000 & Old & 0.0541 & 0.0536 & 0.143 & 0.0632 & 0.0880 & 0.0818 & 0.0957 & 0.0896 \\ & New & 0.107 & 0.0354 & 0.114 & 0.0377 & 0.129 & 0.0509 & 0.140 & 0.0586 \\ & Speedup & 0.51$\times$ & 1.52$\times$ & 1.25$\times$ & 1.68$\times$ & 0.68$\times$ & 1.61$\times$ & 0.68$\times$ & 1.53$\times$ \\ \hline 1000000 & Old & 1.10 & 1.09 & 2.84 & 1.36 & 1.66 & 1.61 & 2.02 & 1.97 \\ & New & 2.18 & 0.864 & 2.31 & 0.982 & 2.83 & 1.25 & 3.02 & 1.58 \\ & Speedup & 0.51$\times$ & 1.26$\times$ & 1.23$\times$ & 1.39$\times$ & 0.59$\times$ & 1.29$\times$ & 0.67$\times$ & 1.25$\times$ \\ \end{tabular} \end{table} \subsubsection*{Remarks} The average speedup is around a factor two ($1.3\times$ to $2.4\times$) over a large range of precisions. 
The typical slowdown for a first function call is also roughly a factor two, i.e.\ the precomputation takes about as long as a single extra function call.\footnote{The figures are a bit worse at lower precision due to various overheads which could be avoided.} This is clearly a worthwhile tradeoff for most applications; e.g.\ for a numerical integration $\smash \int_a^b f(x) dx$ where the integrand $f$ will be evaluated many times, we do observe a factor-two speedup in the relevant precision ranges. The relatively large speedup for atan is explained by the fact that the traditional argument reduction method involves repeated square roots, which are a significant constant factor more expensive than the squarings for exp. The relatively small speedup for sin and cos is explained by the fact that the traditional argument reduction method only requires real squarings (via the half-angle formula for cos), while the new method uses complex arithmetic. Previously, the AGM-based logarithm was neck and neck with the Taylor series for exp at any precision (these algorithms were therefore roughly interchangeable if one were to use Newton iteration to compute one function from the other). With the new algorithm, Taylor series have a clear lead. The default parameter $n = 13$ was chosen to optimize performance around a few thousand digits, this range being more important for typical applications than millions of digits. As shown below, it is possible to achieve a larger speedup at very high precision by choosing a larger $n$. \subsection{Precomputation of reduction tables} Table~\ref{tab:statictime} shows sample results for the precomputation phase of Algorithm~\ref{alg:linred} to generate tables of approximate relations over $n$ logarithms or arctangents. \begin{table} \centering \caption{\small Static precomputation of reduction tables: phase (1) of Algorithm~\ref{alg:linred}.} \label{tab:statictime} \scriptsize \renewcommand{\arraystretch}{1.4} \begin{tabular}{ c c c c c c } $\alpha_1, \ldots, \alpha_n$ & $n$ & Smallest $\varepsilon_i$ & Max $r$ & Data & Time \\ \hline Logarithms & 2 & $\varepsilon_{7} = +1.82 \cdot 10^{-5}$ & 15 & 0.2 KiB & 0.0000514 s \\ & 4 & $\varepsilon_{11} = -1.46 \cdot 10^{-14}$ & 45 & 0.3 KiB & 0.000228 s \\ & 8 & $\varepsilon_{33} = +7.66 \cdot 10^{-33}$ & 106 & 1.1 KiB & 0.00249 s \\ & 16 & $\varepsilon_{67} = +5.18 \cdot 10^{-71}$ & 233 & 3.2 KiB & 0.0447 s \\ & 32 & $\varepsilon_{144} = -1.51 \cdot 10^{-141}$ & 467 & 11 KiB & 1.24 s \\ & 64 & $\varepsilon_{268} = -4.42 \cdot 10^{-266}$ & 881 & 38 KiB & 34.2 s \\ \hline Arctangents & 2 & $\varepsilon_{7} = -4.75 \cdot 10^{-5}$ & 14 & 0.2 KiB & 0.0000472 s \\ & 4 & $\varepsilon_{14} = -2.95 \cdot 10^{-15}$ & 48 & 0.4 KiB & 0.000248 s \\ & 8 & $\varepsilon_{33} = +6.43 \cdot 10^{-33}$ & 106 & 1.1 KiB & 0.00256 s \\ & 16 & $\varepsilon_{64} = +1.77 \cdot 10^{-71}$ & 235 & 3.0 KiB & 0.0448 s \\ & 32 & $\varepsilon_{143} = +1.70 \cdot 10^{-140}$ & 464 & 11 KiB & 1.22 s \\ & 64 & $\varepsilon_{270} = +1.42 \cdot 10^{-267}$ & 886 & 38 KiB & 34.6 s \\ \end{tabular} \end{table} Here we choose the convergence factor $C = 10$ (each approximate relation $\varepsilon_i$ adds one decimal) and we terminate before the first relation with a coefficient $|d_{i,j}| \ge 2^{15}$. This bound was chosen for convenience of storing table entries in 16-bit integers; it is also a reasonable cutoff since larger exponents will pay off only for multi-million $B$ (as we will see below).
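To reproduce the flavor of this precomputation without setting up LLL, one can search for approximate relations one at a time with PSLQ. The following Python sketch (ours, using mpmath; the implementation itself uses FLINT's LLL and generates a whole table of relations $\varepsilon_i$) looks for small integer coefficients $d_j$ making $\sum_j d_j \log p_j$ tiny, i.e.\ for a smooth fraction close to 1:

\begin{verbatim}
from mpmath import mp, log, pslq

mp.dps = 30
primes = [2, 3, 5, 7]
logs = [log(p) for p in primes]

# Small integer coefficients d with |sum(d_j*log(p_j))| < 1e-3, i.e. a
# 7-smooth rational prod(p_j**d_j) close to 1; pslq returns None if no
# relation is found within the given bounds.
d = pslq(logs, tol=mp.mpf('1e-3'), maxcoeff=100)
print(d)  # one possible output: [-5, -1, -2, 4], i.e. 7^4/(2^5*3*5^2) = 2401/2400
print(sum(c * l for c, l in zip(d, logs)))  # ~4.2e-4 for the relation above
\end{verbatim}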
We test the method up to $n = 64$, where the smallest tabulated $\varepsilon_i$ corresponds to an argument reduction of more than $r = 800$ bits.\footnote{Part of the implementation uses machine-precision floating-point numbers with a limited exponent range, making $|\varepsilon_i| < 2^{-1024} \approx 10^{-300}$ inaccessible. Like the 16-bit limit, this is again a trivial technical restriction which we do not bother to lift since there would be a pay-off only for multi-million $B$.} Since the tables are small (a few KiB) and independent of $B$, they can be precomputed once and for all, so the timings (here essentially just exercising FLINT's LLL implementation) are not really relevant. Indeed, in the previously discussed default implementation of elementary functions, the $n = 13$ tables are stored as static arrays written down in the source code. However, the timings are reasonable enough that tables could be generated at runtime in applications that will perform a large number of function evaluations. \begin{table} \centering \caption{\small Computation of the exponential function and the trigonometric functions. The argument is taken to be $x = \sqrt{2}-1$. \emph{Precomp} is the time (in seconds) to precompute $n$ logarithms or arctangents for use at $B$-bit precision. The cached logarithms or arctangents take up \emph{Data} space. \emph{Time} is the time to evaluate the function once this data has been precomputed. The argument is reduced to size $2^{-r}$.} \label{tab:functime} \scriptsize \renewcommand{\arraystretch}{1.2} \begin{tabular}{ c c c | l c l | l c l } \multicolumn{3}{c}{} & \multicolumn{3}{c}{$\exp(x)$} & \multicolumn{3}{c}{$\cos(x) + i \sin(x) = \exp(ix)$} \\ $B$ & $n$ & Data & Precomp & $r$ & Time & Precomp & $r$ & Time \\ \hline 3333 & 0 & & & & 2.89e-05 & & & 3.56e-05 \\ & 2 & 0.8 KiB & 5.33e-05 & 11 & 2.88e-05 & 6.34e-05 & 11 & 3.49e-05 \\ & 4 & 1.6 KiB & 5.42e-05 & 15 & 2.71e-05 & 7.35e-05 & 22 & 2.74e-05 \\ & 8 & 3.3 KiB & 7.61e-05 & 32 & 2.06e-05 & 9.65e-05 & 33 & 2.72e-05 \\ & 16 & 6.5 KiB & 0.000131 & 73 & 1.78e-05 & 0.000136 & 37 & 2.89e-05 \\ & 32 & 13.0 KiB & 0.000268 & 60 & 1.97e-05 & 0.000411 & 38 & 2.92e-05 \\ & 64 & 26.0 KiB & 0.000605 & 60 & 2.2e-05 & 0.00104 & 38 & 3.15e-05 \\ \hline 10000 & 0 & & & & 0.000202 & & & 0.000207 \\ & 2 & 2.4 KiB & 0.000238 & 11 & 0.000183 & 0.000281 & 13 & 0.000209 \\ & 4 & 4.9 KiB & 0.000240 & 27 & 0.000137 & 0.000333 & 30 & 0.000159 \\ & 8 & 9.8 KiB & 0.000335 & 52 & 0.000106 & 0.000412 & 41 & 0.000144 \\ & 16 & 19.5 KiB & 0.000579 & 83 & 8.48e-05 & 0.000633 & 61 & 0.000114 \\ & 32 & 39.1 KiB & 0.00123 & 86 & 8.75e-05 & 0.00187 & 47 & 0.000129 \\ & 64 & 78.1 KiB & 0.00270 & 72 & 9.71e-05 & 0.00468 & 47 & 0.000131 \\ \hline 33333 & 0 & & & & 0.00166 & & & 0.00178 \\ & 2 & 8.1 KiB & 0.00135 & 18 & 0.00135 & 0.0016 & 13 & 0.00167 \\ & 4 & 16.3 KiB & 0.00136 & 44 & 0.00107 & 0.00186 & 30 & 0.00133 \\ & 8 & 32.6 KiB & 0.00199 & 56 & 0.000938 & 0.00239 & 65 & 0.00110 \\ & 16 & 65.1 KiB & 0.00330 & 89 & 0.000748 & 0.00371 & 90 & 0.000932 \\ & 32 & 130.2 KiB & 0.00683 & 139 & 0.000637 & 0.0103 & 138 & 0.000841 \\ & 64 & 260.4 KiB & 0.0152 & 168 & 0.000614 & 0.0256 & 63 & 0.00103 \\ \hline 100000 & 0 & & & & 0.00895 & & & 0.0125 \\ & 2 & 24.4 KiB & 0.00679 & 18 & 0.00747 & 0.00786 & 17 & 0.0119 \\ & 4 & 48.8 KiB & 0.0068 & 44 & 0.00638 & 0.00922 & 40 & 0.00987 \\ & 8 & 97.7 KiB & 0.00977 & 71 & 0.00565 & 0.0119 & 65 & 0.00754 \\ & 16 & 195.3 KiB & 0.0164 & 106 & 0.00534 & 0.0179 & 90 & 0.00625 \\ & 32 & 390.6 KiB & 0.0337 & 161 & 
0.00445 & 0.0491 & 138 & 0.00523 \\ & 64 & 781.2 KiB & 0.0755 & 240 & 0.00383 & 0.125 & 126 & 0.00612 \\ \hline 1000000 & 0 & & & & 0.221 & & & 0.337 \\ & 2 & 244.1 KiB & 0.159 & 18 & 0.195 & 0.187 & 17 & 0.322 \\ & 4 & 488.3 KiB & 0.159 & 47 & 0.175 & 0.219 & 40 & 0.295 \\ & 8 & 976.6 KiB & 0.228 & 99 & 0.154 & 0.271 & 96 & 0.273 \\ & 16 & 1.9 MiB & 0.37 & 142 & 0.140 & 0.419 & 118 & 0.260 \\ & 32 & 3.8 MiB & 0.77 & 161 & 0.136 & 1.14 & 171 & 0.255 \\ & 64 & 7.6 MiB & 1.72 & 454 & 0.120 & 2.91 & 391 & 0.178 \\ \hline 10000000 & 0 & & & & 4.36 & & & 6.50 \\ & 2 & 2.4 MiB & 3.02 & 18 & 3.89 & 3.56 & 17 & 6.18 \\ & 4 & 4.8 MiB & 3.01 & 47 & 3.53 & 4.1 & 40 & 5.75 \\ & 8 & 9.5 MiB & 4.14 & 110 & 3.18 & 5.03 & 109 & 5.24 \\ & 16 & 19.1 MiB & 6.57 & 222 & 2.90 & 7.49 & 203 & 5.13 \\ & 32 & 38.1 MiB & 13.8 & 338 & 2.61 & 20.6 & 348 & 4.64 \\ & 64 & 76.3 MiB & 31.3 & 551 & 2.39 & 53.4 & 592 & 4.50 \\ \end{tabular} \end{table} \subsection{Function evaluation with variable $n$ and $B$} Table~\ref{tab:functime} shows timings for the computation of the exponential function and trigonometric functions for different combinations of precision $B$ and number of primes $n$. The $n = 0$ reference timings correspond to the old algorithm without precomputation, in which repeated squaring will be used instead. At lower precisions, using 10-20 primes seems to be optimal. It is interesting to note that roughly a factor-two speedup can be achieved across a huge range of precision when $n$ increases with $B$. It seems likely that $n = 128$ or more primes could be useful at extreme precision, though the precomputation will increase proportionally. We used the Machin-like formulas from Table~\ref{tab:logrelations} and Table~\ref{tab:atanrelations} only up to $n = 25$ or $n = 22$; for $n = 32$ and $n = 64$ we fall back on less optimized formulas, which results in a noticeably slower precomputation. \section{Extensions and further work} We conclude with some ideas for future research. \subsection{Complexity analysis and fine-tuning} It would be interesting to perform a more precise complexity analysis. Under some assumptions about the underlying arithmetic, it should be possible to obtain a theoretical prediction for the optimal number of primes $n$ as a function of the bit precision $B$, with an estimate of the possible speedup when $B \to \infty$. There are a large number of free parameters in the new algorithm (the number of primes $n$, the choice of primes, the precise setup of the precomputed relations $\varepsilon_i$, the allowed size of the power product, choices in the subsequent Taylor series evaluation...). Timings can fluctuate depending on small adjustments to these parameters and with different values of the argument~$x$. It is plausible that a consistent speedup can be obtained by simply tuning all the parameters more carefully. \subsection{Complex arguments} All elementary functions of complex arguments can be decomposed into real exponentials, logarithms, sines and arctangents after separating real and imaginary parts. An interesting alternative would be to compute $\exp(z)$ or $\log(z)$ directly over $\mathbb{C}$, reducing $z$ with respect to complex lattices generated by pairs of Gaussian primes. We do not know whether this presents any advantages over separating the components. \subsection{$p$-adic numbers} The same methods should work in the $p$-adic setting. 
For the $p$-adic exponential and logarithm, we can choose a basis of $n$ prime numbers $p_i \ne p$ and use LLL to precompute relations $\sum_{i=1}^n c_{j,i} \log(p_i) = O(p^j)$ for $j = 1, 2, \ldots, r$. We can then use these relations to reduce the argument to order $O(p^r)$ before evaluating the function using Taylor series or the $p$-adic bit-burst method~\cite{caruso2021fast}. We have not attempted to analyze or implement this algorithm. \subsection{More Machin-like formulas} It would be useful to have larger tables of optimized Machin-like formulas for multi-evaluation of logarithms and arctangents. In practice, formulas need not be optimal as long as they are ``good enough''; for example, one could restrict the search space to 64-bit arctangent arguments $x$. Nevertheless, a large-scale computation of theoretically optimal tables would be an interesting challenge of its own. \section{Acknowledgements} The author was present at RISC in 2011 where Arnold Sch\"{o}nhage gave one of the talks~\cite{schonhage2011} presenting his original ``medium-precision'' version of the algorithm using a pair of primes. Ironically, the author has no memory of the event beyond the published talk abstract; the inspiration for the present work came much later, with Machin-like formulas for logarithms as the starting point, and the details herein were developed independently. Nevertheless, Sch\"{o}nhage certainly deserves credit for the core idea. We have tried unsuccessfully to contact Sch\"{o}nhage (who is now retired) for notes about his version of the algorithm. The author learned about the process of finding Machin-like formulas thanks to MathOverflow comments by Douglas Zare and the user ``Meij''~\cite{125687} explaining the method and pointing to the relevant chapter in Arndt's book. Algorithm~\ref{alg:linred} was inspired by a comment by Simon Puchert in 2018 proposing an iterative argument reduction using smooth fractions of the form $(m+1)/m$. We have substantially improved this algorithm by using LLL to look for arbitrary smooth fractions close to 1 instead of restricting to a set of fractions of special form, and by working with the logarithmic forms during reduction. The author was supported by ANR grant ANR-20-CE48-0014-02 NuSCAP. \bibliographystyle{alpha}
\section{Appendixes} \begin{acknowledgments} The authors acknowledge Prof. J. McCord from Kiel University for fruitful discussions. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860060 “Magnetism and the effect of Electric Field” (MagnEFi), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 173 - 268565370 (projects A01 and B02), the DFG-funded collaborative research center (CRC) 1261 / project A6 and the Austrian Research Promotion Agency (FFG). The authors acknowledge support by the chip production facilities of Sensitec GmbH (Mainz, DE), where part of this work was carried out, and the Max-Planck Graduate Centre with Johannes Gutenberg University. \end{acknowledgments} \section*{Author Declarations} The following article has been submitted to Applied Physics Letters. After it is published, it will be found at \textit{publishing.aip.org}. \subsection*{Conflict of interest} The authors have no conflicts to disclose. \section*{Data Sharing Policy} The data that support the findings of this study are available from the corresponding author upon reasonable request. \nocite{*} \subsection*{\textbf{S1} - Intermixing characterization and alloy composition} In figure S\ref{fig_S02} a) and b), X-ray scattering measurements are presented in order to probe the crystalline structure of our multilayer stack. Figure S\ref{fig_S02} a) contains an X-Ray Diffraction (XRD) angular scan of the Ni/Fe multilayer and confirms that our sputtered layers are textured. In figure S\ref{fig_S02} b) an X-Ray reflectivity (XRR) angular scan is shown for the as-deposited state and for samples with selected irradiation fluences. The best fit considers a relative roughness of the layers of $\simeq1$ $nm$ for the as-deposited case. All curves contain two types of periodic oscillation. The short-period oscillation with $0.2^{\circ}$ period corresponds to the full thickness of the stack ($41$ $nm$ in total). The long-period oscillations around 2, 4 and $6^{\circ}$ correspond to the ML repetitions $t_p=4$ $nm$ (black arrows in figure S\ref{fig_S02} b)). The amplitude of these oscillations is progressively reduced as the fluence of He$^+$ is increased during irradiation. In figure S\ref{fig_S02} b) this effect of increasing irradiation is best visible for the peak at 6.2$^\circ$. The data suggest a degradation of the Ni/Fe interfaces, indicating an increased level of intermixing. This is qualitatively confirmed by a fitting model in which the layer roughness of Ni and Fe is increased, in agreement with our ToF-SIMS and STEM measurements. A different method to promote atomic diffusion in magnetic materials is the use of thermal energy provided by annealing\cite{Annealing}. We compare the effects of He$^+$ ion irradiation with annealing of our magnetic multilayer in vacuum at 300$^\circ$C for 4.5 hours. In the inset of figure S\ref{fig_S02} a) the 110/111 reflection peak is compared for the multilayer as-deposited, after annealing, and after irradiation with a fluence of $1\times10^{16}$ $cm^{-2}$. The data suggest that the crystalline texture is not altered significantly by these two material treatments. The atomic diffusion caused by each treatment can be observed in the ToF-SIMS measurements by comparing figure S\ref{fig_S02} c) with figures S\ref{fig_S02} d) - e), after irradiation and annealing, respectively.
As mentioned in the manuscript, the irradiation promotes intermixing, as the oscillations in the signals of Fe and Ni are attenuated (fig. S\ref{fig_S02} d)) with respect to the as-deposited case. This effect can be similarly observed after the annealing treatment (fig. S\ref{fig_S02} e)). However, a clear difference between the figures can be seen in the signal of Cr (from the seed layer): after irradiation the atomic diffusion is confined to the layer interfaces, whereas after annealing the intermixing is long-range and involves the non-magnetic NiFeCr seed layer. In addition, the coercive field and the magnetic anisotropy of the multilayer are unchanged after the annealing, in contrast to the improved magnetic softness reported after the irradiation. The above-mentioned differences between the two material treatments can be attributed to the different activation mechanisms for atomic displacement: kinetic energy for irradiation and thermal energy for annealing. \begin{figure}[h!] \centering\includegraphics[width=16cm]{figures/S02.PNG} \caption{\label{fig_S02} a) XRD angular scan of the Ni/Fe multilayer sample after sputtering. In the inset: Fe 110/Ni 111 peak of the multilayer as-deposited, annealed and after irradiation. b) X-Ray reflectometry (XRR) measurement for a multilayer of $[Ni(2$ $nm)/Fe(2$ $nm)]\times 8$ irradiated with different He$^+$ fluences. The changes in the curves indicate increasing intermixing at the interfaces of our multilayer with increasing ion fluences. c), d) and e) ToF-SIMS measurements for the multilayer as-deposited, after irradiation and after thermal annealing, respectively. } \end{figure} In addition to ToF-SIMS measurements, the level of intermixing caused by ion irradiation was characterized by Scanning Transmission Electron Microscopy (STEM). Cross sections of Fe/Ni/NiFeCr on SiO$_2$/Si were prepared using the focused-ion-beam (FIB) method. STEM images were acquired in high-angle annular dark field (HAADF) mode on a probe CS (spherical aberration coefficient)-corrected Titan$^3$ G$^2$ 60–300 microscope operating at an accelerating voltage of 300 $kV$, using a probe-forming aperture of 25 $mrad$ and annular ranges of 80-200 mrad on the detector. Nanoscale chemical analysis via energy dispersive X-ray spectroscopy (EDX) was performed in STEM mode using a Super-X detector setup with 4 symmetrically aligned in-column detectors. The structure of our multilayer is polycrystalline, with (110)-textured layers of Fe and (111)-textured layers of Ni, as can be seen in figure S\ref{fig_S04}. The structural motif of [100] Fe with (110) out-of-plane orientation was evidenced by Fast Fourier Transforms (FFT) on a crystalline region within a Fe-layer. Within the Ni-layers, the dominant structural motif of [101] Ni was observed with (111) out-of-plane orientation before He$^+$-ion irradiation, as shown in figure S\ref{fig_S04} a) - c). The same measurements were repeated after ion irradiation with a fluence of $1\times10^{16}$ $cm^{-2}$. In this case the polycrystalline multilayers of Fe and Ni after irradiation (FFT images in figure S\ref{fig_S04} d) - e)) show a crystalline texture identical to the as-deposited state (figure S\ref{fig_S04} b) - c)), allowing us to exclude significant changes to the crystalline structure after the irradiation treatment used. \begin{figure}[h!] \centering\includegraphics[width=12cm]{figures/S04.PNG} \caption{\label{fig_S04} High-Resolution STEM micrographs of the Fe/Ni multilayer system before and after He$^+$-ion irradiation.
a) repetitions of (110)-textured layers of Fe and (111)-textured layers of Ni are evidenced by specific Z-contrast and individual Fast Fourier Transforms (FFT) of regions b) and c). d) repetitions of Fe and Ni layers showing the identical crystalline texture after irradiation by comparison of FFT images. e) Noise-filtered micrograph displaying the atomic structure of the multilayers. The structural motifs of [100] Fe and [111] Fe are shown for crystalline regions within the Fe-layers. } \end{figure} Monte Carlo (TRIM) simulations were performed. Using TRIM simulations it is possible to calculate kinetic phenomena associated with the ion’s energy loss, in our case the target atom displacement (normalized by the incoming ion fluence) as a function of the vertical depth of the sample. The system is initialized with perfect interfaces and the kinetic energy of the incoming ions is set to 20 $keV$. The results of the TRIM simulations are presented in figure S\ref{fig_S03} a). The solid lines represent the recoil atomic distribution after the collision with He$^+$ ions. In the overlapping region of two curves we have coexistence of different atomic species (intermixing/alloying). The simulations suggest that the displacement is uniform through the magnetic stack for the selected ion energy; we can therefore expect the same amount of intermixing at each Ni/Fe interface. Furthermore, we do not see any significant intermixing of the non-magnetic capping and seed layers with the magnetic stack. This is most likely an effect of the short-range nature of the collisions with He$^+$ ions \cite{Fassbender}. The outcome of the simulations is in line with the ToF-SIMS and STEM-EDX measurements. In the manuscript we attribute the changes in the magnetoelastic coupling to the intermixing induced by ion irradiation. To describe the magnetostriction of a multilayer system in the presence of intermixing we can use the expression\cite{model} \begin{equation} \label{eq_free_ener} \lambda_s=\frac{\lambda_s^{Ni}+\lambda_s^{Fe}}{2}+\left(2\lambda_s^{Ni_{x}Fe_{1-x}}-\lambda_s^{Ni}-\lambda_s^{Fe}\right)\frac{t_{Ni_{x}Fe_{1-x}}}{t_p}, \end{equation} where we ascribe the changes of magnetostriction after the irradiation process to the combination of $\lambda_s$ of three different materials: Ni, Fe (which are the sputtered layers) and the $Ni_{x}Fe_{1-x}$ alloy at the interfaces (induced by ion irradiation). In our case the thickness $t_{Ni_{x}Fe_{1-x}}$ grows under the effect of the irradiating ions. The values predicted by equation \ref{eq_free_ener} are obtained by considering, for the formed alloy, the value of magnetostriction at the relative composition $x=50\%$. All the values are reported in table \ref{tab_material_film}. In realistic conditions, the amount of intermixing will change gradually at the interface. Consequently, $\lambda_s^{Ni_{x}Fe_{1-x}}= \lambda_s^{Ni_{x}Fe_{1-x}} (x)$ will not be constant. \begin{figure}[h!] \centering\includegraphics[width=16cm]{figures/S03.PNG} \caption{\label{fig_S03} a) Monte Carlo simulation using the software TRIM\cite{ziegler}. The atomic distribution of different elements after the collision with incoming ions is shown along the vertical depth of the multilayer. The results are normalized by the incoming fluence of ions. b) saturation magnetostriction $\lambda_s$ of a $NiFe$ alloy as a function of the Ni composition ($\%$). Data points from Judy et al.\cite{fabrication}. } \end{figure} The approximation used is justified by the content of figure S\ref{fig_S03}.
Monte Carlo simulations indicate that the transition between the atomic distributions of the sputtered materials (Ni and Fe) is exponential, as shown in figure S\ref{fig_S03} a). This indicates that the formed alloy is confined at the interfaces. Additionally, the magnetostriction of permalloy with relative Ni composition between $x=40-70\%$ does not deviate significantly from the value in table \ref{tab_material_film}. This is because the magnetostriction of permalloy has a local maximum around $50\%$ relative composition, as can be seen in figure S\ref{fig_S03} b). Therefore, a constant value of $\lambda_s^{Ni_{x}Fe_{1-x}}$ is expected to give consistent results, as has been the case in previous works\cite{model,nagai1988properties}. Discrepancies between the calculated values and the measured ones can be attributed to surface and interface effects which add to the presence of the intermixed layer. \subsection*{ \textbf{S2} - Evaluation of Magnetostriction} All layers are deposited by magnetron sputtering (using a Singulus Rotaris system). The substrate is $1.5$ $\mu m$ SiOx on top of $625$ $\mu m$ undoped Si. The magnetic material was sputtered in a rotating magnetic field of $50$ $Oe$. Our films are structured using optical lithography and Ar$^+$ ion etching into a circular pattern of $80$ $\mu m$ diameter and $3$ $\mu m$ spacing, as shown in figure S\ref{fig_S01} b). This design has been chosen to probe the local magnetic properties of the film while, at the same time, minimizing the shape anisotropy contribution. The magnetic measurements were performed using Kerr microscopy with a $20\times$ objective and a white light source. Coils for in-plane magnetic fields are used. We measure the hysteresis loops by detecting differential contrast changes in the magneto-optical Kerr effect (MOKE) in a longitudinal configuration of the polarized light. Both longitudinal and transversal configurations are used to image the magnetization state (domains) in a grey-scale sum image, as can be seen in figure S\ref{fig_S01} c). \begin{figure}[h!] \centering\includegraphics[width=14cm]{figures/S01.PNG} \caption{\label{fig_S01} a) hysteresis loops measured with Kerr microscopy that are used to estimate $\lambda_s$. Inset: schematic of the sample holder used to strain the substrate. b) optical microscope image of the sample patterned in an array of $80$ $\mu m$ diameter disks and c) magnetic domains imaged with a longitudinal configuration of polarized light. } \end{figure} To obtain information about the magnetoelastic properties of the material, the substrate was bent mechanically with a 3-point bending sample holder, as shown schematically in the inset of figure S\ref{fig_S01} a). A square sample of 1 by 1 cm is vertically constrained on two sides and pushed uniformly from below by a cylinder that has an off-centered rotation axis. The device generates a tensile strain in the plane of the sample of up to $0.1$ $\%$ when the cylinder is rotated by 90$^\circ$. The strain is mostly uniaxial and has been measured with a strain gauge on the substrate surface. Magnetic hysteresis loops are recorded before and after the application of the tensile strain and are used to estimate the saturation magnetostriction of the material. As previously reported\cite{spinvalve,paperarea}, the magnetic anisotropy $K_{u}$ is linked to the energy stored in the magnetization curves.
For example, the (uniaxial) magnetic anisotropy energy is given by the area enclosed between the magnetic loops measured along two in-plane directions perpendicular to each other. If the strain in the film is then non-zero, the magneto-elastic coupling contributes in principle to the effective anisotropy. Two hysteresis loop measurements, before and after the application of strain, are sufficient to estimate $\lambda_s$. Indeed, the total anisotropy of the system is $K_{eff}=K_{u}$ and $K_{eff}=K_{u}+K_{ME}$ before and after the application of strain, respectively. The magnetoelastic anisotropy $K_{ME}=\frac{3}{2}\lambda_s Y \epsilon$ is linked to the reversible part of the hysteresis loops (close to saturation) according to \begin{equation} \label{eq_strain_eanis} K_{ME}=M_s \Delta E=\frac{3}{2}\lambda_s Y \epsilon \end{equation} where $\Delta E$ is the anisotropy energy measured by the difference in area below the strained and unstrained curves. This corresponds to the reversible part, i.e. the red marked area in figure S\ref{fig_S01} a). The experimental values of magnetostriction were calculated using the values of the Young's modulus ($Y$) and saturation magnetization ($M_s$) of the stack taken from the literature and reported in table \ref{tab_material_film}. \begin{table}[h!] \centering \begin{tabular}{||c c c c||} \hline Material & $M_{s}$ $(T)$ & $\lambda_s$ $\times10^{-6}$ & $Y$ ($GPa$) \\ [0.5ex] \hline\hline $Fe$ & 2.15 & -9 & 211\\ \hline $Ni$ & 0.55 & -30 & 180 \\ \hline $Ni_{50} Fe_{50}$ & 1.5 & 19 & 200 \\ \hline \end{tabular} \caption{Parameters from the literature\cite{nagai1988properties,cullity2011introduction,bur2011strain,bozorth1993ferromagnetism,klokholm1981saturation} for the magnetic materials after deposition (no irradiation). Here, $M_s$ is the saturation magnetization, $\lambda_s$ is the saturation magnetostriction and $Y$ is the Young's modulus. The same $Y$ is considered for as-deposited and irradiated samples. } \label{tab_material_film} \end{table} \newpage
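As a back-of-the-envelope illustration (ours, using only the literature values of table \ref{tab_material_film}): for the $Ni_{50}Fe_{50}$ alloy under the maximum tensile strain $\epsilon=0.1\%$ produced by the bending holder, equation \ref{eq_strain_eanis} gives \[ K_{ME}=\frac{3}{2}\lambda_s Y \epsilon \approx \frac{3}{2}\times\left(19\times10^{-6}\right)\times\left(200\ GPa\right)\times 10^{-3} \approx 5.7\ kJ/m^{3}. \] Similarly, evaluating equation \ref{eq_free_ener} with $\lambda_s^{Ni}=-30\times10^{-6}$, $\lambda_s^{Fe}=-9\times10^{-6}$, $\lambda_s^{Ni_{x}Fe_{1-x}}=19\times10^{-6}$ and $t_p=4$ $nm$ shows that an intermixed thickness of only $t_{Ni_{x}Fe_{1-x}}\approx1$ $nm$ per period shifts the average magnetostriction from $-19.5\times10^{-6}$ to $\approx-0.25\times10^{-6}$, i.e.\ nearly zero.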
\section{Introduction} \label{SEC_intro} Diabetes is a lifelong condition, and a diabetic person is at lifetime risk of developing foot ulcer wounds, which severely affect quality of life. Getting an infection further complicates the situation and may lead to limb amputation and even death. Such diabetic foot ulcer wounds need to be examined regularly by healthcare professionals for diagnosis and prognosis, including assessing the current condition, devising a treatment plan, and estimating the time to complete recovery. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{samples} \caption{Typical challenging cases from the Foot Ulcer Segmentation (FUSeg) Challenge dataset: (a) heterogeneous wound shapes and their random positions, (b) color variations of wounds, (c) changes in skin tone, (d) background clutter, and (e) change in viewpoints. These images are cropped, and padding is stripped off for better display.} \label{FIG_1} \end{figure} Innovations in technology have resulted in better sensors and storage media, thus paving the way for advanced clinical procedures. The use of cameras and smartphones to obtain images of ulcer wounds each time a patient comes for an examination is becoming common. Foot ulcer analysis is a lengthy process, ranging from the visual inspection of wounds to determining their class, severity, and growth over time by comparing past images side by side. Such subjective measures may cause human errors, resulting, even with the utmost care, in additional variability in the vast amount of gathered data and in hours of work producing annotations. By utilizing artificial intelligence (AI) algorithms in general and deep learning (DL) techniques in particular, a vast amount of medical data can be processed and analyzed quickly, accurately, and affordably. These algorithms are helping the healthcare industry administer improved medical procedures, promote rapid healing, save expenses, and boost patient satisfaction. Segmentation is an essential step in a foot ulcer analysis pipeline. A reliable and efficient wound segmentation model could better aid in evaluating the condition, analyzing it, and deciding on an optimal treatment procedure. The goal of foot ulcer wound segmentation is to label every pixel in an image either as wound \textit{(foreground)} or as everything else \textit{(background)}. There are several challenges in performing foot ulcer segmentation (as shown in Fig. \ref{FIG_1}), such as heterogeneity in wound shape and color, skin tone, viewpoints, background clutter, lighting conditions, and capturing devices. In this study, we propose an end-to-end lightweight deep neural network for foot ulcer wound segmentation that is robust to these challenges and generalizes well across the dataset without requiring any user interaction. Our model is inspired by U-Net \cite{Ronneberger2015} and ResNet \cite{He2016} and includes the key features of both models. Each residual block in the proposed model has group convolution layers \cite{Krizhevsky2012} to keep the number of learnable parameters low. In addition, a residual connection, channel attention, and spatial attention are integrated within each convolution block to highlight the relevant features and identify the most suitable channels to improve the prediction accuracy.
The following are the main contributions of this study: \begin{itemize} \item[--] We propose an end-to-end lightweight model for foot ulcer wound segmentation, primarily utilizing group convolutions. \item[--] Channel and spatial attention layers are combined with the residual connection within each block to form a new \textit{residual attention} (ResAttn) block. This removes the need for standalone attention blocks, which would only increase the total number of trainable parameters and take a significant toll on overall model training time. \item[--] We use test time augmentations (TTA) with the majority voting technique to get better segmentation results. \item[--] Experimental evaluation on the publicly available Foot Ulcer Segmentation (FUSeg) dataset shows superior results. Our method stood second when compared with the top methods from the FUSeg Challenge leaderboard\footnote[1]{(\url{https://uwm-bigdata.github.io/wound-segmentation}) last accessed on Jan. 6, 2022.}. \end{itemize} The remainder of this paper is organized as follows. In Sect. \ref{SEC_related_work}, we provide an overview of the related work on the segmentation problem and attention techniques. Section \ref{SEC_proposed_method} describes our proposed model and experimental setup. Section \ref{SEC_experiments} presents the experimental details, results, and a brief discussion. Finally, the conclusion is given in Sect. \ref{SEC_conclusion}. \section{Related Work} \label{SEC_related_work} \subsection{Classical Segmentation Methods} Several probabilistic and image processing methods, machine learning, and deep learning techniques fall under this category. Edge detection, clustering, adaptive thresholding, K-means, and region-growing algorithms are a few well-known image processing methods used for segmentation \cite{Bahdanau2014}. These methods, not being data hungry, are fast, but most struggle to generate a reliable outcome for unseen data and thus fail to generalize. Earlier machine learning algorithms typically made use of hand-crafted features based on image gradients, colors, or textures for segmentation. Such algorithms include classifiers such as the multi-layer perceptron (MLP), decision trees, and support vector machines (SVM) \cite{Wang2017}. \subsection{Deep Learning-Based Segmentation Methods} \textit{Convolutional neural networks} (CNNs) have been successfully used for biomedical segmentation tasks such as segmenting tumors of the breast, liver, and lungs in MRI and CT scans, nuclei segmentation in histological images \cite{Kumar2020,Caicedo2019}, and skin lesion, polyp, and wound segmentation in RGB images \cite{Long2015,He2020,Wang2020}. Deep learning-based approaches have outperformed other approaches for foot ulcer segmentation \cite{Caicedo2019} since they are good at learning hidden patterns and generalize well to new data. Well-known CNN-based architectures such as the \textit{Fully Convolutional Network} (FCN), \textit{U-Net}, and \textit{Mask-RCNN}, and lightweight mobile architectures like \textit{EfficientNet} \cite{Ronneberger2015,Long2015,He2020}, have been utilized to perform wound segmentation in various studies \cite{Wang2020,Chino2020}. \subsection{Attention Mechanisms} These mechanisms allow a vision model to pay better attention to the salient features or regions in the input feature maps. This concept is closely related to image filtering in computer vision and computer graphics, where filters reduce noise and extract useful image structures. Bahdanau et al.
\cite{Bahdanau2014} made the very first successful attempt to include the attention mechanism in an automated natural language translation task. The \textit{Residual Attention Network} proposed by Wang et al. \cite{Wang2018} used non-local self-attention to capture long-range pixel relationships. Hu et al. \cite{Hu2018} used global average pooling operations to emphasize the most contributing channels in their proposed \textit{Squeeze-and-Excitation} (SE) blocks. Several other efforts have been made to incorporate spatial attention. Woo et al. \cite{Woo2018} made a notable effort with the \textit{Convolutional Block Attention Module} (CBAM). It consisted of channel and spatial attention in a sequential fashion, which led to a significant improvement in the model's representation power. Wu et al. \cite{Wu2021} proposed an \textit{Adaptive Dual Attention Module} (ADAM) that captured multi-scale features for recognizing skin lesion boundaries. \section{Proposed Method} \label{SEC_proposed_method} \subsection{Model Overview} Our proposed model derives its key strength from the U-Net and ResNet architectures. We extended a U-shape model with the \textit{residual attention} (ResAttn) blocks. In each ResAttn block, convolution layers with variable receptive fields combined with channel and spatial attention better emphasize the contribution of meaningful features at different scales. Fig. \ref{FIG_2} shows the proposed architecture, which has two branches for image encoding and decoding purposes. Each branch contains a series of ResAttn blocks either with max-pooling or transpose convolution layers. Given an input image, feature extraction is performed during downsampling (encoding), followed by the reconstruction branch to upscale the feature maps (decoding) back to the input size. A series of transpose convolution layers upscales the element-wise summation of the feature maps. These feature maps come from the previous block and skip connections and thus have the same spatial size. The last ResAttn block outputs 64 channels that are reduced to 1 by a 1×1 convolution layer. Finally, a sigmoid function scales the dynamic range to the [0,1] interval. \begin{figure}[t] \centering \includegraphics[width=12cm]{architecture_rev} \caption{The proposed foot ulcer segmentation model is a U-shape model with redesigned convolution blocks as \textit{ResAttn} blocks. The final activation is sigmoid ($\sigma$), the number of ResAttn instances is given in every block name (e.g., ``ResAttn×2'' means two blocks), $F$ indicates the number of output feature maps, and dotted lines represent a skip connection between encoding and decoding blocks. } \label{FIG_2} \end{figure} We carefully considered the impact of design choices. \textit{Point kernel convolutions} were preferred inside the ResAttn blocks since they require fewer training parameters than a convolution with a 3×3 or larger kernel. Our model initially produces 32 feature maps for each input RGB patch rather than 64 as in the case of a standard U-Net. Since the number of feature maps doubles in each encoding block, this saves a large amount of memory. Likewise, we found group convolutions extremely useful in remarkably reducing network parameters. The total number of trainable parameters of our model went down to 17\% of that of its vanilla counterpart. We also observed that setting the value of the \textit{groups} parameter to a multiple of 32 was sufficient for producing quality segmentation results.
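As a rough illustration of where these savings come from (a sketch of ours, not the released code of this paper; the layer sizes are made up for illustration), the weight count of a PyTorch convolution drops by the group factor:

\begin{verbatim}
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# A dense 3x3 convolution versus the same layer split into 32 groups.
dense   = nn.Conv2d(256, 256, kernel_size=3, padding=1)
grouped = nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=32)

print(n_params(dense))    # 590080 = 256*256*3*3 weights + 256 biases
print(n_params(grouped))  # 18688  = 256*(256/32)*3*3 + 256, ~32x fewer weights
\end{verbatim}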
\subsection{Loss Function} In the training process, we used a linear combination of the binary cross entropy loss $\mathcal{L}_{bce}$ and the dice similarity loss $\mathcal{L}_{dice}$. The total segmentation loss $\mathcal{L}_{seg}$ was calculated as: \begin{align} \mathcal{L}_{seg}&={\lambda_1\mathcal{L}}_{bce}+{\lambda_2\mathcal{L}}_{dice}, \label{EQ_1}\\ \mathcal{L}_{dice}&=1-2\frac{\sum_{i}{g_ip_i}}{\sum_{i} g_i+\sum_{i} p_i},\\ \mathcal{L}_{bce}&=-\sum_{i}{(g_i\ln{\left(p_i\right)}+(1-g_i)\ln{(1-p_i)}),} \end{align} where $g$ is the ground truth binary mask, $p$ is the model prediction, and $\lambda_1$ and $\lambda_2$ in Eq. \ref{EQ_1} are weighting parameters, which were set to 1. The segmentation loss $\mathcal{L}_{seg}$ trained our model well and produced satisfactory segmentation performance. \begin{figure}[!b] \centering \includegraphics[width=9cm]{res_attn_block} \caption{\label{fig_S02_placeholder} Residual attention (ResAttn) block in our lightweight model. Each convolution layer produces the same number of output feature maps ($F_{out}$). As long as the input and output channels are the same (i.e., $F_{in}=F_{out}$), the block input serves as a residual connection; otherwise, the dotted path is used. All convolution operations are followed by the batch norm and activation layers.} \label{FIG_3} \end{figure} \subsection{Residual Attention Block} Each ResAttn block has three convolution layers with kernel sizes of 1×1, 3×3, and 1×1, respectively, along the main path. A fourth convolution layer with a 1×1 kernel serves as a \textit{residual connection} only when the number of input channels ($F_{in}$) is not equal to the number of output channels ($F_{out}$). All convolution layers are followed by the activation and batch norm layers. The total number of network parameters was reduced by choosing small and fixed-size kernels. Such small-sized kernels reduce the effective receptive field, resulting in the loss of spatial information. Furthermore, every pixel within a receptive field does not contribute equally to the output \cite{Luo2016}. This constraint can be alleviated by utilizing an attention mechanism to capture the global context information and improve the representation capability of extracted features. The spatio-channel attention shown in Fig. \ref{FIG_3} remarkably increased the ability of the model to pay attention to meaningful, task-related information. \begin{itemize} \item[--] \textbf{Channel Attention:} The channel attention vector $F_c\in\mathcal{R}^{(C\times1\times1)}$ was obtained by squeezing the spatial dimension of an input feature map. We used adaptive max-pooling followed by a sigmoid function to get a probability estimate of the distinctiveness of each feature. \item[--] \textbf{Spatial Attention:} Unlike most spatial attention mechanisms proposed previously, we found that a 2D softmax over the features at each spatial location was enough to yield a spatial map $F_s\in\mathcal{R}^{(1\times H\times W)}$. It attended to the meaningful regions within the patches. \end{itemize} Both attention maps were multiplied with the residual connection, which is either the block input or a 1×1 convolution of the block input when the number of input channels differs from the number of output channels. Their element-wise summation with the output of the $conv-bn-gelu$ path was then computed. These operations can be expressed as Eq. \eqref{EQ_4}, whereas the detailed scheme is given in Fig. \ref{FIG_3}.
\begin{align} \label{EQ_4} F_{out}=F_{conv}+\alpha F_c+\beta F_s, \end{align} where $F_{conv}$ is the output of the series of $conv-bn-gelu$ operations, $F_c$ is the channel attention, $F_s$ is the spatial attention, and the two learnable weights are denoted as $\alpha$ and $\beta$. In our experiments, Gaussian Error Linear Units (GELU) were preferred over Rectified Linear Units (ReLU) for their stochastic regularization effect \cite{Hendrycks2016}. The GELU activation function has shown promising results in state-of-the-art architectures like GPT-3 \cite{Brown2020}, BERT \cite{Devlin2018}, and vision transformers \cite{Dosovitskiy2020}. \subsection{Experimental Setup} We implemented our model in PyTorch \cite{Paszke2019} on a Windows 10 PC having an 8-core 3.6 GHz CPU and an NVIDIA TITAN Xp (12 GB) GPU. The training was carried out using input images cropped to 224×224 in a non-overlapped fashion. The LAMB optimizer \cite{You2019} was used to update the network parameters with a learning rate of 0.001 and a batch size of 16. The network was trained for 100 epochs only, and the epoch yielding the best dice similarity score was included in the results. No pre-training or transfer learning technique was used in any of the performed experiments, except for Xavier weight initialization. At test time, 224×224 sized patches of the validation images were used to generate predictions. We used test time augmentations (TTA) \cite{Simonyan2015} at the patch level. Such augmentations included horizontal/vertical flips and random rotations by multiples of ${90}^o$. We did not use multiple crops at test time because the quality gain was negligible compared to the increase in computation time. The majority voting technique was used to decide the label at the pixel level. \section{Experiments} \label{SEC_experiments} \subsection{Dataset} This dataset was released for the \textit{Foot Ulcer Segmentation Challenge} at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2021 \cite{Wang2022}. It is an extended version of the chronic wound dataset and has 810 training, 200 validation, and 200 test images. The size of the images was kept fixed at 512×512 pixels by applying zero-padding either at the left side or the bottom of the image. The ground truth masks for the test images were held private by the organizers for the final evaluation of challenge participants, so we evaluated the model performance on the validation images only. We employed online data augmentation transformations including horizontal/vertical flips, multiple random rotations by ${90}^o$, and random resized crops with high probability ($p\sim1.0$). Other augmentations with significantly lower probability ($p\sim0.3$) included randomly shifting HSV colors, random affine transformations, median blur, and Gaussian noise. \subsection{Evaluation Metrics}\label{SUBSEC_evaluation} The quality of the predicted segmentation masks was evaluated comprehensively against the ground truth using five different measures: the Dice similarity index (DSC), Jaccard similarity index (JSI), sensitivity (SE), specificity (SP), and precision (PR), which are defined as: \begin{align} DSC&=\frac{2TP}{2TP+FP+FN},\\ JSI&=\frac{TP}{TP+FP+FN},\\ SE&=\frac{TP}{TP+FN},\\ SP&=\frac{TN}{TN+FP}, \text{ and}\\ PR&=\frac{TP}{TP+FP}, \end{align} where TP, FN, TN, and FP represent the numbers of true positives, false negatives, true negatives, and false positives, respectively. The output values of all these measures range from 0 to 1, and a high score is desired.
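The following is a small sketch of ours (NumPy; not the challenge's official evaluation code) showing how these five measures can be computed from a pair of binary masks:

\begin{verbatim}
import numpy as np

def metrics(gt, pred):
    # gt and pred are binary masks (arrays of 0/1) of the same shape.
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum( gt &  pred)
    fp = np.sum(~gt &  pred)
    fn = np.sum( gt & ~pred)
    tn = np.sum(~gt & ~pred)
    return {"DSC": 2 * tp / (2 * tp + fp + fn),
            "JSI": tp / (tp + fp + fn),
            "SE":  tp / (tp + fn),
            "SP":  tn / (tn + fp),
            "PR":  tp / (tp + fp)}
\end{verbatim}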
Before evaluating the model performance, all obtained predictions were first binarized using a threshold value of 0.5. \subsection{Comparison with Baseline Model} We evaluated all model predictions obtained for the validation data on both the patch level and the image level for a fair comparison with other methods. A standard U-Net trained from scratch, keeping the training configuration and augmentation transformations close to the original paper \cite{Ronneberger2015}, gave a best dice score of 89.74\%, compared to 91.18\% achieved by our lightweight architecture, as shown in Table \ref{TAB_1}. The total number of parameters and the total number of floating-point operations per second (FLOPS) were significantly reduced, to 16\% of those of the vanilla U-Net model. The first column in Table \ref{TAB_1} gives the total network parameters in millions, the second column is for \textit{giga-floating-point operations per second} (GFLOPS), and the rest of the columns present the performance metrics given in section \ref{SUBSEC_evaluation}. \begin{table}[t] \centering \caption{Architecture and performance comparison (in terms of \%) between the models at the patch level. The best results are shown in bold.} \label{TAB_1} \begin{tabular}{l|l|l|l|l|l|l|l} \hline Model & Param(M)$\downarrow$ & GFLOPS$\downarrow$\textit{ } & DSC$\uparrow$ & JSI$\uparrow$ & SE$\uparrow$ & SP$\uparrow$ & PR$\uparrow$ \\ \hline U-Net (vanilla) & 31.03 & 30.80 & 89.74 & 81.39 & 89.01 & \textbf{99.73} & \textbf{90.48} \\ \hline Proposed method & \textbf{5.17} & \textbf{4.9} & \textbf{91.18} & \textbf{83.79} & \textbf{92.99} & 99.69 & 89.44 \\ \hline \end{tabular} \end{table} \begin{figure}[!b] \centering \includegraphics[width=\textwidth]{results} \caption{Example patches from the images of the FUSeg validation data (\textit{top row}), ground truth masks in red (\textit{middle row}), and segmentation predictions obtained from the proposed model in green (\textit{last row}).} \label{FIG_4} \end{figure} Some examples of patches extracted from the validation dataset images are shown in Fig. \ref{FIG_4}. The predicted segmentation results were almost identical to the ground truth masks. In some cases, as in Fig. \ref{FIG_4} (c) and (d), the model showed sensitivity to fresh wounds since they were high in color contrast compared to their surroundings. Fig. \ref{FIG_4} (a) represents a case where the model exhibited poor performance in capturing the fine-grained details, potentially due to the extremely low number of learnable parameters. \subsection{Comparison with Challenge Records} For image-level evaluations, all 224×224 patch-level predictions were unfolded to recover the original image of size 512×512. The statistical results of our method for the validation images are given in Table \ref{TAB_2} in comparison with the participating teams in the challenge. Our method ranked second on the leaderboard and successfully competed with other wider and deeper architectures. These models often utilized pre-trained backbones in a U-shape architecture along with extensive ensemble approaches. \begin{table}[!t] \centering \caption[]{The leaderboard of the MICCAI 2021 Foot Ulcer Segmentation (FUSeg) Challenge.
Our proposed method achieved the second-best place.} \label{TAB_2} \begin{tabular}{c|l|l|c} \hline \# & Team & Model & DSC$\uparrow$ \\ \hline 1 & \begin{tabular}[c]{@{}l@{}}Amirreza Mahbod, Rupert Ecker, Isabella \\Ellinger~\textcolor{darkgray}{(Medical University of Vienna,}\\\textcolor{darkgray}{TissueGnostics GmbH)}\end{tabular} & U-Net+LinkNet & 0.8880 \\ \hdashline[1pt/1pt] ~ & \textbf{Proposed method} & \begin{tabular}[c]{@{}l@{}}\textbf{U-Net with residual }\\\textbf{attention blocks}\end{tabular} & \textbf{0.8822} \\ \hdashline[1pt/1pt] 2 & \begin{tabular}[c]{@{}l@{}}Yichen Zhang~\textcolor{darkgray}{(Huazhong University of }\\\textcolor{darkgray}{Science and~Technology)}\end{tabular} & \begin{tabular}[c]{@{}l@{}}U-Net with HarDNet68 \\as encoder backbone\end{tabular} & 0.8757 \\ \hline 3 & \begin{tabular}[c]{@{}l@{}}Bruno Oliveira \\\textcolor{darkgray}{(University of Minho)}\end{tabular} & ~- & 0.8706 \\ \hline 4 & \begin{tabular}[c]{@{}l@{}}Adrian Galdran~\\\textcolor{darkgray}{(University of Bournemouth)}\end{tabular} & Stacked U-Nets~~~~~~ & 0.8691 \\ \hline 5 & \begin{tabular}[c]{@{}l@{}}Jianyuan Hong, Haili Ye, Feihong Huang,\\~Dahan Wang~\textcolor{darkgray}{(Xiamen University of }\\\textcolor{darkgray}{Technology)}\end{tabular} & ~- & 0.8627 \\ \hline 6 & \begin{tabular}[c]{@{}l@{}}Abdul Qayyum, Moona Mazher, Abdesslam\\~Benzinou, Fabrice~Meriaudeau \textcolor{darkgray}{(University }\\\textcolor{darkgray}{of~Bourgogne Franche-Comté)}\end{tabular} & ~- & 0.8229 \\ \hline 7 & Hongtao Zhu \textcolor{darkgray}{(Shanghai University)} & U-Net with ASPP & 0.8213 \\ \hline 8 & Hung Yeh \textcolor{darkgray}{(National United University)} & ~ & 0.8188 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{SEC_conclusion} The use of deep learning methods for automated foot ulcer segmentation is the best solution to the laborious annotation task and analysis process. We proposed using ResAttn block based on the residual connection, spatial attention, and channel attention. Our lightweight architecture, with ResAttn blocks, outperformed several recent state-of-the-art architectures at the leaderboard of the Foot Ulcer Segmentation Challenge from MICCAI 2021. In addition, this study offers an alternative perspective by showing how minor yet highly valuable design choices can lead to excellent results when using a simple network architecture. \subsubsection{Acknowledgment.} This study was supported by the BK21 FOUR project (AI-driven Convergence Software Education Research Program) funded by the Ministry of Education, School of Computer Science and Engineering, Kyungpook National University, Korea (4199990214394).
\section{Introduction}\label{sec:intro} Game artificial intelligence (AI) is a computer program that plays board games, such as chess and Shogi, and has been studied for a long time. In particular, computer chess has a long history. Computer chess programs that can even outperform humans have been developed~\cite{CAMPBELL200257}. However, since these programs were specialized for chess, they could not be generalized to other games. Recently, AlphaZero~\cite{Silver1140} has been gaining considerable attention as a general-purpose game AI. AlphaZero is a more generalized model of the AlphaGo Zero program~\cite{article111}, which demonstrated a higher performance than humans in Go by using a neural network (NN) and being trained only through reinforcement learning from self-play. AlphaZero used a single network structure and defeated world champion programs in three different classical games, Go, chess, and Shogi, without any knowledge other than the rules of each game. Thus, general-purpose game AI can be created with high performance but without the heuristic knowledge of the game. However, it is not possible to input all the information on the board directly into the NN for training, making it necessary to extract the features of the information on the board. In other words, some information compression is required. At present, the important information is extracted from the board heuristically, which is a crucial part of NN training. Therefore, a general method for compressing information on the board without any domain-specific knowledge is desired. One candidate information compression method is singular value decomposition (SVD). SVD is commonly used for information compression. It is a matrix decomposition method that allows low-rank approximation while retaining important information in the matrix. Therefore, it is often applied to reduce the number of parameters and compress the model size of NNs or tensor networks in fields such as image processing~\cite{6637309,inproceedings}, signal processing~\cite{8288893}, automatic speech recognition~\cite{article}, and quantum mechanics~\cite{8914525,PhysRevA.97.012327,PhysRevLett.115.180405}. However, this technique has not yet been applied to game AI, to the best of our knowledge. In this study, we apply SVD to approximate the information on a game board and investigate the effect of the approximation on a game AI's winning rate. We adopt Tic-Tac-Toe as the board game since the information space is small and we can search the entire game space. The board of Tic-Tac-Toe is a three-by-three grid. There are nine cells in total, and each cell takes on three different states. Thus, the state of the game board can be regarded as a ninth-order tensor. We first construct the perfect evaluation function for Tic-Tac-Toe and obtain approximated evaluation functions through low-rank approximation. Then, we investigate the relationship between the approximation accuracy and the game AI's winning rate. Since the evaluation function is a higher-order tensor, the decomposition is non-trivial. Thus, we consider two methods of decomposition, simple SVD and higher-order SVD (HOSVD)~\cite{ho,7070}. We compare the approximation accuracy and winning rate between the strategies approximated by simple SVD and HOSVD. The rest of the article is organized as follows. The method is described in the next section. The results are shown in Sec.~\ref{sec:results}. Section~\ref{sec:summary} is devoted to a summary and discussion.
\section{Method}\label{sec:method} \subsection{Complete evaluation function}\label{subsec:method2} \begin{figure}[htbp] \centering \includegraphics[width=3.5cm]{fig1.eps} \caption{Typical state of Tic-Tac-Toe.} \label{fig:three_eyes1} \end{figure} Tic-Tac-Toe is a simple board game in which the first player plays with a circle and the second player plays with a cross on a $3 \times 3$ square board (Fig.~\ref{fig:three_eyes1})~\cite{708,298,800}. If a cell is not empty, it cannot be selected. The first player to place three of their marks in a row vertically, horizontally, or diagonally wins. The game is a draw if neither player can make a vertical, horizontal, or diagonal row. Tic-Tac-Toe is classified as a two-player, zero-sum, perfect-information game~\cite{RAGHAVA}. In this paper, we refer to how much of an advantage either player has on a board as an evaluation value. The game AI computes the evaluation value from the information on the board and chooses the next move to increase the evaluation value. Therefore, it is necessary to define the board's evaluation value to construct the game AI's strategy. We refer to a function that returns the evaluation value of a given board as an evaluation function. Suppose the current state of the board is $S$, which is the set of nine cell states. Each cell is numbered serially from $1$ to $9$. Then, a state is expressed as $S = \{c_1, c_2, \cdots, c_9\}$, where $c_i$ is the state of the $i$th cell and its value is $0$, $1$, and $2$ for empty, circle, and cross, respectively. The evaluation function $f(S)$ gives an evaluation value for a given state $S$. Since $S$ is the set of nine cell states and each cell can take three values, the evaluation function can be considered a ninth-order tensor with dimension $3 \times 3 \times \cdots \times 3$. Since the total number of states in Tic-Tac-Toe is at most $3^9 = 19~683$, even ignoring constraints and symmetries, we can enumerate all possible states and construct the complete evaluation function. We refer to the evaluation function obtained by this full search as the perfect evaluation function $f_\mathrm{all}$. It is known that the game always ends in a draw if both players make their best moves. Thus, if we assumed that both players choose the best move, the evaluation value of every state would be zero. Therefore, we instead calculate the evaluation values assuming that both players move entirely at random. We first determine the evaluation values when the game is over. There are three terminal states in Tic-Tac-Toe: the first player wins, the second player wins, and the game is a draw, with evaluation values of $1$, $-1$, and $0$, respectively. Next, we recursively define an evaluation value for a general state. Suppose the $i$th cell is empty for a given state $S$, \textit{i.e.}, $S = \{\cdots, c_i=0, \cdots\}$. The evaluation value $\alpha_i$ when the $i$th cell is chosen as the next move is given by $$ \alpha_i = \begin{cases} f_\mathrm{all}(\{\cdots, c_i=1, \cdots\}) & \text{for the first player}, \\ f_\mathrm{all}(\{\cdots, c_i=2, \cdots\}) & \text{for the second player}. \end{cases} $$ The evaluation value $f_\mathrm{all}(S)$ of state $S$ is defined as the average over the possible moves as follows. \[ f_\mathrm{all}(S) = \frac{1}{M} \sum_{i} \alpha_i, \] where $M$ is the number of possible next moves and the summation is taken over all possible moves. By repeating this process recursively, every sequence of moves eventually reaches one of the terminal states (a first-player win, a second-player win, or a draw), so the evaluation values of all states are determined recursively; a minimal computational sketch is given below.
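For concreteness, this recursive construction can be sketched in Python as follows. This is an illustrative sketch under our own conventions (a board is a tuple of nine cells, and the helper \texttt{winner} and the memoization scheme are our own choices), not a prescribed implementation.
\begin{verbatim}
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8),     # rows
         (0,3,6), (1,4,7), (2,5,8),     # columns
         (0,4,8), (2,4,6)]              # diagonals

def winner(s):
    """Return 1 if circles win, 2 if crosses win, 0 otherwise."""
    for i, j, k in LINES:
        if s[i] != 0 and s[i] == s[j] == s[k]:
            return s[i]
    return 0

@lru_cache(maxsize=None)
def f_all(s):
    """Perfect evaluation value of state s (tuple of 9 cells) under random play."""
    w = winner(s)
    if w == 1:
        return 1.0              # first player wins
    if w == 2:
        return -1.0             # second player wins
    empty = [i for i in range(9) if s[i] == 0]
    if not empty:
        return 0.0              # draw
    # Circles move whenever an odd number of cells remains empty
    # (valid for states reached from the empty board with alternating turns).
    player = 1 if len(empty) % 2 == 1 else 2
    alphas = [f_all(s[:i] + (player,) + s[i+1:]) for i in empty]
    return sum(alphas) / len(alphas)    # average over random moves

print(f_all((0,) * 9))   # evaluation of the empty board
\end{verbatim}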
An example of the recursive tree for determining the evaluation value of a state is shown in Fig.~\ref{fig:three_eyes}. There are three possible next moves, since cells 3, 4, and 9 are empty. The evaluation value $f_\mathrm{all}(S)$ of the current state $S$ is calculated as $$ \begin{aligned} f_\mathrm{all}(S) & = \frac{1}{3} \left( \alpha_3 + \alpha_4 + \alpha_9\right) \\ & = \frac{1}{3}(0.5 - 0.5 + 0) \\ & = 0 . \end{aligned} $$ Here, all possible moves are weighted equally, which corresponds to the players choosing their next moves randomly. The closer the evaluation value is to $1$, the more likely the first player is to win when both players move randomly; the closer it is to $-1$, the more likely the second player is to win. \begin{figure}[ht] \centering \includegraphics[width=15cm]{fig2.eps} \caption{Calculation of the perfect evaluation function. The evaluation value of a given state $S$ is defined as the average of the evaluation values of the currently possible moves. The evaluation values are defined recursively, with $1$ for a win, $-1$ for a loss, and $0$ for a draw.} \label{fig:three_eyes} \end{figure} \subsection{Approximation of the evaluation function} \begin{figure}[bthp] \centering \includegraphics[width=15cm]{fig3.eps} \caption{Decomposition and approximations of $f_{\mathrm{all}}$, which is a ninth-order tensor with dimension $3 \times 3 \times \cdots \times 3$. (a) $f_{\mathrm{all}}$ is regarded as a matrix with dimension $3 ^ 4 \times 3 ^ 5$ and simple SVD is applied. (b) $f_{\mathrm{all}}$ is regarded as a third-order tensor with dimension $3 ^ 3 \times 3 ^ 3 \times 3 ^ 3$ and HOSVD is applied. Each index refers to the location of a cell. $r$ is the number of remaining singular values.} \label{fig:decomposition} \end{figure} The purpose of this study is to investigate how the approximation of the evaluation function $f_\mathrm{all}$ affects the winning rate. To approximate $f_\mathrm{all}$, we adopt SVD. However, the method of approximation is not uniquely determined, since $f_\mathrm{all}$ is a higher-order tensor. We examine two approximation methods in the present study: simple SVD and HOSVD. Since $f_\mathrm{all}$ is a ninth-order tensor with dimension $3 \times 3 \times \cdots \times 3$, it can be regarded as a $3^4 \times 3^5$ matrix. This matrix can be decomposed by SVD into two matrices $Q$ and $S$ as \[ f_{\mathrm{all}} = U\Sigma V^* \equiv QS, \] where $Q=U\sqrt{\Sigma}$ and $S=\sqrt{\Sigma}V^*$. If we keep $r$ singular values, $Q$ becomes a $3^4 \times r$ matrix $\tilde{Q}$ and $S$ becomes an $r \times 3^5$ matrix $\tilde{S}$. The approximated evaluation function is then given by \[ f_{\mathrm{all}} \sim f_{\mathrm{SVD}} = \tilde{Q} \tilde{S} . \] Schematic illustrations of this decomposition and approximation are shown in Fig.~\ref{fig:decomposition}~(a); a numerical sketch is given below.
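The simple-SVD approximation can be carried out with a few lines of NumPy, as sketched below. We assume here that the perfect evaluation function is available as a $3^9$-element array (e.g., from the recursive computation above); the variable names and the placeholder data are ours.
\begin{verbatim}
import numpy as np

# f_all: perfect evaluation function as a 9th-order tensor (3 x 3 x ... x 3)
f_all = np.random.randn(*([3] * 9))       # placeholder; use the real values

X = f_all.reshape(3**4, 3**5)             # regard the tensor as an 81 x 243 matrix
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)

r = 19                                    # number of singular values to keep
Q = U[:, :r] * np.sqrt(sigma[:r])         # Q~ : 3^4 x r
S = np.sqrt(sigma[:r])[:, None] * Vt[:r]  # S~ : r x 3^5
f_svd = (Q @ S).reshape(*([3] * 9))       # approximated evaluation function

# Relative error E and compression ratio C(r), defined in the next subsection
E = np.linalg.norm(X - Q @ S) / np.linalg.norm(X)
C = (Q.size + S.size) / X.size            # = 4r / 3^5
\end{verbatim}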
Simple SVD ignores the geometric structure of the game board. Therefore, we also adopt HOSVD as a decomposition method that reflects this structure. To apply HOSVD, we regard $f_{\mathrm{all}}$ as a third-order tensor with dimension $3^3 \times 3^3 \times 3^3$. It can then be decomposed as \[ f_{\mathrm{all}} = LCR, \] where $L$ and $R$ are matrices and $C$ is a third-order (core) tensor. The approximated evaluation function is given by \[ f_{\mathrm{all}} \sim f_{\mathrm{HOSVD}} = \tilde{L}\tilde{C}\tilde{R}, \] where $\tilde{L}$ and $\tilde{R}$ are matrices with dimensions $3^3 \times r$ and $r \times 3^3$, respectively, and $\tilde{C}$ is a third-order tensor with dimension $r \times 3^3 \times r$. Schematic illustrations of this decomposition and approximation are shown in Fig.~\ref{fig:decomposition}~(b). \subsection{Compression ratio and relative error} We introduce the compression ratio $C$ and the relative error $E$ to evaluate the quality of the approximations. $C$ is the ratio of the total number of elements in the approximated tensor to the number of elements in the original tensor. Suppose a matrix $X$ is approximated as $X \simeq \tilde{Q}\tilde{S}$; the compression ratio is then defined as \[ C = \frac{N(\tilde{Q}) + N(\tilde{S})}{N(X)} , \] where $N(X)$ is the number of elements in matrix $X$. We say the compression is high when $C$ is small and low when $C$ is large. For simple SVD, the tensor with $3^9$ elements is approximated by two matrices with dimensions $3^4 \times r$ and $r \times 3^5$. Therefore, the $r$ dependence of the compression ratio is $$ C(r) = \frac{(3^4+3^5)r}{3^9} = \frac{4r}{3^5}. $$ Since $r$ ranges from $0$ to $81$, the compression ratio of the non-approximated evaluation function ($r=81$) is $C=4/3$, which is greater than $1$. Similarly, the $r$ dependence of the compression ratio for HOSVD is $$ C(r) = \frac{3^3r + r^2 3^3 + 3^3r}{3^9} = \frac{r^2+2r}{3^6}. $$ The relative error $E$ is defined as \[ E = \frac{\parallel X - \tilde{Q}\tilde{S} \parallel}{\parallel X \parallel} , \] where $\parallel X \parallel$ is the Frobenius norm of matrix $X$. With this definition, the dependence of the relative error on the compression ratio directly reflects the singular-value distribution of the original matrix, since the squared approximation error equals the sum of the squares of the discarded singular values. We adopt analogous definitions for HOSVD. \subsection{Strategy of game AI} The game AI stochastically chooses its next move on the basis of the perfect or approximated evaluation function. Suppose the current state of the board is $S$ and the evaluation function of the game AI is $f(S)$. The evaluation value when the next move is placed in the $j$th cell is denoted by $\alpha_j$. The probability $p_i$ of choosing the $i$th cell for the next move is determined by a softmax-type function~\cite{NIPS,max} as \[ p_i = \frac{\mathrm{exp}(w \alpha_i)}{\sum_{j} \mathrm{exp}(w \alpha_j)}, \] where $w$ is a parameter that determines how strongly the evaluation values are weighted: the game AI chooses the cell with the largest evaluation value more frequently as $w$ increases, and when $w = 0$ the evaluation values are ignored and the game AI chooses its next move randomly. Therefore, the parameter $w$ plays the role of an inverse temperature in a Boltzmann weight. We set $w = 10$ throughout the present study. Since the evaluation value is $1$ when the first player wins and $-1$ when the second player wins, we adopt $-f(S)$ as the evaluation function for the second player. A minimal sketch of this move-selection rule is given below.
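The move-selection rule can be sketched as follows; this is a minimal illustration under our own naming conventions, with \texttt{f} standing for any (perfect or approximated) evaluation function defined on board tuples.
\begin{verbatim}
import numpy as np

def choose_move(s, f, player, w=10.0, rng=np.random.default_rng()):
    """Pick the next cell via the softmax rule; s is a tuple of 9 cells."""
    sign = 1.0 if player == 1 else -1.0        # second player uses -f(S)
    empty = [i for i in range(9) if s[i] == 0]
    alphas = np.array([sign * f(s[:i] + (player,) + s[i+1:]) for i in empty])
    p = np.exp(w * alphas)
    p /= p.sum()                               # softmax over possible moves
    return rng.choice(empty, p=p)
\end{verbatim}
Letting two such AIs play against each other then amounts to alternating \texttt{choose\_move} calls with the respective evaluation functions, which is how the games in the next section are generated.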
\section{Results}\label{sec:results} We let game AIs with evaluation functions compressed at various compression ratios play against each other. We perform $500$ games in each case, switching the first and second players. Each player assumes that the opponent adopts the same evaluation function. The compression ratios and the numbers of remaining singular values are summarized in Table~\ref{table:compression}. \subsection{Rank dependence of winning rates} We first compare the perfect evaluation function $f_\mathrm{all}$ and the evaluation function approximated by simple SVD, $f_\mathrm{SVD}$, to investigate the effect of low-rank approximation on the winning rate. Since we regard $f_\mathrm{all}$ as a matrix with dimension $3^4 \times 3^5$, the maximum number of singular values is $81$. Therefore, we examine the winning rate by varying the rank from $0$ to $81$. The winning and draw rates as functions of the compression ratio are shown in Fig.~\ref{fig:res1}. One can see that the winning rates of $f_\mathrm{all}$ and $f_\mathrm{SVD}$ are almost constant down to a compression ratio of $0.3$. This result means that we can reduce the amount of data in the evaluation function by 70\% without performance degradation. The relative error is also shown in Fig.~\ref{fig:res1}. As the compression ratio decreases, the relative error increases, as expected. However, the relative error is not simply proportional to the winning rate of the game AI with the approximated evaluation function: although the winning rate of $f_{\mathrm{all}}$ increases sharply when the compression ratio falls below $0.3$, the relative error changes only gradually. Since the relative error is determined by the discarded singular values, this result shows that the performance of the evaluation function of a board game does not depend solely on the singular-value distribution. \begin{figure}[htbp] \centering \includegraphics[width=15cm]{fig4.eps} \caption{(Color online) Winning rates of game AIs with the evaluation functions $f_{\mathrm{all}}$ (red) and $f_{\mathrm{SVD}}$ (blue). The winning rates are almost constant down to a compression ratio of $0.3$. The relative error (black) is also shown; the winning rate is not perfectly proportional to the relative error.} \label{fig:res1} \end{figure} \subsection{Dependence on the decomposition method} \begin{table}[htbp] \centering \caption{Compression ratio and number of remaining singular values} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline Compression ratio & 0.0 & 0.049 & 0.13 & 0.20 & 0.31 & 0.43 & 0.80 & 1.0 \\ \hline Number of singular values of $f_{\mathrm{SVD}}$ & 0 & 3 & 8 & 12 & 19 & 26 & 49 & 61 \\ \hline Number of singular values of $f_{\mathrm{HOSVD}}$ & 0 & 7 & 12 & 14 & 17 & 19 & 24 & 26 \\ \hline \end{tabular} \label{table:compression} \end{table} Next, we investigate whether the decomposition method changes the game AI's performance. We let two game AIs play against each other, one with the evaluation function approximated by simple SVD and the other with the evaluation function approximated by HOSVD. The compression ratio of the evaluation function is controlled by the rank $r$ of the approximation. We choose the values of $r$ so that the compression ratios of $f_{\mathrm{SVD}}$ and $f_{\mathrm{HOSVD}}$ are equal. The ranks and compression ratios are summarized in Table~\ref{table:compression}. The winning and draw rates of $f_{\mathrm{SVD}}$ and $f_{\mathrm{HOSVD}}$ are shown in Fig.~\ref{fig:res2}. When the compression ratio is close to $1$, most games end in draws, indicating that there is little difference between the two strategies. On the other hand, the winning rates of both strategies approach $0.5$ when the compression ratio is close to $0$, which means that both game AIs choose their next moves randomly. When $0 < C < 0.3$, the game AI with the evaluation function $f_{\mathrm{HOSVD}}$ exhibits a significantly higher winning rate.
This result indicates that, at equal compression ratio, HOSVD approximates the evaluation function more accurately than simple SVD, and this difference is reflected in the winning rate. \begin{figure}[htbp] \centering \includegraphics[width=15cm]{fig5.eps} \caption{Results of the games between $f_{\mathrm{SVD}}$ and $f_{\mathrm{HOSVD}}$. The red and blue graphs show the winning rates of the respective evaluation functions, and the green graph shows the draw rate. When the compression ratio is between 0 and 0.3, HOSVD wins more games than SVD.} \label{fig:res2} \end{figure} \section{Summary and Discussion}\label{sec:summary} We performed a low-rank approximation of the evaluation function by regarding the game-board information as a tensor. As a first step toward extracting features from a game board non-empirically, we studied Tic-Tac-Toe, for which we can construct the perfect evaluation function. We performed low-rank approximation by regarding the perfect evaluation function as a ninth-order tensor and investigated the performance of game AIs with approximated evaluation functions. We found that we could reduce the amount of information in the evaluation function by 70\% without significantly degrading the winning rate. As the rank of the approximated evaluation function decreases, the winning rate of the AI with the perfect evaluation function increases. However, the winning rate is not perfectly proportional to the approximation error. This result means that the performance of the approximated evaluation function does not depend only on the singular-value distribution. We also investigated the performance of two game AIs with evaluation functions approximated by two different methods: simple SVD and HOSVD. Although there was little difference in winning rate when the compression ratio was close to $0$ or $1$, HOSVD significantly outperformed simple SVD for intermediate values. The evaluation function of Tic-Tac-Toe is defined on $3 \times 3$ cells, and the low-rank approximation by simple SVD corresponds to dividing the board into groups of four and five cells, whereas HOSVD divides it into three rows of three cells. Since the purpose of the game is to place three marks in a horizontal, vertical, or diagonal row, the decomposition by HOSVD preserves the game's structure more closely than simple SVD. Therefore, it is reasonable that HOSVD outperforms simple SVD at the same compression ratio. We believe that the method proposed in this paper can be applied to complex games for which it is difficult to obtain a complete evaluation function; the specific implementation is a topic for future study. \begin{acknowledgment} This work was supported by JSPS KAKENHI Grant Number 21K11923. \end{acknowledgment} \bibliographystyle{jpsj}
\section{Introduction} Consider a robot tasked with loading a dishwasher. Such a robot has to account for task constraints (e.g. only an open dishwasher rack can be loaded) and dynamic environments (e.g. more dishes may arrive once the robot starts loading the dishwasher). Dishwasher loading is also a canonical example of personal preference: everyone has a different approach, to which the robot should adapt. Classical task planning deals with task constraints through symbolic task descriptions, but such descriptions are difficult to design and modify for new preferences in complex tasks. Building easily adaptable long-horizon task plans, under constraints and uncertainty, is an open problem in robotics. Machine learning (ML) enables learning complex tasks without extensive expert intervention: robotic navigation \cite{habitatsim2real20ral, truong2020learning, kumar2021rma}, in-hand manipulation \cite{kalashnikov2018qt, nagabandi2020deep, wirnshofer2020controlling, qin2020keto, simeonov2020long}, and planning \cite{yang2020plan2vec, pertsch2020accelerating, singh2020parrot, driess2020deep, andreas2017modular}. Within task planning, ML is used to replace user-defined symbolic descriptions \cite{silver2022inventing}, deal with uncertainty \cite{gordon2019should}, and adapt to preferences \cite{kaushik2020fast}. Recent work \cite{kaplan2017beating} has shown that Transformer networks \cite{vaswani2017attention} can learn temporally consistent representations and exhibit generalization to new scenarios \cite{brown2020language, sanh2021multitask, liu2021p}. Our central question is: \textit{Can a Transformer network learn task structure, adapt to user preferences, and achieve complex long-horizon tasks using no symbolic task representations?} We hypothesize that task structure and preferences are implicitly encoded in demonstrations. When loading a dishwasher, a user pulls out a rack before loading it, inherently encoding a structural constraint. They may place mugs on the top rack and plates on the bottom, encoding their preference. Learning user preferences from long-horizon demonstrations requires policies with temporal context. For example, a user might prefer to load the top rack before the bottom rack. The policy needs to consider the \textit{sequence of actions} demonstrated, rather than individual actions. Transformers are well suited to this problem, as they have been shown to learn long-range relationships \cite{chaplot2021differentiable}, although not in temporal robotic tasks. We propose the Transformer Task Planner{} (TTP{}), an adaptation of the classic transformer architecture that includes temporal-, pose-, and category-specific embeddings to learn object-oriented relationships over space and time. TTP{} generalizes beyond what was seen in demonstrations, to variable numbers of objects and dynamic environments. By pre-training TTP{} on multiple preferences, we build temporal representations that generalize to new preferences.
\begin{figure}[t] \centering \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-2.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-3.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-4.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-5.jpg} \\ \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-6.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-7.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-8.jpg} \includegraphics[width=0.24\textwidth, trim={ 22cm 0cm 15cm 18cm},clip]{figures/hardware_demo/latest_robot/robot-9.jpg} \caption{(Left-to-right, top-to-bottom) A Franka Emika Panda arm organizing 4 dishes into 2 drawers, following the preference shown in a demonstration. The robot opens the top drawer, places objects in it, closes it, and does the same for the bottom drawer. The high-level policy that decides when to open drawers, which objects to pick, etc.\ is learned in simulation and transferred zero-shot to the real world.} \label{fig:demo} \end{figure} The main contributions of our work are: (1) we introduce transformers as a promising architecture for learning task plans from demonstrations, using object-centric embeddings; (2) we demonstrate that preference-conditioned pre-training generalizes at test time to new, unseen preferences from a single user demonstration. Our experiments use a complex, high-dimensional dishwasher-loading environment (Fig. \ref{fig:dishwasher_loading}) with several challenges: complex task structure, dynamically appearing objects, and human-specific preferences. TTP{} successfully learns this task from seven preferences with 80 demonstrations each, generalizes to unseen scenes and preferences, and outperforms competitive baselines \cite{lin2022efficient, kapelyukh2022my}. Finally, we transfer TTP{} to a rearrangement problem in the real world, where a Franka arm places dishes in two drawers, using a single human demonstration (Fig. \ref{fig:demo}). \section{Prior Work} \label{sec:prior_work} \paragraph{Object-centric representations for sequential manipulation} Several works build object-centric pick-place representations using off-the-shelf perception methods \cite{wang2019normalized,deng2020self,zeng2017multi,zhu2014single,yoon2003real,jain2021learning}. Once estimated, object states are used by a task and motion planner for sequential manipulation \cite{garrett2020integrated, paxton2019representing}, but these methods assume the objects in the scene are known. \cite{florence2019self} combine motor learning with object-centric representations, but transferring the resulting policies is challenging. Transporter Nets \cite{zeng2020transporter} use a visual encoder-decoder for table-top manipulation tasks; SORNet \cite{yuan2021sornet} extracts object-centric representations from RGB images and demonstrates generalization in sequential manipulation tasks. Inspired by these, we learn policies for dishwasher loading, choosing from visible objects to make pick-place decisions.
\vspace{-0.5em} \paragraph{Transformers for sequence modeling} Transformers in NLP \cite{vaswani2017attention} and vision \cite{he2022masked} have focused on self-supervised pretraining due to the abundance of unlabeled data. Recent works have repurposed transformers for other sequence-modeling tasks \cite{Liu2021StructFormerLS, sun2022plate, jain2020predicting, chen2021decision, janner2021sequence, putterman2022pretraining}. Prompt Decision Transformer \cite{xu2022prompt} uses a single model to encode the prompt and the subsequent sequence of states, whereas we represent the state as a variable number of instances. PlaTe \cite{sun2022plate} proposes planning from videos, while \cite{chen2021decision, janner2021sequence, putterman2022pretraining} model sequential decision-making tasks. We consider long-horizon tasks with partially observable state features and user-specific preferences. \vspace{-0.5em} \paragraph{Preferences and prompt training} In the literature, there are several ways of encoding preferences. \cite{kapelyukh2022my} propose a VAE to learn user preferences for spatial arrangement based only on the final state, while our approach models temporal preferences from demonstrations. Preference-based RL learns rewards from human preferences \cite{wang2022skill, lee2021b, liang2022reward, knox2022models}, but does not generalize to unseen preferences. On complex long-horizon tasks, modeling human preferences enables faster learning than RL, even with carefully designed rewards \cite{christiano2017deep}. We show generalization to unseen preferences by using prompts. Large language and vision models have shown generalization through prompting \cite{chen2022adaprompt, ramesh2021zero}. Prompting can also be used to guide a model to quickly switch between multiple task objectives \cite{Raffel2020ExploringTL, Lewis2020BARTDS, Song2019MASSMS}. Specifically, language models learn representations that are easily transferred to new tasks in a few-shot setting \cite{schick2020s, liu2021pre, brown2020language, chen2022adaprompt}. Our approach similarly utilizes prompts for preference generalization in sequential decision-making robotic tasks. \section{Transformer Task Planner (TTP)} \label{sec:method} We introduce TTP, a Transformer-based policy architecture for learning sequential manipulation tasks. We assume low-level `generalized' pick-place primitive actions that apply both to objects like plates and bowls and to the dishwasher door, racks, etc. TTP{} learns a high-level policy for pick-place in accordance with the task structure and preference shown in demonstrations. The following sections use dishwasher loading as the running example, but our setup applies to most long-horizon manipulation tasks with `generalized' pick-place. \vspace{-0.5em} \paragraph{State-Action Representations} We consider a high-level policy that interacts with the environment at discrete time steps. At every timestep $t$, we receive an observation $\boldsymbol{o}_t$ from the environment, which is passed through a perception pipeline to produce a set of rigid-body instances $\{x_i\}^n_{i=1}$ corresponding to the $n$ objects currently visible. We express an \textit{instance} as $x_i = \{p_i, c_i, t\}$, where $p_i$ is the pose of the object, $c_i$ is its category, and $t$ is the timestep at which $\boldsymbol{o}_t$ was recorded (Fig. \ref{fig:scene_to_instance}). For example, for a bowl at the start of an episode, $p$ is its current location w.r.t.\ a global frame, $c$ is its category (bowl), and $t=0$; a minimal sketch of this representation follows.
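For concreteness, the instance representation can be sketched in Python as follows; the field layout and example values are our own illustrative assumptions, not the exact implementation.
\begin{verbatim}
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Instance:
    pose: Tuple[float, ...]       # 3D position + 4D quaternion (7 values)
    category: Tuple[float, ...]   # 3D bounding-box extents (see Instance Encoder)
    timestep: int                 # t at which the observation was recorded
    is_place: bool = False        # False: pickable object, True: placement location

# A bowl observed at the start of an episode
bowl = Instance(pose=(0.4, 0.1, 0.9, 0.0, 0.0, 0.0, 1.0),
                category=(0.15, 0.15, 0.07), timestep=0)
\end{verbatim}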
The pick \textit{state} $S^{pick}_t$ is the set of instances visible in $\boldsymbol{o}_t$: $S^{pick}_t = \{x_0, x_1,\cdots, x_n\}$. $S^{pick}_t$ is passed to a learned policy $\pi$ to predict a pick action $\boldsymbol{a}_t$. We describe the \textit{action} $\boldsymbol{a}_t$ in terms of what to pick and where to place it. Specifically, the pick action chooses one instance from the $x_i$ observed in $\boldsymbol{o}_t$: $\pi(S^{pick}) \rightarrow x_{target}, \text{ where } x_{target} \in \{x_i\}^n_{i=1}.$ Once a pick object is chosen, a similar procedure determines where to place it. We pre-compile a list of discrete placement poses corresponding to viable place locations for each object category. These poses densely cover the dishwasher racks and are created by randomly placing objects in the dishwasher and measuring the final poses they land in. All possible placement locations for the picked object's category, whether free or occupied, are used to create a set of place instances $\{g_j\}_{j=1}^l$. Similar to $x_i$, $g_j = \{p_j, c_j, t\}$ consists of a pose, a category, and a timestep; a boolean value $\mathit{r}$ in the attribute set distinguishes the two instance types. The place state is $S^{place} = S^{pick} \cup \{g_j\}_{j=1}^l.$ The \textit{same} policy $\pi$ then chooses where to place the object (Fig.~\ref{fig:scene2inst-n-inst2pred}). Note that the input for predicting place includes both object and place instances, since the objects determine whether a place instance is free. $x_{target}$ and $g_{target}$ together make the action $\boldsymbol{a}$, which is sent to a low-level pick-place policy. The policy $\pi$ is modeled using a Transformer \cite{vaswani2017attention}. \begin{figure} \centering \begin{subfigure}{0.545\textwidth} \centering \includegraphics[width=1\textwidth]{figures/scene_to_instance_compact.jpg} \caption{Scene to Instance embeddings} \label{fig:scene_to_instance} \end{subfigure} \begin{subfigure}{0.445\textwidth} \centering \includegraphics[width=0.95\textwidth]{figures/instances_to_pred.png} \caption{Instances to prediction} \label{fig:arch} \end{subfigure} \caption{(Left) Overview of how a scene is converted to a set of instances. Each instance comprises attributes: pose, category, timestep, and whether it is an object or a place instance. (Right) Instance attributes (\textcolor{Gray}{gray}) are passed to the encoder, which returns instance embeddings for pickable objects (\textcolor{RedOrange}{red}), placeable locations (\textcolor{OliveGreen}{green}), and $<$ACT$>$ embeddings (\textcolor{CornflowerBlue}{blue}). The transformer outputs a chosen pick (\textcolor{RedOrange}{red}) and place (\textcolor{OliveGreen}{green}) instance embedding.} \label{fig:scene2inst-n-inst2pred} \vspace{-0.6cm} \end{figure} \vspace{-0.5em} \paragraph{Instance Encoder} Each object and place instance $x_i$, $g_j$ is projected into a higher-dimensional vector space (Fig. \ref{fig:scene_to_instance}). Such embeddings improve the performance of learned policies \cite{vaswani2017attention, mildenhall2020nerf}. For the pose embedding $\Phi_p$, we use a positional-encoding scheme similar to NeRF \cite{mildenhall2020nerf} to encode the 3D position and the 4D quaternion rotation of an instance. For the category, we use the dimensions of the object's 3D bounding box to build a continuous space of object types and process them through an MLP $\Phi_c$. For each discrete 1D timestep, we model $\Phi_t$ as a learnable embedding in a lookup table, similar to the positional encodings in BERT \cite{devlin2018bert}. To indicate whether an instance is an object or a placement location, we add a 1D boolean value vectorized using a learnable embedding function $\Phi_r$. The concatenated $d$-dimensional embedding for an instance at timestep $t$ is represented as $f_e(x_i) = \Phi_t || \Phi_c || \Phi_p || \Phi_r = e_i$. The encoded state at time $t$ can be represented in terms of instances as $S^{enc}_t = [e_0, e_1, \cdots, e_N]$; we drop $()^{enc}$ for brevity. A minimal sketch of this encoder is given below.
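A minimal PyTorch sketch of such an instance encoder follows. The layer sizes, the number of frequencies, and the exact Fourier-feature formulation are our own assumptions; the paper's implementation may differ.
\begin{verbatim}
import math
import torch
import torch.nn as nn

def fourier_features(x, n_freqs=8):
    """NeRF-style encoding: [sin(2^k pi x), cos(2^k pi x)] per input dim."""
    freqs = 2.0 ** torch.arange(n_freqs).float() * math.pi
    ang = x.unsqueeze(-1) * freqs                 # (..., D, n_freqs)
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

class InstanceEncoder(nn.Module):
    def __init__(self, d_part=64, max_t=50, n_freqs=8):
        super().__init__()
        self.pose_mlp = nn.Linear(7 * 2 * n_freqs, d_part)    # Phi_p: pos + quat
        self.cat_mlp = nn.Sequential(                          # Phi_c: bbox extents
            nn.Linear(3 * 2 * n_freqs, d_part), nn.ReLU(), nn.Linear(d_part, d_part))
        self.t_emb = nn.Embedding(max_t, d_part)               # Phi_t: timestep
        self.r_emb = nn.Embedding(2, d_part)                   # Phi_r: object/place

    def forward(self, pose, bbox, t, is_place):
        # Concatenated d-dimensional instance embedding e_i (d = 4 * d_part)
        return torch.cat([self.t_emb(t),
                          self.cat_mlp(fourier_features(bbox)),
                          self.pose_mlp(fourier_features(pose)),
                          self.r_emb(is_place.long())], dim=-1)
\end{verbatim}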
\vspace{-0.5em} \paragraph{Demonstrations} Demonstrations are state-action sequences $\mathcal{C} = \{(S_0, a_0), (S_1, a_1), \cdots, (S_{T-1}, a_{T-1}), (S_{T}) \}$. Here $S_i$ is the set of object and place instances, and $a_i$ is the pick-place action chosen by the expert at time $i$. At every time step, we record the state of the objects, the pick instance chosen by the expert, the place instances for the corresponding category, and the place instance chosen by the expert. Expert actions are assumed to belong to the set of visible instances in $S_i$. However, different experts can exhibit different preferences over $a_i$. For example, one expert might choose to load bowls first in the top rack, while another might load bowls last in the bottom rack. In the training dataset, we assume labels indicating which preference each demonstration belongs to, based on the expert used to collect it. Given $K$ demonstrations per preference $m \in \mathcal{M}$, we have a dataset for preference $m$: $\mathcal{D}_m = \{\mathcal{C}_1, \cdots, \mathcal{C}_K \}$. The complete training dataset consists of demonstrations from all preferences: $\mathcal{D} = \bigcup_{m=1}^M \mathcal{D}_m$. During training, we learn a policy that can reproduce all demonstrations in our dataset. This is challenging, since different experts take different actions for the same input, and the policy needs to disambiguate the preference. At test time, we generalize the policy to both unseen scenes and unseen preferences. \subsection{Prompt-Situation Transformer} We use Transformers~\cite{vaswani2017attention}, deep neural networks that operate on sequences of data, to learn the high-level pick-place policy $\pi$. The input to the encoder is one $d$-dimensional token\footnote{Terminology borrowed from natural language processing, where tokens are words; here, they are instances.} per instance, $e_i$, $i \in \{1, \dots, N\}$, in the state $S$. In addition to instances, we introduce special $<$\textsc{ACT}$>$ tokens\footnote{Similar to the $<CLS>$ tokens used for sentence classification.}, with all attributes set to zero vectors, to demarcate the end of one state and the start of the next. These tokens help maintain the temporal structure: all instances between two $<$\textsc{ACT}$>$ tokens in a sequence represent one observed state. A trajectory $\tau$, without actions, is $\tau = \big[S_{t=0}, <$\text{ACT}$>, \cdots, S_{t=T-1}, <$\text{ACT}$>, S_{t=T}\big].$ To learn a common policy for multiple preferences, we propose a prompt-situation architecture (Fig.~\ref{fig:promptsituation}). The prompt encoder receives one demonstration trajectory as input and outputs a learned representation of the preference. These output prompt tokens are input to a situation decoder, which also receives the current state as input.
The decoder $\pi$ is trained to predict the action chosen by the expert in the situation, given a prompt demonstration. \label{sec:promt-sit} \begin{wrapfigure}{r}{0.44\textwidth} \centering \includegraphics[width=0.44\textwidth]{figures/PromptSituationOverview.pdf} \caption{Prompt-Situation Architecture. The left half is the prompt encoder, which takes a prompt demonstration as input and outputs a learned preference embedding. The right half is the situation decoder, which, conditioned on the preference embedding from the prompt encoder, acts on the current state.} \vspace{-0.3cm} \label{fig:promptsituation} \vspace{-0.2cm} \end{wrapfigure} The left half of Fig.~\ref{fig:promptsituation} is the prompt encoder $\psi$, and the right half is the situation decoder, i.e., the policy $\pi$ acting on a given state. The prompt encoder $\psi: f_{slot} \circ f_{te} \circ f_{e}$ consists of an instance encoder $f_e$, a transformer encoder $f_{te}$, and a slot-attention layer $f_{slot}$ \cite{locatello2020object}. $\psi$ takes the whole demonstration trajectory $\tau_\text{prompt}$ as input and returns a fixed-length, reduced preference embedding $\gamma = \psi(\tau_\text{prompt})$ of sequence length $H$. Slot attention acts as an information bottleneck that learns semantically meaningful representations of the prompt. The situation decoder is a policy $\pi: f_{td} \circ f_e$ that receives as input the $N$ instance tokens from the current scene $S$, consisting of objects as well as placement instances, separated by $<$\textsc{ACT}$>$ tokens (Fig.~\ref{fig:arch}). The policy architecture is a transformer decoder \cite{vaswani2017attention} with self-attention layers over the $N$ input tokens, followed by a cross-attention layer with the preference embedding $\gamma$ ($H$ tokens) from the prompt encoder. We take the output of the situation decoder at the $<$\textsc{ACT}$>$ token and compute its dot-product similarity with the $N$ input tokens $e_i$. The token with the maximum dot product is chosen as the predicted instance: $x_{pred} = x_{i^*}$, where $i^* = \arg\max_{i \in \{1, \dots, N\}} \big(\hat{x}_{<\textsc{ACT}>} \cdot e_i\big)$. The training target is extracted from the demonstration dataset $\mathcal{D}$, and the policy is trained with a cross-entropy loss to maximize the similarity of the output latent with the expert's chosen instance embedding; a minimal sketch of this computation is given below.
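The following PyTorch sketch outlines the prompt-situation computation and the cross-entropy training signal. For brevity, slot attention is abstracted here as a single cross-attention over a learned set of query slots rather than the full iterative slot-attention update, and all dimensions and names are our own assumptions rather than the paper's exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptSituationPolicy(nn.Module):
    def __init__(self, d=256, n_heads=2, n_layers=2, n_slots=50):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d, n_heads, 512, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d, n_heads, 512, batch_first=True)
        self.f_te = nn.TransformerEncoder(enc_layer, n_layers)   # prompt encoder
        self.f_td = nn.TransformerDecoder(dec_layer, n_layers)   # situation decoder
        self.slots = nn.Parameter(torch.randn(1, n_slots, d))    # slot queries
        self.slot_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, prompt_tokens, situation_tokens, act_index):
        # prompt_tokens:    (B, L, d) embedded prompt trajectory
        # situation_tokens: (B, N, d) embedded current state incl. <ACT> token
        h = self.f_te(prompt_tokens)
        gamma, _ = self.slot_attn(self.slots.expand(h.size(0), -1, -1), h, h)
        out = self.f_td(tgt=situation_tokens, memory=gamma)      # cross-attend gamma
        act = out[torch.arange(out.size(0)), act_index]          # (B, d) at <ACT>
        # Dot-product similarity of the <ACT> output with every input token
        return torch.einsum("bd,bnd->bn", act, situation_tokens)

def loss_fn(logits, target_index):
    # Cross-entropy against the expert's chosen instance index
    return F.cross_entropy(logits, target_index)
\end{verbatim}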
\subsection{Multi-Preference Task Learning} \label{subsec:multi-pref} We adopt the prompt-situation architecture for multi-preference learning. This design (1) enables multi-preference training by disambiguating preferences, (2) learns task-level rules shared between preferences (e.g., the dishwasher should be open before placing objects), and (3) generalizes to unseen preferences at test time without fine-tuning. Given a `prompt' demonstration of preference $m$, our policy semantically imitates it in a different `situation' (i.e., a different initialization of the scene). To this end, we learn a representation of the prompt, $\gamma^m$, which conditions $\pi$ to imitate the expert. \begin{align} \psi(\tau_\text{prompt}^m)&\rightarrow \gamma^m \\ \pi(S_\text{situation} | \gamma^m) & \rightarrow a^{m}_t = \{x_\text{pred}^m, g_\text{pred}^m\} \end{align} We train the networks $\psi$ and $\pi$ together to minimize the total prediction loss over all preferences using the multi-preference training dataset $\mathcal{D}$. The overall objective is \begin{equation} \min_{\psi, \pi} \; \mathbb{E}_{m \sim \mathcal{M},\ \tau \sim \mathcal{D}_m,\ (S,a) \sim \mathcal{D}_m} \big[ \mathcal{L}_{CE} \big(a , \pi(S, \psi(\tau))\big) \big]. \end{equation} For every preference $m$ in the dataset, we sample a demonstration from $\mathcal{D}_m$ and use it as a prompt for all state-action pairs $(S,a)$ in $\mathcal{D}_m$. This includes the state-action pairs from $\tau_\text{prompt}$ itself and creates a combinatorially large training dataset. At test time, we record one prompt demonstration from a seen or unseen preference and use it to condition $\psi$ and $\pi$: $a = \pi(S, \psi(\tau_\text{prompt}))$. All policy weights are kept fixed during testing, and generalization to new preferences is zero-shot using the learned preference representation $\gamma$. Unlike \cite{kapelyukh2022my}, $\gamma$ captures not just the final state but a temporal representation of the whole demonstration. Building a temporal representation is crucial for encoding demonstration preferences such as the order of loading racks and objects. Even though the final state is the same for two preferences that differ only in which rack is loaded first, our approach is able to distinguish between them using the temporal information in $\tau_\text{prompt}$. To the best of our knowledge, our approach is the first to temporally encode preferences inferred from a demonstration in learned task planners. \section{Experiments} \label{sec:experiments} \begin{figure}[t] \centering \begin{subfigure}{0.19\textwidth} \includegraphics[width=\textwidth]{figures/dishwasher_loading/dl_1.png} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\textwidth]{figures/dishwasher_loading/dl_2.png} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\textwidth]{figures/dishwasher_loading/dl_3.png} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\textwidth]{figures/dishwasher_loading/dl_4.png} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\textwidth]{figures/dishwasher_loading/dl_5.png} \end{subfigure} \caption{Dishwasher-loading demonstration in the AI Habitat Kitchen Arrange Simulator. Objects dynamically appear on the counter-top (ii-iv) and need to be placed in the dishwasher. If the dishwasher racks are full, they land in the sink (v).} \label{fig:dishwasher_loading} \vspace{-0.5cm} \end{figure} We present the ``Replica Synthetic Apartment 0 Kitchen"\footnote{``Replica Synthetic Apartment 0 Kitchen" was created with the consent of and compensation to artists, and will be shared under a Creative Commons license for non-commercial use with attribution (CC-BY-NC).} (see Fig.~\ref{fig:dishwasher_loading}, the appendix, and the video), an artist-authored, interactive recreation of the kitchen of the ``Apartment 0" space from the Replica dataset \cite{replica19arxiv}. We use selected objects from the ReplicaCAD \cite{szot2021habitat} dataset, including seven types of dishes, and generate dishwasher-loading demonstrations using an expert-designed data-generation script (see Appendix \ref{appsubsec:dataset}). Given 7 categories of dishes and two choices of which rack to load first, the hypothesis space of possible preferences is $2\times 7!$. Our dataset consists of 12 preferences (7 train, 5 held-out test) with 100 sessions per preference. In a session, $n \in \{3, \dots, 10\}$ instances are loaded in each rack. The training data consists of sessions with 6 or 7 objects allowed per rack.
The held-out test set contains 5 unseen preferences and settings with $\{3, 4, 5, 8, 9, 10\}$ objects per rack. Additionally, to simulate a dynamic environment, we randomly initialize new objects mid-session on the kitchen counter. This simulates situations where the policy does not have full information about every object to be loaded at the start of the session and has to react to new information. We train a 2-head, 2-layer Transformer encoder-decoder with 256 input and 512 hidden dimensions, and use 50 slots and 3 iterations for Slot Attention (more details in Appendix \ref{appsec:train}). We test both in- and out-of-distribution performance in simulation. For in-distribution evaluation, 10 sessions are held out for testing for each training preference. For out-of-distribution evaluation, we create sessions with unseen preferences and unseen numbers of objects. We evaluate trained policies on `rollouts' in simulation, a more demanding setting than per-step prediction accuracy. Rollouts require repeated decisions in the environment without any resets; a mistake made early in a rollout session can be catastrophic and result in poor performance even if the prediction accuracy is high. For example, if a policy mistakenly fails to open a dishwasher rack, the rollout performance will be poor despite good prediction accuracy. To measure success, we roll out the policy from an initial state and compare its performance with an expert demonstration from the same initial state. Note that the policy does not have access to the expert demonstration; the demonstration is only used for evaluation. Specifically, we measure (1) how well the final state is packed and (2) how much the policy deviated from the expert demonstration. \textbf{Packing efficiency}: We compare the number of objects placed in the dishwasher by the policy to that in the expert's demonstration. Let $a_i$ be the number of objects in the top rack and $b_i$ the number in the bottom rack placed by the expert in the $i$th demonstration. If the policy adheres to the preference and places $\hat{a}_i$ and $\hat{b}_i$ objects on the top and bottom racks respectively, then the \textit{packing efficiency (PE)} is $ PE = \frac{1}{2N}\sum_{i} \Big( \frac{\hat{a}_i}{\max(\hat{a}_i, a_i)} + \frac{\hat{b}_i}{\max(\hat{b}_i, b_i)} \Big) $, where $N$ is the number of sessions. Packing efficiency is between 0 and 1, and higher is better. Note that if the policy follows the wrong preference, then PE is 0, even if the dishwasher is full. \textbf{Inverse edit distance}: We also calculate the \textit{inverse edit distance} between the sequences of actions taken by the expert and by the learned policy. We compute the normalized Levenshtein distance\footnote{\texttt{Levenshtein distance = textdistance(learned\_seq, expert\_seq) / len(expert\_seq)}} (LD) between the policy's and the expert's sequences of pick and place instances. The inverse edit distance is defined as $ED = 1 - LD$; higher is better. This measures the temporal deviation from the expert instead of just the final state. If the expert's preference was to load the top rack first and the learned policy loads the bottom rack first, PE would be perfect but the inverse edit distance would be low. A minimal sketch of both metrics is given below.
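Both metrics can be sketched in a few lines of Python. We assume action sequences are represented as lists of hashable pick/place identifiers; the function names, the guard against empty racks, and the normalization constant are our own.
\begin{verbatim}
def packing_efficiency(expert_counts, policy_counts):
    """expert_counts/policy_counts: lists of (top, bottom) pairs per session."""
    total = 0.0
    for (a, b), (a_hat, b_hat) in zip(expert_counts, policy_counts):
        total += a_hat / max(a_hat, a, 1)   # max(..., 1) guards empty racks
        total += b_hat / max(b_hat, b, 1)
    return total / (2 * len(expert_counts))

def inverse_edit_distance(policy_seq, expert_seq):
    """1 - Levenshtein(policy, expert) / len(expert); higher is better."""
    m, n = len(policy_seq), len(expert_seq)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if policy_seq[i - 1] == expert_seq[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return 1.0 - d[m][n] / n
\end{verbatim}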
\subsection{Evaluation on simulated dishwasher loading} \label{sec:results} \paragraph{Baselines:} We compare our approach against graph neural network (GNN) based preference learning from \cite{lin2022efficient} and \cite{kapelyukh2022my}. Neither work is directly suitable for our task, so we combine them to create a stronger baseline. We use ground-truth preference labels, represented as a categorical distribution, and add them to the input features of the GNN, similar to \cite{kapelyukh2022my}. \textbf{For unseen preferences, this is privileged information} that our approach does not have access to. We use the output of the GNN to make sequential predictions, as in \cite{lin2022efficient}. Thus, by combining the two works and adding privileged information about the ground-truth preference category, we create a GNN baseline that can act according to preference in the dishwasher-loading scenario. We train this policy using imitation learning (IL) and reinforcement learning (RL), following \cite{lin2022efficient}. GNN-IL is trained from the same set of demonstrations as TTP using behavior cloning. For GNN-RL, we use Proximal Policy Optimization \cite{schulman2017proximal} from \cite{raffin2021stable}. GNN-RL learns from scratch by directly interacting with the dishwasher environment and obtaining a dense reward; for details, see Appendix \ref{subsec:rewardRL}. \begin{figure} \centering \begin{subfigure}{0.495\textwidth} \includegraphics[width=0.9995\textwidth, trim={1cm 0cm 2cm 1cm},clip]{figures/results/main_result_4_Packing_Efficiency.png} \caption{Packing efficiency} \end{subfigure} \begin{subfigure}{0.495\textwidth} \includegraphics[width=0.9995\textwidth, trim={1cm 0cm 2cm 1cm},clip]{figures/results/main_result_4_Inverse_Edit_Distance.png} \caption{Inverse edit distance} \end{subfigure} \caption{Comparisons of TTP, GNN-IL/RL, and a random policy in simulation across two metrics. TTP performs well at dishwasher loading on seen and unseen preferences, and outperforms the GNN and random baselines. TTP's generalization to unseen \#objects is worse, but still close to GNN-IL and better than GNN-RL and random.} \label{fig:results} \vspace{-0.5cm} \end{figure} GNN-IL does not reach the same performance as TTP on in-distribution tasks (PE of $0.34$ for GNN-IL vs.\ $0.62$ for TTP). Note that unseen preferences are also in-distribution for GNN-IL, since we provide ground-truth preference labels to the GNN; hence, there is no drop in performance on unseen preferences for the GNN, unlike TTP. Despite having no privileged information, TTP outperforms GNN-IL on unseen preferences and performs comparably on unseen \#objects. Due to the significantly long time horizons per session (more than 30 steps), GNN-RL fails to learn a meaningful policy even with a large budget of 32,000 environment interactions and a dense reward (PE $0.017 \pm 0.002$ for GNN-RL). Lastly, we find that the random policy (RP) is not able to solve the task at all, due to the large state and action space. TTP solves dishwasher loading with unseen preferences well (PE $0.54$); in contrast, classical task planners like \cite{garrett2020integrated} need to be adapted for each new preference. This experiment shows that Transformers make adaptable task planners when combined with our proposed prompt-situation architecture. However, TTP's performance on unseen \#objects deteriorates ($0.62$ seen versus $0.34$ unseen), which we examine next. \textbf{Generalization to unseen \#objects}: Fig.~\ref{fig:numobj} examines PE on out-of-distribution sessions with fewer (3--5) or more (8--10) objects per rack. The training set consists of demonstrations with 6 or 7 objects per rack. The policy performs well on 5--7 objects, but performance degrades farther from the training distribution.
Poorer PE is expected for a larger number of objects, since the action space of the policy increases and the policy is more likely to pick the wrong object type for a given preference. Poorer performance on 3--4 objects is caused by the policy closing the dishwasher early, as it has never seen such a state during training. Training with richer datasets and adding more randomization in the form of object masking might improve the out-of-distribution performance of TTP. \subsection{Real-world dish-rearrangement experiments} \label{sec:real-world} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/real_hw_pipeline.jpg} \caption{Pipeline for real hardware experiments} \label{fig:real_hw_pipeline} \vspace{-1em} \end{figure} We zero-shot transfer our policy trained in simulation to robotic hardware, assuming low-level controllers. We use a Franka Panda equipped with a Robotiq 2F-85 gripper, controlled using the Polymetis control framework \cite{Polymetis2021}. For perception, we find the extrinsics of three Intel RealSense D435 RGBD cameras \cite{keselman2017intel} using ARTags \cite{fiala2005artag}. The camera output, extrinsics, and intrinsics are combined using Open3D \cite{zhou2018open3d} and fed into a segmentation pipeline \cite{xiang2020learning} to generate object categories. For low-level picking, we apply a grasp-candidate generator \cite{fang2020graspnet} to the object point cloud and use it to grasp the target object. Placing is approximated as a `drop' action at a pre-defined location. Our hardware setup mirrors our simulation, with different categories of dishware (bowls, cups, plates) on a table and a ``dishwasher'' (a cabinet with two drawers). The objective is to select an object to pick and place it into a drawer (rack) (see Fig. \ref{fig:demo}). We use a policy trained in simulation and apply it to a scene with four objects (2 bowls, 1 cup, 1 plate) through the hardware pipeline described above. We start by collecting a prompt human demonstration (more details in Appendix \ref{appsec:hardware_setup}). The learned policy, conditioned on the prompt demonstration, is applied to two variations of the same scene, and the predicted actions are executed. The policy succeeded with a 100\% success rate in one trial and 75\% in the other, shown in Fig. \ref{fig:demo} (bottom). The failure case was caused by a perception error: a bowl was classified as a plate. This demonstrates that TTP can be trained in simulation and applied directly to hardware. The policy is robust to minor hardware errors; for example, if a bowl grasp fails, the robot simply repeats the grasping action (see video and Appendix \ref{appsec:hardware_setup}). However, it relies on accurate perception of the state. In the future, we would like to evaluate our approach in more diverse real-world settings and measure its sensitivity to the different hardware components, informing future choices for learning robust policies.
\subsection{Ablations} \label{sec:ablations} \begin{figure}[th] \begin{subfigure}[b]{0.34\textwidth} \includegraphics[width=0.995\textwidth, trim={0.1cm 0cm 1.2cm 1.2cm},clip]{figures/results/num_obj_PE.png} \caption{Performance decays for OOD numbers of objects, i.e., 3--5 and 8--10} \label{fig:numobj} \end{subfigure}\hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.995\textwidth, trim={0.5cm 0cm 1.2cm 1.2cm},clip]{figures/results/num_demo_PE.png} \caption{Performance improves with the number of unique sessions in training} \label{fig:numdemo} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.995\textwidth, trim={0.5cm 0cm 1.2cm 1.2cm},clip]{figures/results/num_pref_PE.png} \caption{Performance improves with the number of unique preferences in training} \label{fig:numpref} \end{subfigure} \caption{(a) Out-of-distribution generalization to \#objects. (b-c) Ablation experiments.} \vspace{-0.3cm} \end{figure} We study the sensitivity of our approach to training hyperparameters. First, we vary the number of training sessions per preference and study generalization to unseen scenarios of the same preference. Figure \ref{fig:numdemo} shows that the performance of TTP improves as the number of demonstrations increases, indicating that our model is not overfitting to the training set and might benefit from additional training samples. Next, we vary the number of training preferences and evaluate generalization to unseen preferences. Figure \ref{fig:numpref} shows that the benefit of adding preferences beyond 4 is minor; similar performance is observed when training on 4--7 preferences. This is an interesting result, since one would assume that more preferences improve generalization to unseen preferences; for the kinds of preferences considered in our problem, however, 4--7 preference types suffice for generalization. \input{ablation_attributes_table} Finally, we analyze which instance attributes are most important for learning in an object-centric sequential decision-making task. We mask different instance attributes to remove sources of information from the instance tokens. As seen in Table \ref{tab:abl_attributes}, all components of our instance tokens play a significant role ($0.606$ with all attributes, versus the next highest of $0.517$). The most important attribute is the pose of the objects (without pose, the best PE is $0.142$), followed by the category. The timestep is the least important, but the best PE comes from combining all three (more details in Appendix \ref{appsubsec:design_instance}). \section{Conclusions and Limitations} \label{sec:limitations} We present Transformer Task Planner (TTP): a high-level, sequential, preference-conditioned policy learned from a single demonstration using a prompt-situation architecture. We introduced a simulated dishwasher-loading dataset with demonstrations that adhere to varying preferences. TTP solves a complex, long-horizon dishwasher-loading task in simulation and transfers to the real world. We have demonstrated TTP's strong performance in the dishwasher setting. This environment is complex by virtue of its strict sequential nature, yet simplified in that we assume doors and drawers can be easily opened and perception is perfect. In real settings, the policy needs to learn how to recover from its low-level execution mistakes.
More complex preferences may depend on differences in visual or textural patterns on objects, for which the instance encoder would require modifications to encode such attributes. An important question to address is how more complex motion plans interact with or hinder the learning objective, especially given the different affordances of humans and robots. Finally, prompts are only presented via demonstrations, while language might be a more natural interface for users. \section{Additional Ablation Experiments} In Section 3.3, we presented ablation experiments over the number of demonstrations per preference used for training and the number of unique preferences used. In this section, we present additional ablation experiments on the design of the instance encodings in TTP. Additionally, we increase the temporal context of TTP and study its effect on performance. \subsection{Design of Instance Encoding} \label{appsubsec:design_instance} \begin{figure}[t] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=1\textwidth,trim={0.5em 1.5em 0.5em 3.5em},clip]{figures/ablation_temporal2.png} \subcaption{Temporal encoding} \label{fig:ablation_temporal} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=1\textwidth,trim={0cm 0.5cm 0cm 1.5cm},clip]{figures/ablation_category.png} \subcaption{Category encoding} \label{fig:ablation_category} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=1\textwidth,trim={0cm 0.5cm 0cm 1.5cm},clip]{figures/ablation_pose.png} \subcaption{Pose encoding} \label{fig:ablation_pose} \end{minipage} \caption{[Left-to-right] Comparing different design choices of attribute encoders in terms of category-token accuracy on held-out test prompt-situation session pairs.} \label{fig:ablations} \end{figure} \paragraph{How much does the \textbf{temporal} encoding design matter?} Fig. \ref{fig:ablation_temporal} shows that learning an embedding per timestep, or expanding the timestep as a Fourier-transformed vector of sufficient size, achieves high success. On the other hand, having no timestep input yields slightly lower performance. The timestep helps encode the order of the prompt states; the notion of timestep is also incorporated by autoregressive masking in both the encoder and the decoder. \vspace{-0.5em} \paragraph{How much does the \textbf{category} encoding design matter?} In our work, we represent the category as the extents of an object's bounding box. An alternative would be to denote the category by a discrete set of categorical labels. Intuitively, bounding-box extents implicitly capture shape similarity between objects and their placements, which discrete category labels do not. Fig. \ref{fig:ablation_category} shows that a Fourier transform of the bounding box achieves better performance than discrete labels, which in turn exceeds the performance with no category input. \vspace{-0.5em} \paragraph{How much does the \textbf{pose} encoding design matter?} We encode the pose as a 7-dimensional vector comprising a 3D position and a 4D quaternion. Fig. \ref{fig:ablation_pose} shows that a Fourier transform of the pose performs better than feeding the 7-dimensional vector through an MLP. The Fourier transform performs better because it encodes both fine and coarse variations appropriately, which otherwise either require careful scaling or can be lost during SGD training.
\subsection{Markov assumption on the current state in partial visibility scenarios} \label{appsubsec:context_history} \begin{wrapfigure}{r}{0.505\textwidth} \vspace{-0.5em} \centering \includegraphics[width=0.45\textwidth]{figures/context_history.png} \caption{Category level accuracy for single preference training with varying context windows. } \label{fig:context_history} \vspace{-1em} \end{wrapfigure} Dynamic settings, as used in our simulation, can be partially observable. For example, when the rack is closed, the policy does not know whether it is full from just the current state. If a new object arrives, the policy needs to decide between opening the rack if there is space, or dropping the object in the sink if the rack is full. In such partially observed settings, the current state may or may not contain all the information needed to reason about the next action. However, given information from states in previous timesteps, the policy can decide what action to take (whether to open the rack or directly place the object in the sink). To this end, we train a single-preference, pick-only policy with different context history lengths. Let context history $k$ refer to the number of previous states included in the input; as shown in Fig. \ref{fig:explain_cw}, a context window of size $k$ processes the current state as well as its $k$ predecessor states, that is, $k+1$ states in total. Fig. \ref{fig:context_history} shows that TTP achieves $> 90\%$ category-level prediction accuracy in validation for all context windows. While larger context windows result in faster learning at the start of training, the asymptotic performance of all context windows converges in our setting. This suggests that the dataset is largely fully observable, and that a single-state context window captures the required information. In the future, we would like to experiment with more complex settings like mobile robots, which might require a longer context. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/explain_cw.jpg} \caption{Processing with context history $k$ previous states.} \label{fig:explain_cw} \end{figure} \section{Limitations and Future Scope} In Section 6, we briefly discussed the limitations and risks. Here we provide more details and highlight future directions. \vspace{-0.5em} \paragraph{Pick grasping depends on accurate segmentation and edge detection} The grasping policy depends on the quality of segmentation and edge detection of the selected object. Due to noise in calibration, shadows, and reflections, there are errors in detecting the correct edge to successfully grasp the object. For example, it is hard to grasp a plate in the real setting: the plate sits very close to the ground, and the depth cameras cannot detect a clean edge for grasping. Therefore, in our work, we place the plate on an elevated stand for easy grasping. Grasping success also depends on the size and kind of gripper used. \vspace{-0.5em} \paragraph{Placement in real setting} For placement, the orientation of the final pose is often different from the initial pose and may require re-grasping. The placement pose at final settlement is different from the robot's end-effector pose while releasing the object from its grasp. Similar to picking, placement accuracy will largely depend on the appropriate size and shape of the gripper used.
For these reasons, placement in the real world is an open and challenging problem, and we hope to address it in future work. \vspace{-0.5em} \paragraph{Hardware pipeline issues due to calibration} The resulting point cloud is noisy for two reasons: first, incorrect depth estimation due to camera hardware, lighting conditions, shadows, and reflections; second, small movements among cameras that affect calibration. A noisy point cloud makes errors more likely in the subsequent segmentation and edge detection used by the grasp policy. Having sufficient coverage of the workspace with cameras is important to mitigate issues due to occlusions and incomplete point clouds. \vspace{-0.5em} \paragraph{Incomplete information in prompt} The prompt session may not contain all the information needed to execute on the situation. For example, a prompt session might contain no large plates, which is incomplete or ambiguous information for the policy. This can be mitigated by ensuring complete information in the prompt demonstration or by providing multiple prompts with slightly different initializations. \section{Simulation Setup} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/human_demo_point_n_click.jpg} \caption{Human demonstration with point and click in simulation} \label{fig:human_demo_point_n_click} \end{figure} \paragraph{Dataset} \label{appsubsec:dataset} ``Replica Synthetic Apartment 0 Kitchen'' consists of a fully-interactive dishwasher with a door and two sliding racks, an adjacent counter with a sink, and a ``stage'' with walls, floors, and ceiling. We use selected objects from the ReplicaCAD \cite{szot2021habitat} dataset, including seven types of dishes (cups, glasses, trays, small bowls, big bowls, small plates, big plates) which are loaded into the dishwasher. Fig. \ref{fig:human_demo_point_n_click} shows a human demonstration recorded in simulation by pointing and clicking on the desired object to pick and place. We initialize every scene with an empty dishwasher and random objects placed on the counter. Next, we generate dishwasher loading demonstrations, adhering to a given preference, using an expert-designed data generation script. Expert actions include opening/closing the dishwasher/racks and picking/placing objects in feasible locations, or in the sink if there are no feasible locations left. Experts differ in their preferences, and might choose different object arrangements in the dishwasher. \input{pref_example_table} \paragraph{Expert Preferences} \label{subsec:pref_example} We define a preference in terms of expert demonstration `properties', such as which rack is loaded first and with what objects. There are combinatorially many preferences possible, depending on how many objects we use in the training set. For example, Table \ref{tab:examples_of_pref} describes the preferences of dishwasher loading in terms of three properties: the rack loaded first, and the objects in the top and bottom racks. Each preference specifies properties such as which rack to load first and their contents. In Table \ref{tab:examples_of_pref}, Preferences 1 \& 2 vary in the order of which rack is loaded first, while 2 \& 3 both load the bottom rack first with similar categories on top and bottom but with different orderings for these categories. Other preferences can have different combinations of objects loaded per rack. To describe a preference, let there be $k$ properties, where property $i$ can take $m_i$ values.
For example, one property can be which rack is loaded first, and this can take two values: top or bottom. The total number of possible preferences is $G = \prod_{i=1}^{k} m_i.$ In our demonstration dataset, we have 100 unique sessions per preference. Each session can act as a prompt to indicate the preference as well as provide the situation for the policy, so any session can be paired as a prompt with any other session of the same preference, yielding $100 \times 100 = 10{,}000$ prompt-situation pairs per preference. Each session is about $\sim 30$ steps long. With $7$ preferences, this leads to $70{,}000 \times 30 = 2{,}100{,}000$ ($\sim 2$ million) total training samples, creating a relatively large training dataset from only 100 unique demonstrations per preference. Individual task preferences differ in the sequence of expert actions, but collectively, preferences share the underlying task semantics. \paragraph{Dynamically appearing objects} To add complexity to our simulation environment, we simulate a setting with dynamically appearing objects later in the episode. During each session, the scene is initialized with $p\%$ of the maximum number of objects allowed. The policy/expert starts filling the dishwasher using these initialized objects. After all the initial objects are loaded and both racks are closed, new objects are presented to the policy one per timestep. The goal is to simulate an environment where the policy does not have perfect knowledge of the scene, and needs to reactively reason about new information. The policy reasons over both the object configuration in the racks and the new object type to decide whether to `open a rack and place the utensil' or `drop the object in the sink'. \section{Training} \label{appsec:train} In this section, we describe the details of the different components of our learning pipeline. \subsection{Baseline: GNN} \paragraph{Architecture} We use a GNN with attention. The input consists of a 12-dimensional attribute vector (1D timestep, 3D category bounding-box extents, 7D pose, 1D object-or-not flag) and a 12-dimensional one-hot encoding of the preference.
\begin{minted}{python}
input_dim: 24
hidden_dim: 128
epochs: 200
batch_size: 32
\end{minted}
\paragraph{Optimizer} Adam with learning rate $0.01$ and weight decay $10^{-3}$. \paragraph{Reward function for GNN-RL} \label{subsec:rewardRL} The reward function for the RL policy is defined in terms of the preference: the policy receives a reward of $+1$ every time it picks an instance whose category matches the preference order and places it on the preferred rack. \subsection{Our proposed approach: TTP} \paragraph{Architecture} We use a 2-layer 2-head Transformer network for the encoder and decoder. The input dimension of the instance embedding is 256 and the hidden layer dimension is 512. The attributes contribute to the instance embedding as follows:
\begin{minted}{python}
C_embed: 16
category_embed_size: 64
pose_embed_size: 128
temporal_embed_size: 32
marker_embed_size: 32
\end{minted}
For the slot attention layer at the head of the Transformer encoder, we use:
\begin{minted}{python}
num_slots: 50
slot_iters: 3
\end{minted}
\paragraph{Optimizer} We use a batch size of 64 sequences. Within each batch, we pad the inputs with zeros up to the max sequence length. Our optimizer of choice is SGD with momentum 0.9, weight decay 0.0001, and dampening 0.1. The initial learning rate is 0.01, with exponential decay of 0.9995 per 10 gradient updates. We used early stopping with patience 100; a sketch of this setup is shown below.
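As a concrete illustration, the following is a minimal PyTorch sketch of this optimizer and schedule; the linear model and dummy data are placeholders standing in for the TTP encoder-decoder and our actual batches.
\begin{minted}{python}
import torch
import torch.nn as nn

model = nn.Linear(256, 256)        # placeholder for the TTP encoder-decoder
criterion = nn.CrossEntropyLoss()  # loss over predicted pick tokens

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.01,
    momentum=0.9, weight_decay=1e-4, dampening=0.1,
)
# Exponential decay of 0.9995, applied once every 10 gradient updates.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9995)

for step in range(1, 101):                 # dummy training loop
    x = torch.randn(64, 256)               # batch of 64 instance embeddings
    target = torch.randint(0, 256, (64,))  # dummy pick-token targets
    loss = criterion(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 10 == 0:
        scheduler.step()
\end{minted}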
\begin{figure}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.8\textwidth, trim={0 0 0 5cm},clip]{figures/acc_wrt_bs.png} \caption{Category level accuracy grouped by batch size for prompt-situation training.} \label{fig:cataccbybs} \end{subfigure} \begin{subfigure}[b]{0.67\textwidth} \centering \includegraphics[width=1.\textwidth, trim={3em, 0em, 0em, 4em}, clip]{figures/results/main_result_4_Temporal_Efficiency.png} \caption{TE (SPL) metric for held-out test settings.} \label{fig:te} \end{subfigure} \end{figure} \subsection{Metrics} In Section 3, we presented the packing efficiency (PE) and edit distance (ED) metrics collected on a policy rollout. We present additional metrics about training progress and rollout here. \textbf{Category-token Accuracy} indicates how well the policy can mimic the expert's action, given the current state. We monitor training progress by matching the predicted instance to the target chosen in the demonstration (Fig. \ref{fig:cataccbybs}). We see that TTP is able to predict the same category of object to pick almost perfectly (accuracy close to $1.0$). However, this is a simpler setting than sequential decision making. During rollout, any error in a state could create a setting that is out-of-distribution for the policy. Thus, category token accuracy sets an upper bound for rollout performance; that is, while high category token accuracy is necessary, it is not sufficient for high packing efficiency and inverse edit distance. \textbf{Temporal efficiency}: Just like SPL \cite{anderson2018evaluation} for navigation agents, we define the efficiency of temporal tasks in policy rollout, in order to study how efficient the agent was at achieving the task. For episode $i \in \{1, \ldots, N\}$, let the agent take $p_i$ high-level interactions to execute the task, and let the demonstration consist of $l_i$ interactions for the same initial state. We scale the packing efficiency $PE_i$ of the policy by the ratio of steps taken by the expert versus the policy, i.e., $TE_i = PE_i \cdot l_i / \max(p_i, l_i)$, analogous to the SPL formulation. Temporal efficiency lies between 0 and 1, and higher is better. This value will be equal to or lower than the packing efficiency. This especially penalizes policies that present a `looping' behavior, such as repeatedly opening/closing the dishwasher racks, over policies that reach a low PE in shorter episodes (for example, by placing most objects in the sink). Fig. \ref{fig:te} shows the temporal efficiency or SPL over our 4 main held-out test settings. \section{Hardware Experiments} \label{appsec:hardware_setup} \subsection{Real-world prompt demonstration} Here we describe how we collected and processed a visual, human demonstration in the real world to use as a prompt for the trained TTP policy (Fig. \ref{fig:human_prompt_demo}). Essentially, we collect demonstration pointcloud sequences and manually segment them into different pick-place segments, followed by extracting object states. At each high-level step, we measure the state using three RealSense RGBD cameras \cite{keselman2017intel}, which are calibrated to the robot frame of reference using ARTags \cite{fiala2005artag}. The camera output, extrinsics, and intrinsics are combined using Open3D \cite{zhou2018open3d} to generate a combined pointcloud. This pointcloud is segmented and clustered to give objects' pose and category using the algorithm from \cite{xiang2020learning} and DBSCAN. For each object point cloud cluster, we identify the object pose based on the mean of the point cloud.
For category information, we use the median RGB value of the pointcloud and map it to an a priori known set of objects. In the future, this can be replaced by more advanced techniques like Mask R-CNN \cite{he2017mask}. Placement poses are approximated as a fixed, known location, as the place action on hardware is a fixed `drop' position and orientation. The per-step state of the objects is used to create the input prompt tokens used to condition the policy rollout in the real world, as described in Section 3.2. \begin{figure}[h] \centering \includegraphics[width=0.995\textwidth]{figures/hardware_demo/latest_human/human_demo_all.jpg} \caption{Human demonstration of real-world rearrangement of household dishes.} \label{fig:human_prompt_demo} \vspace{-3em} \end{figure} \subsection{Hardware policy rollout} We zero-shot transfer our policy $\pi$ trained in simulation to robotic hardware, by assuming access to low-level controllers. We use a Franka Panda equipped with a Robotiq 2F-85 gripper, controlled using the Polymetis control framework \cite{Polymetis2021}. Our hardware setup mirrors our simulation, with different categories of dishware (bowls, cups, plates) on a table and a ``dishwasher'' (a cabinet with two drawers). The objective is to select an object to pick and place it into a drawer (rack) (see Fig. \ref{fig:human_prompt_demo}). Once we collect the human prompt demonstration tokens, we can use them to condition the learned policy $\pi$ from simulation. Converting the hardware state to tokens input to $\pi$ follows the same pipeline as the one used for collecting human demonstrations. At each step, the scene is captured using 3 RealSense cameras, and the combined pointcloud is segmented and clustered to get object poses and categories. This information, along with the timestep, is used to generate instance tokens as described in Section 2 for all objects visible to the cameras. For visible, already-placed objects, the place pose is approximated as a fixed location. The policy $\pi$, conditioned on the human demo, reasons about the state of the environment and chooses which object to pick. Next, we use a grasp generator from \cite{fang2020graspnet} that operates on point clouds to generate candidate grasp locations on the chosen object. We filter out grasp locations that are kinematically unreachable by the robot, as well as grasps located on points that intersect with other objects in the scene. Next, we select the top 5 most confident grasps, as estimated by the grasp generator, and choose the most top-down grasp. We design a pre-grasp approach pose for the robot with the same final orientation as the grasp, located higher along the grasping plane. The robot moves to the approach pose following a minimum-jerk trajectory, and then follows a straight-line path along the approach axis to grasp the object. Once grasped, the object is moved to the pre-defined place pose and dropped in a drawer. The primitives for opening and closing the drawers are manually designed on hardware. The learned policy, conditioned on prompt demonstrations, is applied to two variations of the same scene, and the predicted pick actions are executed. Fig. \ref{fig:pcd_n_grasps} shows the captured image from one of the three cameras, the merged point cloud, and the chosen object to pick with its selected grasp. The policy succeeded once with a 100\% success rate and once with 75\%, as shown in Fig. \ref{fig:demo}. The failure case was caused by a perception error: a bowl was classified as a plate.
This demonstrates that our approach (TTP) can be trained in simulation and applied directly to hardware. The policy is robust to minor hardware errors like a failed grasp; it simply measures the new state of the environment and chooses the next object to grasp. For example, if the robot fails to grasp a bowl and slightly shifts it, the cameras measure the new pose of the bowl, which is sent to the policy. However, TTP relies on accurate perception of the state. If an object is incorrectly classified, the policy might choose to pick the wrong object, deviating from the demonstration preference. In the future, we would like to further evaluate our approach on more diverse real-world settings and measure its sensitivity to the different hardware components, informing future choices for learning robust policies. \begin{figure}[t] \centering \includegraphics[width=0.957565\textwidth]{figures/pcd_n_grasps.jpg} \caption{Point cloud and grasps for different objects during policy rollout.} \vspace{-1.5em} \label{fig:pcd_n_grasps} \end{figure} \subsection{Transforming hardware to simulation data distribution} The policy trained in simulation applies zero-shot to real-world scenarios, but it requires a coordinate transform. Fig. \ref{fig:frame_of_ref} shows the coordinate frames of reference in the simulation and real-world settings. Since our instance embedding uses the poses of objects, it is dependent on the coordinate frame that the training data was collected in. Since hardware and simulation are significantly different, this coordinate frame is not the same between sim and real. We build a transformation that converts hardware-measured poses to the simulation frame of reference, which is then used to create the instance tokens. This ensures that there is no sim-to-real gap in object positions, reducing the challenges involved in applying such a simulation-trained policy to hardware. In this section, we describe how we convert real-world coordinates to simulation-frame coordinates for running the trained TTP policy on a Franka arm. \begin{wrapfigure}{l}{0.35\textwidth} \vspace{-1em} \includegraphics[width=0.35\textwidth]{figures/frame_of_ref.jpg} \caption{Coordinate frames of reference in simulation (left) and the real-world setting (right). Red is the x-axis, green the y-axis, and blue the z-axis.} \label{fig:frame_of_ref} \end{wrapfigure} We use the semantic work areas in simulation and hardware to transform hardware position coordinates to simulation position coordinates. We measure the extremes of the real workspace by manually moving the robot to record positions and orientations that define the extents of the workspace for the table. The extents of the drawers are measured by placing ARTag markers. We build 3 real-to-sim transformations using the extents for the counter, top rack, and bottom rack. Let $X \in \mathbb{R}^{3\times N}$ contain homogeneous $xz$-coordinates of a work area along its columns, as follows: \begin{equation} X = \begin{bmatrix} x^{(1)} & x^{(2)} & \cdots\\ z^{(1)} & z^{(2)} & \cdots\\ 1 & 1 & \cdots\\ \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}^{(1)} & \boldsymbol{x}^{(2)} & \cdots \end{bmatrix} \end{equation} As the required transformation from real to simulation involves scaling and translation only, we have 4 unknowns, namely, $\boldsymbol{a} = [\alpha_x, \alpha_z, x_{trans}, z_{trans}]$. Here $\alpha_x, \alpha_z$ are scaling factors and $x_{trans}, z_{trans}$ are translation offsets for the $x$ and $z$ axes, respectively.
To solve $X_{sim} = A X_{hw}$, we need to find the transformation matrix $A = \hat{\boldsymbol{a}} = \begin{bmatrix} \alpha_x & 0 & x_{trans}\\ 0 & \alpha_z & z_{trans} \\ 0 & 0 & 1 \\ \end{bmatrix} $. Rewriting $X_{sim} = \hat{\boldsymbol{a}} X_{hw}$ as a system of linear equations, \begin{align} \begin{bmatrix} x^{(1)}_{sim}\\ z^{(1)}_{sim}\\ x^{(2)}_{sim} \\ z^{(2)}_{sim} \\ \vdots \\ \end{bmatrix} &= \begin{bmatrix} x^{(1)}_{hw} & 0 & 1 & 0 \\ 0 & z^{(1)}_{hw} & 0 & 1 \\ x^{(2)}_{hw} & 0 & 1 & 0 \\ 0 & z^{(2)}_{hw} & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \end{bmatrix} \boldsymbol{a}^T \end{align} Let the above equation be expressed as $Y_{sim} = Z_{hw} \boldsymbol{a}^T$ where $Y_{sim} \in \mathbb{R}^{2N\times 1}$, $Z_{hw} \in \mathbb{R}^{2N \times 4}$, and $\boldsymbol{a}^T \in \mathbb{R}^{4 \times 1}$. Assuming we have a sufficient number of pairs of corresponding points in simulation and the real world, we can solve for $\boldsymbol{a}$ by least squares: $\boldsymbol{a}^T = (Z_{hw}^T Z_{hw})^{-1} Z_{hw}^T Y_{sim}$ (a code sketch of this fitting step is given at the end of this section). The height $y_{sim}$ is chosen from a look-up table based on $y_{hw}$. Once we compute the transformation $A$, we store it to later process arbitrary coordinates from real to sim, as shown below.
\begin{minted}{python}
from typing import List
import numpy as np

def get_homogenous_coordinates(xz: List[float]) -> np.ndarray:
    return np.array([xz[0], xz[1], 1.0])  # append 1 for homogeneous coordinates

def get_simulation_coordinates(xyz_hw: List[float], A: np.ndarray) -> List[float]:
    xz_hw = [xyz_hw[0], xyz_hw[2]]     # ground-plane (x, z); height handled separately
    X_hw = get_homogenous_coordinates(xz_hw)
    X_sim_homo = np.matmul(A, X_hw)    # apply the fitted scale+translation
    y_sim = process_height(xyz_hw[1])  # height from the look-up table
    X_sim = [X_sim_homo[0]/X_sim_homo[2], y_sim, X_sim_homo[1]/X_sim_homo[2]]
    return X_sim
\end{minted}
The objects used in simulation training are different from the hardware objects, even though they belong to the same categories. For example, while both sim and real have a small plate, the sizes of these plates are different. We can estimate the size of the objects based on the actual bounding box from the segmentation pipeline. However, this is significantly out-of-distribution from the training data, due to the object mismatch. So, we map each detected object to the nearest matching object in simulation and use the simulation size as the input to the policy. This is non-ideal, as the placing might differ for sim versus real objects. In the future, we would like to train with rich variations of object bounding box size in simulation so that the policy can generalize to unseen object shapes in the real world.
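For completeness, below is a minimal sketch of the least-squares fitting step described above; the corresponding-point arrays in the example are made-up placeholders for the measured workspace extents.
\begin{minted}{python}
import numpy as np

def fit_real_to_sim(xz_hw: np.ndarray, xz_sim: np.ndarray) -> np.ndarray:
    """Fit the 3x3 scale+translation matrix A from N corresponding (x, z) pairs.

    xz_hw, xz_sim: arrays of shape (N, 2) with matched hardware/sim points.
    """
    N = xz_hw.shape[0]
    Z = np.zeros((2 * N, 4))
    Y = xz_sim.reshape(-1)        # [x1_sim, z1_sim, x2_sim, z2_sim, ...]
    Z[0::2, 0] = xz_hw[:, 0]      # x_hw rows multiply alpha_x
    Z[0::2, 2] = 1.0              # ... and the x translation
    Z[1::2, 1] = xz_hw[:, 1]      # z_hw rows multiply alpha_z
    Z[1::2, 3] = 1.0              # ... and the z translation
    a, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # [alpha_x, alpha_z, x_t, z_t]
    return np.array([[a[0], 0.0, a[2]],
                     [0.0, a[1], a[3]],
                     [0.0, 0.0, 1.0]])

# Example with three made-up corresponding points:
A = fit_real_to_sim(np.array([[0.2, 0.1], [0.8, 0.1], [0.8, 0.6]]),
                    np.array([[1.0, 2.0], [4.0, 2.0], [4.0, 5.0]]))
\end{minted}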
\section{INTRODUCTION} Segmentation of medical images is a long-standing problem, with an extensive number of deep learning-based methods already available. Although there are recent paradigm-shifting works in the segmentation literature, such as capsule-based segmentation~\cite{lalonde2021capsules} and transformer-based segmentation~\cite{chen2021transunet}, most of the deep learning-based medical image segmentation literature builds on the standard U-Net or its derivatives. In this study, we approach segmentation from a slightly different angle, where our unique clinical imaging conditions impose some constraints on the problem formulation. In many clinical scenarios, for instance, multi-modality images are necessary for a more appropriate evaluation of the clinical condition through better tissue characterization (anatomically and/or functionally). Multi-modal brain imaging, PET/CT, PET/MRI, and multi-contrast MRIs are some of the most widely used examples in this context. \begin{figure}[!ht] \centering \includegraphics[width = 0.3\textwidth]{allcontrastsgt.png} \caption{MRI contrasts (first row): fat-suppressed, water-fat, water-suppressed. Segmented tissues (second row): muscle, fat, bone, and bone marrow.} \label{fig:multi_tissue} \end{figure} Despite the strengths of combining multiple modality images to better characterize or quantify a clinical condition, there are further challenges to be addressed. First, handling more than one modality for image segmentation is already more challenging than handling single-modality images. Second, multi-object segmentation, which is often required in multi-modality image analysis, is another hurdle compared to single-object segmentation. Third, clinical workflows have deficiencies, and not all modalities are always available for further analysis. Missing slices or missing scans are not rare, especially in the multi-contrast evaluation of MRI scans. In this study, our goal is to develop a successful segmentation strategy, based on deep networks, that accepts multi-contrast MRI scans and performs multi-tissue segmentation even when there are missing scans. To achieve our overall goal by addressing the challenges defined above, we focus on musculoskeletal (MSK) radiology examples: delineation of thigh tissues from multi-contrast MRI scans. Figure \ref{fig:multi_tissue} shows slices of such a multi-contrast MRI scan from the same patient; from left to right in the top row: fat-suppressed (MRI1), water-fat (MRI2), and water-suppressed (MRI3). Our clinical motivation comes from the fact that MSK radiology applications are critical for several conditions, spanning from obesity and metabolic syndromes to cartilage quantification. For instance, according to American Cancer Society studies in 2021 \cite{society2021cancer}, some of the most effective measures for decreasing cancer risk are having a healthy body weight, a healthy diet, and being physically active. Excess body weight (obesity), alcohol consumption, physical inactivity, and a poor diet are thought to be responsible for 18\% of cancer cases and 16\% of cancer deaths. Of all cancer risk factors, excess body weight is believed to be responsible for 5\% of cancers in men and 11\% of cancers in women. In this respect, sarcopenia is related to general loss of body mass and excess body weight, and has a strong relation with cancer risk factors \cite{ligibel2020sarcopenia}.
In this work, we propose a systematic approach to (1) synthesize MRI contrasts, (2) train a deep learning-based segmentation engine on the synthesized images, and (3) evaluate the efficacy of the segmentation model on true multi-contrast MRI images. We target segmenting thigh tissues (muscle, fat, bone, and bone marrow). We also conduct an ablation study where the segmentation model is trained on true, synthesized, and mixed (true and synthesized) images. Comprehensive quantitative and qualitative segmentation results showed that the proposed approach can be used effectively for multi-modal image analysis. This is especially useful when there is not enough medical imaging data, a typical constraint in medical imaging problems. Our major contributions are as follows: \begin{itemize} \item Application-wise, our study is the first to handle the missing contrast issue while retaining high accuracy in segmenting multiple tissues from thigh MRI. \item Our method is generic: any deep segmentation or GAN-based method can be substituted within our framework. \item We present a comprehensive evaluation, carefully analyzing three MRI contrasts, their relations, and their effect on the final segmentation results. \item We examine whether it is robust and feasible to train segmentation entirely on synthesized and mixed data, opening new discussions about the use of completely synthesized data to obtain clinically accepted segmentation results on real MRI data. \end{itemize} \section{Related Work} There is a relatively small body of literature concerned with muscle, fat, bone, and bone marrow segmentation in MSK radiology, despite its clinical importance \cite{shin2021deep}. Available deep learning-based studies focus on standard U-Net-based segmentation methods for single or multiple tissues, but mostly on single-modality MRI scans. No particular method has been presented for MSK applications when there are missing scans. GAN (generative adversarial network)-based methods are increasingly used for several applications, spanning from brain imaging to functional imaging. One interesting work utilizes a multi-modal generative adversarial network (MM-GAN) \cite{sharma2019missing}, a variant of the pix2pix \cite{isola2017image} network. The authors integrate multi-modal data from existing brain image sequences in a single forward-pass training to synthesize missing sequences. In another major work \cite{gadermayr2019domain}, the authors used the popular CycleGAN network on thigh MRI to increase the data size for segmentation purposes. Our work has some similarities with this work, but instead of focusing on the data augmentation aspect of a particular tissue, we generate the whole sequence(s), use them to train segmentation models, and explore the relationship of MRI contrasts in an ablation study, which leads us to train the complete segmentation process on synthetic MRI scans. In the pre-deep learning era, some segmentation studies are available too. It is worth mentioning that \cite{irmakci2018novel} proposed a novel affinity propagation architecture within the fuzzy connectivity framework for segmenting multi-contrast thigh MRI. The most recent work in this domain handles the lack-of-labels problem from a semi-supervised deep learning perspective \cite{anwar2020semi}, utilizing the Tiramisu network. However, the synthesis of one or more MRI contrasts remains a major challenge and is not considered in those works.
Herein we propose a comprehensive evaluation and generic approach for handling missing MRI contrast and its effect on multi-tissue segmentation problems. \begin{figure*} \centering \includegraphics[width = 0.45\textwidth]{anygan2.png} \includegraphics[width = 0.47\textwidth]{gan_generation.png} \caption{Fat-suppressed: MRI1, water-fat: MRI2, water-suppressed: MRI3. \textbf{Left.} Generation procedure for all Synthesized MRI contrasts. (A) Synthesized generations from only R MRI1 B) Synthesized generations from only R MRI2 C) Synthesized generations from only R MRI3. \textbf{Right.} Different combinations of MRI synthesis procedure are shown where ($\leftarrow$ or $\rightarrow$) indicates synthesis. For example, R MRI1$\rightarrow$F MRI2 or F MRI1$\leftarrow$R MRI2 both indicates synthesize operation from Real contrasts.} \label{fig:any_gan2} \end{figure*} \begin{table*} \caption{Segmentation performance of Single Input Multi Output MR contrasts (5-fold cross validation) (Avg.=Average and Std.=Standard deviation) } \centering \resizebox{1\textwidth}{!}{% \begin{tabular}{@{}llllllllllllllllll@{}} \toprule \textbf{SINGLE INPUT} & \multicolumn{5}{c}{\textbf{MUSCLE}} & \multicolumn{4}{c}{\textbf{FAT}} & \multicolumn{4}{c}{\textbf{BONE}} & \multicolumn{4}{c}{\textbf{BONE MARROW}} \\ \midrule & \textbf{} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} \\ R MRI1 & Avg. & \textbf{0,9264} & 0,9831 & 0,9521 & 0,9868 & \textbf{0,8826} & 0,9793 & 0,8985 & 0,9897 & \textbf{0,8245} & 0,9985 & 0,8383 & 0,9992 & \textbf{0,8397} & 0,9994 & 0,8482 & 0,9997 \\ & Std & 0,0340 & 0,0110 & 0,0337 & 0,0097 & 0,0956 & 0,0206 & 0,0622 & 0,0126 & 0,0943 & 0,0008 & 0,0969 & 0,0005 & 0,0943 & 0,0004 & 0,1134 & 0,0002 \\ R MRI2 & Avg. & 0,9312 & 0,9846 & 0,9541 & 0,9883 & 0,9100 & 0,9870 & 0,9246 & 0,9917 & \textbf{0,9591} & 0,9997 & 0,9612 & 0,9999 & \textbf{0,9682} & 0,9999 & 0,9693 & 1,0000 \\ & Std & 0,0384 & 0,0100 & 0,0448 & 0,0078 & 0,0811 & 0,0122 & 0,0619 & 0,0087 & \textbf{0,0321} & 0,0002 & 0,0422 & 0,0001 & \textbf{0,0254} & 0,0001 & 0,0420 & 0,0001 \\ R MRI3 & Avg. & \textbf{0,9468} & 0,9884 & 0,9587 & 0,9919 & \textbf{0,9467} & 0,9925 & 0,9608 & 0,9945 & 0,8296 & 0,9985 & 0,8324 & 0,9992 & 0,8897 & 0,9996 & 0,8848 & 0,9998 \\ & Std & \textbf{0,0211} & 0,0057 & 0,0276 & 0,0061 & \textbf{0,0411} & 0,0057 & 0,0205 & 0,0058 & 0,0919 & 0,0009 & 0,1045 & 0,0007 & 0,0846 & 0,0003 & 0,1023 & 0,0002 \\ F MRI1($\leftarrow$ R MRI2) TEST ON R MRI1 & Avg. & 0,9063 & 0,9774 & 0,9805 & 0,9770 & 0,8815 & 0,9817 & 0,9112 & 0,9882 & 0,8445 & 0,9986 & 0,8665 & 0,9992 & 0,8514 & 0,9994 & 0,8527 & 0,9997 \\ & Std. & 0,0387 & 0,0130 & 0,0248 & 0,0127 & 0,0813 & 0,0157 & 0,0526 & 0,0115 & 0,0999 & 0,0008 & 0,0999 & 0,0006 & 0,0970 & 0,0004 & 0,1192 & 0,0002 \\ F MRI1($\leftarrow$ R MRI3) TEST ON R MRI1 & Avg. & \textbf{0,9239} & 0,9824 & 0,9550 & 0,9856 & 0,8920 & 0,9828 & 0,9301 & 0,9875 & 0,8206 & 0,9985 & 0,8263 & 0,9992 & 0,8362 & 0,9994 & 0,8372 & 0,9997 \\ & Std. & 0,0326 & 0,0108 & 0,0334 & 0,0100 & 0,0951 & 0,0172 & 0,0537 & 0,0139 & 0,1017 & 0,0008 & 0,1021 & 0,0005 & 0,0951 & 0,0004 & 0,1168 & 0,0002 \\ F MRI2($\leftarrow$ R MRI1) TEST ON R MRI2 & Avg. 
& 0,9189 & 0,9813 & 0,9661 & 0,9831 & 0,9049 & 0,9856 & 0,9468 & 0,9889 & \textbf{0,9163} & 0,9993 & 0,9387 & 0,9995 & 0,9443 & 0,9998 & 0,9550 & 0,9999 \\ & Std. & 0,0364 & 0,0106 & 0,0309 & 0,0095 & \textbf{0,0815} & 0,0136 & 0,0539 & 0,0104 & 0,0434 & 0,0003 & 0,0403 & 0,0003 & 0,0315 & 0,0001 & 0,0521 & 0,0001 \\ F MRI2($\leftarrow$ R MRI3) TEST ON R MRI2 & Avg. & 0,9211 & 0,9824 & 0,9422 & 0,9873 & 0,9094 & 0,9863 & 0,9312 & 0,9907 & 0,9044 & 0,9992 & 0,9061 & 0,9996 & \textbf{0,9533} & 0,9998 & 0,9547 & 0,9999 \\ & Std. & \textbf{0,0400} & 0,0106 & 0,0491 & 0,0081 & 0,0812 & 0,0129 & 0,0567 & 0,0098 & \textbf{0,0414} & 0,0003 & 0,0552 & 0,0003 & \textbf{0,0277} & 0,0001 & 0,0532 & 0,0000 \\ F MRI3($\leftarrow$ R MRI1) TEST ON R MRI3 & Avg. & 0,9386 & 0,9861 & 0,9622 & 0,9891 & \textbf{0,9411} & 0,9921 & 0,9754 & 0,9933 & \textbf{0,8089} & 0,9983 & 0,8496 & 0,9989 & \textbf{0,8796} & 0,9995 & 0,8785 & 0,9998 \\ & Std. & \textbf{0,0249} & 0,0077 & 0,0323 & 0,0075 & 0,2356 & 0,2036 & 0,2067 & 0,2037 & \textbf{0,0998} & 0,0008 & 0,1052 & 0,0006 & \textbf{0,0954} & 0,0003 & 0,1075 & 0,0002 \\ F MRI3($\leftarrow$ R MRI2) TEST ON R MRI3 & Avg. & \textbf{0,9391} & 0,9863 & 0,9677 & 0,9885 & 0,9382 & 0,9916 & 0,9709 & 0,9929 & 0,8358 & 0,9987 & 0,8370 & 0,9993 & 0,8952 & 0,9996 & 0,8891 & 0,9998 \\ & Std. & 0,0269 & 0,0075 & 0,0239 & 0,0078 & \textbf{0,0506} & 0,0064 & 0,0165 & 0,0068 & 0,0900 & 0,0006 & 0,0937 & 0,0005 & 0,0861 & 0,0003 & 0,0983 & 0,0002 \\ \bottomrule \end{tabular}} \label{tab:test_on_single_mri} \caption{Segmentation performance of Multi Input Multi Output MR contrasts (5-fold cross validation) (Avg.=Average and Std.=Standard deviation)} \centering \label{tab:test_on_multiple_mri} \resizebox{1\textwidth}{!}{% \begin{tabular}{@{}llllllllllllllllll@{}} \toprule \textbf{MULTI INPUT} & \multicolumn{5}{c}{\textbf{MUSCLE}} & \multicolumn{4}{c}{\textbf{FAT}} & \multicolumn{4}{c}{\textbf{BONE}} & \multicolumn{4}{c}{\textbf{BONE MARROW}} \\ \midrule & \textbf{} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} \\ R MRI1 R MRI2 R MRI3 & Avg. & \textbf{0,9541} & 0,9898 & 0,9655 & 0,9927 & \textbf{0,9461} & 0,9923 & 0,9595 & 0,9944 & \textbf{0,9522} & 0,9996 & 0,9502 & 0,9998 & \textbf{0,9438} & 0,9998 & 0,9400 & 0,9999 \\ & Std & 0,0200 & 0,0056 & 0,0290 & 0,0061 & 0,0419 & 0,0062 & 0,0234 & 0,0059 & 0,0410 & 0,0003 & 0,0555 & 0,0001 & 0,0617 & 0,0002 & 0,0833 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9365 & 0,9854 & 0,9660 & 0,9878 & 0,9357 & 0,9909 & 0,9557 & 0,9933 & 0,8795 & 0,9990 & 0,9079 & 0,9993 & 0,8928 & 0,9996 & 0,8941 & 0,9998 \\ & Std. & 0,0280 & 0,0087 & 0,0338 & 0,0083 & 0,0520 & 0,0078 & 0,0288 & 0,0072 & 0,0862 & 0,0007 & 0,0784 & 0,0005 & 0,0868 & 0,0003 & 0,0971 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9370 & 0,9860 & 0,9419 & 0,9916 & 0,9341 & 0,9904 & 0,9656 & 0,9917 & 0,8913 & 0,9991 & 0,9146 & 0,9994 & 0,9145 & 0,9997 & 0,9152 & 0,9998 \\ & Std. & 0,0327 & 0,0090 & 0,0533 & 0,0067 & 0,0557 & 0,0087 & 0,0252 & 0,0092 & 0,0841 & 0,0007 & 0,0774 & 0,0005 & 0,0794 & 0,0002 & 0,0863 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI1) & Avg. 
& 0,9281 & 0,9835 & 0,9496 & 0,9878 & 0,9249 & 0,9889 & 0,9544 & 0,9915 & 0,8960 & 0,9992 & 0,9056 & 0,9996 & 0,9276 & 0,9997 & 0,9337 & 0,9999 \\ & Std. & 0,0306 & 0,0095 & 0,0424 & 0,0075 & 0,0618 & 0,0105 & 0,0363 & 0,0089 & 0,0803 & 0,0006 & 0,0788 & 0,0004 & 0,0700 & 0,0002 & 0,0850 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2) & Avg. & \textbf{0,9408} & 0,9868 & 0,9549 & 0,9908 & 0,9314 & 0,9900 & 0,9558 & 0,9924 & 0,9011 & 0,9992 & 0,9108 & 0,9996 & 0,9240 & 0,9997 & 0,9303 & 0,9999 \\ & Std. & 0,0292 & 0,0081 & 0,0430 & 0,0066 & 0,0582 & 0,0093 & 0,0374 & 0,0078 & 0,0765 & 0,0006 & 0,0727 & 0,0004 & 0,0773 & 0,0002 & 0,0858 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9375 & 0,9860 & 0,9526 & 0,9902 & 0,9346 & 0,9909 & 0,9552 & 0,9930 & 0,8879 & 0,9991 & 0,8919 & 0,9995 & 0,8934 & 0,9997 & 0,8876 & 0,9999 \\ & Std. & 0,0263 & 0,0076 & 0,0395 & 0,0069 & \textbf{0,0515} & 0,0072 & 0,0295 & 0,0073 & 0,0788 & 0,0006 & 0,0772 & 0,0004 & 0,0887 & 0,0003 & 0,1073 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9406 & 0,9868 & 0,9518 & 0,9911 & 0,9349 & 0,9908 & 0,9680 & 0,9919 & 0,8925 & 0,9991 & 0,8990 & 0,9995 & 0,9127 & 0,9997 & 0,9069 & 0,9999 \\ & Std. & \textbf{0,0249} & 0,0071 & 0,0412 & 0,0062 & 0,0515 & 0,0075 & 0,0257 & 0,0078 & 0,0693 & 0,0005 & 0,0734 & 0,0003 & 0,0757 & 0,0003 & 0,0857 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9288 & 0,9843 & 0,9344 & 0,9906 & 0,9254 & 0,9888 & 0,9489 & 0,9913 & \textbf{0,9061} & 0,9992 & 0,9203 & 0,9996 & \textbf{0,9375} & 0,9998 & 0,9295 & 0,9999 \\ & Std. & 0,0336 & 0,0094 & 0,0519 & 0,0064 & 0,0624 & 0,0108 & 0,0421 & 0,0096 & \textbf{0,0570} & 0,0004 & 0,0586 & 0,0003 & \textbf{0,0526} & 0,0002 & 0,0737 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9399 & 0,9865 & 0,9519 & 0,9909 & 0,9335 & 0,9906 & 0,9611 & 0,9924 & 0,8840 & 0,9991 & 0,8808 & 0,9996 & 0,9180 & 0,9997 & 0,9193 & 0,9999 \\ & Std. & 0,0256 & 0,0074 & 0,0384 & 0,0063 & 0,0537 & 0,0077 & 0,0275 & 0,0075 & 0,0794 & 0,0005 & 0,0892 & 0,0003 & 0,0756 & 0,0002 & 0,0882 & 0,0001 \\ \bottomrule \end{tabular}} \caption{Quality of Single Modality Thigh MRIs Synthesis with 5-fold cross validation. (Avg. = Average (Mean) and Std.=Standard deviation)} \label{tab:psnr_fid_ssim} \resizebox{1\textwidth}{!}{ \begin{tabular}{@{}llllllll@{}} \toprule & & F MRI1($\leftarrow$ R MRI2) & F MRI1($\leftarrow$ R MRI3) & F MRI2($\leftarrow$ R MRI1) & F MRI2($\leftarrow$ R MRI3) & F MRI3($\leftarrow$ R MRI1) & F MRI3($\leftarrow$ R MRI2) \\ \midrule PSNR & Avg. & 28,3153 & 27,6520 & 27,2156 & 27,9233 & \textbf{28,5335} & 28,0810 \\ & Std. & 3,3855 & \textbf{2,9039} & 3,2992 & 3,4147 & 3,2828 & 3,7948 \\ SSIM & Avg. & 0,8786 & 0,8848 & 0,8728 & 0,8827 & \textbf{0,8968} & 0,8890 \\ & Std. & \textbf{0,0496} & 0,0510 & 0,0520 & 0,0608 & 0,0601 & 0,0616 \\ FID & Avg. & 42,5333 & 41,6491 & 57,5200 & \textbf{68,3417} & 39,8365 & 54,2351 \\ & Std. & \textbf{4,4439} & 4,6984 & 10,9396 & 20,8823 & 4,4900 & 13,9198 \\ \bottomrule \end{tabular}} \end{table*} \section{METHOD} The proposed segmentation strategy includes both synthesis of missing contrasts with a generator and a segmentor (Figure~\ref{fig:any_gan2}). 
In the generation stage, we adapted the popular pix2pix \cite{isola2017image} conditional GAN method for synthesizing the missing contrasts from real (true) contrasts. Any other GAN method can be used too. Briefly, pix2pix uses a conditional generative adversarial network to learn a mapping from a source domain $x$ and random noise $z$ to a target domain $y$. The network is made up of two blocks, the generator ${G}$ and the discriminator ${D}$. The generator transforms the source domain ($x$) with random noise ($z$) to obtain the target domain ($y$), while the discriminator learns to distinguish real source-target pairs from generated ones. As shown in Figure~\ref{fig:any_gan2}, we then use all real MR contrasts (R MRI) to synthesize (generate) the other contrasts (F MRI) using Equation \ref{eq:1} and Equation \ref{eq:2}, where $x$ is the source contrast, $y$ is the target contrast, $z$ is random noise, $\lambda$ is a hyperparameter for adjusting blurriness, and $\mathcal{L}_{L 1}$ is the mean absolute error: \begin{equation} \begin{aligned} \label{eq:1} \mathcal{L}_{c G A N}(G, D)=& \mathbb{E}_{x, y}[\log D(x, y)]+\\ & \mathbb{E}_{x, z}[\log (1-D(x, G(x, z))]. \end{aligned} \end{equation} \begin{equation} \label{eq:2} G^{*}=\arg \min _{G} \max _{D} \mathcal{L}_{c G A N}(G, D)+\lambda \mathcal{L}_{L 1}(G). \end{equation} First, we condition on the source contrast Real MRI1 to generate Synthesized MRI2 (R MRI1 $\rightarrow$ F MRI2) or Synthesized MRI3 (R MRI1 $\rightarrow$ F MRI3) separately; then we condition on Real MRI2 to generate Synthesized MRI1 (R MRI2 $\rightarrow$ F MRI1) or Synthesized MRI3 (R MRI2 $\rightarrow$ F MRI3), separately. Finally, we condition on Real MRI3 to generate Synthesized MRI1 (R MRI3 $\rightarrow$ F MRI1) and Synthesized MRI2 (R MRI3 $\rightarrow$ F MRI2), obtaining all six different combinations. For delineation of multiple tissues, we use the commonly adopted standard U-Net segmentor \cite{ronneberger2015u}. We speculate that if the synthesized images are of good quality (Table \ref{tab:psnr_fid_ssim}), the overall segmentation of tissues should be accurate. In addition to training segmentation on fully synthesized, mixed, and real images, we also evaluate multi-contrast and single-contrast segmentation settings to validate the necessity of additional contrasts and their complementary strengths. \section{EXPERIMENTS and RESULTS} \label{sec:typestyle} \noindent\textbf{Dataset and Preprocessing:} We used multi-contrast MRI data from the Baltimore Longitudinal Study of Aging (BLSA) \cite{ferrucci2008baltimore}. Experiments were performed on three different T1-weighted MR contrasts: fat-suppressed (MRI1), water and fat (MRI2), and water-suppressed (MRI3). These images are labeled as ``real'' in our experiments to separate them from the synthesized ones. The original data set contains 150 volumetric MRI scans from 50 subjects, acquired using a 3T Philips Achieva MRI scanner (Philips Healthcare, Best, The Netherlands). The in-plane voxel size is $1 \times 1$ mm$^2$, and the slice thickness varies from 1 mm to 3 mm across scans. Details of the contrasts and other imaging parameters can be found in \cite{ferrucci2008baltimore}. Prior to our experiments, we used the non-parametric non-uniform intensity normalization technique (N4ITK) \cite{tustison2010n4itk} to remove the bias field, followed by an edge-preserving diffusion filter to remove noise without distorting tissue structures in the MR images.
Finally, we applied a whitening transformation and then scaled voxel values between $0$ and $1$.\\ \noindent\textbf{Network settings and Training Procedure:} pix2pix was trained for 250 epochs on 2D slices with a learning rate of 0.0001. The generator and discriminator consist of a U-Net and a PatchGAN, respectively. The best models were selected on the validation portion of the data set. In the segmentation stage, we optimized the network with a cross-entropy loss using the ADAM optimizer and a learning rate of 0.0001. An early stopping criterion was used. We did not use any data augmentation techniques, as we did not see any overfitting problem in training. We performed 90 ($18 \times 5$-fold) experiments in the segmentation stage and 30 ($6 \times 5$-fold) experiments in the generation stage with 5-fold cross-validation ($70\%$ training, $10\%$ validation, and $20\%$ test). All experiments were performed on Nvidia Titan-XP GPUs with 12GB memory. The proposed approach was implemented in the PyTorch framework.\\ \noindent\textbf{Quantitative evaluations:} We report our GAN results with three different metrics: the Frechet Inception Distance (FID, lower is better), Peak Signal to Noise Ratio (PSNR, higher is better), and Structural Similarity Index Measure (SSIM, higher is better) (Table \ref{tab:psnr_fid_ssim}). We observed that MRI3 synthesized from real MRI1 gives the best PSNR, SSIM, and FID, outperforming the other synthesized MRI contrasts. However, the results are not hugely different from each other; this is likely because of the one-to-one mapping nature between MRI contrasts, so the learned maps between contrasts are highly informative about each other. This is a strength that can be attributed to the power of GANs. We also summarized the segmentation results with the evaluation metrics of DICE, accuracy, sensitivity, and specificity (Table \ref{tab:test_on_single_mri} and Table \ref{tab:test_on_multiple_mri}). We analyzed the muscle, fat, bone, and bone marrow segmentation results on single synthesized and real MRI contrasts (Table \ref{tab:test_on_single_mri}). Muscle and fat tissues show higher DICE scores when real MRI3 (water-suppressed) is used, and bone and bone marrow show higher DICE scores when R MRI2 (water-fat) is used. Surprisingly, synthesized MRI3 ($\leftarrow$ R MRI2), synthesized MRI3 ($\leftarrow$ R MRI1), synthesized MRI2 ($\leftarrow$ R MRI1), and synthesized MRI2 ($\leftarrow$ R MRI3) show similar results for muscle, fat, bone, and bone marrow tissues, respectively (Table \ref{tab:test_on_single_mri}), thanks to the strongly learned mappings between contrasts. For multi-contrast input (Table \ref{tab:test_on_multiple_mri}), although segmentation trained on true MRIs shows higher DICE scores than the other strategies, synthesized images from real MRI2 and MRI3 show DICE scores very close to the best results. Segmentation results based on synthesized images showed trends similar to those based on true images, even when the tissue of interest is small, such as bone marrow, indicating that high-quality synthesis was achieved.\\ \noindent\textbf{Qualitative evaluations:} Qualitative results are shown in Figure \ref{fig:qualitative} for muscle, fat, bone, and bone marrow tissues.
We compare some of the best synthesized MRI results (Table \ref{tab:test_on_single_mri} and Table \ref{tab:test_on_multiple_mri}) to the original MRIs. We observed that the synthesized multi-contrast MRIs were of high quality, such that even small details of soft tissues were preserved, as also reflected in the segmentation results. This observation is promising, as lack of data, missing contrasts, and other data problems can be addressed with synthetic images as an intermediate step while the diagnostic path still uses true/real images, thus avoiding the concern of using synthetic images for diagnostic purposes. \begin{figure}[!ht] \centering \includegraphics[width = 0.47\textwidth]{qualitative_results.png} \caption{Comparison of fat, muscle, bone and bone marrow tissue segmentation results for a given MRI slice. (a) Original MRI, (b) F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2), (c) F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2), (d) R MRI1 R MRI2 R MRI3, (e) F MRI2 ($\leftarrow$ R MRI1), (f) F MRI2 ($\leftarrow$ R MRI3).} \label{fig:qualitative} \end{figure} \section{CONCLUSIONS} \label{sec:majhead} In this work, we conducted extensive experiments to explore the use of synthetic MRI scans for training a segmentation engine for multi-tissue analysis. We showed that an accurate segmentation model can be built solely on synthetic scans or on mixed (real + synthetic) images, with a precision level close to that of a segmentation model trained completely on true images. In addition, we demonstrated that a multi-modality combination of scans provides better segmentation results, even when some of the modalities are synthetically generated. \\ \noindent\textbf{Acknowledgments} Our study is exempt from human subjects review as the data is publicly available and fully anonymized. This study is approved under the existing IRB at the Baltimore Longitudinal Study of Aging (BLSA) \cite{ferrucci2008baltimore}. This study is partially supported by NIH grants R01-CA246704-01 and R01-CA240639-01. We thank Ege University for letting us use their servers to run our experiments. \addtolength{\textheight}{-12cm} \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:intro} Human pose estimation aims to correctly detect and localize keypoints, i.e., human body joints or parts, for all persons in an input image. It is one of the fundamental computer vision tasks, playing an important role in a variety of downstream applications, such as motion capture \cite{DBLP:conf/cvpr/ElhayekAJTPABST15,DBLP:conf/cvpr/RhodinCKSF19}, activity recognition \cite{DBLP:conf/cvpr/BagautdinovAFFS17,DBLP:conf/cvpr/WuWWGW19}, and person tracking \cite{DBLP:conf/cvpr/YangRLZW021,DBLP:conf/cvpr/WangTM20}. Recently, remarkable progress has been made in human pose estimation based on deep neural network methods \cite{DBLP:conf/cvpr/CaoSWS17,Chen_2018_CVPR,DBLP:conf/cvpr/0009XLW19,He_2017_ICCV,DBLP:conf/cvpr/PapandreouZKTTB17,DBLP:conf/cvpr/SuYXGW19}. For regular scenes, deep learning-based methods have already achieved remarkably accurate estimation of body keypoints, and there is little room for further performance improvement \cite{DBLP:conf/cvpr/ZhangZD0Z20,DBLP:conf/eccv/WangLGDW20,DBLP:conf/cvpr/0005ZGH20}. However, for complex scenes with person-person occlusions, large variations of appearance, and cluttered backgrounds, pose estimation remains very challenging \cite{DBLP:conf/eccv/XiaoWW18,DBLP:conf/cvpr/0005ZGH20}. We notice that, in complex scenes, the performance of pose estimation on different keypoints exhibits large variations. For example, for visible keypoints with little interference from other persons or the background, the estimation results are fairly accurate and reliable. However, for some keypoints, for example, the terminal keypoints at the tip locations of body parts, it is very challenging to achieve accurate estimation. The low accuracy of these challenging keypoints degrades the overall pose estimation performance. Therefore, the main challenge in pose estimation is how to improve the estimation accuracy of these challenging keypoints. \begin{figure}[t] \centering \setlength{\belowcaptionskip}{-0.4cm} \includegraphics[width=0.6\columnwidth]{fig/idea.png} \centering \caption{Illustration of the proposed idea of self-constrained inference optimization of structural groups for human pose estimation.} \label{fig:idea} \end{figure} As summarized in Fig. \ref{fig:idea}, this work is motivated by the following two important observations: (1) human poses, although exhibiting large variations due to the free styles and flexible movements of humans, are restricted by the biological structure of the body. The whole body consists of multiple parts, such as the upper limbs and lower limbs. Each body part corresponds to a subgroup of keypoints. We observe that the keypoint correlation across different body parts remains low, since different body parts, such as the left and right arms, can move in totally different styles and directions. However, within the same body part or structural group, keypoints are more spatially constrained by each other. This implies that keypoints are potentially predictable from each other by exploiting this unique structural correlation. Motivated by this observation, in this work, we propose to partition the body parts into a set of structural groups and perform group-wise structure learning and keypoint prediction refinement.
\begin{figure}[h] \centering \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.4cm} \includegraphics[width=0.95\columnwidth]{fig/confidence_index.png} \centering \caption{Keypoints at the tip locations of body parts suffer from low confidence scores obtained from the heatmap during pose estimation.} \label{fig:confidence} \end{figure} (2) We have also observed that, within each group of keypoints, terminal keypoints at the tip locations of body parts, such as ankle and wrist keypoints, often suffer from lower estimation accuracy. This is because they have much larger freedom of motion and are more easily occluded by other objects. Fig. \ref{fig:confidence} shows the average prediction confidence (obtained from the heatmaps) of all keypoints, with yellow dots and bars representing the locations and estimation confidence of terminal keypoints, e.g., wrist or ankle keypoints. We can see that the average estimation confidence of terminal keypoints is much lower than that of the rest. Motivated by the above two observations, we propose to partition the body keypoints into 6 structural groups according to their biological parts, and each structural group is further partitioned into two subsets: \textit{terminal keypoints} and \textit{base keypoints} (the remaining keypoints). We develop a self-constrained prediction-verification network to learn the structural correlation between these two subsets within each structural group. Specifically, we learn two tightly coupled networks: the prediction network $\mathbf{\Phi}$, which performs the forward prediction of terminal keypoints from base keypoints, and the verification network $\mathbf{\Gamma}$, which performs the backward prediction of base keypoints from terminal keypoints. This prediction-verification pair characterizes the structural correlation between keypoints within each structural group, and the two networks are jointly learned using a self-constraint loss. Once successfully learned, the verification network $\mathbf{\Gamma}$ is used as a performance assessment module to optimize the prediction of low-confidence terminal keypoints based on local search and refinement within each structural group. Our extensive experimental results on the benchmark MS COCO dataset demonstrate that the proposed method is able to significantly improve the pose estimation results. The rest of the paper is organized as follows. Section 2 reviews related work on human pose estimation. The proposed self-constrained inference optimization of structural groups is presented in Section 3. Section 4 presents the experimental results, performance comparisons, and ablation studies. Section 5 concludes the paper. \section{Related Work and Major Contributions} \label{sec:related} In this section, we review related work on heatmap-based pose estimation, multi-person pose estimation, pose refinement and error correction, and reciprocal learning. We then summarize the major contributions of this work. \textbf{(1) Heatmap-based pose estimation.} In this paper, we use heatmap-based pose estimation. The probability of a pixel being the keypoint can be measured by its response in the heatmap. Recently, heatmap-based approaches have achieved state-of-the-art performance in pose estimation \cite{DBLP:conf/eccv/XiaoWW18,Cheng_2020_CVPR,DBLP:conf/cvpr/XuT21,DBLP:conf/cvpr/0009XLW19}. The coordinates of keypoints are obtained by decoding the heatmaps \cite{DBLP:conf/cvpr/SuYXGW19}.
\cite{Cheng_2020_CVPR} predicted scale-aware high-resolution heatmaps using multi-resolution aggregation during inference. \cite{DBLP:conf/cvpr/XuT21} processed graph-structured features across multi-scale human skeletal representations and proposed a learning approach for multi-level feature learning and heatmap estimation. \textbf{(2) Multi-person pose estimation.} Multi-person pose estimation requires detecting the keypoints of all persons in an image \cite{Fang_2017_ICCV}. It is very challenging due to overlap between body parts of neighboring persons. Top-down methods and bottom-up methods have been developed in the literature to address this issue. \textbf{(a) Top-down} approaches \cite{He_2017_ICCV,DBLP:conf/eccv/SunXWLW18,DBLP:conf/cvpr/MoonCL19,DBLP:conf/cvpr/SuYXGW19} first detect all persons in the image and then estimate the keypoints of each person. The performance of this method depends on the reliability of the object detector that generates the bounding box for each person. When the number of persons is large, accurate detection of each person becomes very challenging, especially in highly occluded and cluttered scenes \cite{DBLP:conf/cvpr/PapandreouZKTTB17}. \textbf{(b) Bottom-up} approaches \cite{Geng_2021_CVPR,DBLP:conf/cvpr/CaoSWS17,Luo_2021_CVPR} directly detect the keypoints of all persons and then group the keypoints for each person. These methods usually run faster than the top-down methods in multi-person pose estimation since they do not require person detection. \cite{Geng_2021_CVPR} activated the pixels in the keypoint regions and learned disentangled representations for each keypoint to improve the regression result. \cite{Luo_2021_CVPR} developed a scale-adaptive heatmap regression method to handle large variations of body sizes. \textbf{(3) Pose refinement and error correction.} A number of methods have been developed in the literature to refine the estimation of body keypoints \cite{9107502,DBLP:conf/cvpr/MoonCL19,DBLP:conf/eccv/WangLGDW20}. \cite{8575519} proposed a pose refinement network which takes the image and the predicted keypoint locations as input and learns to directly predict refined keypoint locations. \cite{9107502} designed two networks, where the correction network guides the refinement to correct the joint locations before generating the final pose estimation. \cite{DBLP:conf/cvpr/MoonCL19} introduced a model-agnostic pose refinement method using statistics of error distributions as prior information to generate synthetic poses for training. \cite{DBLP:conf/eccv/WangLGDW20} introduced a localization sub-net to extract different visual features and a graph pose refinement module to explore the relationship between points sampled from the heatmap regression network. \textbf{(4) Cycle consistency and reciprocal learning.} This work is related to cycle consistency and reciprocal learning. \cite{Zhu_2017_ICCV} translated an image from the source domain into the target domain by introducing a cycle consistency constraint, so that the distribution of images from the translated domain is indistinguishable from the distribution of the target domain. \cite{Sun_2020_CVPR} developed a pair of jointly-learned networks to predict human trajectories forward and backward. \cite{xu2020segmentation} developed a reciprocal cross-task architecture for image segmentation, which improves the learning efficiency and generation accuracy by exploiting the commonalities and differences across tasks.
\cite{liu2021watching} developed a Temporal Reciprocal Learning (TRL) approach to fully explore the discriminative information from the disentangled features. \cite{zhang2021accurate} designed a support-query mutual guidance architecture for few-shot object detection. \vspace{0.2cm} \textbf{(5) Major contributions of this work.} Compared to the above related work, the major contributions of this work are: (a) We propose to partition the body keypoints into structural groups and explore the structural correlation within each group to improve the pose estimation results. Within each structural group, we propose to partition the keypoints into high-confidence and low-confidence ones. We develop a prediction-verification network to characterize the structural correlation between them based on a self-constraint loss. (b) We introduce a self-constrained optimization method which uses the learned verification network as a performance assessment module to optimize the pose estimation of low-confidence keypoints during the inference stage. (c) Our extensive experimental results have demonstrated that our proposed method is able to significantly improve the performance of pose estimation and outperforms the existing methods by large margins. Compared to existing methods on cycle consistency and reciprocal learning, our method has the following novelty. First, it addresses an important problem in prediction: how do we know whether the prediction is accurate when the ground truth is not available? It establishes a self-matching constraint on high-confidence keypoints and uses the successfully learned verification network to verify whether the refined predictions of low-confidence keypoints are accurate or not. Unlike existing prediction methods, which can only perform forward inference, our method is able to perform further optimization of the prediction results during the inference stage, which can significantly improve the prediction accuracy and the generalization capability of the proposed method. \section{Method} In this section, we present our self-constrained inference optimization (SCIO) of structural groups for human pose estimation. \subsection{Problem Formulation} Human pose estimation, as a keypoint detection task, aims to detect the locations of body keypoints from the input image. Specifically, let $I$ be an image of size $W \times H \times 3$. Our task is to precisely locate $K$ keypoints $X=\{X_1,X_2, ...,X_K\}$ from $I$. Heatmap-based methods transform this problem into estimating $K$ heatmaps $\{H_1,H_2, ...,H_K\}$ of size $W' \times H'$. Given a heatmap, the keypoint location can be determined using different grouping or peak finding methods \cite{DBLP:conf/cvpr/MoonCL19,DBLP:conf/cvpr/SuYXGW19}. For example, the pixel with the highest heatmap value can be designated as the location of the corresponding keypoint. Meanwhile, given a keypoint at location $(p_x, p_y)$, the corresponding heatmap can be generated using the Gaussian kernel \begin{equation} C(x, y) = \frac{1}{2\pi \sigma^2} e^{-[(x-p_x)^2 + (y-p_y)^2]/2\sigma^2}. \end{equation} In this work, the ground-truth heatmaps are denoted by $\{\bar{H}_1,\bar{H}_2, ..., \bar{H}_K\}$. \begin{figure*}[t] \centering \setlength{\abovecaptionskip}{-0.05cm} \setlength{\belowcaptionskip}{-0.8cm} \includegraphics[width=0.99\columnwidth]{fig/framework_pred_veri.png} \centering \caption{The overall framework of our proposed network. For an input image, heatmaps of all keypoints predicted by the backbone are partitioned into 6 structural groups.
During the training stage, each group $\mathbf{H}$ is divided into two subsets: base keypoints and terminal keypoints. A prediction-verification network with self-constraints is developed to characterize the structural correlation between these two subsets. During testing, the learned verification network is used to refine the prediction results of the low-confidence terminal keypoints. } \label{fig:framework} \end{figure*} \subsection{Self-Constrained Inference Optimization on Structural Groups} \label{sec:overview} Fig. \ref{fig:framework} shows the overall framework of our proposed SCIO method for pose estimation. We first partition the detected human body keypoints into 6 structural groups, which correspond to different body parts, including the lower and upper limbs, as well as two groups for the head part, as illustrated in Fig. \ref{fig:groups}. Each group contains four keypoints. We observe that these structural groups of four keypoints are the basic units for human pose and body motion. They are constrained by the biological structure of the human body. There is significant freedom and variation between structural groups. For example, the left arm and the right arm could move and pose in totally different ways. Meanwhile, within each group, the keypoints constrain each other with strong structural correlation. \begin{figure}[h] \centering \setlength{\abovecaptionskip}{-0.05cm} \includegraphics[width=0.5\columnwidth]{fig/groups.png} \centering \caption{Partition of the body keypoints into 6 structural groups corresponding to different body parts. Each group has 4 keypoints.} \label{fig:groups} \end{figure} As discussed in Section \ref{sec:intro}, we further partition each of these 6 structural groups into base keypoints and terminal keypoints. The base keypoints are near the body torso while the terminal keypoints are at the end or tip locations of the corresponding body part. Fig. \ref{fig:confidence} shows that the terminal keypoints have much lower estimation confidence scores than the base keypoints during pose estimation. In this work, we denote the 4 keypoints within each group by \begin{equation} \mathbf{G} = \{X_A, X_B, X_C \ |\ X_D\}, \end{equation} where $X_D$ is the terminal keypoint and the remaining three $\{X_A, X_B, X_C\}$ are the base keypoints near the torso. The corresponding heatmaps are denoted by $\mathbf{H} = \{H_A, H_B, H_C \ |\ H_D\}$. To characterize the structural correlation within each structural group $\mathbf{H}$, we propose to develop a self-constrained prediction-verification network. As illustrated in Fig. \ref{fig:framework}, the prediction network $\mathbf{\Phi}$ predicts the heatmap of the terminal keypoint $H_D$ from the base keypoints $\{H_A, H_B, H_C\}$ with the feature map $\mathbf{f}$ as the visual context: \begin{equation} \hat{H}_D = \mathbf{\Phi}(H_A, H_B, H_C; \mathbf{f}). \label{eq:prediction} \end{equation} We observe that the feature map $\mathbf{f}$ provides important visual context for keypoint estimation. The verification network $\mathbf{\Gamma}$ shares the same structure as the prediction network. It performs the backward prediction of the heatmap $H_A$ from the remaining three: \begin{equation} \hat{H}_A = \mathbf{\Gamma}(H_B, H_C, H_D; \mathbf{f}).
\label{eq:verifiction} \end{equation} Coupling the prediction and verification networks together by passing the output $\hat{H}_D$ of the prediction network into the verification network as input, we have the following prediction loop \begin{eqnarray} \hat{H}_A &=& \mathbf{\Gamma}(H_B, H_C, \hat{H}_D; \mathbf{f})\\ &=& \mathbf{\Gamma}(H_B, H_C, \mathbf{\Phi}(H_A, H_B, H_C; \mathbf{f}); \mathbf{f}). \end{eqnarray} This leads to the following self-constraint loss \begin{equation} \mathcal{L}_A^s = ||\bar{H}_A - \hat{H}_A||_2. \label{eq:selfloss} \end{equation} This prediction-verification network with a forward-backward prediction loop learns the internal structural correlation between the base keypoints and the terminal keypoint. The learning process is guided by the self-constraint loss. If the internal structural correlation is successfully learned, then the self-constraint loss $\mathcal{L}_A^s$ generated by the forward and backward prediction loop should be small. This step is referred to as \textit{self-constrained learning}. Once successfully learned, the verification network $\mathbf{\Gamma}$ can be used to verify whether the prediction $\hat{X}_D$ is accurate or not. In this case, the self-constraint loss is used as an objective function to optimize the prediction $\hat{X}_D$ based on local search, which can be formulated as \begin{eqnarray} \hat{X}_D^* &=& \arg\min_{\hat{X}_D} ||H_A - \hat{H}_A||_2 \\ &=& \arg\min_{\hat{X}_D} ||H_A - \mathbf{\Gamma}(H_B, H_C, \mathbb{H}(\hat{X}_D); \mathbf{f})||_2, \nonumber \end{eqnarray} where $\mathbb{H}(\hat{X}_D)$ represents the heatmap generated from the keypoint $\hat{X}_D$ using the Gaussian kernel. This provides an effective mechanism for us to iteratively refine the prediction result based on the specific statistics of the test sample. This adaptive prediction and optimization is not available in traditional network prediction, which is purely forward without any feedback or adaptation. This feedback-based adaptive prediction results in better generalization capability on the test sample. This step is referred to as \textit{self-constrained optimization}. In the following sections, we present more details about the proposed self-constrained learning (SCL) and self-constrained optimization (SCO) methods. \subsection{Self-Constrained Learning of Structural Groups} In this section, we explain the self-constrained learning in more detail. As illustrated in Fig. \ref{fig:framework}, the inputs to the prediction and verification networks, namely $\{H_A, H_B, H_C\}$ and $\{H_B, H_C, H_D\}$, are heatmaps generated by the baseline pose estimation network. In this work, we use the HRNet \cite{DBLP:conf/cvpr/0009XLW19} as our baseline, on top of which our proposed SCIO method is implemented. We observe that the visual context surrounding the keypoint location provides important visual cues for refining the locations of the keypoints. For example, the correct location of the knee keypoint should be at the center of the knee image region. Motivated by this, we also pass the feature map $\mathbf{f}$ generated by the backbone network to the prediction and verification networks as input. In our proposed scheme of self-constrained learning, the prediction and verification networks are jointly trained. Specifically, as illustrated in Fig. \ref{fig:framework}, the top branch shows the training process of the prediction network. Its input includes the heatmaps $\{H_A, H_B, H_C\}$ and the visual feature map $\mathbf{f}$.
The output of the prediction network is the predicted heatmap for keypoint $X_D$, denoted by $\hat{H}_D$. During the training stage, this prediction is compared to its ground truth $\bar{H}_D$ to form the prediction loss $\mathcal{L}_P^O$, which is given by \begin{equation} \mathcal{L}_P^O = ||\hat{H}_D - \bar{H}_D||_2. \end{equation} The predicted heatmap $\hat{H}_D$, combined with the heatmaps $H_B$ and $H_C$ and the visual feature map $\mathbf{f}$, is passed to the verification network $\mathbf{\Gamma}$ as input. The output of $\mathbf{\Gamma}$ is the predicted heatmap for keypoint $X_A$, denoted by $\hat{H}_A$. We then compare it with the ground-truth heatmap $\bar{H}_A$ and define the following self-constraint loss for the prediction network \begin{equation} \mathcal{L}_P^S = ||\hat{H}_A - \bar{H}_A||_2. \end{equation} These two losses are combined as $\mathcal{L}_P = \mathcal{L}_P^O+\mathcal{L}_P^S$ to train the prediction network $\mathbf{\Phi}$. Similarly, for the verification network, the inputs are the heatmaps $\{H_B, H_C, H_D\}$ and the visual feature map $\mathbf{f}$. It predicts the heatmap $\hat{H}_A$ for keypoint $X_A$, which is then combined with $\{H_B, H_C\}$ and $\mathbf{f}$ to form the input to the prediction network $\mathbf{\Phi}$, which predicts the heatmap $\hat{H}_D$. Therefore, the overall loss function for the verification network is given by \begin{equation} \mathcal{L}_V = ||\hat{H}_A - \bar{H}_A||_2 + ||\hat{H}_D - \bar{H}_D||_2. \end{equation} The prediction and verification networks are jointly trained in an iterative manner. Specifically, during the training epochs for the prediction network, the verification network is fixed and used to compute the self-constraint loss for the prediction network. Similarly, during the training epochs for the verification network, the prediction network is fixed and used to compute the self-constraint loss for the verification network. \subsection{Self-Constrained Inference Optimization of Low-Confidence Keypoints} \label{sec:slo} As discussed in Section \ref{sec:intro}, one of the major challenges in pose estimation is to improve the accuracy of hard keypoints, such as the terminal keypoints. In existing approaches for network prediction, the inference process is purely forward. The knowledge learned from the training set is directly applied to the test set. There is no effective mechanism to verify whether the prediction result is accurate or not, since the ground truth is not available. This forward inference process often suffers from generalization problems since there is no feedback process to adjust the prediction results based on the actual test samples. The proposed self-constrained inference optimization aims to address the above issue. The verification network $\mathbf{\Gamma}$, once successfully learned, can be used as a feedback module to evaluate the accuracy of the prediction result. This is achieved by mapping the prediction result $\hat{H}_D$ for the low-confidence keypoint back to the high-confidence keypoint heatmap $\hat{H}_A$. Using the self-constraint loss as an objective function, we can perform local search or refinement of the prediction result $\hat{X}_D$ to minimize the objective function, as formulated in (8).
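For concreteness, the following sketch illustrates one possible realization of this inference-stage refinement. It is only a minimal sketch: the greedy random search, the perturbation rule, and the helper \texttt{to\_heatmap} (which renders a keypoint location into a Gaussian heatmap, denoted $\mathbb{H}(\cdot)$ in the text) are illustrative assumptions rather than the exact implementation.

\begin{verbatim}
import torch

def sco_refine(x_D, H_A, H_B, H_C, f, Gamma, to_heatmap,
               steps=50, delta=1.0):
    # Local search over the terminal keypoint location x_D that minimizes
    # the self-constraint loss || H_A - Gamma(H_B, H_C, H(x); f) ||_2,
    # where Gamma is the learned (frozen) verification network.
    best_x = x_D.clone()
    best_loss = torch.norm(H_A - Gamma(H_B, H_C, to_heatmap(best_x), f))
    for _ in range(steps):
        # Perturb the current best estimate within a small neighborhood.
        step = delta * torch.randn_like(best_x).clamp(-1.0, 1.0)
        x_try = best_x + step
        loss = torch.norm(H_A - Gamma(H_B, H_C, to_heatmap(x_try), f))
        if loss < best_loss:  # accept the candidate only if it improves
            best_x, best_loss = x_try, loss
    return best_x
\end{verbatim}

The default of 50 search steps matches the inference setting used in our experiments in Section 4.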
The basic idea behind this search is as follows: if the prediction $\hat{X}_D$ becomes accurate during the local search, then, using it as the input, the verification network should be able to accurately predict the high-confidence keypoint heatmap $\hat{H}_A$, which implies that the self-constraint loss $||{H}_A - \hat{H}_A||_2$ on the high-confidence keypoint ${X}_A$ should be small. Motivated by this, we propose to perform local search and refinement of the low-confidence keypoint. Specifically, we add a small perturbation $\Delta_D$ onto the predicted result $\hat{X}_D$ and search its small neighborhood to minimize the self-constraint loss: \begin{equation} \hat{X}_D^{*} = \arg\min_{\tilde{H}_D } ||H_A - \mathbf{\Gamma}(H_B, H_C, \tilde{H}_D; \mathbf{f})||_2 \nonumber \end{equation} \begin{equation} \tilde{H}_D = \mathbb{H}(\hat{X}_D +\Delta_D),\ \ ||\Delta_D||_2 \le \delta. \end{equation} Here, $\delta$ controls the search range for the keypoint, and the search direction is dynamically adjusted according to the loss. $\mathbb{H}(\hat{X}_D +\Delta_D)$ represents the heatmap generated from the keypoint location $\hat{X}_D +\Delta_D$ using the Gaussian kernel. In the Supplemental Material, we provide further discussion on the extra computational complexity of the proposed SCIO method. \begin{table} \begin{center} \caption{ Comparison with state-of-the-art methods on COCO test-dev. } \label{tab:sota on COCO} \begin{tabular}{l|l|ccccccc} \hline\noalign{\smallskip} Method & Backbone & Size & \text{$AP$} & $AP^{50}$ & $AP^{75}$ & $AP^{M}$ & $AP^{L}$ & $AR$ \\ \hline\noalign{\smallskip} CMU-Pose \cite{DBLP:conf/cvpr/CaoSWS17} & - & - & 61.8 & 84.9 & 67.5 & 57.1 & 68.2 & 66.5\\ Mask-RCNN \cite{He_2017_ICCV} & R50-FPN & - & 63.1 & 87.3 & 68.7 & 57.8 & 71.4 & - \\ G-RMI \cite{DBLP:conf/cvpr/PapandreouZKTTB17} & R101 & 353$\times$257 & 64.9& 85.5 &71.3& 62.3&70.0 &69.7\\ AE \cite{DBLP:conf/nips/NewellHD17} & - & 512$\times$512& 65.5& 86.8& 72.3& 60.6& 72.6 &70.2\\ Integral Pose \cite{DBLP:conf/eccv/SunXWLW18} & R101 & 256$\times$256& 67.8 &88.2& 74.8 &63.9 &74.0 &-\\ RMPE \cite{Fang_2017_ICCV} &PyraNet& 320$\times$256& 72.3& 89.2 &79.1& 68.0& 78.6& -\\ CFN \cite{DBLP:conf/iccv/HuangGT17} & -& -& 72.6& 86.1& 69.7& \textbf{78.3}& 64.1& -\\ CPN(ensemble) \cite{Chen_2018_CVPR}& ResNet-Incep.
&384$\times$288 &73.0& 91.7& 80.9 &69.5& 78.1 &79.0\\ CSM+SCARB \cite{DBLP:conf/cvpr/SuYXGW19} & R152& 384$\times$288 &74.3 &91.8& 81.9 &70.7 &80.2 &80.5\\ CSANet \cite{DBLP:journals/corr/abs-1905-05355} & R152& 384$\times$288& 74.5& 91.7 &82.1& 71.2 &80.2& 80.7\\ HRNet \cite{DBLP:conf/cvpr/0009XLW19} & HR48& 384$\times$288 &75.5& 92.5& 83.3& 71.9 &81.5& 80.5\\ MSPN \cite{DBLP:journals/corr/abs-1901-00148} & MSPN &384$\times$288 &76.1& {93.4} &83.8& 72.3 &81.5& 81.6\\ DARK \cite{DBLP:conf/cvpr/ZhangZD0Z20} & HR48& 384$\times$288 &76.2& 92.5& 83.6& 72.5 &82.4& 81.1\\ UDP \cite{DBLP:conf/cvpr/0005ZGH20} & HR48 &384$\times$288& 76.5 &92.7& 84.0& 73.0& 82.4& 81.6\\ PoseFix \cite{DBLP:conf/cvpr/MoonCL19}& HR48+R152& 384$\times$288 &76.7 &92.6& 84.1& 73.1& 82.6& 81.5\\ Graph-PCNN \cite{DBLP:conf/eccv/WangLGDW20} & HR48 &384$\times$288 &76.8& 92.6& 84.3& 73.3& 82.7 &81.6\\ \hline\noalign{\smallskip} \textbf{SCIO} (Ours) & HR48 & 384$\times$288 & \textbf{79.2} & \textbf{93.5} & \textbf{85.8} & 74.1 & \textbf{84.2} & \textbf{81.6}\\ \textbf{Performance Gain} & & & \textbf{+2.4} &\textbf{+0.9}&\textbf{+1.5}&&\textbf{+1.5}&\textbf{+0.0}\\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \vspace{-1.0cm} \section{Experiments} In this section, we present experimental results, performance comparisons with state-of-the-art methods, and ablation studies to demonstrate the performance of our SCIO method. \subsection{Datasets} The comparison and ablation experiments are performed on the MS COCO dataset \cite{DBLP:conf/eccv/LinMBHPRDZ14} and the CrowdPose dataset \cite{DBLP:conf/cvpr/LiWZMFL19}, both of which contain very challenging scenes for pose estimation. \textbf{MS COCO Dataset}: The COCO dataset contains challenging images with multi-person poses of various body scales and occlusion patterns in unconstrained environments. It contains 64K images and 270K persons labeled with 17 keypoints. We train our models on train2017 with 57K images including 150K persons and conduct ablation studies on val2017. We test our models on test-dev for performance comparisons with the state-of-the-art methods. We use the Object Keypoint Similarity (OKS) metric to evaluate the performance. \textbf{CrowdPose Dataset}: The CrowdPose dataset contains 20K images and 80K persons labeled with 14 keypoints. Note that, for this dataset, we partition the keypoints into 4 groups, instead of 6 groups as in the COCO dataset. CrowdPose has more crowded scenes. For training, we use the train set, which has 10K images and 35.4K persons. For evaluation, we use the validation set, which has 2K images and 8K persons, and the test set, which has 8K images and 29K persons. \begin{table}[t] \begin{center} \caption{ Comparison with state-of-the-art methods on CrowdPose test-dev.
} \label{tab:sota on Crowdpose} \begin{tabular}{l|cccccccc} \hline\noalign{\smallskip} Method & Backbone & \text{$AP$} & $AP^{med}$ \\ \hline\noalign{\smallskip} Mask-RCNN \cite{He_2017_ICCV} & ResNet101 & 60.3 & - \\ OccNet \cite{DBLP:conf/avss/GoldaKSB19} & ResNet50 & 65.5 & 66.6 \\ JC-SPPE \cite{DBLP:conf/cvpr/LiWZMFL19} & ResNet101 & 66.0 & 66.3 \\ HigherHRNet \cite{Cheng_2020_CVPR} &HR48 & 67.6 & - \\ MIPNet \cite{Khirodkar_2021_ICCV} &HR48 & 70.0 & 71.1 \\ \hline\noalign{\smallskip} \textbf{SCIO} (Ours) & HR48 & \textbf{71.5} & \textbf{72.2} \\ \textbf{Performance Gain} & & \textbf{+1.5} & \textbf{+1.1} \\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{ Comparison with the state of the art using three backbones on COCO test-dev. } \label{tab:backbone} \begin{tabular}{l|cccccccc} \hline\noalign{\smallskip} Method & Backbone & Size & \text{$AP$} & $AP^{50}$ & $AP^{75}$ & $AP^{M}$ & $AP^{L}$ & $AR$ \\ \hline\noalign{\smallskip} SimpleBaseline \cite{DBLP:conf/eccv/XiaoWW18} & R152 &384$\times$288& 73.7& 91.9& 81.1 &70.3& 80.0 &79.0\\ SimpleBaseline & \multirow{2}{*}{R152} & \multirow{2}{*}{384$\times$288} & \multirow{2}{*}{\textbf{77.9}} & \multirow{2}{*}{\textbf{92.1}} & \multirow{2}{*}{\textbf{82.7}} & \multirow{2}{*}{\textbf{72.6}} & \multirow{2}{*}{\textbf{82.3}} & \multirow{2}{*}{\textbf{80.9}}\\ +\textbf{SCIO} (Ours)\\ \textbf{Performance Gain} & & & \textbf{+4.2} & \textbf{+0.2} & \textbf{+1.6} & \textbf{+2.3}& \textbf{+2.3} & \textbf{+1.9}\\ \hline\noalign{\smallskip} HRNet \cite{DBLP:conf/cvpr/0009XLW19} & HR32& 384$\times$288& 74.9& 92.5& 82.8& 71.3& 80.9 &80.1\\ HRNet+\textbf{SCIO} (Ours) & HR32 & 384$\times$288 & \textbf{78.6} & \textbf{92.7} & \textbf{84.2} & \textbf{73.3} & \textbf{82.9} & \textbf{81.5}\\ \textbf{Performance Gain} & & & \textbf{+3.7} & \textbf{+0.2} & \textbf{+1.4} & \textbf{+2.0}& \textbf{+2.0} & \textbf{+1.4}\\ \hline\noalign{\smallskip} HRNet \cite{DBLP:conf/cvpr/0009XLW19} & HR48& 384$\times$288 &75.5& 92.5& 83.3& 71.9 &81.5& 80.5\\ HRNet+\textbf{SCIO} (Ours) & HR48 & 384$\times$288 & \textbf{79.2} & \textbf{93.5} & \textbf{85.8} & 74.1 & \textbf{84.2} & \textbf{81.6}\\ \textbf{Performance Gain} & & & \textbf{+3.7} &\textbf{+1.0}&\textbf{+1.5}&\textbf{+2.2}&\textbf{+2.2}&\textbf{+0.0}\\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \subsection{Implementation Details} For fair comparisons, we use HRNet and ResNet as our backbones and follow the same training configurations as \cite{DBLP:conf/eccv/XiaoWW18} and \cite{DBLP:conf/cvpr/0009XLW19} for ResNet and HRNet, respectively. For the prediction and verification networks, we choose the FCN network \cite{long2015fully}. The networks are trained with the Adam optimizer. We choose a batch size of 36 and an initial learning rate of 0.001. The whole model is trained for 210 epochs. During inference, we set the number of search steps to 50. \subsection{Evaluation Metrics and Methods} Following existing papers \cite{DBLP:conf/cvpr/0009XLW19}, we use the standard Object Keypoint Similarity (OKS) metric, which is defined as: \begin{equation} OKS = \frac{\sum\limits_{i}e^{-d_i^2/2s^2k_i^2}\cdot \delta(v_i>0)}{\sum\limits_i\delta(v_i>0)}. \end{equation} Here $d_i$ is the Euclidean distance between the detected keypoint and the corresponding ground truth, $v_i$ is the visibility flag of the ground truth, $s$ is the object scale, and $k_i$ is a per-keypoint constant that controls falloff.
$\delta(*)$ equals 1 if $*$ holds and 0 otherwise. We report the standard average precision and recall scores $AP^{50}$, $AP^{75}$, $AP$, $AP^{M}$, $AP^{L}$, $AR$, $AP^{easy}$, $AP^{med}$, and $AP^{hard}$ at various OKS thresholds \cite{Geng_2021_CVPR,DBLP:conf/cvpr/0009XLW19}. \begin{table} \begin{center} \caption{ Comparison with DARK and Graph-PCNN with input size 128$\times$96 on COCO val2017. } \label{tab:inputsize} \begin{tabular}{l|cccccccc} \hline\noalign{\smallskip} Method & Backbone & Size & \text{$AP$} & $AP^{50}$ & $AP^{75}$ & $AP^{M}$ & $AP^{L}$ & $AR$ \\ \hline\noalign{\smallskip} DARK \cite{DBLP:conf/cvpr/ZhangZD0Z20} & HR48& 128$\times$96 &71.9& 89.1 & 79.6 & 69.2 & 78.0 &77.9\\ Graph-PCNN \cite{DBLP:conf/eccv/WangLGDW20}& HR48& 128$\times$96& 72.8& 89.2& 80.1& 69.9 &79.0 &78.6\\ \hline\noalign{\smallskip} \textbf{SCIO} (Ours) & HR48& 128$\times$96& \textbf{73.7}& \textbf{89.6}& \textbf{80.9} & \textbf{70.3}& \textbf{79.4} & \textbf{79.1}\\ \textbf{Performance Gain} & & & \textbf{+0.9} & \textbf{+0.4} & \textbf{+0.8} & \textbf{+0.4}& \textbf{+0.9} & \textbf{+0.8}\\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \vspace{-1cm} \subsection{Comparison to State of the Art} We compare our SCIO method with other top-performing methods on the COCO test-dev and CrowdPose datasets. Table \ref{tab:sota on COCO} shows the performance comparisons with state-of-the-art methods on the MS COCO dataset. It should be noted that the best reported performance is listed for each method. We can see that our SCIO method outperforms the current best by a large margin, up to 2.4\%, which is quite significant. Table \ref{tab:sota on Crowdpose} shows the results on the challenging CrowdPose dataset. In the literature, only a few methods have reported results on this challenging dataset. Compared to the current best method MIPNet \cite{Khirodkar_2021_ICCV}, our SCIO method improves the pose estimation accuracy by up to 1.5\%, which is quite significant. In Table \ref{tab:backbone}, we compare our SCIO with state-of-the-art methods using different backbone networks, including the R152, HR32, and HR48 backbones. We can see that our SCIO method consistently outperforms existing methods. Table \ref{tab:inputsize} shows the performance comparison on pose estimation with a smaller input image size, namely 128$\times$96 instead of 384$\times$288. We have only found two methods that report results on small input images. We can see that our SCIO method also outperforms these two methods on small input images.
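As an aside, the OKS metric used throughout these comparisons can be computed directly from its definition above. The following NumPy sketch (array names are illustrative, not taken from any particular code base) mirrors the formula:

\begin{verbatim}
import numpy as np

def oks(pred, gt, v, s, k):
    # pred, gt : (K, 2) arrays of predicted / ground-truth keypoints
    # v        : (K,) visibility flags of the ground truth (v_i)
    # s        : object scale
    # k        : (K,) per-keypoint falloff constants (k_i)
    d2 = np.sum((pred - gt) ** 2, axis=1)     # squared distances d_i^2
    sim = np.exp(-d2 / (2.0 * s**2 * k**2))   # per-keypoint similarity
    labeled = v > 0                           # delta(v_i > 0)
    return sim[labeled].sum() / max(labeled.sum(), 1)
\end{verbatim}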
\begin{table}[t] \begin{center} \caption{Ablation study on COCO val2017.} \label{tab:ablations} \begin{tabular}{l|cccccccc} \hline\noalign{\smallskip} & $AP$ & $AP^{50}$ & $AP^{75}$ & $AR$ \\ \hline\noalign{\smallskip} Baseline & 76.3 & 90.8 & 82.9 & 81.2\\ Baseline + SCL & 78.3 & 92.9 & 84.9 & 81.3\\ Baseline + SCL + SCO &\textbf{79.5} & \textbf{93.7} & \textbf{86.0} & \textbf{81.6} \\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Ablation study of terminal keypoint accuracy on COCO val2017.} \label{tab:ablations of keypoints} \begin{tabular}{l|cccccc} \hline\noalign{\smallskip} & Left & Right & Left & Right & Left & Right\\ & Ear & Ear & Wrist & Wrist & Ankle & Ankle\\ \hline\noalign{\smallskip} HRNet & 0.6637 & 0.6652 & 0.5476 & 0.5511 & 0.3843 & 0.3871\\ \hline\noalign{\smallskip} HRNet + \textbf{SCIO}(Ours) &\textbf{0.7987} & \textbf{0.7949} & \textbf{0.7124} & \textbf{0.7147} & \textbf{0.5526} & \textbf{0.5484}\\ \textbf{Performance Gain} & \textbf{+0.1350} & \textbf{+0.1297} & \textbf{+0.1648} & \textbf{+0.1636}& \textbf{+0.1683} & \textbf{+0.1613}\\ \hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} \begin{figure} \centering \setlength{\abovecaptionskip}{-0.05cm} \setlength{\belowcaptionskip}{-0.2cm} \includegraphics[width=\columnwidth]{fig/example.png} \centering \caption{Three examples of the refinement of predicted keypoints. The top row is the original estimation. The bottom row is the refined version.} \label{fig:example} \end{figure} \begin{figure} \centering \setlength{\abovecaptionskip}{-0.05cm} \setlength{\belowcaptionskip}{-0.4cm} \includegraphics[width=1\columnwidth]{fig/s2.png} \centering \caption{The decrease of the self-constraint loss during local search and refinement of the predicted keypoint.} \label{fig:loss} \end{figure} \subsection{Ablation Studies} To systematically evaluate our method and study the contribution of each algorithm component, we use the HRNet-W48 backbone to perform a number of ablation experiments on the COCO val2017 dataset. Our algorithm has two major new components: Self-Constrained Learning (SCL) and Self-Constrained Optimization (SCO). In the first row of Table \ref{tab:ablations}, we report the baseline (HRNet-W48) results. The second row shows the results with SCL. The third row shows the results with both SCL and SCO of the prediction results. We can clearly see that each algorithm component contributes significantly to the overall performance. In Table \ref{tab:ablations of keypoints}, we also use normalization and sigmoid functions to evaluate the loss of terminal keypoints, and the results show that the confidence of each keypoint from HRNet is greatly improved after using SCIO. Fig. \ref{fig:example} shows three examples of how the estimated keypoints are refined by the self-constrained inference optimization method. The top row shows the original estimation of the keypoints. The bottom row shows the refined estimation of the keypoints. Beside each result image, we show an enlarged view of those keypoints whose estimation errors are large in the original method. Using our self-constrained optimization method, these errors are successfully corrected. Fig. \ref{fig:loss} shows how the self-constraint loss decreases during the search process. We can see that the loss drops quickly and the keypoints are refined to the correct locations.
In the Supplemental Materials, we provide additional experiments and algorithm details for further understanding of the proposed SCIO method. \section{Conclusion} In this work, we observed that human poses exhibit strong structural correlation within keypoint groups, which can be exploited to improve the accuracy and robustness of pose estimation. We partitioned the body keypoints into structural groups, and each group into two subsets, base keypoints and terminal keypoints. We then developed a self-constrained prediction-verification network to perform forward and backward predictions between these two subsets, thereby capturing the local structural correlation between keypoints within each group. Once the networks were successfully learned, we used the verification network as a feedback module to guide the local optimization of the pose estimation results for low-confidence keypoints, with the self-constraint loss on high-confidence keypoints as the objective function. Our extensive experimental results on the benchmark MS COCO dataset demonstrated that the proposed SCIO method is able to significantly improve the pose estimation results. \bibliographystyle{splncs04}
\section{Introduction}\label{intro} The theory of elliptic differential operators acting on sections of a vector bundle over a compact manifold $X$ is a well-established discipline \cite{chazarain2011introduction,hormander2007analysis,wloka1995boundary,grubb2012functional}. If $X$ is boundaryless then we may resort to the fact that any such manifold can be infinitesimally identified with euclidean space around each of its points in order to transplant the symbolic calculus of pseudo-differential operators in flat space to this ``curved'' arena. As a consequence, the main technical result in the theory is proved, namely, the existence of a (pseudo-differential) param\-etr\-ix for the given elliptic differential operator $D$, from which the standard mapping properties (regularity of solutions, Fredholmness, etc.) in the usual scale of Sobolev spaces may be readily derived. From this perspective, we may assert that the resulting theory is a natural outgrowth of Fourier Analysis as applied to the classical procedure of ``freezing the coefficients''. In case the underlying manifold $X$ carries a boundary $\partial X$, a fundamentally distinct approach is needed as the local identification with euclidean space obviously fails to hold in a neighborhood of a point in the boundary (from this standpoint, we are forced to view $\partial X$ as the ``singular locus'' of $X$). We may, however, pass to the double of $X$, say $2X$, and assume that a suitable {\em elliptic} extension of the original operator $D$, say $2D$, is available. Since $\partial(2X)=\emptyset$, we have at our disposal a parametrix for $2D$ which may be employed to construct a pseudo-differential projection $C$ acting on sections restricted to ${\partial X}$ (the Calder\'on-Seeley projector). If the differential operator $B$ defining the given boundary conditions is such that the principal symbol of $A=BC$ is sufficiently non-degenerate (for instance, if the pair $(D,B)$ satisfies the so-called Lopatinsky-Shapiro condition) then a parametrix for $A$ is available (recall that $\partial(\partial X)=\emptyset$) and from this we may deduce the expected mapping properties of the associated boundary value map \begin{equation}\label{bd:map} \mathcal Du=(Du,Bu|_{\partial X}) \end{equation} acting on suitable Sobolev spaces. Thus, the theory of elliptic boundary value problems ultimately hinges on the fact that the corresponding singular locus $\partial X$ not only is intrinsically smooth but also may be easily ``resolved'' after passage to $2X$\footnote{This trick of passing from the bordered manifold $X$ to the boundaryless manifolds $\partial X$ and $2X$ is a key ingredient in index theory \cite{atiyah1964index}.}. We may now envisage a situation where the underlying space $X$ displays a singular locus $Y$ which fails to admit such a simple resolution (as a boundary does). For instance, we may agree that the singular locus $Y\subset X$ has the structure of a smooth closed manifold and that a neighborhood $U\subset X$ of $Y$ is the total space of a fiber bundle \[ \begin{array}{ccc} \mathcal C^F & \hookrightarrow& U \\ && \,\,\,\,\Big\downarrow \pi\\ && Y \end{array} \] whose typical fiber is a cone $\mathcal C^F$ over a closed manifold $F$. For simplicity, we assume that this bundle is trivial, so that $U$ carries natural coordinates $(x,y,z)$, where $y\in Y$, $z\in F$ and $x$ is the radial function obtained after identifying the cone generatrix to the interval $[0,\delta]$, $\delta>0$, with $x=0$ along $Y$.
We now infuse a bit of geometry into this discussion by requiring that the smooth locus $X'=X\backslash Y$ carries a Riemannian metric $\overline g$ so that \begin{equation}\label{met:edge:ex} \overline g|_{U'}=dx^2+x^2g_F(z)+g_Y(y), \quad U'=U\backslash Y, \end{equation} where $g_F$ and $g_Y$ are fixed Riemannian metrics on $F$ and $Y$, respectively. By abuse of language, we say that $x$ is a ``defining function'' for $Y$ (with respect to $\overline g$). The simplest of such ``edge-type'' manifolds occurs when $Y$ collapses into a point, so we obtain a conical manifold (see Definition \ref{conic:metric} below). In any case, we are led to consider {\em geometric} differential operators (i.e. those naturally associated to $\overline g$, such as the Laplacian acting on functions, the Dirac operator acting on spinors, etc.) and pose the general problem of studying their mapping properties in suitable functional spaces. The main purpose of this note is to illustrate through examples how useful this elliptic analysis on singular spaces turns out to be. There remains the problem of transplanting the highly successful ``smooth'' elliptic theory outlined above to this setting. Clearly, the edge-type structure around $Y$ poses an obvious obstruction to a straightforward extension of the pseudo-differential calculus. Indeed, this leads us to suspect that, besides the standard ellipticity assumption on $X'$, a complementary notion of ellipticity around $Y$ is required in order to construct a global parametrix. In this regard, there is no canonical choice and the final formulation depends on which technique one is most familiar with. In the rather informal (and simplified) exposition below, which actually emphasizes the conical case (so that $Y=\{q\}$), we roughly follow the approach developed by B.-W. Schulze and collaborators \cite{schulze1998boundary,egorov2012pseudo}, as we believe it displays an adequate balance between technical subtlety and conceptual transparency. In this setting, a key ingredient is the classical Mellin transform, which allows us to pass from the restriction to $U'$ of the given geometric operator $D$ to its {\em conormal symbol} $\xi_D$. The appropriate complementary notion of ellipticity is then formulated by fixing $\beta\in\mathbb R$ and then requiring that $\xi_D$, viewed as a polynomial function whose coefficients are differential operators acting on the fiber $F$, is invertible when restricted to the vertical line $\Gamma_\beta=\{z\in\mathbb C;{\rm Re}\,z=\beta\}$\footnote{The moral here is that, when trying to freeze the coefficients of $D$ around the tip of the cone, we are inevitably led to contemplate the Mellin transform as the proper analogue of the Fourier transform which, as already noted, does a perfectly good job in the smooth locus.}. Armed with this notion of ellipticity, an appropriate pseudo-differential calculus may be conceived which leads to the construction of the sought-after parametrix; this has as a formal consequence the Fredholmness of $D=D_\beta$ when acting on the so-called Sobolev-Mellin scale $\mathcal H^{\sigma,p}_{\beta}(X)$, {\em independently} of $(\sigma,p)\in\mathbb R\times\mathbb Z_+$. Moreover, the index of $D_\beta$ jumps, by an integer quantity depending on the kernel of $\xi_D|_{\Gamma_\beta}$, precisely at those values of $\beta$ for which ellipticity fails.
For instance, if $D=\Delta$, the Laplacian of the underlying conical metric, which is our main concern here, then $\xi_{\Delta}(z)=z^2+bz+\Delta_{g_F}$ for some $b\in\mathbb R$, so a jump occurs at each $\beta$ satisfying the {\em indicial equation} \begin{equation}\label{int:indicial} \beta^2+b\beta-\mu=0, \end{equation} for some $\mu\in {\rm Spec}(\Delta_{g_F})$, and equals the multiplicity of $\mu$ as an eigenvalue. Thus, if a fairly precise knowledge of ${\rm Spec}(\Delta_{g_F})$ is available, the Fredholm index of $\Delta$ in the whole scale $\mathcal H^{\sigma,p}_{\beta}(X)$ can be determined upon computation at a single value of $\beta$. This final piece of calculation may be carried out by using the fact that $\Delta$ gives rise to a densely defined, unbounded operator, say $\Delta_\beta$, acting on the Hilbert sector of the scale, namely, $\mathcal H^{\bullet,2}_{\beta}(X)$. A separate argument, which boils down to identifying the minimal and maximal domains of this operator, then assures the existence of at least one $\beta_0$ such that $\Delta_{\beta_0}$ has a {\em unique} closed extension (which is necessarily Fredholm). Usually, $\beta_0$ lies in the interval determined by the indicial roots (the solutions of (\ref{int:indicial})) corresponding to $\mu=0$, so that in case $b\neq 0$, $\Delta_\beta$ turns out to be Fredholm with the {\em same} index as long as $\beta$ varies in the interval with endpoints $0$ and $-b$; compare with Theorem \ref{self:adj}. A detailed presentation of the program outlined above for a general elliptic operator is far beyond the scope of this introductory note. Instead, we merely sketch the argument for the Laplacian in conical manifolds (Sections \ref{conic} and \ref{proof:m:t}) and indicate how the method can be extended to other geometric operators by considering the case of the Dirac operator (Section \ref{map:dirac}). In fact, we focus here on illustrating the versatility of this theory by including a few representative applications of these mapping properties in Geometric Analysis (Sections \ref{examp:op} and \ref{nonempty:bd}). We insist, however, that the material discussed here is standard, drawn from a number of sources, so no claim is made regarding originality (except perhaps for the naive computations leading to Theorem \ref{albin:mellin:enh}). Indeed, this note has been written in the expectation that, after reading our somewhat informal account of a noticeably difficult subject, the diligent reader will be able to fill the formidable gaps upon consultation of the original sources. In this regard, we note that, alternatively to the path just outlined, the mapping results described below may be obtained as a consequence of the powerful ``boundary fibration calculus'' \cite{melrose1993atiyah,melrose1990pseudodifferential,mazzeo1991elliptic,lauter2003pseudodifferential,grieser2001basics,melrose1996differential,gil2007geometry,krainer2018friedrichs} (a comparison of Melrose's $b$-calculus and Schulze's cone algebra appears in \cite{lauter2001pseudodifferential}). Also, direct approaches, which in a sense avoid the consideration of the corresponding pseudo-differential formalism, are available in each specific application we consider here \cite{almaraz2014positive,andersson1993elliptic,bartnik1986mass,chaljub1979problemes,lockhart1985elliptic,lee2006fredholm,almaraz2021spacetime,pacini2010desingularizing}.
We believe, however, that a presentation of their mapping properties as a repertory of results stemming from a common source contributes to highlighting the unifying features of geometric differential operators in singular spaces. \section{Fredholmness of the Laplacian in conformally conical manifolds}\label{conic} In this section, we define the class of conformally conical manifolds (this entails a slight modification of (\ref{met:edge:ex}) which incorporates a conformal factor involving a suitable power of the defining function $x$) and discuss a few representative examples in this category. We then introduce the relevant functional spaces (the Sobolev-Mellin scale $\mathcal H^{\sigma,p}_\beta$) and formulate a result (Theorem \ref{self:adj}) which precisely locates the set of values of $\beta$ for which the corresponding Laplacian is Fredholm with an explicitly computable index. \subsection{Conformally conical manifolds}\label{con:man} Given a closed Riemannian manifold $(F,g_F)$ of dimension\footnote{In fact, the general theory also works fine for $n=2$ and the assumption $n\geq 3$ is only needed for Theorem \ref{self:adj} and its consequences.} $n-1\geq 2$, we consider the {\em infinite cone} $(\mathcal C^{(F,g_F)},g_{\mathcal C,F})$ over $(F,g_F)$: \[ \mathcal C^{(F,g_F)}={\mathbb R_{>0}\times F} \] endowed with the cone metric \begin{equation}\label{cone:met} g_{\mathcal C,F}=dr^2+r^2g_F, \quad r\in\mathbb R_{>0}. \end{equation} We then define the {\em truncated cones} by \[ \mathcal C^{(F,g_F)}_0=\{(r,z)\in \mathcal C^{(F,g_F)};z\in F, 0<r<1\} \] and \[ \mathcal C^{(F,g_F)}_\infty=\{(r,z)\in \mathcal C^{(F,g_F)};z\in F, 1<r<+\infty\}, \] both endowed with the induced metric. We also consider the {\em infinite cylinder} $(\mathsf C^{(F,g_F)},g_{\mathsf C, F})$ over $(F,g_F)$: \[ \mathsf C^{(F,g_F)}=\mathbb R\times F \] endowed with the product metric \[ g_{\mathsf C,F}=dr^2+g_F. \] We now consider a compact topological space $X$ which is smooth everywhere except possibly at a point, say $q$. We endow the smooth locus $X':=X\backslash\{q\}$, $\dim X'=n\geq 3$, with a Riemannian metric $\overline g$ and assume that there exists a neighborhood $U$ of $q$ (the conical region) such that $U':=U \backslash \{q\}$ is diffeomorphic to $\mathcal C^{(F,g_F)}_0$ and \begin{equation}\label{meg:ov:g} \overline g|_{U'}=g_{\mathcal C,F}=dx^2+x^2g_F, \end{equation} where for convenience we have set $x=r$ in the description of the cone metric to emphasize that $x$ is viewed as a defining function for $\{q\}$; compare with (\ref{cone:met}). \begin{definition}\label{conic:metric} A {\em conformally conical manifold} is a pair $(X,g_s)$, where $X$ is as above, and $g_s$ is a Riemannian metric in $X'$ which, restricted to $U'$, satisfies \begin{equation}\label{met:edge} g_s:=x^{2s-2}\left(\overline g+ o(1)\right),\quad s\in\mathbb R, \end{equation} as $x\to 0$. We then say that $(F,g_F)$ is the {\em link} of $(X,g_s)$. \end{definition} \begin{remark}\label{flex:conf} Our terminology is justified by the presence of the conformal factor next to $\overline g+o(1)$, which allows us to arrange the examples below in a single geometric structure. \end{remark} \begin{remark}\label{decay} In applications, it is often necessary to append decay relations to (\ref{met:edge}) for the corresponding derivatives up to second order at least; see Remark \ref{order} below. \end{remark} \begin{remark}\label{complete} $(X',g_s)$ is complete if and only if $s\leq 0$.
\end{remark} We will be interested in doing analysis in the open manifold $(X',g_s)$. More precisely, we will study the mapping properties of the Laplacian $\Delta_{g_s}$ in an appropriate scale of Sobolev spaces. Before proceeding, however, we discuss a few examples, which highlight the distinguished roles played by the ``rigid'' spaces $\mathcal C^{(F,g_F)}_0$, $\mathcal C^{(F,g_F)}_\infty$ and $\mathsf C^{(F,g_F)}$ as asymptotic models. \begin{example}\label{ex:conic} (${\rm AC}_0$ manifolds) Let $(V,h)$ be an open manifold for which there exists a compact $K\subset V$ and a diffeomorphism $\psi: \mathcal C^{(F,g_F)}_0\to V\backslash K$ such that, as $r\to 0$, \[ |\nabla_b^k(\psi^*h-g_{\mathcal C,F})|_b=O(r^{\nu_0-k}), \quad 0\leq k\leq m. \] Here, $m\geq 0$ is the order and $\nu_0>0$ is the rate of decay. {Also, the subscript $b$ refers to invariants attached to the ``rigid'' conical metric in the model space (the same notation is used in the examples below).} We then say that $(V,h)$ is an {\em asymptotically conical manifold at the origin} (${\rm AC}_0$). Clearly, if we take $x=r$, this corresponds to a {conformally} conical manifold with $s=1$ in (\ref{met:edge}). \end{example} \begin{example}\label{ex:af} (${\rm AC}_\infty$ manifolds) Let $(V,h)$ be an open manifold for which there exists a compact $K\subset V$ and a diffeomorphism $\psi:\mathcal C^{(F,g_F)}_\infty\to V\backslash K$ such that, as $r\to +\infty$, \[ |\nabla_b^k(\psi^*h-g_{\mathcal C,F})|_b=O(r^{-\nu_\infty-k}), \quad 0\leq k\leq m. \] Here, $m\geq 0$ is the order and $\nu_\infty>0$ is the rate of decay. We then say that $(V,h)$ is an {\em asymptotically conical manifold at infinity} $({\rm AC}_\infty)$. Clearly, if we take $x=r^{-1}$, this corresponds to a {conformally} conical manifold with $s=-1$ in (\ref{met:edge}). \end{example} \begin{example}\label{asym:cyl} (${\rm ACyl}$ manifolds) Let $(V,h)$ be an open manifold for which there exists a compact $K\subset V$ and a diffeomorphism $\psi:\mathsf C^{(F,g_F)}_\infty\to V\backslash K$ such that, as $r\to +\infty$, \[ |\nabla_b^k(\psi^*h-g_{\mathsf C,F})|_b=O(e^{-(\nu_c+k)r}), \quad 0\leq k\leq m. \] Here, $m\geq 0$ is the order and $\nu_c>0$ is the rate of decay. We then say that $(V,h)$ is an {\em asymptotically cylindrical manifold} $({\rm ACyl})$. Clearly, if we take $x=e^{-r}$, this corresponds to a conformally conical manifold with $s=0$ in (\ref{met:edge}). These manifolds play a central role in the formulation and proof of the Atiyah-Patodi-Singer index theorem \cite{atiyah1975spectral,melrose1993atiyah}. \end{example} \begin{example} \label{a0:ainf} (${\rm AC}_0/{\rm AC}_\infty$ manifolds) Assume more generally that $V\backslash K$ decomposes as a {\em finite} union of ends which are either ${\rm AC}_0$ or ${\rm AC}_\infty$. These manifolds, which are called {\em conifolds} in \cite{pacini2013special}, appear prominently in the study of moduli spaces of special Lagrangian submanifolds; see also \cite{joyce2003special}. \end{example} \begin{remark}\label{order} In all examples above, we take $m\geq 2$. 
\end{remark} \subsection{Sobolev-Mellin spaces and Fredholmness}\label{mel:sob:fred} Given $\beta\in \mathbb R$, an integer $k\geq 0$ and $1< p<+\infty$, we define $\mathcal H_\beta^{k,p}(X)$ to be the space of all distributions $u\in L^p_{\rm loc}(X',d{\rm vol}_{\overline g})$ such that: \begin{itemize} \item for any cutoff function $\varphi$ with $\varphi\equiv 1$ near $q$ and $\varphi\equiv 0$ outside $U$, we have that $(1-\varphi)u$ lies in the standard Sobolev space $H^{k,p}(X',d{\rm vol}_{\overline g})$; \item there holds \begin{equation}\label{sob:def:con} x^{\beta}\mathsf D^j\partial_z^\alpha(\varphi u)(x,z)\in L^p(X',d_+xd{\rm vol}_{g_F}), \quad j+|\alpha|\leq k. \end{equation} Here, $\mathsf D=x\partial_x$ is the Fuchs operator and $d_+x=x^{-1}dx$. \end{itemize} Using duality and interpolation, we may define $\mathcal H_\beta^{\sigma,p}(X)$ for any $\sigma\in\mathbb R$. As usual, $\mathcal H_\beta^{\sigma,p}(X)$ is naturally a Banach space which is a Hilbert space for $p=2$. For instance, when $k=0$ the $p^{\rm th}$ power of the corresponding norm reduces to the integral \begin{equation}\label{norm:near} \int|x^\beta u(x,z)|^pd_+xd{\rm vol}_{g_F}(z) \end{equation} near $q$. These are the weighted Sobolev-Mellin spaces considered in \cite{schrohe1999ellipticity}, except that there they are labeled by \begin{equation}\label{beta:gamma} \gamma=\frac{n}{2}-\beta. \end{equation} In order to confirm the scale character of these spaces, we recall the relevant embedding theorem; see \cite[Remark 2.2]{coriasco2007realizations} and \cite[Corollary 2.5]{roidos2013cahn}. \begin{proposition}\label{sob:emb} One has a continuous embedding $\mathcal H_{\beta'}^{\sigma',p}(X)\hookrightarrow \mathcal H_\beta^{\sigma,p}(X)$ if $\beta'{\leq}\beta$ and $\sigma'{\geq}\sigma$, which is compact if the strict inequalities hold. Also, if ${\sigma}>n/p$ then any $u\in \mathcal H_\beta^{\sigma,p}(X)$ is continuous in $X'$ and satisfies $u(x)=O(x^{-\beta})$ as $x\to 0$. \end{proposition} It is clear that the Laplacian $\Delta_{g_s}$ defines a bounded map \begin{equation}\label{lap:cont:sob} \Delta_{g_s,\beta}:\mathcal H_\beta^{\sigma,p}(X)\to \mathcal H_{\beta+2s}^{\sigma-2,p}(X), \end{equation} and our primary concern here is to study its mapping properties. As already discussed in the Introduction, a key point in the analysis of an elliptic operator on a conformally conical manifold is that, differently from what happens in the smooth case, invertibility of its principal symbol does not suffice to ensure that a parametrix exists. In particular, it is not clear whether (\ref{lap:cont:sob}) is Fredholm for some value of the weight $\beta$. It turns out that this Fredholmness property is insensitive to the pair $(\sigma,p)$ but depends crucially on $\beta$ \cite{schrohe1999ellipticity}. Indeed, this map is Fredholm for all but a discrete set of values of $\beta$, with the index possibly jumping only when $\beta$ reaches these exceptional values. We now state a useful result that confirms this expectation for the map (\ref{lap:cont:sob}). For this, we introduce the quantity \begin{equation}\label{exp:a} a=(n-2)s. \end{equation} If we further assume that $n\geq 3$, then $s\neq 0$ implies $a\neq 0$ as well. We then denote by $I_a$ the open interval with endpoints $a$ and $0$. \begin{theorem}\label{self:adj} If $n\geq 4$ and $a\neq 0$ then the Laplacian map $\Delta_{g_s,\beta}$ in (\ref{lap:cont:sob}) is Fredholm of index $0$ whenever $\beta\in I_a$.
\end{theorem} As already remarked, from this we can read off the Fredholm index of $\Delta_{g_s,\beta}$ as $\beta$ varies if a complete knowledge of the spectrum of $\Delta_{g_F}$ is available. As another useful application of Theorem \ref{self:adj}, we mention the following existence result, which is just a restatement of the Fredholm alternative. \begin{corollary} If $\beta\in I_{a}$, $a\neq 0$, then the map (\ref{lap:cont:sob}) is surjective if and only if it is injective. \end{corollary} \begin{remark}\label{a:0} The case $a=0$ may also be treated by the method leading to Theorem \ref{self:adj}. It turns out that $\Delta_{g_0,\beta}$ is Fredholm for any $\beta$ such that $\beta^2\notin {\rm Spec}(\Delta_{g_F})$; see Remark \ref{a:0:2}. \end{remark} \section{The proof of Theorem \ref{self:adj} (a sketch)}\label{proof:m:t} Our aim here is to sketch the proof of Theorem \ref{self:adj}. This may be confirmed in a variety of ways on inspection of standard sources; see for instance \cite{melrose1993atiyah,mazzeo1991elliptic,schulze1998boundary,lesch1997differential,egorov2012pseudo,melrose1996differential}, among others. However, since in these references the arguments leading to Theorem \ref{self:adj} appear embedded in rather elaborate theories, we include a sketch of the proof here in the setting of the Sobolev-Mellin spaces introduced above. In fact, this section may be regarded as an essay on these fundamental contributions as applied to a rather simple situation. Since $\Delta_{g_s}$ is elliptic on $X'$, a local parametrix may be found in this region by standard methods. Thus, analyzing the mapping properties of $\Delta_{g_s}$ involves the consideration of a suitable notion of ellipticity in the conical region $U'$. Starting from (\ref{meg:ov:g}) and (\ref{met:edge}), we easily compute that the Laplacian $\Delta_{g_s}$ satisfies \begin{equation}\label{conorm} P:=x^{2s}\Delta_{g_s}|_{U'}=\mathsf D^2+a\mathsf D+\Delta_{g_F}+o(1), \end{equation} where $\mathsf D=x\partial_x$. As already noted, the ingredients needed to establish the mapping properties of $\Delta_{g_s}$ include not only its ellipticity when restricted to the smooth locus, but also the invertibility of the so-called {\em conormal symbol}, which is obtained by freezing the coefficients of $P$ at $x=0$, that is, passing to \begin{equation}\label{con:symb:free} P_0=\mathsf D^2+a\mathsf D+\Delta_{g_F}, \end{equation} and then applying the Mellin transform $\mathsf M$; see \cite{schrohe1999ellipticity,schulze1998boundary,egorov2012pseudo} and also (\ref{conor:symb}) below, where this construction is actually applied to an appropriate conjugation of $P_0$. Recall that $\mathsf M$ is the linear map that to each well-behaved function $f:\mathbb R_+\to\mathbb C$ associates another function $\mathsf M(f):U_f\subset \mathbb C\to \mathbb C$ by means of \[ \mathsf M(f)(\zeta)=\int_0^{+\infty}f(x)x^{\zeta}d_+x, \quad d_+x=x^{-1}dx. \] For our purposes, it suffices to know that this transform satisfies the following properties: \begin{itemize} \item For each $\theta\in\mathbb R$, the map \[ x^\theta L^2(\mathbb R_+,d_+x)\stackrel{\mathsf M}{\longrightarrow} L^2(\Gamma_{{-\theta}}), \] is an isometry. Here, $\Gamma_{{\alpha}}=\{\zeta\in\mathbb C;{\rm Re}\,\zeta={\alpha}\}$, $\alpha\in\mathbb R$, and $x^\theta L^2(\mathbb R_+,d_+x)$ is endowed with the inner product \begin{equation}\label{inner:prod} \langle u,v\rangle_{x^\theta L^2(\mathbb R_+,d_+x)}=\langle x^{-\theta}u,x^{-\theta}v\rangle_{L^2(\mathbb R_+,d_+x)}.
\end{equation} Moreover, each element $u$ in the image extends holomorphically to the half-space $\{\zeta\in\mathbb C;{\rm Re}\,\zeta>{-\theta}\}$ (Notation: $u\in\mathscr H(\{{\rm Re}\,\zeta>{-\theta}\})$). \item $\mathsf M(\mathsf Df)(\zeta)=-\zeta\mathsf M(f)(\zeta)$. \end{itemize} In particular, the conormal symbol \begin{equation}\label{con:symb:lap} \xi_{\Delta_{g_s}}(\zeta)=\zeta^2-a\zeta+\Delta_{g_F} \end{equation} is obtained by Mellin transforming (\ref{con:symb:free}). Note that this is a polynomial function with coefficients in the space of differential operators on the link $(F,g_F)$. \begin{definition}\label{def:ellip:con} The Laplacian $\Delta_{g_s}$ is {\em elliptic} (with respect to some $\beta\in\mathbb R$) if \[ \xi_{\Delta_{g_s}}(\zeta):H^{\sigma,p}(F,d{\rm vol}_{g_F})\to H^{\sigma-2,p}(F,d{\rm vol}_{g_F}) \] is invertible for any $\zeta\in\Gamma_\beta$. Here, $H^{\sigma,p}$ denotes the standard Sobolev scale. \end{definition} \begin{remark}\label{rem:gen:geo} Inherent in the discussion above is the fact that the Laplacian can be written as a polynomial in $\mathsf D$ in the conical region. More generally, we may consider any elliptic operator $D$ satisfying, as $x\to 0$, \[ x^\nu D|_{U'}=\sum_{i=0}^{m}A_i(x)\mathsf D^i+o(1),\quad \nu>0, \] where each $A_i(x)$ is a differential operator of order at most $m-i$ acting on (sections of a vector bundle over) $F$ \cite{schulze1998boundary,lesch1997differential}. Definition \ref{def:ellip:con} then applies to the corresponding conormal symbol, which is \[ \xi_D(\zeta)=\sum_{i=0}^m(-1)^iA_i(0)\zeta^i. \] Besides the Laplacian, in the next section we consider another most honorable example, namely, the Dirac operator acting on spinors. \end{remark} Armed with this notion of ellipticity, we may set up an appropriate pseudo-differential calculus that enables the construction of a parametrix for $\Delta_{g_s}$ in the Sobolev-Mellin scale $\mathcal H^{\sigma,p}_\beta(X)$; the quite delicate argument can be found in \cite{schulze1998boundary,egorov2012pseudo}. As in the smooth case, this turns out to be formally equivalent to the assertion that the map (\ref{lap:cont:sob}) is Fredholm. \begin{remark}\label{rem:conv} The converses in the chain of implications above also hold true, so that (\ref{lap:cont:sob}) fails to be Fredholm precisely at those $\beta$ for which the invertibility condition fails. More precisely, if we set \begin{equation}\label{ind:roots:up} \Xi_{\beta}:=\left\{\zeta\in\mathbb C;\zeta^2-a\zeta-\mu=0, \mu\in{\rm Spec}(\Delta_{g_F})\right\}\cap\Gamma_{\beta}, \end{equation} then the Laplacian map in (\ref{lap:cont:sob}) fails to be Fredholm if and only if $\Xi_{\beta}\neq\emptyset$. This takes place along the discrete set formed by those $\beta=\beta_\mu$ satisfying the {\em indicial equation} \[ \beta_\mu^2-a\beta_\mu-\mu=0, \quad \mu\in {\rm Spec}(\Delta_{g_F}), \] and a further argument shows that the corresponding jump in the Fredholm index equals \begin{equation}\label{jump:fac} \pm\dim\ker \xi_{\Delta_{g_s}}(\beta_\mu)=\pm\dim\ker (\Delta_{g_F}+\mu). \end{equation} \end{remark} From the previous remark, a first step toward computing the Fredholm index of $\Delta_{g_s,\beta}$ as $\beta$ varies is to determine it at a single value of $\beta$.
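Before doing so, we remark in passing that, once ${\rm Spec}(\Delta_{g_F})$ is known, the exceptional weights of Remark \ref{rem:conv} are elementary to tabulate. The short sketch below does this numerically; the round-sphere spectrum supplied as input is merely an illustrative choice of link.

\begin{verbatim}
import numpy as np

def exceptional_weights(spec_F, n, s):
    # Real solutions beta of the indicial equation
    #     beta^2 - a*beta - mu = 0,   mu in Spec(Delta_{g_F}),
    # with a = (n - 2)*s; at these weights the Laplacian map between
    # Sobolev-Mellin spaces fails to be Fredholm.
    a = (n - 2) * s
    betas = []
    for mu in spec_F:
        r = np.sqrt(a * a + 4.0 * mu)  # discriminant >= 0 since mu >= 0
        betas.extend([(a - r) / 2.0, (a + r) / 2.0])
    return sorted(set(betas))

# Illustration: link F = round unit sphere S^{n-1}, whose (nonnegative)
# Laplace spectrum is mu_k = k*(k + n - 2), k = 0, 1, 2, ...
n, s = 4, 1
spec_F = [k * (k + n - 2) for k in range(4)]
print(exceptional_weights(spec_F, n, s))
# -> [-3.0, -2.0, -1.0, 0.0, 2.0, 3.0, 4.0, 5.0]
\end{verbatim}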
A possible approach to determining the index at a single value of $\beta$ is to consider the {\em core} Laplacian \begin{equation}\label{core:lap} (\Delta_{g_s},C^\infty_c(X')):C^\infty_c(X')\subset \mathcal H_\beta^{0,2}(X)\to \mathcal H_\beta^{0,2}(X), \end{equation} a densely defined operator whose closure is the operator $(\Delta_{g_s},D_{\rm min}(\Delta_{g_s}))$, with domain $D_{\rm min}(\Delta_{g_s})$ formed by those $u\in \mathcal H_\beta^{0,2}(X)$ such that there exists $\{u_n\}\subset C^\infty_c(X')$ with $u_n\to u$ and $\{\Delta_{g_s}u_n\}$ Cauchy in $\mathcal H_\beta^{0,2}(X)$. Also, we may consider $(\Delta_{g_s},D_{\rm max}(\Delta_{g_s}))$, where \[ D_{\rm max}(\Delta_{g_s})=\left\{u\in \mathcal H_\beta^{0,2}(X);\Delta_{g_s}u\in \mathcal H_\beta^{0,2}(X)\right\}. \] Regarding these notions, the following facts are well-known. \begin{itemize} \item $D_{\rm min}(\Delta_{g_s})\subset D_{\rm max}(\Delta_{g_s})$; \item If $(\hat\Delta_{g_s},{\rm Dom}(\hat\Delta_{g_s}))$ is a closed extension of $(\Delta_{g_s},C^\infty_c(X'))$ then \[ D_{\rm min}(\Delta_{g_s})\subset {\rm Dom}(\hat\Delta_{g_s})\subset D_{\rm max}(\Delta_{g_s}). \] \end{itemize} Hence, in order to understand the set of closed extensions, we need to look at the subspaces of the {\em asymptotics space} \begin{equation}\label{as:space} \mathcal Q(\Delta_{g_s}):=\frac{D_{\rm max}(\Delta_{g_s})}{D_{\rm min}(\Delta_{g_s})}. \end{equation} Thus, $\mathcal Q(\Delta_{g_s})=\{0\}$ implies that the Laplacian {has a unique closed extension and hence the associated map (\ref{lap:cont:sob}) is Fredholm. In particular, it is essentially self-adjoint (hence with a vanishing index) whenever it is symmetric}. From this, the remaining values of the index as $\beta$ varies may be determined by means of the jump factors in (\ref{jump:fac}). The properties of the Mellin transform mentioned above suggest working with the ``Mellin'' volume element \[ d{\rm vol}_{\mathsf M}=x^{-1}dxd{\rm vol}_{g_F} \] instead of the volume element $x^{n-1}dxd{\rm vol}_{g_F}$ associated to $\overline g$. This is implemented by working ``downstairs'' in the diagram below, where $\tau=x^{\frac{n}{2}}$ is unitary and $\Delta^\tau_{g_s}=\tau\Delta_{g_s}\tau^{-1}$: \begin{equation}\label{diag} \begin{array}{ccc} D_{\rm max}(\Delta_{g_s})\subset \mathcal H_\beta^{0,2}(X) & \xrightarrow{\,\,\,\,\Delta_{g_s}\,\,\,\,} & \mathcal H_\beta^{0,2}(X) \\ \tau \Big\downarrow & & \Big\downarrow \tau \\ D_{\rm max}(\Delta^\tau_{g_s})\subset {x^{\frac{n}{2}-\beta}} L^2(X',d{\rm vol}_{\mathsf M}) & \xrightarrow{\,\,\,\,\Delta^\tau_{g_s}\,\,\,\,} & {x^{\frac{n}{2}-\beta}} L^2(X',d{\rm vol}_{\mathsf M}) \end{array} \end{equation} { \begin{remark}\label{self-ad-delta} It is immediate to check that, near the singularity, \[ \langle \Delta_{g_s}u,v\rangle_{\mathcal H^{0,2}_\beta(X)}=\int x^{2\beta-ns}v\Delta_{g_s}u\, d{\rm vol}_{g_s}, \] so that the horizontal maps in (\ref{diag}) define symmetric operators if and only if $\beta=ns/2$. Notice that the same conclusion holds true for any operator which is formally self-adjoint with respect to $d{\rm vol}_{g_s}$. \end{remark} } Let $u\in D_{\rm max}(\Delta_{g_s})$. Thus, $v:=\tau u\in D_{\rm max}(\Delta^\tau_{g_s})$ satisfies $x^{\beta-n/2}v\in L^2(X',d{\rm vol}_{\mathsf M})$, so that $\mathsf M(v)\in{\mathscr H}(\{{\rm Re}\,\zeta>\beta-n/2\})$.
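Before proceeding, we record the elementary conjugation identity behind the formula for $P_0^\tau$ displayed next (a one-line computation from the definitions above): since $\tau=x^{n/2}$ and $\mathsf D=x\partial_x$, \[ \tau\mathsf D\tau^{-1}u=x^{\frac{n}{2}}\,x\partial_x\!\left(x^{-\frac{n}{2}}u\right)=\mathsf D u-\frac{n}{2}u, \quad\text{that is,}\quad \tau\mathsf D\tau^{-1}=\mathsf D-\frac{n}{2}, \] so conjugating (\ref{con:symb:free}) by $\tau$ amounts to substituting $\mathsf D\mapsto\mathsf D-n/2$ there.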
On the other hand, if \[ P_0^\tau:=\tau P_0\tau^{-1}= \mathsf D^2+(a-n)\mathsf D+\frac{n(n-2a)}{4}+\Delta_{g_F}, \] then $w:=P_0^\tau v$ satisfies $x^{-2s+\beta-n/2}w=x^{\beta-n/2}\tau\Delta_{g_s}u\in L^2(X',d{\rm vol}_{\mathsf M})$, so that $\mathsf M(w)\in{\mathscr H}(\{{\rm Re}\,\zeta> -2s+\beta-n/2\})$. By taking Mellin transform, \[ \mathsf M(w)(\zeta,z,y)=\xi_{\Delta^\tau_{g_s}}(\zeta)\mathsf M(v)(\zeta,z,y), \] where \begin{equation}\label{conor:symb} \xi_{\Delta^\tau_{g_s}}(\zeta)= \zeta^2+(n-a)\zeta+\frac{n(n-2a)}{4}+\Delta_{g_F} \end{equation} is the conormal symbol of $\Delta^\tau_{g_s}$. The conclusion is that, at least formally, \begin{equation}\label{formally} \mathsf M(v)(\zeta,z,y)=\xi_{\Delta^\tau_{g_s}}^{-1}(\zeta)\mathsf M(w)(\zeta,z,y), \end{equation} but we should properly handle the zeros of $\xi_{\Delta^\tau_{g_s}}$ located within the critical strip $\Gamma_{-2s+\beta-n/2,\beta-n/2}$, which we may gather together in the {\em asymptotics set}\footnote{Note that $\Lambda_\beta^\tau\subset\mathbb R$ by (\ref{roots}) and the fact that ${\rm Spec}(\Delta_{g_F})\subset[0,+\infty)$.} \[ \Lambda^\tau_{\beta}:=\left\{\zeta\in\mathbb C;Q_\mu(\zeta)=0,\mu\in{\rm Spec}(\Delta_{g_F})\right\}\cap \Gamma_{-2s+\beta-n/2,\beta-n/2}. \] Here, $\Gamma_{c,c'}=\{\zeta\in \mathbb C; c<{\rm Re}\,\zeta<c'\}$ for $c<c'$ and \[ Q_\mu(\zeta)=\zeta^2+(n-a)\zeta+\frac{n(n-2a)}{4}-\mu. \] Since the roots of $Q_\mu$ are explicitly given by \begin{equation}\label{roots} \frac{a-n}{2}\pm \delta^\pm_{\mu},\quad \delta^\pm_{\mu}=\pm\frac{1}{2}\sqrt{a^2+4\mu}, \end{equation} we may alternatively consider \[ \tilde\Lambda^{\tau,\pm}_{\beta}=\left\{\mu\in{\rm Spec}(\Delta_{g_F});\delta_{\mu}^{\pm}\in\Gamma_{-2s+\beta-a/2,\beta-a/2}\right\}. \] After applying Mellin inversion to (\ref{formally}) and using the appropriate pseudo-differential calculus \cite{lesch1997differential,schrohe1999ellipticity,schulze1998boundary}, we obtain \begin{equation}\label{expansion} v-w=\sum_{\mu\in\tilde\Lambda^{\tau,\pm}_{\beta}}A_\mu(x,z,y), \end{equation} where the right-hand side represents a generic element in the asymptotics space $\mathcal Q(\Delta_{g_s})$. Thus, the elements in $\tilde\Lambda_\beta^{\tau,\pm}$ constitute the obstruction to having $v=w$ (and hence, $\mathcal Q(\Delta_{g_s})=\{0\}$). From this we easily derive the next results. \begin{theorem}\label{self:crit} The core Laplacian {has a unique closed extension} whenever $\tilde\Lambda^{\tau,\pm}_{\beta}=\emptyset$. \end{theorem} \begin{corollary}\label{map:p:lap} Assume that $n\geq 4$. Then the core Laplacian has a unique closed, Fredholm extension if either i) $s\leq 0$ or ii) $s>0$ and $\beta=ns/2$. {In both cases, it is essentially self-adjoint for $\beta=ns/2$.} \end{corollary} \begin{proof} The case $s\leq 0$ follows from the fact that $\Gamma_{-2s+\beta-n/2,\beta-n/2}=\emptyset$, which clearly implies that $\tilde\Lambda^{\tau,\pm}_{\beta}=\emptyset$ as well. If $s>0$ then \[ |\delta^\pm_\mu|\geq\frac{a}{2}=\frac{(n-2)s}{2}\geq s, \] so that \[ \tilde\Lambda^{\tau,\pm}_{ns/2}=\left\{\mu\in{\rm Spec}(\Delta_{g_F});\delta_{\mu}^{\pm}\in\Gamma_{-s,s}\right\}=\emptyset \] indeed. {The last assertion follows from Remark \ref{self-ad-delta}}. \end{proof} In each case of Corollary \ref{map:p:lap}, the corresponding map (\ref{lap:cont:sob}) is Fredholm and this turns out to be a crucial step in the proof of Theorem \ref{self:adj}. 
Indeed, we already know that Fredholmness and the associated index do not depend on the pair $(\sigma,p)$ but only on $\beta$. The key point now is that, as already explained in the slightly different (but equivalent) setting of the discussion surrounding Remark \ref{rem:conv}, the strategy to preserve Fredholmness as $\beta$ varies involves precluding the crossing of zeros of $\xi_{\Delta_{g_s}^\tau}$ through the critical line $\Gamma_{\beta-n/2}$ (this is what ellipticity is all about). Precisely, we consider \[ \Xi^\tau_{\beta}:=\left\{\zeta\in\mathbb C;Q_\mu(\zeta)=0, \mu\in{\rm Spec}(\Delta_{g_F})\right\}\cap\Gamma_{\beta-n/2}, \] and the relevant result is that $\Delta_{g_s}$ remains Fredholm with the {\em same} index as long as $\Xi^\tau_{\beta}=\emptyset$; see \cite[Section 3]{schrohe1999ellipticity} or \cite[Subsection 2.4.3]{schulze1998boundary}. Certainly, this is the case for all $\beta\in I_a$, $a\neq 0$. Since (the closure of) this interval always contains $ns/2$, the proof of Theorem \ref{self:adj} follows from Corollary \ref{map:p:lap} and the remarks above. \begin{remark}\label{a:0:2} The case $a=0$ follows by a similar argument observing that the roots of $Q_\mu(\zeta)=0$ are $ -{n}/{2}\pm\sqrt{\mu}, $ so that Fredholmness fails whenever $\beta=\pm\sqrt{\mu}$; compare with Remark \ref{a:0} and \cite[Theorem 6.2]{lockhart1985elliptic}. \end{remark} \begin{remark}\label{upstairs} We emphasize that the authors in \cite{schrohe1999ellipticity} and \cite{schulze1998boundary} work ``upstairs'' with respect to the diagram (\ref{diag}), that is, before applying the conjugation $\tau=x^{n/2}$, so instead of $\Xi^\tau_{\beta}$ they consider $\Xi_{\beta}$ as in (\ref{ind:roots:up}). Notice that the polynomial equation here is the Mellin transform of $P_0$ whereas the critical line is shifted to the right by $n/2$. It is immediate to check that both approaches produce the same numerical results for the Fredholmness of $\Delta_{g_s,\beta}$. \end{remark} \section{The Dirac operator}\label{map:dirac} We now illustrate how flexible the theory described in the previous section is by explaining how it may be adapted to establish the mapping properties of the Dirac operator \begin{equation}\label{dirac:map} {\dirac}_{g_s}:\mathcal H^{\sigma,p}_\beta(S_X)\to \mathcal H^{\sigma-1,p}_{\beta+1}(S_X) \end{equation} in the appropriate scale of Sobolev-Mellin spaces. Here, $X$ is assumed to be spin and $S_X$ is the corresponding spinor bundle (associated to $g_s$). As usual, we first consider the core Dirac operator \begin{equation}\label{dirac:map:c} ({\dirac}_{g_s},C^\infty_0(S_X)):C^\infty_0(S_X)\subset \mathcal H^{0,2}_\beta(S_X)\to \mathcal H^{0,2}_\beta(S_X), \end{equation} and our aim is to give conditions on $\beta$ to make sure that the associated asymptotics space is trivial. It follows from \cite[Lemma 2.2]{albin2016index} that, in the conical region, \[ {\dirac}_{\overline g}={\mathfrak c}(\partial_x)\left(\partial_x+\frac{n-1}{2x}+\frac{1}{x}{\dirac}_F\right)+O(1), \] where $\mathfrak c$ denotes Clifford multiplication and ${\dirac}_F$ is the Dirac operator of the spin manifold $(F,g_F)$. From \cite[Proposition 2.31]{bourguignon2015spinorial}, we thus obtain \[ \dirac_{g_s}=x^{1-s}{\mathfrak c}(\partial_x)\left(\partial_x+\frac{\hat a}{x}+\frac{1}{x}{\dirac}_F\right)+O(1), \quad \hat a=\frac{(n-1)s}{2}, \] so that \[ \mathscr P:=x^s{\dirac}_{g_s}={\mathfrak c}(\partial_x)\mathscr P_0+O(x), \] where \[ \mathscr P_0=\mathsf D+\hat a+{\dirac}_F \] is the conormal symbol.
By working ``downstairs'', we get \[ \mathscr P_0^\tau:=\tau\mathscr P_0\tau^{-1}=\mathsf D+\hat a-\frac{n}{2}+ {\dirac}_F, \] and after Mellin transforming this we see that the corresponding asymptotics set is \[ \Theta^\tau_\beta:=\left\{\zeta\in \mathbb C; \zeta+\frac{n}{2}-\hat a-\vartheta=0,\vartheta\in{\rm Spec}({\dirac}_F)\right\}\cap\Gamma_{-s+\beta-n/2,\beta-n/2}. \] By arguing exactly as above, we easily obtain the following result. \begin{theorem}\label{albin:mellin} The core Dirac (\ref{dirac:map:c}) { has a unique closed extension} whenever $\Theta^\tau_\beta=\emptyset$. In particular, this happens if either i) $s\leq 0$ or ii) $s>0$, $\beta=n/2$ and the ``geometric Witt assumption'' \begin{equation}\label{geo:witt} {\rm Spec}({\dirac}_F)\cap\left(\frac{n}{2}-\hat a-s,\frac{n}{2}-\hat a\right)=\emptyset \end{equation} is satisfied. In this latter case, the Dirac map (\ref{dirac:map}) is Fredholm of index $0$ if $n/2-s< \beta<n/2$, {with the core Dirac being essentially self-adjoint for $s=1$}. \end{theorem} \begin{remark}\label{albin:cp} This should be compared with \cite[Theorem 1.1]{albin2016index}, which proves essential self-adjointness for $s=1$ in the general edge setting. An alternate approach to this latter result, which works more generally for stratified spaces, has been recently put forward in \cite{hartmann2018domain}. \end{remark} \begin{remark}\label{cheeger} If $D=d+d^*$, the Hodge-de Rham operator acting on differential forms, then the analogue of the Witt condition above translates into a purely topological obstruction. Precisely, if the cohomology group $H^{\frac{n-1}{2}}(F,\mathbb R)$ is trivial (in particular, if $n$ is even) then, after possibly rescaling the link metric $g_F$, $D_{g_1}$ is essentially self-adjoint \cite{cheeger1979spectral}. Extensions of this foundational result to general stratified spaces appear in \cite{albin2012signature}. \end{remark} This Fredholmness property of $\dirac_{g_s}$ may be substantially improved if we assume that $\kappa_{\overline g}$, the scalar curvature of $\overline g$, is non-negative when restricted to the conical region $U'$. Since \[ \kappa_{\overline g}|_{U'}=\left(\kappa_{g_F}-(n-1)(n-2)\right)x^{-2}+O(x^{-1}), \quad x\to 0, \] we infer that $\kappa_{g_F}\geq (n-1)(n-2)>0$ and a well-known estimate \cite[Section 5.1]{friedrich2000dirac} gives \[ \vartheta\in{\rm Spec}(\dirac_{F})\Longrightarrow |\vartheta|\geq \frac{n-1}{2}, \] which allows us to replace (\ref{geo:witt}) by \begin{equation}\label{geo:witt:imp} {\rm Spec}({\dirac}_F)\cap\left(\frac{1-n}{2},\frac{n-1}{2}\right)=\emptyset, \end{equation} {a gap estimate that, remarkably, does not involve the parameter $s$.} In this way we obtain the following specialization of Theorem \ref{albin:mellin}. \begin{theorem}\label{albin:mellin:enh} If $|s|\leq 1$ and $\kappa_{g_s}|_{U'}\geq 0$ then the Dirac map (\ref{dirac:map}) is Fredholm of index $0$ whenever \begin{equation}\label{allow:int} \frac{1}{2}(n-1)(s-1)< \beta<\frac{1}{2}(n-1)(s+1). \end{equation} \end{theorem} \begin{proof} If $\alpha={(s-1)(n-2)}/{2}$, a computation shows that \[ \kappa_{g_s}|_{U'}=x^{-\frac{\alpha(n+2)}{n-2}}\left((n-1)(n-2)(1-s^2)x^{\alpha-2}+\kappa_{\overline g}|_{U'}x^\alpha\right), \] so that $\kappa_{g_s}|_{U'}\geq 0$ implies $\kappa_{\overline g}|_{U'}\geq 0$ {and we may appeal to (\ref{geo:witt:imp}) to obtain (\ref{allow:int}) as the interval where the index remains constant. 
Since $\beta=ns/2$ lies in this interval if and only if $1-n<s<n-1$, the result follows by Remark \ref{self-ad-delta}.} \end{proof} \section{Applications}\label{examp:op} We now discuss a few (selected) applications of Theorems \ref{self:adj}, \ref{albin:mellin} and \ref{albin:mellin:enh} and Remark \ref{a:0:2} in Geometric Analysis. \subsection{The Laplacian in ${\rm AC}_0$ manifolds} This class of manifolds appears in Example \ref{ex:conic} above, so that $s=1$ in (\ref{met:edge}). Thus, Theorem \ref{self:adj} applies with $h=g_1$ and $a=n-2$. It is convenient here to pass from $\beta$ to $\gamma$ as in (\ref{beta:gamma}), so the Sobolev-Mellin norm in (\ref{norm:near}) becomes \begin{equation}\label{norm:0} \int|x^{\frac{n}{2}-\gamma} u(x,z)|^px^{-1}dxd{\rm vol}_{g_F}(z), \end{equation} which gives rise to the Sobolev-Mellin spaces $\mathcal H^{\sigma,\gamma}_{p}(V)$ considered in \cite{schrohe1999ellipticity}. The following result is an immediate consequence of Theorem \ref{self:adj}. \begin{theorem}\label{self:ac0} If $n\ge 4$ then the Laplacian map \[ \Delta_{h,\gamma}:\mathcal H^{\sigma,\gamma}_{p}(V)\to \mathcal H^{\sigma-2,\gamma-2}_{p}(V) \] is Fredholm of index $0$ if $(4-n)/2<\gamma<n/2$. \end{theorem} This result is used in \cite[Section 2]{de2022scalar} as a key step in the argument toward proving that a function which is negative somewhere is the scalar curvature of some conical metric in $V$. \subsection{The Laplacian in ${\rm AC}_\infty$ manifolds} This class of manifolds appears in Example \ref{ex:af} above, so that $s=-1$ in (\ref{met:edge}). Thus, Theorem \ref{self:adj} applies with $h=g_{-1}$ and $a=2-n$. If $r=x^{-1}$ then the Sobolev-Mellin norm in (\ref{norm:near}) becomes \begin{equation}\label{norm:1} \int|r^{-\beta} u(r,z)|^pr^{-n}d{\rm vol}_{h}(r,z), \end{equation} which gives rise to the weighted Sobolev spaces $L^p_{\sigma,\beta}(V)$ considered in \cite[Section 9]{lee1987yamabe}. The following result is an immediate consequence of Theorem \ref{self:adj}; compare with \cite[Theorem 9.2 (b)]{lee1987yamabe}. \begin{theorem}\label{self:ac0the} If $n\ge 4$ then the Laplacian map \[ \Delta_{h,\beta}:L^p_{\sigma,\beta}(V)\to L^p_{\sigma-2,\beta-2}(V) \] is Fredholm of index $0$ if $2-n<\beta<0$. \end{theorem} \begin{remark}\label{as:anal} Consider the case in which the link is the round sphere $(\mathbb S^{n-1},\delta)$ and $\nu_\infty>(n-2)/2$. Thus, we are in the {\em asymptotically flat} case so dear to practitioners of Mathematical Relativity \cite{lee1987yamabe,bartnik1986mass}. Here, an asymptotic invariant for $(V,h)$, the ADM mass $\mathfrak m_{(V,h)}$, is defined by \[ \mathfrak m_{(V,h)}=\lim_{r\to +\infty}\int_{S^{n-1}_r}\left(h_{ij,j}-h_{jj,i}\right){\eta}^idS^{n-1}_r, \] where $h_{ij}$ are the coefficients of $h$ in the given coordinate system, the comma denotes partial differentiation, $S^{n-1}_r$ is the coordinate sphere of radius $r$ in the asymptotic region and $\eta$ is its outward unit normal (with respect to the flat metric). It remains to check that the expression above does {\em not} depend on the particular coordinate system chosen near infinity. The first step in confirming this assertion involves the construction of {harmonic} coordinates; this is explained in \cite[Theorem 9.3]{lee1987yamabe}, which is a rather straightforward consequence of Theorem \ref{self:ac0the}.
\end{remark} \subsection{The Laplacian in ${\rm ACyl}$ manifolds}\label{as:cyl:man} This class of manifolds appears in Example \ref{asym:cyl} above, so that $s=0$ in (\ref{met:edge}). Thus, Theorem \ref{self:adj} applies with $h=g_0$ and $a=0$. If $x=e^{-r}$ and $\delta=-\beta$ then the Sobolev-Mellin norm in (\ref{norm:near}) becomes \[ \int|e^{\delta r} u(r,z)|^pd{\rm vol}_{h}(r,z), \] which gives rise to the weighted Sobolev spaces $W^p_{\sigma,\delta}(V)$ considered in \cite{lockhart1985elliptic}, but notice that these authors use $\log r$ instead of $r$. The following result is an immediate consequence of Remark \ref{a:0:2}. \begin{theorem}\label{self:ac0the:cyl} If $n\ge 4$ then the Laplacian map \[ \Delta_{h,\delta}:W^p_{\sigma,\delta}(V)\to W^p_{\sigma-2,\delta}(V) \] is Fredholm if $0<\delta<\sqrt{\mu_{g_F}}$, where $\mu_{g_F}$ is the first (positive) eigenvalue of $\Delta_{g_F}$. \end{theorem} The H\"older counterpart of this result is used in \cite{haskins2015asymptotically} to study asymptotically cylindrical Calabi-Yau manifolds. \subsection{The Laplacian in ${\rm AC}_0/{\rm AC}_\infty$ manifolds}\label{man:0inf} This class of manifolds appears in Example \ref{a0:ainf} above, so that $s=\pm 1$ in (\ref{met:edge}) depending on the nature of the end. The corresponding mapping properties for the Laplacian are formulated in weighted Sobolev spaces incorporating the norms induced by (\ref{norm:0}) and (\ref{norm:1}) above. These properties, including the extra information coming from the jumps in the Fredholm index, are used in \cite{pacini2013special} to study the moduli space of special Lagrangian conifolds in $\mathbb C^m$; see also \cite{joyce2003special}. \subsection{The Dirac operator in ${\rm AC}_0$ spin manifolds}\label{dir:ac0:man} For this class of manifolds, Theorems \ref{albin:mellin} and \ref{albin:mellin:enh} apply with $s=1$ (and $h=g_1$) so if we further assume that $\kappa_{h}|_{U'}\geq 0$ then the Dirac operator $\dirac_{h}$ is Fredholm of index $0$ for $0<\beta<n-1$ with the core Dirac being essentially self-adjoint for $\beta=n/2$. Now recall that if $n$ is even then the spinor bundle decomposes as $S_V=S_V^+\oplus S_V^-$, with a corresponding decomposition for $\dirac_{h}$: \[ \dirac_{h}=\left( \begin{array}{cc} 0 & \dirac_{h}^-\\ \dirac_{h}^+ & 0 \end{array} \right) \] where $\dirac_{h}^\pm:\Gamma(S_V^\pm)\to \Gamma(S_V^\mp)$, the chiral Dirac operators, are adjoint to each other. Thus, it makes sense to consider the {\em index} of $\dirac_{h}^+$: \[ {\rm ind}\, \dirac_{h}^+=\dim\ker \dirac_{h}^+-\dim\ker \dirac_{h}^-. \] This fundamental integer invariant can be explicitly computed in terms of topological/geometric data of the underlying ${\rm AC}_0$ manifold by means of heat asymptotics \cite{albin2016index,chou1985dirac,lesch1997differential}. The resulting formula has been used in \cite{de2022scalar} to exhibit obstructions for the existence of conical metrics with positive scalar curvature. \subsection{The Dirac operator in asymptotically flat spin manifolds}\label{dir:asym:f} If an asymptotically flat manifold $V$ as in Remark \ref{as:anal} (with $h=g_{-1}$) is spin and satisfies $\kappa_{h}\geq 0$ in the asymptotic region then Theorem \ref{albin:mellin:enh} applies and $\dirac_{h}:L^p_{\sigma,\beta}(S_V)\to L^p_{\sigma-1,\beta-1}(S_V)$ is Fredholm of index $0$ for $1-n<\beta<0$. 
If we further assume that $\kappa_{h}\geq 0$ {\em everywhere} then integration by parts starting with the Weitzenb\"ock formula for the Dirac Laplacian $\dirac_{h}^2$ shows that $\dirac_{h}$ is injective and hence surjective by the Fredholm alternative. We now take a {\em parallel} spinor $\phi_{\infty}$ in $\mathbb R^n$, $|\phi_\infty|=1$, and transplant it to the asymptotic region by means of the diffeomorphism $\psi$ in Example \ref{ex:af}. If we still denote by $\phi_{\infty}$ a smooth extension of this spinor to the whole of $V$, then a computation shows that, as $r\to\infty$, \[ \dirac_{h}\phi_\infty=O(|\partial h|)=O(r^{-\nu_\infty-1})\in L^p_{\sigma-1,\beta-1}(S_V), \quad \beta\in\left[1-\frac{n}{2},0\right), \] so there exists $\phi_0\in L^p_{\sigma,\beta}(S_V)$ with $\dirac_{h}\phi_0=-\dirac_{h}\phi_\infty$. It follows that $\phi=\phi_0+\phi_\infty$ is harmonic ($\dirac_{h}\phi=0$) and $|\phi-\phi_\infty|=O(r^{\beta})$. With this spinor $\phi$ at hand, another (more involved!) integration by parts yields Witten's remarkable formula for the ADM mass of $(V,h)$: \[ \mathfrak m_{(V,h)}=c_n\int_V\left(|\nabla\phi|^2+\frac{\kappa_h}{4}|\phi|^2\right)d{\rm vol}_h, \quad c_n>0. \] From this we easily deduce the following fundamental positive mass inequality. \begin{theorem}\cite{witten1981new} If $(V,h)$ is asymptotically flat and spin as above and $\kappa_h\geq 0$ everywhere then $\mathfrak m_{(V,h)}\geq 0$. Moreover, the equality holds only if $(V,h)=(\mathbb R^n,\delta)$ isometrically. \end{theorem} The details of the argument above may be found in \cite[Appendix]{lee1987yamabe}. \section{Further applications}\label{nonempty:bd} The techniques described above may be adapted to handle more general situations. We only briefly discuss here three interesting cases. \subsection{Conformally conical manifolds with boundary} Here we consider conformally conical manifolds carrying a non-empty boundary $\partial X$ which is allowed to reach the tip of the cone. The formal definition is as in Example \ref{ex:conic}, except that the link $F$ itself carries a non-empty boundary $\partial F$. The key observation now is that both $\partial X$ and the double $2X$ along the boundary $\partial X$ are conformally conical manifolds as in Definition \ref{conic:metric} (the links of these ``boundaryless'' manifolds are $\partial F$ and $2F$, respectively). Thus, we are led to ask whether the Calder\'on-Seeley technique mentioned in the Introduction (for smooth manifolds) may be adapted to this context. This program has been carried out in \cite{coriasco2007realizations}, where it is shown, among other things, that the realizations of the Laplacian under standard boundary conditions (Dirichlet/Neumann) may be treated as well, at least in the ``straight'' case where the link metric is not allowed to vary with $x$ \cite[Section 5]{coriasco2007realizations}; see also \cite{fritzsch2020calder} for an approach in the setting of fibred cusp operators. If we invert the conical singularity as in Example \ref{ex:af}, we obtain an asymptotically flat manifold with a {\em non-compact} boundary, and this theory provides a (rather sophisticated) approach to the results obtained ``by hand'' in \cite[Appendix A]{almaraz2014positive}.
Finally, we mention that the setup in \cite{coriasco2007realizations} also applies to the realization of the Dirac operator acting on spinors under the MIT bag boundary condition, so after inversion we recover the analytical machinery underpinning the positive mass theorems for asymptotically flat initial data sets in \cite{almaraz2014positive,almaraz2021spacetime}. \subsection{{Asymptotically hyperbolic} spaces}\label{sub:edge} We may consider an edge space $(X,g_s)$ with $g_s={x^{2s-2}}(\overline g+o(1))$ and $\overline g$ as in (\ref{met:edge:ex}). Here, \[ x^{2s}\Delta_{g_s}|_{U'}=\mathsf D^2+\tilde a\mathsf D+\Delta_{g_F}+x^2\Delta_{g_Y}+o(1), \quad \tilde a=a-d,\quad d=\dim Y. \] If we specialize to the ``pure'' edge case in which the cone fiber $\mathcal C^F$ degenerates into a line ($F$ becomes a point) then $d=n-1$ and $\tilde a=a+1-n$, $n= \dim X\geq 3$, and \[ g_s|_{U'}=x^{2s-2}\left(\overline g+o(1)\right), \quad \overline g=dx^2+g_Y. \] If we further take $s=0$ then $(X,g_0)$ is {\em conformally compact} with $(Y,[g_Y])$ as its {\em conformal boundary} and $x$ is the corresponding {\em defining function}. {In particular, since $|dx|_{\overline g}=1$ along $Y$, a computation shows that $g_0$ is {\em asymptotically hyperbolic} in the sense that its sectional curvature approaches $-1$ as $x\to 0$.} Since $a=(n-2)s=0$, the conormal symbol gets replaced by \begin{equation}\label{con:symb:cc} \xi_{\Delta_{g_0}}(\zeta)=\zeta^2+(n-1)\zeta, \end{equation} whose roots define a {\em unique} interval $(1-n,0)$ where the weight parameter $\beta$ is allowed to vary. Here, it is convenient to set \[ \beta=-\delta+\frac{1-n}{p}, \quad p>1, \] so the Sobolev-Mellin norm in (\ref{norm:near}) becomes \[ \int |x^{-\delta} u(x,y)|^px^{-n}dxd{\rm vol}_{g_Y}=\int |x^{-\delta} u(x,y)|^pd{\rm vol}_{g_0}, \] which defines the weighted Sobolev spaces $H^{\sigma,p}_\delta(X)$ considered in \cite{andersson1993elliptic,lee2006fredholm}. A variation of the procedure above then yields the following result, which should be compared to \cite[Proposition F]{lee2006fredholm} and \cite[Corollary 3.13]{andersson1993elliptic}; this latter reference only treats the case $p=2$. \begin{theorem}\label{self:cc} The Laplacian map \begin{equation}\label{lap:cc} \Delta_{g_0,\delta}:H^{\sigma,p}_{\delta}(X)\to H^{\sigma-2,p}_{\delta}(X) \end{equation} is Fredholm of index $0$ if \begin{equation}\label{int:hyp} \frac{1-n}{p}<\delta<\frac{(n-1)(p-1)}{p}. \end{equation} \end{theorem} \begin{remark}\label{outside} Since in this {asymptotically hyperbolic} case the Laplacian of the total space of the restricted fiber bundle $F\times Y=\{{\rm pt}\}\times Y{\to} Y$ endowed with the metric $g_F\oplus g_Y=g_Y$ does not show up in (\ref{con:symb:cc}), we are led to suspect that (\ref{lap:cc}) fails to be Fredholm if $\delta$ does not satisfy (\ref{int:hyp}). This is indeed the case, and a proof of this claim may be found in \cite{lee2006fredholm}. Notice also that for $p=2$, (\ref{int:hyp}) becomes $\delta^2<(n-1)^2/4$, a bound that also appears in McKean's estimate \cite{mckean1970upper}, which in particular provides a sharp lower bound for the bottom of the spectrum of the Laplacian in the model space (this is of course hyperbolic $n$-space $\mathbb H^n$, which is obtained in the formalism above by taking $(Y,g_Y)$ to be a round sphere).
In fact, asymptotic versions of this estimate are used in \cite{andersson1993elliptic} as a key ingredient in directly establishing the mapping properties of geometric operators in {asymptotically hyperbolic spaces}. \end{remark} { \begin{remark}\label{qing} In an asymptotically hyperbolic manifold as above, the operator $\mathcal L_{g_0}^{(t)}:=\Delta_{g_0}+t(n-1-t)$, $t\in\mathbb C$, whose conormal symbol is \[ \xi_{\mathcal L^{(t)}_{g_0}}(\zeta)=\zeta^2+(n-1)\zeta+t(n-1-t), \] also plays a distinguished role \cite{mazzeo1987meromorphic,graham2003scattering,case2016fractional}. Let us assume that $2t\in\mathbb R\backslash \{n-1\}$ so that, by symmetry, we may take $2t>n-1$. Proceeding as above, we see that \[ \mathcal L^{(t)}_{g_0,\delta}:H^{\sigma,p}_{\delta}(X)\to H^{\sigma-2,p}_{\delta}(X) \] is Fredholm of index $0$ if \[ \frac{(n-1-t)p+1-n}{p}<\delta < \frac{tp+1-n}{p}. \] If $g_0$ is Einstein (${\rm Ric}_{g_0}=-(n-1)g_0$), the special choice $t=n$ is particularly important: the so-called {\em static potentials} (that is, solutions of $\nabla^2_{g_0}V=Vg_0$) all lie in the kernel of $\mathcal L^{(n)}_{g_0}$. In this case, the H\"older counterpart of this result (which is obtained by sending $p\to +\infty$ in the assertion above) has been used in \cite{qing2003rigidity} to establish the existence of ``approximate'' static potentials. As a consequence, an asymptotically hyperbolic Einstein manifold with the round sphere as its conformal infinity was shown to be isometric to hyperbolic $n$-space, at least if $4\leq n\leq 7$. \end{remark} } \subsection{{Asymptotically hyperbolic} spaces with boundary}\label{further:cc} A more general kind of {asymptotically hyperbolic} space is obtained by assuming that the underlying conformally compact space $X$ carries a boundary decomposing as $\partial X=Y \cup Y_{\rm f}$, with the intersection $\Sigma=Y\cap Y_{\rm f}$ being a (intrinsically smooth) co-dimension two corner. We assume further that there exists a tubular neighborhood $U$ of $Y$ on which a defining function $x$ for $Y$ exists so that \[ g|_U=x^{-2}(\overline g+o(1)),\quad \overline g=dx^2+g_Y(y), \] where $g_Y$ is a metric in $Y$. Thus, the conformal boundary $(Y,[g_Y])$ itself carries a boundary, namely, $(\Sigma,[{g_Y}|_\Sigma])$, whereas the other piece of the boundary, $Y_{\rm f}$, remains at a finite distance. Finally, we impose that $\nabla_{\overline g}x$ is tangent to $Y_{\rm f}$ along $U\cap Y_{\rm f}$, so that $Y$ and $Y_{\rm f}$ meet orthogonally along $\Sigma$. It is immediate to check that: i) the ``finite'' boundary $\partial X_{\rm f}:=Y_{\rm f}$ is a pure edge space as above with conformal infinity $(\Sigma,[g_Y|_\Sigma])$; ii) the double $2X_{\rm f}:=X\sqcup_{Y_{\rm f}}-X$ is naturally a pure edge space as above with conformal boundary having the closed manifold $2Y$ as carrier and conformal structure induced by $g_Y$. Thus, at least in principle, the appropriate version of the Calder\'on-Seeley approach mentioned in the Introduction should apply here. Very likely, this follows from the corresponding adaptation of the general setup in \cite{coriasco2007realizations,fritzsch2020calder}, so that both the realizations of the Laplacian and the Killing Dirac operator (under Dirichlet/Neumann boundary and chiral boundary conditions, respectively, imposed along $\partial X_{\rm f}$) may be shown to be Fredholm in suitable Sobolev scales. 
This should provide an alternate approach to the analysis underlying the positive mass theorems for asymptotically hyperbolic initial data sets in \cite{almaraz2020mass,almaraz2021spacetime}. \bibliographystyle{alpha}
\section{Introduction} Let $\mathcal{P}$ be a property about rational numbers, such as being rational squares or cubes, and denote also by $\mathcal{P}$ the subset of all rational numbers with this property. A nonzero integer $n$ is called a \emph{reflecting number of type $(\mathcal{P}_{1},\mathcal{P}_{2})$}, if there exist $u,v\in\mathcal{P}_{1}$ and $t\in\mathcal{P}_{2}\setminus\{0\}$ such that $n-t=u$ and $n+t=v$, where $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ are two properties about rational numbers. On a number line, the integer $n$ behaves like a mirror and reflects two rational numbers $u,v$ with the same property $\mathcal{P}_{1}$ to each other, while the distance from $u$ or $v$ to $n$ is a nonzero rational number $t$ with property $\mathcal{P}_{2}$, hence the name. Since the meaning behind $t$ is distance, we require it to be positive. Also, we exclude $n=0$ from the definition for it is in general less interesting or can be easily dealt with. In this work, we consider the property $\mathcal{P}(i)=\{x^{i}:x\in\mathbb{Q}\}$ consisting of rational $i$th powers. For any ordered pair of positive integers $(k,m)$, reflecting numbers of type $(\mathcal{P}(k),\mathcal{P}(m))$ are also called $(k,m)$-reflecting numbers for short. So a \emph{$(k,m)$-reflecting number} is the average of two distinct rational $k$th powers, between which the distance is twice another nonzero rational $m$th power. A basic problem is to classify reflecting numbers $n$ of various types $(k,m)$ and for each such number $n$ find all positive rational numbers $t$ as in the definition. For example, if $k=1$ then $n\pm t^{m}$ are always rational numbers for any $t\in\mathbb{Q}^{*}$ and thus all nonzero integers are $(1,m)$-reflecting for all natural numbers $m$. Since $2n=v^{k}+u^{k}$ and $2t^{m}=v^{k}-u^{k}$, we observe that: 1) If $k$ is odd, then $n$ is $(k,m)$-reflecting if and only if $-n$ is. 2) If $k$ is even, then negative integers cannot be $(k,m)$-reflecting. 3) If $n$ is $(k,m)$-reflecting, so is $nd^{\lcm(k,m)}$ for any positive integer $d$. So it suffices to study the \emph{primitive $(k,m)$-reflecting numbers}, i.e., the ones that are positive and free of $\lcm(k,m)$th power divisors. Denote by $\mathscr{R}(k,m)$ the set of all $(k,m)$-reflecting numbers and by $\mathscr{R}'(k,m)$ the subset of all primitive ones. 4) If $n$ is $(k',m')$-reflecting, then it is $(k,m)$-reflecting for all $k\mid k'$ and $m\mid m'$. So we have a filtration of reflecting numbers of various types $(k,m)$, i.e., \[ \mathscr{R}(k,m)\supset\mathscr{R}(k',m'),\ \forall k\mid k',\ \forall m\mid m'. \] In particular, if $\mathscr{R}(k,m)=\emptyset$ then $\mathscr{R}(k',m')=\emptyset$. 5) The set of $(k,m)$-reflecting numbers is nonempty if and only if the ternary Diophantine equation $v^{k}-u^{k}=2t^{m}$ has a rational solution such that $v^{k}\ne\pm u^{k}$. 6) Reflecting numbers of type $(k,1)$ are those nonzero integers whose double can be written as a sum of two distinct rational $k$th powers. The motivation comes from a sudden insight into the definition of congruent numbers: a positive integer $n$ is called a \emph{congruent number}, if it is the area of a right triangle with rational sides, or equivalently, if it is the common difference of an arithmetic progression of three rational squares, i.e., if there exists a positive rational number $t$ such that $t^{2}\pm n$ are both rational squares. But what happens if $t^{2}$ and $n$ are swapped in the latter definition?
Well, if there exists a positive rational number $t$ such that $n\pm t^{2}$ are rational squares, then $n$ is by our definition a \emph{$(2,2)$-reflecting} number, which turns out to be a congruent number. To emphasize that $(2,2)$-reflecting numbers are in fact congruent, such numbers are also called \emph{reflecting congruent numbers}. For example, $5$ is $(2,2)$-reflecting since $5-2^{2}=1^{2}$ and $5+2^{2}=3^{2}$, and congruent since $(41/12)^{2}-5=(31/12)^{2}$ and $(41/12)^{2}+5=(49/12)^{2}$. But there exist congruent numbers, such as $6$ and $7$, that are not reflecting congruent. Hence, we have a dichotomy of congruent numbers, according to whether they are reflecting congruent or not. In this work, we focus on the primitive reflecting congruent numbers. It is easy to see that such numbers can only have prime divisors congruent to $1$ modulo $4$. Among all such numbers, the prime ones are certainly interesting. Heegner \cite{Heegner1952} asserts without proof that prime numbers in the residue class $5$ modulo $8$ are congruent numbers. This result is repeated by Stephens \cite{Stephens1975} and finally proved by Monsky \cite{Monsky1990}. But we can say more about these numbers. \begin{thm} \label{thm:p5mod8} Prime numbers $p\equiv5\mod8$ are reflecting congruent. \end{thm} Most prime numbers congruent to $1$ modulo $8$ are not congruent. But if they are congruent, then we have the following theorem and conjecture. \begin{thm} \label{thm:p1mod8} Let $p$ be a prime congruent number $\equiv1\mod8$. If $E_{p}/\mathbb{Q}$ has rank $2$ (or equivalently $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial), then $p$ is reflecting congruent. \end{thm} \begin{conjecture} \label{conj:p1mod8} Prime congruent numbers $p\equiv1\mod8$ are reflecting congruent. \end{conjecture} This conjecture follows easily from our criterion for reflecting congruent numbers together with either the parity conjecture for the Mordell-Weil group or the finiteness conjecture for the Shafarevich-Tate group. \begin{conjecture*} For any elliptic curve $E$ defined over a number field $K$, one has $(-1)^{\rank(E/K)}=w(E/K)$, where $w(E/K)$ is the global root number of $E$ over $K$. \end{conjecture*} For any square-free congruent number $n$ and the associated elliptic curve $E_{n}:y^{2}=x^{3}-n^{2}x$ over $\mathbb{Q}$, the global root number is given by the following formula \begin{equation} w(E_{n}/\mathbb{Q})=\begin{cases} +1, & \text{if }n\equiv1,2,3\mod8,\\ -1, & \text{if }n\equiv5,6,7\mod8. \end{cases}\label{eq:root number} \end{equation} \begin{conjecture*} For any elliptic curve $E$ defined over a number field $K$, the Shafarevich-Tate group $\textrm{{\cyr SH}}(E/K)$ is finite. \end{conjecture*} A theorem of Tian \cite{Tian2012} says that for any given integer $k\ge0$, there are infinitely many square-free congruent numbers in each residue class of $5$, $6$, and $7$ modulo $8$ with exactly $k+1$ odd prime divisors. With a bit of work, we can show \begin{thm} \label{thm:TianThm1.1} For any given integer $k\ge0$, there are infinitely many square-free reflecting congruent numbers in the residue class of $5$ modulo $8$ with exactly $k+1$ prime divisors. \end{thm} Moreover, this result can be strengthened by \cite[Thm. 5.2]{Tian2012} as follows. \begin{thm} \label{thm:TianThm5.2} Let $p_{0}\equiv5\mod8$ be a prime number. Then there exists an infinite set $\Sigma$ of primes congruent to $1$ modulo $8$ such that the product of $p_{0}$ with any finitely many primes in $\Sigma$ is a reflecting congruent number.
\end{thm} In fact, using our criterion for reflecting congruent numbers, we will show that the congruent numbers congruent to $5$ modulo $8$ constructed by Tian in \cite[Thm. 1.3]{Tian2012} are actually reflecting congruent. \begin{thm} \label{thm:TianThm1.3} Let $n\equiv5\mod8$ be a square-free positive integer with all prime divisors $\equiv1\mod4$, exactly one of which is $\equiv5\mod8$, such that the field $\mathbb{Q}(\sqrt{-n})$ has no ideal classes of exact order $4$. Then, $n$ is a reflecting congruent number. \end{thm} Before closing the paper, we discuss reflecting numbers of type $(k,m)$ with $\gcd(k,m)\ge3$. By virtue of a deep result on rational solutions to the ternary Diophantine equation $x^{k}+y^{k}=2z^{k}$ for any $k\ge3$, we can show that \begin{thm} \label{thm:gcd(k,m)>=00003D3} There exist no reflecting numbers of type $(k,m)$ if $\gcd(k,m)\ge3$. \end{thm} Let $d$ be the greatest common divisor of $k$ and $m$. Since $\mathscr{R}(k,m)\subset\mathscr{R}(d,d)$, it suffices to show that there exist no reflecting numbers of type $(d,d)$ for any $d\ge3$. The cases in which $d=3,4$ can be easily proved, due to classical results of Euler. The cases in which $d\ge5$ follow easily from the Lander, Parkin, and Selfridge conjecture \cite{LanderParkinSelfridge1967} on equal sums of like powers or the D\'enes conjecture \cite{Denes1952} on arithmetic progressions of like powers: \begin{conjecture*} If the formula $\sum_{i=1}^{n}a_{i}^{d}=\sum_{j=1}^{m}b_{j}^{d}$ holds, where $a_{i}\neq b_{j}$ are positive integers for all $1\le i\le n$ and $1\le j\le m$, then $m+n\ge d$. \end{conjecture*} \begin{conjecture*} For any $d\ge3$, if the ternary Diophantine equation $x^{d}+y^{d}=2z^{d}$ has a rational solution $(x,y,z)$, then $xyz=0$ or $|x|=|y|=|z|$. \end{conjecture*} Fortunately, the latter D\'enes conjecture became a theorem through the work of Ribet \cite{Ribet1997} and Darmon and Merel \cite{DarmonMerel1997}, based on which we prove Theorem \ref{thm:gcd(k,m)>=00003D3}. This work is organized as follows. Section \ref{sec:(k,m)} gives a general description of $(k,m)$-reflecting numbers and picks out some special ones, whose existence depends on whether $k$ and $m$ are coprime or not. Moreover, reflecting numbers of type $(k,1)$, in particular, $(2,1)$ and $(3,1)$, are also discussed in this section. Section \ref{sec:(2,2)} is devoted to reflecting congruent numbers and occupies the major part of this work. In Section \ref{sec:gcd(k,m)>=00003D3}, we discuss reflecting numbers of type $(k,m)$ with $\gcd(k,m)\ge3$ and disprove their existence. \begin{notation*} Throughout this paper, for any prime number $p$, we denote by $v_{p}$ the normalized $p$-adic valuation such that $v_{p}(\mathbb{Q}_{p}^{*})=\mathbb{Z}$ and $v_{p}(0)=+\infty$. \end{notation*} \section{\label{sec:(k,m)}$(k,m)$-Reflecting Numbers} In this section, we give a general description of $(k,m)$-reflecting numbers and then focus on $(k,1)$-reflecting numbers for small $k=2,3$. \subsection{Homogeneous Equations} By definition, a nonzero integer $n$ is $(k,m)$-reflecting if and only if the following system of homogeneous equations \[ \begin{cases} nS^{m}-T^{m}=S^{m-k}U^{k},\\ nS^{m}+T^{m}=S^{m-k}V^{k}, \end{cases}\text{if }k\le m,\text{ or }\begin{cases} nS^{k}-S^{k-m}T^{m}=U^{k},\\ nS^{k}+S^{k-m}T^{m}=V^{k}, \end{cases}\text{if }k\ge m, \] has a nontrivial solution $(S,T,U,V)$ in integers such that $S,T>0$.
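For instance, when $k=m=2$ the first system specializes to $nS^{2}-T^{2}=U^{2}$ and $nS^{2}+T^{2}=V^{2}$, and small solutions can be located by brute force; the following minimal Python sketch (an illustration only; the function names are ours) recovers the witness $(S,T,U,V)=(1,2,1,3)$ for $n=5$.
\begin{verbatim}
# Illustration only: brute-force small solutions of the system
#     n*S^2 - T^2 = U^2,   n*S^2 + T^2 = V^2,   S, T > 0,
# i.e. the homogeneous system above specialized to k = m = 2.
from math import isqrt

def is_square(x):
    return x >= 0 and isqrt(x) ** 2 == x

def witnesses_22(n, bound=50):
    found = []
    for S in range(1, bound + 1):
        for T in range(1, bound + 1):
            a, b = n * S * S - T * T, n * S * S + T * T
            if is_square(a) and is_square(b):
                found.append((S, T, isqrt(a), isqrt(b)))
    return found

# For n = 5: (1, 2, 1, 3), since 5 - 2^2 = 1^2 and 5 + 2^2 = 3^2.
print(witnesses_22(5)[0])
\end{verbatim}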
A nontrivial solution $(S,T,U,V)$ as above is called \emph{primitive} if the greatest common divisor of $S,T,U,V$ is $1$.\footnote{However, if $k\ne m$, then the greatest common divisor of $S$ and $T$ might be greater than $1$.} In either case, we have $n=\frac{V^{k}+U^{k}}{2S^{k}}$ and $\frac{T^{m}}{S^{m}}=\frac{V^{k}-U^{k}}{2S^{k}}$. Since $n$ and $T$ are nonzero, $V$ and $U$ must have the same parity and satisfy $V^{k}\ne\pm U^{k}$. \subsection{Inhomogeneous Equations} If $k\ne m$, then it is more convenient to work with inhomogeneous equations. Again we assume $t>0$ and write $t=T/S$ with $\gcd(S,T)=1$ and $S,T>0$. Then, we have \[ nS^{m}-T^{m}=S^{m}u^{k},\quad nS^{m}+T^{m}=S^{m}v^{k}. \] It follows that the denominators of $u^{k}$ and $v^{k}$ are divisors of $S^{m}$. If $S\ne1$ and $p$ is a prime divisor of $S$, then $p$ does not divide $T$. If $uv\ne0$, then we compare the $p$-adic valuations of both sides of the above equations \[ mv_{p}(S)+kv_{p}(u)=v_{p}(nS^{m}-T^{m})=0=v_{p}(nS^{m}+T^{m})=mv_{p}(S)+kv_{p}(v). \] This implies that $S^{m}$ is a $k$th power and thus a $\lcm(k,m)$th power, which is exactly the denominator of $u^{k}$ and $v^{k}$. Then, for some nonzero integer $S_{0}$, we can write \[ S=S_{0}^{k'},\ U^{k}=S^{m}u^{k}=(S_{0}^{m'}u)^{k},\ V^{k}=S^{m}v^{k}=(S_{0}^{m'}v)^{k} \] where $U,V\in\mathbb{Z}$ and $k'=k/\gcd(k,m),m'=m/\gcd(k,m)$. Therefore, we have the following inhomogeneous equations, \begin{equation} nS_{0}^{\lcm(k,m)}-T^{m}=U^{k},\quad nS_{0}^{\lcm(k,m)}+T^{m}=V^{k},\label{eq:inhomo} \end{equation} which become homogeneous when $k=m$. If $S=1$, or if exactly one of $u$ and $v$ is $0$ (which forces $S=1$), then $t=T$, $S=S_{0}=1$, $u=U$, and $v=V$ are all integers, and we obtain the same equations as above. So we can always write \[ n=\frac{V^{k}+U^{k}}{2S_{0}^{\lcm(k,m)}},\ T=\sqrt[m]{\frac{V^{k}-U^{k}}{2}},\text{ where }V^{k}\ne\pm U^{k}\text{ and }V\equiv U\mod2. \] \begin{prop} \label{prop:R(k,m)} The set of $(k,m)$-reflecting numbers is given by \[ \mathscr{R}(k,m)=\left\{ \frac{V^{k}+U^{k}}{2S_{0}^{\lcm(k,m)}}\in\mathbb{Z}^{*}:(S_{0},U,V)\in\mathbb{N}\times\mathbb{Z}^{2},\sqrt[m]{\frac{V^{k}-U^{k}}{2}}\in\mathbb{N}\right\} \] and it is nonempty if and only if the following ternary Diophantine equation \begin{equation} 2T^{m}+U^{k}=V^{k}\label{eq:2Tm+Uk=00003DVk} \end{equation} has an integer solution $(T,U,V)$ such that $U^{k}\ne\pm V^{k}$. \end{prop} \begin{proof} It remains to show the ``if'' part. If (\ref{eq:2Tm+Uk=00003DVk}) has an integer solution $(T,U,V)$ such that $U^{k}\ne\pm V^{k}$, then $U\equiv V\mod2$ and $n=(V^{k}+U^{k})/2$ is $(k,m)$-reflecting. \end{proof} \subsection{Special $(k,m)$-Reflecting Numbers} If $\gcd(k,m)=1$ and $i\in[0,k)$ is the least integer such that $k\mid(im+1)$, then $n=2^{im}T_{0}^{km}$ is $(k,m)$-reflecting for any $T_{0}\in\mathbb{N}$. Indeed, $T=2^{i}T_{0}^{k}$ and $V=2^{(im+1)/k}T_{0}^{m}$ are solutions to $n-T^{m}=0$ and $n+T^{m}=V^{k}$. This shows that $\mathscr{R}(k,m)$ is not empty whenever $\gcd(k,m)=1$. We will refer to such numbers (and their opposites if $k$ is odd) as the \emph{special $(k,m)$-reflecting numbers}. It turns out that special $(k,m)$-reflecting numbers exist only when $\gcd(k,m)=1$. \begin{prop} \label{prop:special} If $n=T^{m}$ is a positive $(k,m)$-reflecting number such that $2T^{m}=V^{k}$ for some positive integers $T$ and $V$, then $\gcd(k,m)=1$ and $n=2^{im}T_{0}^{km}$, where $T_{0}$ is a positive integer and $i$ is the least nonnegative integer such that $k\mid(im+1)$.
\end{prop} \begin{proof} Indeed, the $2$-adic valuation $1+mv_{2}(T)=kv_{2}(V)$ of $2T^{m}=V^{k}$ shows that $\gcd(k,m)=1$. If $p$ is an odd prime divisor of $T$, then we have $mv_{p}(T)=kv_{p}(V)$. Since $m\mid v_{p}(V)$ and $k\mid v_{p}(T)$, we can write $T=2^{i}T_{0}^{k}$ where $T_{0}$ is a positive integer and $i=v_{2}(T)\mod k\in[0,k)$ is the least integer such that $k\mid(im+1)$. \end{proof} \begin{rem*} Special $(k,m)$-reflecting numbers give an abundant but less interesting supply of solutions to the problem. One could avoid them by requiring $uv\ne0$ in the definition. \end{rem*} \subsection{$(k,1)$-Reflecting Numbers} If $m=1$, then (\ref{eq:2Tm+Uk=00003DVk}) has integer solutions if and only if $V$ and $U$ have the same parity. So we obtain that \begin{cor} The set of $(k,1)$-reflecting numbers is given by \[ \mathscr{R}(k,1)=\left\{ \frac{V^{k}+U^{k}}{2S_{0}^{k}}\in\mathbb{Z}^{*}:(S_{0},U,V)\in\mathbb{N}\times\mathbb{Z}^{2},V^{k}\ne\pm U^{k},V\equiv U\mod2\right\} . \] \end{cor} \begin{notation*} For each $i\in\mathbb{N}$, the $i$th power free part of any nonzero rational number $t$ is temporarily denoted by $\llbracket t\rrbracket_{i}$, i.e., an integer such that $t/\llbracket t\rrbracket_{i}\in(\mathbb{Q}^{+})^{i}$ is a rational $i$th power. So $n=\llbracket n\rrbracket_{i}$ means that $n$ is an integer having no $i$th power divisors. \end{notation*} The following fact is useful in our analysis of $(2,m)$-reflecting numbers. \begin{fact*} $-1$ is a quadratic residue modulo a positive square-free number $n$ if and only if each odd prime divisor of $n$ is congruent to $1$ modulo $4$. \end{fact*} It is easy to classify primitive $(2,1)$-reflecting numbers, among which $n=1$ is the first one and $1\cdot5^{2}-2^{3}\cdot3=1^{2}$ and $1\cdot5^{2}+2^{3}\cdot3=7^{2}$, where $S_{0}=5$ is the smallest, and $n=2$ is the special one for which $u=0$ and $2-2=0^{2}$, $2+2=2^{2}$. \begin{prop} \label{prop:R'(2,1)} The set $\mathscr{R}'(2,1)$ consists of positive square-free integers $n$, such that $n$ has no prime divisors congruent to $3$ modulo $4$, or equivalently, $-1$ is a quadratic residue modulo $n$. \end{prop} \begin{proof} If $n\in\mathscr{R}'(2,1)$, then $2nS_{0}^{2}=V^{2}+U^{2}$ is a sum of two distinct squares with the same parity. Since the square-free part of a sum of two squares cannot have a prime divisor $\equiv3\mod4$, $n$ has no prime divisors $\equiv3\mod4$. Conversely, we have seen that $1\in\mathscr{R}'(2,1)$ and if $n>1$ is a square-free integer having no prime divisors $\equiv3\mod4$, then $2n$ can be written as a sum of two squares with the same parity. These two squares must be distinct; otherwise, $n$ would not be square-free. \end{proof} As a $(2,1)$-reflecting number, $n=1$ is closely related to the congruent numbers. \begin{prop} There is a one-to-one correspondence between the set of square-free congruent numbers and the set $\{\llbracket t\rrbracket_{2}\mid t\in\mathbb{Q}^{+},1\pm t\in\mathbb{Q}^{2}\}$. \end{prop} \begin{proof} If $m$ is a square-free congruent number, then there exists $t\in\mathbb{Q}^{+}$ such that $t^{2}\pm m$ or $1\pm mt^{-2}$ are rational squares. Hence, $1$ is $(2,1)$-reflecting and $m=\llbracket mt^{-2}\rrbracket_{2}$. Conversely, suppose $t\in\mathbb{Q}^{+}$ is such that $1-t=u^{2}$ and $1+t=v^{2}$ for $u,v\in\mathbb{Q}^{+}$.
Clearing denominators of $t,u,v$, we have $S^{2}-S^{2}t=U^{2}$ and $S^{2}+S^{2}t=V^{2}$, i.e., $S^{2}t$ is the common difference of the arithmetic progression $U^{2},S^{2},V^{2}$ of perfect squares, and thus $\llbracket t\rrbracket_{2}=\llbracket S^{2}t\rrbracket_{2}$ is a congruent number. \end{proof} \begin{rem*} Searching all rational numbers $t$ such that $1\pm t$ are rational squares is equivalent to searching all congruent numbers $n$ and arithmetic progressions of three rational squares with common difference $n$. So in this sense the reflecting number problem generalizes the congruent number problem. \end{rem*} It is not easy to classify all primitive $(3,1)$-reflecting numbers, among which $3$ is the first one and $3\cdot21^{3}-22870=17^{3}$, $3\cdot21^{3}+22870=37^{3}$, where $S_{0}=21$ is the smallest, and $4$ is the special one and $4-4=0$, $4+4=2^{3}$. Note that $(3,1)$-reflecting numbers are closely related to sums of two distinct rational cubes. Indeed, if $n$ is $(3,1)$-reflecting, then there exist $u,v\in\mathbb{Q}$ such that $2n=u^{3}+v^{3}$ where $v\ne\pm u$, i.e., $2n$ can be written as a sum of two distinct rational cubes. Conversely, if $N\ne0$ can be written as a sum of two distinct rational cubes, so can $8N$, and thus $n=4N$ is $(3,1)$-reflecting. The problem of deciding whether an integer can be written as a sum of two rational cubes has a long history; cf. \cite[Ch. XXI, pp. 572--578]{Dickson2005history}. In particular, we recall the following theorem of Euler, cf. \cite[Ch. XV, Thm. 247, pp. 456--458]{Euler1822elements}, which proves that $n=1$ is not $(3,1)$-reflecting. \begin{thm*} Neither the sum nor the difference of two cubes can become equal to the double of another cube; or, in other words, the formula, $x^{3}\pm y^{3}=2z^{3}$, is always impossible, except in the evident case of $x=y$.\footnote{ Here, $x$, $y$, and $z$ are assumed to be non-negative. } \end{thm*} \begin{rem*} Euler's proof is based on Fermat's infinite descent method and similar to his proof of Fermat's Last Theorem of degree $3$. \end{rem*} Let $C^{N}:u^{3}+v^{3}=N$ and $C_{N}:y^{2}=x^{3}+N$, where $N\ne0$, be elliptic curves defined over rational numbers. The Weierstrass form of the elliptic curve $C^{N}$ is given by the elliptic curve $C_{-432N^{2}}$. Indeed, the homogeneous form of $C^{N}$ is given by $U^{3}+V^{3}=NW^{3}$, which has a rational point $[1,-1,0]$. One checks that \[ x=12N\frac{1}{v+u},\quad y=36N\frac{v-u}{v+u} \] satisfy the Weierstrass equation $C_{-432N^{2}}:y^{2}=x^{3}-432N^{2}$, whose homogeneous form is given by $Y^{2}Z=X^{3}-432N^{2}Z^{3}$, which contains a rational point $[0,1,0]$. Conversely, $u$ and $v$ can be expressed in terms of $x$ and $y$ by \[ u=\frac{36N-y}{6x},\quad v=\frac{36N+y}{6x}. \] Hence, there exists a one-to-one correspondence between rational points $(u,v)\in C^{N}(\mathbb{Q})$ and rational points $(x,y)\in C_{-432N^{2}}(\mathbb{Q})$ (except when $v=-u$, in which case we manually match $[1,-1,0]$ with $[0,1,0]$). Let $N=2n$. Notice that $C_{-1728n^{2}}:y^{2}=x^{3}-1728n^{2}$ is isomorphic to $C_{-27n^{2}}:y'^{2}=x'^{3}-27n^{2}$ via $y'=2^{-3}y$ and $x'=2^{-2}x$. \begin{prop} \label{prop:R'(3,1):order} The set $\mathscr{R}'(3,1)$ consists of positive cube-free integers $n$ such that there exists a nontrivial rational point $(x,y)$ of order other than $2$ on $C_{-27n^{2}}$. \end{prop} \begin{proof} If $n\in\mathscr{R}'(3,1)$, then for some $v\neq\pm u\in\mathbb{Q}$ we have $v^{3}+u^{3}=2n$.
Hence, $(u,v)$ is a rational point on $C^{2n}$. Since $v\ne-u$, $(u,v)$ corresponds to $(x,y)=(6n\frac{1}{v+u},9n\frac{v-u}{v+u})$. Since $v\ne u$ if and only if $y\ne0$, $(x,y)$ is a nontrivial rational point on $C_{-27n^{2}}$ of order other than $2$. \end{proof} \begin{rem*} Among all primitive $(3,1)$-reflecting numbers, $4$ is the special one. The elliptic curve $C_{-432}$ has rank $0$ and its torsion subgroup is isomorphic to $\mathbb{Z}/3$, in which the nontrivial torsion points are $(12,\pm36)$, corresponding to the only rational solutions $(v,u)=(0,2),(2,0)$ to $v^{3}+u^{3}=8$. In fact, $4$ is the only one with this property, as shown by the following lemma and corollary. \end{rem*} \begin{lem*} Let $N\ne0$ be an integer having no $6$th power divisors. Then, a complete description of the torsion subgroup $(C_{N})_{\tors}(\mathbb{Q})$ is given by \[ (C_{N})_{\tors}(\mathbb{Q})=\begin{cases} \{O,(-1,0),(0,\pm1),(2,\pm3)\}\cong\mathbb{Z}/6\mathbb{Z}, & \text{if }N=1,\\ \{O,(12,\pm36)\}\cong\mathbb{Z}/3\mathbb{Z}, & \text{if }N=-432,\\ \{O,(0,\pm\sqrt{N})\}\cong\mathbb{Z}/3\mathbb{Z}, & \text{if }N\ne1\text{ is a square},\\ \{O,(-\sqrt[3]{N},0)\}\cong\mathbb{Z}/2\mathbb{Z}, & \text{if }N\ne1\text{ is a cube},\\ \{O\}, & \text{otherwise}. \end{cases} \] \end{lem*} \begin{proof} See Exercise 10.19 of \cite{Silverman2009}. \end{proof} \begin{cor} If $n$ is positive and cube-free, then $C_{-27n^{2}}$ has a rational torsion point of order other than $2$ (in which case the torsion subgroup is isomorphic to $\mathbb{Z}/3\mathbb{Z}$) if and only if $n=4$. \end{cor} \begin{proof} Since $n$ is positive and cube-free, $-27n^{2}$ is never a square, and it is a cube only when $n=1$, in which case the torsion subgroup is $\{O,(3,0)\}$, generated by a point of order $2$. For $n\ne1$, the curve $C_{-27n^{2}}$ has no rational points of order $2$, and its torsion subgroup is nontrivial (and isomorphic to $\mathbb{Z}/3\mathbb{Z}$) if and only if $-27n^{2}=-432$, i.e., $n=4$. \end{proof} Therefore, Proposition \ref{prop:R'(3,1):order} can be strengthened in the following way. \begin{prop} \label{prop:R'(3,1):rank} The set $\mathscr{R}'(3,1)$ consists of the special $n=4$ and positive cube-free integers $n$ such that the rank of $C_{-27n^{2}}$ is positive. \end{prop} \begin{rem*} If $n=1$, then $C_{-27}$ has rank $0$ and torsion subgroup $\{O,(3,0)\}$. Thus, $u^{3}+v^{3}=2$ only has one rational solution $(1,1)$. This gives a modern interpretation of the theorem of Euler mentioned before. \end{rem*} \begin{cor} If $n=p$, where $p\equiv2\mod9$ is an odd prime number, or $n=p^{2}$, where $p\equiv5\mod9$ is a prime number, then $n$ is a $(3,1)$-reflecting number. \end{cor} \begin{proof} This follows directly from Satg\'e's results in \cite{Satge1987}: if $p\equiv2\mod9$ is an odd prime number, then $C^{2p}$ has infinitely many rational points, and if $p\equiv5\mod9$ is a prime number, then $C^{2p^{2}}$ has infinitely many rational points. \end{proof} \section{\label{sec:(2,2)}$(2,2)$-Reflecting Numbers} We begin with the following property of $(2,2)$-reflecting numbers, which justifies their alternative name, reflecting congruent numbers. \begin{prop} A $(2,2)$-reflecting number is a congruent number. \end{prop} \begin{proof} Let $n$ be a $(2,2)$-reflecting number and $(S,T,U,V)$ be a primitive solution to the following homogeneous equations \begin{equation} nS^{2}-T^{2}=U^{2},\quad nS^{2}+T^{2}=V^{2}.\label{eq:(2,2)} \end{equation} Then, $(-T^{2}/S^{2},TUV/S^{3})$ is a rational point on the congruent number elliptic curve \[ E_{n}:y^{2}=-x(n+x)(n-x)=x(x+n)(x-n)=x^{3}-n^{2}x. \] Since $S,T>0$, $V$ is positive.
If $U=0$, then $nS^{2}=T^{2}$ and $V^{2}=2T^{2}$, but this is impossible since $\sqrt{2}$ is irrational. So $TUV/S^{3}\ne0$ and we obtain a point on $E_{n}(\mathbb{Q})$ outside $E_{n}[2]$; since the torsion subgroup of $E_{n}(\mathbb{Q})$ is exactly $E_{n}[2]$, this point is non-torsion. Hence, $E_{n}$ has a positive rank and $n$ is a congruent number. \end{proof} From now on, each congruent number, if not otherwise specified, is always assumed to be square-free. \begin{notation*} A triple $(A,B,C)$ of positive integers is called a \emph{Pythagorean triple} if $A^{2}+B^{2}=C^{2}$; and it is called \emph{primitive} if in addition $\gcd(A,B,C)=1$. Each rational right triangle is similar to a unique right triangle with its sides given by a primitive Pythagorean triple $(A,B,C)$ and is denoted by a triple $(a,b,c)$ of the lengths of its sides such that $a/A=b/B=c/C$. \end{notation*} We always assume that $B$ is the even one in any primitive Pythagorean triple $(A,B,C)$, since $A$ and $B$ must have different parity. Therefore, we always attempt to make the middle term $b$ in $(a,b,c)$ correspond to the even number $B$ in $(A,B,C)$, unless we are uncertain about this in some formulas. Let $\ltriangle_{n}$ be the set of all rational right triangles $(a,b,c)$ with area $n$. Let $\mathscr{P}$ be the set of pairs $(P,Q)$ of positive coprime integers $P>Q$ with different parity. Then, a primitive Pythagorean triple can be constructed by Euclid's formula \[ (A,B,C)=(P^{2}-Q^{2},2PQ,P^{2}+Q^{2}). \] Conversely, any primitive Pythagorean triple is of this form. So $\mathscr{P}$ can be identified with the set of primitive Pythagorean triples. Then, $PQ(P^{2}-Q^{2})$ and its square-free part $n$ are congruent numbers. Let $R=\sqrt{PQ(P^{2}-Q^{2})/n}$. Then, $(a,b,c)=(A/R,B/R,C/R)$ is a rational right triangle in $\ltriangle_{n}$. Let $\mathscr{P}_{n}\subset\mathscr{P}$ consist of pairs $(P,Q)$ such that the square-free part of $PQ(P^{2}-Q^{2})$ is $n$. Let $t\in\mathbb{Q}^{+}$ be such that $n\pm t^{2}$ are both squares. Since $(-t^{2},\pm t\sqrt{n^{2}-t^{4}})$ are two rational points on $E_{n}(\mathbb{Q})$, we obtain an injective odd map \[ \varphi:\mathscr{T}_{n}:=\{t\in\mathbb{Q}^{*}\mid n\pm t^{2}\in\mathbb{Q}^{*2}\}\to E_{n}(\mathbb{Q});\ t\mapsto(-t^{2},t\sqrt{n^{2}-t^{4}}), \] whose image lies in $E_{n}(\mathbb{Q})\setminus(2E_{n}(\mathbb{Q})\cup E_{n}[2])$ (cf. \cite[Prop. 20, §1]{Koblitz2012}), and \[ \left(\frac{n^{2}-t^{4}}{t\sqrt{n^{2}-t^{4}}},\frac{2nt^{2}}{t\sqrt{n^{2}-t^{4}}},\frac{n^{2}+t^{4}}{t\sqrt{n^{2}-t^{4}}}\right)\in\ltriangle_{n}. \] On the other hand, let $z\in\mathbb{Q}^{+}$ be such that $z^{2}\pm n$ are both rational squares. Since $(z^{2},\pm z\sqrt{z^{4}-n^{2}})$ are two rational points on $E_{n}$, we obtain an injective odd map \[ \psi:\mathscr{Z}_{n}=\{z\in\mathbb{Q}^{*}\mid z^{2}\pm n\in\mathbb{Q}^{*2}\}\to E_{n}(\mathbb{Q});\ z\mapsto(z^{2},-z\sqrt{z^{4}-n^{2}}), \] whose image lies in $2E_{n}(\mathbb{Q})\setminus\{O\}$ (cf. \cite[Prop. 20, §1]{Koblitz2012}). Denote by $\mathscr{T}_{n}^{+}$ (resp. $\mathscr{Z}_{n}^{+}$) the subset of positive numbers in $\mathscr{T}_{n}$ (resp. $\mathscr{Z}_{n}$). Then, there is a one-to-one correspondence between the following sets: \begin{equation} \mathscr{Z}_{n}^{+}\leftrightarrow\{(x,y)\in2E_{n}(\mathbb{Q})\setminus\{O\}:y<0\}\leftrightarrow\ltriangle_{n}\leftrightarrow\mathscr{P}_{n},\label{eq:from Z_n+ to Pn} \end{equation} where the first correspondence is given by $\psi$ and its inverse $\sqrt{x}\mapsfrom(x,y)$, the second one given by (cf. \cite[Prop.
19, §1]{Koblitz2012})
\begin{align*}
(x,y) & \mapsto(\sqrt{x+n}-\sqrt{x-n},\sqrt{x+n}+\sqrt{x-n},2\sqrt{x}),\\
(Z^{2}/4,-|Y^{2}-X^{2}|Z/8) & \mapsfrom(X,Y,Z),
\end{align*}
and the last one given by Euclid's formula and a proper scaling. Since $\left(\frac{n^{2}+t^{4}}{2t\sqrt{n^{2}-t^{4}}}\right)^{2}\pm n=\left(\frac{n^{2}\pm2nt^{2}-t^{4}}{2t\sqrt{n^{2}-t^{4}}}\right)^{2}$, we obtain an odd map of sets
\[
z:\mathscr{T}_{n}\to\mathscr{Z}_{n};\ t\mapsto z(t)=\frac{n^{2}+t^{4}}{2t\sqrt{n^{2}-t^{4}}}.
\]
In fact, we can apply the duplication formula
\[
x([2]P)=\left(\frac{x^{2}+n^{2}}{2y}\right)^{2},\quad\forall P=(x,y)\in E_{n}
\]
to the rational points $P=(-t^{2},\pm t\sqrt{n^{2}-t^{4}})$ on $E_{n}$ and obtain
\[
x([2]P)=\frac{(n^{2}+t^{4})^{2}}{4t^{2}(n^{2}-t^{4})}=z(t)^{2}.
\]
Hence, we have the following commutative diagram
\begin{equation}
\xymatrix{t\ar@{|->}[d] & \mathscr{T}_{n}\ar[r]^{z}\ar@{^{(}->}[d]_{\varphi} & \mathscr{Z}_{n}\ar@{_{(}->}[d]^{\psi} & z\ar@{|->}[d]\\ (-t^{2},t\sqrt{n^{2}-t^{4}}) & E_{n}(\mathbb{Q})\ar[r]_{[2]} & 2E_{n}(\mathbb{Q}) & (z^{2},-z\sqrt{z^{4}-n^{2}}) } \label{eq:comm.diag.1}
\end{equation}
\begin{example*}
The first congruent number $n=5$ is also the first reflecting congruent number. If $t=2$, then $z(t)=41/12$ and $n\pm t^{2},z(t)^{2}\pm n$ are all rational squares.
\end{example*}
Since $z(0+)=z(\sqrt{n}-)=+\infty$, the real-valued function
\[
z(t)=\frac{n^{2}+t^{4}}{2t\sqrt{n^{2}-t^{4}}}=\frac{\sqrt{n^{2}-t^{4}}}{2t}+\frac{t^{3}}{\sqrt{n^{2}-t^{4}}}
\]
is certainly not injective on $(-\sqrt{n},0)\cup(0,\sqrt{n})$. However, as a map between sets of rationals:
\begin{prop}
\label{prop:Tn->Zn inj} The map $z:\mathscr{T}_{n}\to\mathscr{Z}_{n}$ is always injective.
\end{prop}
\begin{proof}
If $z\in\mathscr{Z}_{n}$, then there exists $P=(x,y)\in E_{n}(\mathbb{Q})$ such that $x([2]P)=z^{2}$ and the set $\Sigma=\{\pm P,\pm P+T_{1},\pm P+T_{2},\pm P+T_{3}\}$ consists of all rational points $Q\in E_{n}(\mathbb{Q})$ such that $x([2]Q)=z^{2}$. Then, we have
\[
x(\pm P+T_{1})=\frac{-n(x-n)}{x+n},\ x(\pm P+T_{2})=\frac{-n^{2}}{x},\ x(\pm P+T_{3})=\frac{n(x+n)}{x-n}.
\]
If $z$ has a preimage $t\in\mathscr{T}_{n}$, then $n-t^{2}=u^{2}$ and $n+t^{2}=v^{2}$ for some $u,v\in\mathbb{Q}^{*}$ and the $x$-coordinates of points in $\Sigma$ are given by $-t^{2},nv^{2}u^{-2},n^{2}t^{-2},-nu^{2}v^{-2}$. Since $n$ is not a square, $nu^{2}v^{-2}$ is not a rational square. So there exists only one $t\in\mathscr{T}_{n}$ as the preimage of $z$. Hence, $\mathscr{T}_{n}\to\mathscr{Z}_{n}$ is always injective.
\end{proof}
Let $\mathscr{P}_{n}'\subset\mathscr{P}_{n}$ consist of pairs $(P,Q)$ of positive coprime integers $P>Q$ with different parity such that $P/n$, $Q$, $P-Q$, and $P+Q$ are all integer squares.
\begin{thm}
A positive square-free integer $n$ is a reflecting congruent number if and only if the set $\mathscr{P}_{n}'$ is not empty.
\end{thm}
\begin{proof}
It suffices to show that there is a one-to-one correspondence
\[
\mathscr{P}_{n}'\leftrightarrow\mathscr{T}_{n}^{+};\quad(P,Q)=(nS^{2},T^{2})\leftrightarrow t=\frac{T}{S}.
\]
For any $(P,Q)\in\mathscr{P}_{n}'$, we write $(P,Q)=(nS^{2},T^{2})$, where $S,T$ are positive coprime integers, and let $t=T/S=\sqrt{nQ/P}$. Then, $n\pm t^{2}=n\pm nQ/P=(P\pm Q)/(P/n)$ are all rational squares and thus $t\in\mathscr{T}_{n}^{+}$. Conversely, any $t=T/S\in\mathscr{T}_{n}^{+}$ with $\gcd(S,T)=1$ corresponds to a unique pair $(P,Q)=(nS^{2},T^{2})\in\mathscr{P}_{n}'$. Indeed, $nS^{2}\pm T^{2}=S^{2}(n\pm t^{2})$ are integer squares.
To show that $P$ and $Q$ are coprime, we consider $d:=\gcd(P,Q)$, which is given by
\[
\gcd(nS^{2},T^{2})=\gcd(n,T^{2})=\gcd(n,T),
\]
since $\gcd(S,T)=1$ and $n$ is square-free. If $d$ were greater than $1$, then $0<U^{2}=nS^{2}-T^{2}$ would imply that $d\mid U$, $d^{2}\mid nS^{2}$, and thus $d\mid S$, a contradiction.
\end{proof}
\begin{rem*}
We obtain the following commutative diagram to extend the previous one
\begin{equation}
\xymatrix{\mathscr{P}_{n}'\ar@{_{(}->}[r]\ar@{<->}[d] & \mathscr{P}_{n}\ar@{<->}[d]\\ \mathscr{T}_{n}^{+}\ar@{^{(}->}[r]^{z} & \mathscr{Z}_{n}^{+} } \label{eq:comm.diag.2}
\end{equation}
So the map $\mathscr{T}_{n}\to\mathscr{Z}_{n}$ is never surjective unless both sets are empty. For example, if $n$ is reflecting congruent, then for any $z\in\mathscr{Z}_{n}$ such that $\psi(z)\in4E_{n}(\mathbb{Q})$, $z$ has no preimage in $\mathscr{T}_{n}$, since $[2]\circ\varphi(\mathscr{T}_{n})\subset2E_{n}(\mathbb{Q})\setminus4E_{n}(\mathbb{Q})$.
\end{rem*}
But a congruent number may not be a reflecting congruent number.
\begin{lem}
\label{lem:-1QuadRes} If a positive square-free number $n$ is reflecting congruent, then $-1$ must be a quadratic residue modulo $n$.
\end{lem}
\begin{proof}
This is immediate from Proposition \ref{prop:R'(2,1)} since $\mathscr{R}'(2,2)\subset\mathscr{R}'(2,1)$.
\end{proof}
\begin{lem}
\label{lem:EvenNoReflect} A positive square-free even number $n$ is not reflecting congruent.
\end{lem}
\begin{proof}
Since $n$ is square-free and even, $n\equiv2,6\mod8$. Suppose $(S,T,U,V)$ is a nontrivial primitive solution to the system (\ref{eq:(2,2)}). If $n\equiv2\mod8$, then the system reduced modulo $8$ only has the following solutions
\[
(S^{2},T^{2},U^{2},V^{2})\equiv(0,0,0,0),(0,4,4,4),(4,0,0,0),\text{ or }(4,4,4,4)\mod8,
\]
which imply that $S,T,U,V$ are all even, a contradiction to $\gcd(S,T,U,V)=1$. The case in which $n\equiv6\mod8$ can be proved in the same way or by Lemma \ref{lem:-1QuadRes}, because $n/2\equiv3\mod4$ must have a prime divisor $\equiv3\mod4$.
\end{proof}
Summarizing the two lemmas, we obtain the following
\begin{prop}
For a positive square-free number $n$ to be reflecting congruent, it is necessary that $n$ only has prime divisors $\equiv1\mod4$.
\end{prop}
\begin{example*}
The congruent number $5735=5\cdot31\cdot37$ is not reflecting, as it has a prime divisor $31\equiv3\mod4$ and thus $-1$ is not a quadratic residue modulo $5735$\footnote{May MU5735 R.I.P.}.
\end{example*}
It is natural to consider the problem of reflecting congruent numbers in local fields $\mathbb{Q}_{v}$, where $v$ is either a prime number or $\infty$. Recall that for any $a,b\in\mathbb{Q}_{v}^{*}$, the Hilbert symbol $(a,b)_{v}$ is defined to be $1$ if $ax^{2}+by^{2}-z^{2}=0$ has a solution $(x,y,z)$ other than the trivial one $(0,0,0)$ in $\mathbb{Q}_{v}^{3}$ and $-1$ otherwise. Let $n>1$ be a positive square-free integer. If $nS^{2}-T^{2}=U^{2}$ has a nontrivial solution in $\mathbb{Q}_{v}^{3}$, then $(n,-1)_{v}=1$. If $p$ is an odd prime divisor of $n$, then
\[
(n,-1)_{p}=\left(\frac{-1}{p}\right)=\begin{cases} +1, & \text{if }p\equiv1\mod4,\\ -1, & \text{if }p\equiv3\mod4, \end{cases}
\]
where $\left(\frac{*}{p}\right)$ is the Legendre symbol. Then, $n$ cannot have prime divisors $\equiv3\mod4$, which gives another proof of Lemma \ref{lem:-1QuadRes}. If $2$ is a divisor of $n$, then the proof of Lemma \ref{lem:EvenNoReflect} essentially shows that (\ref{eq:(2,2)}) only has a trivial solution in $\mathbb{Z}_{2}$.
Hence, $n$ can only have prime divisors congruent to $1$ modulo $4$. Conversely,
\begin{lem}
\label{lem:localsol} If a positive square-free number $n$ only has prime divisors congruent to $1$ modulo $4$, then there exists $t\in\mathbb{Q}_{p}^{*}$ such that $n\pm t^{2}$ are squares in $\mathbb{Q}_{p}$, where $p$ is $\infty$, or $2$, or any prime number congruent to $1$ modulo $4$, or any prime number congruent to $3$ modulo $4$ such that $n$ is a quadratic residue modulo $p$.
\end{lem}
\begin{proof}
Any prime number congruent to $1$ modulo $4$ can be written as a sum of two distinct positive integer squares. By the Brahmagupta-Fibonacci identity
\[
(x^{2}+y^{2})(z^{2}+w^{2})=(xz\mp yw)^{2}+(xw\pm yz)^{2},
\]
$n$ can be written as a sum of two distinct positive integer squares, say, $n=t^{2}+u^{2}$.

Case 1: $p=\infty$. Clearly, $v^{2}=n+t^{2}$ is a square in $\mathbb{Q}_{\infty}=\mathbb{R}$.

Case 2: $p=2$. Then, we can choose $t,u$ such that $u^{2}\equiv1\mod8$ and
\[
t^{2}\equiv\begin{cases} 0\mod8, & \text{if }n\equiv1\mod8,\\ 4\mod8, & \text{if }n\equiv5\mod8. \end{cases}
\]
Then in both cases, $n+t^{2}\equiv1\mod8$ and thus $v^{2}=n+t^{2}$ is a square in $\mathbb{Z}_{2}$.

Case 3: $p\equiv1\mod4$. If $p\mid n$, then $p\mid t$ if and only if $p\mid u$; so $p$ cannot divide $t$, otherwise $p^{2}\mid n$. Therefore, $n+t^{2}\equiv t^{2}\not\equiv0\mod p$ and thus $v^{2}=n+t^{2}$ is a square in $\mathbb{Z}_{p}$. If $p\nmid n$, then we can write $p=a^{2}+b^{2}$, $p^{2}=x^{2}+y^{2}$ such that $x=a^{2}-b^{2}$ and $y=2ab$, and $np^{2}=t'^{2}+u'^{2}$ such that $p\nmid t'$ and $p\nmid u'$. Indeed, we have
\[
np^{2}=(t^{2}+u^{2})(x^{2}+y^{2})=(xt\mp yu)^{2}+(xu\pm yt)^{2}.
\]
If $p$ divides both $xt-yu$ and $xt+yu$, then $p\mid xt$ and $p\mid yu$. Since $p\nmid x$ and $p\nmid y$, we have $p\mid t$, $p\mid u$, and thus $p\mid n$, a contradiction. Then, $np^{2}+t'^{2}\equiv t'^{2}\not\equiv0\mod p$ and thus $u^{2}=n-t'^{2}/p^{2}=u'^{2}/p^{2}$ and $v^{2}=n+t'^{2}/p^{2}$ are squares in $\mathbb{Q}_{p}$.

Case 4: $p\equiv3\mod4$ such that $n$ is a quadratic residue modulo $p$. Let $t=p$. Then, $n\pm t^{2}\equiv n\not\equiv0\mod p$. Thus $u^{2}=n-t^{2}$ and $v^{2}=n+t^{2}$ are squares in $\mathbb{Z}_{p}$.

In any case, we can find $t$ in $\mathbb{Q}_{p}$ such that $n\pm t^{2}$ are squares in $\mathbb{Q}_{p}$.
\end{proof}
\begin{rem*}
If $p$ is a prime in the residue class $3$ modulo $4$ and $n$ is a quadratic nonresidue modulo $p$, then further conditions on $n$, which we do not yet know, would be required for the existence of $t\in\mathbb{Q}_{p}^{*}$ such that $n\pm t^{2}$ are squares in $\mathbb{Q}_{p}$.
\end{rem*}
Next, we apply a complete $2$-descent on the elliptic curve $E_{n}$ to find a criterion for a congruent number to be reflecting congruent. The discriminant of $E_{n}$ is $\Delta=64n^{6}$ and the torsion subgroup of $E_{n}(\mathbb{Q})$ is
\[
E_{n}[2]=\{T_{0}=O,T_{1}=(-n,0),T_{2}=(0,0),T_{3}=(n,0)\}.
\]
So $E_{n}$ has good reduction except at $2$ and prime divisors of $n$. Let $S$ be the set of prime divisors of $n$ together with $2$ and $\infty$. A complete set of representatives for
\[
\mathbb{Q}(S,2)=\{b\in\mathbb{Q}^{*}/\mathbb{Q}^{*2}:v_{p}(b)\equiv0\pmod2\text{ for all }p\notin S\}
\]
is given by the set $\{\pm\prod_{i}p_{i}^{\varepsilon_{i}}\mid\varepsilon_{i}\in\{0,1\},p_{i}\in S\setminus\{\infty\}\}$. We identify this set with $\mathbb{Q}(S,2)$.
Then, we have the following injective homomorphism \begin{align} \kappa:E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q}) & \to\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)\label{eq:kappa}\\ P=(x,y) & \mapsto\begin{cases} (x-e_{1},x-e_{2}), & \text{if }x\ne e_{1},e_{2},\\ \left(\frac{e_{1}-e_{3}}{e_{1}-e_{2}},e_{1}-e_{2}\right), & \text{if }x=e_{1},\\ \left(e_{2}-e_{1},\frac{e_{2}-e_{3}}{e_{2}-e_{1}}\right), & \text{if }x=e_{2},\\ (1,1), & \text{if }P=O, \end{cases}\nonumber \end{align} where $e_{1}=-n$, $e_{2}=0$, $e_{3}=n$. A pair $(m_{1},m_{2})\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)$, not in the image of one of the three points $O$, $T_{1}$, $T_{2}$, is the image of a point $P=(x,y)\in E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})$ if and only if the equations \begin{equation} \begin{cases} e_{2}-e_{1} & =\ n\,=m_{1}y_{1}^{2}-m_{2}y_{2}^{2},\\ e_{3}-e_{1} & =2n=m_{1}y_{1}^{2}-m_{1}m_{2}y_{3}^{2}, \end{cases}\label{eq:2-cover} \end{equation} have a solution $(y_{1},y_{2},y_{3})\in\mathbb{Q}^{*}\times\mathbb{Q}^{*}\times\mathbb{Q}$. If such a solution exists, then $P=(m_{1}y_{1}^{2}+e_{1},m_{1}m_{2}y_{1}y_{2}y_{3})$ is a rational point on $E_{n}$ such that $\kappa(P)=(m_{1},m_{2})$. Then, we obtain the first criterion of reflecting congruent numbers: \begin{thm} \label{thm:criterionby2descent} A positive square-free integer $n$ is reflecting congruent if and only if one and thus all of $(1,-1),(2,n),(n,1),(2n,-n)$ lie in the image of $\kappa$. \end{thm} \begin{proof} Since $\kappa(T_{1})=(2,-n)$, $\kappa(T_{2})=(n,-1)$, and $\kappa(T_{3})=(2n,n)$, one of \[ (1,-1),(2,n),(n,1),(2n,-n)\in(1,-1)\kappa(E_{n}[2])\subset\mathbb{Q}(S,2)\times\mathbb{Q}(S,2) \] lies in the image of $\kappa$ if and only if all of them lie in the image of $\kappa$. If (\ref{eq:(2,2)}) has a nontrivial solution $(S,T,U,V)$ in integers, then \[ (U/S)^{2}+(T/S)^{2}=n,\quad(U/S)^{2}+(V/S)^{2}=2n, \] i.e., $(y_{1},y_{2},y_{3})=(U/S,T/S,V/S)$ is a solution to (\ref{eq:2-cover}) for $m_{1}=1$ and $m_{2}=-1$, or in other words, $\kappa(-t^{2},\pm t\sqrt{n^{2}-t^{4}})=(1,-1)$, where $t=T/S$. Conversely, if (\ref{eq:2-cover}) has a solution $(y_{1},y_{2},y_{3})\in\mathbb{Q}^{*}\times\mathbb{Q}^{*}\times\mathbb{Q}$ for $m_{1}=1$ and $m_{2}=-1$, then $P=(x,y)=(y_{1}^{2}-n,-y_{1}y_{2}y_{3})\in E_{n}(\mathbb{Q})$ and \[ \sqrt{-x}=\sqrt{n-y_{1}^{2}}=|y_{2}| \] is a rational number such that $n-y_{2}^{2}=y_{1}^{2}$ and $n+y_{2}^{2}=y_{3}^{2}$. \end{proof} \begin{cor} If $n$ is a reflecting congruent number, then $\mathscr{T}_{n}^{+}$ is infinite. \end{cor} \begin{proof} Indeed, $\mathscr{T}_{n}$ contains at least two elements, say $\pm t_{0}$. Let $P=\varphi(t_{0})=(-t_{0}^{2},t_{0}\sqrt{n^{2}-t_{0}^{4}})\in E_{n}(\mathbb{Q})$. Then, $\{\varphi^{-1}(Q)\mid Q\in P+2E_{n}(\mathbb{Q})\}$ is an infinite subset of $\mathscr{T}_{n}$. Hence, $\mathscr{T}_{n}$ and thus $\mathscr{T}_{n}^{+}$ are infinite. \end{proof} \begin{example*} Although $n=205=5\cdot41\equiv5\mod8$ is a square-free congruent number, which only has prime divisors $\equiv1,5\mod8$, it is not reflecting. Indeed, $E_{205}(\mathbb{Q})$ has rank $1$ and a torsion-free generator $(x,y)=(245,2100)$, but one has $\kappa(x,y)=(2,5)$, which is different from $(1,-1),(2,n),(n,1),(2n,-n)$. \end{example*} Given any $(P,Q)\in\mathscr{P}_{n}$, we get a rational right triangle $(a,b,c)$ in $\ltriangle_{n}$ of the form $(A/R,B/R,C/R)$, where $(A,B,C)=(P^{2}-Q^{2},2PQ,P^{2}+Q^{2})$ and $R=\sqrt{PQ(P^{2}-Q^{2})/n}$. 
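As a concrete numerical illustration of the construction just described, the following minimal Python sketch (our own illustrative script; the sample pair $(P,Q)=(5,4)$ and the helper \texttt{squarefree\_part} are chosen for demonstration and are not taken from the text) recovers a rational right triangle of area $n$:
\begin{verbatim}
from fractions import Fraction
from math import isqrt

def squarefree_part(m):
    # return the square-free part of a positive integer m
    out, d = 1, 2
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        if e % 2:
            out *= d
        d += 1
    return out * m

P, Q = 5, 4                              # P > Q, coprime, opposite parity
A, B, C = P*P - Q*Q, 2*P*Q, P*P + Q*Q    # Euclid's formula gives (9, 40, 41)
n = squarefree_part(P*Q*(P*P - Q*Q))     # square-free part of 180 is n = 5
R = isqrt(P*Q*(P*P - Q*Q) // n)          # R = sqrt(PQ(P^2-Q^2)/n) = 6
a, b, c = Fraction(A, R), Fraction(B, R), Fraction(C, R)
assert a*a + b*b == c*c and a*b / 2 == n # (3/2, 20/3, 41/6) has area 5
\end{verbatim}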
By (\ref{eq:from Z_n+ to Pn}), $(a,b,c)$ corresponds to a rational point $(c^{2}/4,-|b^{2}-a^{2}|c/8)\in2E_{n}(\mathbb{Q})$, which by the duplication formula is the double of the rational point $(x,y)=(\frac{nb}{c-a},\frac{2n^{2}}{c-a})\in E_{n}(\mathbb{Q})$.\footnote{There are $4$ such rational points in total, but their differences lie in $E_{n}[2]$ and a different choice of such a point leads to the same conclusion, that is, $(P,Q)\in\mathscr{P}_{n}'$.} Then, we have
\begin{align*}
\kappa(x,y) & =\left(\frac{nb}{c-a}+n,\frac{nb}{c-a}\right)=\left(n\frac{B+C-A}{C-A},n\frac{B}{C-A}\right)\\
 & =\left(PQ(P^{2}-Q^{2})\frac{2(P+Q)Q}{2Q^{2}},PQ(P^{2}-Q^{2})\frac{2PQ}{2Q^{2}}\right)\\
 & =(P(P-Q),(P-Q)(P+Q)).
\end{align*}
If $n$ is a reflecting congruent number and $\kappa(x,y)$ is one of $(1,-1)$, $(2,n)$, $(n,1)$, and $(2n,-n)$, then $\kappa(x,y)$ is either $(2,n)$ or $(n,1)$ since $P>Q>0$.

Case 1: $\kappa(x,y)=(2,n)$. Then, the square-free part of $P^{2}-Q^{2}$ is $n$. It follows that $PQ$ is a square. Since $P$ and $Q$ are coprime, both of them are squares. But then the square-free part of $P(P-Q)$ equals that of the odd number $P-Q$ and can never be $2$, since $P$ and $Q$ have different parity. So this case never happens.

Case 2: $\kappa(x,y)=(n,1)$. Then, $P^{2}-Q^{2}$ is a square and the square-free part of $(P-Q)P$ is $n$. It follows that the square-free part of $PQ$ is $n$ and that $(P+Q)Q$ is a square. Since $P$ and $Q$ are coprime, so are $P+Q$ and $Q$. So $P+Q$ and $Q$ are both squares and thus $P-Q$ is also a square. Moreover, the square-free part of $P$ is $n$. In a word, we must have $(P,Q)\in\mathscr{P}_{n}'$.

Hence, only the subset of $\mathscr{Z}_{n}^{+}$ which corresponds to $\mathscr{P}_{n}'$ as in (\ref{eq:from Z_n+ to Pn}) has preimages in $\mathscr{T}_{n}^{+}$, which explains (\ref{eq:comm.diag.2}) again. Now we can give a description of the image of the map $z:\mathscr{T}_{n}\to\mathscr{Z}_{n}$.
\begin{prop}
\label{prop:preimage of z} Let $n$ be a reflecting congruent number and define
\[
\Sigma(z)=\{P\in E_{n}(\mathbb{Q})\mid[2]P=\pm\psi(z)\}=\{P\in E_{n}(\mathbb{Q})\mid x([2]P)=z^{2}\},
\]
for any $z\in\mathscr{Z}_{n}$. Then, the following conditions are equivalent:
\begin{enumerate}
\item \label{enu:preimage} $z\in\mathscr{Z}_{n}$ has a preimage in $\mathscr{T}_{n}$;
\item \label{enu:exist} $\exists P\in\Sigma(z)$, $\kappa(P)\in(1,-1)\kappa(E_{n}[2])$;
\item \label{enu:for all} $\forall P\in\Sigma(z)$, $\kappa(P)\in(1,-1)\kappa(E_{n}[2])$.
\end{enumerate}
So the image of $z:\mathscr{T}_{n}\to\mathscr{Z}_{n}$ is the following subset of $\mathscr{Z}_{n}$:
\[
\{z\in\mathscr{Z}_{n}\mid\kappa(\Sigma(z))\subset(1,-1)\kappa(E_{n}[2])\}.
\]
\end{prop}
\begin{proof}
(\ref{enu:preimage})$\Rightarrow$(\ref{enu:exist}). If $t\in\mathscr{T}_{n}$ is a preimage of $z\in\mathscr{Z}_{n}$, then $P=(-t^{2},\pm t\sqrt{n^{2}-t^{4}})\in\Sigma(z)$ and $\kappa(P)=(n-t^{2},-t^{2})=(1,-1)$.

(\ref{enu:exist})$\Leftrightarrow$(\ref{enu:for all}). For any $P,Q\in\Sigma(z)$, we have either $P+Q\in E_{n}[2]$ or $P-Q\in E_{n}[2]$, and thus $\kappa(P)$ and $\kappa(Q)$ lie in the same coset of $\kappa(E_{n}[2])$.

(\ref{enu:for all})$\Rightarrow$(\ref{enu:preimage}). Since $\kappa(P)\in(1,-1)\kappa(E_{n}[2])$ for any $P\in\Sigma(z)$, we may replace $P$ by $P+T$ for some $T\in E_{n}[2]$ if necessary so that $\kappa(P)=(1,-1)$. Then, $z$ has a preimage $\sqrt{-x(P)}$ multiplied by the sign of $z$ in $\mathscr{T}_{n}$.
\end{proof}
\begin{example*}
Note that $n=41$ is the least prime congruent number such that $E_{n}$ has rank $2$.
Let $P=(-9,120)$ be one of the torsion-free generators of $E_{n}(\mathbb{Q})$. Then, $z=881/120\in\mathscr{Z}_{n}$ is a square root of $x([2]P)$ and we have
\[
z^{2}-n=(431/120)^{2},\quad z^{2}+n=(1169/120)^{2}.
\]
However, we have $\kappa(P)=(2,-1)\notin\{(1,-1),(2,41),(41,1),(82,-41)\}$. So $z$ has no preimage in $\mathscr{T}_{n}$. Later, we will show that $n=41$ is also reflecting congruent.
\end{example*}
Our criterion in Theorem \ref{thm:criterionby2descent} is essentially based on computing generators for the weak Mordell-Weil group $E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})$, which fits in the short exact sequence
\begin{equation}
0\to E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})\to S^{(2)}(E_{n}/\mathbb{Q})\to\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]\to0,\label{eq:SES}
\end{equation}
where $S^{(2)}(E_{n}/\mathbb{Q})$ is the $2$-Selmer group and $\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]$ is the $2$-torsion of the Shafarevich-Tate group $\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})$ of $E_{n}/\mathbb{Q}$. The homogeneous space associated to any pair $(m_{1},m_{2})\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)$ is the curve in $\mathbb{P}^{3}$ given by the equation
\[
C_{(m_{1},m_{2})}:\begin{cases} e_{2}-e_{1} & =\ n\,=m_{1}y_{1}^{2}-m_{2}y_{2}^{2},\\ e_{3}-e_{1} & =2n=m_{1}y_{1}^{2}-m_{1}m_{2}y_{3}^{2}, \end{cases}
\]
and we have the following isomorphism of finite groups:
\[
S^{(2)}(E_{n}/\mathbb{Q})\cong\{(m_{1},m_{2})\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2):C_{(m_{1},m_{2})}(\mathbb{Q}_{v})\ne\emptyset,\forall v\in S\};
\]
under this identification, the homomorphism $E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})\to S^{(2)}(E_{n}/\mathbb{Q})$ is given by the injective map $\kappa$ in (\ref{eq:kappa}). Since $\kappa(T_{1})=(2,-n)$, $\kappa(T_{2})=(n,-1)$, and $\kappa(T_{3})=(2n,n)$, we have $E_{n}[2]\cong\{(1,1),(2,-n),(n,-1),(2n,n)\}\subset S^{(2)}(E_{n}/\mathbb{Q})$. Hence, we have
\[
\rank E_{n}(\mathbb{Q})=\dim_{\mathbb{F}_{2}}S^{(2)}(E_{n}/\mathbb{Q})-\dim_{\mathbb{F}_{2}}\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]-2.
\]
If $m_{1}<0$, then $n=m_{1}y_{1}^{2}-m_{2}y_{2}^{2}$ has no solution in $\mathbb{Q}_{\infty}$ for $m_{2}>0$, and similarly $2n=m_{1}y_{1}^{2}-m_{1}m_{2}y_{3}^{2}$ has no solution in $\mathbb{Q}_{\infty}$ for $m_{2}<0$. Therefore, we have $(m_{1},m_{2})\notin S^{(2)}(E_{n}/\mathbb{Q})$ whenever $m_{1}<0$. Theorem \ref{thm:criterionby2descent} says that a positive square-free integer $n$ is reflecting congruent if and only if one and thus all of $(1,-1),(2,n),(n,1),(2n,-n)$ lie in the image of $E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})\to S^{(2)}(E_{n}/\mathbb{Q})$, or the kernel of $S^{(2)}(E_{n}/\mathbb{Q})\to\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]$. This proves the following necessary condition on reflecting congruent numbers.
\begin{prop}
\label{prop:necessary condition} For a positive square-free integer $n$ to be a reflecting congruent number, it is necessary that one and thus all of $C_{(1,-1)},C_{(2,n)},C_{(n,1)},C_{(2n,-n)}$ have a point in $\mathbb{Q}_{v}$ for any $v\in S$; it is also sufficient if $\textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]$ is trivial.
\end{prop}
Note that this condition covers all the necessary conditions given before. Indeed, $C_{(1,-1)}:n=y_{1}^{2}+y_{2}^{2},2n=y_{1}^{2}+y_{3}^{2}$ is the same as $n-t^{2}=u^{2},n+t^{2}=v^{2}$. If $-1$ is a quadratic nonresidue modulo $n$, i.e., $n$ has a prime divisor $p\equiv3\mod4$, then Lemma \ref{lem:-1QuadRes} essentially says that $C_{(1,-1)}(\mathbb{Q}_{p})$ is empty.
If $n$ is even, then Lemma \ref{lem:EvenNoReflect} essentially says that $C_{(1,-1)}(\mathbb{Q}_{2})$ is empty. Moreover,
\begin{cor}
\label{cor:p1mod4:(1,-1)} If a positive square-free integer $n$ only has prime divisors congruent to $1$ modulo $4$, then $(1,-1)E_{n}[2]\subset S^{(2)}(E_{n}/\mathbb{Q})$.
\end{cor}
\begin{proof}
Indeed, Lemma \ref{lem:localsol} says that $C_{(1,-1)}(\mathbb{Q}_{v})$ is not empty for any $v\in S$.
\end{proof}
\begin{example*}
Return to the example $n=205$. We have $(1,-1)\in S^{(2)}(E_{n}/\mathbb{Q})$ by Corollary \ref{cor:p1mod4:(1,-1)}. Moreover, we have the following isomorphisms of groups:
\[
E_{n}(\mathbb{Q})/2E_{n}(\mathbb{Q})\cong(\mathbb{Z}/2\mathbb{Z})^{3},\ S^{(2)}(E_{n}/\mathbb{Q})\cong(\mathbb{Z}/2\mathbb{Z})^{5},\ \textrm{{\cyr SH}}(E_{n}/\mathbb{Q})[2]\cong(\mathbb{Z}/2\mathbb{Z})^{2},
\]
and in particular $\{(1,\pm1),(1,\pm41),(5,\pm1),(5,\pm41)\}$ gives a complete set of eight representatives for $S^{(2)}(E_{n}/\mathbb{Q})/E_{n}[2]$. Since $\kappa(245,2100)\in(1,-41)E_{n}[2]$, the map $\kappa$ has image $E_{205}[2]\cup(1,-41)E_{205}[2]$, which does not contain $(1,-1)E_{205}[2]$.
\end{example*}
Next, we calculate more $2$-Selmer groups for our main results. Here, we carry out the routine calculations in detail because we need to know not only the size of the $2$-Selmer groups but also their group elements.
\begin{lem}
\label{lem:2Selmer(p1mod4)} Let $p$ be a prime integer congruent to $1$ modulo $4$. Then
\[
S^{(2)}(E_{p}/\mathbb{Q})=\begin{cases} (1,\pm1)E_{p}[2]\cup(1,\pm p)E_{p}[2]\cong(\mathbb{Z}/2\mathbb{Z})^{4}, & \text{if }p\equiv1\mod8,\\ (1,\pm1)E_{p}[2]\cong(\mathbb{Z}/2\mathbb{Z})^{3}, & \text{if }p\equiv5\mod8. \end{cases}
\]
\end{lem}
\begin{proof}
We have $S=\{2,p,\infty\}$ and $\mathbb{Q}(S,2)\cong\{\pm1,\pm2,\pm p,\pm2p\}$. Since $(m_{1},m_{2})\notin S^{(2)}(E_{p}/\mathbb{Q})$ whenever $m_{1}<0$, $S^{(2)}(E_{p}/\mathbb{Q})$ is a subgroup of
\[
S^{+}=\{(m_{1},m_{2})\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)\mid m_{1}>0\}.
\]
Since $E_{p}[2]\cong\{(1,1),(2,-p),(p,-1),(2p,p)\}$ is a subgroup of $S^{(2)}(E_{p}/\mathbb{Q})$,
\[
\{(1,\pm1),(1,\pm2),(1,\pm p),(1,\pm2p)\}
\]
gives a complete set of eight representatives for $S^{+}/E_{p}[2]$. Since $(1,-1)\in S^{(2)}(E_{p}/\mathbb{Q})$ by Corollary \ref{cor:p1mod4:(1,-1)}, it suffices to check for $m_{2}=2,p,2p$ whether
\[
C_{(1,m_{2})}:p=y_{1}^{2}-m_{2}y_{2}^{2},\quad2p=y_{1}^{2}-m_{2}y_{3}^{2}
\]
has a point in the local field $\mathbb{Q}_{v}$ for all $v=2,p$, or equivalently whether
\[
pZ^{2}+m_{2}Y_{2}^{2}=Y_{1}^{2},\quad2pZ^{2}+m_{2}Y_{3}^{2}=Y_{1}^{2},\quad Z\ne0
\]
has a solution in the ring $\mathbb{Z}_{v}$ of $v$-adic integers for all $v=2,p$.

Case 1: $p\equiv1\mod8$. In this case, $-1$ and $2$ are both squares in $\mathbb{Z}_{p}$. Let $m_{2}=2$ and $Z,Y_{1},Y_{2},Y_{3}\in\mathbb{Z}_{2}$ be such that at least one of them has $2$-adic valuation $0$. Then $2pZ^{2}+2Y_{3}^{2}=Y_{1}^{2}$ implies that $v_{2}(Y_{1})\ge1$ and $pZ^{2}+2Y_{2}^{2}=Y_{1}^{2}$ implies that $v_{2}(Z)\ge1$. Then $2Y_{2}^{2}=Y_{1}^{2}-pZ^{2}$ and $2Y_{3}^{2}=Y_{1}^{2}-2pZ^{2}$ imply that $v_{2}(Y_{2})\ge1$ and $v_{2}(Y_{3})\ge1$, a contradiction. Let $m_{2}=2p$. Since $p$ is a square in $\mathbb{Z}_{2}$, this reduces to the previous case. Let $m_{2}=p$. Then, we write $p=a^{2}+b^{2}$, where $a,b\in\mathbb{Z}$ such that $2$ divides $a$ but not $b$.
Since $-2$ is a square in $\mathbb{Z}_{p}$, we have
\[
p-2a^{2}\equiv-2a^{2}\not\equiv0\mod p,\quad p-2a^{2}\equiv1-2a^{2}\equiv1\mod8,
\]
which implies that $p-2a^{2}$ is a square in $\mathbb{Z}_{p}$ and $\mathbb{Z}_{2}$. Let $c$ be any square root of $p-2a^{2}$ in $\mathbb{Z}_{p}$ (resp. $\mathbb{Z}_{2}$). Then, $pZ^{2}+pY_{2}^{2}=Y_{1}^{2}$ and $2pZ^{2}+pY_{3}^{2}=Y_{1}^{2}$ have a solution $(Z,Y_{1},Y_{2},Y_{3})=(a,p,b,c)$ in $\mathbb{Z}_{p}$ (resp. $\mathbb{Z}_{2}$). It follows that $(1,p)\in S^{(2)}(E_{p}/\mathbb{Q})$ and $(1,2),(1,2p)\notin S^{(2)}(E_{p}/\mathbb{Q})$, and hence the assertion.

Case 2: $p\equiv5\mod8$. In this case, $-1$ is a square but $2$ is not a square in $\mathbb{Q}_{p}$. Then, the Hilbert symbols $(p,2)_{p}=(p,2p)_{p}=-1$ imply that $pZ^{2}+2Y_{2}^{2}=Y_{1}^{2}$, $pZ^{2}+2pY_{2}^{2}=Y_{1}^{2}$, $2pZ^{2}+pY_{3}^{2}=Y_{1}^{2}$ only have trivial solutions in $\mathbb{Q}_{p}^{3}$. It follows that $(1,2)$, $(1,2p)$, and $(1,p)$ are not in $S^{(2)}(E_{p}/\mathbb{Q})$, and hence the assertion.
\end{proof}
\begin{lem}
\label{lem:2Selmer:Lem5.3:Tian2012} Let $n$ be as in Theorem \ref{thm:TianThm1.3}. Then, we have
\[
S^{(2)}(E_{n}/\mathbb{Q})=(1,\pm1)E_{n}[2]\cong(\mathbb{Z}/2\mathbb{Z})^{3}.
\]
\end{lem}
\begin{proof}
By Corollary \ref{cor:p1mod4:(1,-1)}, we have $(1,-1)\in S^{(2)}(E_{n}/\mathbb{Q})$. By Lemma 5.3 of \cite{Tian2012}, $S^{(2)}(E_{n}/\mathbb{Q})\cong(\mathbb{Z}/2\mathbb{Z})^{3}$. Hence, $S^{(2)}(E_{n}/\mathbb{Q})$ must be of the desired form.
\end{proof}
Now we present the proofs of the main results of this work.
\begin{proof}[Proof of Theorem \ref{thm:p5mod8}]
By Proposition \ref{prop:necessary condition} and Corollary \ref{cor:p1mod4:(1,-1)}, it suffices to show that $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial. By Theorem 3.6 of \cite{Monsky1990}, $E_{p}/\mathbb{Q}$ has positive rank and thus $\dim_{\mathbb{F}_{2}}E_{p}(\mathbb{Q})/2E_{p}(\mathbb{Q})\ge3$. By Lemma \ref{lem:2Selmer(p1mod4)},
\[
\dim_{\mathbb{F}_{2}}S^{(2)}(E_{p}/\mathbb{Q})=3=\dim_{\mathbb{F}_{2}}E_{p}(\mathbb{Q})/2E_{p}(\mathbb{Q})+\dim_{\mathbb{F}_{2}}\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2].
\]
Hence, $E_{p}(\mathbb{Q})/2E_{p}(\mathbb{Q})\cong S^{(2)}(E_{p}/\mathbb{Q})$ and $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial.
\end{proof}
\begin{example*}
The well-known prime congruent number $n=157\equiv5\mod8$ due to Zagier is reflecting congruent and $E_{157}$ has rank $1$. One can check that
\[
t=\frac{407598125202}{53156661805}\mapsto z=\frac{224403517704336969924557513090674863160948472041}{17824664537857719176051070357934327140032961660},
\]
and that $n\pm t^{2}$, $z^{2}\pm n$ are all rational squares.
\end{example*}
\begin{rem*}
In this example, $z$ seems much more complicated than $t$, and thus we have a good reason to study reflecting congruent numbers.
\end{rem*}
Returning to Conjecture \ref{conj:p1mod8}: by Proposition \ref{prop:necessary condition} and Corollary \ref{cor:p1mod4:(1,-1)}, it suffices to show that $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial. Since $p\equiv1\mod8$ is congruent, by Lemma \ref{lem:2Selmer(p1mod4)},
\[
1\le\rank E_{p}(\mathbb{Q})+\dim_{\mathbb{F}_{2}}\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]=\dim_{\mathbb{F}_{2}}S^{(2)}(E_{p}/\mathbb{Q})-2=2.
\]
Then, Theorem \ref{thm:p1mod8} is immediate. The parity conjecture says that $(-1)^{\rank E_{p}(\mathbb{Q})}$ is the global root number of $E_{p}$ over $\mathbb{Q}$, which is $1$ since $p\equiv1\mod8$. Thus, we have $\rank E_{p}(\mathbb{Q})\equiv0\mod2$.
On the other hand, if the Shafarevich-Tate group $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})$ is finite, then the order of $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is a perfect square; cf. \cite[Thm. 4.14 of Ch. X]{Silverman2009}. Thus, $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]\ne\mathbb{Z}/2\mathbb{Z}$ and $\rank E_{p}(\mathbb{Q})\ne1$. Hence, either the parity conjecture or the Shafarevich-Tate conjecture implies that $\rank E_{p}(\mathbb{Q})$ is exactly $2$ and $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial. It would be interesting if one could construct a rational point on $E_{p}$ whose image under $\kappa$ lies in $(1,-1)E_{p}[2]$ and thus prove Conjecture \ref{conj:p1mod8} independently of any conjectures.
\begin{example*}
Returning to the example $n=41$, the least prime congruent number in the residue class $1$ modulo $8$: since $E_{n}(\mathbb{Q})$ has rank $2$, $n=41$ is reflecting congruent. Indeed, if $t=8/5$, then $z(t)=1054721/81840\in\mathscr{Z}_{n}$ and
\begin{align*}
n-t^{2} & =(31/5)^{2}, & z(t)^{2}-n & =(915329/81840)^{2},\\
n+t^{2} & =(33/5)^{2}, & z(t)^{2}+n & =(1177729/81840)^{2}.
\end{align*}
\end{example*}
\begin{rem*}
Therefore, for any prime number $p$, we summarize that
\begin{enumerate}
\item if $p=2$, then $p$ is not congruent by the infinite descent method;
\item if $p\equiv1\mod8$ is congruent, then $p$ is conjecturally reflecting congruent;
\item if $p\equiv3\mod8$, then $p$ is not congruent by the infinite descent method;
\item if $p\equiv5\mod8$, then $p$ is reflecting congruent by Theorem \ref{thm:p5mod8};
\item if $p\equiv7\mod8$, then $p$ is not reflecting congruent by Lemma \ref{lem:-1QuadRes}.
\end{enumerate}
\end{rem*}
\begin{proof}[Proof of Theorems \ref{thm:TianThm1.1}, \ref{thm:TianThm5.2}, and \ref{thm:TianThm1.3}.]
We only prove Theorem \ref{thm:TianThm1.3} here. By Proposition \ref{prop:necessary condition} and Corollary \ref{cor:p1mod4:(1,-1)}, it suffices to show that $\textrm{{\cyr SH}}(E_{p}/\mathbb{Q})[2]$ is trivial, which follows from the proof of Theorem 1.3 of \cite{Tian2012}. For the proof of the other two theorems, see the proof of Theorem 1.1 and Theorem 5.3 of \cite{Tian2012}.
\end{proof}
\section{\label{sec:gcd(k,m)>=00003D3}$(k,m)$-Reflecting Numbers with $\gcd(k,m)\ge3$}
The goal of this section is to disprove the existence of $(k,m)$-reflecting numbers with $d=\gcd(k,m)\ge3$. Since the set $\mathscr{R}(k,m)$ of $(k,m)$-reflecting numbers is a subset of the set $\mathscr{R}(d,d)$ of $(d,d)$-reflecting numbers, it suffices to show that $(d,d)$-reflecting numbers do not exist for any $d\ge3$. By Theorem \ref{prop:R(k,m)}, $\mathscr{R}(d,d)$ is not empty if and only if the ternary Diophantine equation
\begin{equation}
2T^{d}+U^{d}=V^{d}\label{eq:2Td+Ud=00003DVd}
\end{equation}
has an integer solution $(T,U,V)$ such that $-V^{d}<U^{d}<V^{d}$. Therefore, Theorem \ref{thm:gcd(k,m)>=00003D3} follows immediately from the following
\begin{thm}
For any $d\ge3$, the ternary Diophantine equation (\ref{eq:2Td+Ud=00003DVd}) has no integer solution $(T,U,V)$ such that $-V^{d}<U^{d}<V^{d}$.
\end{thm}
\begin{proof}
If $d=3$, then by the theorem of Euler in Section \ref{sec:(k,m)}, $2T^{d}=V^{d}+(-U)^{d}$ has no integer solutions unless $V=\pm U$. If $d=4$, then another theorem of Euler says that $2T^{d}+U^{d}$ is not a square for any integers $T$ and $U$ unless $T=0$; see \cite[Ch. XIII, Thm. 210, page 411]{Euler1822elements}. Then, it suffices to prove the assertion for any odd prime number $d\ge5$.
Then, we rewrite the equation (\ref{eq:2Td+Ud=00003DVd}) as $V^{d}+(-U)^{d}=2T^{d}$, which, by the D\'enes conjecture, now a theorem, only has integer solutions $(V,U,T)$ such that $VUT=0$ or $|V|=|U|=|T|$. So equation (\ref{eq:2Td+Ud=00003DVd}) has no integer solution $(T,U,V)$ such that $-V^{d}<U^{d}<V^{d}$. \end{proof} \begin{rem*} This theorem follows from the Lander, Parkin, and Selfridge conjecture as well. If $d\ge5$ is odd, then $T,V$ are distinct positive integers. Clearly, $U$ cannot be $0$. Also, $U$ must be negative; otherwise, $T,V,U$ will be distinct positive integers and the conjecture implies $d\le4$. We rewrite $2T^{d}+U^{d}=V^{d}$ as $2T^{d}=V^{d}+(-U)^{d}$. Since $T,V,-U$ are distinct positive integers, the conjecture implies that $d\le4$. If $d\ge6$ is even, then we may assume $T,U,V$ to be distinct and positive. Then, the conjecture also implies that $d\le4$. \end{rem*} \bibliographystyle{alpha} \newcommand{\etalchar}[1]{$^{#1}$}
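As a closing computational footnote, the values in the worked examples of this paper (e.g., $z(2)=41/12$ for $n=5$ and $z(8/5)=1054721/81840$ for $n=41$) can be re-derived mechanically. The following minimal Python sketch is our own illustrative script, not part of the original text; it assumes exact rational arithmetic via \texttt{fractions.Fraction}:
\begin{verbatim}
from fractions import Fraction
from math import isqrt

def is_rational_square(q):
    # a positive rational is a square iff its numerator and
    # denominator (in lowest terms) are both integer squares
    return (isqrt(q.numerator)**2 == q.numerator and
            isqrt(q.denominator)**2 == q.denominator)

def z_of_t(n, t):
    # z(t) = (n^2 + t^4) / (2 t sqrt(n^2 - t^4)), assuming that
    # n^2 - t^4 is the square of a rational number
    w = n*n - t**4
    s = Fraction(isqrt(w.numerator), isqrt(w.denominator))
    assert s*s == w
    return (n*n + t**4) / (2*t*s)

for n, t in [(5, Fraction(2)), (41, Fraction(8, 5))]:
    z = z_of_t(n, t)
    assert all(is_rational_square(q)
               for q in (n - t*t, n + t*t, z*z - n, z*z + n))
    print(n, t, z)  # z = 41/12 for n = 5, z = 1054721/81840 for n = 41
\end{verbatim}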
\section{Introduction}\label{sec:intro}
Panoptic segmentation (PS) has been attracting growing attention from the vision community since it was proposed by Kirillov et al.~\cite{kirillov2019panoptic}. This trend is driven by its ambitious goal of accommodating both semantic segmentation and instance segmentation in a unified framework and producing holistic scene parsing results~\cite{xiong2019upsnet,li2019attention,kirillov2019panoptic,cheng2020panoptic,wang2020axial,wang2021max,li2021fully,zhang2021k,cheng2021per,cheng2021masked,li2021panoptic}. Most existing research is built under a common closed-set assumption, i.e., the model only needs to segment the classes of objects that appear in the training set. However, such systems are not competent for complex open-set scenarios. For example, an autonomous driving system that cannot identify abnormal objects may cause catastrophic danger~\cite{abraham2016autonomous}, and in medical diagnosis the possible problems may not even be predictable~\cite{bakator2018deep}. Therefore, PS systems that can deal with the open-set challenge are urgently demanded. The open-set problem~\cite{scheirer2012toward,scheirer2014probability,geng2020recent,vaze2021open} has been well explored in classification tasks: when new classes unseen in training appear at testing, the recognition model is required to not only accurately classify the known classes given in training but also effectively deal with the unknown classes. The recent research of \cite{hwang2021exemplar} extends PS to a realistic setting and first defines the \textit{open-set panoptic segmentation} (OPS) task. OPS takes the categories given during training as \textit{known}\xspace classes and requires the model to produce segments for both \textit{known}\xspace and \textit{unknown}\xspace class objects (``things'') at the testing phase, where the \textit{unknown}\xspace classes are never annotated and may not even appear in the training set. As the examples in Figure~\ref{fig:ops_setting} show, many bottles in the training image (above the closet) are hard to label pixel by pixel and are given as the \emph{void} area in the ground truth segments, while for testing images, the model is required to predict segments for different kinds of bottles and even the fork and toothbrush, which never appear during training. The OPS task is challenging because, on the one hand, the appearance of \textit{unknown}\xspace class objects is diverse and it is hard to directly model \textit{unknown}\xspace classes from the given training images. On the other hand, although the ``void'' category is available at the training phase, the training samples in the ``void'' category are too noisy to provide effective supervision, since the ``unknown thing'' class and the ``background'' are confounded together. In order to tackle these two challenges, we choose to recognize the \textit{unknown}\xspace class objects in a dual decision process. By coupling the \textit{known}\xspace class discriminator with a class-agnostic object prediction head, we can significantly improve the performance on the OPS task. Specifically, we build up a \textit{known}\xspace class classifier and suppress its predictions for ``void'' class proposals to compact the decision boundaries of \textit{known}\xspace classes and empower the \textit{known}\xspace class classifier with the ability to reject non-known classes. Then, we further create a class-agnostic object prediction head to distinguish ``unknown things'' from the background.
Moreover, we propose to use the pseudo-labeling method to further boost the generalization ability of the newly added object prediction head. Extensive experimental results show that our approach achieves a new state-of-the-art performance on various OPS tasks.
\section{Related Work}\label{sec:related_work}
\smallskip \noindent \textbf{Panoptic Segmentation} Pursuing holistic scene parsing, the panoptic segmentation (PS) task requires the generation of both semantic and instance segmentation simultaneously. Given full annotations of the training images, different kinds of modeling targets have been explored for the PS problem. Specifically, unified end-to-end networks~\cite{xiong2019upsnet,li2019attention,kirillov2019panoptic} were soon proposed after the initial release of the baseline method built on separate networks. DeepLab series methods~\cite{cheng2020panoptic,wang2020axial,wang2021max} are deployed for fast inference speed. More recently, universal image segmentation~\cite{li2021fully,zhang2021k,cheng2021per,cheng2021masked,li2021panoptic} has been pursued. In contrast, OPS has a distinct target, which demands the model to produce segments for \textit{unknown}\xspace classes that are never acknowledged during training.
\smallskip \noindent \textbf{Open-Set Learning} The open-set problem has been well explored in the recognition/classification task~\cite{scheirer2012toward,scheirer2014probability,bendale2016towards,yoshihashi2019classification,oza2019c2ae,geng2020recent,vaze2021open}. The target of open-set recognition is to make the model successfully identify known classes while also having the ability to identify unknown classes that are never exposed during training. OPS can be more challenging because \textit{unknown}\xspace classes are not provided intact but need to be detected by the model itself. Other related work includes open-world object detection~\cite{joseph2021towards,gupta2022ow} and open-world entity segmentation~\cite{qi2021open}. According to the problem definition, the former contains a human labeling process after the \textit{unknown}\xspace class detection, while OPS does not. Moreover, the open-set recognition procedure proposed in OW-DETR~\cite{gupta2022ow} differs significantly from our approach, e.g., the generation and usage of pseudo unknown objects vary greatly, and the unknown class decision process is also different. The latter aims to segment visual entities without considering classes, which is precisely the capability that OPS urgently requires.
\input{figure/fig_baseline_decision_boundary}
\section{Open-Set Panoptic Segmentation} \label{sec:bg}
According to the problem definition given in~\cite{hwang2021exemplar}, \textit{open-set panoptic segmentation} (OPS) has a similar definition to standard closed-set panoptic segmentation except for the label space and the targets of the task. Apart from the \textit{known}\xspace label space (i.e., countable objects in \emph{thing} classes $\mathcal{C}^{\text{Th}}$ and amorphous and uncountable regions in \emph{stuff} classes $\mathcal{C}^{\text{St}}$), which has annotations at the training phase and is required to be effectively segmented during testing, OPS also requires the model to detect and generate instance masks for the \emph{unknown thing} classes $\mathcal{C}^{\text{Th}}_{u}$ in the test set\footnote{Segmentation of \textit{unknown}\xspace \textit{stuff}\xspace classes is not required in the current OPS definition~\cite{hwang2021exemplar}.}.
The \textit{unknown}\xspace \emph{thing} classes $\mathcal{C}^{\text{Th}}_{u}$ are not annotated and may not even appear in the training images. Pixel areas in the ground truth of training images that are not manually annotated are assigned a semantic label named \emph{void}.
\input{table/tab_motivation}
Existing OPS methods~\cite{hwang2021exemplar} are built upon the classic Panoptic FPN network~\cite{kirillov2019panoptic}. This is because the region proposal network~\cite{ren2015faster} (RPN), an important part of the network, can generate class-agnostic proposals, which enables the discovery of various classes of objects in any image~\cite{gu2022openvocabulary} and makes the OPS problem tractable. Figure~\ref{fig:baseline_decision_boundary} (a) presents some proposal examples generated by the RPN module, where solid boxes in orange and blue denote proposals labeled as a specific \textit{known}\xspace \textit{thing}\xspace class. Dashed boxes in black and orange denote the ``void'' class proposals\footnote{Proposals with at least half of their region inside the ``void'' area.} and the other black solid boxes are background proposals. Since the proposal labeling of \textit{known}\xspace classes is based on the \textit{known}\xspace class GT, the quality of the selected proposals is guaranteed. However, the quality of the proposals $\mathcal{P}_{void}$ varies greatly, as a connected ``void'' area is not manually annotated and may contain multiple objects or just ambiguous pixels; therefore, some of them would be labeled as background in the closed-set PS setting. Examples in Figure~\ref{fig:baseline_decision_boundary}(a) show that a few yellow dashed boxes are well aligned with an \textit{unknown}\xspace instance in the ``void'' area, while a large number of black dashed boxes are not well aligned with any specific \textit{unknown}\xspace instance; the latter should have been labeled as background proposals as in the closed-set setting, but this is impossible in the open-set case. The existing OPS methods differ in how they use the ``void'' class proposals, and the top row of Figure~\ref{fig:baseline_decision_boundary}(b) presents their usage: the Void-ignorance baseline does not include the ``void'' class proposals $\mathcal{P}_{void}$ in network training; Void-background takes $\mathcal{P}_{void}$ as background; Void-suppression alternatively utilizes $\mathcal{P}_{void}$ to suppress the \emph{known} class classifiers\footnote{We empirically find that suppressing the background as well deteriorates the recognition of \textit{known}\xspace classes.}; Void-train treats all of $\mathcal{P}_{void}$ as the same and adds a \emph{void} class classifier during training; the EOPSN method can be seen as an enhanced version of Void-train and builds multiple representative exemplars from $\mathcal{P}_{void}$ through $k$-means clustering. During testing, proposals are predicted as the \textit{unknown}\xspace class only when they are rejected by the \textit{known}\xspace classes under a pre-defined confidence threshold (see the sketch below). Void-train and EOPSN further require the proposals to be predicted as the ``void'' class or the exemplar-based classes. The bottom row of Figure~\ref{fig:baseline_decision_boundary}(b) visualizes the \textit{unknown}\xspace class decision field of these methods, and their \textit{unknown}\xspace class recognition quality is presented in Table~\ref{tab:motivation}. We can find that neither Void-ignorance nor Void-background produces a reasonable \textit{unknown}\xspace class recognition result.
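To make this rejection-based inference concrete, the following minimal sketch (our own PyTorch-style illustration; the threshold value and tensor names are hypothetical) labels a proposal \textit{unknown}\xspace only when every \textit{known}\xspace class rejects it:
\begin{verbatim}
import torch

def baseline_unknown_decision(known_logits, tau=0.5):
    # known_logits: [N, K+1] classification scores over K known
    # "thing" classes plus background, one row per proposal
    probs = known_logits.softmax(dim=-1)
    known_conf = probs[:, :-1].max(dim=-1).values
    # a proposal counts as "unknown" only when it is rejected by
    # every known class under the pre-defined confidence threshold
    return known_conf < tau
\end{verbatim}
The dual decision process introduced later in Sec.~\ref{sec:objhead} augments exactly this single-head rule with a second, class-agnostic test.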
Although Void-suppression and Void-train show similar performance, i.e., relatively high recall and low precision, the underlying reasons may differ: Void-suppression lacks the ability to distinguish \textit{unknown}\xspace classes from the background, while Void-train overfits to the training set because of the noisy supervision from $\mathcal{P}_{void}$. EOPSN greatly improves the precision but heavily hurts the recognition recall, which means the exemplars obtained from the proposals $\mathcal{P}_{void}$ are not representative enough.
\input{figure/fig_main}
\section{Our Approach}\label{sec:our_approach}
In this section, we first present the necessity of constructing a two-stage decision structure for the OPS task. Then, we further propose a pseudo-labeling method to enhance the generalization ability of \textit{unknown}\xspace class recognition.
\subsection{Dual Decision Structure for the OPS Task}\label{sec:objhead}
Based on the analysis in Sec.~\ref{sec:bg}, we believe that \textit{unknown}\xspace classes cannot be well modeled at the training phase without being aware of what kinds of \textit{unknown}\xspace classes will appear during testing. Therefore, neither Void-train nor EOPSN may be a promising direction for solving the OPS problem, and the empirical results on \textit{unseen}\xspace classes\footnote{\textit{unseen}\xspace means the corresponding thing never appears in the training images. Sec.~\ref{sec:exp_detail} gives a detailed definition.} in Table~\ref{tab:main_3split} confirm our conclusion. However, the other OPS methods can only rely on the \textit{known}\xspace class classifier when making decisions on \textit{unknown}\xspace classes, and the empirical results show that such a decision procedure does not enable them to achieve satisfactory recognition performance for unknown classes. Thus, we build up a dual decision process for the effective recognition of \textit{unknown}\xspace classes. Following existing OPS methods, our structure is also adapted from the Panoptic FPN framework~\cite{kirillov2019panoptic}, and the core structure is presented in Figure~\ref{fig:main_structure}(a). Specifically, for a given image, we first use the ResNet50 network and a feature pyramid network to extract multi-scale feature representations. Then a region proposal network is used to generate class-agnostic proposals, and their features can be obtained through the RoI align module. Given the ground truth segmentation annotations at the training stage, these proposals can be assigned labels according to their positional relationship with the annotations. For example, a proposal will be labeled as a \textit{known}\xspace \textit{thing}\xspace class $\mathcal{C}_i^{th}\in\mathcal{C}^{th}$ when it has a large overlap with a \textit{known}\xspace \textit{thing}\xspace class instance. Similarly, the ``void'' areas are also utilized for defining the ``void'' class proposals $\mathcal{P}_{\emph{void}}$. Other proposals are labeled as background class samples $\mathcal{P}_{bg}$. In order to identify \textit{known}\xspace classes, the classification head is supervised with these proposals
\begin{equation}\label{eq:cls_loss}
\min - \frac{1}{N_{\mathcal{P}_{\overline{\emph{void}}}}} \sum_{i\in\{\mathcal{C}^{th}, bg\}} \sum_{k=1}^{N_{\mathcal{P}_i}}\log \frac{\exp\bigl (w_i^T f(\mathcal{P}_i^k) \bigr)}{\sum_{j\in \{\mathcal{C}^{th}, bg\}}\exp\bigl(w_j^T f(\mathcal{P}_i^k)\bigr)}
\end{equation}
where $w$ denotes the weights of the classification head.
$N_{\mathcal{P}_{\overline{\emph{void}}}}$ is the number of proposals except those belonging to the ``void'' class. $N_{\mathcal{P}_i}$ is the number of proposals in a specific \textit{thing}\xspace class. In order to separate \textit{known}\xspace and \textit{unknown}\xspace classes effectively, we follow the Void-suppression baseline and suppress the \emph{known} class classifiers with the ``void'' class proposals
\begin{equation}\label{eq:void_supp}
\min - \frac{1}{N_{\mathcal{P}_{void}}} \sum_{i=1}^{N_{\mathcal{P}_{void}}} \sum_{k\in \mathcal{C}^{th}}\log \Bigl(1 - \frac{\exp\bigl (w_k^T f(\mathcal{P}_{void}^i) \bigr)}{\sum_{j\in\{\mathcal{C}^{th}, bg\}}\exp\bigl(w_j^T f(\mathcal{P}_{void}^i)\bigr)} \Bigr),
\end{equation}
where $N_{\mathcal{P}_{\emph{void}}}$ denotes the number of ``void'' class proposals. Since Eqs.~(\ref{eq:cls_loss}) and (\ref{eq:void_supp}) can only improve the discriminative ability for \textit{known}\xspace classes, \textit{unknown}\xspace classes may still be mixed with the background. In order to mitigate this drawback, we introduce a class-agnostic object prediction head (a.k.a. the objectiveness head) parallel to the \textit{known}\xspace class classification head and optimize it as follows
\begin{equation}\label{eq:obj_loss}
\min \frac{-1}{N_{\mathcal{P}_{\overline{\emph{void}}}}} \biggl( \sum_{i\in\mathcal{C}^{th}} \sum_{k=1}^{N_{\mathcal{P}_i}}\log \frac{\exp\bigl (\theta^T f(\mathcal{P}_i^k) \bigr)}{1 + \exp\bigl (\theta^T f(\mathcal{P}_i^k) \bigr)} + \sum_{l=1}^{N_{\mathcal{P}_{bg}}} \log \frac{1}{1 + \exp{\bigl(\theta^T f(\mathcal{P}_{bg}^l)\bigr)}} \biggr )
\end{equation}
where $\theta$ is the weight of the objectiveness head and $N_{\mathcal{P}_{bg}}$ is the number of background proposals. At the testing stage, the recognition of the \textit{unknown}\xspace class is made in a dual decision process based on the predictions of both the \textit{known}\xspace class classification head and the class-agnostic object prediction head, i.e., only proposals that are simultaneously rejected by the \textit{known}\xspace class classification head and accepted by the objectiveness head are predicted as the \textit{unknown}\xspace class. Empirical results in Table~\ref{tab:main_res} show that this dual decision process significantly boosts the \textit{unknown}\xspace class recognition performance in all kinds of OPS settings.
\smallskip \noindent{\textbf{Rationale of design:}} The key feature of the above design is that we treat all known class proposals as training samples of \textit{a single class-agnostic} ``object'' class. In contrast, the methods described in Figure \ref{fig:baseline_decision_boundary} treat each class separately. The class-agnostic classification head encourages the network to identify patterns that are shared across classes rather than focusing on (known-)class-specific patterns. The former can generalize well to unseen things, while the latter may overfit to things only seen at the training stage.
\subsection{Improve Object Recognition Generalization with Pseudo-labeling} \label{sec:pl_void}
Currently, the newly added class-agnostic object prediction head is only optimized on proposals belonging to the \textit{known}\xspace \textit{thing}\xspace classes or the background, and the ``void'' class proposals\footnote{We take any connected ``void'' area in the ground truth of training images as a ``void'' class proposal.} are not fully utilized. Before describing how we exploit them, we pause to make the dual decision rule of Sec.~\ref{sec:objhead} concrete with the sketch below.
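The following is our own illustrative pseudo-implementation (the thresholds and the tensor layout are hypothetical assumptions, not the authors' released code):
\begin{verbatim}
import torch

def dual_decision(known_logits, obj_logit, tau=0.5, obj_tau=0.5):
    # known_logits: [N, K+1] known-class/background scores from the
    #               classification head
    # obj_logit:    [N] class-agnostic scores from the objectiveness head
    probs = known_logits.softmax(dim=-1)
    rejected_by_known = probs[:, :-1].max(dim=-1).values < tau
    accepted_as_object = obj_logit.sigmoid() > obj_tau
    # predict "unknown" only when both heads agree
    return rejected_by_known & accepted_as_object
\end{verbatim}
Only the conjunction of the two tests marks a proposal as \textit{unknown}\xspace, which is exactly the dual decision described above.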
Since the ``void'' class proposals may contain many novel objects that do not belong to the annotated \textit{known}\xspace \textit{thing}\xspace classes, we assume that properly exploiting the ``void'' class proposals can be helpful for the recognition generalization of the objectiveness head. One straightforward way is to directly take all the ``void'' class proposals as potential \textit{unknown}\xspace ones to supervise the objectiveness head. However, the results in Figure~\ref{fig:abl_confid} show that this strategy heavily deteriorates the recognition quality. This may be because the \emph{void} class proposals are not precise and contain much noise, which makes them unsuitable for direct exploitation. Therefore, we propose to use the pseudo-labeling technique to filter out invalid ``void'' class proposals. Since the newly added objectiveness head is designed in a class-agnostic fashion, the quality of the ``void'' class proposals can be predicted by the up-to-date objectiveness head, and we can select the highly confident ones to further supervise the objectiveness head
\begin{equation}\label{eq:obj_loss_online_pl}
\min \ \frac{-1}{N_{\mathcal{P}_{void}}} \sum_{i=1}^{N_{\mathcal{P}_{void}}} \mathbbm{1}{\biggl(\frac{\exp\bigl(\theta^T f(\mathcal{P}_{void}^i)\bigr)}{1 + \exp\bigl(\theta^T f(\mathcal{P}_{void}^i)\bigr)} \geq \delta\biggr)} \log \frac{\exp\bigl(\theta^T f(\mathcal{P}_{void}^i)\bigr)}{1 + \exp\bigl(\theta^T f(\mathcal{P}_{void}^i)\bigr)}
\end{equation}
where $\delta$ is the confidence threshold.
\input{table/tab_main_results}
\input{table/tab_main_3split}
\section{Experimental Results}\label{sec:exp_exp}
In this section, we conduct experiments to evaluate the proposed approach and existing OPS methods on the open-set panoptic segmentation task.
\subsection{Experimental Details}\label{sec:exp_detail}
To make a fair comparison, we directly build our experiments on the released codebase\footnote{\url{https://github.com/jd730/EOPSN.git}}. The experimental details are as follows.
\smallskip \noindent \textbf{Datasets}: Following the protocol of \cite{hwang2021exemplar}, all experiments are conducted on the MS-COCO 2017 dataset, whose default annotations consist of 80 \textit{thing}\xspace classes and 53 \textit{stuff}\xspace classes. \cite{hwang2021exemplar} manually removes a subset of the \textit{known}\xspace \textit{thing}\xspace classes (i.e., $K$\% of the 80 classes) from the training dataset and takes them as \textit{unknown}\xspace classes for evaluating the open-set task (\textit{stuff}\xspace classes are all kept). Three \textit{known}\xspace-\textit{unknown}\xspace splits of $K$ are considered: \textbf{5\%}, \textbf{10\%} and \textbf{20\%}. In order to evaluate the object recognition generalization ability of OPS methods, we further construct a more realistic OPS setting named \textbf{\emph{zero-shot}}, which is built upon the 5\% split setting mentioned above and further removes training images that contain instances belonging to the 20\% tail \textit{thing}\xspace classes of MS-COCO. These classes are \{hair drier, toaster, parking meter, bear, scissors, microwave, fire hydrant, toothbrush, stop sign, mouse, refrigerator, snowboard, frisbee, keyboard, hot dog, baseball bat\}. To distinguish them from the \textit{unknown}\xspace classes, we call these classes \textbf{\textit{unseen}\xspace classes}.
\smallskip \noindent \textbf{Methods}: Two strong baselines and the state-of-the-art OPS method are included for comparison, i.e., Void-train, Void-suppression and EOPSN.
Meanwhile, a Panoptic FPN trained on the full 80 \textit{thing}\xspace classes is also reported as a reference baseline (denoted as supervised).
\smallskip \noindent \textbf{Evaluation Metric}: The standard panoptic segmentation metrics (i.e., PQ, SQ, RQ) are reported for \textit{known}\xspace, \textit{unknown}\xspace and \emph{unseen} classes (see detailed formulations in the appendix).
\input{figure/fig_exp_k20}
\subsection{Results on \textit{known}\xspace-\textit{unknown}\xspace Setting}\label{sec:exp_kn_unk}
Table~\ref{tab:main_res} shows the quantitative results of the compared methods. It is clear that our proposed method significantly improves the panoptic quality of \textit{unknown}\xspace class objects over the Void-suppression baseline across all splits. Meanwhile, compared with the SOTA method EOPSN, our approach excels in both recall and precision of \textit{unknown}\xspace object recognition and therefore achieves much better PQ values. Figure~\ref{fig:exp_k20} illustrates the qualitative results. We find that our approach can successfully detect more \textit{unknown}\xspace class objects and generate more precise instance masks than both the Void-suppression baseline and the EOPSN method.
\input{figure/fig_tab_abl}
\subsection{Results on \emph{zero-shot} Setting}\label{sec:exp_zeroshot}
Our approach has been verified to be effective in the \textit{known}\xspace-\textit{unknown}\xspace setting in Sec.~\ref{sec:exp_kn_unk}; we also want to know its novel object recognition ability in the \emph{zero-shot} setting. Table~\ref{tab:main_3split} shows that the proposed method is superior to the compared ones on both \textit{unknown}\xspace class and \textit{unseen}\xspace class objects. It is interesting that EOPSN performs well on the \textit{unknown}\xspace classes but almost fails on the \emph{unseen} classes. This may be due to the fact that the exemplars obtained in EOPSN are completely derived from the training set and cannot generalize to unseen class objects. Qualitative results for the \emph{zero-shot} setting are presented in the appendix due to the space limit; our approach can always detect salient objects in the image and produces the best overall instance masks.
\subsection{Ablation Study}\label{sec:exp_abl_study}
We are interested in ablating our approach from the following perspectives:
\smallskip \noindent \textbf{Effect of each component in our method:} Our approach is mainly composed of two components (i.e., the objectiveness head and pseudo-labeling), and Table~\ref{tab:abl_void_obj_plus} shows the performance contribution of each component in two settings. It is obvious that simply adding the objectiveness head significantly improves the unknown segmentation performance, and incorporating the pseudo-labeling trick further boosts the overall performance.
\smallskip \noindent \textbf{Sensitivity analysis:} Our method has only one hyper-parameter, i.e., the confidence threshold $\delta$ in the pseudo-labeling mechanism. As shown in Figure~\ref{fig:abl_confid}, the performance of our approach is stable when the confidence threshold falls within $\delta \in [0.88, 0.99]$.
\section{Conclusion}
Open-set panoptic segmentation (OPS) is a newly proposed research task which aims to perform segmentation for both \textit{known}\xspace classes and \textit{unknown}\xspace classes. In order to address the challenges of OPS, we propose a dual decision mechanism for \textit{unknown}\xspace class recognition.
We implement this mechanism by coupling a \textit{known}\xspace class classification head with a class-agnostic object prediction head and making them cooperate for the final \textit{unknown}\xspace class prediction. To further improve the recognition generalization ability of the objectiveness head, we use the pseudo-labeling technique to boost the performance of our approach. Extensive experimental results verify the effectiveness of the proposed approach on various kinds of OPS tasks.
\clearpage
\section{Formulation Details of Evaluation Metric}
Three kinds of panoptic segmentation metrics are normally considered in the literature, i.e., panoptic quality (PQ), segmentation quality (SQ) and recognition quality (RQ):
\begin{equation*}
\text{Panoptic Quality (PQ)} = \underbrace{\frac{\sum_{(p, g) \in \mathit{TP}} \text{IoU}(p, g)}{\vphantom{\frac{1}{2}}|\mathit{TP}|}}_{\text{segmentation quality (SQ)}} \cdot \underbrace{\frac{|\mathit{TP}|}{|\mathit{TP}| + \frac{1}{2} |\mathit{FP}| + \frac{1}{2} |\mathit{FN}|}}_{\text{recognition quality (RQ)}}
\end{equation*}
where $\text{IoU}(p, g)$ denotes the intersection over union of a predicted segment $p$ and a ground truth segment $g$. $\mathit{TP}$/$\mathit{FP}$/$\mathit{FN}$ denote the sets of true positives, false positives and false negatives, respectively. For the open-set panoptic segmentation task, we use these three metrics to evaluate the performance of \textit{known}\xspace, \textit{unknown}\xspace and \textit{unseen}\xspace classes.
\section{Details of Dataset Splits}
In our paper, we have conducted experiments on two kinds of open-set panoptic segmentation settings, i.e., the \textit{known}\xspace-\textit{unknown}\xspace setting and the \emph{zero-shot} setting. For the \textit{known}\xspace-\textit{unknown}\xspace setting, three kinds of splits are evaluated following the previous work EOPSN~\cite{hwang2021exemplar}, i.e., $K$\% of classes are removed from the 80 \textit{thing}\xspace classes of MS-COCO 2017 as \textit{unknown}\xspace classes, with $K=\{5, 10, 20\}$. We list the \textit{unknown}\xspace classes as follows (the classes are removed cumulatively for these three settings respectively)
\begin{itemize}
\item car, cow, pizza, toilet
\item boat, tie, zebra, stop sign
\item dining table, banana, bicycle, cake, sink, cat, keyboard, bear
\end{itemize}
For the more realistic \emph{zero-shot} setting, we build upon the 5\% split \textit{known}\xspace-\textit{unknown}\xspace setting mentioned above and further remove training images that contain instances belonging to the 20\% tail \textit{thing}\xspace classes of MS-COCO. The removed tail classes have already been listed in the main paper.
\section{Qualitative Results of \emph{zero-shot} Setting}
In the main paper, we have presented the superior quantitative results of our approach on the \emph{zero-shot} setting. In the appendix, we further visualize the qualitative results of our approach on the \emph{zero-shot} setting. As Figure~\ref{fig:exp_3split} shows, our approach can not only detect more unknown class objects but also generate more precise instance masks than both the Void-suppression baseline and the EOPSN method.
\input{figure/fig_exp_3split}
\clearpage
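For completeness, the metric formulation in the appendix above translates directly into code. The following is a minimal single-class sketch of our own; it assumes the usual one-to-one segment matching (IoU above 0.5) has already been computed.
\begin{verbatim}
def panoptic_quality(tp_ious, n_fp, n_fn):
    # tp_ious: IoU(p, g) for each matched (true-positive) segment pair
    n_tp = len(tp_ious)
    if n_tp == 0:
        return 0.0, 0.0, 0.0  # PQ, SQ, RQ all vanish without matches
    sq = sum(tp_ious) / n_tp                          # segmentation quality
    rq = n_tp / (n_tp + 0.5 * n_fp + 0.5 * n_fn)      # recognition quality
    return sq * rq, sq, rq                            # PQ = SQ * RQ

# e.g. panoptic_quality([0.9, 0.75, 0.8], n_fp=1, n_fn=2)
# -> (about 0.544, about 0.817, about 0.667)
\end{verbatim}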
\vspace{-20pt}
\section*{\centering\Large Abstract}
The Internet of Things (IoT) has many applications in our daily lives. One aspect in particular is how the IoT is making a substantial impact on education and learning as we move into the `Smart Educational' era. This article explores how the IoT continues to transform the education landscape, from classrooms and assessments to culture and attitudes. Smart Education is a pivotal tool in the fight to meet the educational challenges of tomorrow. IoT tools are used more and more often in education, aiming to increase student engagement, satisfaction and quality of learning, and the IoT will reshape student culture and habits beyond belief. Smart Education is more than just using technologies: it involves a whole range of factors, from educational management through to pedagogical techniques and their effectiveness.
Educators in the 21st century now have access to gamification, smart devices, data management, and immersive technologies, enabling academics to gather a variety of information from students, ranging from monitoring student engagement to adapting learning strategies for improved learning effectiveness. Through Smart Education, educators will be able to better monitor the needs of individual students and adjust their learning load correspondingly (i.e., an optimal learning environment and workload to support students and prevent them from failing). One of the biggest challenges for educators is how new technologies will address growing problems (engagement and achievement). The scale and pace of change (the technological IoT era) is unprecedented: typically, the jobs students are trained for today will not be here tomorrow. Education is not just about knowledge acquisition, but also about digital skills, adaptability and creativity (essential if students are to thrive in the new world).
\vspace{10pt}
\textbf{Keywords: internet of things, education, connectivity, smart worlds, smart classrooms, learning, digital era} \\
\newpage
\begin{figure*}
\centering
\scalebox{0.7}
{
\begin{tikzpicture}[grow cyclic, text width=2cm, align=flush center, every node/.style=concept, concept color=orange!40, level 1/.style={level distance=4cm,sibling angle=60}, level 2/.style={level distance=4cm,sibling angle=45}]
\node{Internet of Things (IoT)}
child [concept color=blue!30] { node {Smart Cities } child { node {Smart Homes} } }
child [concept color=teal!40] { node { Smart Roads } child { node {Smart Cars}} }
child [concept color=purple!10] { node {Smart Wearable} child { node {Smart Watches}} child { node {Smart Clothes}} child { node {Smart Glasses}} child { node {Smart Hearing Aids}} }
child [concept color=yellow!30] { node { Smart Agriculture } child { node {Smart Farming}} }
child [concept color=red!50] { node [concept,circular glow={fill=orange!50},scale=1.2] {\textbf{Smart Education}} child { node {Smart Classrooms}} child { node {Smart Campuses}} }
child [concept color=green!20] { node {Smart Hospitals} child { node {Smart Instruments}} child { node {Smart Doctors (Avatars)}} };
\end{tikzpicture}
}
\caption{Smart IoT World -- as the integration and connection of devices continues to grow, we head towards an ever more IoT-driven ``smart world'', with smart devices in every aspect of life, from the home to the hospital, not to mention education.
}
\end{figure*}
\section{Introduction}
\paragraph{Smart World} The Internet of Things (IoT) operates within a rapidly changing industry built on innovation and cutting-edge technologies \cite{bhayani2016internet,zhu2015green}. These technological advancements drive smart environments (or `pervasive computing') which permeate many spaces of our daily lives. Smart Hospitals \cite{zhang2018connecting}, Smart Farming \cite{kamilaris2016agri} and Smart Cities \cite{zanella2014internet} (e.g., airports, hospitals or university campuses) are already equipped with a mass of connected devices (of various types and complexities). One particular area that has taken hold in recent years is the envisioned `Smart University' and `Smart Education', which use the opportunities provided by pervasive computing technologies to benefit learning (i.e., students and staff) \cite{alvarez2017smart}. In such a setting, smart technologies in learning environments are able to address current and future pedagogical challenges, e.g., engagement, effective communication of complex ideas, cost effectiveness and accessibility for disabilities: `smartness' centered on people (end users). A particular concern is how to incorporate IoT technologies into a learning environment. This involves devising new concepts for making software applications and services more aware of their learners' needs (engaging and adapting as required). Questions this article addresses:
\begin{itemize}
\item Do we need to change from instructors to coaches?
\item How does the IoT complement traditional educational values?
\item What are the benefits to students and staff?
\item What is the wider social and moral impact of the IoT? (are people becoming more isolated?)
\item What are the hidden dangers and opportunities? (data privacy and security)
\end{itemize}
\paragraph{Contribution} The key contributions of this article are: (1) a review of the IoT with respect to education; (2) an outline of the new possibilities and associated challenges on the horizon; (3) a discussion of the impact on traditional educational paradigms; and (4) an assessment of where the internet of education is going (today and tomorrow) based upon trends and patterns in the literature.
\begin{figure}
\input{noarticlesgraph.cls}
\caption{A coarse guide to the number of articles in the area of the internet of things. The plots show the number of new articles with the keywords in their title published over the past few decades (from Google Scholar, 29/08/2019).}
\label{fig:plotpublications}
\end{figure}
\section{Education and Technology}
\paragraph{Why is Education Changing?} Education is changing meaningfully as society and the world evolve (moving into a new era). This influences how people learn and interact with the world around them (and in the classroom), and it shapes culture, attitudes and values. In the information (digital) age, learners actively take part in the learning, with new tools, new ways of thinking and access to a plethora of information at the click of a button. The instructor no longer holds all the `knowledge'; instructors are coaches (guiding students on how they should learn). Students must acquire the skills and knowledge that make them useful for tomorrow's world (i.e., working with fast-changing technologies): a shift in how people learn \cite{hanna2000higher}. The world has changed and continues to change: as we enter the fourth industrial revolution and the rise of the digital era, we face increasing economic uncertainty alongside global political and social instabilities.
\paragraph{Well-being and Comfort} Education is more successful when students are emotionally, socially and spiritually happy. Well-being and high academic achievement go hand-in-hand \cite{el2010health}. Supporting students is about more than just making the learning material easily available (environment and experience are also important). A person must see self-worth in themselves to be successful. A strong sense of self-worth leads to a positive attitude and is reflected in the student's learning and engagement. This self-worth is achieved through multiple factors, such as belonging, well-being and even physical aspects (healthy eating and exercise). Universities support this through opportunities such as social events, group work, activities and an open, safe working environment.
\paragraph{The Student is `King'} Higher and further educational institutions are under pressure to transform the learning experience. Student fees and flawed ranking systems mean universities are becoming more competitive, with institutions split between research and teaching focuses. Several trends drive this transformation: innovation leading to vocational and flexible opportunities (industry and employability factors); explosive growth in educational technologies (social, cloud, mobile and video technologies); growing expectations for open interfaces, open data and interoperability; and the increasing use of personal devices with direct access to third-party solutions.
\paragraph{Attendance and IoT} At the heart of the idea is that if students attend, they are more likely to succeed. However, research indicates that the issue of non-attendance is more complex \cite{rodgers2002encouraging}. A renewed and innovative approach to supporting students is required. Simply tracking students, or even fining or punishing students who do not attend, is not effective. While these approaches may offer some short-term improvement, they do not resolve the issues of poor engagement and low student achievement. At the core of the problem, we must ask ourselves what the issues are, such as:
\begin{itemize}
\item What motivates students to learn?
\item How is the students' learning connected to their lives (and communities)?
\item How do students learn best?
\item What health, family, financial, or personal problems are students struggling with (distracting them from their academic focus)?
\end{itemize}
No doubt about it, when a student is truly engaged and focused on their studies, they are unstoppable: they go above and beyond in their duties and work, influencing their peers, family and instructors. We must ask how technologies such as the IoT are able to achieve this and support each individual student's needs (e.g., remote access, flexible learning).
\figuremacroW
{smartclassroom}
{Smart Classroom}
{Technologies are used more and more in the classroom to facilitate flexible learning, communicating understanding in engaging and interactive ways.}
{1.0}
\section{Smart Education}
\paragraph{Unlimited Knowledge (Internet)} Almost 100\% of universities across the world have access to the internet, an almost unlimited wealth of information and knowledge. The internet is arguably one of the most successful and useful tools mankind has ever created, and its benefits in learning and education are limitless. Students are able to search for information on any topic they are interested in (or find hard to talk about), and being connected to social media helps them make informed decisions (advice and support).
Online access means a great deal of flexibility and learning choice, able to connect people `regardless of the distance'. For example, Skype has connected more than 300 million people around the world (2.6 million Skype calls a year) \cite{wuttidittachotti2015qoe}. This is a meaningful shift influencing every area of society, including education, leading to many internet-related advancements (e.g., the IoT) that impact policies, public opinion, culture and the economy. The explosion of IoT devices over the past thirty years has created a technological revolution. The IoT is a necessity for education, employability, finance and more. The growing use of terms such as digital divide, digital literacy and digital inclusion reflects the realization that technologies such as the IoT have become irreducible components of modern life, with a significant impact on every aspect of our lives, including the educational sector.
\begin{itemize}
\item Instructor: a person who teaches a subject or skill; someone who instructs people.
\item Coach: a person who provides formal, professional coaching to improve a person's effectiveness and performance, and to help them achieve their full potential.
\end{itemize}
\paragraph{Current University Technologies} Technologies are an efficient way to manage and engage students at their own level of comfort. Educational institutions are already using technologies increasingly (both directly and indirectly) to support students. For example, a short list of current smart technologies in education includes:
\begin{multicols}{2}
\begin{itemize}
\item Cameras and video
\item Interactive whiteboards
\item Tablets and eBooks
\item Bus tracking (transport)
\item Student ID cards (RFID)
\item Airplay and smart televisions
\item 3D printers
\item Smart podiums
\item WiFi everywhere (on campus)
\item Electric lighting
\item Attendance tracking
\item Wireless door locks (and booking systems)
\item Temperature sensors
\end{itemize}
\end{multicols}
\figuremacroW
{iotandeducation}
{Technologies Enhance and Support Education}
{ The IoT impacts education on numerous levels as people move towards a `digital culture'. Students (and educators) are becoming more confident about using digital technologies, leading to new pedagogical approaches and associated tools/infrastructure. }
{0.6}
\paragraph{Digital Assessment - Barriers and Benefits} The technologies are not just about communicating with and teaching students; they are also impacting how we `assess' students. Currently, there is a clear move towards digital assessments (e.g., e-exams), which offer a streamlined system (for delivery and marking). This is not just a matter of converting paper tests to e-versions. Online assessments and e-assessments are being used more and more in education, and the forms of e-assessment range from multiple choice to drawing a digital diagram. Assessments can be marked automatically (by scripts) or by humans (manually). The applications of digital online assessment are limitless and include a vast range of types (from portfolios to gamification). Interestingly, the technologies and assessments are constantly changing and evolving. Importantly, a big benefit of online assessment is that it allows greater transparency, security, flexibility and efficiency. A barrier to digital assessment is that existing paper exams and approaches cannot merely be transferred over to a digital form.
Peer review is another example: learners are given the power to critique their peers (building a community and giving feedback and help). Ultimately though, no one-size-fits-all solution exists for digital assessments, as a biology student and an art student would have different criteria, with bespoke solutions often being required (to avoid hindering or watering down the assessment criteria). The educational system is changing; it is no longer simply providing content and assessing that the student has learned and understood the material:
\begin{enumerate}
\item Student is a consumer
\item Students want experience over knowledge
\item Education is changing
\item Engagement is interactive
\end{enumerate}
To achieve this, we must embrace new ways of thinking, which include innovative pedagogical practices and immersive and engaging technologies (e.g., IoT, Virtual Reality and Augmented Reality).
\paragraph{Train for Tomorrow} Many of the jobs we train children and students for today may not exist tomorrow (or the jobs they will do may not exist today). The rapid advancement and widespread application of information technologies provide unpredictable opportunities. How will we ensure graduates are trained and able to meet the challenges ahead? The educational sector needs to ensure students are equipped with the necessary skillsets, which means embracing dynamic strategies for learning and teaching. The IoT is helping the educational sector meet these challenges head-on (innovations for enhancing the quality of learning which would be difficult for traditional classroom-based approaches). This opens the door to new innovative pedagogical practices for interactions between things and humans, enabling the realization of smart educational institutions that enhance and improve learning to meet tomorrow's jobs.
\figuremacroW
{iothistory}
{IoT Timeline in Education}
{ Solutions the Internet of Things brings to education. (a) \cite{Internet_History}, (b) \cite{History_of_IoT_A_Timeline_of_Development}, (c) \cite{IoT_In_Education}, (d) \cite{History_of_SMART_Board}. }
{1.0}
\section{IoT Applications in the Education Sector}
Nowadays, no one is able to use and enjoy the benefits of electronic devices and information technology unless connected via a network. The IoT is playing a part in the rapid change of our society, especially in the education sector; anyone who denies this is not living in the real world. The IoT is applied in many sectors and industries (e.g., enterprises, health care, retail, government, teaching, communication) \cite{Huansheng2012,yan2010application,majdub,mse238blog.stanford.edu}.
\paragraph{IoT in Teaching and Interactive Learning} The major reason for implementing the IoT in the education sector is to enhance the learning environment (e.g., the IoT can connect academic sectors all over the world to provide a deeper learning experience, helping different types of students gain an easy, rapid and high-quality education) and to provide added value in reconsidering how the components of education are run. In the next few sections we touch on the most important IoT applications in the education sector.
\paragraph{Smart Poster Applications} The IoT has developed and improved poster boards, which are now used as multimedia labels. It is possible to easily create virtual posters that combine images, audio, video, text, and hyperlinks. The IoT allows users (e.g., instructors, students) to share such digital posters in high quality with others and to monitor student activities easily.
These digital posters can then be shared with classmates and instructors/teachers via email \cite{everythingconnected,majdub}.
\paragraph{Smart Board Applications} Such applications help teachers explain lessons more easily using efficient and interactive techniques, with the help of online presentations and videos. Students in the classroom are encouraged to use interactive games as a powerful platform, because web-based applications help to teach students more effectively. Smart technology allows its users, such as instructors/teachers and students, to browse the web and even edit video clips and share the contents interactively (see Figure \ref{fig:smartboard}).
\figuremacroW
{smartboard}
{Smartboard}
{Improves engagement and the interactive experience (vivid displays that are touch sensitive).}
{0.5}
\paragraph{Interactive Learning Applications} Today's learning is not limited to a combination of text and images: most textbooks are available online (loaded onto websites) and include additional materials such as videos, assessments and animations to support effective learning. All of this gives students a broader view when learning new topics, with better understanding and interaction with their teachers and peers, in classroom and distance learning alike. Additionally, such applications bring real-world problems into the classroom and allow students to find their own solutions.
\paragraph{Sensors and Smart Devices} Smartphones, tablets and motion sensors are among the most interesting applications. They are a real change to the teaching and learning field and can be considered powerful tools that allow students and teachers to create 3D graphics, use e-books that include videos and educational games, take notes, and learn new topics in the best possible way, making education more attractive than ever.
\paragraph{Multimedia Digital Libraries (e-books)} These provide a better way of learning, allowing users (e.g., teachers, students) to easily carry a library of hundreds of e-books with them, including graphics, 3D figures, animation and video. Smart mobile devices can contain hundreds of textbooks, in addition to homework and other related files, and thus eliminate the need for physical storage of books, which contributes to a richer experience and expands learning opportunities for students.
\paragraph{Sensors in the Classroom} Advanced temperature sensors allow schools to monitor conditions under any circumstances, which not only saves thousands in utility costs but also enhances learning capabilities. Hence, they have a significant impact on students' cognitive abilities, memory and attitudes, and on teachers' well-being. Additionally, such advanced sensor technology can help monitor all classrooms remotely from anywhere.
\section{Privacy and Data Protection}
The main challenge of the IoT in education is privacy and data protection. Once data has been captured (videos, images) or collected (text, numbers), it must be accessible and secured (protected). Owning and accessing sensitive data is a concern. Data privacy is an urgent priority in the education sector, exactly as in the finance and defence sectors \cite{abomhara2014security,ValidationDataIntegrity}. Privacy and data security is the process of protecting an education enterprise's information and the confidentiality of that information from illegal use and threats/intrusions, threats that may harm the enterprise economically and socially.
In fact, maintaining confidentiality and protecting information is only one aspect of security; specialists in digital data security believe that privacy and data protection consist of the following components \cite{Informationsecurityine-learning}.
\paragraph{Data confidentiality} This includes all necessary measures to prevent unauthorized access to confidential information (e.g., students' and staff's personal information, academic information, and the institution's financial status).
\begin{figure}
\centering
\input{securityimg.cls}
\caption{Security vs Integrity (importance of data security in the IoT context).}
\label{fig:integrity}
\end{figure}
\paragraph{Data integrity} In this aspect the concern is not keeping information confidential, but taking measures to protect the information from amendment or change (see Figure \ref{fig:integrity}). The important question is: how do we preserve data integrity?
\paragraph{How to preserve data integrity?}
\begin{itemize}
\item Invalid input: Data can be entered (supplied) by a known or unknown source (an end user, an application, a malicious user, etc.). Therefore, implementing error checks (e.g., checking for human errors when data is entered or transferred from one machine to another) is very important to increase data integrity.
\item Remove duplicate and redundant data: It is a big risk to use an old file rather than the up-to-date one. To stay on the safe side of such risks, remove duplicated files.
\item Back up data: It is important to prevent permanent data loss. Backups are critical in the event of a ransomware (malware) attack, and backup scheduling should follow best practices to ensure availability.
\item Access controls: Additionally, it is important to minimize the risk of unauthorized access: only authorized sources should be able to access sensitive data.
\end{itemize}
According to all the above, the risks must be addressed before implementing and relying on any technology that deals heavily with personal or academic student data (a small code sketch of such integrity checks follows).
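As a toy illustration of the checks listed above (the helper names are our own and are not tied to any particular educational platform), content hashing covers both tamper detection and duplicate removal:
\begin{verbatim}
import hashlib

def fingerprint(record: bytes) -> str:
    # a content digest stored alongside each record at ingestion time
    return hashlib.sha256(record).hexdigest()

def find_tampered(records, stored_digests):
    # data integrity: flag records whose current digest no longer matches
    return [i for i, (rec, dig) in enumerate(zip(records, stored_digests))
            if fingerprint(rec) != dig]

def deduplicate(records):
    # remove duplicate/redundant data, keeping first occurrences
    seen, unique = set(), []
    for rec in records:
        dig = fingerprint(rec)
        if dig not in seen:
            seen.add(dig)
            unique.append(rec)
    return unique
\end{verbatim}
The same digests can be recomputed after every backup cycle, so silent corruption (or a ransomware rewrite) is caught before the last good copy is rotated out.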
\section{IoT Educational Toolbox: Right Tool for the Right Job}
A number of issues and concerns need to be addressed with respect to the success of Smart Education, such as: is it really better than traditional approaches? This is an important area for debate, which is not often brought up in the literature as an imminent concern, but it is important. Students, instructors and employers need to be concerned with the heterogeneity and complexity of the technologies and the unprecedented impact they have on the educational system and society in general. A shallow or impaired education system based upon technologies may cause severe harm, both to educational institutions and to the economy. While concerns about technology for learning are normally placed on the actions devices take to accomplish their specified tasks, in education and learning the concern is ensuring that the device complements the learning experience (communicating information in an effective and meaningful way). Just as a regular TV set can be used to communicate educational programmes but also for entertainment and more, there are multiple instances of IoT applications where the educational environment could suffer due to unintended misbehavior of the device.
For example, internet-enabled devices that may malfunction (with consequences) include automated mobile devices that confuse rather than instruct students, an interactive classroom camera causing accidents due to a software malfunction, or a smart thermostat in a classroom causing overheating. There is no doubt that the IoT is a critical part of education and must be taken into account in the design process to help ensure success. Academia understands the challenges coming with the advent of the IoT and is taking steps to improve design and operational use. From an instructor's perspective, the IoT in the context of learning calls for establishing guidelines and cases for its successful and effective utilization. Important issues that we have not discussed in this article include certification, testing and validation, and the legal aspects of IoT devices in educational contexts.
\section{Conclusion and Discussion}
In an increasingly connected digital world (the smart generation), vast opportunities and benefits are available, transforming every aspect of our lives, especially education and learning (in extraordinary ways). This smart IoT age is a catalyst for change and will impact how we learn on a global level (influencing the economy and society), benefiting millions of people and leading to a wealth of new industries, marketplaces and jobs. While there are huge opportunities, we must also remember to tread cautiously; for instance, we do not know what the long-term, deeper emotional and psychological impacts of smart education will be.
\section*{Acknowledgements}
We want to thank the reviewers for taking the time out of their busy schedules to provide insightful and valuable comments to help improve the quality of this article.
\bibliographystyle{apa}
\let\oldthebibliography=\thebibliography
\let\endoldthebibliography=\endthebibliography
\renewenvironment{thebibliography}[1]{%
\begin{oldthebibliography}{#1}%
\setlength{\parskip}{0ex}%
\setlength{\itemsep}{0.5ex}%
}%
{%
\end{oldthebibliography}%
}
\section{Introduction}
Modeling the mechanisms of how nodes interact in networks is a relevant problem in many applications. In social networks, we observe a set of interactions between people, and one can use this information to cluster them into communities based on some notion of similarity~\cite{fortunato2010community}. Broadly speaking, the connections between users can be used to infer users' membership, and this in turn determines the likelihood that a pair of users interacts. Real networks are often sparse: people interact with a small number of individuals compared to the large set of possible interactions that they could in principle explore. Traditionally, models for community detection in networks treat an existing link as a positive endorsement between individuals: if two people are friends in a social network, this means they like each other. In assortative communities, where similar nodes are more likely to be in the same group \cite{newman2018networks,fortunato2016community}, this encourages the algorithm to put these two nodes into the same community. On the contrary, a non-existing link influences the model to place them into different communities, as if the two non-interacting individuals were not compatible. However, many of these non-existing links (especially in large-scale networks) are absent because the individuals are not aware of each other, rather than because they are not interested in interacting. This is a general problem in many network data sets: we know that interacting nodes have a high affinity, but we cannot conclude the contrary about non-interacting nodes.\\
This problem has been explored in the context of recommender systems \cite{liang2016modeling,yang2018unbiased,wang2016learning,chuklin2015click}, where it is crucial to learn which items that a user did not consume could be of interest. In this context, items' exposure is often modeled by means of propensity scores or selection biases assigned to user-item pairs that increase the probability of rare consumption events. \\
It is not clear how to adapt these techniques to the case of networks of interacting individuals, hence the investigation of this problem in the context of networks is still missing. Existing approaches partially account for this by giving more weight to existing links, as in probabilistic generative models that use a Poisson distribution for modeling the network adjacency matrix~\cite{de2017community,ball2011efficient,zhao2012consistency,schein2016bayesian}. These methods are effective, but may be missing important information contained in non-existing links.
\section{Community detection with exposure}
We address this problem by considering a probabilistic formulation that assigns to pairs of nodes probabilities of being exposed or not. These are then integrated into standard probabilistic approaches for generative networks with communities. For this, as a reference model we consider \mbox{{\small \textsc{MultiTensor} }}\ \cite{de2017community}, as it is a flexible model that takes as input a variety of network structures (\textit{e.g.} directed or undirected networks, weighted or unweighted) and detects overlapping communities in a principled and scalable way.
\subsection{Representing exposure}\label{sec:rep-exp}
Consider an $N\times N$ \emph{observed} network adjacency matrix $\mathbf{A^\mathrm{(o)}}$, where $A^\mathrm{(o)}_{ij}\geq0$ is the weight of the interaction between nodes $i$ and $j$; this is the input data.
For instance, $A^\mathrm{(o)}_{ij}$ could be the number of times that $i$ and $j$ met or exchanged messages. If a link $A^\mathrm{(o)}_{ij}$ exists, this indicates an affinity between individuals $i$ and $j$, triggered by both individuals' inner preferences. If the link does not exist ($A^\mathrm{(o)}_{ij}=0$), one usually assumes that this indicates a lack of affinity between $i$ and $j$. However, the link might not exist simply because $i$ and $j$ never met. This is the case in social networks, where an \textit{ego} might follow an \textit{alter} because of personal preference, but this choice is subject to being exposed to the \textit{alter} in the first place. This suggests that the event of being exposed to someone influences the patterns of interactions observed in networks. We are interested in incorporating this notion of exposure when modeling network data, and in investigating how the results change. To represent this, we postulate the existence of a \emph{ground-truth adjacency matrix}, $\mathbf{A^\mathrm{(g)}}$, that indicates the affinity between nodes $i$ and $j$ regardless of whether the two nodes were exposed to each other (Fig.~\ref{fig:diagram}--left). In addition, we introduce a \emph{dilution matrix} $\mathbf{Z}$ (red crosses in Fig.~\ref{fig:diagram}--center), with values $Z_{ij}=0,1$ indicating whether nodes $i$ and $j$ were exposed ($Z_{ij}=1$) or not ($Z_{ij}=0$). The observed matrix is then the element-wise product of the ground-truth network and the dilution matrix,
\begin{equation}
\mathbf{A^\mathrm{(o)}} = \mathbf{A^\mathrm{(g)}} \otimes \mathbf{Z}\,,
\ee
where $\otimes$ indicates an element-by-element multiplication. A diagram of the resulting matrix is shown in Fig.~\ref{fig:diagram}--right. Through this representation, a zero-entry $A^\mathrm{(o)}_{ij}=0$ can be attributed to $A^\mathrm{(g)}_{ij}=0$ (lack of affinity), $Z_{ij}=0$ (lack of exposure) or both. Standard models for community detection do not account for exposure; therefore, they treat a zero-entry $A^\mathrm{(o)}_{ij}=0$ as a signal for non-affinity. We aim to measure both communities and exposure, given the observed data $A^\mathrm{(o)}_{ij}$. In other words, for a given node $i$, we would like to estimate its community membership, and for a given pair $(i,j)$ we want to estimate the probability that they were exposed to each other. For simplicity, we show derivations for the case of undirected networks, but similar ones apply to directed ones.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{diagram-exposure-cropped}
\caption{Diagram of the exposure mechanism. On the \textbf{left} we have the full graph ($\mathbf{A^\mathrm{(g)}}$). We then set to zero the probability of some connections through a mask $\mathbf{Z}$ (\textbf{center}), and reconstruct the true graph and communities based solely on the visible links, $\mathbf{A^\mathrm{(o)}}$ (\textbf{right}).}
\label{fig:diagram}
\end{figure}
\subsection{The ground truth adjacency matrix}
In our notation, we use $\theta$ to denote the latent variables affecting community detection, \textit{i.e.} determining the probability of observing an interaction between $i$ and $j$ given that they have been exposed. We will treat the case of symmetric edges, $A^\mathrm{(g)}_{ij}=A^\mathrm{(g)}_{ji}$, and provide an extension to asymmetric interactions in App.~\ref{app:asym}. Following the formalism of Ref.~\cite{de2017community}, we assign a $K$-dimensional hidden variable $u_i$ to every node $i$.
Since different communities may interact in different ways, we also introduce a $K\times K$ affinity matrix $w$, regulating the density of interactions between different groups. The latent variables related to the ground truth matrix are then $\theta=(u,w)$. We express the expected interaction between two nodes through a parameter \begin{equation}\label{eq:lambda} \lambda_{ij}=\sum_{k,q}^K u_{ik}u_{jq} w_{kq}\,, \end{equation} and extract the elements of $\mathbf{A^\mathrm{(g)}}$ from a Poisson distribution with mean $\lambda_{ij}$, \begin{equation} P(A^\mathrm{(g)}_{ij}|u_{i},u_{j},w)=\textrm{Pois} \bup{A^\mathrm{(g)}_{ij};\lambda_{ij}} = \frac{e^{-\lambda_{ij}}\lambda_{ij}^{A^\mathrm{(g)}_{ij}}}{A^\mathrm{(g)}_{ij}!}\,. \ee We then assume conditional independence between different pairs of edges given the latent variables $P(\mathbf{A^\mathrm{(g)}}|u,u,w)=\prod_{i<j}P(A^\mathrm{(g)}_{ij}|u_{i},u_{j},w)$, but this can be generalized to more complex dependencies \cite{safdari2021generative,contisciani2021community,safdari2022reciprocity}. We do not explore this here. \subsection{The observed adjacency matrix} The observed adjacency matrix depends on whether two nodes were exposed or not, through the matrix $\mathbf{Z}$. If $Z_{ij}=1$, the two nodes are exposed, and the edge comes from the ground truth matrix, \textit{i.e.} $P(A^\mathrm{(o)}_{ij}| Z_{ij}=1, \theta) = P(A^\mathrm{(g)}_{ij}|\theta) = \textrm{Pois}(A^\mathrm{(g)}_{ij}; \lambda_{ij})$. If $Z_{ij}=0$, then $A^\mathrm{(o)}_{ij}=0$ regardless of $\lambda_{ij}$. Therefore, the elements of $\mathbf{A^\mathrm{(o)}}$ are extracted from the distribution \begin{equation}\label{eqn:likelihood} P(A^\mathrm{(o)}_{ij}| Z_{ij}, \theta) = \textrm{Pois}(A^\mathrm{(o)}_{ij}; \lambda_{ij})^{Z_{ij}}\, \delta(A^\mathrm{(o)}_{ij})^{1-Z_{ij}} \,. \ee Since $Z_{ij}$ is binary, we assign it a Bernoulli prior with parameter $\mu_{ij}$, \begin{eqnarray}\label{eqn:PZ} P(\mathbf{Z}|\mu) = \prod_{i<j}P(Z_{ij}|\mu_{ij}) = \prod_{i<j}\bup{\mu_{ij}}^{Z_{ij}}\, \bup{1-\mu_{ij}}^{1-Z_{ij}}\,. \end{eqnarray} The parameter $\mu_{ij}$ will depend on some latent variable related to nodes $i$ and $j$. There are several possible choices for that. Here, we consider a simple setting: \begin{align} \label{eqn:prior} \mu_{ij}&=\mu_i\,\mu_j\,,\\ \mu_{i}&\in[0,1]\,, \end{align} This allows to keep the number of parameters small and has an easy interpretation. In fact, the parameter $\mu_{i}$ acts as the propensity of an individual to be exposed to others: the higher its value, the higher the probability that node $i$ will be exposed to other nodes. This way of modeling exposure only adds one more parameter per node, allowing for heterogeneous behaviors among users while keeping the model compressed. The full set of variables that need to be inferred consists of the $u$, the $w$ and the $\mu$ variables, which amounts to $NK+K^2+N$ parameters, which is one order of magnitude smaller than the $ N^2$ elements of $\mathbf{A^\mathrm{(o)}}$. \subsection{Inference and Expectation-Maximization} Given the data $\mathbf{A^\mathrm{(o)}}$, our goal is to first determine the values of the parameters $\theta$, which fixes the relationship between the hidden indicator $Z_{ij}$ and the data, and then to approximate $Z_{ij}$ given the estimated $\theta$. We perform this using statistical inference as follows. Consider the posterior distribution $P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}})$. 
Since the dilution $\mathbf{Z}$ is independent from the parameters $\theta$ and all the edges are considered conditionally independent given the parameters, Bayes' formula gives \begin{equation}\label{eqn:joint} P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}}) = \f{P(\mathbf{A^\mathrm{(o)}}|\mathbf{Z},\theta) P(\mathbf{Z} |\mu) P(\theta)}{P(\mathbf{A^\mathrm{(o)}})} \,. \ee Summing over all the possible indicators we have: \begin{equation} P(\theta| \mathbf{A^\mathrm{(o)}}) = \sum_{\mathbf{Z}}P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}}) = \prod_{i<j}^N\sum_{Z_{ij}=0,1} P(Z_{ij},\theta| \mathbf{A^\mathrm{(o)}}) \quad, \ee which is the quantity that we need to maximize to extract the optimal $\theta$. It is more convenient to maximize its logarithm, as the two maxima coincide. We use Jensen's inequality: \begin{equation}\label{eqn:jensen} \log P(\theta| \mathbf{A^\mathrm{(o)}}) = \log\sum_{\mathbf{Z}}P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}}) \geq \sum_{\mathbf{Z}} q(\mathbf{Z})\, \log \f{P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}})}{q(\mathbf{Z})}:= \mathcal{L}(q,\theta,\mu)\, \ee where $q(\mathbf{Z})$ is {any distribution satisfying $\sum_{\mathbf{Z}} q(\mathbf{Z})=1$, we refer to this as the variational distribution}. Inequality~(\ref{eqn:jensen}) is saturated when \begin{equation}\label{eqn:q} q(\mathbf{Z}) = \f{P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}})}{\displaystyle\sum_{\mathbf{Z}} P(\mathbf{Z},\theta| \mathbf{A^\mathrm{(o)}})}\,, \ee hence this choice of $q$ maximizes $\mathcal{L}(q,\theta,\mu)$ with respect to $q$. Further maximizing it with respect to $\theta$ gives us the optimal latent variables. This can be done in an iterative way using Expectation-Maximization (EM), alternating between maximizing with respect to $q$ using \Cref{eqn:q} and then maximizing $\mathcal{L}(q,\theta,\mu)$ with respect to $\theta$ and $\mu$. To obtain the updates for the parameters we need to derive the equations that maximize $\mathcal{L}(q,\theta,\mu)$ with respect to $\theta$ and $\mu$ and set these derivatives to zero. This leads to the following closed-form updates: \begin{align} u_{ik} &= \f{\sum_{j} Q_{ij}\, A_{ij}\sum_{q}\rho_{ijkq} }{\sum_{j}Q_{ij}\, \sum_{q}u_{jq}w_{kq}} \label{eqn:u} \\ w_{kq} & = \f{\sum_{i,j} Q_{ij}\, A_{ij}\rho_{ijkq} }{\sum_{i,j}Q_{ij}\, u_{ik}u_{jq}} \label{eqn:w} \\ \rho_{ijkq} & =\f{u_{ik}u_{jq}w_{kq}}{\sum_{k,q}u_{ik}u_{jq}w_{kq}} \label{eqn:rho} \\ \mu_{i} & =\f{\sum_{j}Q_{ij}} {\sum_{j}\f{(1-Q_{ij})\, \mu_{j} }{ (1-\mu_{i}\, \mu_{j})}} \label{eqn:mu} \quad, \end{align} where we defined $Q_{ij}=\sum_{\mathbf{Z}}\,Z_{ij}\,q(\mathbf{Z}) $ the expected value of $Z_{ij}$ over the variational distribution. As $\mu_i$ appears on both sides of \Cref{eqn:mu}, this can be solved with root-finding methods bounding $\mu_i$ to the interval $[0,1]$, to be compatible as a parameter of the Bernoulli prior.\footnote{ In practice, we limit the domain of $\mu_i$ to the interval [$\epsilon,1-\epsilon]$, where $\epsilon$ is a small hyperparameter chosen to avoid numerical overflows of $\mathcal{L}$. To maintain the model interpretable in terms of exposure, at the end of the optimization we set to zero each $\mu_{i}\equiv \epsilon$ and to one each $\mu_{i} \equiv 1-\epsilon$. 
}
Finally, to evaluate $q(Z)$, we substitute the estimated parameters inside \Cref{eqn:joint}, and then into \Cref{eqn:q} to obtain:
\begin{align}
q(\mathbf{Z})&={\prod_{i<j}}\, Q_{ij}^{Z_{ij}}\, (1-Q_{ij})^{(1-Z_{ij})}\,\label{eqn:qij},
\end{align}
where
\begin{equation}\label{eqn:Qij}
Q_{ij} = \f{\textrm{Pois}(A_{ij};\lambda_{ij})\mu_{ij}}{\textrm{Pois}(A_{ij};\lambda_{ij})\mu_{ij}+\delta(A_{ij})\, (1-\mu_{ij})} \quad.
\ee
In other words, the optimal $q(\mathbf{Z})$ is a product $\prod_{i<j}q_{ij}(\mathbf{Z}_{ij})$ of Bernoulli distributions $q_{ij}$ with parameters $Q_{ij}$. This parameter is also a point estimate of the exposure variable, as for the Bernoulli distribution $Q_{ij} = \mathbb{E}_{q}\rup{\mathbf{Z}_{ij}}$.\\
The algorithmic EM procedure then works by initializing all the parameters at random, iterating \Crefrange{eqn:u}{eqn:mu} for fixed $q$, then calculating \Cref{eqn:Qij} given the other parameters, and so on until convergence of $\mathcal{L}$. The function $\mathcal{L}$ is not convex, hence we are not guaranteed to converge to the global optimum. In practice, one needs to run the algorithm several times with different random initializations of the parameters and then select the run that leads to the best value of $\mathcal{L}$. In the following experiments we use 5 such realizations.
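The whole estimation loop is compact enough to sketch. The following is our own dense-matrix numpy illustration of one EM sweep, combining the $Q_{ij}$ update of \Cref{eqn:Qij} with the multiplicative updates \Crefrange{eqn:u}{eqn:mu}; self-pairs and sparsity are glossed over, and a few clipped fixed-point passes stand in for a proper root finder for $\mu$.
\begin{verbatim}
import numpy as np

def em_sweep(A, u, w, mu, eps=1e-8):
    # A: (N, N) observed adjacency; u: (N, K); w: (K, K); mu: (N,)
    lam = u @ w @ u.T                                  # lambda_ij
    # E-step: Q = 1 on existing edges; Bernoulli posterior on zeros
    muij = np.outer(mu, mu)
    pois0 = np.exp(-lam)                               # Pois(0; lambda)
    Q = np.where(A > 0, 1.0,
                 pois0 * muij / (pois0 * muij + (1.0 - muij) + eps))
    # M-step for u and w (multiplicative updates)
    ratio = np.where(A > 0, Q * A / (lam + eps), 0.0)  # Q_ij A_ij / lambda_ij
    uw = u @ w.T                                       # sum_q u_jq w_kq
    u_new = u * (ratio @ uw) / (Q @ uw + eps)
    w_new = w * (u.T @ ratio @ u) / (u.T @ Q @ u + eps)
    # M-step for mu: the implicit equation solved by fixed-point passes
    mu_new = mu.copy()
    for _ in range(20):
        denom = ((1.0 - Q) * mu_new[None, :]
                 / (1.0 - np.outer(mu_new, mu_new) + eps)).sum(axis=1)
        mu_new = np.clip(Q.sum(axis=1) / (denom + eps), eps, 1.0 - eps)
    return u_new, w_new, mu_new, Q
\end{verbatim}
Iterating this sweep until $\mathcal{L}$ stalls, from several random initializations, reproduces the procedure described above.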
\section{Results}\label{sec:res}
We test our algorithm on synthetic and real data, and compare it to its formulation without exposure, \textit{i.e.} the \mbox{{\small \textsc{MultiTensor} }} algorithm described in Ref.~\cite{de2017community}. In the following, we refer to our algorithm as \mbox{{\small \textsc{EXP}}}, and we use \mbox{{\small \textsc{NoEXP}}}\ for the algorithm that does not utilize exposure.
\subsection{Synthetic data}\label{sec:synt}
Synthetic data experiments are particularly interesting, because we can validate our model's performance against the ground truth values. The creation of a synthetic dataset follows the generative model described in \Cref{sec:rep-exp}:
\begin{enumerate}
\item For a graph with $N=500$ nodes, we generate the latent parameters $\theta$ and $\mu$ as follows. We draw overlapping communities by sampling $u_i$ from a Dirichlet distribution with parameter $\alpha_k=1$, $\forall k$; we choose an assortative $w$ by setting the off-diagonal entries to 0.001 times the on-diagonal ones. We then vary $K\in [3,5,8]$. We draw $\mu_i$ from a Beta distribution $\mathrm{Beta}(\mu_i;2,\beta)$, where we vary $\beta\in [0.1,10]$ to tune the fraction of unexposed links.
\item Sample $\mathbf{A^\mathrm{(g)}}_{ij}$ from a Poisson distribution with means $\lambda_{ij}=\sum_{k,q}u_{ik}u_{jq}w_{kq}$.
\item Sample $\mathbf{Z}$ from a Bernoulli distribution with means $\mu_{ij}=\mu_{i}\mu_j$.
\item Calculate the matrix $\mathbf{A^\mathrm{(o)}}=\mathbf{A^\mathrm{(g)}}\otimes\mathbf{Z}$. This matrix has on average $\langle k\rangle$ links per node.
\end{enumerate}
We repeat this procedure 10 times for each set of parameters to obtain different random realizations of synthetic data. We then apply the \mbox{{\small \textsc{EXP}}}\ and \mbox{{\small \textsc{NoEXP}}}\ algorithms to $\mathbf{A^\mathrm{(o)}}$ to learn the parameters and study the performance as a function of $\langle k\rangle$, controlling the density of observed edges.
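For concreteness, steps 1--4 translate directly into numpy; the sketch below is ours, and the overall scale of $w$ (which controls $\langle k\rangle$) is a free choice rather than a value from the text.
\begin{verbatim}
import numpy as np

def sample_synthetic(N=500, K=5, beta=1.0, w_on=0.5, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(K), size=N)              # step 1: memberships
    w = w_on * (np.eye(K) + 0.001 * (1 - np.eye(K)))   # assortative affinity
    mu = rng.beta(2.0, beta, size=N)                   # exposure propensities
    A_gt = rng.poisson(u @ w @ u.T)                    # step 2: ground truth
    Z = rng.binomial(1, np.outer(mu, mu))              # step 3: exposure mask
    A_obs = A_gt * Z                                   # step 4: observed graph
    for M in (A_gt, Z, A_obs):                         # undirected, no loops
        M[:] = np.triu(M, 1) + np.triu(M, 1).T
    return A_obs, A_gt, Z
\end{verbatim}
The EM sweep from the previous sketch can then be run on \texttt{A\_obs} and its output compared against \texttt{A\_gt} and \texttt{Z}, mirroring the evaluations that follow.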
\paragraph{Reconstructing hidden links}
We start by testing the ability of the model to predict missing links, a procedure often used as a powerful evaluation framework for comparing different models \cite{liben2007link,lu2011link}. We use a 5-fold cross-validation scheme where we hide $20\%$ of the edges in $\mathbf{A^\mathrm{(o)}}$ and train the model on the remaining $80\%$. Performance is then computed on the hidden $20\%$ of the edges. As a performance evaluation metric we measure the area under the receiver operating characteristic curve (AUC) between the inferred values and the ground truth used to generate $\mathbf{A^\mathrm{(o)}}$ on the test set. The AUC is the probability that a randomly selected existing edge is predicted with a higher score than a randomly selected non-existing edge. A value of 1 means optimal performance, while 0.5 is equivalent to random guessing. As the score of an edge $\mathbf{A^\mathrm{(o)}}_{i,j}$ we use the quantity $Q_{ij}\, \lambda_{ij}$ for \mbox{{\small \textsc{EXP}}}, and $\lambda_{ij}$ for \mbox{{\small \textsc{NoEXP}}}. In both cases, these are the expected values of $A^\mathrm{(o)}_{ij}$ using the estimates of the latent parameters and, for \mbox{{\small \textsc{EXP}}}, over the inferred $q(\mathbf{Z})$. We find that the \mbox{{\small \textsc{EXP}}}\ algorithm outperforms \mbox{{\small \textsc{NoEXP}}}\ by a large margin, which increases as the network becomes more dense, going above 10\%, as shown in \Cref{fig:synth1}--left. At low densities, the performance increase of the \mbox{{\small \textsc{EXP}}}\ algorithm is narrow for models with a large number of communities, while at large densities it becomes bigger and independent of the number of communities. This result suggests that \mbox{{\small \textsc{EXP}}}\ captures the input data better--consistently for varying dilution densities--than a model that does not account for exposure.
\paragraph{Guessing unexposed links}
Our algorithm not only allows us to predict missing edges but also gives interpretable estimates of the probability of exposure between nodes. These probabilities follow naturally from the posterior distribution on \textbf{Z}, which is the Bernoulli distribution in \Cref{eqn:qij}. Standard algorithms such as \mbox{{\small \textsc{NoEXP}}}\ cannot estimate this. We can use the mean value $Q_{ij}$ as in \Cref{eqn:Qij} as the score of an edge to compute the AUC between inferred and ground truth values of $\mathbf{Z}$, analogously to what was done for reconstructing $\mathbf{A^\mathrm{(o)}}$. We report in \Cref{fig:synth1}--center the ability of \mbox{{\small \textsc{EXP}}}\ to reconstruct the matrix $\mathbf{Z}$, \textit{i.e.} to infer which edges were removed in the dilution step. The AUC varies between $0.65$ and $0.75$, well above the random baseline of $0.5$. We notice how the values increase as the density of connections increases, but stay above $0.65$ even at small density values, where reconstruction is more challenging.
\paragraph{Inferring communities}
In \Cref{fig:synth1}--right, we can see that \mbox{{\small \textsc{EXP}}}\ and \mbox{{\small \textsc{NoEXP}}}\ show similar performances in reconstructing communities. From this plot we can also notice how reconstruction improves for larger densities and fewer communities. The similar performances may be due to selecting a simple prior as in \Cref{eqn:prior}. For a more structured prior, the inferred communities would likely change and potentially improve. Given these similar community detection abilities but the better predictive power in reconstructing $\mathbf{A^\mathrm{(o)}}$, we argue that the learned $Q_{ij}$'s are important to boost prediction compared to a model that does not properly account for exposure. This is true even for a simple prior.
\begin{figure}\centering
\includegraphics[width=0.325\textwidth]{AUC_A}
\includegraphics[width=0.325\textwidth]{AUC_Q}
\includegraphics[width=0.325\textwidth]{CS}
\caption{ Performance of the \mbox{{\small \textsc{EXP}}}\ and \mbox{{\small \textsc{NoEXP}}}\ algorithms on synthetic data. The matrix $\mathbf{A^\mathrm{(g)}}$ has $K=3,5,8$ communities and $N=500$. The exposure mask $Z$ is extracted from a Bernoulli distribution with parameter $\mu_{ij}=\mu_i\mu_j$. \textbf{Left:} AUC between the inferred values and the ground truth used to generate $\mathbf{A^\mathrm{(o)}}$. \textbf{Center:} AUC of the reconstruction of the exposure mask $\mathbf{Z}$. \textbf{Right:} Cosine similarity between inferred and ground truth communities. \textbf{Inset:} We show the same data as in the main plots by rescaling the average number of links by the number of communities. }
\label{fig:synth1}
\end{figure}
\paragraph{Dependence on the number of communities}
All of these metrics exhibit a scaling w.r.t. the variable $\langle k\rangle/K$, as can be seen in the insets of \Cref{fig:synth1}. This suggests that the curves are independent of the number of communities when accounting for this rescaling. Thus observing the behavior for one particular value of $K$ should be informative enough to understand how the model behaves for various densities.
\paragraph{Suggesting good matches}
Since the \mbox{{\small \textsc{EXP}}}\ algorithm is good at predicting which links were removed from the original graph (\Cref{fig:synth1}--center), we can use this to address the following question: Is the \mbox{{\small \textsc{EXP}}}\ algorithm able to suggest two nodes that have high affinity despite not having any connection? In other words, we are asking whether we are able to find links that are absent in $\mathbf{A^\mathrm{(o)}}$, but have a high expected value in $\mathbf{A^\mathrm{(g)}}$. To test this ability, we take for each node $i$: a) all the possible neighbors $j$ such that $A^\mathrm{(o)}_{ij}=0$; b) select among them the 20 with the largest inferred affinity $\lambda_{ij}$; and c) check how many of those are present in $\mathbf{A^\mathrm{(g)}}$. We call Precision@20 (P@20) the fraction of links which were correctly inferred, averaging across all nodes. In \Cref{fig:precision} we show that for intermediate dilution values, the P@20 reaches around 80\%, and it outperforms random guessing at any value of the dilution. Notice that random guessing is not constant in $\langle k \rangle$. This is because it depends on the number of missing links in $\mathbf{A^\mathrm{(o)}}$, and those depend both on the density of $\mathbf{A^\mathrm{(g)}}$ and on the dilution mask $\mathbf{Z}$. Specifically, the P@20 of the random baseline goes as $(\langle k \rangle_g-\langle k \rangle )/(N-\langle k \rangle)$, where $\langle k \rangle_g$ is the average degree of $\mathbf{A^\mathrm{(g)}}$. This is a decreasing function of $\langle k \rangle$, for $\langle k \rangle_g < N$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{precision.png}
\caption{Suggesting unexposed compatible nodes. For each node $i$, we suggest the 20 links with highest $\lambda_{ij}$ inferred by our algorithm from the non-observed links where $A^\mathrm{(o)}_{ij}=0$.
We show the P@20 averaged across all nodes and compare with a uniform-at-random baseline (random) where 20 nodes are selected at random among the available ones. Error bars are standard deviations. Here we use a synthetic network generated as in \Cref{sec:synt} with $N=500$ and $K=5$.}
\label{fig:precision}
\end{figure}
\subsection{Real data}
To test our algorithm on real data, we use the American College Football Network (ACFN) dataset provided in Ref.~\cite{girvan2002community}, which represents the schedule of Division I games for the season of the year 2000. Each node in the dataset corresponds to a team, and each link is a game played between teams. Teams are grouped in conferences, and each team plays most of its games within the same conference (though not all teams within a conference encounter each other). Conferences group teams of similar level, but another main criterion is geographic distance. Therefore, this dataset has a community structure that is not based on affinity. Here, affinity indicates that teams are of a similar level, and therefore should play in the same conference, if conferences were based solely on affinity. We randomly hide 20\% of the links in the ACFN and check how well the \mbox{{\small \textsc{EXP}}}\ and \mbox{{\small \textsc{NoEXP}}}\ algorithms are able to reconstruct which links are missing. We run the algorithm with various numbers of communities $K=9,11,13$, finding the best result at $K=11$, which is also the number of conferences in the dataset. In \Cref{fig:real}--left we show a scatter plot of the AUC trial-by-trial. This reveals a superior performance of the \mbox{{\small \textsc{EXP}}}\ method, which outperforms \mbox{{\small \textsc{NoEXP}}}\ in 142 out of 150 trials (5 folds per 10 random seeds for each of $K=9,11,13$). This suggests that \mbox{{\small \textsc{EXP}}}\ better captures the data.
\begin{figure}[t!]
\includegraphics[width=0.4\textwidth]{AUC_football_scatter}
\includegraphics[width=.6\textwidth, trim=0 0 0 500]{football}
\caption{\textbf{Left}: Performance in predicting missing links of the \mbox{{\small \textsc{EXP}}}\ and \mbox{{\small \textsc{NoEXP}}}\ algorithms on the ACFN dataset. Different marker shapes correspond to different numbers of communities, while blue (red) markers denote instances where \mbox{{\small \textsc{EXP}}}\ (\mbox{{\small \textsc{NoEXP}}}) has better performance than \mbox{{\small \textsc{NoEXP}}}\ (\mbox{{\small \textsc{EXP}}}). There is a total of 150 markers, denoting 5 folds repeated for 10 random seeds for each value of $K$. \textbf{Right}: Top 10 games that are recommended by the \mbox{{\small \textsc{EXP}}}\ algorithm with $K=11$, which were not played in the ACFN dataset. Different colors indicate different conferences. }
\label{fig:real}
\end{figure}
In \Cref{fig:real}--right we show the top 10 recommendations that we can extract from the \mbox{{\small \textsc{EXP}}}\ algorithm by taking, among the links missing from $\mathbf{A^\mathrm{(o)}}$, those with the smallest predicted exposure $Q_{ij}$ and the highest affinity $\lambda_{ij}$. Although, in the absence of ground truth, we are not able to assess the validity of these suggestions, we note that all the suggested links represent unplayed games within the same conference, and that games between teams in different conferences were ranked lower.
\section{Conclusions}
In networks, nodes that would enjoy a high mutual affinity often appear disconnected for reasons that are independent of affinity.
This is the case, for example, with people or entities in social networks that have never met, possibly due to some kind of sampling bias. This introduces a bias in the datasets used for community detection. We studied this problem through a general framework, where we postulate that affinity, in terms of compatibility of communities, is not enough to explain the existence of a link; rather, a mechanism of exposure between nodes should be taken into account as well. We proposed a principled probabilistic model, \mbox{{\small \textsc{EXP}}}, that takes this type of bias into account and is able to estimate the probability that two non-connected nodes are exposed, while jointly learning what communities they belong to. We tested the \mbox{{\small \textsc{EXP}}}\ algorithm against a version of itself that does not account for exposure, \mbox{{\small \textsc{NoEXP}}}. On artificial data, where we could validate our results on ground truth parameters and unobserved ground truth data, we found that \mbox{{\small \textsc{EXP}}}\ is as good as \mbox{{\small \textsc{NoEXP}}}\ at learning communities, but it outperforms it when it comes to reconstructing missing links. In addition, the \mbox{{\small \textsc{EXP}}}\ approach allows us to satisfactorily infer which links remained unexposed, an estimate that cannot be obtained with standard methods such as \mbox{{\small \textsc{NoEXP}}}. We finally tested our algorithm on a real dataset which has a hidden structure that is independent of the affinity between nodes, finding that here, too, the \mbox{{\small \textsc{EXP}}}\ algorithm is better at reconstructing missing links. The principled approach that we used, based on statistical inference, is general. It can be made more specific depending on the application at hand. For example, we considered the simple case where exposure only depends on each individual's propensity towards being exposed. However, this could depend on a finer structure of society, and we could think of introducing an exposure mechanism that mimics the presence of communities which are independent of affinity (\textit{e.g.} different schools, or different classes in a school). Allowing for community-dependent exposure has the potential to better mimic the kind of dilution that occurs in many real datasets. This also applies to the ACFN dataset, where a better way to model exposure may be one that allows for a structure able to account for different conferences or geographical regions. We leave this for future work. Additionally, exposure could be driven by covariate information on nodes, as also used in recommender systems \cite{liang2016modeling}. This could be integrated using variants of community detection methods that account for this extra information \cite{contisciani2020community,newman2016structure,fajardo2021node}. Exposure could also change through time, and it could also have some dependence on the structure of $\mathbf{A^\mathrm{(g)}}$. These are all interesting avenues for future work. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:1} Recently, there has been a growing interest in surfaces in the isotropic $3$-space $\mathbb{I}^3$, which is the 3-dimensional vector space $\mathbb{R}^3$ with the degenerate metric $dx^2+dy^2$. Here, $(x,y,t)$ are canonical coordinates on $\mathbb{I}^3$. In particular, a class of surfaces in $\mathbb{I}^3$ naturally appears as an intermediate geometry between the geometry of minimal surfaces in the Euclidean $3$-space $\mathbb{E}^3$ and that of maximal surfaces in the Lorentz-Minkowski $3$-space $\mathbb{L}^3$. In fact, consider the deformation family introduced in \cite{AF3} with parameter $c\in \mathbb{R}$, \begin{equation}\label{eq:Wformula} X_{c}(w)=\mathrm{Re}\int^w \left(1-c G^2,-i(1+c G^2),2G\right)Fd\zeta,\quad w\in D, \end{equation} on a simply connected domain $D\subset \mathbb{C}$. Here, the pair $(F,G)$ of a holomorphic function $F$ and a meromorphic function $G$ on $D$ is called a {\it Weierstrass data} of $X_c$. Interestingly, \eqref{eq:Wformula} represents Weierstrass-type formulae for minimal surfaces in $\mathbb{E}^3$ when $c=1$ and for maximal surfaces in $\mathbb{L}^3$ when $c=-1$. When we take $c=0$, \eqref{eq:Wformula} is nothing but the representation formula for zero mean curvature surfaces in $\mathbb{I}^3$; see \cite{AF3,Si,MaEtal,SY,Sato,Pember} for example and Figure \ref{Fig:Cats}. \begin{figure}[htbp] \vspace{3.0cm} \hspace*{-11ex} \begin{tabular}{cc} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.2cm} \includegraphics[keepaspectratio, scale=0.5]{Fig1_1.eps} \end{minipage} \hspace*{-18ex} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.3cm} \includegraphics[keepaspectratio, scale=0.5]{Fig1_2.eps} \end{minipage} \hspace*{-15ex} \begin{minipage}[t]{0.50\hsize} \centering \vspace{-3.1cm} \includegraphics[keepaspectratio, scale=0.55]{Fig1_3.eps} \end{minipage} \end{tabular} \caption{The catenoid $X_1$ in $\mathbb{E}^3$ (left), isotropic catenoid $X_0$ in $\mathbb{I}^3$ (center) and elliptic catenoid $X_{-1}$ in $\mathbb{L}^3$ (right). These surfaces are related by one deformation family $\{X_c\}_{c\in \mathbb{R}}$.} \label{Fig:Cats} \end{figure} One distinctive feature of surfaces in $\mathbb{I}^3$ concerns vertical lines. Since the metric in $\mathbb{I}^3$ ignores the vertical component of vectors, the induced metric on a surface degenerates on vertical lines in $\mathbb{I}^3$. In this sense, vertical lines in $\mathbb{I}^3$ are intriguing and important objects in the geometry of $\mathbb{I}^3$, and such a vertical line in $\mathbb{I}^3$ is called an {\it isotropic line}. In this paper, we solve the problem raised by Seo-Yang \cite[Remark 30]{SY}, which can be stated as follows. \vspace{0.3cm} \begin{center} {\it Is there a principle of analytic continuation across isotropic lines\\ on zero mean curvature surfaces in $\mathbb{I}^3$?} \end{center} \vspace{0.3cm} Obviously, it is directly related to a reflection principle for zero mean curvature surfaces in $\mathbb{I}^3$. The main theorem of this paper is as follows (see also Figure \ref{Fig:Assumption}). \begin{theorem}\label{thm:reflection_Intro} Let $S\subset \mathbb{I}^3$ be a bounded zero mean curvature graph over a simply connected Jordan domain $\Omega \subset \mathbb{C}\simeq \text{${xy}$-plane}$.
If $\partial{S}$ has an isotropic line segment $L$ on a boundary point $z_0\in \partial{\Omega}$ satisfying \begin{itemize} \item $L$ connects two horizontal curves $\gamma_1$ and $\gamma_2$ on $\partial{S}$, and \item the projections of $\gamma_1$ and $\gamma_2$ into the $xy$-plane form a regular analytic curve near $z_0$. We denote this analytic curve on the $xy$-plane by $\Gamma$. \end{itemize} Then $S$ can be extended real analytically across $L$ via the analytic continuation across $\Gamma$ in the $xy$-plane and the reflection with respect to the height of the midpoint of $L$ in the $t$-direction. \end{theorem} \begin{figure}[htbp] \vspace{3.3cm} \hspace{-8ex} \begin{tabular}{cc} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.5cm} \hspace*{-12ex} \includegraphics[keepaspectratio, scale=0.55]{Fig2_1.eps} \end{minipage} \hspace{-7ex} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.5cm} \includegraphics[keepaspectratio, scale=0.65]{Fig2_2.eps} \end{minipage} \end{tabular} \vspace{-0.5cm} \caption{A surface with the boundary condition assumed in Theorem \ref{thm:reflection_Intro} (left) and its analytic extension (right). Each vertical line (blue one) indicates $L$ and the horizontal curves connected by $L$ (yellow ones) indicate $\gamma_1$ and $\gamma_2$ in Theorem \ref{thm:reflection_Intro}.} \label{Fig:Assumption} \end{figure} Moreover, we also investigate how this kind of isotropic line appears on the boundary of zero mean curvature surfaces in $\mathbb{I}^3$; see Theorem \ref{thm:reflection} for more details. As an application of Theorem \ref{thm:reflection_Intro}, we give an analytic continuation of some examples, including the helicoid, across isotropic lines. We also give triply periodic zero mean curvature surfaces with isotropic lines. One of them is analogous to the Schwarz D minimal surface in $\mathbb{E}^3$. The organization of this paper is as follows. In Section 2, we give a short summary of zero mean curvature surfaces in $\mathbb{I}^3$. In Section 3, we first prove a reflection principle for continuous boundaries of zero mean curvature surfaces in $\mathbb{I}^3$, analogous to the classical reflection principle for minimal surfaces in $\mathbb{E}^3$. After that, we give a proof of Theorem \ref{thm:reflection_Intro}. Finally, we give some examples of zero mean curvature surfaces with isotropic lines in Section 4. \section{Preliminary} \label{sec:2} In this section, we recall some basic notions of zero mean curvature surfaces in $\mathbb{I}^3$. See \cite{Sachs,Sato, Strubecker, Strubecker2, SY, Si} and their references for more details. The {\it simply isotropic $3$-space} $\mathbb{I}^3$ is the 3-dimensional vector space $\mathbb{R}^3$ with the degenerate metric $\langle \ ,\ \rangle := dx^2+dy^2$. Here, $(x,y,t)$ are canonical coordinates on $\mathbb{I}^3$. A non-degenerate surface in $\mathbb{I}^3$ is an immersion $X\colon D \to \mathbb{I}^3$ from a domain $D\subset \mathbb{C}$ into $\mathbb{I}^3$ whose induced metric $ds^2:=X^*\langle\ ,\ \rangle$ is positive-definite. Then, we can define the Laplacian $\Delta$ on the surface with respect to $ds^2$. Since minimal surfaces in the Euclidean $3$-space $\mathbb{E}^3$ and maximal surfaces in the Minkowski $3$-space $\mathbb{L}^3$ are characterized by the equation \begin{equation}\label{eq:harmonic} \Delta{X}= \left( \Delta{x}, \Delta{y}, \Delta{t} \right) \equiv \vect{0}, \end{equation} we can also consider surfaces in $\mathbb{I}^3$ satisfying the equation \eqref{eq:harmonic}.
Such a surface is called a {\it zero mean curvature surface} or an {\it isotropic minimal surface} in $\mathbb{I}^3$. Since $ds^2$ is positive-definite, each tangent plane of the surface $X$ is not vertical. Hence, each surface $X$ is locally parameterized by $X(x,y)=(x,y,f(x,y))$ for a function $f=f(x,y)$, and the Laplacian $\Delta$ can be written as $\Delta = \partial_x^2+\partial_y^2$ in these coordinates. This means that a zero mean curvature surface in $\mathbb{I}^3$ is locally the graph of a harmonic function on the $xy$-plane. As for minimal surfaces in $\mathbb{E}^3$ and maximal surfaces in $\mathbb{L}^3$, one can see from the equation \eqref{eq:harmonic} that zero mean curvature surfaces in $\mathbb{I}^3$ also admit the following Weierstrass-type representation formula. \begin{proposition}[cf.~\cite{MaEtal,Pember,Sato,Strubecker,SY}]\label{prop:W} Let $F\not \equiv 0$ be a holomorphic function on a simply connected domain $D\subset \mathbb{C}$ and $G$ a meromorphic function on $D$ such that $FG$ is holomorphic on $D$. Then the mapping \begin{equation}\label{eq:w-formula} X(w)=\mathrm{Re}\int^w {}^t(1,-i,2G)Fd\zeta \end{equation} gives a zero mean curvature surface in $\mathbb{I}^3$. Conversely, any zero mean curvature surface in $\mathbb{I}^3$ is of the form \eqref{eq:w-formula}. \end{proposition} The pair $(F,G)$ is called a {\it Weierstrass data} of $X$. \begin{remark} Since the induced metric is $ds^2=\lvert F \rvert^2dwd\overline{w}$, we can also consider zero mean curvature surfaces in $\mathbb{I}^3$ with singular points via the equation \eqref{eq:w-formula}. Singular points, on which the metric $ds^2$ degenerates, correspond to isolated zeros of the holomorphic function $F$. \end{remark} At the end of this section, we mention another representation using harmonic functions. Let us define the holomorphic function $h(w)=\int^w Fd\zeta$ on $D$. Then the conformal parametrization of $X(w)=(x(w),y(w),t(w))$ in \eqref{eq:w-formula} is written as $X(w)=(h(w),t(w))$, where we identify the $xy$-plane with the complex plane $\mathbb{C}$. \section{Main theorem} \label{sec:3} \subsection{Reflection principle for horizontal curves} As in the classical minimal surface theory in $\mathbb{E}^3$, the Schwarz reflection principle also leads to a symmetry principle for continuous planar boundaries of zero mean curvature surfaces in $\mathbb{I}^3$. In this subsection, we recall a reflection principle of this typical type. A subset $\Gamma \subset \mathbb{C}$ is said to be a {\it regular simple analytic arc} if there exists an open interval $I\subset \mathbb{R}$ and an injective real analytic curve $\gamma\colon I \to \mathbb{C}$ such that $\gamma' \neq 0$ and $\gamma(I)=\Gamma$. We denote the analytic continuation of $\gamma$ into a neighborhood of $I$ again by $\gamma$, and note that $\gamma$ is a conformal mapping around $I$ since $\gamma' \neq 0$. We define the reflection map $R_{\Gamma}$ with respect to $\Gamma$ by the relation $R_\Gamma= R_\gamma:=\gamma \circ R \circ \gamma^{-1}$, where $R$ is the complex conjugation (note that we can easily see that $R_\Gamma=R_\gamma$ is independent of the choice of $\gamma$). We call this reflection $R_\Gamma$ the {\it reflection with respect to $\Gamma$}.
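As a simple sanity check of this definition (ours, and not needed in the sequel), one can verify numerically that for the unit circle, parametrized by $\gamma(s)=e^{is}$, the reflection $R_\Gamma$ reduces to the inversion $w\mapsto 1/\overline{w}$; the sketch below assumes NumPy and the principal branch of the logarithm.
\begin{verbatim}
import numpy as np

# gamma parametrizes Gamma (here: the unit circle) and extends
# holomorphically to complex arguments; R is complex conjugation.
def gamma(s):
    return np.exp(1j * s)

def gamma_inv(w):
    # local inverse of gamma near the circle (principal logarithm)
    return np.log(w) / 1j

def R_Gamma(w):
    # R_Gamma = gamma o R o gamma^{-1}
    return gamma(np.conj(gamma_inv(w)))

w = 0.5 * np.exp(0.3j)                # a point inside the unit disk
print(R_Gamma(w), 1.0 / np.conj(w))   # both print the inversion of w
\end{verbatim}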
Let $f\colon \mathbb{H} \to \mathbb{C}$ be a holomorphic function which extends continuously to an interval $J=(a,b)\subset \partial \mathbb{H}$. Here, $\mathbb{H}$ denotes the upper half-plane in $\mathbb{C}$. In this setting, let us recall the following reflection principle for holomorphic functions (the proof is given in a similar way to the discussion in \cite[Chapter 6, Section 1.4]{Ahl1}). \begin{lemma} \label{lem:ref_in_analytic_arc} Under the above assumption, if the image $\Gamma :=f(J)$ is a regular simple analytic arc, then $f$ extends holomorphically to an open subset containing $\mathbb{H}\cup J$ so that \[ f(\overline{w})=R_{\Gamma} \circ f(w). \] \end{lemma} As a typical case, the following reflection principle for zero mean curvature surfaces in $\mathbb{I}^3$ holds. \begin{proposition}\label{prop:reflection} Let $X\colon \mathbb{H}\to S$ be a conformal parametrization of a zero mean curvature surface $S\subset \mathbb{I}^3$. If $X$ is continuous on an open interval $I\subset \partial{\mathbb{H}}$ and $\Gamma:=X(I)$ is a regular (simple) analytic arc on a horizontal plane $P\simeq \mathbb{C}$, then $S$ can be extended real analytically across $\Gamma$ via the reflection with respect to $\Gamma$ in the $xy$-direction and the planar symmetry with respect to $P$. \end{proposition} We should mention that this result was essentially obtained by Strubecker \cite{Strubecker3}. Here, we give a short proof of this fact for the sake of completeness. \begin{proof} We may assume that $P$ is the $xy$-plane. By the assumption, $X=(x,y,t)$ satisfies $t\equiv 0$ on $I$, and, writing $X=(f,t)$ with $f=x+iy$, the image $f(I)=X(I)=\Gamma$ is a regular analytic arc. By the Schwarz reflection principle (see \cite[Chapter 4, Section 6.5]{Ahl1}), the harmonic function $t$ can be extended across $I\subset \partial{\mathbb{H}}$ so that $t(\overline{w})=-t(w)$. On the other hand, by Lemma \ref{lem:ref_in_analytic_arc}, $f$ is also extended across $I$. Therefore, $X(w)=(f(w), t(w))$ can be defined across $I$ and satisfies $X(\overline{w})=(R_{\Gamma} \circ f(w),-t(w))$, which is the desired symmetry. \end{proof} As a special case of Proposition \ref{prop:reflection}, we can consider the following specific boundary conditions. \begin{corollary}\label{cor:reflection_nonvertical} Under the same assumptions as in Proposition \ref{prop:reflection}, the following statements hold. \begin{itemize} \item[(i)] If $\Gamma$ is a straight line segment on a horizontal plane $P$, then $S$ can be extended via the $180^\circ$ rotation with respect to $\Gamma$. \item[(ii)] If $\Gamma$ is a circular arc on a horizontal plane $P$, then $S$ can be extended via the inversion of the circle in the $xy$-direction and the planar symmetry with respect to $P$. \end{itemize} \end{corollary} \begin{remark}[Reflection for boundary curves on non-vertical planes] Proposition \ref{prop:reflection} and Corollary \ref{cor:reflection_nonvertical} are also valid when $P$ is a general non-vertical plane, as follows. If $P$ is a non-vertical plane, we can describe it by the equation $t=ax+by+c$ for some $a,b,c \in \mathbb{R}$. After taking the affine transformation \begin{equation}\label{eq:iso} (x,y,t)\longmapsto (x,y,t-ax-by-c), \end{equation} which preserves the metric $\langle \ ,\ \rangle$, $P$ becomes a horizontal plane. Hence we can apply the reflection properties as in Proposition \ref{prop:reflection} and Corollary \ref{cor:reflection_nonvertical}. We remark that the affine transformation \eqref{eq:iso} is not an isometry in $\mathbb{E}^3$, and hence the symmetry changes slightly after this transformation.
For instance, the symmetry in (i) of Corollary \ref{cor:reflection_nonvertical} is no longer the $180^\circ$ rotation with respect to a straight line after the inverse transformation of \eqref{eq:iso}. The transformation \eqref{eq:iso} is one of the congruent motions in $\mathbb{I}^3$. See \cite{Pottmann, Sachs, Strubecker3} for example. \end{remark} \subsection{Reflection principle for vertical lines}\label{subsec:vline} The classical Schwarz reflection principle applies to harmonic functions which are at least continuous on their boundaries. On the other hand, as discussed in the proof of Theorem 2.3 and Remark 2.4 in {\cite{AF2}}, each harmonic function with a discontinuous jump point at the boundary also has a real analytic continuation across the boundary after taking an appropriate blow-up, as follows. Let $\Pi\colon D^+:=\mathbb{R}_{>0}\times (0, \pi) \to \mathbb{H}$ be the homeomorphism defined by $\Pi(r, \theta)=re^{i \theta}$. By definition, $\Pi$ is real analytic on the wider domain $D:=\mathbb{R}\times (0, \pi)$. \begin{proposition}[\cite{AF2}]\label{prop:DiscontiRef} Let $f\colon \mathbb{H} \to \mathbb{R}$ be a bounded harmonic function which is continuous on $\mathbb{H} \cup (-\varepsilon, 0) \cup (0,\varepsilon)$ for some $\varepsilon >0$. If $f\equiv a$ on $(-\varepsilon, 0)$ and $f\equiv b$ on $ (0,\varepsilon)$, then the real analytic map $f\circ \Pi$ on $D^+$ extends to $D$ real analytically satisfying the following conditions. \begin{itemize} \item[(i)] $f\circ \Pi(-r,\pi-\theta)+f\circ\Pi(r,\theta)=a+b$, and \item[(ii)] $f\circ\Pi(0,\theta)=a\cfrac{\theta}{\pi}+b\left(1-\cfrac{\theta}{\pi}\right)$. \end{itemize} \end{proposition} \begin{remark}[Blow-up of the discontinuous point $0\in \partial{\mathbb{H}}$] The condition (ii) in Proposition \ref{prop:DiscontiRef} means that $f\circ\Pi(0,\theta)$ is the point which divides the line segment connecting $a$ and $b$ into two segments with length ratio $(1-\theta/\pi)\colon \theta/\pi$. \end{remark} As pointed out in \cite[Remark 30]{SY}, vertical lines naturally appear on the boundary of zero mean curvature surfaces in $\mathbb{I}^3$, along which each tangent vector $\vect{v}$ has zero length: $\langle \vect{v}, \vect{v} \rangle=0$. In this sense, a vertical line segment in $\mathbb{I}^3$ is different from any other non-vertical line, and it is called an {\it isotropic line} (cf.~\cite{Pottmann}). Obviously, we cannot apply the usual Schwarz reflection principle to such boundary lines because the conformal structure on such a surface in $\mathbb{I}^3$ breaks down on isotropic lines. By using Proposition \ref{prop:DiscontiRef}, we can investigate such isotropic lines and a reflection property along them as follows. \begin{theorem}\label{thm:reflection} Let $S\subset \mathbb{I}^3$ be a bounded zero mean curvature graph over a simply connected Jordan domain $\Omega \subset \mathbb{C}$. If $\partial{S}$ has an isotropic line segment $L$ on a boundary point $z_0\in \partial{\Omega}$ connecting two horizontal curves on $\partial{S}$ whose projections to the $xy$-plane form a regular (simple) analytic arc $\Gamma$ near $z_0$, then the following properties hold. \begin{itemize} \item[(i)] $S$ can be extended real analytically across $L$ via the reflection with respect to $\Gamma$ in the $xy$-direction and the reflection with respect to the height of the midpoint of $L$ in the $t$-direction.
\item[(ii)] $L$ coincides with the cluster point set $C(X, w_0)$ of a conformal parametrization $X=(h,t)\colon \mathbb{H}\to S$ at a point $w_0$ in $\partial{\mathbb{H}}$ satisfying $h(w_0)=z_0$. \end{itemize} \end{theorem} \noindent Here, $C(X,w_0)$ consists of the points $z$ such that $z=\lim_{w_n\to w_0} X(w_n)$ for some sequence $w_n \in \mathbb{H}$. \begin{proof} Let us consider a conformal parametrization $X=(h,t)\colon \mathbb{H}\to S$, where $h$ is a holomorphic function defined on $\mathbb{H}$ satisfying $h(\mathbb{H})=\Omega$. Without loss of generality, we may assume that $h(0)=z_0$. Since $t$ is a bounded harmonic function, it can be written as the Poisson integral \[ t(\xi +i\eta)=P_{\hat{t}}(\xi +i\eta)=\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\eta}{(\xi-s)^2+\eta^2}\hat{t}(s)ds \] of some bounded measurable function $\hat{t}$ such that $\hat{t}(x)=\lim_{y\to 0} t(x+iy)$ for almost every $x\in \mathbb{R}$ (see \cite[Chapter 3]{Katz}, and see \cite[Chapter 4, Section 6.4]{Ahl1} for the Poisson integral on $\mathbb{H}$). By the assumption, $\hat{t}$ has a discontinuous jump point at $w=0$, and we may assume that $\hat{t}\equiv a$ on $(-\varepsilon, 0)$ and $\hat{t} \equiv b$ on $ (0,\varepsilon)$ for some $\varepsilon>0$. By Proposition \ref{prop:DiscontiRef}, $t\circ \Pi \colon D^+\to \mathbb{R}$ extends to $D$ real analytically so that \begin{equation}\label{eq:t} t\circ \Pi(-r,\pi-\theta)+t\circ\Pi(r,\theta)=a+b,\quad (r, \theta) \in D^+. \end{equation} Next, we consider the function $h$. By the Carath\'eodory theorem (see \cite[Theorem 17.16]{Mil1}), $h$ can be extended to a homeomorphism $h\colon \overline{\mathbb{H}} \to \overline{\Omega}$. The assumption implies that $h((-\varepsilon, \varepsilon))$ is a regular analytic curve for some $\varepsilon >0$, and hence $h$ can be extended across $(-\varepsilon,\varepsilon)$ so that $h(\overline{w})=R_{\Gamma} \circ h(w)$ by Lemma \ref{lem:ref_in_analytic_arc}. Therefore $h\circ \Pi \colon D^+\to \Omega$ extends across $\{(0,\theta)\mid 0<\theta<\pi\}$ real analytically and satisfies \begin{equation}\label{eq:h} h\circ \Pi (-r,\pi-\theta) =h(\overline{re^{i\theta}}) =\left( R_{\Gamma} \circ h\circ \Pi \right) (r,\theta). \end{equation} By the equations \eqref{eq:t} and \eqref{eq:h}, $X\circ \Pi$ can be extended across $\{(0,\theta)\mid 0<\theta<\pi\}$ and satisfies \[ X\circ \Pi (-r,\pi-\theta)=\left((R_{\Gamma} \circ h\circ \Pi) (r,\theta), a+b-t\circ\Pi(r,\theta)\right), \] which implies the desired reflection across $L$. Finally, by (ii) of Proposition \ref{prop:DiscontiRef}, we obtain the relation \[ X\circ \Pi (0, \theta)=\left(h(0), t\circ \Pi (0, \theta)\right) = \left(z_0, a\cfrac{\theta}{\pi}+b\left(1-\cfrac{\theta}{\pi}\right) \right). \] This means that the cluster point set $C(X,0)$ of $X$ at $w_0=0$ coincides with $L$. \end{proof} \begin{remark}\label{rem:cursterline} As in (ii) of Theorem \ref{thm:reflection}, special kinds of boundary lines on zero mean curvature surfaces in several ambient spaces appear as cluster point sets of conformal mappings. For example, points of minimal graphs in $\mathbb{E}^3$ of the form $t=\varphi(x,y)$ at which the function $\varphi$ diverges to $\pm \infty$, and lightlike line segments on the boundary of maximal surfaces in $\mathbb{L}^3$, can also be written as cluster point sets of conformal mappings; see \cite{AF1} for more details. In particular, a reflection principle for lightlike line segments was proved in \cite{AF2}.
\end{remark} As a special case of Theorem \ref{thm:reflection}, we can consider the following more specific boundary conditions. \begin{corollary}\label{cor:hline} Under the same assumptions as in Theorem \ref{thm:reflection}, suppose the isotropic line segment $L$ connects two parallel horizontal straight line segments $l_i$ $(i=1, 2)$ on $\partial{S}$. Then the surface $S$ can be extended real analytically across $L$ via the symmetry with respect to the line parallel to $l_i$ $(i=1, 2)$ passing through the midpoint of $L$. \end{corollary} \begin{proof} By the assumption, $h((-\varepsilon, \varepsilon))$ in the proof of Theorem \ref{thm:reflection} is a line segment, and we may assume that this line segment is on the $x$-axis. Then $h\circ \Pi (-r,\pi-\theta) =\overline{h(re^{i\theta})}$ holds by \eqref{eq:h}, and hence $X=(h,t)$ satisfies \begin{align*} X\circ \Pi(-r, \pi-\theta) &=\left( \overline{h(re^{i\theta})}, a+b - t\circ\Pi(r,\theta) \right), \end{align*} which implies that the extension $X\circ \Pi$ is invariant under the symmetry with respect to the line parallel to the $x$-axis passing through the midpoint of $L$. \end{proof} For a zero mean curvature surface $S$ parametrized as \eqref{eq:w-formula} with Weierstrass data $(F,G)$, the surface $X^*$ with Weierstrass data $(iF,G)$ is called the {\it conjugate surface} of $S$; see \cite[p.~424]{Strubecker}, \cite[p.~238]{Sachs} and \cite{Sato}. Let $X=(h,t)\colon \mathbb{H} \to \mathbb{I}^3 \simeq \mathbb{C}\times \mathbb{R}$ be a conformal parametrization of $S$. Then $X^*:=-(h^*,t^*)$ is a conformal parametrization of $S^*$, formed by the conjugate harmonic functions. At the end of this section, we mention that isotropic lines of $S$ correspond to points of $S^*$ at which $t^*$ diverges to $\pm \infty$, as follows. \begin{corollary}\label{cor:conjugate} Under the same assumption as in Theorem \ref{thm:reflection}, \begin{equation}\label{eq:conjugate_behavior} \displaystyle \lim_{w\to w_0}\lvert t^*(w)\rvert=\infty. \end{equation} \end{corollary} \noindent Here, we remark that since $h^*=ih$, the $(x,y)$ coordinates of $S^*$ are essentially only a $90^\circ$ rotation of those of $S$. \begin{proof} We use the formulation of the proof of Theorem \ref{thm:reflection}. We can easily see that $t$ is written as \begin{equation}\label{eq:t2} t(w)= a+b+\frac{a-b}{\pi}\arg{w}-\frac{a}{\pi}\arg{(-\varepsilon -w)} +\frac{b}{\pi}\arg{(\varepsilon+w)}+P_W, \end{equation} where $P_W$ is the Poisson integral of $W:=(1-\chi_{(-\varepsilon, \varepsilon)})\hat{t}$ and $\chi_{(-\varepsilon, \varepsilon)}$ is the characteristic function of $(-\varepsilon, \varepsilon)$. By \eqref{eq:t2} and the fact that the harmonic conjugate of $\arg{w}$ is $-\log{|w|}$, we obtain \[ t^*(w) = -\frac{a-b}{\pi}\log{\lvert w \rvert} + \frac{a}{\pi} \log{\lvert-\varepsilon -w\rvert} - \frac{b}{\pi}\log{\lvert \varepsilon+w\rvert} +P^*_W +c \] for some constant $c$. Since $P_W\lvert_{(-\varepsilon, \varepsilon)} =0$, it follows that $\lim_{r\to 0}P^*_W(re^{i\theta})$ is a constant, and hence $\displaystyle \lim_{r\to 0}\lvert t^*(re^{i\theta})\rvert=\infty$. Therefore, we obtain the desired result. \end{proof} \section{Examples} By Theorem \ref{thm:reflection}, we can construct zero mean curvature surfaces in $\mathbb{I}^3$ with isotropic line segments.
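Before passing to the examples, we remark that surfaces given by \eqref{eq:w-formula} are easy to generate numerically. The following sketch (ours, assuming NumPy) evaluates the primitives in closed form for the Weierstrass data $(F,G)=\left(1,\frac{1}{2\pi i w}\right)$ of the helicoid treated in the first example below; the real parts reproduce $(r\cos\theta,\, r\sin\theta,\, \theta/\pi)$.
\begin{verbatim}
import numpy as np

# X(w) = Re \int^w (1, -i, 2G) F dzeta with (F, G) = (1, 1/(2 pi i w)):
# the integrand is (1, -i, 1/(pi i w)), whose primitives are taken below.
def X(w):
    h = w                            # primitive of 1
    g = -1j * w                      # primitive of -i
    t = np.log(w) / (np.pi * 1j)     # primitive of 1/(pi i w)
    return np.real(h), np.real(g), np.real(t)

r, theta = np.meshgrid(np.linspace(0.1, 2.0, 40),
                       np.linspace(0.01, np.pi - 0.01, 40))
x, y, t = X(r * np.exp(1j * theta))  # a grid on the helicoid
\end{verbatim}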
\begin{example}[Isotropic helicoid and catenoid] If we take the Weierstrass data $(F,G)=\left(1,\frac{1}{2\pi i w}\right)$ defined on $\mathbb{H}$, then by using \eqref{eq:w-formula} we have the helicoid \[ X(re^{i\theta}) = \left( r\cos{\theta},r\sin{\theta}, \frac{\theta}{\pi} \right),\quad w=re^{i\theta} \in \mathbb{H}. \] Using the notation of Section \ref{subsec:vline} and by Corollary \ref{cor:hline}, $X$ can be extended to $X\circ \Pi (r, \theta) =X(re^{i\theta})$ defined on $\mathbb{R}\times (0, \pi)$ across the isotropic line segment $L=\{ X\circ \Pi(0, \theta)\in \mathbb{I}^3\mid \theta \in [0,\pi]\}$. Moreover, by Corollary \ref{cor:reflection_nonvertical}, we can extend $X\circ \Pi$ across the horizontal lines on the boundary. Repeating this reflection, we obtain the entire singly periodic helicoid in $\mathbb{I}^3$. See Figure \ref{Fig:Helicoids}. \begin{figure}[htbp] \vspace{3.5cm} \hspace*{-8ex} \begin{tabular}{cc} \hspace*{-3ex} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.2cm} \hspace*{-5ex} \includegraphics[keepaspectratio, scale=0.36]{Fig3_1.eps} \end{minipage} \hspace*{-19ex} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.5cm} \includegraphics[keepaspectratio, scale=0.36]{Fig3_2.eps} \end{minipage} \hspace*{-9ex} \begin{minipage}[t]{0.50\hsize} \centering \vspace{-3.5cm} \includegraphics[keepaspectratio, scale=0.36]{Fig3_3.eps} \end{minipage} \end{tabular} \vspace{-1.5cm} \caption{The left one is a part of the helicoid in $\mathbb{I}^3$ with an isotropic line segment $L$, the center one is its reflection across $L$, and the right one is the surface after a further reflection across a horizontal line. For the notations of $\gamma_1$ and $\gamma_2$, see Theorem \ref{thm:reflection_Intro}.} \label{Fig:Helicoids} \end{figure} The conjugate surface of $X$ is written as \[ X^*(re^{i\theta}) = \left( r\sin{\theta}, -r\cos{\theta},\frac{1}{\pi}\log{r} \right),\quad w=re^{i\theta} \in \mathbb{H}, \] which is a half-piece of the rotational zero mean curvature surface called the {\it isotropic catenoid}. By Corollary \ref{cor:conjugate}, $L$ on $X$ corresponds to the limit $\lim_{r\to 0}X^*(re^{i\theta})=(0,0, -\infty)$. Moreover, since the horizontal straight lines of $X$ correspond to curves on $X^*(\partial{\mathbb{H}})$ in the $yt$-plane, we can extend $X^*$ via the reflection with respect to this vertical plane. See Figure \ref{Fig:iCat}. \begin{figure}[htbp] \vspace{3.6cm} \hspace{-8ex} \begin{tabular}{cc} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.5cm} \includegraphics[keepaspectratio, scale=0.5]{Fig4_1.eps} \end{minipage} \hspace{-7ex} \begin{minipage}[t]{0.55\hsize} \centering \vspace{-3.7cm} \includegraphics[keepaspectratio, scale=0.6]{Fig4_2.eps} \end{minipage} \end{tabular} \caption{The isotropic catenoid (left) and its analytic extension (right).} \label{Fig:iCat} \end{figure} \end{example} \begin{example}[Isotropic Schwarz D-type surface] \label{ex:Schwarz} For an integer $n \geq 2$, it is known that the Schwarz-Christoffel mapping $f\colon \mathbb{D} \to \mathbb{C}$ defined by \begin{equation*} f(w)=\int^w_0 \frac{d\zeta}{(1-\zeta^{2n})^{\frac{1}{n} } } \end{equation*} maps the unit disk $\mathbb{D}$ conformally onto a regular $2n$-gon $\Omega$ (see \cite[Chapter 6, Section 2.2]{Ahl1}). The mapping $f$ extends homeomorphically to $f\colon \overline{\mathbb{D}} \to \overline{\Omega}$ by the Carath\'eodory theorem, and $w=e^{k \pi i/n}\ (k=1,2,\ldots,2n)$ correspond to the vertices of $\Omega$.
On the other hand, by using the equations \begin{equation*} f(e^{\pi i/n}w)=e^{\pi i/n}f(w), \quad f(\overline{w})=\overline{f(w)}, \end{equation*} we can see that the boundary points \begin{equation*} w_k := e^{ \frac{k \pi i}{n} - \frac{\pi i}{2n} }\ \ (k=1,2,\ldots,2n) \end{equation*} correspond to the midpoints of the edges of $\Omega$. Let $I_k$ be the shortest arc of $\partial \mathbb{D}$ joining $w_k$ and $w_{k+1}$ ($k=1,2,\ldots,2n$), where $w_{2n+1}:=w_1$, and let \begin{eqnarray*} \hat{t}(w):=\left\{ \begin{array}{ll} 1 & \ \ (w \in I_{2k-1},\ k=1,2,\ldots ,n )\\[1ex] 0 & \ \ (w\in I_{2k},\ k=1,2,\ldots , n ) \end{array} \right. . \end{eqnarray*} The Poisson integral of $\hat{t}$ can be easily computed, and we have \begin{eqnarray*} t(w)&=&\frac{1}{2\pi}\int_0^{2\pi}\frac{1-\lvert w \rvert^2}{\lvert e^{is}-w \rvert^2}\ \hat{t}(e^{is})ds \\[1ex] &=&\sum_{k=1}^n \frac{1}{\pi} \arg \left( \frac{w_{2k}-w}{w_{2k-1}-w} \right) - \frac{1}{2}. \end{eqnarray*} Then $X:=(f,t)\colon \mathbb{D} \to \mathbb{I}^3$ is a zero mean curvature surface with $2n$ isotropic lines on its boundary, and by construction, each of the isotropic lines connects two parallel horizontal line segments on the boundary. Therefore, Corollary \ref{cor:hline} is applicable, and $S:=X(\mathbb{D})$ extends real analytically across each isotropic line $L$ via the $180^\circ$ rotation with respect to the straight line parallel to the edge of $\Omega$ passing through the midpoint of $L$ (see Figure \ref{Fig:polygon1}). \begin{figure}[htbp] \vspace{38ex} \begin{tabular}{cc} \hspace{-5ex} \begin{minipage}[t]{0.4\hsize} \centering \vspace{-40ex} \includegraphics[keepaspectratio, scale=0.40]{Fig5_1.eps} \end{minipage} & \hspace{-5ex} \begin{minipage}[t]{0.6\hsize} \vspace{-40ex} \centering \includegraphics[keepaspectratio, scale=0.60]{Fig5_2.eps} \end{minipage} \end{tabular} \vspace*{-1.8cm} \caption{Case $n=3$ in Example \ref{ex:Schwarz}: a zero mean curvature surface whose boundary consists of horizontal lines and isotropic lines (left) and its extension across an isotropic line (right).}\label{Fig:polygon1} \end{figure} \noindent In particular, if $n=2$ (i.e., $\Omega$ is a square), we can obtain a triply periodic zero mean curvature surface in $\mathbb{I}^3$, analogous to the Schwarz D minimal surface in $\mathbb{E}^3$ (cf.~\cite{DHS}), with isotropic lines by iterating reflections of $S$ (see Figure \ref{Fig:polygon2}). \begin{figure}[htbp] \vspace{26ex} \begin{tabular}{cc} \hspace{-6em} \begin{minipage}[t]{0.4\hsize} \centering \vspace{-23ex} \includegraphics[keepaspectratio, scale=0.36]{Fig6_1.eps} \end{minipage} & \begin{minipage}[t]{0.6\hsize} \vspace{-31ex} \centering \hspace{-19ex} \includegraphics[keepaspectratio, scale=0.38]{Fig6_2.eps} \end{minipage} \begin{minipage}[t]{0.6\hsize} \vspace{-30ex} \centering \hspace{-39ex} \includegraphics[keepaspectratio, scale=0.5]{Fig6_3.eps} \end{minipage} \end{tabular} \vspace*{-1.2cm} \caption{Case $n=2$ in Example \ref{ex:Schwarz}: construction of a triply periodic zero mean curvature surface.}\label{Fig:polygon2} \end{figure} \end{example} \begin{acknowledgement} The authors would like to express their gratitude to the referees for their careful reading of the submitted version of the manuscript and for fruitful comments and suggestions. \end{acknowledgement}
\section{Introduction} Recently there has been a big resurgence of interest in a special class of solutions of the Einstein field equations representing tunnel-like structures connecting spatially separated regions, or even different Universes, nowadays called wormholes. \red{These fascinating objects are not only important for popular culture, but also gain a lot of scientific attention, as their properties allow them to be black hole mimickers.} From the historical point of view, the first description of such objects begins with \cite{fla16}, devoted to studies of the spatial part of the Schwarzschild solution. The prototype of the wormhole emerged from studies devoted to a particle model, where a mathematical construction which tried to eliminate coordinate or curvature singularities, dubbed the Einstein-Rosen bridge, was proposed in \cite{ein35}. Later on, the Kruskal-Szekeres coordinates were implemented for the description of the Schwarzschild wormhole \cite{whe55}, while the Euclidean form of the wormhole solution was obtained in \cite{haw88}. One should remark that all these concepts were postulated at the quantum scale. The current understanding of wormholes was revealed in \cite{mor88}, where the conditions for traversability of Lorentzian wormholes were defined by the survivability of human travellers. \red{This redefinition was not only of great importance to physics, but also to futurology, and it is still seen as a main way for humans to travel over large distances in space.} On the other hand, models of a wormhole possessing no event horizon and no physical singularities were elaborated in \cite{ell73}-\cite{ell79}. In order to obtain such wormhole solutions one should invoke a phantom field (exotic matter), whose energy-momentum tensor violates the null, weak and strong energy conditions, and whose kinetic energy term has a reversed sign. However, traversability also requires stability of the wormhole solution, apart from small accelerations and tidal forces. To achieve this goal we may consider generalized Einstein gravity theories, like the Gauss-Bonnet-dilaton theory. Moreover, in this theory wormholes can be built without the use of such an exotic kind of matter \cite{kan11}-\cite{har13}. On the other hand, a method of constructing traversable wormholes by applying duality rotations and complex transformations was proposed in \cite{gib16,gib17}. By assuming that the dilaton field constitutes a phantom one, an electrically charged traversable wormhole solution in Einstein-Maxwell-phantom dilaton gravity has been revealed \cite{gou18}. Soon after, attention was paid to rotating wormhole solutions \cite{teo98}-\cite{bro13}. Perturbative and numerical attempts to construct spinning generalizations of static wormhole solutions were also conceived \cite{kas08}-\cite{che16}. It was claimed that rotating wormholes would more likely be stable \cite{mat06} and therefore traversable. Another interesting problem in wormhole physics is their classification. Having in mind the classification delivered by the black hole uniqueness theorem, the first work in this direction was provided in \cite{rub89}, delivering the uniqueness theorem for wormhole spaces with vanishing Ricci scalar. Further, the uniqueness of the Ellis-Bronnikov wormhole with a phantom field was found in \cite{yaz17}, while the uniqueness for the four-dimensional case of the Einstein-Maxwell-dilaton wormholes with the dilaton coupling constant equal to one was presented in \cite{laz17}.
The case of higher-dimensional generalizations of wormhole solutions, relevant from the point of view of unification theories like string/M-theory, also attracts attention. The uniqueness theorem for the higher-dimensional case of static spherically symmetric phantom wormholes was treated in \cite{rog18}, while the case of static spherically symmetric traversable wormholes with two asymptotically flat ends, subject to the higher-dimensional solutions of Einstein-Maxwell-phantom dilaton field equations with an arbitrary dilaton coupling constant, was elaborated in \cite{rog18a}. Various other aspects of the physics of these objects have been under intensive study (for a detailed review of the blossoming subject the reader may consult \cite{worm}). Wormholes, being a fascinating subject because of their possible impact on space and time travel, may also be regarded as potential astrophysical objects that can be observationally searched for. From the astrophysical point of view, it is persuasive to consider rotating wormholes. The problem that arises is how to observationally distinguish rotating wormholes from stationary axisymmetric black holes of Kerr type. Remarkable attention was paid to the aforementioned problem after the Event Horizon Telescope observed the black hole shadow in the center of the galaxy M87.\\ The first studies of the extent to which wormholes can imitate the observational characteristics of black holes were conducted in \cite{dam07}, where a simple generalization of the Schwarzschild-like line element was revealed. The considered metric differs from the static general relativity one by the introduction of a dimensionless parameter ${\lambda}$. The value of the parameter equal to zero corresponds to the ordinary Schwarzschild black hole solution. Of course one should be aware that for non-zero values of the parameter the presented line element is no longer a static solution of the Einstein equations and changes the structure of the manifold. Therefore matter with almost vanishing energy density is required to maintain the aforementioned gravitational configuration (for a discussion of the influence of the parameter ${\lambda}$ on the static manifold structure see, e.g., \cite{bue18}). A further generalization of the idea given in \cite{dam07}, describing a Kerr-like wormhole spacetime as a toy model, was achieved by applying a modification to the Kerr metric similar to the procedure performed in \cite{dam07}. The embedding diagrams, geodesic structure, as well as shadow characteristics of the obtained Kerr-like wormhole were given in \cite{ami19}. On the other hand, throat-like effects on the shadow of Kerr-like wormholes were elaborated in \cite{kas21}. Moreover, the problem of the structure at the horizon scale of a black hole, which gives rise to echoes of the gravitational wave signal bound up with the postmerger ring-down phase in binary coalescences, has been elucidated for static and rotating toy models of traversable wormholes in \cite{bue18}. The other subject acquiring much attention in contemporary astrophysics and physics is the unrelenting search for {\it dark matter} sector particles. The nature of this elusive ingredient of our Universe is a mystery, and several models try to explain it and constitute possible guidance for future experiments. The main aim of our work is to investigate the behavior of clouds of axion-like particle {\it dark matter} around mimickers of rotating black holes, i.e., stationary axially symmetric wormholes.
The work provides some continuity with our previous studies \cite{kic21}, where we paid attention to the main features of axionic {\it dark matter} clouds in the vicinity of magnetized rotating black holes. The principal goal of the investigations is to find possible differences in the characteristic features of the axion-like condensate between those two classes of compact objects, i.e., rotating black holes and black hole mimickers. Our studies constitute a first glimpse at the problem in question. Namely, we restrict our considerations to the probe limit, in which one has a complete separation of the degrees of freedom, i.e., the matter fields do not backreact on the wormhole spacetime. The organization of the paper is as follows. In Sec. II we deliver the basic facts about the axion-like {\it dark matter} model. Sec. III is devoted to the description of the rotating wormhole models surrounded by {\it dark matter} clouds, in the considered model of axion-like {\it dark matter}. In Sec. IV we describe the numerical results of the studies, while in Sec. V we conclude our investigations and point out possible problems for future studies. \section{Model of axion-like {\it dark matter} sector} The explanation of astronomical and cosmological observations requires the existence of {\it dark matter}, whose nature is one of the most tantalizing questions confronting contemporary physics and cosmology. A large number of experimental searches, ongoing or planned, aim at its detection and at understanding the role of the {\it dark sector} in a fundamental description of the Universe. Axions are among the strongest candidates for the possible explanation of the existence of the {\it hidden sector} \cite{pre83}-\cite{din83}. Their existence has been postulated to explain the apparent lack of charge conjugation-parity violation in the strong interactions \cite{pec77}-\cite{wil78}, motivated by the absence of an observable electric dipole moment of the neutron \cite{pen15}. Axionlike particles are also widely spotted in the realm of string theories \cite{svr06}. In what follows, we shall study axionlike scalar particles coupled to the Maxwell $U(1)$-gauge field. The non-trivial coupling of the axion field to the Maxwell field strength invariant plays the crucial role in the model in question. The field equations of motion are provided by the variation procedure with respect to the action given by \begin{equation} \mathcal{S} = \int d^4 x \sqrt{-g} \left[R - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} - \frac{1}{2} \nabla_\mu \Psi \nabla^\mu \Psi - \frac{\mu^2}{2} \Psi^2 - \frac{k}{2} \Psi \ast F^{\mu \nu} F_{\mu \nu} \right], \end{equation} where we set $R$ for the Ricci scalar, $F_{\mu \nu} = 2 \nabla_{[\mu} A_{\nu]}$, while $\Psi$ stands for the scalar (axion) field with mass $\mu$. $\ast F^{\mu \nu} = \frac{1}{2} \epsilon^{\mu \nu \alpha \beta} F_{\alpha \beta}$ is the dual of the Maxwell field strength. The equation of motion for the scalar field $\Psi$, which constitutes a covariant Klein-Gordon equation with a source term given by the dual Maxwell field invariant, implies \begin{equation} \nabla_\mu \nabla^\mu \Psi - \mu^2 \Psi - \frac{k}{2} ~\ast F^{\mu \nu} F_{\mu \nu} = 0, \label{eq:field_eqn} \end{equation} while the $U(1)$-gauge field is subject to the following relation: \begin{equation} \nabla_\mu F^{\nu \mu} + 2 k~\ast F^{\nu \mu} \nabla_{\mu }\Psi = 0. \end{equation} We refer to the $\Psi$ field as axionlike, because axions (originating from QCD) have definite constraints on both the mass and the coupling parameter.
Here, however, we consider particles whose physics is given by an analogous Lagrangian, yet with arbitrary values of the physical parameters. For simplicity we refer to the studied axionlike particles simply as axions. The {\it dark matter} model in question was widely elaborated in studies of black hole superradiance and light polarization effects, possible experimental signals of the {\it dark sector} around these objects \cite{pla18}-\cite{car18}, \cite{kic21} and neutron stars \cite{gar18}-\cite{gra15}, as well as in studies of the influence of axionic {\it dark matter} on the physics of the early Universe and primordial black holes \cite{fed19}-\cite{ros18}. The form of the relation \eqref{eq:field_eqn} reveals that the presence of a non-zero source term, containing the dual invariant given by \begin{equation} \mathcal{I} = ~\ast F^{\mu \nu} F_{\mu \nu} \neq 0, \end{equation} is crucial. In the opposite case, when the invariant is equal to zero, the axion-like scalar field equation of motion reduces to the simple massive Klein-Gordon case, without any self-interaction potential. This means that no scalar hair configuration can emerge on the studied line element. Although it has been shown that in Kerr spacetime scalar hair may emerge in certain situations \cite{herd14}, here we pick a different ansatz (see below), as we focus on stationary configurations, which appear to be magnetically induced in this approach. On the other hand, it can be noticed that the discussed invariant, $\ast F_{\mu \nu} F^{\mu \nu}$, is equal to zero when $F_{\mu \nu} =0$, or for a spherically symmetric spacetime. However, it has a non-trivial form, $\ast F_{\mu \nu} F^{\mu \nu} \neq 0$, when both rotation and a magnetic $U(1)$-gauge field component are present in the spacetime under consideration. To introduce the magnetic field we use the method proposed by Wald \cite{wal74}, where the vector potential is sourced by the Killing vectors of the rotating spacetime. In general it has the form \begin{equation} A_\mu = \frac{1}{2}B (m_\mu + 2 a k_\mu), \end{equation} where $k_\mu$ and $m_\mu$ are the Killing vectors connected with time-translation invariance and rotation in $\phi$, respectively. As in \cite{kic21}, where we studied rotating magnetized black holes submerged in an axionic {\it dark matter} cloud, one can introduce a static magnetic field to the system, oriented along the rotation axis. This seems plausible from the astrophysical perspective and can be regarded as a starting point for studies of the influence of the magnetic field on the system in question. Because our investigations focus on a static magnetic field, parallel to the wormhole rotation axis, the gauge potential may be rewritten in the form $ A_\mu dx^\mu = B/2~ g_{\mu \nu} m^\nu dx^\mu.$ For our considerations we choose a static, time-independent ansatz. The symmetry of the problem enables us to write the axion field in the form provided by \begin{equation} \Psi = \psi(r, \theta), \label{eq:ansatz} \end{equation} which will be plugged into the equation \eqref{eq:field_eqn} for the considered line element. \section{Rotating wormhole metrics} The simplicity of the static line element describing a wormhole may suggest that a spinning generalization can be achieved analytically and ought to be globally regular. It happens, however, that finding a stationary solution with an extended source is far more complicated (see \cite{vol21} for recent aspects of this problem).
However, rotating wormhole solutions are widely discussed in the literature \cite{teo98}-\cite{che16}; one should be aware that they do not constitute exact solutions of the equations of motion but rather comprise model geometries. In this section, we shall study two kinds of rotating wormhole model metrics. The first one is an extension of the regular Kerr black hole metric \cite{bue18,ami19}. The other is the Teo class wormhole \cite{teo98}, a rotating generalization of the Morris-Thorne wormhole, which serves as a comparison to the somewhat more realistic Kerr-like wormhole. \subsection{Kerr-like wormhole} To begin with, we consider the metric of the Kerr-like rotating wormhole. It is constructed by a slight modification of the stationary axisymmetric line element with a parameter ${\lambda}$. For the first time, such a construction was proposed in \cite{dam07}, where the static Schwarzschild black hole was considered. Then, it was generalized to the case of the stationary axisymmetric line element \cite{bue18,ami19}. The Kerr-like wormhole line element yields \begin{eqnarray} ds^2 &=& - \left( 1 - \frac{2 M r}{\Sigma} \right)dt^2 - \frac{4 M ar \sin^2 \theta}{\Sigma} dt d\phi + \frac{\Sigma}{\tilde{\Delta}} dr^2 + \Sigma d\theta^2\\ \nonumber &+& \Big(r^2 + a^2 + \frac{2 M a^2 r \sin^2 \theta}{\Sigma} \Big) \sin^2 \theta d\phi^2, \end{eqnarray} where we set \begin{align} \Sigma(r, \theta) = r^2 + a^2 \cos^2 \theta, \\ \tilde{\Delta}(r) = r^2 + a^2 - 2M(1 + \lambda^2)r. \end{align} Below, $\Delta(r) = r^2 + a^2 - 2 M r$ denotes the standard Kerr function, so that $\tilde{\Delta} = \Delta$ for $\lambda = 0$. The parameters $M$ and $a M$ correspond to the mass and angular momentum of the wormhole. For a small deviation parameter ${\lambda}$, one obtains a line element almost indistinguishable from that of the Kerr black hole. These three parameters describe the system as seen from the outside. Moreover, its Arnowitt-Deser-Misner (ADM) mass, as seen by an observer at asymptotic spatial infinity, is given by $M_{ADM} = M (1 + {\lambda}^2)$. The largest root of $\tilde{\Delta}(r) = 0$ establishes the surface provided by \begin{equation} r_+ = M ( 1 + \lambda^2 ) + \sqrt{M^2 ( 1 + \lambda^2)^2 - a^2}. \end{equation} For the model in question it does not constitute the radius of an event horizon, but describes the radius of the throat of the rotating wormhole, which connects two asymptotically flat regions of the spacetime. This can be seen explicitly by adequate changes of variables \cite{bue18,ami19}. Points with $r<r_+$ do not exist. Consequently the axion field equation written in the Kerr-like wormhole spacetime implies the following: \begin{align} \tilde{\Delta} \partial_r^2 \psi + \frac{2(r - M)\tilde{\Delta} - M \lambda^2 (r^2 + a^2)}{\Delta} \partial_r \psi + \partial_{\theta}^2 \psi + \cot \theta \partial_{\theta} \psi - \mu^2 \Sigma \psi = \frac{k \Sigma}{2} \mathcal{I}_{KWH}, \label{eqn:kwh_axion} \end{align} where the electromagnetic field invariant is provided by \begin{align} \mathcal{I}_{KWH} = - \frac{a B^2 M \tilde{\Delta} \sin^2 \theta \cos \theta}{2 \Delta \Sigma^4} \big[ 3 a^6 + 2 a^4 M r - 5 a^4 r^2 - 8 a^2 M r^3 - 32 a^2 r^4 - 24 r^6 \nonumber \\ + 4 a^2 (a^4 - a^2 r^2 + 2(M - r)r^3 ) \cos 2\theta + a^4 (a^2 - 2 M r + r^2) \cos 4\theta \big]. \end{align}
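For illustration, the following small sketch (ours, assuming NumPy) implements $\tilde{\Delta}$ and the throat radius, and checks that $r_+$ is indeed the largest root of $\tilde{\Delta}(r)=0$; it is not part of the numerical scheme used later.
\begin{verbatim}
import numpy as np

def delta_tilde(r, M, a, lam):
    return r**2 + a**2 - 2.0 * M * (1.0 + lam**2) * r

def throat_radius(M, a, lam):
    # largest root of delta_tilde; requires M^2 (1 + lam^2)^2 >= a^2
    return M * (1.0 + lam**2) + np.sqrt(M**2 * (1.0 + lam**2)**2 - a**2)

M, a, lam = 1.0, 0.99, 0.5
r_plus = throat_radius(M, a, lam)
print(r_plus, delta_tilde(r_plus, M, a, lam))  # second number is ~0
print(M * (1.0 + lam**2))                      # the ADM mass M_ADM
\end{verbatim}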
The equation \eqref{eqn:kwh_axion} is invariant under the following scaling transformation: \begin{equation} r \rightarrow \eta r, \quad a \rightarrow \eta a, \quad M \rightarrow \eta M, \quad B \rightarrow B/\eta, \quad \mu^2 \rightarrow \mu^2 / \eta^2, \quad r_+ \rightarrow \eta r_+, \end{equation} \red{which allows us to fix one of the model parameters to unity. For this we pick $M = 1$.} \subsection{Teo rotating wormhole} The well-known Morris-Thorne metric, introduced in Ref. \cite{mor88}, describes a traversable wormhole spacetime, which is stabilized by exotic matter in the area of its throat. That solution was achieved by reverse engineering of general relativity; namely, the metric was postulated first, and with the help of the Einstein equations the suitable matter components were found. A generalization of the aforementioned solution, including rotation, was performed in \cite{teo98}. The resulting metric of the rotating wormhole has the following form: \begin{equation} ds^2 = -N^2 dt^2 + \frac{dr^2}{1 - \frac{b}{r}} + K^2 r^2 \left[ d \theta^2 + \sin^2 \theta (d \phi - \omega dt)^2 \right], \end{equation} where, as in the Morris-Thorne case, one has a lot of freedom in choosing the shape of the $N$, $b$, $K$ and $\omega$ functions, as long as they meet specific requirements. Firstly, all the functions can be functions of $r$ and $\theta$ and should be regular on the symmetry axis $\theta =0, \pi$. Secondly, $N$, the gravitational redshift function, ought to be finite and nonzero; $b$, the shape function determining the shape of the wormhole throat, should satisfy $b \leqslant r$; $K$ accounts for the radial distance with respect to the coordinate origin; and $\omega$ stands for the angular velocity of the wormhole. The embedding of constant $t$ and $\theta$ cross sections in three-dimensional Euclidean space reveals the well-recognizable form of the wormhole spacetime. The constructed geometry describes two regions, in which the radial coordinates are given by $r \in [r_+,~\infty)$, joined together at the wormhole throat $r=r_+$. The requirement of asymptotically flat regions provides that, at spatial infinity, the metric coefficients ought to satisfy the following expansions: \begin{equation} N = 1 - \frac{M}{r} + {\cal O} \Big(\frac{1}{r^2}\Big), \qquad K = 1 + {\cal O}\Big(\frac{1}{r}\Big), \qquad \frac{b}{r} = {\cal O}\Big(\frac{1}{r}\Big), \qquad \omega = \frac{2 J}{r^3} + {\cal O}\Big(\frac{1}{r^4}\Big), \label{eq:twh_asympt} \end{equation} where we have denoted by $M$ the mass of the wormhole and by $J$ its angular momentum. In general, one encounters a whole range of functions which fulfil the aforementioned conditions and constitute a regular rotating wormhole solution. For the numerical calculations, we pick a set of functions which appear to be quite popular in the literature of the subject and were previously used by different authors \cite{shaikh18, nedkova13, abdujabbarov16, harko09, bambi13}: \begin{equation} N = \exp\left[- \frac{r_+}{r} \right], \qquad b(r) = r_+ \left( \frac{r_+}{r} \right)^\gamma, \qquad \omega = \frac{2 a r_+}{r^3}, \qquad K=1, \label{eq:twh_metric_fun} \end{equation} where we use the symbol $r_+$ to denote the wormhole throat radius.
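The chosen functions are straightforward to implement; the sketch below (ours, assuming NumPy) encodes \eqref{eq:twh_metric_fun} and checks the leading asymptotics \eqref{eq:twh_asympt}, for which $M = r_+$.
\begin{verbatim}
import numpy as np

r_p = 1.0  # throat radius, later fixed to unity by a scaling argument

def N(r):
    return np.exp(-r_p / r)          # redshift function

def b(r, gamma):
    return r_p * (r_p / r)**gamma    # shape function, b(r_+) = r_+

def omega(r, a):
    return 2.0 * a * r_p / r**3      # angular velocity, omega ~ 2J/r^3

r = 1.0e6                            # near spatial infinity
print(N(r), 1.0 - r_p / r)           # N ~ 1 - M/r with M = r_+
print(omega(r, 0.9) * r**3 / 2.0)    # recovers J = a M
\end{verbatim}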
\red{The angular momentum parameter is defined in the standard way, $a = J/M$. Using the asymptotic relations \eqref{eq:twh_asympt} we find that for the picked set of functions \eqref{eq:twh_metric_fun} $M = r_+$.} Thus, the family of the above solutions is described by three parameters, i.e., the throat radius $r_+$, the angular momentum parameter $a$ and the shape parameter $\gamma$. After putting the ansatz \eqref{eq:ansatz} and the metric into the field equation \eqref{eq:field_eqn} we arrive at the equation of motion \begin{align} \left[ r^2 - r_+ r \left( \frac{r_+}{r} \right)^\gamma \right] \partial_r^2 \psi + \left[ 2r + r_+ + \left(\frac{r_+}{r} \right)^\gamma \left(\frac{1}{2}r_+ \gamma - \frac{r_+^2}{r} -\frac{3}{2} r_+ \right) \right] \partial_r \psi \nonumber \\ + \partial_{\theta}^2 \psi + \cot \theta \partial_{\theta} \psi - \mu^2 r^2 \psi = \frac{1}{2} k r^2 \mathcal{I}_{TWH}, \label{eqn:twh_axion} \end{align} whose radial part depends strongly on $\gamma$. The Maxwell field invariant related to the uniform magnetic field in this spacetime implies \begin{equation} \mathcal{I}_{TWH} = \frac{12 a B^2 r_+ \cos \theta \sin^2 \theta}{r^{5/2}} \sqrt{\frac{r - r_+ \left(\frac{r_+}{r} \right)^\gamma}{\exp \left[ -\frac{2 r_+}{r} \right]}}. \end{equation} The equation \eqref{eqn:twh_axion} obeys the scaling transformation \begin{equation} r \rightarrow \eta r, \quad r_+ \rightarrow \eta r_+, \quad a \rightarrow \eta a, \quad B \rightarrow B/ \eta, \quad \mu^2 \rightarrow \mu^2 / \eta^2. \end{equation} \red{Using this transformation we fix $r_+ = 1$.} \subsection{Free energy} As a benchmark for the thermodynamical preference of the obtained states we use the free energy, obtained by evaluating the on-shell action of the axion-dependent part of the theory, \begin{equation} \mathcal{S}_{axion} = \int d^4 x \sqrt{-g} \left[- \frac{1}{2} \nabla_\mu \Psi \nabla^\mu \Psi - \frac{\mu^2}{2} \Psi^2 - \frac{k}{2} \Psi \ast F^{\mu \nu} F_{\mu \nu} \right]. \label{eq:axion_only_action} \end{equation} By substituting the equations of motion into the action and imposing the ansatz for the field, we arrive at the formula for the free energy \begin{equation} F = - 2 \pi \int_\mathcal{M} dr d\theta ~\sqrt{-g} \bigg[ (\partial_r \psi)^2 g^{rr} + (\partial_\theta \psi)^2 g^{\theta \theta} + \mu^2 \psi^2 \bigg]. \label{eq_freeenergy} \end{equation} The straightforward integration of the equation \eqref{eq_freeenergy} appears to be problematic, because both considered backgrounds have a singular metric determinant at the throat, which makes simple integration from the throat to infinity impossible in these coordinates. It should be noted that this singularity is merely a coordinate singularity, as the curvature of both wormholes is regular and finite at the throat. In the case of the Kerr-like wormhole metric, we have \begin{equation} \sqrt{-g} = \sqrt{\frac{\Delta}{\tilde{\Delta}}} \Sigma \sin^2 \theta, \end{equation} where for ${\lambda}$ equal to zero we obtain $\Delta = {\tilde{\Delta}}$. This fact naturally eradicates the singularity problem in the black hole scenario. Here, however, as we radially fall toward the wormhole, the root of ${\tilde{\Delta}}$ comes first and creates the singularity. On the other hand, for the Teo rotating line element we get \begin{equation} \sqrt{-g} = \frac{\exp \left( - \frac{r_+}{r} \right) r^2 \sin \theta}{\sqrt{1 - \left( \frac{r_+}{r} \right)^{\gamma + 1}}}, \end{equation} with the denominator naturally generating the infinity. To deal with the integration in such spacetimes we use energy differences instead. We also introduce a cutoff in the lower integration bound, so that we start from $r_+ + \epsilon$ rather than simply from $r_+$. In this way we ensure the finiteness of the energy differences and give them a straightforward physical interpretation: with the change of a background parameter the solution becomes more or less thermodynamically stable with respect to some \textit{ground} solution.
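Schematically, the regularized comparison may be organized as in the following sketch (ours, assuming NumPy); here \texttt{integrand(r, theta)} stands for the full bracket of \eqref{eq_freeenergy} including $\sqrt{-g}$, supplied by the solver, and only differences of the returned values are physically meaningful.
\begin{verbatim}
import numpy as np

def free_energy(integrand, r_plus, eps=1.0e-3, r_max=50.0,
                n_r=400, n_th=200):
    # integrate from r_+ + eps (regularized throat) to a large radius,
    # and over the full range of theta
    r = np.linspace(r_plus + eps, r_max, n_r)
    th = np.linspace(0.0, np.pi, n_th)
    R, TH = np.meshgrid(r, th, indexing="ij")
    vals = integrand(R, TH)
    return -2.0 * np.pi * np.trapz(np.trapz(vals, th, axis=1), r)

# dF = free_energy(I_solution, r_plus) - free_energy(I_ground, r_plus)
\end{verbatim}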
In this way we ensure the finiteness of the energy differences and give them a straightforward physical interpretation: with the change of a background parameter, the solution becomes more or less thermodynamically stable with respect to some \textit{ground} solution.

\section{Results}

In this section we turn to the solutions of the equations of motion for the two toy models of rotating wormholes described above. Due to the complexity of the relations \eqref{eqn:kwh_axion} and \eqref{eqn:twh_axion}, we solve them numerically by virtue of spectral methods. Firstly, the adequate equation is discretized on a Gauss--Lobatto grid \cite{matlabnum} and then translated into a system of algebraic equations with spectral differentiation matrices. The method in question has been implemented in Python and tested for numerical stability. The technical details, especially the convergence tests of the numerical method, are described in the Appendix of \cite{kic21}, where we studied the problem of axionlike particle clouds in the spacetimes of rotating magnetized black holes.

The spectral nature of the numerical scheme requires remapping the coordinates onto $[-1, 1]$ intervals, which is achieved by the coordinate transformation
\begin{align}
z = 1 - \frac{2 r_+}{r}, \\
u = \frac{4 \theta}{\pi} - 1,
\end{align}
where $r_+$ is the wormhole throat radius. After such an operation, our numerical domain may be written in the form $[-1, 1]\times[-1, 1]$. For the $z$-coordinate, the boundaries are the wormhole throat ($z = -1$) and spatial infinity ($z=1$), while $u = -1$ corresponds to the \textit{north pole} of the wormhole and $u = 1$ to the \textit{equator}. Consequently, after the coordinate transformation in the underlying equations, one has to impose adequate boundary conditions. Namely, on the throat surface we demand that the axion field be regular, so $\partial_r \psi = 0$ ensures the desired behaviour of the field. \red{Alternatively, setting the field to a constant value, such as zero in a wormhole scenario, is also a possible choice. However, we wish to explore the Kerr-like solution for different values of the $\lambda$ parameter, including $\lambda = 0$, when it simplifies to the Kerr black hole. For consistency between these two kinds of solutions we therefore use the Neumann boundary condition.} At spatial infinity, we take a look at the asymptotic behaviour of the equation itself and of the source term $\mathcal{I}$. It appears that the Maxwell field invariants in both backgrounds are vanishing functions; as $r \rightarrow \infty$, we have
\begin{equation}
\mathcal{I}_{KWH} = \mathcal{O}\left(\frac{1}{r^4}\right),
\end{equation}
\begin{equation}
\mathcal{I}_{TWH} = \mathcal{O}\left( \frac{1}{r^2} \right).
\end{equation}
This means that both equations \eqref{eqn:kwh_axion} and \eqref{eqn:twh_axion} reach a simple asymptotic form; to leading order,
\begin{equation}
\partial^2_r \psi + \frac{2}{r} \partial_r \psi - \mu^2 \psi = 0.
\end{equation}
This equation has the solution
\begin{equation}
\psi = A \frac{\exp(\mu r)}{r} + B \frac{\exp(-\mu r)}{r},
\label{eq:psi_asympt}
\end{equation}
where $A$ and $B$ are constants (substituting $\psi = e^{\pm\mu r}/r$ shows directly that both terms solve the equation). Naturally, the field ought to decay for the sake of asymptotic flatness of the spacetime, so we choose $A = 0$, with arbitrary $B$. A minimal sketch of the resulting spectral setup is given below.
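For concreteness, the following Python sketch assembles the basic spectral ingredients described above: the Chebyshev--Gauss--Lobatto differentiation matrix (the classic construction of Trefethen's \texttt{cheb}) together with the $(z,u)$ coordinate maps. It is an illustrative reconstruction under our own naming, not the exact code used for the computations:

\begin{verbatim}
import numpy as np

def cheb(N):
    # Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the first-order
    # spectral differentiation matrix (N >= 1).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal fixed so that D @ const = 0
    return D, x

Nz, Nu, r_plus = 40, 40, 1.0
Dz, z = cheb(Nz)                  # z[0] = 1 (infinity), z[-1] = -1 (throat)
Du, u = cheb(Nu)                  # u = -1 pole, u = 1 equator

with np.errstate(divide='ignore'):
    r = 2.0 * r_plus / (1.0 - z)  # inverse map; r = inf at the z = 1 node
theta = np.pi * (u + 1.0) / 4.0

# Chain rule: d/dr = (dz/dr) d/dz with dz/dr = (1 - z)^2 / (2 r_+),
# and d/dtheta = (4/pi) d/du.
Dr = np.diag((1.0 - z)**2 / (2.0 * r_plus)) @ Dz
Dth = (4.0 / np.pi) * Du

# Boundary conditions replace the corresponding collocation rows of the
# discretized operator: Neumann rows (Dr at the throat, Dth at the pole)
# and Dirichlet rows (psi = 0 at infinity and on the equator).
\end{verbatim}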
Hence the boundary condition $\psi(r \rightarrow \infty) = 0$ is an adequate and mathematically motivated choice. On the other hand, the boundary conditions for the angular dependence are built on the basis of the spacetime symmetry. Both considered spacetimes are rotating, therefore we demand $\partial_{\theta} \psi = 0$ on the \textit{north pole}. On the \textit{equator}, the presence of the magnetic field combined with the spacetime symmetry implies that $\psi = 0$.

\subsection{Kerr-like wormhole}

To commence with, we solve the equation \eqref{eqn:kwh_axion} for the Kerr-like background metric. A portion of the obtained distributions is depicted in Fig. \ref{fig_kw_maps}. The consecutive panels correspond to an increasing mass of the axionic field: in panel (a) the field is ultralight, subsequently in (b) $\mu^2 = 0.01$, in (c) $\mu^2 = 0.1$ and finally in (d) $\mu^2 = 1$. In every panel we have $a = 0.99$ and $\lambda = 0.5$. We can clearly see how the mass of the axionlike field changes its angular distribution around the wormhole. For small masses the clouds are concentrated around the poles of the wormhole and spread in space for several throat radii. As we increase the axion mass, the polar regions of the wormhole become depleted and the field drifts towards the equator. The largest concentration is visible at the latitude $\theta \simeq \pi/4$. A second important effect is the influence of the field mass on the magnitude of the field. Inspection of the colorbars reveals that the larger the mass, the smaller the field. The spatial tail of the field is also much shorter when the mass of the field is larger. Intuitively, in the asymptotic solution \eqref{eq:psi_asympt} $\mu$ enters the suppressing exponential term: the field decays faster for larger masses, which means that massive fields are localized in the vicinity of the throat surface.

Another important feature, which stands out in relation to the black hole solutions, is the repulsion of the axion cloud from the wormhole throat surface. While in the case of the black hole the field had non-zero values on the surface of the event horizon and its radial profile was monotonically decreasing, here we have a completely different situation. For the wormhole, the field vanishes or at least has a significantly smaller value on the throat; it then grows with the radius, reaches a maximum and finally decreases. This effect is particularly visible for high values of the angular momentum.

\begin{figure}[h]
\centering
\subfloat[$\mu^2 \rightarrow 0^+, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m0_HD.pdf} } \qquad
\subfloat[$\mu^2 = 0.01, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m001_HD.pdf} }
\vspace{0.5cm}
\subfloat[$\mu^2 = 0.1, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m01_HD.pdf} } \qquad
\subfloat[$\mu^2 = 1, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m1_HD.pdf} }
\caption{Axion field distribution around Kerr-like wormholes for the given sets of parameters. The blank space in the vicinity of the wormhole throat distinguishes the solution from its black hole counterpart, where the field appears to be non-zero on the event horizon. Subsequent panels for each mass parameter show how the angular distribution of the field is affected.}
\label{fig_kw_maps}
\end{figure}

The radial behaviour of the axionic field can be seen more precisely in Fig. \ref{fig:kw_slices}.
We present there a slice of $\psi$ as a function of $r$, in throat radius units, for constant $\theta=\pi/4$ and a few different values of the $\lambda$ parameter. For comparison, we also plot the behaviour of the axions in the Kerr black hole metric (that is, $\lambda = 0$). What we can see is that increasing $\lambda$ consequently extinguishes the axionic hair. In the foreground, a structural change in the field profile is visible when we compare it to the black hole scenario. An axionic field over a black hole has its maximum value on the event horizon. The opposite is true for a wormhole: on the throat the field vanishes, then it grows to its maximum and fades away with the radius. Moreover, the bigger $\lambda$ is, the smaller are the maxima and the overall magnitude of the axionic hair.

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{psi_pi4_a99_m0.pdf}
\caption{A closer look at the axion cloud gap near the wormhole throat. Here we show slices of $\psi$ for constant $\theta = \pi/4$, with parameters $a = 0.99, \mu^2 = 0^+$. Increasing $\lambda$ decreases the magnitude of the axions and cuts off their tail.}
\label{fig:kw_slices}
\end{figure}

In the next step we investigate the free energy of the obtained axion cloud configurations. It is interesting to see how the parameters describing the spacetime geometry around the wormhole influence the thermodynamics of the axion clouds. Due to the previously mentioned difficulties in computing the free energy in these metrics, we rather talk about energy differences than about exact values. In Fig. \ref{fig:kw_fe} we present the differences of the free energy versus the angular momentum $a$, with respect to the $\lambda=0$ level, which constitutes a plain Kerr black hole. It is clearly visible that the larger the value of the distortion parameter ${\lambda}$, the higher the value of the free energy of the cloud. It turns out that the more the gravitational background deviates from the black hole metric, the less thermodynamically preferred the axion clouds are. This effect goes hand in hand with the diminishing magnitude of the field seen in the previously discussed Fig. \ref{fig:kw_slices}. Additionally, the increasing angular momentum of the wormhole also increases the free energy difference. This means that for the \textit{extreme} Kerr-like wormhole the axion hair is the least favourable.

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{fediff_a_lambs_m0.pdf}
\caption{Free energy differences as a function of the angular momentum (with $\lambda = 0$ as the ground curve) for different values of $\lambda$. With axion mass $\mu^2 = 0^+$, we see that the thermodynamical favourability of the cloud decreases with the growth of both the angular momentum and the $\lambda$ parameter.}
\label{fig:kw_fe}
\end{figure}

\subsection{Teo rotating wormhole}

The Teo class wormhole has a different set of parameters and does not simply transform into a black hole solution in the way the Kerr-like wormhole does. Here the throat radius is independent of the other parameters and is imposed manually in $g_{tt}$ and $g_{rr}$. With the particular choice of functions \eqref{eq:twh_metric_fun}, we can only steer the shape of the metric components via the $\gamma$ parameter. Therefore, let us consider values of $\gamma$ in the interval $(-1, 1]$, where for $\gamma = -1$ the function $g_{rr}$ is singular, so we can only approach this value. \red{As it was mentioned, the Kerr-like wormhole can be reduced to a black hole solution by setting $\lambda = 0$.
The Teo solution does not share this feature, but it is a well-known wormhole metric, just like its non-rotating counterpart, the archetypal Morris-Thorne wormhole. While it cannot serve as a testing field for the differences between axion clouds around wormholes and black holes, one can treat it as a benchmark for the behaviour of the axion hair in another wormhole environment. Using this background might help us to see whether the obtained axion solutions share similar features.}

In Fig. \ref{fig_teo_maps} one can see the distribution of the axionic cloud around a Teo wormhole for different axionlike field masses. In panel (a) we have an ultralight field, while $\mu^2$ takes the values $0.1$, $0.5$ and $1.0$ in panels (b), (c) and (d), respectively. In all panels we use $\gamma = -0.99$, which gives a tiny value of the axionic field (see the colorbars). The angular distribution has features similar to the Kerr-like metric. For ultralight axions the field is spread over the majority of the wormhole surroundings. As the mass increases, the hair tightens spatially and disappears from the polar regions. For the large-mass case the axionic clouds drift toward the equator, with the polar caps left almost empty. Moreover, the radial reach is very short - around one throat radius. One can clearly notice that for the negative gamma value an effect analogous to the Kerr-like scenario is observed. The axionic cloud is also pushed away from the throat surface - in its vicinity the field acquires small values, reaches its maximum and then descends monotonically to zero. However, in this gravitational background the effect is not as dramatic as in the case of the Kerr-like wormhole. The weakening of the axionic field in the vicinity of the throat is easily visible in the distributions, although it is not that large. Increasing $\gamma$ up to zero and beyond causes the $\psi$ field to rise: it acquires bigger values, but the qualitative spatial characteristics remain intact.

\begin{figure}[h]
\centering
\subfloat[$\mu^2 \rightarrow 0^+, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m0.pdf} } \qquad
\subfloat[$\mu^2 = 0.1, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m01.pdf} }
\vspace{0.5cm}
\subfloat[$\mu^2 = 0.5, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m05.pdf} } \qquad
\subfloat[$\mu^2 = 1, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m1.pdf} }
\caption{Axion cloud distribution around the magnetized Teo type wormhole. The negative value of $\gamma$ resembles the results for the Kerr-like wormhole with a large $\lambda$ value, i.e., a decreased field magnitude. Each panel depicts the field distribution for a different axionlike mass. Once again, a big mass results in a localized field, concentrated close to the throat, with a depleted polar region. For bigger values of $\gamma$ we observe a similar influence of $\mu$ on the spatial distribution.}
\label{fig_teo_maps}
\end{figure}

The next figure brings a closer look at the field drop near the throat. In Fig. \ref{fig_teo_slice} we present radial slices of the field distribution at $\theta = \pi/4$. In this particular figure we depict the behaviour of the ultralight field; however, similar tendencies are exhibited by more massive fields. First of all, we observe a significant amplification of the field with the growth of $\gamma$.
However, the field does not acquire new features; it seems that the curves follow some kind of scaling related to $\gamma$. Additionally, the profiles resemble the results obtained for the Kerr-like wormholes. As we have previously mentioned, the Teo class wormhole cannot be transformed into a black hole by a simple choice of a parameter value. However, features like the drop near the throat surface, followed by a maximum and a monotonic fall, show that these might be more general, wormhole-related behaviours of the axionic hair.

\begin{figure}[h]
\centering
\includegraphics[width=0.7 \textwidth]{teowh_psi_pi4_a99_m0.pdf}
\caption{Radial slices of the axion field for constant angle $\theta = \pi/4$. The growth of $\gamma$ increases the maximum value of the field. However, in this metric the significant growth of the blank space near the throat is not present. Also, the growth of $\gamma$ does not seem to greatly affect the tail of the field away from the throat, which differs from the Kerr-like wormhole results.}
\label{fig_teo_slice}
\end{figure}

If we consider the free energy, it appears that the axion clouds for the background with negative $\gamma$ are definitely less thermodynamically favourable. In Fig. \ref{fig_teo_fe} we plot the free energy difference as a function of the angular momentum for several values of the $\gamma$ parameter. We use the curve with $\gamma = 0$ as a baseline for calculating the energy differences. The free energy difference curves for $\gamma < 0$ are positive; in particular, the $\gamma = -0.99$ curve reaches relatively large values. Therefore, thermodynamically speaking, wormholes with $\gamma$ close to $-1$ have the least chance of holding axionic hair. With the growth of the parameter, the free energy of the cloud decreases, which makes the axions more thermodynamically favourable. However, this fall is rather moderate compared to the rise of the top curve. In both cases the increase of the angular momentum amplifies the tendencies of the curves: the curves for negative gamma grow, while those for positive gamma fall. A continued growth of the $\gamma$ parameter leads the hair to some limiting characteristics, which can be seen in Fig. \ref{fig_teo_slice} and Fig. \ref{fig_teo_fe}.

\begin{figure}[h]
\centering
\includegraphics[width=0.7 \textwidth]{fediff_a_gam_m0.pdf}
\caption{Free energy differences vs. angular momentum for different values of $\gamma$. The curve of $\gamma = 0$ is the reference level. We see that the free energy increases for negative values of gamma and slightly drops for positive ones as the angular momentum grows.}
\label{fig_teo_fe}
\end{figure}

Finally, let us conduct a qualitative comparison of the axionic clouds in the considered metrics. The solutions have undoubtedly similar features, especially when one takes a look at the $\psi$ slices. We also observe the separation of the cloud from the surface of the throat in both cases. This allows us to notice some general wormhole-related phenomena, which are not present around black holes \cite{kic21}. Naturally, we cannot speak in a fully general manner, as we have considered merely two distinct gravitational backgrounds, which should be treated as toy models.

\section{Conclusion}

In our paper, we have considered the problem of the distribution of axionlike particles, regarded as a {\it dark matter} sector model, around toy models of rotating wormholes. We have investigated the Kerr-like wormhole line element with the distortion parameter ${\lambda}$, and the Teo model of a rotating axisymmetric wormhole.
The models under inspection were characterized by the mass, the angular momentum, the distortion parameter (for the Kerr-like wormhole) and the shape parameter $\gamma$ (for the Teo model). We numerically solved the equations of motion for the underlying cases, using spectral methods.

Among other things, we have found that the axion clouds are pushed away from the wormhole throat, especially for large values of the rotation parameter $a$. The voids in the vicinity of the wormhole throat appear for larger values of the distortion parameter ${\lambda}$. This phenomenon distinguishes the studied system from the previously elaborated black holes \cite{kic21}. On the other hand, for larger ${\lambda}$ one obtains a higher value of the free energy, and therefore such a solution is less thermodynamically favoured. As far as the Teo class of rotating wormholes is concerned, for negative values of $\gamma$ an effect analogous to the Kerr-like case is obtained. However, for positive values, the behaviour of the axionic clouds resembles the features of the {\it dark matter} clouds around a Kerr black hole in a uniform magnetic field. The solution with negative $\gamma$ is not thermodynamically favourable, as revealed by the free energy studies; however, when $\gamma$ increases, the free energy of the axionic cloud decreases.

We have found that the behaviour of the axionic clouds significantly differs from the black hole scenario, which we discussed in our previous work \cite{kic21}. This fact may provide possible guidance, enabling one to distinguish between these two classes of compact objects. Nevertheless, the search for astronomically observable criteria requires a far more complex approach. A more realistic dynamical gravitational model is needed, in which the time dependence of the studied fields is taken into account; moreover, direct mathematical proofs of the stability of rotating wormhole spacetimes ought to be found. These subjects impose a real mathematical challenge and also require solid numerical relativity machinery. These problems shall be investigated elsewhere.
\red{This redefinition was not only of great importance to physics, but also to futurology and is still seen as a main way to travel at large distances in space by humans.} On the other hand, models of a wormhole, possessing no event horizon and physical singularities, were elaborated in \cite{ell73}-\cite{ell79}. In order to obtain such kind of wormhole solutions one should invoke phantom field (exotic matter), whose energy momentum tensor violates the null, weak and strong energy conditions, as well as, its kinetic energy term is of the reversed sign. However, traversability requires also stability of the wormhole solution, except small acceleration and tidal forces. To achieve this goal we may consider a generalized Einstein gravity theories, like Gauss-Bonnet-dilaton theory. Moreover in this theory wormholes can be built with no use of such exotic kind of matter \cite{kan11}-\cite{har13}. On the other hand, the method of constructing traversable wormholes by applying duality rotation and complex transformations was proposed \cite{gib16,gib17}. By assuming that the dilaton field constitutes a phantom one, an electrically charged traversable wormhole solution in Einstein-Maxwell-phantom dilaton gravity, has been revealed \cite{gou18}. Soon after the rotating wormhole solutions were paid attention to \cite{teo98}-\cite{bro13}. There were also conceived perturbative and numerical attempts to construct spinning generalization of static wormhole solutions \cite{kas08}-\cite{che16}. It was claimed that the rotating wormholes would be with a higher possibility stable \cite{mat06} and therefore traversable. The other interesting problem in wormhole physics is their classification. Having in mind classification delivered by the black hole uniqueness theorem, the first work in this direction was provided in \cite{rub89}, delivering the uniqueness theorem for wormhole spaces with vanishing Ricci scalar. Further, the uniqueness of Ellis-Bronikov wormhole with phantom field was found in \cite{yaz17}, while the uniqueness for four-dimensional case of the Einstein-Maxwell-dilaton wormholes with the dilaton coupling constant equal to one, was presented in \cite{laz17}. The case of higher-dimensional generalization of wormhole solution, valid from the point of view of the unification theories like string/M-theory attracts also attention. The uniqueness theorem for higher-dimensional case of the static spherically symmetric phantom wormholes was treated in \cite{rog18}, while the case of of static spherically symmetric traversable wormholes with two asymptotically flat ends, subject to the higher-dimensional solutions of Einstein-Maxwell-phantom dilaton field equations with an arbitrary dilaton coupling constant, was elaborated in \cite{rog18a}. Various other aspects of physics of these objects were under intensive studies (for a detailed review of the blossoming subject the reader may consult \cite{worm}). Wormholes being a fascinating subject of their possible impact on space and time travels, may also be regarded as potential astrophysical objects, that can be observationally search for. From the astrophysical point of view, it is persuasive to consider rotating wormholes. The problem that arises is how to observationally distinguish rotating wormholes from stationary axisymmetric black holes of Kerr-type. 
Remarkable attention to the aforementioned problem was paid to after the Even Horizon Telescope observed the black hole shadow in the center of the galaxy M87.\\ The first studies to what extent wormholes can imitate the observational characteristics of black holes were conducted in \cite{dam07}, where the simple generalization of Schwarzschild-like line element was revealed. The considered metric differs from the static general relativity one by introducing the dimensionless parameter ${\lambda}$. The value of the parameter equal to zero is responsible for the ordinary Schwarzschild black hole solution. Of course one should be aware that for non-zero values of the parameter the presented line element is no longer the static solution of Einstein equations and changes the structure of the manifold. Therefore the matter with almost vanishing energy density ought to be required to maintain the aforementioned gravitational configuration (for the discussion of the influence of the parameter ${\lambda}$ on the static manifold structure see, e.g., \cite{bue18}). Further generalization of the idea given in \cite{dam07} to describe Kerr-like wormhole spacetime as a toy model, was achieved by applying a modification on the Kerr metric similar to the procedure performed in \cite{dam07}. The embedding diagrams, geodesic structure, as well as, shadow characteristics of the obtained Kerr-like wormhole were given in \cite{ami19}. On the other hand, the throat-like effects on the shadow of Kerr-like wormholes were elaborated in \cite{kas21}. However, the problem of the structure at the horizon scale of black hole which gives rise to echoes of the gravitational wave signal bounded with the postmeger ring-down phase in binary coalescences, in the case of static and rotating toy models of traversable wormholes, has been elucidated in \cite{bue18}. The other subject acquiring much attention in contemporary astrophysics and physics is the unrelenting search for finding {\it dark matter} sector particles. The nature of this elusive ingredient of our Universe is a mystery and several models try to explain it and constitute the possible guidance for the future experiments. The main aim of our work will be to investigate the behavior of axion-like particle {\it dark matter} model clouds, around the mimickers of rotating black holes, stationary axially symmetric wormholes. The work will provide some continuity with our previous studies \cite{kic21}, where we have paid attention to the main features of axionic clouds {\it dark matter} in the vicinity of magnetized rotating black holes. The principal goal of the investigations will be to find the possible differences in characteristic features of the axion-like condensate, between those two classes of compact objects, i.e., rotating black holes and black hole mimickers. Our studies will constitute the first glimpse at the problem in question. Namely, we restrict our consideration to the probe limit case, when one has the complete separation of the degrees of freedom, i.e., matter fields do not backreact on wormhole spacetime. The organization of the paper is as follows. In Sec. II we deliver the basic facts about the axion-like {\it dark matter} model. Sec. III will be devoted to the description of the rotating wormholes models surrounded by {\it dark matter} clouds, in the considered model of axion-like {\it dark matter}. In Sec. IV we describe the numerical results of the studies, while in Sec. 
V we conclude our investigations and aim the possible problems for the future investigations. \section{Model of axion-like {\it dark matter} sector} The explanation of astronomical and cosmological observations require {\it dark matter} existence, whose nature is one of the most tantalizing questions confronting contemporary physics and cosmology. A large number of ongoing or planned experimental searches for its detection and understanding of the {\it dark sector} role in a fundamental description of the Universe. Axions are among the strongest candidates for the possible explanation of the existence of {\it hidden sector} \cite{pre83}-\cite{din83}. Their existence has been postulated to explain the apparent lack of violation of charge conjugate parity \cite{pec77}-\cite{wil78} and in the strong interaction motivated the absence of observable electric dipole moment of the neutron \cite{pen15}. Axionlike particles are also widely spotted in the realm of string theories \cite{svr06}. In what follows, we shall study axionlike scalar particles coupled to the Maxwell $U(1)$-gauge field. The non-trivial coupling of axion field to the Maxwell field strength invariant plays the crucial role in the model in question. The field equations of motion are provided by the variation procedure with respect to the action given by \begin{equation} \mathcal{S} = \int d^4 x \sqrt{-g} \left[R - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} - \frac{1}{2} \nabla_\mu \Psi \nabla^\mu \Psi - \frac{\mu^2}{2} \Psi^2 - \frac{k}{2} \Psi \ast F^{\mu \nu} F_{\mu \nu} \right], \end{equation} where we set $R$ for the Ricci scalar, $F_{\mu \nu} = 2 \nabla_{[\mu} A_{\nu]}$, while $\Psi$ stands for the scalar field (axion) with mass $\mu$. $\ast F^{\mu \nu} = 1/2 \epsilon_{\mu \nu \alpha \beta} F^{\alpha \beta}$ is the dual to Maxwell field strength. The equation of motion for the scalar field $\Psi$, which constitutes a covariant Klein-Gordon equation with a source term of the dual Maxwell field invariant, implies \begin{equation} \nabla_\mu \nabla^\mu \Psi - \mu^2 \Psi - \frac{k}{2} ~\ast F^{\mu \nu} F_{\mu \nu} = 0, \label{eq:field_eqn} \end{equation} while the $U(1)$-gauge field is subject to the relation as follows: \begin{equation} \nabla_\mu F^{\nu \mu} + 2 k~\ast F^{\nu \mu} \nabla_{\mu }\Psi = 0. \end{equation} We refer to the $\Psi$ field as axionlike, because the axions (originating from QCD) have adequate constrains on both mass and coupling parameter. Here however we consider particles with physics given by an analogical Lagrangian yet with arbitrary values of physical parameters. However for simplicity we might refer to the studied axionlike particles as simply axions. The {\it dark matter} model in question was widely elaborated in studies of black hole superradiance and light polarization effects, possible experimental signals of {\it dark sector} around these objects \cite{pla18}-\cite{car18}, \cite{kic21}, and neutron stars \cite{gar18}-\cite{gra15}, as well as, the influence of axionic {\it dark matter} on the physics on early Universe and primordial black holes \cite{fed19}-\cite{ros18}. The form of the relation (\ref{eq:field_eqn}) envisages the fact that the presence of the non-zero source term, containing the dual invariant, given by \begin{equation} \mathcal{I} = ~\ast F^{\mu \nu} F_{\mu \nu} \neq 0, \end{equation} is crucial. 
In the opposite case, when the invariant is equal to zero, the axion-like scalar field equation of motion reduces to the simple massive Klein-Gordon case, without any self-interaction potential. It means that no scalar hair configuration on the studied line element can emerge. Although it has been shown that in Kerr spacetime scalar hair may emerge in certain situations \cite{herd14}, here we pick a different ansatz (see below) as we focus on stationary configurations, which appear to be magnetically induced in this approach. On the other hand, it can be noticed that the discussed invariant, $\ast F_{\mu \nu} F^{\mu \nu}$, is equal to zero in the case when $F_{\mu \nu} =0$, or for spherically symmetric spacetime. However, it has a non-trivial form, $\ast F_{\mu \nu} F^{\mu \nu} \neq 0$, when both rotation and magnetic $U(1)$-gauge field components are present in the spacetime under consideration. To introduce the magnetic field we use the method proposed by Wald \cite{wal74}, where the vector potential is sourced by Killing vectors of the rotating spacetime. In general it has a form \begin{equation} A_\mu = \frac{1}{2}B (m_\mu + 2 a k_\mu), \end{equation} where $k_\mu$ and $m_\mu$ are the Killing vectors connected with temporal invariance and $\phi$ rotation respectively. As in \cite{kic21}, where we have studied rotating magnetized black holes submerged into axionic {\it dark matter} cloud, one can introduce a static magnetic field to the system, which will be oriented along the rotation axis. It seems to be plausible from the point of astrophysical perspective and can be regarded as a starting point for studies of the magnetic field influence of the system in question. Because of the fact that our investigations focus on static magnetic field, parallel to the wormhole rotation axis, the gauge potential may be rewritten in the form as $ A_\mu dx^\mu = B/2~ g_{\mu \nu} m^\nu dx^\mu.$ For our considerations we choose a static, time independent ansatz. The symmetry of the problem enables us to elaborate the axion field in the form provided by \begin{equation} \Psi = \psi(r, \theta), \label{eq:ansatz} \end{equation} which will be plugged into the equation \eqref{eq:field_eqn}, for the considered line element. \section{Rotating wormhole metrics} The simplicity of the static line element describing a wormhole may suggest that the spinning generalization can be achieved analytically and ought to be globally regular. But in vain, it happens that finding the stationary solution with an extended source is far more complicated (see for the recent aspects of this problem \cite{vol21}). However, the rotating wormhole solutions are widely discussed in literature \cite{teo98}-\cite{che16}, but one should be aware that they do not constitute the exact solutions of the equations of motion but rather comprise some model of geometries. In this section, we shall study two kinds of rotating wormhole model metrics. First one accounts for the extension of the regular black hole Kerr metric \cite{bue18,ami19}. The other is the Teo class wormhole \cite{teo98}, a rotating generalization of Morris-Thorne wormhole, which serves us as comparison to a bit more realistic Kerr-like wormhole. \subsection{Kerr-like wormhole} To begin with, we consider the metric of Kerr-like rotating wormhole. It is constructed by a slight modification of stationary axisymmetric line element with a parameter ${\lambda}$. 
For the first time, such construction was proposed in \cite{dam07}, where the static Schwarzschild black hole was considered. Then, it was generalized to the case of stationary axisymmetric line element \cite{bue18,ami19}. The Kerr-like wormhole line element yields \begin{eqnarray} ds^2 &=& - \left( 1 - \frac{2 M r}{\Sigma} \right)dt^2 - \frac{4 M ar \sin^2 \theta}{\Sigma} dt d\phi + \frac{\Sigma}{\tilde{\Delta}} dr^2 + \Sigma d\theta^2\\ \nonumber &+& \Big(r^2 + a^2 + \frac{2 M a^2 r \sin^2 \theta}{\Sigma} \Big) \sin^2 \theta d\phi^2, \end{eqnarray} where we set \begin{align} \Sigma(r, \theta) = r^2 + a^2 cos^2 \theta, \\ \tilde{\Delta}(r) = r^2 + a^2 - 2M(1 + \lambda^2)r. \end{align} The parameters $M$ and $a M$ correspond to mass and angular momentum of a wormhole. For a small deviation parameter ${\lambda}$, one achieves almost indistinguishable from of Kerr black hole line element. These three parameters describe the system as seen from the outside. Moreover its Arnowitt-Deser-Misner (ADM) mass, as seen by the observer at asymptotic spatial infinity, is given by $M_{ADM} = M (1 + {\lambda}^2)$. The largest root of $\tilde{\Delta}(r) = 0$, establishes the surface provided by \begin{equation} r_+ = M ( 1 + \lambda^2 ) + \sqrt{M^2 ( 1 + \lambda^2)^2 - a^2}. \end{equation} For the model in question it does not constitute a radius of the event horizon, but describes the radius of the throat of the rotating wormhole, which connects two asymptotically flat regions of the spacetime. It can be explicitly seen by the adequate changes of variables \cite{bue18,ami19}. The points with the condition $r<r_+$ do not exist. Consequently the axion field equation written in the Kerr-like wormhole spacetime implies the following: \begin{align} \tilde{\Delta} \partial_r^2 \psi + \frac{2(r - M)\tilde{\Delta} - M \lambda^2 (r^2 + a^2)}{\Delta} \partial_r \psi + \partial_{\theta}^2 \psi + \cot \theta \partial_{\theta} \psi - \mu^2 \Sigma \psi = \frac{k \Sigma}{2} \mathcal{I}_{KWH}, \label{eqn:kwh_axion} \end{align} where the electromagnetic field invariant is provided by \begin{align} \mathcal{I}_{KWH} = - \frac{a B^2 M \tilde{\Delta} \sin^2 \theta \cos \theta}{2 \Delta \Sigma^4} \big[ 3 a^6 + 2 a^4 M r - 5 a^4 r^2 - 8 a^2 M r^3 - 32 a^2 r^4 - 24 r^6 \nonumber \\ + 4 a^2 (a^4 - a^2 r^2 + 2(M - r)r^3 ) \cos 2\theta + a^4 (a^2 - 2 M r + r^2) \cos 4\theta \big]. \end{align} The equation \eqref{eqn:kwh_axion} undergoes a following scaling transformation \begin{equation} r \rightarrow \eta r, \quad a \rightarrow \eta a, \quad M \rightarrow \eta M, \quad B \rightarrow B/\eta, \quad \mu^2 \rightarrow \mu^2 / \eta^2, \quad r_+ \rightarrow \eta r_+, \end{equation} \red{which allows us to fix one of model parameters to unity. For this we pick $M = 1$.} \subsection{Teo rotating wormhole} The well-known Morris-Thorne metric, introduced in Ref. \cite{mor88}, describes a traversable wormhole spacetime, which is stabilised by exotic matter in the area of its throat. That solution was achieved by using reverse engineering of general relativity, namely the metric was postulated first and with a help of Einstein equations the suitable matter components were found. Generalization of the aforementioned solution, by including the rotation into the consideration, was performed in \cite{teo98}. 
The resulting metric of the rotating wormhole has a following form: \begin{equation} ds^2 = -N^2 dt^2 + \frac{dr^2}{1 - \frac{b}{r}} + K^2 r^2 \left[ d \theta^2 + \sin^2 \theta (d \phi - \omega dt)^2 \right], \end{equation} where, as in the Morris-Thorne case, one has a lot of freedom in choosing the shape of $N$, $b$, $K$ and $\omega$ functions, as long as they meet specific requirements. Firstly, all the functions can be functions of $r$ and $\theta$ and should be regular on the symmetry axis $\theta =0, \pi$. Secondly, $N$, the gravitational redshift function, ought to be finite and nonzero, $b$ as the shape function determining the shape of the wormhole throat, should satisfy $b \leqslant r$. $K$ accounts for the radial distance with respect to the coordinate origin and $\omega$ stands for the angular velocity of the wormhole. The embedding of constant $t$ and $\theta$-cross sections in the three-dimensional Euclidean space reveals the well-recognizable form of the wormhole spacetime. The constructed geometry describes two regions, where the radial coordinates are given by $r \in [r_+,~\infty)$, which are joined together at the wormhole throat $r=r_+$. At spatial infinity, the requirement of asymptotic flatness regions provides that the metric coefficients ought to satisfy the following expansions: \begin{equation} N = 1 - \frac{M}{r} + {\cal O} \Big(\frac{1}{r^2}\Big), \qquad K = 1 + {\cal O}\Big(\frac{1}{r}\Big), \qquad \frac{b}{r} = {\cal O}\Big(\frac{1}{r}\Big), \qquad \omega = \frac{2 J}{r^3} + {\cal O}\Big(\frac{1}{r^4}\Big), \label{eq:twh_asympt} \end{equation} where we have denoted by $M$ the mass of the wormhole and by $J$ its angular momentum. In general, one encounters the whole range of functions, which fulfil the aforementioned conditions and constitute a regular rotating wormhole solution. For the numerical calculations, we pick a set of functions which appear to be quite popular in the literature of the subject, and were previously used by different authors \cite{shaikh18, nedkova13, abdujabbarov16, harko09, bambi13} \begin{equation} N = \exp\left[- \frac{r_+}{r} \right], \qquad b(r) = r_+ \left( \frac{r_+}{r} \right)^\gamma, \qquad \omega = \frac{2 a r_+}{r^3}, \qquad K=1, \label{eq:twh_metric_fun} \end{equation} where we use the $r_+$ symbol, for denoting the wormhole throat radius. \red{The angular momentum parameter is defined in the standard way $a = J/M$. Using the asymptotic relations \eqref{eq:twh_asympt} we find that for the picked set of functions \eqref{eq:twh_metric_fun} $M = r_+$.} Thus, the family of the above solutions is described by three parameters, i.e., the throat radius $r_+$, angular momentum parameter $a$ and the shape parameter $\gamma$. After putting the ansatz \eqref{eq:ansatz} and the metric into the field equation \eqref{eq:axion_only_action} we arrive at the equation of motion \begin{align} \left[ r^2 - r_+ r \left( \frac{r_+}{r} \right)^\gamma \right] \partial_r^2 \psi + \left[ 2r + r_+ + \left(\frac{r_+}{r} \right)^\gamma \left(\frac{1}{2}r_+ \gamma - \frac{r_+^2}{r} -\frac{3}{2} r_+ \right) \right] \partial_r \psi \nonumber \\ + \partial_{\theta}^2 \psi + \cot \theta \partial_{\theta} \psi - \mu^2 r^2 \psi = \frac{1}{2} k r^2 \mathcal{I}_{TWH}, \label{eqn:twh_axion} \end{align} which radial part is strongly dependent on $\gamma$. 
The Maxwell field invariant related to uniform magnetic field in this spacetime implies \begin{equation} \mathcal{I}_{TWH} = \frac{12 a B^2 r_+ \cos \theta \sin^2 \theta}{r^{5/2}} \sqrt{\frac{r - r_+ \left(\frac{r_+}{r} \right)^\gamma}{\exp \left[ -\frac{2 r_+}{r} \right]}}. \end{equation} The equation \eqref{eqn:twh_axion} follows a scaling transformation \begin{equation} r \rightarrow \eta r, \quad r_+ \rightarrow \eta r_+, \quad a \rightarrow \eta a, \quad B \rightarrow B/ \eta, \quad \mu^2 \rightarrow \mu^2 / \eta^2. \end{equation} \red{Using this transformation we fix $r_+ = 1$.} \subsection{Free energy} As a benchmark for the thermodynamical preference of the obtained states we use free energy by evaluating the on-shell action of the axion dependent part of the theory \begin{equation} \mathcal{S}_{axion} = \int d^4 x \sqrt{-g} \left[- \frac{1}{2} \nabla_\mu \Psi \nabla^\mu \Psi - \frac{\mu^2}{2} \Psi^2 - \frac{k}{2} \Psi \ast F^{\mu \nu} F_{\mu \nu} \right]. \label{eq:axion_only_action} \end{equation} By substituting the equations of motion into the action and imposing the ansatz of the field we arrive to the formula for the free energy \begin{equation} F = - 2 \pi \int_\mathcal{M} dr d\theta ~\sqrt{-g} \bigg[ (\partial_r \psi)^2 g^{rr} + (\partial_\theta \psi)^2 g^{\theta \theta} + \mu^2 \psi^2 \bigg]. \label{eq_freeenergy} \end{equation} The straightforward integration of the equation \eqref{eq_freeenergy} appears to be problematic. It is because both considered backgrounds have singular metric determinant at the throat, which makes simple integration from throat to infinity impossible in these coordinates. It should be noted that this singularity is merely a coordinate singularity, as the curvature of both wormholes is regular and finite at the throat. In the case of Kerr-like wormhole metric, we have \begin{equation} \sqrt{-g} = \sqrt{\frac{\Delta}{\tilde{\Delta}}} \Sigma \sin^2 \theta, \end{equation} where for the case of ${\lambda}$ equal to zero we obtain that $\Delta = {\tilde{\Delta}}$. This fact naturally eradicates the singularity problem in the black hole scenario. Here, however, as we radially fall toward the wormhole, the root of ${\tilde{\Delta}}$ comes first and creates the singularity. On the other hand, for the Teo rotating line element we get \begin{equation} \sqrt{-g} = \frac{\exp \left( - \frac{r_+}{r} \right) r^2 \sin \theta}{\sqrt{1 - \left( \frac{r_+}{r} \right)^{\gamma + 1}}}, \end{equation} with the denominator naturally generating the infinity. To deal with the integration in such spacetimes we use energy differences instead. Also we introduce a cutoff to the lower integration bound, so we start from $r_+ + \epsilon$ rather than simply $r_+$. In this way we ensure the finiteness of energy differences and give them straightforward physical interpretation. With the change of the background parameter the solution becomes more or less thermodynamically stable with respect to some \textit{ground} solution. \section{Results} In this section we pay attention to the solutions of the equations of motion for the previously described two toy models of rotating wormholes. Due to the complications of the relations \eqref{eqn:kwh_axion} and \eqref{eqn:twh_axion}, we solve them numerically by virtue of spectral methods. Firstly the adequate equation is discretized on Gauss-Lobato grid \cite{matlabnum} and next translated into a system of algebraic equations with spectral differentiation matrices. 
The method in question has already been implemented in Python and tested for the numerical stability. The technical details, especially convergence tests of the numerical method are described in the Appendix of \cite{kic21}, where we studied the problem of axionlike particle clouds in the spacetimes of rotating magnetized black holes. The spectral nature of the numerical scheme requires remapping the coordinates onto the $[-1, 1]$ intervals. It can be achieved by the coordinate transformation provided by \begin{align} z = 1 - \frac{2 r_+}{r}, \\ u = \frac{4 \theta}{\pi} - 1, \end{align} where $r_+$ is the wormhole throat radius. After such operation, our numerical domain may be written in the form $[-1, 1]\times[-1, 1]$. For $z$-coordinate, the boundaries are the wormhole throat ($z = -1$) and spatial infinity ($z=1$), while for $u = -1$, one talks about \textit{north pole} of a wormhole and the \textit{equator} with $u = 1$. Consequently after the coordinate transformation in the underlying equations, one shall impose the adequate boundary conditions. Namely, on the throat surface we demand that the axion field should be regular, therefore $\partial_r \psi = 0$ provides a desirable conduct of the field. \red{Alternatively, setting the field to a constant value, such as zero in a wormhole scenario, is also a possible choice. However we wish to explore the Kerr-like solution for different values of $\lambda$ parameter, including its zeroing when it simplifies to the Kerr black hole. Given that for the consistency between these two kinds of solutions we use the Neumann boundary condition.} At the spatial infinity, we take a look on the asymptotic behaviour of the equation itself and the source term $\mathcal{I}$. It appears that the Maxwell field invariants in both backgrounds are vanishing functions. As $r \rightarrow \infty$, we have \begin{equation} I_{KWH} = \mathcal{O}\left(\frac{1}{r^4}\right), \end{equation} \begin{equation} I_{TWH} = \mathcal{O}\left( \frac{1}{r^2} \right). \end{equation} Which means that both equations \eqref{eqn:kwh_axion} and \eqref{eqn:twh_axion} reach a simple, asymptotic form, to the leading order \begin{equation} \partial^2_r \psi + \frac{2}{r} \partial_r \psi - \mu^2 \psi = 0. \end{equation} This simple equation has a solution \begin{equation} \psi = A \frac{\exp(\mu r)}{r} + B \frac{\exp(-\mu r)}{r}, \label{eq:psi_asympt} \end{equation} where $A$ and $B$ are constants. Naturally the field ought to decay for the sake of asymptotic flatness of the spacetime. Given that we are allowed to choose $A = 0$, with arbitrary $B$. This means that a boundary condition $\psi(r \rightarrow \infty) = 0$ is an adequate and mathematically motivated choice. On the other hand, the boundary conditions for the angular dependency are built on the basis of the spacetime symmetry. Both considered spacetimes are rotating, therefore we demand $\partial_{\theta} \psi = 0$ on the \textit{north pole}. On the \textit{equator}, the presence of magnetic field combined with the spacetime symmetry implies that $\psi = 0$. \subsection{Kerr-like wormhole} To commence with, we solve the equation \eqref{eqn:kwh_axion} for the Kerr-like background metric. A portion of obtained distributions is depicted in Fig. \ref{fig_kw_maps}. In the following panels we see the increasing mass of the axionic field. In the panel (a) the field is ultralight, subsequently in (b) $\mu^2 = 0.01$, (c) $\mu^2 = 0.1$ and finally in (d) $\mu^2 = 1$. In every panel we have $a = 0.99$ and $\lambda = 0.5$. 
We can clearly see how the mass of the axionlike field changes the angular distribution of it around the wormhole. For little masses the clouds are concentrated around the poles of the wormhole and spread in the space for several throat radii. As we increase the axion mass we see that the polar regions of the wormhole become depleted and the field drifts towards the equator. The largest concentration is visible on the latitude $\theta \simeq \pi/4$. Second important effect is the influence of the field mass on the magnitude of the field. Inspection of the colorbars reveals that the larger the mass the smaller the field. The spatial tail of the field is also much shorter, when the mass of the field is larger. Intuitively, in the asymptotic solution \eqref{eq:psi_asympt} $\mu$ enters the suppressing exponential term. The field decays faster for larger masses, which means the massive fields are localized in the vicinity of the throat surface. Another important thing that stands out in relation to the black hole solutions is the repulsion of the axion cloud from the wormhole throat surface. While in the case of the black hole, the field had non-zero values on the surface of the event horizon, and its radial character was monotonically decreasing, here we have a completely different situation. For the wormhole, the field vanishes or at least has a significantly smaller value on the throat. Then it grows with the radius as it reaches the maximum and finally decreases. This effect is particularly visible for the high values of the angular momentum. \begin{figure}[h] \centering \subfloat[$\mu^2 \rightarrow 0^+, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m0_HD.pdf} } \qquad \subfloat[$\mu^2 = 0.01, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m001_HD.pdf} } \vspace{0.5cm} \subfloat[$\mu^2 = 0.1, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m01_HD.pdf} } \qquad \subfloat[$\mu^2 = 1, \quad \lambda = 0.5$]{ \includegraphics[width=0.45 \textwidth]{wh_a09_la05_m1_HD.pdf} } \caption{Axion field distribution around Kerr-like wormholes for given sets of parameters. The blank space in the vicinity of wormhole throat distinguishes the solution from the black hole counterpart, where the field appears to be non-zero on the event horizon. Subsequent panels for each mass parameter show how the angular distribution of the field is affected.} \label{fig_kw_maps} \end{figure} The radial behaviour of the axionic field can be seen more precisely in Fig. \ref{fig:kw_slices}. We present there a slice of $\psi$ as a function of $r$ in throat radius units, for constant $\theta=\pi/4$ and few different values of the $\lambda$ parameter. In contrast, we also plot the behaviour of axions in Kerr black hole metric (that is $\lambda = 0$). What we can see is the increasing $\lambda$ consequently extinguishes the axionic hair. In the foreground a structural change in the field profile is visible as we compare it to the black hole scenario. An axionic field over a black hole has a maximum value on the event horizon. The opposite is true for a wormhole, on the throat the field vanishes, then grows to its maximum and fades away with the radius. Then, the bigger is $\lambda$ the smaller are the maxima and overall magnitude of the axionic hair. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{psi_pi4_a99_m0.pdf} \caption{A closer look on the axion cloud gap near the wormhole throat. 
Here we show slices of $\psi$ for constant $\theta = \pi/4$, with parameters $a = 0.99, \mu^2 = 0^+$. Increasing of $\lambda$ decreases the magnitude of the axions and cuts off its tail.} \label{fig:kw_slices} \end{figure} In the next step we investigate the free energy of the obtained axion cloud configurations. It is interesting to see how the parameters describing the spacetime geometry around the wormhole influence the thermodynamics of the axion clouds. Due to the previously mentioned difficulties in computing the free energy in these metrics, we rather talk about energy differences, than the exact values. In Fig. \ref{fig:kw_fe} we present the differences of the free energy versus angular momentum $a$, with respect to the $\lambda=0$ level, which constitutes a plain Kerr black hole. It is clearly visible that the larger value of the distortion parameter ${\lambda}$ one takes into account, the higher value of the free energy of the cloud we achieve. It turns out, that the more the gravitational background deviates from from the black hole metric, the less thermodynamically desirable axion clouds are. This effect works together with the diminishing magnitude of the field on the previously discussed Fig. \ref{fig:kw_slices}. Additionally the increasing angular momentum of the wormhole also increases the free energy difference. This means that for \textit{extreme} Kerr-like wormhole axion hair is the least favourable. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fediff_a_lambs_m0.pdf} \caption{Free energy differences as a function of angular momentum (with $\lambda = 0$ as the ground curve) for different values of $\lambda$. With axion mass $\mu^2 = 0^+$ we see that the cloud thermodynamical favourability decreases with growth of both angular momentum and $\lambda$ parameter.} \label{fig:kw_fe} \end{figure} \subsection{Teo rotating wormhole} Teo class wormhole has a different set of parameters and it does not simply transform into a black hole solution, just like a Kerr-like wormhole does. Here the throat radius is independent of the other parameters and is imposed manually in $g_{tt}$ and $g_{rr}$. With the particular choice of functions \eqref{eq:twh_metric_fun}, we can only steer with the shape of metric components via $\gamma$ parameter. Therefore, let us consider values of $\gamma$ in the interval $(-1, 1]$, where for $\gamma = -1$ the function $g_{rr}$ is singular, so we can only approach this value. \red{As it was mentioned, the Kerr-like wormhole can be reduced to a black hole solution by setting $\lambda = 0$. Teo solution does not share this feature, but is a well-known wormhole metric, just like its non-rotating counterpart the archetypical Morris-Thorne wormhole. While it can not serve as a testing field for differences between axion clouds around wormholes and black holes, one can treat it as a benchmark for behaviours of the axion hair in another wormhole environment. Using this background might help us to see if the obtained axion solutions share similar features. } In Fig. \ref{fig_teo_maps} one can see the distribution of the axionic cloud around a Teo wormhole for different axionlike field masses. In the panel (a) we have an ultralight field, then it takes values $0.1$, $0.5$ and $1.0$ for (b), (c) and (d) respectively. In all panels we use $\gamma = -0.99$, which gives us tiny value of the axionic field (see the colorbars). The angular distribution has similar features to the Kerr-like metric. 
For ultralight axions the field is localized in the majority of the wormhole surroundings. As the mass increases the hair tightens spatially and disappears from the polar regions. For the large mass case the axionic clouds are drifting toward the equator with the polar caps left almost empty. Moreover the radial reach is very short - around one throat radius. One can clearly notice that for the negative gamma value the analogous effect to the Kerr-like scenario is observed. The axionic cloud is also pushed away from the throat surface - in its vicinity the field acquires small values, reaches the maximum and the descends monotonically to zero. However in this gravitational background this effect is not as dramatic as in case of Kerr-like wormhole. The weakening of the axionic field in the vicinity of the throat is easily visible in the distributions, although it is not that large. Increasing $\gamma$ up to zero and beyond causes the rise of $\psi$ field. It grants bigger values, but the spatial qualitative characteristic remains intact. \begin{figure}[h] \centering \subfloat[$\mu^2 \rightarrow 0^+, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m0.pdf} } \qquad \subfloat[$\mu^2 = 0.1, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m01.pdf} } \vspace{0.5cm} \subfloat[$\mu^2 = 0.5, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m05.pdf} } \qquad \subfloat[$\mu^2 = 1, \quad \gamma = -0.99$]{ \includegraphics[width=0.45 \textwidth]{psidist_g-099_a99_m1.pdf} } \caption{Axion cloud distribution around the magnetized Teo type wormhole. The negative value of $\gamma$ resembles the results for Kerr-like wormhole, with large $\lambda$ value, i.e., the decreasing of the field magnitude. Each panel depicts the field distribution for different axionlike masses. Once again a big mass results with a localized field, concentrated closely to the throat with depleted polar region. For bigger values of $\gamma$ we observe similar influence of $\mu$ on the spatial distribution.} \label{fig_teo_maps} \end{figure} The next figure brings a closer look on the field drop near the throat. In Fig. \ref{fig_teo_slice} we present radial slices of the field distribution with $\theta = \pi/4$. In this particular figure we depict the behaviour for ultralight field, however a similar tendencies are shown by more massive fields. First of all, we observe a significant amplification of the field with the growth of $\gamma$. The field does not acquire new features however, but it seems that the curves follow some kind of scaling related to gamma. Additionally the profiles resemble the results obtained for Kerr-like wormholes. As we have previously mentioned, Teo class wormhole cannot be simply transformed into a black hole by a simple choice of parameter value. However the features like a drop near the throat surface, then maximum and monotonic fall show that these might be more general wormhole related behaviours of axionic hair. \begin{figure}[h] \centering \includegraphics[width=0.7 \textwidth]{teowh_psi_pi4_a99_m0.pdf} \caption{Radial slices of the axion field for constant angle $\theta = \pi/4$. The growth of $\gamma$ increases the maximum value of the field. However in this metric the significant growth of the blank space near the throat is not present. 
Also, the growth of $\gamma$ does not seem to greatly affect the tail of the field away from the throat, which differs from the Kerr-like wormhole results.} \label{fig_teo_slice} \end{figure} If we consider the free energy, it appears that the axion clouds for the background with negative $\gamma$ are definitely less thermodynamically favourable. In Fig. \ref{fig_teo_fe} we plot the free energy difference as a function of angular momentum for several values of the gamma parameter. We use the curve with $\gamma = 0$ as a baseline for calculating the energy differences. The free energy difference curves for $\gamma < 0$ are positive; in particular, the $\gamma = -0.99$ curve reaches relatively large values. Therefore, thermodynamically speaking, wormholes with $\gamma$ close to $-1$ have the least chance of holding axionic hair. With the growth of the parameter, the free energy of the cloud decreases, which makes the axions more thermodynamically favourable. However, this fall is rather moderate compared to the rise of the top curve. In both cases the increase of angular momentum amplifies the tendencies of the curves: curves for negative gamma grow, while those for positive gamma fall. Further growth of the $\gamma$ parameter leads the hair to some limiting characteristics, which can be seen in Fig. \ref{fig_teo_slice} and Fig. \ref{fig_teo_fe}. \begin{figure}[h] \centering \includegraphics[width=0.7 \textwidth]{fediff_a_gam_m0.pdf} \caption{Free energy differences vs. angular momentum for different values of $\gamma$. The curve of $\gamma = 0$ is the reference level. We see that the free energy increases for negative values of gamma and slightly drops for positive ones as the angular momentum grows.} \label{fig_teo_fe} \end{figure} Finally, let us conduct a qualitative comparison of the axionic clouds in the considered metrics. The solutions have undoubtedly similar features, especially when one looks at the $\psi$ slices. We also observe the separation of the cloud from the surface of the throat in both cases. This allows us to notice some general wormhole-related phenomena, which are not present around black holes \cite{kic21}. Naturally, we cannot speak in a fully general manner, as we have considered merely two distinct gravitational backgrounds, which should moreover be treated as toy models. \section{Conclusion} In our paper, we have considered the problem of the distribution of axionlike particles, regarded as a {\it dark matter} sector model, around toy models of rotating wormholes. We have investigated the Kerr-like wormhole line element with the distortion parameter ${\lambda}$ and the Teo model of a rotating axisymmetric wormhole. The models under inspection were characterized by the mass, the angular momentum, the distortion parameter (for the Kerr-like wormhole) and the shape parameter $\gamma$ (for the Teo model). We numerically solved the equations of motion for the underlying cases, using spectral methods. Among other things, we have found that the axion clouds are pushed away from the wormhole throat, especially for large values of the rotation parameter $a$. The voids in the vicinity of the wormhole throat appear for larger values of the distortion parameter ${\lambda}$. This phenomenon distinguishes the studied system from the previously elaborated black holes \cite{kic21}. On the other hand, for larger ${\lambda}$ one obtains a higher value of the free energy, and therefore this solution is less thermodynamically favoured.
As far as the Teo class of rotating wormholes is concerned, for negative values of $\gamma$ we obtain an effect analogous to the Kerr-like case. However, for positive values, the behavior of the axionic clouds resembles the features of the {\it dark matter} clouds around a Kerr black hole in a uniform magnetic field. The solution with negative $\gamma$ is not thermodynamically favourable, as has been revealed by the free energy studies. However, when $\gamma$ increases, the free energy of the axionic cloud decreases. We have found that the behavior of the axionic clouds significantly differs from the black hole scenario, which we discussed in our previous work \cite{kic21}. This fact may provide guidance enabling one to distinguish between these two classes of compact objects. Nevertheless, the search for astronomically observable criteria requires a far more complex approach. A more realistic dynamical gravitational model, in which the time dependence of the studied fields is taken into account, is needed, and direct mathematical proofs of the stability of rotating wormhole spacetimes ought to be found. These subjects pose a real mathematical challenge and also require solid numerical relativity machinery. These problems shall be investigated elsewhere.
\section{Coulomb potentials} \par The muon binding energy of the $1s_{1/2}$ orbital, $\varepsilon_{\mu}$, and the wave function, $\phi \equiv \phi \left( r \right)$, are calculated by solving the Dirac equation under the Coulomb potential formed by the atomic nucleus and the $ \left( Z - 1 \right) $ electrons, where $ Z $ denotes the atomic number of the atom. The charge density distribution of the atomic nucleus is considered in the Coulomb potential, which reads \begin{align} V_{\text{$ N $-$ \mu $}}^{\text{Coul}} \left( \bm{r} \right) & = - 4 \pi \int_r^{\infty} \frac{dr''}{r''^2} \int_0^{r''} \overline{\rho}_{\text{ch}} \left( r' \right) r'^2 \, dr' \notag \\ & = - \frac{4 \pi}{r} \int_0^r \overline{\rho}_{\text{ch}} \left( r' \right) r'^2 \, dr' - 4 \pi \int_r^{\infty} \overline{\rho}_{\text{ch}} \left( r' \right) r' \, dr', \end{align} where $ \overline{\rho}_{\text{ch}} $ is the spherically averaged charge density distribution of the atomic nucleus. The Coulomb potential formed by the $ \left( Z - 1 \right) $ electrons reads \begin{align} V_{\text{$ e $-$ \mu $}}^{\text{Coul}} \left( \bm{r} \right) & = 4 \pi \int_r^{\infty} \frac{dr''}{r''^2} \int_0^{r''} \rho_e \left( r' \right) r'^2 \, dr' \notag \\ & = \frac{4 \pi}{r} \int_0^r \rho_e \left( r' \right) r'^2 \, dr' + 4 \pi \int_r^{\infty} \rho_e \left( r' \right) r' \, dr', \end{align} where $ \rho_e $ is the electron density distribution. \section{Single-particle state density} \begin{figure}[b] \centering \includegraphics[width=0.4\linewidth]{ssd_sg2.eps} \includegraphics[width=0.4\linewidth]{ssd_sko2.eps} \caption{Single-particle state densities of SGII (left) and SkO' (right) as a function of the energy range $\varepsilon$. Note that those of neutrons and protons almost overlap.} \label{fig:ssd} \end{figure} \par To evaluate the two-component exciton model numerically, one needs the single-particle state densities of neutrons ($g_{\nu}$) and protons ($g_{\pi}$). These values are usually estimated from those of the Fermi-gas model at the Fermi energies, which are given by $g_{\nu}\simeq N/15$ and $g_{\pi}\simeq Z/15$~\cite{BohrMottelson}. However, in this letter we used $g_{\nu}\simeq N/19$ and $g_{\pi}\simeq Z/19$ rather than the Fermi-gas values, considering the closed-shell structure of $^{28} \mathrm{Si}$ and $^{40} \mathrm{Ca}$. \par To validate this choice, we estimated the single-particle state density within the Skyrme-Hartree-Fock (SHF) approach, where it is defined as \begin{equation} w \left( \varepsilon \right) = \frac{\int_{\varepsilon_{f} - \varepsilon/2}^{\varepsilon_{f} + \varepsilon/2} \sum_{i} n_i \delta \left( \varepsilon_{i} - \varepsilon' \right) \, d \varepsilon'}{\varepsilon}, \end{equation} where $\varepsilon_{f}$ is the Fermi energy, $\varepsilon_{i}$ is the single-particle energy of a bound state calculated by SHF, and $n_{i}=2j_{i}+1$ is the degeneracy. The Fermi energy is computed by averaging the single-particle energies of the last occupied and first unoccupied levels. The results for the single-particle state density with an energy bin $\Delta \varepsilon=0.5 \, \mathrm{MeV}$ are shown in Fig.~\ref{fig:ssd}, where the lines for $N/15$ and $N/19$ are also drawn. Note that the results for neutrons and protons almost overlap. Since the level structure is discrete, the state density is zero when $\varepsilon$ is close to $0$. Increments in the state density occur whenever an additional single-particle state falls within $\varepsilon_{f}\pm\varepsilon/2$.
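As an illustration of the counting in this definition, the following minimal sketch evaluates $w(\varepsilon)$ from a list of bound single-particle levels; the level energies, $j$ values and occupation number below are hypothetical placeholders rather than actual SHF output.
\begin{verbatim}
import numpy as np

# w(eps): sum of degeneracies n_i = 2j_i + 1 of the levels lying in the
# window [eps_f - eps/2, eps_f + eps/2], divided by the window width eps.
levels = [(-32.0, 0.5), (-21.5, 1.5), (-19.0, 0.5), (-13.8, 2.5),
          (-10.2, 0.5), (-9.1, 1.5), (-4.0, 2.5), (-1.5, 0.5)]
occupied = 6  # number of occupied levels (placeholder)

# Fermi energy: average of the last occupied and first unoccupied level
eps_f = 0.5 * (levels[occupied - 1][0] + levels[occupied][0])

def state_density(eps):
    """State density w(eps) in MeV^-1 for an energy range eps in MeV."""
    lo, hi = eps_f - eps / 2.0, eps_f + eps / 2.0
    count = sum(2 * j + 1 for e, j in levels if lo <= e <= hi)
    return count / eps

for eps in np.arange(2.0, 20.0, 2.0):
    print(f"eps = {eps:5.1f} MeV   w = {state_density(eps):5.2f} MeV^-1")
\end{verbatim}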
Admittedly, it is ambiguous to what extent the energy range $\varepsilon$ should be included to determine an appropriate single-particle state density. However, we may exclude small $\varepsilon$, where only a few levels are involved, and large $\varepsilon$, which is too far from $\varepsilon_{f}$. It may be reasonable to select $\varepsilon > 8 \, \mathrm{MeV}$, where the state density settles down to some extent, and $\varepsilon < 18 \, \mathrm{MeV}$, since $\varepsilon_{f}$ for the proton is about $9 \, \mathrm{MeV}$. We can see that the line of $N/19$ is closer than $N/15$ to the calculated state densities over the wide range $10\le\varepsilon\le18 \, \mathrm{MeV}$. We have also calculated the particle emission spectra with $N/15$. However, the result largely underestimated the experimental data, because a large single-particle state density promotes transitions to the compound state, and the high-energy spectra attributed to the preequilibrium stage are hindered. \section{Muon Capture Rate and Particle Emission Spectra of SGII} In the main article, we give only the muon capture rates and spectra calculated with SkO'. Here, we show the results of SGII. Figure~\ref{fig:exc} depicts the capture rate. The result is qualitatively the same as for SkO'; however, the enhancement in the energy range $30\le E \le 60 \, \mathrm{MeV}$ due to the coupling with $2p$-$2h$ states of STDA is moderate compared to SkO'. Figure~\ref{fig:spec} illustrates the particle emission spectra. Again, the overall features are qualitatively the same as for SkO'. However, compared to SkO', SGII slightly underestimates the particle emission spectra. Consequently, the multiplicities of SGII tend to be smaller than those of SkO', as seen in Table II of the main article and Table~\ref{tab:multiplicity} below. \begin{figure*}[bht] \centering \includegraphics[width=0.49\linewidth]{Exc_SG2.eps} \caption{Normalized capture rate $R \left( E \right)$ for $^{28} \mathrm{Si}$ (top) and $^{40} \mathrm{Ca}$ (bottom) obtained by FREE, TDA, and STDA. The result of SGII with $g_{A}=-1$ is shown.} \label{fig:exc} \end{figure*} \begin{figure*}[bht] \centering \includegraphics[width=0.49\linewidth]{Spec2_SG2.eps} \includegraphics[width=0.49\linewidth]{Spec_SG2.eps} \caption{Particle yields after the muon capture on $^{28}\mathrm{Si}$ (left) and $^{40}\mathrm{Ca}$ (right). The result of SGII with $g_{A}=-1$ is shown. Experimental data of neutrons (filled symbols) and protons (open symbols) are taken from Refs.~\cite{Sundelin1973, Kozlowski1985, VanDerPluym1986} and \cite{Budyashov1971, Edmonds2022, Manabe2020}, respectively. Note that units are not given in the original paper of Budyashov~\cite{Budyashov1971}, so we normalized the second-lowest energy point to STDA+MEC.} \label{fig:spec} \end{figure*} \section{Multiplicities of charged particles for the entire energy region} \par In the main article, the multiplicities of charged particles of $^{28}\mathrm{Si}$ are estimated in a limited energy range. Here, we show their multiplicities for the entire energy range in Table~\ref{tab:multiplicity}, where $g_{A}=-1$ is used in the calculation. In addition to $^{28} \mathrm{Si}$, the results for $^{40} \mathrm{Ca}$ are also listed. We confirmed that the difference from $g_{A}=-1.26$ is less than $3\,\% $. Compared to Table II of the main article, estimated in the limited energy range, the multiplicities in the entire energy region are considerably larger. Namely, the measured energy ranges are too narrow to discuss these effects, and therefore further experimental investigations are needed.
\begin{table*}[hbt] \centering \caption{Calculated multiplicities per $10^{3}$ muon captures for $^{28} \mathrm{Si}$ and $^{40} \mathrm{Ca}$ for the entire energy range. The results of the SGII and SkO' forces with the axial-vector coupling $g_{A}=-1$ are shown.} \label{tab:multiplicity} \begin{tabular}{lc|D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}D{.}{.}{4}} \hline \hline \multicolumn{1}{c}{Nucl.} & \multicolumn{1}{c|}{Emitted} & \multicolumn{2}{c}{FREE} & \multicolumn{2}{c}{TDA} & \multicolumn{2}{c}{STDA} & \multicolumn{2}{c}{STDA+MEC} \\ & \multicolumn{1}{c|}{particle} & \multicolumn{1}{c}{SGII} & \multicolumn{1}{c}{SkO'} & \multicolumn{1}{c}{SGII} & \multicolumn{1}{c}{SkO'} & \multicolumn{1}{c}{SGII} & \multicolumn{1}{c}{SkO'} & \multicolumn{1}{c}{SGII} & \multicolumn{1}{c}{SkO'} \\ \hline $^{28} \mathrm{Si}$ & $p$ & 60.7 & 61.5 & 94.2 & 98.9 & 101 & 119 & 128 & 144 \\ & $d$ & 2.51 & 2.49 & 4.68 & 4.91 & 6.71 & 8.66 & 9.32 & 11.2 \\ & $t$ & 0.442 & 0.448 & 0.940 & 0.979 & 1.52 & 2.07 & 2.16 & 2.68 \\ & $^{3} \mathrm{He}$ & 0.0512 & 0.0540 & 0.127 & 0.130 & 0.230 & 0.368 & 0.383 & 0.514 \\ & $\alpha$ & 12.5 & 11.5 & 18.2 & 18.0 & 20.3 & 24.4 & 24.7 & 28.6 \\ \hline $^{40} \mathrm{Ca}$ & $p$ & 101 & 136 & 169 & 221 & 166 & 203 & 202 & 238 \\ & $d$ & 2.28 & 2.62 & 5.01 & 5.58 & 5.42 & 5.87 & 7.74 & 8.17 \\ & $t$ & 0.286 & 0.331 & 0.733 & 0.808 & 0.934 & 1.07 & 1.46 & 1.58 \\ & $^{3} \mathrm{He}$ & 0.0948 & 0.110 & 0.250 & 0.276 & 0.339 & 0.397 & 0.548 & 0.630 \\ & $\alpha$ & 25.7 & 25.5 & 39.3 & 42.4 & 38.5 & 39.6 & 42.0 & 43.0 \\ \hline \hline \end{tabular} \end{table*} \clearpage
\section{Introduction} Models of hadronic physics are to be compared with experimental results as well as with our understanding of QCD, both for large momentum transfers {\em and} for low momentum transfers where Feynman diagrams are not useful. In this way, models may help us in improving our knowledge of QCD at the low and intermediate energies of interest to hadronic spectroscopy and eventually nuclear physics. One way to get this understanding is to note some features present in perturbative QCD, lattice gauge theory or models of atomic and nuclear physics, and to check whether these features can be used in low and intermediate energy hadronic physics. One such feature is an approach based on pair-wise interactions for an interacting multiparticle system (composed of more than two or three particles). This has been successful in atomic and many-nucleon systems, the corresponding two-body interactions being described by Coulombic and Yukawa potentials, for example. The question is whether the explicit presence of the {\em non-Abelian} gluon field can also be replaced by {\em two-body} interquark potentials. The simplest way to use such a model is to try a {\em sum} of two-body potentials or interactions, the usual approach of atomic and nuclear physics. For comparison, it can be noted that the lowest-order perturbative Feynman diagram amplitudes are of this form, and a simple extension of this diagrammatic approach to multiquarks also has this pattern; see ref. \cite{T. Barnes}, and the later ones in the same approach, where the one-gluon-exchange potential, though, is replaced by Coulombic-plus-linear-plus-hyperfine. If the numerically calculated energies of four-quark systems on a lattice, in the static quark limit, are compared with a model that uses only a {\em sum} of two-quark potentials, the model gives a gross overestimate of the (magnitude of) four-quark binding energies; see fig. 4 of ref.\cite{Green1}. A gluon field overlap factor $f$ was introduced \cite{B.Masud} essentially as a solution to this discrepancy. This factor multiplied only the off-diagonal elements of the overlap, kinetic and potential energy matrices of the otherwise pairwise-sum-based Hamiltonian in the three-dimensional basis of the model system of four valence quarks/antiquarks and the purely gluonic field between them. Initially \cite{B.Masud} the geometrical dependence of $f$ on the quark positions was chosen purely based on computational convenience and had no known comparison with any QCD simulations. But when its different forms were compared \cite{P. Pennanen} with two-colour lattice numerical simulations in the pure gluonic theory, a factor of $\exp(-k S_{min})$ had to be included in the off-diagonal terms of every version of the potential and overlap matrices; $S_{min}$ is the minimal spatial area bounded by the four external lines joining the four quarks, and $k$ is a constant. Only in this way was a version of the model eventually able to fit well "100 pieces of data---the ground and first excited states of configurations from six (kinds of) different four-quark geometries (including squares, rectangles, quadrilaterals and tetrahedra) calculated on a $16^3\times 32$ lattice---with only four independent parameters to describe the interactions connecting the basis states".
It is to be noted that this exponential dependence on the spatial area in the model can possibly be traced back to the related use of the space-time area in more familiar models of Wilson loops studying time evolutions of a quark-antiquark pair. The connection was first suggested by a Wilson loop matrix in a strong coupling expansion scheme (see figs. 4 and 5 of ref.\cite{matsuoka}) and appeared in the above-mentioned model \cite{P. Pennanen} of the numerically evaluated Wilson loop matrix of the $SU(2)_c$ simulations. A full $SU(3)_c$ simulation \cite{green2} performed a bit later again showed the need for the $f$ model, though the detailed geometrical dependence of $f$ could not be studied in this three-colour lattice gauge study. But a later numerical lattice study \cite{V. G. Bornyakov}, by a reputed group, of the {\em full} $2\times 2$ matrix of Wilson loop correlators and of "the interaction energy of the confining strings in the static rectangular tetraquark system in $SU(3)$ gluodynamics" was again well modeled by a surface model, namely their {\em soap film} model, which also incorporates (a flip to) the multi-$Y$-type linear potential emerging from recent numerical simulations \cite{Hideo Suganuma}. The basis state overlap $g$ \cite{V. G. Bornyakov} in the soap film model has a role similar to the gluon field overlap factor $f$ of ref.\cite{P. Pennanen}; both $f$ and $g$ appear only in the off-diagonal terms of the respective matrices ($N$ and $T$) of overlaps of the basis states. Continuing on the spatial and space-time areas, it can be pointed out that both kinds of areas appear in eq.(13) of ref.\cite{V. G. Bornyakov} and are related to Wilson loops, as the earlier eq.(12) there indicates. The above are, actually, models of the matrices of pure gluonic theory Wilson loops. The diagonal terms in these matrices are time evolutions of a tetraquark clustering, and the off-diagonal terms \cite{matsuoka, V. G. Bornyakov} are for time evolutions that start from one tetraquark clustering (or topology) and end at another one. The numerical evaluation of the off-diagonal Wilson loops has perhaps been done only in refs. \cite{P. Pennanen} (and previous ones by the same group) and \cite{V. G. Bornyakov}. But the diagonal Wilson loops have been studied by many other groups, the most familiar being the studies reported in ref.\cite{Hideo Suganuma} and the previous works by the same group. For one set of quark configurations, the diagonal studies are limited to only one Wilson loop. In a sense, this means a limitation to only one state of the gluonic field as well, namely the one with the least energy {\em to which} the system flips; if other states are mentioned in the literature, these are either for comparison purposes ({\em from which} the system flips) or the excited-state "contaminations". But a general study should {\em actually} incorporate a variety of basis states and thus include off-diagonal Wilson loops {\em as well}. The spatial area we are working on appears only in the off-diagonal Wilson loops, and thus our work is not to be confused with the usual study of diagonal Wilson loop effects. It is to be admitted that works like ref.\cite{Hideo Suganuma} have indicated improvements in both evaluations and models of the diagonal Wilson loops, and we have not included these improvements in our model of the diagonal term. But this is not a serious flaw, as a dynamical study \cite{J. Vijande}
using these improved diagonal models mentions in its conclusions and comments that the "dynamics of (tetraquark) binding is dominated by the simple flip-flop term", meaning that the essentially new connected string (butterfly) term introduced through the work of ref.\cite{Hideo Suganuma} is dynamically "not rewarding". It is to be noted that our diagonal terms include both terms whose minimum is the flip-flop term. This advocates the $\exp(-k S)$ form of $f$ for two static quarks and two antiquarks. For a comparison with actual (hadron) experiments, we have to incorporate quark motion as well, possibly through using quark wave functions. The resulting four-body Schr\"{o}dinger equation can be solved, as in ref. \cite{J. Weinstein}, variationally for the ground state of the system and the effective meson-meson potentials. Alternatively, the Hamiltonian emerging from the $q^2\bar{q}^2$ model has been diagonalized in the simple harmonic oscillator basis \cite{B. Silvestre-Brac}, or was sandwiched between the external meson wave functions to give a transition amplitude of the {\em Born diagrams} \cite{T. Barnes, E.S. Swanson} that is related to meson-meson phase shifts. We have used a formalism (the resonating group method \cite{wheeler}) that was, for the $q^2\bar{q}^2$ system, originally \cite{B.Masud} used in a way that allowed finding the {\em inter-meson} dependence, with the dependence on a quark-antiquark distance being pre-fixed. The formalism allows using the best available knowledge of a meson wave function, though a simple Gaussian form for the wave functions and correspondingly a quadratic quark-antiquark potential was used for computational convenience. We have used this same formalism, which can be generalized to using realistic meson wave functions and finding the inter-cluster dependence. But because of the additional computational problems due to a totally non-separable exponential of (a negative constant times) the area in $f$, presently we had to also pre-specify a plane-wave form of the inter-cluster dependence along with still using Gaussian wave functions; effectively this means using a Born approximation as well. We have pointed out, though, that these wave function approximations are better than what their naive impression conveys: the Gaussian dependence on the quark-antiquark distance has the potential of resembling realistic meson wave functions through an adjustment of its parameters \cite{ackleh}, and the plane wave form of the inter-cluster dependence is justified through the feeble inter-cluster (meson-meson) interaction noted in previous works \cite{J. Weinstein, B.Masud, T. Barnes}; the meson-meson phase shifts resulting from this work are also much less than a radian. {\em Only by using the Born approximation} could the resulting coupled integral equations for the inter-cluster wave functions be decoupled in this work. This decoupling allowed us to numerically calculate the off-diagonal elements as nine-dimensional integrals over the components of the eventual four position 3-vectors; only the overall center-of-mass dependence could be dealt with analytically, in a trivial manner. Before this numerical integration, for the kinetic energy terms we had to differentiate the area in $f$. The form of the area used in the detailed form of the $Q^2\bar{Q}^2$ model, which we take from ref. \cite{P. Pennanen}, contains square roots of functions of our position variables.
Thus differentiating this area form yields in denominators combinations of position variables that become zero somewhere in the ranges of the integrations to be done later. The numerical evaluation of the resulting nine-dimensional improper integrals is expected to be too demanding, as our initial explorations indicated. Thus, for the to-be-differentiated right $\sqrt{f}$ part of some kinetic energy terms, we replaced the area by an approximated quadratic form whose differentiation does not result in negative powers of the position variables. We also find that the use of the $f$ factor in the new form reduces the long-range meson-meson interaction and thus, as usual, solves the well-known van der Waals force problem \cite{O. Morimatsu, J. Vijande} of the otherwise naive sum of pair-wise one-gluon-exchange interactions. It has been said \cite{J. Weinstein} that dynamically this is not a serious problem because of quark-antiquark pair creation and because of the wave function damping of the large-distance configurations. Though ref. \cite{green2} partly incorporated both the quark-antiquark pair creation and the meson wave functions and still showed a need for the $f$ factor, through the present work we want to point out that the dynamical role of the $f$ factor in meson-meson interactions is {\em not limited} to solving the van der Waals force problem or to pointing out \cite{masud} an otherwise over-binding in certain meson-meson systems. The $f$ factor points to certain features (non-separability of $f$) of QCD that are 1) indicated by lattice simulations and 2) can be compared with actual experiments. There have been recent hadron-level studies \cite{P. Bicudo}\cite{J. Vijande} using the above-mentioned improved models of the diagonal Wilson loops. But the quark-level limitations of the models \cite{Fumiko Okiharu}\cite{Hideo Suganuma} mean similar limitations for the hadron-level results: refs. \cite{P. Bicudo}\cite{J. Vijande} study the properties (like the binding energy and the {\em direct potential} \cite{wong}) of the ground state itself (or in isolation), whereas we aim to study the dynamical coupling of a tetraquark state to other basis state(s) of the tetraquark system---essentially to the other clustering or the exchanged channel. Thus, as we say in the abstract, in addition to doing the phase-shift calculations for a meson-meson scattering, we study a coupling that can affect even the ground state itself through a second-order perturbation theory effect named the polarization potential \cite{wong}. That is, after including the quark mass differences, a meson-meson state may not be degenerate with an exchanged channel, and thus the coupling between this state and the exchanged intermediate one (a hadron loop) may help resolve the underlying structure of a possible meson-meson state. Such a state may be a meson-meson molecule that can be formed by the ground state. This also makes a study of the dynamical coupling of a meson-meson channel to the exchanged one worth pursuing. In section 2 we write the total state vector of the $q^2\bar{q}^2$ system as in RGM, along with introducing the Hamiltonian $H$ of the system without the $f$ factor and then modifying $H$ through $f$. In section 3 different position-dependent forms of $f$ are described, including the approximate forms that we had to use. In section 4 we solve the integral equations for a meson-meson molecule in the absence of spin degrees of freedom and with all quark masses equal.
This section ends with a prescription to find the phase shifts. In the last section we present the numerical values of the phase shifts for different forms of $f$, for different values of the free parameter $k_{f}$, and for different values of the angle $\theta$ between $\textbf{P}_{1}$ and $\textbf{P}_{2}$. \section{The $Q^2\bar{Q}^2$ Hamiltonian and the wave function} Using the adiabatic approximation, we can write the total state vector of a system containing two quarks, two antiquarks and the gluonic field between them as a sum of products of a quark ($Q$ or $\bar Q$) position-dependent function $\Psi_{g}(\textbf{r}_1,\textbf{r}_2,\textbf{r}_{\bar{3}},\textbf{r}_{\bar{4}})$ and a gluonic field state $|k\rangle_{g}$. $|k\rangle_{g}$ is defined as a state which approaches $|k\rangle_{c}$ in the weak coupling limit, with $|1\rangle_{c}=|1_{1\bar{3}}1_{2\bar{4}}\rangle_{c}$, $|2\rangle_{c}=|1_{1\bar{4}}1_{2\bar{3}}\rangle_{c}$ and $|3\rangle_{c}=|\bar{3}_{12}3_{\bar{3}\bar{4}}\rangle_{c}$. In lattice simulations of the corresponding (gluonic) Wilson loops it is found that the lowest eigenvalue of the Wilson matrix, that is, the energy of the lowest state, is always the same for both the $2\times2$ and $3\times3$ matrices, provided that $|1\rangle_{g}$ or $|2\rangle_{g}$ has the lowest energy \cite{P. Pennanen}. The later calculations \cite{V. G. Bornyakov} of the tetraquark system were also done with a two-level approximation. Taking advantage of these observations, we have included only two basis states in our expansion. As in the resonating group method, $\Psi_{g}(\textbf{r}_1,\textbf{r}_2,\textbf{r}_{\bar{3}},\textbf{r}_{\bar{4}})$ or $\Psi_{g}(\textbf{R}_{c},\textbf{R}_{k},\textbf{y}_{k},\textbf{z}_{k})$ is written as a product of a known dependence on $\textbf{R}_{c},\textbf{y}_{k},\textbf{z}_{k}$ and an unknown dependence on $\textbf{R}_{k}$, i.e. $\Psi_{g}(\textbf{r}_1,\textbf{r}_2,\textbf{r}_{\bar{3}},\textbf{r}_{\bar{4}})=\Psi_{c}(\textbf{R}_{c})\chi_{k}(\textbf{R}_{k})\psi_{k}(\textbf{y}_{k},\textbf{z}_{k})$. Here $\textbf{R}_{c}$ is the center of mass coordinate of the whole system, $\textbf{R}_{1}$ is the vector joining the centers of mass of the clusters $(1,\overline{3})$ and $(2,\overline{4})$, $\textbf{y}_{1}$ is the position vector of quark 1 with respect to $\overline{3}$ within the cluster $(1,\overline{3})$, and $\textbf{z}_{1}$ is the position vector of quark 2 with respect to $\overline{4}$ within the cluster $(2,\overline{4})$. The same applies to $\textbf{R}_{2}, \textbf{y}_{2}$ and $\textbf{z}_{2}$ for the clusters $(1,\overline{4})$ and $(2,\overline{3})$. Similarly we can define $\textbf{R}_{3}, \textbf{y}_{3}$ and $\textbf{z}_{3}$ for the clusters $(1,2)$ and $(\overline{3},\overline{4})$.
Alternatively, we can write them in terms of the position vectors of the four particles (quarks or antiquarks) as follows: \begin{equation}\textbf{R}_{1}=\frac{1}{2}(\textbf{r}_{1}+\textbf{r}_{\overline{3}}-\textbf{r}_{2}-\textbf{r}_{\overline{4}})\text{ , }\textbf{y}_{1}=\textbf{r}_{1}-\textbf{r}_{\overline{3}}\text{ and }\textbf{z}_{1}=\textbf{r}_{2}-\textbf{r}_{\overline{4}},\label{e12} \end{equation} \begin{equation}\textbf{R}_{2}=\frac{1}{2}(\textbf{r}_{1}+\textbf{r}_{\overline{4}}-\textbf{r}_{2}-\textbf{r}_{\overline{3}})\text{ , }\textbf{y}_{2}=\textbf{r}_{1}-\textbf{r}_{\overline{4}}\text{ and }\textbf{z}_{2}=\textbf{r}_{2}-\textbf{r}_{\overline{3}}\label{e13} \end{equation} and \begin{equation}\textbf{R}_{3}=\frac{1}{2}(\textbf{r}_{1}+\textbf{r}_{2}-\textbf{r}_{\overline{3}}-\textbf{r}_{\overline{4}})\text{ , }\textbf{y}_{3}=\textbf{r}_{1}-\textbf{r}_{2}\text{ and }\textbf{z}_{3}=\textbf{r}_{\overline{3}}-\textbf{r}_{\overline{4}}.\label{e14} \end{equation} Thus the meson-meson state vector in the restricted gluonic basis is written as \begin{equation}|\Psi(\textbf{r}_1,\textbf{r}_2,\textbf{r}_{\bar{3}},\textbf{r}_{\bar{4}};g)\rangle= \sum_{k=1}^2|k\rangle_{g}\Psi_{c}(\textbf{R}_{c})\chi_{k}(\textbf{R}_{k})\xi_{k}(\textbf{y}_{k})\zeta_{k}(\textbf{z}_{k}).\label{e11}\end{equation} Here $\xi_{k}(\textbf{y}_{k})=\frac{1}{(2\pi d^{2})^{\frac{3}{4}}}\exp(\frac{-\textbf{y}_{k}^{2}}{4 d^{2}})$ and $\zeta_{k}(\textbf{z}_{k})=\frac{1}{(2\pi d^{2})^{\frac{3}{4}}}\exp(\frac{-\textbf{z}_{k}^{2}}{4 d^{2}})$. These Gaussian forms of the meson wave functions are, strictly speaking, the wave functions of a quadratic confining potential. But, as pointed out in the text below fig. 1 of ref. \cite{ackleh}, the overlap of a Gaussian wave function and the eigenfunction of the realistic linear-plus-Coulombic potential can be made as close as 99.4\% by properly adjusting its parameter $d$. A realistic value of $d$ mimicking a realistic meson wave function depends on the chosen scattering mesons and is thus postponed to our future work \cite{next}. Presently, to explore the qualitative implications of the geometric features of the gluonic overlap factor $f$, we have used in $\xi_{k}(\textbf{y}_{k})$ and $\zeta_{k}(\textbf{z}_{k})$ a value $d=0.558$ fm defined by the relation $d^{2}=\sqrt{3}R_{c}^{2}/2$ \cite{masud}, with $R_{c}=0.6$ fm being the r.m.s. charge radius of the $qqq$ system whose wave function is derived by using the same quadratic confining potential. As for the Hamiltonian, for $f=1$ the total Hamiltonian $H$ of our 4-particle system is taken as \cite{John Weinstein} \begin{equation} \hat{H}=\sum_{i=1}^{\overline{4}}\Big[m_{i}+\frac{\hat{P}_{i}^{2}}{2m_{i}}\Big]+ \sum_{i<j}^{\overline{4}} v(\textbf{r}_{ij})\mathbf{F}_{i}.\mathbf{F}_{j}.\label{e9} \end{equation} Our constituent quark mass value $m=0.3\texttt{GeV}$, the same for all quarks and antiquarks, is the one used in ref. \cite{B.Masud}, and our kinetic energy operator is similarly non-relativistic; it is among our aims to compare with this work and to isolate the effects due only to a different expression for $f$. In the above, each $\mathbf{F}_i$ has $8$ components $F_i^l=\lambda^l/2$, $l=1,2,3,...,8$, and $F_i^{l*}=\lambda^{l*}/2$, where the $\lambda^{l}$ are Gell-Mann matrices operating on the $i$-th particle; $l$ is shown as a superscript only to avoid any possible confusion with the subscript $i$, which labels a particle.
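As a minimal numerical sketch, the coordinate sets of eqs.(\ref{e12})-(\ref{e14}) can be assembled from the four position vectors as follows; the random configuration at the end is only a consistency check.
\begin{verbatim}
import numpy as np

# (R_k, y_k, z_k) for the three clusterings k = 1, 2, 3, given the four
# position 3-vectors r1, r2, r3bar, r4bar (eqs. (e12)-(e14) of the text)
def cluster_coordinates(r1, r2, r3b, r4b):
    R1, y1, z1 = 0.5 * (r1 + r3b - r2 - r4b), r1 - r3b, r2 - r4b
    R2, y2, z2 = 0.5 * (r1 + r4b - r2 - r3b), r1 - r4b, r2 - r3b
    R3, y3, z3 = 0.5 * (r1 + r2 - r3b - r4b), r1 - r2, r3b - r4b
    return (R1, y1, z1), (R2, y2, z2), (R3, y3, z3)

# consistency check on a random configuration (positions in GeV^-1);
# the identities R2 + R3 = y1 and R1 - R2 = z3 are used later for the areas
rng = np.random.default_rng(0)
r1, r2, r3b, r4b = rng.normal(size=(4, 3))
(R1, y1, z1), (R2, y2, z2), (R3, y3, z3) = cluster_coordinates(r1, r2, r3b, r4b)
assert np.allclose(R2 + R3, y1) and np.allclose(R1 - R2, z3)
\end{verbatim}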
For the pairwise $q\overline{q}$ potential, we have used a quadratic confinement \begin{equation}v_{ij}=C r_{ij}^{2}+\bar{C} \text{ with } i,j=1,2,\bar{3},\bar{4}.\label{e16}\end{equation} for this exploratory study. We have neglected short-range Coulomb-like interactions as well as spin-dependent terms, and along with that the non-relativistic limit has been taken. The model used by Vijande \cite{J. Vijande} is also restricted to these limits. As for the within-a-cluster dependence of the wave function, this use of the quadratic potential in place of the realistic Coulombic-plus-linear one may change the full wave function. Within a cluster, this change of wave function is found to change the overlap integral from 100\% to only 99.4\%, provided the parameter $d$ of the wave function is adjusted. Although the expression for $P_{c}$ written immediately after eq.(\ref{e20}) suggests a way to connect the additional parameter of the full wave function with that of a cluster, it is difficult to make a similar overlap test for the full wave function. But there is no a priori reason to deny that at least the qualitative features (like a new kind of angle dependence mentioned in the results section) we want to point out using this quadratic confinement would survive in a more realistic calculation; a similar exploration of the $q^2\bar{q}^2$ system properties was first done \cite{J. Weinstein} using a quadratic confinement, and a later extended and improved calculation \cite{John Weinstein} with a more realistic pair-wise interaction reinforced the $K\bar{K}$ results obtained initially through the quadratic confinement. It seems that a proper adjustment of the parameters of the quadratic (or SHO) model can reasonably simulate a $q\bar{q}$ or even a $q^2\bar{q}^2$ system. In our case, this adjustment of the parameters can be done once a choice of actual scattering mesons is made in a formalism \cite{next} that incorporates spin and flavour degrees of freedom. But, as shown in fig. 2(b) of ref.\cite{J. Weinstein}, properties of the $q^2\bar{q}^2$ system may not be very sensitive to the actual values of the parameters, and we expect our presently chosen values of the parameters to indicate well the essential features resulting from the non-separable form of the gluonic field overlap factor $f$. For the central simple harmonic oscillator potential of eq.(\ref{e16}), the above-mentioned size $d$ in the eigenfunctions $\xi_{k}(\textbf{y}_{k})$ and $ \zeta_{k}(\textbf{z}_{k})$ is related to the quadratic coefficient $C$, which is thus given a value of $-0.0097\texttt{GeV}^{3}$. As in a resonating group calculation, we take variations only in the $\chi_{k}$ factor of the total state vector of the system written in eq.(\ref{e11}). Setting the coefficients of the linearly independent arbitrary variations $\delta\chi_{k}(\textbf{R}_{k})$ to zero and integrating out $R_{c}$, $\langle\delta\psi\mid H-E_{c}\mid\psi\rangle=0$ from eq.(\ref{e11}) gives \begin{equation} \sum_{l=1}^2\int d^3y_{k}d^3z_{k} \xi_{k}(\textbf{y}_{k}) \zeta_{k}(\textbf{z}_{k})\, {}_{g}\langle k\mid H-E_{c}\mid l\rangle_{g} \chi_{l}(\textbf{R}_{l})\xi_{l}(\textbf{y}_{l}) \zeta_{l}(\textbf{z}_{l})=0, \label{e1} \end{equation} for each of the $k$ values (1 and 2). According to the (2-dimensional basis) model $I_{a}$ of ref. \cite{P. Pennanen},
the normalization, potential energy and kinetic energy matrices in the corresponding gluonic basis are \begin{equation}N=\left( \begin{array}{cc} 1 & \frac{1}{3} f \\ \frac{1}{3} f & 1 \\ \end{array} \right), \end{equation} \begin{equation}V=\left( \begin{array}{cc} \frac{-4}{3} (v_{1\overline{3}}+v_{2\overline{4}}) & \frac{4}{9} f (v_{12}+v_{\overline{3}\overline{4}}-v_{1\overline{3}}-v_{2\overline{4}}-v_{1\overline{4}}-v_{2\overline{3}})\\ \frac{4}{9} f (v_{12}+v_{\overline{3}\overline{4}}-v_{1\overline{3}}-v_{2\overline{4}}-v_{1\overline{4}}-v_{2\overline{3}}) & \frac{-4}{3} (v_{1\overline{4}}+v_{2\overline{3}}) \\ \end{array} \right) \end{equation} and \begin{equation} _g\langle k\mid K\mid\ l\rangle_{g}= N(f)_{k,l}^{\frac{1}{2}} \Big(\sum_{i=1}^{\overline{4}}-\frac{\nabla_{i}^{2}}{2m}\Big)N(f)_{k,l}^{\frac{1}{2}}.\label{e15} \end{equation} This is the modification of the Hamiltonian through the $f$ factor, as far as we need it for the integral equations below in section 4 (that is, only the modified matrix elements). \section{Different forms of {\it f}} Ref. \cite{P. Pennanen} supports, through a comparison with numerical lattice simulations, a form of $f$ that was earlier \cite{O. Morimatsu} suggested through a quark-string model extracted from the strong coupling lattice Hamiltonian gauge theory. This is \begin{equation}f=\exp(-b_{s} k_{f} S)\label{e6},\end{equation} $S$ being the area of the minimal surface bounded by the external lines joining the positions of the two quarks and two antiquarks; $b_s=0.18\texttt{GeV}^{2}$ is the standard string tension \cite{Isgur, Fumiko Okiharu}, and $k_f$ is a dimensionless parameter whose value of 0.57 was decided in ref. \cite{P. Pennanen} by a fit of the simplest two-state area-based model (termed model Ia) to the numerical results for a selection of $Q^2\bar{Q}^2$ geometries. It is shown there \cite{P. Pennanen} that the parameters, including $k_f$, extracted from this $SU(2)_{c}$ lattice simulation with $\beta=2.4$ can be used directly in, for example, a resonating group calculation of a four-quark model, as the continuum limit is achieved for this value of $\beta$. The simulations reported in ref. \cite{P. Pennanen} were done in the 2-colour approximation. But, for calculating the dynamical effects, we use the actual SU(3) colour matrix elements of ref. \cite{B.Masud}. The only information we take from the computer simulations of ref. \cite{P. Pennanen} is the value of $k_{f}$. This describes a geometrical property of the gluonic field (its spatial rate of decrease to zero), and it may be the case that the geometrical properties of the gluonic field are not much different for different numbers of colours, as suggested for example by the successes of the geometrical flux tube model. The situation is clearer, though, for the mass spectra and the string tension generated by the gluonic field: ref. \cite{Teper} compares these quantities for the $SU(2)_{c}$, $SU(3)_{c}$ and $SU(4)_{c}$ gauge theories in 2+1 dimensions and finds that the ratios of masses are, to a first approximation, independent of the number of colours. Their preliminary calculations in 3+1 dimensions indicate a similar trend. Directly for the parameter $k_f$ appearing in the overlap factor $f$ studied in this work, a similar conclusion can be drawn from a comparison of the mentioned lattice calculations \cite{green2} on the interaction energy of the two heavy-light $Q^2\bar{q}^2$ mesons in the realistic $SU(3)_{c}$ gauge theory with ref. \cite{P. Pennanen}, which uses $SU(2)_{c}$.
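Before turning to the detailed geometry of $S$, the model $I_{a}$ matrices quoted above can be made concrete with a minimal sketch: it builds $N$ and $V$ for a given quark configuration, assuming the quadratic pair potential of eq.(\ref{e16}) with the parameter values quoted elsewhere in the text; the overlap factor $f$ is supplied externally (e.g. as $\exp(-b_{s}k_{f}S)$).
\begin{verbatim}
import numpy as np

C, Cbar = -0.0097, 0.45   # GeV^3 and GeV, values quoted in the text

def v(ri, rj):
    # pairwise quadratic potential of eq. (e16), positions in GeV^-1
    return C * np.dot(ri - rj, ri - rj) + Cbar

def model_Ia_matrices(r1, r2, r3b, r4b, f):
    v13, v24 = v(r1, r3b), v(r2, r4b)
    v14, v23 = v(r1, r4b), v(r2, r3b)
    v12, v34 = v(r1, r2), v(r3b, r4b)
    N = np.array([[1.0, f / 3.0],
                  [f / 3.0, 1.0]])
    off = (4.0 / 9.0) * f * (v12 + v34 - v13 - v24 - v14 - v23)
    V = np.array([[-(4.0 / 3.0) * (v13 + v24), off],
                  [off, -(4.0 / 3.0) * (v14 + v23)]])
    return N, V
\end{verbatim}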
For interpreting the results in terms of the potential for the corresponding single heavy-light meson ($Q\bar{q}$), a Gaussian form \begin{equation}f=\exp(-b_{s} k_{f}\sum_{i<j} r_{ij}^{2})\label{e7}\end{equation} of the gluonic field overlap factor $f$ is used in this ref. \cite{green2} for numerical convenience, and not the minimal area form. But for a particular geometry the two exponents (the minimal area and the sum of squared distances) in these two forms of $f$ are related, and thus for a particular geometry a comparison is possible between the parameter $k_f$ multiplying the area and the corresponding (different!) $k_f$ multiplying the sum of squares in eq. (14) of ref. \cite{green2}. We note that, after correcting for a ratio of 8 between the sum of squared distances (including two diagonals) and the area for the square geometry, the colour-number-generated relative difference for this geometry is just $5\%$: the coefficient is $0.075\times 8=0.6$ multiplying the sum of squared distances and $0.57$ multiplying the minimal area. But, as the precise form of $f$ is still under development (the latest work \cite{V. G. Bornyakov} has covered only a very limited selection of the positions of the tetraquark constituents) and the expression for the area in its exponent needs improvement, it is not certain precisely what value of $k_f$ best simulates QCD, and we have mainly worked with the approximate value of 0.5 that is also mentioned in ref.\cite{P. Pennanen} and is numerically easier to deal with. (It is to be noted that the soap film model of ref.\cite{V. G. Bornyakov} does not treat $k_f$ as a variational parameter. If that is interpreted as fixing $k_f$ at 1, this prescription might have been successful due to their selection of quark configurations being limited to planar ones; a work \cite{pennanen} by UKQCD that is limited to planar geometries also favors a value closer to 1. But their more general work \cite{P. Pennanen} resulted in a value of $k_f$ near 0.5.) For the area as well, ref. \cite{P. Pennanen} used an approximation: a good model of the area of the minimal surface could be that given in ref. \cite{green} as \begin{equation} S=\int_{0}^1 du\int_{0}^1 dv |(u\textbf{r}_{1\overline{3}}+(1-u)\textbf{r}_{\overline{4} 2})\times(v\textbf{r}_{2\overline{3}}+(1-v)\textbf{r}_{\overline{4} 1})| \end{equation} (work is in progress \cite{dawood} to judge the surface used in this model, and its area, from the point of view of differential geometry, and there are indications that this is quite close to a minimal surface). But the simulations reported in ref. \cite{P. Pennanen} were carried out with the $S$ in eq.(\ref{e6}) being "the average of the sum of the four triangular areas defined by the positions of the four quarks". Although for the tetrahedral geometry the $S$ used in ref. \cite{P. Pennanen} is as much as 26 percent larger than the corresponding minimal-like area of ref. \cite{green}, it can be expected that their fitted value of $k_f$ is reduced to partially compensate for this overestimate of the area $S$. Anyway, as we are calculating the dynamical effects of the model of ref. \cite{P. Pennanen}, we have used the form of $S$ that is used in that work. The area $S$ of ref. \cite{P. Pennanen} becomes (with a slight renaming) \\ \begin{eqnarray}S=\frac{1}{2}[{S(134)+S(234)+S(123)+S(124)}],\label{e22a}\end{eqnarray} where $S(ijk)$ is the area of the triangle joining the vertices at the positions of the quarks labelled $i$, $j$ and $k$.
In the notation of eqs.(\ref{e12})-(\ref{e14}) this becomes $S(134)=\frac{1}{2}|\textbf{y}_{1}\times \textbf{z}_{3}|=\frac{1}{2}|(\textbf{R}_{2}+\textbf{R}_{3})\times(\textbf{R}_{1}-\textbf{R}_{2})|$, $S(234)=\frac{1}{2}|\textbf{z}_{2}\times \textbf{z}_{3}|=\frac{1}{2}|(\textbf{R}_{3}-\textbf{R}_{1})\times(\textbf{R}_{1}-\textbf{R}_{2})|$, $S(123)=\frac{1}{2}|\textbf{y}_{3}\times \textbf{z}_{2}|=\frac{1}{2}|(\textbf{R}_{1}+\textbf{R}_{2})\times(\textbf{R}_{3}-\textbf{R}_{1})|$ and $S(124)=\frac{1}{2}|\textbf{y}_{3}\times \textbf{z}_{1}|=\frac{1}{2}|(\textbf{R}_{1}+\textbf{R}_{2})\times(\textbf{R}_{3}-\textbf{R}_{2})|$. Written in terms of the rectangular components $(x_{1},y_{1},z_{1})$ of $\textbf{R}_{1}$, $(x_{2},y_{2},z_{2})$ of $\textbf{R}_{2}$ and $(x_{3},y_{3},z_{3})$ of $\textbf{R}_{3}$, we have \begin{eqnarray} S(134)=\frac{1}{2}\Bigg\{\Big(x_2{}^2+y_2{}^2+z_2{}^2+2 \left(x_2 x_3+y_2 y_3+z_2 z_3\right)+x_3{}^2+y_3{}^2+z_3{}^2\Big)\hspace{3cm}\nonumber\\ \Big(x_1{}^2+y_1{}^2+z_1{}^2-2\left(x_1 x_2+y_1y_2+z_1 z_2\right)+x_2{}^2+y_2{}^2+z_2{}^2\Big)\hspace{3cm}\nonumber\\ -\Big(x_1 x_2+y_1y_2+z_1z_2-\left(x_2{}^2+y_2{}^2+z_2{}^2\right)+x_1 x_3+y_1 y_3+z_1z_3-\left(x_2 x_3+y_2y_3+z_2z_3\right)\Big)^{2}\Bigg\}^{\frac{1}{2}}. \label{e22} \end{eqnarray} Explicit rectangular expressions for $S(234), S(123)$ and $S(124)$ are similar. This form of $S$ contains square roots. For the kinetic energy part of the Hamiltonian matrix (see eq.(\ref{e15})), we have to differentiate an exponential of this square root. After differentiating, we can have negative powers of $S$, which, when integrated at the later stages, can produce singularities in the integrands, resulting in computationally too demanding nine-dimensional improper integrals. Thus we have availed ourselves of an approximated $S$, named $S_a$, which is a sum of different quadratic combinations of the quark positions. We chose $S_{a}$ by minimizing $\int d^{3}R_{1}d^{3}R_{2}d^{3}R_{3}(S-S_{a})^{2}$ with respect to the coefficients of the quadratic position combinations; that is, these coefficients are treated as variational parameters. The first (successful) form which we tried for $S_{a}$ was \begin{eqnarray}S_{a}=a \left(x_1{}^2+y_1{}^2+z_1{}^2\right)+b \left(x_2{}^2+y_2{}^2+z_2{}^2\right) +c \left(x_3{}^2+y_3{}^2+z_3{}^2\right)+{\acute d} x_1 x_2+\nonumber\\e y_1 y_2+f z_1 z_2+g x_2 x_3+h y_2 y_3+\acute{i} z_2 z_3+j x_1 x_3+k y_1 y_3+l z_1 z_3.~~~~~~~~~~~~\label{e21}\end{eqnarray} This contained 12 variational parameters $a,b,c,\ldots,l$. Minimization gave the values (reported to an accuracy of 4 digits, though in the computer program an accuracy of 16 digits was used) \begin{eqnarray} a = 0.4065, b = 0.4050, c = \ 0.3931, j = \ -0.0002, l = \ -0.0002. \nonumber\end{eqnarray} To the reported accuracy, the other parameters are zero. Here the limits of integration were from $-15$ to $15$ in $\texttt{GeV}^{-1}$. We also tried $S_{a}$ as $$\sum_{i=1}^3{a_{i}(x_{i}^{2}+y_{i}^{2}+z_{i}^{2})}+\sum_{i,j=1}^3{(b_{i,j}x_{i}y_{j}+c_{i,j}x_{i}z_{j}+d_{i,j}y_{i}z_{j})}+\sum_{i<j,j=2}^3{(e_{i,j}x_{i}x_{j}+f_{i,j}y_{i}y_{j}+g_{i,j}z_{i}z_{j})}$$ and $$\sum_{i=1}^3{(l_{i}x_{i}^{2}+m_{i}y_{i}^{2}+n_{i}z_{i}^{2})}+\sum_{i,j=1}^3{(b_{i,j}x_{i}y_{j}+c_{i,j}x_{i}z_{j}+d_{i,j}y_{i}z_{j})}+\sum_{i<j,j=2}^3{(e_{i,j}x_{i}x_{j}+f_{i,j}y_{i}y_{j}+g_{i,j}z_{i}z_{j})},$$ with the numbers of variational parameters being 39 and 45 respectively. Both the latter forms gave the same result as we got with 12 variational parameters, and hence the 12-parameter form was used in the section below.
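As a minimal numerical sketch of the four-triangle average area defined above (and of the resulting overlap factor of eq.(\ref{e6})), each $S(ijk)$ can be computed directly as half the magnitude of a cross product of two edge vectors:
\begin{verbatim}
import numpy as np

def triangle_area(p, q, r):
    # area of the triangle with vertices p, q, r
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def area_S(r1, r2, r3b, r4b):
    # S = (1/2) [S(134) + S(234) + S(123) + S(124)]
    return 0.5 * (triangle_area(r1, r3b, r4b) + triangle_area(r2, r3b, r4b)
                  + triangle_area(r1, r2, r3b) + triangle_area(r1, r2, r4b))

def f_overlap(r1, r2, r3b, r4b, b_s=0.18, k_f=0.5):
    # gluonic overlap factor f = exp(-b_s k_f S), positions in GeV^-1
    return np.exp(-b_s * k_f * area_S(r1, r2, r3b, r4b))

# check on a unit square (each triangle has area 1/2, so S = 1)
sq = [np.array(p, dtype=float) for p in
      [(0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 1, 0)]]
print(area_S(*sq), f_overlap(*sq))   # -> 1.0 and exp(-0.09)
\end{verbatim}
The quadratic approximation $S_a$ described above can then be fitted by minimizing $(S-S_a)^2$ integrated over such configurations.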
The 12-parameter form gives a dimensionless standard deviation, defined as $$\sqrt{\frac{\langle(S-S_{a})^{2}\rangle-(\langle S-S_{a}\rangle)^{2}}{\langle S^{2}\rangle}},$$ approximately equal to $21 \%$. Here, $$\langle X\rangle=\frac{\int(X)d^{3}R_{1}d^{3}R_{2}d^{3}R_{3}}{\int(1)d^{3}R_{1}d^{3}R_{2}d^{3}R_{3}}.$$ As this is not too small, in our main calculations we have made a minimal use of this further approximated area $S_a$ (only for the to-be-differentiated right $\sqrt{f}$ part (see eq.(\ref{e15})) of the kinetic energy term, and there only for the derivatives of the exponent). \section{Solving the integral equations} In eq.(\ref{e1}) for $k=l=1$ (a diagonal term), we used the linear independence of $\textbf{y}_{1}$, $\textbf{z}_{1}$ and $\textbf{R}_{1}$ (see eq.(\ref{e12})) to take $\chi_{1}(\textbf{R}_{1})$ outside the integrations w.r.t. $\textbf{y}_{1}$ and $\textbf{z}_{1}$. For the off-diagonal term with $k=1$ and $l=2$ we replaced $\textbf{y}_{1}$ and $\textbf{z}_{1}$ with $\textbf{R}_{2}$ and $\textbf{R}_{3}$, the Jacobian of the transformation being 8. For regulating the space derivatives of the exponent of $f$ (see the three sentences immediately following eq.(\ref{e22}) above) we temporarily replaced $S$ in it by its quadratic approximation $S_{a}$. As a result, we obtained the following equation: \begin{eqnarray} \bigg(\frac{3\omega}{2}-\frac{1}{2\mu_{12}}\nabla^{2}_{R_{1}}+24 C_{o} d^{2}-\frac{8}{3} \overline{C}-E_{c}+4m\bigg)\chi_{1}(\textbf{R}_{1})~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\+ \int d^3R_{2}d^3R_{3}\exp\Big(-b_{s}k_{f}S\Big) \exp\Bigg(-\frac{R^{2}_{1}+R^{2}_{2}+2R^{2}_{3}}{2d^{2}}\Bigg) \Bigg[-\frac{8}{6m(2\pi d^{2})^{3}}g_{1}\exp\Big(\frac{1}{2}b_{s}k_{f}S\Big)~~~~~~~~~~~~~~~~~~~~\nonumber\\ \exp\Big(-\frac{1}{2}b_{s}k_{f}S_{a}\Big) +\frac{32}{9(2\pi d^{2})^{3}}\Big(-4CR^{2}_{3}-2\overline{C}\Big)- \frac{8(E_{c}-4m)}{3(2\pi d^{2})^{3}}\Bigg]\chi_{2}(\textbf{R}_{2})=0,~~~~~~~~~~~~~\label{e2} \end{eqnarray} with, written to an accuracy of 4 digits, \begin{eqnarray} g_{1}=-1.4417+0.0258 x_1{}^2+0.0258 x_2{}^2\ +0.0254 x_3{}^2+0.0258 y_1{}^2+\nonumber\\ 0.0258 y_2{}^2 +0.0254 y_3{}^2+\ 0.0258 z_1{}^2+ 0.0258 z_2{}^2+ 0.0254 z_3{}^2.\label{e4} \end{eqnarray} For the consistency of $\xi_{k}(\textbf{y}_{k})$ and $\zeta_{k}(\textbf{z}_{k})$ with eq.(\ref{e16}), $\omega=1/md^{2}=0.416\texttt{GeV}$. For convenience in notation we take $C_{o}=-C/3$. Here, in the first channel, for $k=1$, the constituent quark masses have been replaced by the reduced mass $\mu_{12}=M_{1}M_{2}/(M_{1}+M_{2})$, where $M_{1}$ and $M_{2}$ are the masses of hypothetical mesons; a similar replacement has been done in ref.\cite{B.Masud}. At this stage we can fit $\bar C$ to a kind of "hadron spectroscopy" for our equal quark mass case: at large separation there is no interaction between $M_{1}$ and $M_{2}$, so the total center of mass energy in the large separation limit will be the sum of the kinetic energy of relative motion and the masses of $M_{1}$ and $M_{2}$, i.e. in the limit $R_{1}\longrightarrow\infty$ we have \begin{equation} \Bigg[-\frac{1}{2\mu_{12}}\nabla^{2}_{R_{1}}+M_{1}+M_{2}\Bigg]\chi_{1}(\textbf{R}_{1})=E_{c}\chi_{1}(\textbf{R}_{1}).\label{e8} \end{equation} By comparing, in this limit, eq.(\ref{e8}) and eq.(\ref{e2}) we have $M_{1}+M_{2}=4m+3\omega-8\bar{C}/3$.
(A use of the first term of eq.(\ref{e16}) for the colour-basis diagonal matrix element of eq.(\ref{e9}) gives $-4C/3=\mu \omega^2/2=\mu\omega/2m d^{2}$, giving $24 C_{o}d^{2}=3\omega/2$ for the reduced mass $\mu$ of a pair of equal mass quarks being $m/2$; the diagonal elements in any form of the $f$ model for the gluonic basis are the same as those for the colour basis.) By choosing $M_{1}+M_{2}=3\omega$ we have $\bar{C}=3m/2=0.45\texttt{GeV}$. This choice of the hypothetical meson masses is the one frequently used in ref. \cite{B.Masud} for an illustration of the formalism; when we incorporate flavour and spin dependence \cite{next}, we plan to fit our quark masses to actual meson spectroscopy through a fit like the one in ref. \cite{masud}. We can then choose to fit even the parameter $C$ or $C_0$ of our potential model to hadron spectroscopy rather than deciding it, as in ref. \cite{B.Masud} and the present work, through a combination of baryon radii and the harmonic oscillator model. But we do not see any reason why the qualitative effects (for example, an angle dependence; see the section below) pointed out through the present work should disappear for a phenomenologically explicit case. Completing our integral equations before finding a solution for the two $\chi$'s: for $k=2$ in eq.(\ref{e1}) we took $\chi_{2}(\textbf{R}_{2})$ outside of the integration for the diagonal term, while for the off-diagonal term we replaced $\textbf{y}_{2}$ and $\textbf{z}_{2}$ by $\textbf{R}_{1}$ and $\textbf{R}_{3}$ and replaced $S$ by $S_a$. This resulted in \begin{eqnarray} \bigg(\frac{3\omega}{2}-\frac{1}{2\mu_{34}}\nabla^{2}_{R_{2}}+24 C_{o} d^{2}-\frac{8}{3} \overline{C}-E_{c}+4m\bigg)\chi_{2}(\textbf{R}_{2})~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\+ \int d^3R_{1}d^3R_{3} \exp\Big(-b_{s}k_{f}S\Big)\exp\Bigg(-\frac{R^{2}_{1}+R^{2}_{2}+2R^{2}_{3}}{2d^{2}}\Bigg) \Bigg[-\frac{8}{6m(2\pi d^{2})^{3}} g_{1} \exp\Big(\frac{1}{2}b_{s}k_{f}S\Big)~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\ \exp\Big(-\frac{1}{2}b_{s}k_{f}S_{a}\Big)+\frac{32}{9(2\pi d^{2})^{3}}\Big(-4CR^{2}_{3}-2\overline{C}\Big)- \frac{8(E_{c}-4m)}{3(2\pi d^{2})^{3}}\Bigg]\chi_{1}(\textbf{R}_{1})=0.~~~~~~~~~~~~~~~~~~~~~\label{e3} \end{eqnarray} In the 2nd channel, for $k=2$, the constituent quark masses are replaced by the reduced mass $\mu_{34}=M_{3}M_{4}/(M_{3}+M_{4})$, where $M_{3}$ and $M_{4}$ are the masses of hypothetical mesons. Now we solve our two integral equations. As our space derivatives have been regularized, we no longer need the further-approximated $S_a$, and we replace it by the original $S$ in eq.(\ref{e3}). Below we take the Fourier transform of eq.(\ref{e2}). This gives us a nine-dimensional integral of, amongst others, $\exp(-b_{s}k_{f}S)$. The non-separability of $S$ did not allow us to formally solve the two integral equations for a non-trivial solution for $\chi_1$ and $\chi_2$ as in ref. \cite{B.Masud}, and we had to pre-specify a form for $\chi_{2}(\textbf{R}_{2})$ in eq.(\ref{e2}) and for $\chi_{1}(\textbf{R}_{1})$ in eq.(\ref{e3}). (As long as all the functions, including the meson wave functions and the gluonic field overlap factor $f$, are separable in $\textbf{R}_{1}$ and $\textbf{R}_{2}$, we can everywhere replace $\chi_1$ and $\chi_2$ by their analytical integrals, which themselves simply multiply if $\chi_1$ and $\chi_2$ do, solve the resulting linear equations for these integrals, and write the $T$ matrices and phase shifts directly in terms of these integrals.
This is what is done in refs.\cite{B.Masud}\cite{masud}, but it is hard to see how to generalize this very specialized technique to a case like ours, where the $f$ factor is not separable in $\textbf{R}_{1}$ and $\textbf{R}_{2}$.) Thus compelled to use the Born approximation (something already in use \cite{T. Barnes} for meson-meson scattering; our numerical results mentioned below also justify its use here), we used for $\chi_{1}(\textbf{R}_{1})$ and $\chi_{2}(\textbf{R}_{2})$ the solutions of eqs.(\ref{e2}) and (\ref{e3}) in the absence of interactions (say, by letting $k_f$ approach infinity, meaning $f=0$). We chose the coefficients of these plane wave solutions so as to make $\chi_{1}(\textbf{R}_{1})$ the Fourier transform of $\delta({P}_{1}-{P}_{c}(1))/P_{c}^{2}(1)$ and $\chi_{2}(\textbf{R}_{2})$ the Fourier transform of $\delta({P}_{2}-{P}_{c}(2))/P_{c}^{2}(2)$, with $P_{c}(1)$ and $P_{c}(2)$ defined below just after eq.(\ref{e20}). Thus we used \begin{equation}\chi_{2}(\textbf{R}_{2})=\sqrt{\frac{2}{\pi}}\exp\Big(i \textbf{P}_{2}.\textbf{R}_{2}\Big)\label{e18}\end{equation} inside the integral to get one equation (after a Fourier transform with respect to $R_{1}$ with kernel $e^{i \textbf{P}_{1}.\textbf{R}_{1}}$) as \begin{eqnarray} \Big(3\omega+\frac{P_{1}^{2}}{2\mu_{12}}-E_{C}\Big)\chi_{1}(\textbf{P}_{1})=~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-\sqrt{\frac{2}{\pi}}\frac{1}{(2\pi)^{\frac{3}{2}}}\int d^3R_{1}d^3R_{2}d^3R_{3} \exp\Bigg\{i\Big(\textbf{P}_{1}.\textbf{R}_{1}+\textbf{P}_{2}.\textbf{R}_{2}\Big)\Bigg\} \nonumber\\\exp\Big(-b_{s}k_{f}S\Big) \exp\Big(-\frac{R^{2}_{1}+R^{2}_{2}+2R^{2}_{3}}{2d^{2}}\Big) \nonumber \\ \Bigg[-\frac{8}{6m(2\pi d^{2})^{3}}g_{1}+\frac{32}{9(2\pi d^{2})^{3}}\Big(-4CR^{2}_{3}-2\overline{C}\Big)- \frac{8(E_{C}-4m)}{3(2\pi d^{2})^{3}}\Bigg],\label{e10} \end{eqnarray} with $\chi_{1}(\textbf{P}_{1})$ being the Fourier transform of $\chi_{1}(\textbf{R}_{1})$.
The formal solution~\cite{B.Masud} of eq.(\ref{e10}) can be written as \begin{eqnarray} \chi_{1}(\textbf{P}_{1})=\frac{\delta({P}_{1}-{P}_{c}(1))}{P_{c}^{2}(1)}-\frac{1}{\Delta_{1}(P_{1})}\frac{1}{16\pi^{5}d^{6}}\int d^3R_{1}d^3R_{2}d^3R_{3}\exp\Bigg\{i\Big(\textbf{P}_{1}.\textbf{R}_{1}+\textbf{P}_{2}.\textbf{R}_{2}\Big)\Bigg\} ~~~~~~~~~~~~~~~~~~~\nonumber\\ \exp\Big(-b_{s}k_{f}S\Big)\exp\Bigg(-\frac{R^{2}_{1}+R^{2}_{2}+2R^{2}_{3}}{2d^{2}}\Bigg) \Bigg[-\frac{8}{6m}g_{1}+\frac{32}{9}\Big(-4CR^{2}_{3}-2\overline{C}\Big)- \frac{8}{3}(E_{C}-4m)\Bigg], \end{eqnarray} with $$\Delta_{1}(P_{1})=\frac{P_{1}^{2}}{2\mu_{12}}+3\omega-E_{c}-i\varepsilon.$$ If we choose the \emph{x}-axis along $\textbf{P}_{1}$ and choose the \emph{z}-axis in such a way that the \emph{xz}-plane becomes the plane containing $\textbf{P}_{1}$ and $\textbf{P}_{2}$, the above equation becomes \begin{eqnarray} \chi_{1}(\textbf{P}_{1})=\frac{\delta({P}_{1}-{P}_{c}(1))}{P_{c}^{2}(1)}-\frac{1}{\Delta_{1}(P_{1})}F_{1}, \label{e19} \end{eqnarray} where, in the notation of eq.(\ref{e22}), \begin{eqnarray} F_{1}=\frac{1}{16\pi^{5}d^{6}}\int_{-\infty}^{\infty}dx_{1}dx_{2}dx_{3}dy_{1}dy_{2}dy_{3}dz_{1}dz_{2}dz_{3}\exp\Big\{i P(x_{1}+x_{2}\cos\theta+z_{2}\sin\theta)\Big\} \exp\Big(-b_{s}k_{f}S\Big)~~~~~~~~~~~~~~\nonumber\\ \exp\Bigg\{-\frac{x^{2}_{1}+y^{2}_{1}+z^{2}_{1}+x^{2}_{2}+y^{2}_{2}+z^{2}_{2}+2(x^{2}_{3}+y^{2}_{3}+z^{2}_{3})}{2d^{2}}\Bigg\}\nonumber\\ \left[-\frac{8}{6m}g_{1}+\frac{32}{9}\Big\{-4C(x^{2}_{3}+y^{2}_{3}+z^{2}_{3})-2\overline{C}\Big\}-\frac{8}{3}(E_C-4m)\right].\label{e40} \end{eqnarray} Here $\theta$ is the angle between $\textbf{P}_{2}$ and $\textbf{P}_{1}$, and because of elastic scattering $P_{1}=P_{2}=P$. From eq.(\ref{e19}) we can write, as in ref. \cite{B.Masud}, the $1,2$ element of the T-matrix as \\ \begin{eqnarray} T_{12}=2\mu_{12} \frac{\pi}{2}P_{c} F_{1}.\label{e20} \end{eqnarray} Here $P_{c}=P_{c}(2)=P_{c}(1)=\sqrt{2\mu_{12} (E_{c}-(M_{1}+M_{2}))}$ and $M_{1}=M_{2}=3\omega/2$; see the paragraph after eq.(\ref{e8}). Using the relation $$s=I-2i T=\exp(2i\Delta)$$ or $$\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) -2i \left( \begin{array}{cc} T_{11} & T_{12} \\ T_{21} & T_{22} \\ \end{array} \right)=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) +2i \left( \begin{array}{cc} \delta_{11} & \delta_{12} \\ \delta_{21} & \delta_{22} \\ \end{array} \right) $$ between the $s$ matrix and the $T$ matrix (actually in the form of the elements $\delta_{ij}=-T_{ij}$ for $i,j=1,2$), we obtained results for the phase shifts for different values of the center of mass kinetic energy $T_c$ and of the angle $\theta$ between $\textbf{P}_{1}$ and $\textbf{P}_{2}$; we have used the Born approximation to neglect the higher powers in the exponential series. We also probed different values of the parameter $k_f$. For a comparison, we also did the much less time consuming (but approximate) calculation using $S_{a}$ in place of $S$ in eq.(\ref{e10}). This allowed us to separate the dependence of the integrand on the $9$ variables into a product, resulting in three triple integrals that are simply multiplied, which makes the convergence in the numerical computation of the integral very fast.
Thus we had instead \begin{eqnarray} \chi_{1}(\textbf{P}_{1})=\frac{\delta({P}_{1}-{P}_{c}(1))}{P_{c}^{2}(1)}-\frac{1}{\Delta_{1}(P_{1})}F, \label{e5} \end{eqnarray} with \\ \begin{eqnarray} F=\frac{1}{16\pi^{5}d^{6}}\Bigg[\int_{-\infty}^{\infty} dx_{1}dx_{2}dx_{3}\Bigg\{f_{1}(x_{1},x_{2},x_{3}) \exp\Bigg[-\frac{{x_{1}}^2+x_{2}^{2}+2x_{3}^{2}}{2d^{2}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-b_{s}k_{f} \Big(a x_{1}^{2}+dx_{1}x_{2}+jx_{1}x_{3}+b x_{2}^{2}+gx_{2}x_{3}+cx_{3}^{2}\Big)+i P(x_{1}+x_{2}\cos\theta)\Bigg]\Bigg\}Q(y)\times Q(z) + \nonumber\\ \int_{-\infty}^{\infty} dy_{1}dy_{2}dy_{3}\Bigg\{f_{2}(y_{1},y_{2},y_{3}) \exp\Bigg[-\frac{{y_{1}}^2+y_{2}^{2}+2y_{3}^{2}}{2d^{2}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-b_{s}k_{f} \Big(a y_{1}^{2}+ey_{1}y_{2}+ky_{1}y_{3}+b y_{2}^{2}+hy_{2}y_{3}+cy_{3}^{2}\Big)\Bigg]\Bigg\}Q(x)\times Q(z)~~~\nonumber\\ + \int_{-\infty}^{\infty} dz_{1}dz_{2}dz_{3}\Bigg\{f_{3}(z_{1},z_{2},z_{3}) \exp\Bigg[-\frac{{z_{1}}^2+z_{2}^{2}+2z_{3}^{2}}{2d^{2}}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\-b_{s}k_{f} \Big(a z_{1}^{2}+fz_{1}z_{2}+lz_{1}z_{3}+b z_{2}^{2}+\acute{i}z_{2}z_{3}+cz_{3}^{2}\Big)+i Pz_{2}\sin\theta\Bigg]\Bigg\}Q(x)\times Q(y) \Bigg]. \end{eqnarray} Here $f_{1}(x_{1},x_{2},x_{3})=-\frac{8}{6m} \Big(-1.4417+0.0258 {x_{1}}^2+0.0254 {x_{2}}^2-{4.1914\times 10^{-7}} x_{1} x_{3}+0.0258 {x_{3}}^2\Big)+ \\~~~~~~~~~~~~~~~~~~~~~~~~~~~ \frac{32}{9} \Big(-4 C x_{3}^{2}-2 \overline{C}\Big)-\frac{8}{3} \Big(E_{c}-4 m\Big)$, $f_{2}(y_{1},y_{2},y_{3})=-\frac{8}{6m}\Big(0.0258 {y_{1}}^2+0.0254 {y_{2}}^2-{4.1914\times 10^{-7}} y_{1} y_{3}+ 0.0258 {y_{3}}^2 \Big)-\frac{128}{9} C y_{3}^{2}$, $f_{3}(z_{1},z_{2},z_{3})=-\frac{8}{6m}\Big(0.0258 {z_{1}}^2+0.0254 {z_{2}}^2+{5.1396\times 10^{-6}} z_{1} z_{3}+0.0258 {z_{3}}^2\Big)-\frac{128}{9} C z_{3}^{2}$,\\ $Q(x)=\int_{-\infty}^{\infty} dx_{1}dx_{2}dx_{3}\exp\Big[-\frac{{x_{1}}^2+x_{2}^{2}+2x_{3}^{2}}{2d^{2}}-b_{s}k_{f} \Big(a x_{1}^{2}+dx_{1}x_{2}+jx_{1}x_{3}+b x_{2}^{2}+gx_{2}x_{3}+cx_{3}^{2}\Big)+\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~i P(x_{1}+x_{2}\cos\theta)\Big]$, $Q(y)=\int_{-\infty}^{\infty} dy_{1}dy_{2}dy_{3}\exp\Big[-\frac{{y_{1}}^2+y_{2}^{2}+2y_{3}^{2}}{2d^{2}}-b_{s}k_{f} \Big(a y_{1}^{2}+ey_{1}y_{2}+ky_{1}y_{3}+b y_{2}^{2}+hy_{2}y_{3}+cy_{3}^{2}\Big)\Big]$\\ $Q(z)=\int_{-\infty}^{\infty} dz_{1}dz_{2}dz_{3}\exp\Big[-\frac{{z_{1}}^2+z_{2}^{2}+2z_{3}^{2}}{2d^{2}}-b_{s}k_{f} \Big(a z_{1}^{2}+fz_{1}z_{2}+lz_{1}z_{3}+b z_{2}^{2}+\acute{i}z_{2}z_{3}+cz_{3}^{2}\Big)+i Pz_{2}\sin\theta\Big].$\newline For this choice of $S$, we also calculated the phase shifts; they are reported in the next section. By treating eq.(\ref{e3}) in the same fashion as eq.(\ref{e2}) and using the Born approximation \begin{equation}\chi_{1}(\textbf{R}_{1})=\sqrt{\frac{2}{\pi}}\exp\Big(i \textbf{P}_{1}.\textbf{R}_{1}\Big)\label{e17}\end{equation} it was checked that the results for the phase shifts remain the same. Actually, eq.(\ref{e3}) and eq.(\ref{e2}) become identical if we interchange $\textbf{R}_{1}$ and $\textbf{R}_{2}$. \section{Results and conclusion} Fig.\ref{graph1} shows our results, with $k_f$ defined by eq.(\ref{e6}) taken as 0.5, for the phase shifts for a selection of center-of-mass kinetic energies and different angles between $\textbf{P}_{1}$ and $\textbf{P}_{2}$ (Some numerical uncertainty appears at $0.15\texttt{GeV}$ for $\theta=0$.
When we further explored the region between $0.14\texttt{GeV}$ and $0.16\texttt{GeV}$, fluctuations appeared in the results. For smoothness of the graph we have omitted the data point at $0.15\texttt{GeV}$ for $\theta=0$ in fig.\ref{graph1} and used interpolated data points there.) We found no numerical fluctuations for kinetic energies above $0.16\texttt{GeV}$, and thus we conclude that in this kinematical range the scattering angle has a large effect on the phase shifts, indicating a true gluonic field effect. (The origin of this angle dependence is the exponent $S$, which essentially models the expectation value of the Wilson loop $W_{12}$ in a pure gluonic theory; we do not get any angle dependence if this $S$ is not used. So the angle dependence emerges from the gluonic field related to the area law, and is thus a QCD effect.) The phase shifts grow as the scattering angle is increased. We noted that a faster convergence of the nine-dimensional integration (see eq.(\ref{e40})) for large kinetic energy values was possible for smaller values of the parameter $k_f$; a decrease of 0.1 in $k_{f}$ reduced the CPU time by at least a factor of three. Thus we used the smaller value of $k_f=0.5$ mentioned in ref.\cite{P. Pennanen} to get phase shifts for a larger set of kinetic energies, resulting in smoother graphs. For the above-mentioned value $C=-0.0249281\texttt{GeV}^{3}$ (meaning $\omega=0.665707\texttt{GeV}$ and $d=0.441357$ fm) used in ref.\cite{ackleh} (giving the 99.4\% overlap of the wave functions) we found that, at $T_{c}=0.1\texttt{GeV}$, there is about a $1$ degree change in phase shift for a $30$ degree change in the scattering angle $\theta$, even larger in magnitude than the phase shifts of fig.\ref{graph1} for the corresponding $T_c$ obtained with our routine value $d=0.556$ fm. So we can say that the characteristic angle dependence will remain if we study the scattering of some realistic meson-meson system, taking the meson sizes accordingly and adjusting the parameters of the Gaussian wave functions to simulate eigenfunctions of a realistic linear-plus-Coulombic potential. For comparison with the cruder forms of $f$ used previously, fig.\ref{graph2} shows the average of these $k_f=0.5$ phase shifts over our selection of angle $\theta$ values, along with the corresponding phase shifts for the other forms of $f$, i.e., the exponent in $f$ being proportional to $S_{a}$, to $\sum_{i<j} r_{ij}^{2}$, and to zero; the phase shifts were found to be independent of the angle $\theta$ for all these older forms of $f$, hence no angle-average was needed for them. This figure shows that, in comparison to $k_{f}=0$ (the sum-of-two-body-potentials model), we get a much smaller coupling with $S$, $S_{a}$ or the Gaussian form in $f$. The introduction of a many-body interaction in the previous (Gaussian) form of $f$ resulted in a reduced meson-meson interaction. In ref.\cite{B.Masud} this reduction was noted as decreased meson-meson phase shifts. So there is less chance of forming a bound state once the sum-of-two-body approach is modified, i.e., the inclusion of gluonic field effects significantly decreases the coupling between the two mesons in a $q^{2}\overline{q}^{2}$ system. The phase shifts are much less than $1$ radian, which indicates the validity of the Born approximation. The phase shifts we get are smaller than those reported by others who used the Born approximation \cite{T. Barnes} but no $f$ factor in the off-diagonal terms.
It is to be noted that the $S_a$ form also does not result in any angle dependence, although, in contrast to the $f=1$ and Gaussian forms, there is apparently no a priori reason to expect such an angle independence for the use of $S_a$. This may be because $S_a$ is almost Gaussian, with a little admixture of $x_1 x_3$ and $z_1 z_3$ terms (see eq.(\ref{e21}) and the parameter values reported just below it), or because $S_a$ can be converted to a Gaussian form by completing squares. As for a comparison of the $S_a$ phase shifts with the angle-average of the $S$ phase shifts, it can be pointed out that the peak of the phase shifts with $S_a$ is lower than that with the original $S$, but the shape remains identical. Perhaps this indicates that $S_{a}$ simulates well some variations resulting from the original $S$ form. In fig.\ref{graph2}, if we compare the graph of the Gaussian form with the angle-averaged phase shifts using $S$ in $f$, we find that, as compared to the Gaussian form, the graphs of the other forms are closer to the $k_{f}=0$ one, though the peak of the $k_{f}=0$ graph is still very large as compared to both the Gaussian form of $f$ and $S$ in $f$; see fig.\ref{graph7}, which clarifies any possible ambiguity in fig.\ref{graph2} about the $k_f=0$ results. Fig.\ref{graph3} reports most of the results for the higher value 0.57 of $k_f$ mentioned in ref.\cite{P. Pennanen}. This value is more precise for their form of the model, with the crude area expression in its exponent. But our numerical calculations for this value turned out to be more demanding and thus were done for a smaller selection of kinetic energies. The numerical uncertainties for this value occur at $\theta=\frac{\pi}{2}$ for kinetic energies between $0.11\texttt{GeV}$ and $0.12\texttt{GeV}$; the results for this value of $\theta$ are in fig.\ref{graph5}. A value of $k_f=1.0$, higher than the 0.5 and 0.57 mentioned above, has been reported in ref.\cite{pennanen}. Although that work analyzes a relatively limited collection of geometries (only squares and tilted rectangles), we have tried to see the effects of using a higher $k_f$. The numerical problems for large $k_f$, implied by the above-mentioned numerical convenience of smaller $k_f$, did not allow us to obtain results for $k_f=1$ in a manageable time even for $T_{c}=0.1\texttt{GeV}$. The best we could do was a number of calculations for $k_f=0.8$; the resulting phase shifts are shown in fig.\ref{graph4}, except those for $\theta=\pi/2$, which are reported in fig.\ref{graph6} and show some numerical uncertainties at the kinetic energy $0.13\texttt{GeV}$ and at $0.49$--$0.51\texttt{GeV}$. Based on fig.\ref{graph4} and fig.\ref{graph6}, we expect that for higher values of $k_{f}$ the results will remain qualitatively the same, and we do not expect any new feature to emerge for $k_{f}=1$. We did a partial wave analysis for the phase shifts in fig.\ref{graph1}. For this we projected our angle dependence onto the $m=0$ spherical harmonics (a minimal numerical sketch of this projection is given below). The angle dependence is independent of the azimuthal angle $\phi$, so the partial wave expansion contains only terms independent of $\phi$, i.e., terms with $m=0$. This analysis shows that below $0.2\texttt{GeV}$ the S-wave is very much dominant, as can be seen from the sharp rise of the graphs towards the lower side near $0.2\texttt{GeV}$ in fig.\ref{graph8} and fig.\ref{graph9}; it is also supported by fig.\ref{graph1}, which shows $\theta$ independence below $0.2\texttt{GeV}$, i.e., a pure S-wave.
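To make the projection explicit, the following minimal sketch (in Python; not the code used for this paper) projects an angle-dependent phase shift onto the $m=0$ spherical harmonics, i.e., onto Legendre polynomials in $\cos\theta$. The toy angular profile and the normalization convention are illustrative assumptions only, not our computed phase shifts; for a profile symmetric about $\theta=\pi/2$ the odd-$l$ coefficients come out zero, leaving only S, D, G, $\dots$ waves.
\begin{verbatim}
# Minimal sketch of the m=0 partial-wave projection described above.
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.integrate import quad

def partial_wave_coefficients(delta, l_max):
    # c_l = (2l+1)/2 * int_0^pi delta(theta) P_l(cos theta) sin theta dtheta
    coeffs = []
    for l in range(l_max + 1):
        sel = np.zeros(l + 1); sel[l] = 1.0      # selects P_l in legval
        val, _ = quad(lambda th: delta(th) * legval(np.cos(th), sel)
                      * np.sin(th), 0.0, np.pi)
        coeffs.append(0.5 * (2 * l + 1) * val)
    return coeffs

# A toy delta(theta), symmetric about pi/2 (hypothetical shape)
delta = lambda th: 0.01 + 0.002 * np.cos(th) ** 2
for l, c in enumerate(partial_wave_coefficients(delta, 5)):
    print("l =", l, " c_l = %+.2e" % c)          # odd-l coefficients ~ 0
\end{verbatim}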
Our partial wave analysis shows that only even partial waves are present. Furthermore, the partial wave phase shifts decrease as we go from the S-wave to the I-wave, as is clear from the large reciprocals in fig.\ref{graph9}, and beyond the G-wave they become negligible as compared to the corresponding S-wave phase shifts. So here in figs. (\ref{graph8},\ref{graph9}) only the S/D and S/G ratios are plotted. The reason for the absence of odd partial waves is that our phase shifts are symmetric around $\theta=\frac{\pi}{2}$, and the product of an even and an odd function is an odd function, giving a zero result after integration. Thus the phase shift differs with angle, and the partial wave analysis of this angle dependence indicates the presence of $l=2,4$ spherical harmonics along with the angle-independent $l=0$. We have found this extra presence of D and G waves in the angle dependence only when we use the $e^{-{\rm const}\times{\rm area}}$ form of $f$. When $f$ is spherically symmetric, i.e., of the old Gaussian form, no D or G waves appear. Thus the D and G waves must be a property of $e^{-{\rm const}\times{\rm area}}$. So $e^{-{\rm const}\times{\rm area}}$ may couple an $l=0$ meson-meson system to $l=2,4,\dots$ systems. This suggests two possibilities: 1) an $l=0$ meson-meson system may have $T$-matrix elements and phase shifts to $l=2,4,\dots$ final-state meson-meson systems, and 2) $l=0$ may couple to $l=2,4,\dots$ states as intermediate states in a polarization potential \cite{wong}, through $e^{-{\rm const}\times{\rm area}}$. \section*{Acknowledgments} We are thankful to the Higher Education Commission (HEC) of Pakistan for its financial support through Grant No. 17-5-3(Ps3-056) HEC/Sch/2006.
\section{Introduction} \subsection{QED strong fields} In Quantum ElectroDynamics (QED) an electric field, $E$, should be treated as strong if it exceeds the Schwinger limit: $E\ge E_S$, where $$ E_S=\frac{m_ec^2}{|e|\lambdabar_C} $$ (see \cite{schw}). Such a field is potentially capable of separating a virtual electron-positron pair, providing an energy which exceeds the electron rest mass energy, $m_ec^2$, to a charge, $e=-|e|$, over an acceleration length as small as the Compton wavelength, $$ \lambdabar_C=\frac\hbar{m_ec}\approx 3.9\cdot10^{-11}{\rm cm}. $$ Spatial scales associated with the field should be greater than $\lambdabar_C$. Typical effects in QED-strong fields are: electron-positron pair creation from high-energy photons, high-energy photon emission from electrons or positrons, and electron-positron cascade development (see \cite{Mark}--\cite{kb}) as the result of the first two processes. Less typical, and often forbidden by conservation laws, is direct pair separation from vacuum. This effect may only be significant if the field invariants as defined in \cite{ll}, $F_1=({\bf B}\cdot{\bf E})$, $F_2=E^2-B^2$, are large enough. Indeed, the considerations relating to pair creation are applicable only in the frame of reference in which $B=0$ or ${\bf B}\|{\bf E}$. The electric field in this frame of reference, $E_0^2=F_2/2+\sqrt{F^2_1+F^2_2/4}$, exceeds the Schwinger limit only if the field invariants are sufficiently large. Here the case of {\it weak} field invariants is considered: \begin{equation}\label{fielditself} |F_1|\ll E_S^2, \qquad |F_2|\ll E_S^2, \end{equation} and any corrections of the order of $F_1/E_S^2,\,F_2/E_S^2$ are neglected (see \cite{dep} about such corrections). So neither the cases in which the field {\it itself} is too strong nor those in which its spatial scale is too short are considered here. Below, the term 'strong field' is applied only to the field experienced by a particle (electron or positron). Particularly, a QED-strong electric field, \begin{equation}\label{forstar} E_{0}=\frac{|{\bf p}\times{\bf B}|}{m_ec}, \end{equation} may be exerted on relativistic charged particles with momentum, ${\bf p}$, gyrating in the strong magnetic field, ${\bf B}$, of a neutron star, as the result of the Lorentz transformation of the electromagnetic field. The field as in Eq.(\ref{forstar}) may exceed the Schwinger limit, as long as $|{\bf p}|\gg m_ec$ and/or the magnetic field is strong enough. \subsection{QED-strong laser fields} In a laboratory experiment QED-strong fields may be created in the focus of an ultra-bright laser. Consider QED effects in a relativistically strong pulsed field \cite{Mark}: \begin{equation}\label{eq:strong} \sqrt{{\bf a}^2}\gg1,\qquad{\bf a}=\frac{e{\bf A}}{m_ec^2}, \end{equation} ${\bf A}$ being the vector potential of the wave. In the laboratory frame of reference the electric field is not QED-strong for the achieved laser intensities, $J\sim10^{22}\ {\rm W/cm^2}$ \cite{1022}, and not even for the $J\sim10^{25}\ {\rm W/cm^2}$ intensity projected \cite{ELI}. Moreover, both field invariants vanish for 1D waves, reducing the probability of direct pair creation from vacuum by virtue of the laser field's proximity to a 1D wave. Nonetheless, a counter-propagating particle in a 1D wave, ${\bf a}(\xi),\,\xi=\omega t-({\bf k}\cdot{\bf x})$, may experience a QED-strong field, $E_0=|d{\bf A}/d\xi|\omega({\cal E}-p_\|)/c$, because the laser frequency, $\omega$, is Doppler upshifted in the frame of reference co-moving with the electron.
Herewith the electron dimensionless energy, ${\cal E}$, and its momentum, ${\bf p}$, are related to $m_ec^2$ and $m_ec$ correspondingly, and the subscript $\|$ denotes the vector projection on the direction of the wave propagation. The Lorentz-transformed field exceeds the Schwinger limit, if \begin{equation}\label{QED-strong} \chi\sim E_0/E_S =\frac{\lambdabar_C}{\lambdabar}({\cal E}-p_\|)\left|\frac{d{\bf a}}{d\xi}\right|\gg1, \end{equation} where $\lambdabar=c/\omega$. Note that the above-mentioned restriction on the field spatial scale is here assumed to be fulfilled for the {\it upshifted} wave frequency: \begin{equation}\label{eq:boundfreq2} \omega({\cal E}-p_\|)\ll c/\lambdabar_C. \end{equation} Nevertheless, the condition as in (\ref{QED-strong}) may be fulfilled as long as the field is strong enough. Numerical values of the parameter, $\chi$, may be conveniently expressed in terms of the local instantaneous (not time-averaged!) intensity of the laser wave, $J$: $$ \chi=\frac32 \frac{\lambdabar_C}{\lambdabar}({\cal E}-p_\|)\left|\frac{d{\bf a}}{d\xi}\right| \approx 0.7\frac{({\cal E}-p_\|)}{10^3}\sqrt{\frac{J}{10^{23}[{\rm W/cm}^2]}}; $$ the choice of the numerical factor of $3/2$ is explained below. For a counter-propagating electron of energy $\sim 1$ GeV, that is for $({\cal E}-p_\|)\sim 4\cdot 10^3$, the QED-strength parameter is greater than one even with the laser intensities already achieved. The condition $\chi>1$ also separates the parameter range of the Compton effect from that of the Thomson effect, under the condition of Eq.(\ref{eq:strong}). The distinctive feature of the Compton effect is an electron recoil, which is significant if a typical emitted photon energy, $\hbar\omega_c$, is comparable with the electron energy \cite{kogaetal}. Their ratio, $\lambdabar_C\omega_c/(c{\cal E})$, equals $\chi$ as defined in Eq.(\ref{eq:chifirst}) up to the proper numerical factors (cf. Eq.(\ref{eq:omegaccl})). It should be noted, however, that under the conditions discussed in (\ref{eq:strong},\ref{eq:boundfreq2}) the Compton effect drastically changes. \subsection{Classical radiation loss rate as an input parameter for QED} Radiation processes in QED-strong fields are entirely controlled by the local value of $E_0$ (this statement may be found in \cite{lp}, \S101). A good signature for $E_0$ is the radiation loss rate of a charge, as introduced in classical electrodynamics: \begin{equation} I_{\rm cl}(E_0)=\frac{2e^4E^2_0}{3m_e^2c^3}=-\frac{2e^2f^\mu f_\mu}{3m_e^2c^3}, \end{equation} which is a Lorentz invariant. Therefore, it may be expressed in any frame of reference in terms of the 4-square of the Lorentz 4-force, $f^\mu=(f^0,{\bf f})={\cal E}({\bf f}^{(3)}\cdot{\bf v}/c,{\bf f}^{(3)})$, where ${\bf v}$ is the velocity vector and: $$ {\bf f}^{(3)}=e{\bf E}+\frac1c[{\bf v}\times{\bf B}], $$ $$ f^0=e{\bf E}\cdot{\bf p},\qquad {\bf f} = e{\cal E}{\bf E}+e[{\bf p}\times{\bf B}]. $$ So, the QED-strength of the field may be determined by evaluating $I_{\rm cl}$ and its ratio to $I_C=I_{\rm cl}(2E_S/3)$: \begin{equation}\label{eq:chifirst} \chi= \sqrt{\frac{I_{\rm cl}}{I_C}},\qquad I_C=\frac{8e^2c}{27\lambdabar_C^2}. \end{equation} If $\chi\ge 1$ then the actual radiation loss rate differs from $I_{\rm cl}$; however, it may be re-calculated using $I_{\rm cl}$ as the sole input parameter.
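As a quick illustration of these estimates, the following minimal sketch (in Python; the helper name and sample cases are ours, chosen to match the examples quoted above) evaluates $\chi$ from the intensity-based formula:
\begin{verbatim}
# chi ~ 0.7 * ((E - p_par)/1e3) * sqrt(J / 1e23 W cm^-2), as quoted above.
import math

mc2_MeV = 0.511                                  # electron rest energy

def chi_estimate(energy_GeV, J_W_cm2, counter_propagating=True):
    E = energy_GeV * 1.0e3 / mc2_MeV             # dimensionless energy
    # for an ultrarelativistic counter-propagating electron, E - p_par ~ 2E
    doppler = 2.0 * E if counter_propagating else E
    return 0.7 * (doppler / 1.0e3) * math.sqrt(J_W_cm2 / 1.0e23)

print(chi_estimate(1.0, 1.0e22))  # ~0.9: 1 GeV electron, achieved intensity
print(chi_estimate(1.0, 1.0e23))  # ~2.7: projected intensity
\end{verbatim}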
\subsection{Possible realization of QED-strong fields in laboratory experiments} The first experiments which demonstrated QED effects in a laser field were performed using an electron beam of energy $\approx 46.6$ GeV (see \cite{bb}), which interacted with a counter-propagating terawatt laser pulse of intensity $J\sim10^{18}{\rm W/cm}^2$. A reasonably high value of $\chi\approx 0.4$ was achieved; however, the laser field was not relativistically strong, with $|{\bf a}|\le 1$. The high value of $\chi$ was achieved at the cost of a very high energy of the upshifted laser wave: the transformed photon energy amounted to $\sim 0.1$ MeV, which is not small as compared to the electron rest mass energy, $m_ec^2\approx 0.51$ MeV. It could be interesting to upgrade this experiment towards the highest achievable laser intensities $\ge 2\cdot 10^{22}\ {\rm W/cm}^2$ with the use of a wakefield-accelerated beam of electrons of energy $\sim1$ GeV (see \cite{heg}). First, a 2--3 times larger value of the QED parameter, $\chi$, may be achieved, with an exponentially increased probability of pair creation. Second, such an experiment would be highly relevant to the processes which will occur in the course of laser-plasma interaction at even stronger laser intensities. Indeed, counter-propagating electrons can be generated while a laser pulse is interacting with a solid target. For this reason, the radiation effects in the course of laser-plasma interaction are widely investigated (see \cite{kogaetal,lau03}). With future progress in laser technology, and with the achievement of intensities of $J\sim 10^{23}\ {\rm W/cm}^2$, laser-plasma interactions will be strongly modified by QED effects, so that the capability to model these effects now is of interest. \subsection{Radiation back-reaction} The principal matter in this paper is an account of the radiation back-reaction acting on a charged particle. The radiation losses reduce the particle energy, affecting both the particle motion and the radiation losses themselves. This effect can be consistently described by solving the dynamical equation, which appears to reduce to the modified Lorentz-Abraham-Dirac equation as derived in \cite{mine},\cite{our}. The new element here is that in this equation, when applied to QED-strong fields, the radiation back-reaction on the electron motion should be expressed in terms of the emission probability. In Section II the emission from an electron in the field of a relativistically strong wave is discussed within the framework of classical electrodynamics. The transition from the vector amplitude of emission as described in \cite{jack} to the instantaneous spectrum of emission is treated in terms of the {\it formation time}, a concept which is not often used in classical electrodynamics. For QED-strong laser pulses the calculation of the emission probability is given in Section III. The radiation processes in QED-strong fields appear to be reducible to a frequency downshift in the classical vector amplitude of emission resulting from the electron recoil, accompanied by a contribution to the emission associated with the magnetic moment of the electron. The radiation effect on the electron motion in strong fields is discussed in Section IV. The Conclusion summarizes the results and discusses future prospects. \section{Emission in relativistically strong fields} In this section QED effects are not yet considered, but the electromagnetic wave field is assumed to be relativistically strong.
Angular and frequency distributions of electron emission are discussed. The goal is to establish a connection between the methods usually applied to calculate emission in weaker fields, on the one hand, and the conceptually different QED approach, on the other. For relativistically strong laser fields, even though QED effects do not yet come into play, some concepts of the QED emission theory are already applicable and useful, among them the formation time of emission and the instantaneous spectrum of emission. In weaker fields, especially for the particular case of a harmonic wave, the emitted power is given by an integral over many periods of the wave. This standard approach, however, may become meaningless as applied to ultra-strong laser pulses, for many reasons. These pulses may be so short that they cannot be thought of as harmonic waves. Their fields may be strong enough to force an electron to expend its energy on radiation in less than a single wave period. However, an even more important point is that the radiation loss rate, and even the spectrum of radiation, is no longer an integral characteristic of the particle motion through a number of wave periods: a local dependence of emission on both particle and field characteristics is typical for strong fields. \subsection{Transformed space-time} A method facilitating many derivations involves the introduction of a specific time-space coordinate frame. Consider a 1D wave field taken in the Lorentz gauge: $$ a^\mu=a^\mu(\xi),\qquad \xi=(k\cdot x),\qquad(k\cdot a)=0, $$ $a^\mu=(0,{\bf a})$, $k^\mu$ and $x^\mu$ being the 4-vectors of the potential, the wave and the coordinates. Herewith the 4-dot-product is introduced in the usual manner: $$(k\cdot x)=k^\mu x_\mu = \omega t-({\bf k}\cdot{\bf x})$$ etc. Space-like 3-vectors (i.e., the first to the third components of a 4-vector), in contrast with 4-vectors, are denoted in bold; 4-indices are denoted with Greek letters. Note that a metric signature $(+,-,-,-)$ is used; therefore, for space-like vectors the 3D scalar product and the 4-dot-product have opposite signs, particularly: $$ \left(\frac{d{\bf a}}{d\xi}\right)^2= -\left(\frac{da}{d\xi}\right)^2\ge 0. $$ Introduce a Transformed Space-Time (TST): $$ x^{0,1}=(ct\mp x_\|)/\sqrt{2},\qquad x^{2,3}={\bf x}_\perp, $$ the subscript $\perp$ denoting the vector components orthogonal to ${\bf k}$. The properties of the TST provide a convenient description for the classical motion of an electron in the 1D wave field. First, note that $$ dx^0=\frac{\lambdabar d\xi}{\sqrt{2}}, \qquad p^0=\frac{\lambdabar(k\cdot p)}{\sqrt{2}}, \qquad (p\cdot k)= \frac{{\cal E}-p_\|}{\lambdabar}. $$ Second, the generalized momentum components, $p^0$ and ${\bf p}_{\perp 0}={\bf p}_\perp+{\bf a}$, are conserved. Third, the metric tensor in the TST is: $$ G^{01}=G^{10}=1,\qquad G^{22}=G^{33}=-1, \qquad G^{\mu\nu}=G_{\mu\nu}. $$ Finally, the identity, ${\cal E}^2=p^2+1$, being expanded in the TST metric, gives: $$ p^1=\frac{1+{\bf p}_\perp^2}{2p^0}= \frac{1+({\bf p}_{\perp 0}-{\bf a})^2}{\sqrt{2}\lambdabar(k\cdot p)}. $$ The classical radiation loss rate is found by expanding the Lorentz force squared in the TST: \begin{equation}\label{eq:classicintense} I_{\rm cl}=-\frac{2e^2}{3c}\frac{(f\cdot f)}{m_e^2c^2}=\frac{2e^2c(k\cdot p)^2}{3}\left(\frac{d{\bf a}}{d\xi}\right)^2.
\end{equation} The derivative over $x^0$ or, equivalently, over $\xi$ is conveniently related to the derivative over the proper time for the electron: \begin{equation} \frac{d}{d\tau}={\cal E}\left[\frac\partial{\partial t}+({\bf v}\cdot\frac\partial{\partial {\bf x}})\right]=c(k\cdot p)\frac{d}{d\xi}. \end{equation} \subsection{Classical trajectory and momenta retarded product} Many characteristics of emission may be expressed in terms of the relationship between the 4-momenta of the electron at different instants: \begin{equation}\label{eq:classic} p^\mu(\xi)=p^\mu(\xi^\prime)-\delta a^\mu+ \frac{2(p(\xi^\prime)\cdot \delta a)-(\delta a)^2}{2(k\cdot p)}k^\mu, \end{equation} where $$\delta a^\mu=a^\mu(\xi)-a^\mu(\xi^\prime).$$ As a consequence of Eq.(\ref{eq:classic}), one can obtain the expression for the {\it Momenta Retarded Product} (MRP): \begin{equation}\label{eq:pp} (p(\xi)\cdot p(\xi^\prime))=1-\frac{(\delta a)^2}2=1+\frac{(\delta{\bf a})^2}2. \end{equation} Note that the MRP is given by Eq.(\ref{eq:pp}) for an arbitrary difference between $\xi$ and $\xi^\prime$, but only for the particular case of the 1D wave field. However, the limit of this formula as $|\xi-\xi^\prime|\rightarrow 0$, which is as follows: $$ (p(\xi)\cdot p(\xi^\prime))|_{|\xi-\xi^\prime|\rightarrow 0}\approx 1+ \frac12(\xi-\xi^\prime)^2\left|\frac{d{\bf a}}{d\xi}\right|^2 $$ or, in terms of the MRP in the proper time, $\tau$: \begin{equation}\label{eq:gencovariance} (p(\tau)\cdot p(\tau+\delta\tau))=1-(\delta\tau)^2\frac{(f\cdot f)}{2m_e^2c^2}, \end{equation} has a much wider range of applicability. Eq.(\ref{eq:gencovariance}) is derived from the equation of motion: $$ \frac{dp^\mu}{d\tau}=\frac{f^\mu}{m_ec}, $$ using the identities: $$ (p(\tau)\cdot p(\tau))=1,\qquad (p(\tau)\cdot f(\tau))=0, $$ $$ \frac{d(p(\tau)\cdot p(\tau+\delta\tau))}{d(\delta\tau)}=-\delta\tau\left(\frac{dp}{d\tau}\cdot\frac{dp}{d\tau}\right)+O((\delta\tau)^2). $$ \subsection{Vector amplitude of emission in classical electrodynamics} In Ref.\cite{jack} the frequency spectrum and angular distribution, $dR_{\rm cl}/(d\omega^\prime d{\bf n})$, of the radiation energy, $dR_{\rm cl}$, emitted by an electron and related to the interval of frequency, $d\omega^\prime$, and to the element of solid angle, $d{\bf n}$, for a polarization vector, ${\bf l}$, is described by the following formula: \begin{equation}\label{eq:Jackson} \frac{dR_{\rm cl}} {d\omega^\prime d{\bf n}}=\frac{(\omega^\prime)^2}{4\pi^2c}\left|({\bf A}_{\rm cl}(\omega^\prime)\cdot {\bf l}^*)\right|^2. \end{equation} Here the superscript asterisk denotes complex conjugation and the vector amplitude of emission, ${\bf A}_{\rm cl}(\omega^\prime)$, is given by the following equation: $$ {\bf A}_{\rm cl}(\omega^\prime,{\bf n})=\frac{e}c\int_{-\infty}^{+\infty}{{\bf v}(t)\exp\left\{i\omega^\prime[t-\frac{({\bf n}\cdot{\bf r}(t))}c]\right\}dt}, $$ see Eq.(14.67) in Ref.\cite{jack}, followed by the discussion of how to account for polarization. The use of the same notation, ${\bf A}$, both for the emission vector amplitude and for the vector potential should not mislead the reader. Recall that the emission vector amplitude is closely related to the Fourier-transformed vector potential in the far-field zone of emission.
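Both Eq.(\ref{eq:classic}) and the MRP of Eq.(\ref{eq:pp}) are exact in a 1D wave and admit a quick numerical check; the sketch below (in Python, with an arbitrary sample wave form ${\bf a}(\xi)$ and sample momentum, purely for illustration) verifies the mass-shell condition and the MRP identity:
\begin{verbatim}
# Check of eq. (classic) and the MRP identity (p(xi).p(xi')) = 1 - (da)^2/2.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])             # metric (+,-,-,-)
dot = lambda u, v: u @ g @ v

k = np.array([1.0, 0.0, 0.0, 1.0])               # null wave 4-vector
a = lambda xi: np.array([0.0, 3.0 * np.sin(xi), 0.0, 0.0])   # (k.a) = 0

def advance(p0, xi0, xi):
    # 4-momentum along the classical trajectory, eq. (classic)
    da = a(xi) - a(xi0)
    return p0 - da + (2.0 * dot(p0, da) - dot(da, da)) / (2.0 * dot(k, p0)) * k

p0 = np.array([np.sqrt(17.0), 0.0, 0.0, -4.0])   # counter-propagating electron
p1 = advance(p0, 0.0, 2.3)
da = a(2.3) - a(0.0)
print(dot(p1, p1))                               # -> 1.0 (mass shell)
print(dot(p1, p0), 1.0 - 0.5 * dot(da, da))      # MRP: both values coincide
\end{verbatim}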
Introduce a 4-vector amplitude of emission, $$ A^\mu_{\rm cl}(\omega^\prime,{\bf n})=e \int_{-\infty}^{+\infty}{ p^\mu(\tau) \exp\left[ ic\int^\tau{(k^\prime\cdot p(\tau^\prime))d\tau^\prime}\right]d\tau}, $$ which is expressed in terms of the proper time for the electron and its (dimensionless) 4-momentum. Since $(k^\prime\cdot p)/c$ is the frequency of the emitted photon in the frame of reference co-moving with the electron, the 4-vector amplitude is the Fourier integral of the electron 4-momentum with the Lorentz-modified frequency. Note the following properties of the 4-vector amplitude. First, its space-like vector components, which are perpendicular to the wave vector of the emitted photon, coincide with those of the 3-vector amplitude; hence they quantify the polarization properties of the emission for two different polarizations. Second, the dot-product, $(A_{\rm cl}\cdot k^\prime)$, vanishes, being the integral of a perfect time derivative. Now construct the dot-product, $(A_{\rm cl}\cdot A^*_{\rm cl})$, and expand it in the TST, which is formulated in terms of the {\it emitted} wave: $$ (A_{\rm cl}\cdot A^*_{\rm cl})=A^0_{\rm cl}(A^1_{\rm cl})^* +A^1_{\rm cl}(A^0_{\rm cl})^* -|A^2_{\rm cl}|^2 - |A^3_{\rm cl}|^2. $$ From the above properties of the 4-vector amplitude, the first two terms vanish identically, as $A^0_{\rm cl}\propto ( A_{\rm cl}\cdot k^\prime)=0$. The other two terms, taken with the proper factor, give the emitted energy summed over polarizations; therefore, the latter sum may be expressed as follows: \begin{equation}\label{eq:sumdr} \sum_{\bf l}{\frac{dR_{\rm cl}}{d\omega^\prime d{\bf n}}}=- \frac{(\omega^\prime)^2}{4\pi^2c}(A_{\rm cl}\cdot A^*_{\rm cl}). \end{equation} Now introduce the radiation loss rate, $dI_{\rm cl}/(d\omega^\prime d{\bf n})$, related to the unit of time, the element of solid angle and the frequency interval. Its connection to $dR_{\rm cl}/(d\omega^\prime d{\bf n})$ is evident: $$ \sum_{\bf l} {\frac{dR_{\rm cl} } {d\omega^\prime d{\bf n} } }= \int_{-\infty}^{+\infty}{ \frac{dI_{\rm cl}(t)} {d\omega^\prime d{\bf n} } dt}= \int_{-\infty}^{+\infty}{\frac{dI_{\rm cl}(\tau)}{d\omega^\prime d{\bf n}}{\cal E}(\tau)d\tau}. $$ In Eq.(\ref{eq:sumdr}) the dot-product of the 4-vector amplitudes, $(A_{\rm cl}\cdot A^*_{\rm cl})$, is in fact the product of two integrals over $d\tau$, which can be represented as a double integral, over, say, $d\tau_1d\tau_2$. Transform the integration variables in this double integral by introducing $\tau=(\tau_1+\tau_2)/2$, $\theta=\tau_1-\tau_2$. The spectral and angular distribution of the radiation loss rate may be expressed in terms of the Fourier integral of the MRP: $$ \frac{dI_{\rm cl}(\tau)}{d\omega^\prime d{\bf n}}=- \frac{e^2(\omega^\prime)^2} {4\pi^2c{\cal E}(\tau)} \int_{-\infty}^{+\infty} (p(\tau+\frac\theta2)\cdot p(\tau-\frac\theta2) ) \times $$ $$ \times \exp\left[ ic \int_{\tau-\theta/2}^{\tau+\theta/2} {(k^\prime\cdot p(\tau^\prime))d\tau^\prime}\right]d\theta. $$ \subsection{Frequency spectrum and formation time} The specific feature of the relativistic motion of a particle in strong laser fields is that the main contribution to the above integral comes from a brief time interval with small values of $\theta$. A closely related point is that the emitted radiation is sharply beamed about the direction of the velocity vector, ${\bf p}(\tau)/|{\bf p}(\tau)|$.
Therefore, in the following expansion of the frequency in the co-moving frame, $$ (k^\prime\cdot p(\tau^\prime))= \left([k^\prime-\frac{\omega^\prime}{c{\cal E}(\tau)}p(\tau)]\cdot p(\tau^\prime)\right)+\frac{\omega^\prime}{c{\cal E}}\left(p(\tau) \cdot p(\tau^\prime)\right), $$ in the first dot-product one may approximate $p^\mu(\tau^\prime)\approx p^\mu(\tau)$. Indeed, for an ultrarelativistic electron and a photon which both propagate in the same direction, the difference between $p^\mu/{\cal E}$ and $c(k^\prime)^\mu/\omega^\prime$ is already small; therefore, in the second multiplier of the dot-product the small difference, $p^\mu(\tau)-p^\mu(\tau^\prime)$, may be neglected: $$ c\int_{\tau-\theta/2}^{\tau+\theta/2}{(k^\prime\cdot p(\tau^\prime))d\tau^\prime}\approx \theta c (k^\prime\cdot p(\tau))+ $$ $$ +\frac{\omega^\prime}{\cal E}\int_{\tau-\theta/2}^{\tau+\theta/2}{\left[\left(p(\tau)\cdot p(\tau^\prime)\right)-1\right]d\tau^\prime}. $$ Now the only angle-dependent multiplier is $\exp[-i\theta c (k^\prime\cdot p(\tau))]$. For simplicity, the angular spectrum of emission can be approximated with the Dirac function: $$ \frac{dI_{\rm cl}(\tau)}{d\omega^\prime d{\bf n}}=\delta^2\left({\bf n}-\frac{\bf p}{|{\bf p}|}\right)\frac{dI_{\rm cl}(\tau)}{d\omega^\prime}, $$ and with the use of the formula (see \S90 in Ref.\cite{lp}), $$ \int{\exp[i\theta c (k^\prime\cdot p(\tau))]d{\bf n}}= \frac{2\pi i} {\omega^\prime{\cal E}(\tau)\theta}\exp\left(\frac{i\omega\theta}{2{\cal E}(\tau)}\right), $$ the following expression may be found for the frequency spectrum of emission: $$ \frac{dI_{\rm cl}(\tau)}{d\omega^\prime}= \frac{e^2\omega^\prime} {2\pi c{\cal E}^2(\tau)} \int_{-\infty}^{+\infty} \frac1\theta(p(\tau+\frac\theta2)\cdot p(\tau-\frac\theta2) ) \times $$ $$ \times \sin\left\{ \frac{\omega^\prime}{{\cal E}(\tau)} \left[\frac\theta2+ \int_{\tau-\theta/2}^{\tau+\theta/2} {[(p(\tau)\cdot p(\tau^\prime))-1]d\tau^\prime} \right] \right\} d\theta. $$ Thus, the frequency spectrum of emission is entirely determined by the MRP, which is a scalar Lorentz-invariant function of the proper time. Both the pre-exponential factor and the argument of the exponential function depend on the mentioned MRP. Therefore, both the spectral composition of the MRP and its magnitude may be of importance. Their relative role is controlled by the ratio of the frequency of the electron motion, $\omega_0$, to the acceleration magnitude, both being determined in the co-moving frame of reference. Here the field is assumed to be so strong that the acceleration it causes plays the dominant role, i.e., the following inequality is assumed: \begin{equation}\label{eq:highac} -\frac{(f\cdot f)}{m_e^2 c^2}\gg \omega_0^2. \end{equation} Under these circumstances, the integral determining the emission spectrum is calculated by displacing the integration contour in the plane of the complex variable, $\theta$, so that the deformed contour passes through the point of {\it stationary phase}, $\theta_{\rm st}$. At this 'saddle' point the derivative of the phase turns to zero: $$ \frac{d}{d\theta}\left[\frac\theta2+\int_{\tau-\theta/2}^{\tau+\theta/2} {[(p(\tau)\cdot p(\tau^\prime))-1]d\tau^\prime}\right]=0. $$ The larger the acceleration becomes, the closer the stationary phase point, $\theta_{\rm st}$, draws to the real axis, and, hence, the shorter the time interval becomes, $\theta\sim \theta_{\rm f}=|\theta_{\rm st}|$, which gives the non-vanishing contribution to the emission spectrum.
The characteristic duration of this time interval, $\theta_{\rm f}=|\theta_{\rm st}|$, is referred to as the formation time (or coherence time; see \cite{nr},\cite{kat}). In the limit of large accelerations the formation time is given by the following formula: \begin{equation} \theta_{\rm st} = \pm i \frac{2m_ec}{\sqrt{-(f\cdot f)}},\qquad \theta_{\rm f}=\frac{2m_ec}{\sqrt{-(f\cdot f)}}, \end{equation} where the approximation for the MRP as in Eq.(\ref{eq:gencovariance}) is applied at $|\theta|\le \theta_{\rm f}$. With the use of Eq.(\ref{eq:gencovariance}) the {\it universal} emission spectrum is obtained: $$ \frac{dI_{\rm cl}(\tau)}{d\omega^\prime}= \frac{e^2\omega^\prime} {2\pi c{\cal E}^2(\tau)} \int_{-\infty}^{+\infty} [\frac1\theta-\frac{(f(\tau)\cdot f(\tau))\theta}{2m_e^2c^2}] \times $$ $$ \times \sin\left\{ \frac{\omega^\prime}{{\cal E}(\tau)} \left[\frac\theta2-\frac{(f(\tau)\cdot f(\tau))\theta^3}{24m_e^2c^2} \right] \right\} d\theta. $$ The integral can be expressed in terms of the MacDonald function (the modified Bessel function of the second kind): $$ \frac{dI_{\rm cl}(\tau)}{d\omega^\prime}=\frac{I_{\rm cl}(\tau)}{\omega_c}Q_{\rm cl}(r_0),\qquad \frac{dI_{\rm cl}(\tau)}{dr_0}=I_{\rm cl}(\tau)Q_{\rm cl}(r_0), $$ where $Q_{\rm cl}(r_0)$ is the unity-normalized spectrum of the gyrosynchrotron emission ($\int{Q_{\rm cl}(r)dr}=1$): \begin{equation} Q_{\rm cl}(r_0)= \frac{9\sqrt{3}}{8\pi}r_0 \int_{r_0}^\infty{K_{5/3}(r^\prime)dr^\prime},\qquad r_0=\frac{\omega^\prime}{\omega_c},\label{eq:classicspectrum} \end{equation} and $$ \omega_c=\frac32 \frac{ {\cal E}(\tau)\sqrt{-(f(\tau)\cdot f(\tau))} } {m_ec}= \frac32 {\cal E}(\tau) \sqrt{ \frac{3I_{\rm cl}(\tau)c}{2e^2}}, $$ or, equivalently, \begin{equation}\label{eq:omegaccl} \frac{\hbar\omega_c}{m_ec^2}={\cal E}\sqrt{\frac{I_{\rm cl}(\tau)}{I_C}}={\cal E}\chi. \end{equation} Note that, despite all the approximations, the integral over the frequency spectrum is consistently equal to $I_{\rm cl}$. \subsection{Implications for strong laser fields} As discussed, the condition ${\cal E}\gg1$ and the inequality (\ref{eq:highac}) are both fulfilled for an ultra-relativistic electron gyrating in a uniform steady-state magnetic field. By expressing the 4-force squared, $-(f\cdot f)=e^2{\bf p}_\perp^2{\bf B}^2$ (see Eq.(\ref{forstar})), and taking the gyrofrequency in the co-moving frame, $\omega_0^2={\cal E}^2e^2{\bf B}^2/(m_e^2c^2{\cal E}^2)=e^2{\bf B}^2/(m_e^2c^2)$, one finds that (\ref{eq:highac}) is fulfilled as long as ${\bf p}_\perp^2\gg1$. Furthermore, the application to the 1D wave field is no less straightforward. The laser wave frequency in the co-moving frame, $\omega_0=c(k\cdot p)$, is present on the left-hand side of Eq.(\ref{eq:boundfreq2}). The Lorentz 4-force squared is given in Eq.(\ref{eq:classicintense}), resulting in the following estimate for the formation time: \begin{equation}\label{eq:thetafforwave} \theta_{\rm f}=\frac{2}{c(k\cdot p)|d{\bf a}/d\xi|} \end{equation} Now it is easy to see that the condition $\theta_{\rm f}\omega_0\ll1$, as in Eq.(\ref{eq:highac}), is fulfilled in a relativistically strong wave field, at \begin{equation} \left|\frac{d{\bf a}}{d\xi}\right|\gg1. \end{equation} The formation time tends to zero as the wave amplitude tends to infinity. The change in the electron energy within the formation time is always much less than the particle energy, ${\cal E}m_ec^2$. Within the classical field theory this statement follows from Eqs.(\ref{eq:classicintense},\ref{eq:thetafforwave}).
With an account of an extra factor of ${\cal E}$, which arises while transforming the formation time to the laboratory frame of reference, the relative change in energy equals: \begin{equation}\label{eq:Stepansratio} \frac{\theta_{\rm f}I_{\rm cl}}{m_ec^2}=2\alpha\chi, \end{equation} where $$\alpha=\frac{e^2}{\hbar c}\approx\frac1{137}$$ is the fine structure constant. The ratio (\ref{eq:Stepansratio}) is much less than unity as long as $\chi\le 1$. Note that in the opposite limiting case of a QED-strong field, the extra factor of $I_{\rm QED}/I_{\rm cl}\propto \chi^{-4/3}$ (see Fig.\ref{fig_3} below) makes the ratio (\ref{eq:Stepansratio}) small at $\chi\gg1$ as well. The same estimate of the formation time is applicable to any relativistically strong electromagnetic field, not only to a 1D wave. This holds, particularly, in the wakefield acceleration scheme, where the electric field in the wakefield of the pulse may be even larger than the relativistically strong field in the pulse itself. With this in mind, the acceleration of an almost monoenergetic electron beam by the laser pulse must be accompanied by a gyrosynchrotron-like spectrum of emission (which is actually observed; see \cite{Rousse}, \cite{kneip}). These observations demonstrate the general character of the gyrosynchrotron emission spectrum (this point of view, presented in \cite{Rousse}, may also be found in \S77 in \cite{ll}). \subsection{Emission within short time intervals and implications for numerical simulation} In strong fields satisfying the condition as in (\ref{eq:highac}), both the emission vector amplitude and the emission spectrum may be determined with respect to a {\it brief} time interval, $\Delta t$, which may be much shorter than the field period. The only requirement is that this interval should be large as compared to the formation time: \begin{equation}\label{eq:Deltat} \int_t^{t+\Delta t}{\frac{dt^\prime}{{\cal E}(t^\prime)}}\gg \theta_{\rm f}, \end{equation} while the change in the field and particle characteristics within this time interval may be small, as long as $\Delta t\omega\ll1$. In this case the span in the integral determining the vector amplitude may be chosen to be $(t,t+\Delta t)$. However, in the integral over $d\theta$, which determines the emission spectrum, the integration limits are much larger than the formation time; therefore, they may again be set to $(-\infty,+\infty)$. These considerations justify the numerical scheme for collecting the high-frequency emission as described in \cite{our} (which does not seem different from that briefly described in \cite{Rousse}). In addition to calculating the electromagnetic fields on the grid using the Particle-In-Cell (PIC) scheme, in which fields are created by a moving particle within the time step, $\Delta t$, one can also account for the higher-frequency (subgrid) emission spectrum, by calculating the instantly radiated energy, $I_{\rm cl}\Delta t$, and its distribution over frequency, parameterized via $I_{\rm cl}$. Another often-used approach, based on the calculation of the vector amplitude of emission (see, e.g., \cite{spie}), seems to be less efficient, although, theoretically, it should provide the same result. The vector amplitude formalism, on the other hand, may be better applicable to cases where the high-frequency emission from multiple electrons is coherent (see \cite{habs}). Starting from these considerations, it is now easy to proceed to the QED approach.
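As a practical note for such a sub-grid module, the unity-normalized spectrum $Q_{\rm cl}(r_0)$ of Eq.(\ref{eq:classicspectrum}) is straightforward to tabulate; a minimal sketch (in Python, with SciPy's MacDonald function; illustrative only, not the scheme of \cite{our}):
\begin{verbatim}
# Unity-normalized gyrosynchrotron spectrum Q_cl(r0), eq. (classicspectrum).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv                     # MacDonald function K_nu

def Q_cl(r0):
    tail, _ = quad(lambda r: kv(5.0 / 3.0, r), r0, np.inf)
    return 9.0 * np.sqrt(3.0) / (8.0 * np.pi) * r0 * tail

norm, _ = quad(Q_cl, 0.0, np.inf)
print(norm)        # -> 1.0, checking int Q_cl(r) dr = 1
print(Q_cl(0.29))  # the spectrum peaks near r0 ~ 0.29
\end{verbatim}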
\section{Electron in QED-strong field: the emission probability} The emission probability in the QED-strong 1D wave field may be found in \S\S40,90,101 in \cite{lp}, as well as in \cite{nr},\cite{gs}. In application to the wakefield acceleration of electrons of energy $\approx 1$ TeV, the QED effects have also been discussed in \cite{Khok}. However, to simulate highly dynamical effects in pulsed fields, one needs a reformulated emission probability, related to short time intervals (not $(-\infty,+\infty)$). Indeed, it is demonstrated above that in strong fields the emission processes are essentially local functions of the instantaneous parameters. Therefore, in QED-strong fields the emission probability should be formulated in terms of the local values of the electromagnetic field intensities, or, as we choose to do, it may be parameterized via the classical radiation loss rate or the Lorentz 4-force squared: $-f^\mu f_\mu\propto I_{\rm cl}$. This emission probability is rederived here with careful attention to a consistent problem formulation and with technical details omitted. \subsection{A QED solution of the Dirac equation} The Dirac equation, which determines the evolution of the wave function, $\psi$, for a {\it non-emitting} electron in the external field, reads: \begin{equation}\label{eq:Dirac} \left[i \lambdabar_C\left(\gamma\cdot\frac\partial{\partial x}\right)-(\gamma\cdot a)\right]\psi=\psi, \end{equation} $\gamma^\mu$ being the Dirac $4\times4$ matrices, $(\gamma^0,\gamma^1,\gamma^2,\gamma^3)$. The relativistic dot-product of the Dirac matrices by a 4-vector, such as $(\gamma\cdot a)$, is the linear combination of the Dirac matrices: $(\gamma\cdot a)=\gamma^0a^0-\gamma^1a^1-\gamma^2a^2-\gamma^3a^3$. Such a linear combination, which is also a $4\times4$ matrix, may be multiplied by another matrix of this kind or by a 4-component bi-spinor, such as $\psi$, following matrix multiplication rules. For example, $(\gamma\cdot a)\psi$ is a bi-spinor, being the matrix $(\gamma\cdot a)$ multiplied from the right-hand side by the bi-spinor $\psi$. By expanding Eq.(\ref{eq:Dirac}) in the TST it is easy to find its solution in the form of a plane electron wave (the normalization coefficient $N={\rm const}$): \begin{equation}\label{eq:volkov} \psi=\frac{ u(p(\xi))P(\xi)}{\sqrt{N}} \exp\left[ \frac{i \left[ ({\bf p}_{\perp0}\cdot {\bf x}_\perp)- \frac{\lambdabar(k\cdot p)x^1}{\sqrt{2}} \right] }{\lambdabar_C} \right]. \end{equation} Here $u(p(\xi))$ is the plane-wave bi-spinor amplitude, which satisfies the system of four linear algebraic equations: \begin{equation}\label{bispinor} (\gamma\cdot p(\xi))u(p(\xi))=u(p(\xi)), \end{equation} as well as the normalization condition: $\hat{u}u=2.$ The $\xi$-dependent momentum, $p(\xi)$, in the bi-spinor amplitude should be taken in accordance with Eq.(\ref{eq:classic}), as for the classical trajectory of the electron. The $\xi$-dependent phase multiplier, $P(\xi)$, is as follows: $$ P(\xi)=\exp\left(-\frac{i}{\lambdabar_C} \int^{\xi}{\frac{1+{\bf p}_\perp^2(\xi_2)}{2(k\cdot p)} d\xi_2}\right), $$ or, \begin{equation}\label{xidependent} P(\xi)=P(\xi^\prime)\exp\left(-\frac{i}{\lambdabar_C} \int^{\xi}_{\xi^\prime}{\frac{1+{\bf p}_\perp^2(\xi_2)}{2(k\cdot p)} d\xi_2}\right). \end{equation} Using Eq.(\ref{eq:classic}), one can find: \begin{equation}\label{eq:QEDpropagator} u(p(\xi))=\left[1+\frac{(\gamma\cdot k)\left(\gamma\cdot [a(\xi)-a(\xi^\prime)] \right)}{2(k\cdot p)}\right]u(p(\xi^\prime)) \end{equation} and verify that Eq.(\ref{eq:volkov}) satisfies the Dirac equation.
The advantage of the approach used here, as compared to the known Volkov solution presented in \S40 in \cite{lp}, is that the wave function in Eqs.(\ref{eq:volkov}-\ref{eq:QEDpropagator}) is described in a self-contained manner within some finite time interval, $(\xi^\prime,\xi)$ (in fact, this interval is assumed to be very short below), in terms of the local parameters of the classical trajectory of the electron. This approach is better applicable to strong fields, in which the time interval between subsequent emission events, each of which destroys the unperturbed wave function, becomes very short. \subsection{The matrix element for emission} The emission problem is formulated in the following way. The electron motion in the strong field may be thought of as a sequence of short intervals. Within each of these intervals the electron follows a piece of a classical trajectory, as in Eq.(\ref{eq:classic}), and its wave function (an electron state) is given by Eq.(\ref{eq:volkov}). The transition from one piece of the classical trajectory to another, or, equivalently, from one electron state to another, occurs in a probabilistic manner. The probability of this transition, which is accompanied by a photon emission, is calculated below using the QED perturbation theory. The only difficulty specific to strong pulsed fields is that the short piece of the electron trajectory is strictly bounded in space and in time, while the QED invariant perturbation theory is based on the 'matrix element', which is the integral over an infinite 4-volume. \begin{figure} \includegraphics[scale=0.4, angle=90]{Fig1.eps} \caption{The volume over which to integrate the matrix element while finding the emission probability: in the standard scheme for the dipole emission (in dashed lines) and in the TST (in solid lines). Arrows show the direction along which the Heisenberg operator advances the wave functions. } \label{fig_1} \end{figure} To avoid this difficulty the following method is suggested, which is analogous to the dipole emission theory as applied in the TST. Introduce a domain, $\Delta^4x=(\Delta x^1*S_\perp)*\Delta x^0$, bounded by two hypersurfaces, $\xi=\xi_{-}$ and $\xi=\xi_+$ (see Fig.1). The difference $\xi_+-\xi_-$ is bounded as described below, so that $\Delta^4x$ covers only a minor part of the pulse. A volume, $$ V=S_\perp\lambdabar(\xi_+-\xi_-) =S_\perp\lambdabar\int_{\xi_-}^{\xi_+}{d\xi_2},$$ is a section of $\Delta^4x$ subtended by a line $t={\rm const}$. With the following choice for the normalization coefficient in Eq.(\ref{eq:volkov}): $$ N=2S_\perp \lambdabar\int_{\xi_-}^{\xi_+}{{\cal E} (\xi_2)d\xi_2}, $$ the integral of the electron density in the volume V, $$ \int{\hat{\psi}\gamma^0\psi dV}= S_\perp \lambdabar\int_{\xi_-}^{\xi_+}{\hat{\psi}\gamma^0\psi d\xi_2},$$ is set to unity, i.e., there is a single electron in the volume $V$. This statement follows from Eq.(\ref{eq:volkov}) and the known property of normalized bi-spinor amplitudes: $\hat{u}\cdot\gamma^0 \cdot u=2{\cal E}$. Here the hat denotes the Dirac conjugation.
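The two bi-spinor properties used here, $(\gamma\cdot p)u=u$ and $\hat{u}\gamma^0 u=2{\cal E}$ (together with $\hat{u}u=2$), can be verified explicitly; below is a minimal numerical sketch in the standard Dirac representation (in Python, with dimensionless momenta so that $m_e=1$; the sample momentum is an arbitrary choice of ours):
\begin{verbatim}
# Check of (gamma.p) u = u, u-hat gamma^0 u = 2E, and u-hat u = 2.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gam = [g0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]

p3 = np.array([0.6, -0.2, 1.1])                  # sample spatial momentum
E = np.sqrt(1.0 + p3 @ p3)                       # mass-shell energy
slash_p = E * gam[0] - sum(p3[i] * gam[i + 1] for i in range(3))

chi2 = np.array([1.0, 0.0], dtype=complex)       # spin-up 2-spinor
sn = (p3[0] * sx + p3[1] * sy + p3[2] * sz) / np.linalg.norm(p3)
u = np.concatenate([np.sqrt(E + 1.0) * chi2, np.sqrt(E - 1.0) * (sn @ chi2)])

print(np.allclose(slash_p @ u, u))               # (gamma.p) u = u
print((u.conj() @ g0 @ g0 @ u).real, 2.0 * E)    # u-hat gamma^0 u = 2 E
print((u.conj() @ g0 @ u).real)                  # u-hat u = 2
\end{verbatim}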
For a photon of wave vector, $(k^\prime)^\mu$, and polarization vector, $l^\mu$, introduce the wave function: $$ (A^\prime)^\mu= \frac{ \exp[-i(k^\prime\cdot x)/\lambdabar_C] } { \sqrt{N_p} }l^\mu, $$ or, by expanding this in the TST: $$ (A^\prime)^\mu= \frac{P_p(\xi)}{\sqrt{N_p}} \exp\left[ \frac{i({\bf k}^\prime_\perp\cdot {\bf x}_\perp)}{\lambdabar_C}- \frac{ i\lambdabar (k\cdot k^\prime) x^1} {\sqrt{2}\lambdabar_C} \right] l^\mu,$$ where: $$P_p(\xi)= \exp\left[-i \xi \frac{ ({\bf k}^\prime_\perp)^2} {2(k\cdot k^\prime)\lambdabar_C} \right]. $$ Here the photon momentum and photon energy are related to $m_ec$ and $m_ec^2$ correspondingly, or, equivalently, the dimensionless $(k^\prime)^\mu$ equals the dimensional $(k^\prime)^\mu$ multiplied by $\lambdabar_C$. The choice of the normalization coefficient, $$ N_p=\frac{\omega^\prime V}{2\pi\hbar c\lambdabar_C}, $$ corresponds to a single photon in the volume, $V$. The emission probability, $dW$, is given by an integral over $\Delta^4x$: \begin{equation}\label{eq:probab} dW=\frac{\alpha L_fL_p}{\hbar c} \left|\int{\hat{\psi}_f(\gamma\cdot (A^\prime)^*)\psi_idx^0dx^1dx^2dx^3}\right|^2. \end{equation} Here \begin{equation}\label{eq:firstphase} L_p=\frac{Vd^3{\bf k}^\prime}{(2\pi\lambdabar_C)^3}= \frac{\hbar c N_p d^2{\bf k}^\prime_\perp d(k\cdot k^\prime)}{(2\pi\lambdabar_C)^2(k\cdot k^\prime)} \end{equation} is the number of states for the emitted photon. The transformation of the phase volume as in Eq.(\ref{eq:firstphase}) is based on the following Jacobian: $$ \left(\frac{\partial k^\prime_\|}{\partial(k^\prime\cdot k)}\right)_{{\bf k}^\prime_\perp={\rm const}}=\frac{\omega^\prime}{(k^\prime\cdot k)}, $$ which is also used below in many places. A subscript $i,f$ denotes the electron in the initial (i) or final (f) state. The number of electron states in the presence of the wave field, $L_{i,f}$, should be integrated over the volume $V$: $$ L_{i,f}=\frac{1}{(2\pi\lambdabar_C)^3}\int_V{d^3{\bf p}_{i,f}dV}= \frac{d(k\cdot p)_{i,f}d^2{\bf p}_{\perp i,f}N_{i,f}} {2(2\pi)^3\lambdabar_C^3(k\cdot p)_{i,f}}. $$ \subsection{Conservation laws} The integration by $dx^1dx^2dx^3=c\sqrt{2} dtd^2{\bf x}_\perp$ results in three $\delta$-functions, expressing the conservation of the totals of ${\bf p}_\perp$ and $(k\cdot p)$ for the particles in the initial and final states: $$ {\bf p}_{\perp i}={\bf p}_{\perp f}+{\bf k}_\perp^\prime,\qquad (k\cdot p_i)=(k\cdot p_f)+ (k\cdot k^\prime). $$ Twice integrated with respect to $dx^1$, the probability $dW$ is proportional to a long time interval, $\Delta t=\Delta x^1/(c\sqrt{2})$, if the boundary condition for the electron wave at $\xi=\xi_-$ is maintained within that long time. On transforming the integral over $dx^0$ to that over $d\xi$, one can find: $$ \left|\int{...d^4x}\right|^2 =(2\pi\lambdabar_C)^3 S_\perp c\Delta t\lambdabar\left|\int{...d\xi}\right|^2\times $$ $$ \times \delta^2({\bf p}_{\perp i}-{\bf p}_{\perp f}-{\bf k}_\perp^\prime)\delta((k\cdot p_i)-(k\cdot p_f)- (k\cdot k^\prime)). $$ To take the large value of $\Delta t$ seems to be the only way to calculate the integral; however, the emission probability calculated in this way relates to multiple electrons in the initial state, each of them located between the wave fronts $\xi=\xi_-$ and $\xi=\xi_+$ during the much shorter time \begin{equation}\label{eq:time} \delta t(\xi_-,\xi_+)=(1/c)\int_{\xi_-}^{\xi_+}{{\cal E}_i(\xi)d\xi_2}/(k\cdot p_i).
\end{equation} For a single electron the emission probability becomes: $$dW_{fi}(\xi_-,\xi_+)=\delta t dW/\Delta t.$$ Using the $\delta$-functions it is easy to integrate Eq.(\ref{eq:probab}) over $d{\bf p}_{\perp f}d(k\cdot p_f)$: $$ \frac{dW_{fi}(\xi_-,\xi_+)}{d(k\cdot k^\prime)d^2{\bf k}^\prime_\perp}= \frac{\alpha \left| \int_{\xi_-}^{\xi_+}{T(\xi)\hat{u}(p_f)(\gamma\cdot l^*)u(p_i)d\xi }\right|^2}{(4\pi\lambdabar_C)^2(k\cdot k^\prime)(k\cdot p_i)(k\cdot p_f)}, $$ where $$ T(\xi)=\frac{P_i(\xi)}{P_f(\xi)P_p(\xi)}= \exp\left[\frac{i\int^\xi{(k^\prime\cdot p_{i}(\xi_2))d\xi_2}}{\lambdabar_C(k\cdot p_{f})} \right], $$ $P_i(\xi)$ and $P_f(\xi)$ are the electron phase multipliers, $P(\xi)$, for the electron in the initial and final states, and \begin{equation}\label{eq:cons} p^\mu_f(\xi)=p^\mu_i(\xi)-(k^\prime)^\mu+\frac{(k^\prime\cdot p_i(\xi))} {(k\cdot p_i)-(k\cdot k^\prime)}k^\mu . \end{equation} Prior to discussing Eq.(\ref{eq:cons}), return to Eq.(\ref{eq:QEDpropagator}) and analyze it component-by-component in the TST. It appears that three of the four components of that equation describe the conservation of $(k\cdot p)$ and ${\bf p}_{\perp 0}={\bf p}_{\perp}+{\bf a}$ for the electron {\it in the course of its emission-free motion}. At the same time, yet another component of Eq.(\ref{eq:QEDpropagator}), specifically, $p^1$, directed along $k^\mu$, describes the energy-momentum exchange between the electron and the 1D wave field, maintaining the identity, $(p\cdot p)=1$. Now turn to Eq.(\ref{eq:cons}). Again, three of the four components express the conservation of the same variables {\it in the course of the photon emission}, while the $p^1$ component, directed along $k^\mu$, describes the absorption of energy and momentum from the wave field in the course of the photon emission. Note that in the case of a strong field the energy absorbed from the field is not an integer number of quanta, and that for a short non-harmonic field it is not even a constant, but a function of the local field. \subsection{Calculation of the matrix element} To calculate the matrix element, one can re-write it as the double integral over $d\xi d\xi_1$ and then reduce the matrices $u(p_{i,f}(\xi))\otimes \hat{u}(p_{i,f}(\xi_1))$ in the integrand to the polarization matrices of the electron at $\xi$ or at $\xi_1$ using Eq.(\ref{eq:QEDpropagator}). These standard manipulations with the Dirac matrices are omitted here. Although in a strong wave electrons may be polarized (see \cite{Omori}), in the present work the emission probability is assumed to be averaged over the electron initial polarizations and summed over its final polarizations. The ultimate result of these derivations is as follows: \begin{equation}\label{eq:dWfi} \frac{dW_{fi}}{d(k\cdot k^\prime)d^2{\bf k}^\prime_\perp}= \frac{\alpha \int_{\xi_-}^{\xi_+} {\int_{\xi_-}^{\xi_+}{T(\xi)T(-\xi_1)D\, d\xi d\xi_1}}}{(2\pi\lambdabar_C)^2 (k\cdot k^\prime)(k\cdot p_i)(k\cdot p_f)}, \end{equation} where $$ D= (l^*\cdot p_{i}(\xi_1)) (l\cdot p_i(\xi))- \frac{\left[(p(\xi)\cdot p(\xi_1))-1\right](1-C_{fi})^2 }{4 C_{fi}} $$ and $$ C_{fi}=\frac{(k\cdot p_f)}{(k\cdot p_i)}=1-\frac{\lambdabar_C(k\cdot k^\prime)}{(k\cdot p_i)}\le1 $$ is a recoil parameter, which characterizes the reduction in the electron momentum due to the photon emission. The matrix element may also be summed, if desired, over the two possible directions of the polarization vector.
The second term in the integrand is simply multiplied by two, while in the first one the negative of the metric tensor should be substituted for the product of the polarization vectors (see \S8 in \cite{lp}), so that $-\left(p_i(\xi)\cdot p_i(\xi_1)\right)$ substitutes for $(l^*\cdot p_{i}(\xi_1)) (l\cdot p_i(\xi))$. The latter may be transformed using Eq.(\ref{eq:pp}), thus giving: $$ \sum_{l}{D}=- \left( \frac{\left[(p(\xi)\cdot p(\xi_1))-1\right]\left(1+C_{fi}^2\right) }{2 C_{fi}}+1\right). $$ \subsection{Vector amplitude of emission in QED case} Now we move to the connection between the obtained result, on the one hand, and the way the high-frequency emission is treated in the framework of the classical theory, on the other. To facilitate the comparison, both here and in Section II the photon frequency and wave vector, $\omega^\prime$ and ${\bf k}^\prime$, are not dimensionless. It appears that the QED result obtained above can be reformulated in a form similar to Eq.(\ref{eq:Jackson}). Using the following relationships between the differentials: $$ dt={\cal E}d\tau=\frac{{\cal E}}{c} \frac{d\xi}{(k\cdot p)},\qquad dR_{\rm QED}=\hbar\omega^\prime dW_{fi}, $$ $$ \frac{(\omega^\prime)^2d\omega^\prime d{\bf n}}{c^3}=d^3{\bf k}^\prime=\frac{\omega^\prime d^2{\bf k}^\prime_\perp d(k\cdot k^\prime)}{c(k\cdot k^\prime)}, $$ one can reduce Eq.(\ref{eq:dWfi}) for the {\it polarized} part of the emission to the same form as that of Eq.(\ref{eq:Jackson}): $$ \frac{dR_{\rm QED}^{\rm pol}} {d\omega^\prime d{\bf n}}=\frac{(\omega^\prime)^2}{4\pi^2c}\left|({\bf A}_{\rm QED}(\omega^\prime)\cdot {\bf l}^*)\right|^2, $$ where $$ {\bf A}_{\rm QED}(\omega^\prime)=\sqrt{\frac{e^2}{C_{fi}}} \times $$ $$ \times \int_{t_-}^{t_+}{\frac{{\bf v}(t)}c\exp\left\{\frac{i\omega^\prime}{C_{fi}}[t-\frac{({\bf n}\cdot{\bf r}(t))}c]\right\}dt}, $$ where $t_-,t_+$ are the time instants when the electron crosses the hypersurfaces $\xi=\xi_-$ and $\xi=\xi_+$ correspondingly. In the considered strong field case the finite integration limits are admissible, as long as the integration span well exceeds the formation time. Therefore, $$ {\bf A}_{\rm QED}(\omega^\prime)=\sqrt{ \frac{1}{C_{fi}}} {\bf A}_{\rm cl}\left( \frac{\omega^\prime}{C_{fi}}\right), $$ \begin{equation}\label{eq:pol} \frac{dI_{\rm QED}^{\rm pol}(\omega^\prime)} {d\omega^\prime d{\bf n}}=C_{fi}\frac{dI_{\rm cl}(\omega^\prime/C_{fi})} {d\omega^\prime d{\bf n}}. \end{equation} Thus, the QED effect on the emission from an electron reduces to the classical electric-dipole emission from a moving charge in an electromagnetic field, with a very simple rule to transform the polarized emission intensity and the polarized emission amplitude, which accounts for the recoil effect. However, the electron in the QED-strong field emits not only as an electric charge: it also possesses a magnetic moment associated with its spin. In Eq.(\ref{eq:dWfi}) a depolarized contribution to the emission is present. This contribution may be related to the electron spin, which is {\it assumed} to be depolarized.
The depolarized emission energy, related to the interval of frequency and to the element of solid angle and summed over two polarization directions (i.e., multiplied by two), equals: $$ \frac{dR^{\rm depol}_{\rm QED}}{d\omega^\prime d{\bf n}}= \frac{(\omega^\prime)^2}{4\pi^2c}\frac{\left(1-C_{fi}\right)^2}{2C_{fi}} \times $$ $$ \times\left\{\sum_{\bf l}\left|({\bf A}_{\rm QED}(\omega^\prime)\cdot {\bf l}^*)\right|^2+\left|\varphi_{\rm QED}\right|^2\right\}, $$ where $$ \varphi_{\rm QED}=\sqrt{\frac{e^2}{C_{fi}}} \int_{t_-}^{t_+}{ \exp\left\{\frac{i\omega^\prime}{C_{fi}}[t-\frac{({\bf n}\cdot{\bf r}(t))}c]\right\}\frac{dt}{{\cal E}(t)}} $$ is a scalar amplitude of emission, introduced in a way similar to that for the vector amplitude. After derivations analogous to those of Section II, the radiation loss rate due to the electron magnetic moment reads: \begin{equation}\label{eq:depol} \frac{dI^{\rm depol}_{\rm QED}}{d\omega^\prime}=\frac{I_{\rm cl}(\tau)}{\omega_c} \frac{9\sqrt{3}}{8\pi}\left(1-C_{fi}\right)^2 \frac{r_0}{C_{fi}}K_{2/3}\left(\frac{r_0}{C_{fi}}\right). \end{equation} Thus, the QED effect in the emission from an electron in a strong electromagnetic field reduces to a downshift in frequency accompanied by an extra contribution from the magnetic moment of the electron. Note the general character of these conclusions: only the recoil parameter, $C_{fi}$, depends directly on the 1D wave vector. This dependence can also be eliminated because, for photons emitted along the direction of the particle motion, the following approximation is valid: $$ C_{fi}\approx 1-\frac{\hbar\omega^\prime}{{\cal E}m_ec^2}=1-\chi r_0,\qquad \frac{r_0}{C_{fi}}=r_\chi=\frac{r_0}{1-\chi r_0}, $$ so that in QED-strong fields the emission spectrum is also universal and may be parameterized with the sole parameter, $I_{\rm cl}$. Combining Eqs.(\ref{eq:thetafforwave},\ref{eq:time},\ref{eq:Deltat}) one can derive the condition \begin{equation}\label{eq:bounds} \xi_+-\xi_-\gg\frac{2}{|d{\bf a}/d\xi|}. \end{equation} Under this condition, the time interval during which the emitting electron is located between the wave fronts $\xi=\xi_-$ and $\xi=\xi_+$ greatly exceeds the formation time of emission, validating the above considerations. With these results the scheme to account for the high-frequency emission as outlined in Section II may be easily extended to 3D QED-strong fields. However, the radiation back-reaction needs to be incorporated for this scheme to be consistent. \section{Radiation and its back-reaction} \begin{figure} \includegraphics[scale=0.38]{Fig2.eps} \caption{Emission spectra for various values of $\chi$. } \label{fig_2} \end{figure} \begin{figure} \includegraphics[scale=0.38]{Fig3.eps} \caption{Emitted radiation power in the QED approach {\it vs} classical (solid); an interpolation formula $I_{\rm QED}=I_{\rm cl}/(1+1.04\sqrt{I_{\rm cl}/I_C})^{4/3}$ (dashed).} \label{fig_3} \end{figure} Unless the field is QED-strong, the radiation back-reaction in a relativistically strong laser wave may or may not be significant. The condition for the field to be radiation-dominant (see, e.g., Ref.\cite{kogaetal}) is formulated in terms of the ratio between the magnitudes of the Lorentz force and the radiation force, which become comparable at intensities $J\sim 10^{23}{\rm W/cm}^2$. For an electron moving toward the laser wave, the radiation force starts to dominate at a lower wave intensity, depending on the electron energy \cite{kogaetal}.
The radiation back-reaction decelerates such an electron, the effect being more pronounced for longer laser pulses \cite{our}. As a result, at intensities $J\sim 10^{22}{\rm W/cm}^2$ the radiation back-reaction drastically changes the character of the laser pulse interaction with dense plasmas and $\gamma$-ray emission becomes a leading mechanism of the laser energy losses \cite{FI}. In QED-strong fields the radiation back-reaction is {\it always} significant, as long as at each photon emission the electron loses a noticeable fraction of its momentum and energy. A matter of principle is also the consistency of the perturbation theory of emission in QED-strong fields. Within the framework of classical theory the momentum-energy change resulting from the radiation back-reaction should be small in some sense, to properly approximate the radiation force (see \cite{ll},\cite{jack} as well as the considerations relating to the estimate as in Eq.(\ref{eq:Stepansratio})). In QED-strong fields this change cannot be claimed to be small, but the probability of emission can be! Specifically, the difference, $\xi_+-\xi_-$, should be small enough, so that the probability of emission within the time interval of Eq.(\ref{eq:time}) is much less (or at least less) than unity: \begin{equation}\label{eq:upperbound} \int{\frac{dW_{fi}}{d{\bf k}^\prime_\perp d(k^\prime\cdot k)}d{\bf k}^\prime_\perp d(k^\prime\cdot k)}\ll1. \end{equation} \subsection{Emission probability and radiation loss rate} The derivations performed in Section II for the radiation loss rate, namely, the approximation of the angular distribution with the Dirac function and the approximation of the frequency spectrum with the MacDonald functions, may be directly applied to the emission probability. Expanding the dot product, $(k^\prime\cdot p_i)$, in $T(\xi)$ in the TST metric, $G^{\mu\nu}$, one finds: $$ T(\xi)T(-\xi_1)=\exp\left[i(T_1+T_2)\right], $$ where $$T_1=\frac{(k\cdot p_i)}{2\lambdabar_C(k\cdot k^\prime)(k\cdot p_f)}\left(\frac{(k\cdot k^\prime)}{(k\cdot p_i)}\left<{\bf p}_{\perp i}\right>- {\bf k}^\prime_{\perp}\right)^2(\xi-\xi_1), $$ $$ T_2=\frac{ (k\cdot k^\prime) \left\{(\xi-\xi_1)+\int_{\xi_1}^\xi{ \left[{\bf a}(\xi_2)-\left<{\bf a}\right>\right]^2d\xi_2}\right\}}{2\lambdabar_C(k\cdot p_i)(k\cdot p_f)}, $$ $$ \left<{\bf a}\right>=\frac{\int_{\xi_1}^\xi{{\bf a}d\xi_2}}{\xi-\xi_1},\qquad \left<{\bf p}_{\perp i}\right>={\bf p}_{\perp 0 i}-\left<{\bf a}\right>.$$ Integration over $d^2{\bf k}^\prime_\perp$ then gives: $$ \frac{dW_{fi}(\xi_-,\xi_+)}{d(k\cdot k^\prime)}= \frac{\alpha \int_{\xi_-}^{\xi_+} {\int_{\xi_-}^{\xi_+}{\frac{i\exp(iT_2)}{\xi-\xi_1}\sum_{l}{D(\xi,\xi_1)} d\xi d\xi_1}}}{2\pi\lambdabar_C(k\cdot p_i)^2}. $$ In strong fields the following estimates may be applied: $$ (k\cdot k^\prime)\sim \lambdabar_C(k\cdot p_i)^2\left|\frac{d{\bf a}}{d\xi}\right|,\qquad dW_{fi}\sim\alpha\left|(\xi_+ - \xi_-)\frac{d{\bf a}}{d\xi}\right|.$$ Now the bounds for $\xi_+-\xi_-$ can be {\it consistently} introduced: \begin{equation}\label{consistency} \left|d{\bf a}/d\xi\right|^{-1}\ll\xi_+-\xi_-\ll \min\left(\alpha^{-1}\left|d{\bf a}/d\xi\right|^{-1},1\right). \end{equation} Under these bounds, first, the condition (\ref{eq:bounds}) is satisfied. Therefore, the time interval (\ref{eq:time}) is much greater than the formation time and the emission probability is linear in $\xi_+-\xi_-$: $$dW_{fi}(\xi_-,\xi_+)=(dW/d\xi)(\xi_+-\xi_-).$$ Second, the emission probability satisfies the condition (\ref{eq:upperbound}).
Therefore, perturbation theory is applicable. In addition, the emission probability can be expressed in terms of the local electric field. Note that consistency in (\ref{consistency}) is ensured in relativistically strong electromagnetic fields as long as $\alpha\ll1$, with no restriction on the magnitude of the electromagnetic field experienced by the electron. Under the condition (\ref{consistency}) the probability may be expressed in terms of the MacDonald functions: \begin{equation}\label{eq:probabf} \frac{dW_{fi}}{dr_0d\xi}= \frac{\alpha \chi\left(\int_{r_\chi}^\infty{K_{\frac53}(y)dy}+r_0r_\chi\chi^2 K_{\frac23}(r_\chi)\right)}{\sqrt{3}\pi\lambdabar_C(k\cdot p_i)}, \end{equation} $$ r_0=\frac{(k\cdot k^\prime)}{\chi(k\cdot p_i)},\qquad \chi=\frac32(k\cdot p_i)\left|\frac{d{\bf a}}{d\xi}\right|\lambdabar_C=\sqrt{\frac{I_{\rm cl}}{I_C}}. $$ The probability, similar to that found in \cite{nr}, is expressed in terms of functions of $r_0$ and is related to the interval $dr_0$. The way to introduce $r_0$ and $\chi$ looks different from that adopted in Eqs.(\ref{eq:classicspectrum},\ref{eq:omegaccl}), however, the difference is negligible as long as ${\omega^\prime}/{{\cal E}_i}\approx {(k\cdot k^\prime)}/{(k\cdot p_i)}$ for collinear $k^\prime$ and $p_i$. The momentum of the emitted radiation, related to the interval of the electron {\it proper} time, may be found from Eqs.(\ref{eq:time},\ref{eq:probabf}): \begin{eqnarray} \frac{dp_{\rm rad}}{d\tau}= \int{k^\prime \frac{c(k\cdot p_i)dW_{fi}}{ d(k\cdot k^\prime)d^2{\bf k}_\perp d\xi}d(k\cdot k^\prime)d^2{\bf k}_\perp}=\nonumber\\ =[p+k\ O((k\cdot p_i)^{-1})]\int{c(k^\prime\cdot k) \frac{dW}{dr_0 d\xi}dr_0}.\label{eq:prad} \end{eqnarray} As with other 4-momenta, $p_{\rm rad}$ is related to $m_ec$. To prove the 4-vector relationship (\ref{eq:prad}), its components in the TST metric should be integrated over ${\bf k}_\perp$ using the symmetry of $T_1$. The small term, $O((k\cdot p_i)^{-1})$, arises from the electron rest mass energy and from the small ($\sim 1/|p_\perp|$) but finite width of the photon angular distribution. Neglecting this term: $$\frac{dp_{\rm rad}}{d\tau}=p_i\frac1{m_ec^2}\int{ \frac{dI_{\rm QED}}{dr_0}dr_0},$$ $$I_{\rm QED}=m_ec^2\int{c(k\cdot k^\prime)\frac{dW_{fi}}{d\xi dr_0}dr_0}$$ being the radiation loss rate. The photon energy spectrum, $dI_{\rm QED}/dr_0$, is described as a function of the random {\it scalar}, $r_0$, using only the parameter $\chi$ (see Fig.\ref{fig_2}). The latter may be parameterized in terms of the radiation loss rate, evaluated within the framework of classical theory (see Eq.(\ref{eq:chifirst}) and Fig.\ref{fig_3}). The expressions for $q(I_{\rm cl})=I_{\rm QED}/I_{\rm cl}$ and for the normalized spectrum function, $Q(r_0,\chi)$, coincide with formulae known from the gyrosynchrotron emission theory (see \S90 in \cite{lp}): $$ q=\frac{9 \sqrt{3}}{8\pi}\int_0^\infty{dr_0r_0\left(\int_{r_\chi}^\infty{K_{\frac53}(y)dy}+r_0r_\chi\chi^2 K_{\frac23}(r_\chi)\right)}, $$ $$ Q(r_0,\chi)=\frac{9\sqrt{3} r_0}{8\pi q}\left(\int_{r_\chi}^\infty{K_{\frac53}(y)dy}+ r_0r_\chi\chi^2 K_{\frac23}(r_\chi)\right). $$
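The functions $q$ and $Q$ are straightforward to evaluate numerically. The sketch below (the function names and the integration cut-offs are ours, not taken from any reference implementation) computes both from the formulae above and compares $q(\chi)$ with the interpolation formula of Fig.\ref{fig_3}; the $r_0$-integration is restricted to $0\le r_0<1/\chi$, since the emitted photon energy cannot exceed the electron energy:
\begin{verbatim}
# A minimal numerical sketch of q(chi) and Q(r0, chi).
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

PREF = 9.0 * np.sqrt(3.0) / (8.0 * np.pi)

def k53_tail(x):
    # int_x^infty K_{5/3}(y) dy by direct quadrature
    val, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
    return val

def kernel(r0, chi):
    # bracketed factor: K_{5/3} tail plus the spin (depolarized) term
    rchi = r0 / (1.0 - chi * r0)
    return k53_tail(rchi) + r0 * rchi * chi**2 * kv(2.0 / 3.0, rchi)

def q_ratio(chi):
    # q = I_QED / I_cl; quad does not evaluate the endpoint r0 = 1/chi
    val, _ = quad(lambda r0: r0 * kernel(r0, chi), 0.0, 1.0 / chi, limit=200)
    return PREF * val

def Q_spectrum(r0, chi):
    # normalized spectrum function Q(r0, chi)
    return PREF * r0 * kernel(r0, chi) / q_ratio(chi)

for chi in (0.01, 0.1, 1.0, 10.0):
    fit = 1.0 / (1.0 + 1.04 * chi) ** (4.0 / 3.0)  # chi = sqrt(I_cl/I_C)
    print(f"chi={chi:6.2f}  q={q_ratio(chi):.4f}  fit={fit:.4f}")
\end{verbatim}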
As mentioned above, $$ \frac{dI_{\rm QED}}{dr_0}=\frac{dI^{\rm pol}_{\rm QED}}{dr_0}+\frac{dI^{\rm depol}_{\rm QED}}{dr_0}, $$ where the polarized and non-polarized contributions are given by Eqs.(\ref{eq:pol}-\ref{eq:depol}). \subsection{Radiation back-reaction: radiation force approximation} \begin{figure} \includegraphics[scale=0.4]{Fig4.eps} \caption{The emission spectrum for 600 MeV electrons interacting with 30-fs laser pulses of intensity $2\cdot 10^{22}\,{\rm W/cm}^2$: with (solid) or without (dashed) accounting for the QED effects. Here $\hbar\omega_{c0}\approx 1.1$ MeV for $\lambda=0.8\mu$m. } \label{fig_4} \end{figure} While emitting a photon, an electron also acquires 4-momentum from the external field (see Eq.(\ref{eq:cons})): $$ dp^\mu_F=\frac{(k^\prime\cdot p_i(\xi))} {(k\cdot p_i)-(k\cdot k^\prime)}k^\mu. $$ The interaction with the field ensures that the {\it total} effect of emission on the electron does not break the identity $(p_f\cdot p_f)=1$. As long as the angular distribution of emission is approximated with the Dirac function, the expression for $dp^\mu_F$ needs to be corrected to ensure exact momentum-energy conservation with the approximated momentum of radiation. The choices of near-unity correction coefficients in $dp_F$ are somewhat different in the cases $\chi\le 1$ and $\chi\gg 1$. For moderate values of $\chi\le 1$ the {\it radiation force}, $(dp_F-dp_{\rm rad})/d\tau$, may be introduced. In this approximation it is assumed that the change in the electron momentum within an infinitesimal time interval is also infinitesimal. This `Newton's law' approximation is pertinent to classical physics: it both ignores the fact that the change in the electron momentum at $\chi\sim1$ is essentially finite because of the finite momentum of the emitted photon, and breaks the lower bound on the time interval presented in (\ref{eq:bounds}). The approximation, however, is highly efficient and allows one to avoid time-consuming statistical simulations. The approximation error tends to zero as $\chi\rightarrow0$ and remains moderate at $\chi\sim1$ and even at $\chi=10$. The latter can be seen from Fig.\ref{fig_5} given below, in which the average relative change in the electron energy in the course of single-photon emission is presented (it is assumed to be negligible within the radiation force approximation). Within the radiation force approximation the best correction is $dp^\mu_F \approx k^\mu (k^\prime\cdot p_i)/(k\cdot p_i)$. The total radiation force may now be found by integrating $dp_F$ over $d(k^\prime\cdot k)$: \begin{equation}\label{eq:radf} \frac{d(p^\mu_f-p^\mu_i)}{d\tau}=\left(k^\mu\frac{(p_i\cdot p_i)}{(k\cdot p_i)}-p^\mu_i\right)\frac{I_{\rm QED}}{m_ec^2}. \end{equation} The radiation force maintains the abovementioned identity, since $(p_i\cdot d(p_f-p_i)/d\tau)=0$. Eq.(\ref{eq:radf}) is presented in a form which is applicable with both dimensionless and dimensional momenta.
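In code, a single radiative update per Eq.(\ref{eq:radf}) is elementary. The sketch below is ours: it assumes dimensionless momenta with $(p\cdot p)=1$, metric signature $(+,-,-,-)$, and a caller-supplied loss rate $I_{\rm QED}/(m_ec^2)$ (e.g., obtained from $q(\chi)I_{\rm cl}$ above); only the radiative part of the motion is advanced, the Lorentz-force part of the pusher being omitted:
\begin{verbatim}
# Radiative part of the momentum update, per Eq. (radf).
import numpy as np

def mdot(a, b):
    # Minkowski dot product, signature (+,-,-,-)
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def radiation_force_step(p, k, loss_rate, dtau):
    # dp/dtau = (k (p.p)/(k.p) - p) * I_QED/(m_e c^2)
    dp = (k * mdot(p, p) / mdot(k, p) - p) * loss_rate * dtau
    # (p . dp) = 0 exactly, so (p.p) = 1 is kept to O(dtau^2)
    return p + dp
\end{verbatim}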
In \cite{mine},\cite{our} it was mentioned that QED is not compatible with the traditional approach to the radiation force in classical electrodynamics, and an alternative equation of motion for a radiating electron was suggested: \begin{equation}\label{eq:our} \frac{dp^\mu}{d\tau}= \Omega^{\mu\nu}p_\nu- \frac{I_{\rm QED}}{m_ec^2}p^\mu+\tau_0\frac{I_{\rm QED}}{I_{\rm cl}} \Omega^{\mu\nu}\Omega_{\nu\beta}p^\beta, \end{equation} where $\Omega^{\mu\nu}=eF^{\mu\nu}/(m_ec)$, $F^{\mu\nu}$ is the field tensor and $$\tau_0=2e^2/(3m_ec^3)=(2/3)\alpha\lambdabar_C/c.$$ In the 1D plane wave the particular expression for the radiation force can be found using the following equation: $$\tau_0\Omega^{\mu\nu}\Omega_{\nu\beta}p^\beta=k^\mu\frac{(p\cdot p)I_{\rm cl}}{m_ec^2(k\cdot p)}.$$ With this account, the radiation force in Eq.(\ref{eq:our}) is the same as its QED formulation in Eq.(\ref{eq:radf}). This proves that the earlier derived Eq.(\ref{eq:our}) has a wide range of applicability, including quasi-classical electron motion in QED-strong fields; in the particular case of 1D wave fields it can be derived directly from the QED principles. Note that attempts to derive the radiation force from quantum mechanics have been made many times (see \cite{moniz}); the most convincing approach, which gives an equation quite similar to (\ref{eq:our}), may be found in \cite{neil}. Here, however, the derivation is provided from the QED side for the first time, with the resulting equation being different from those given in textbooks \cite{ll},\cite{jack}. The way to solve Eq.(\ref{eq:our}) within the PIC scheme and integrate the emission is described in \cite{our}. In Fig.\ref{fig_4} we show the numerical result for an electron interacting with a laser pulse. We see that the QED effects essentially modify the radiation spectrum even at laser intensities which have already been achieved. \subsection{Radiation back-reaction: Monte-Carlo approach} The radiation force approximation does not fully account for the statistical character of the emission process at $\chi\ge 1$. Specifically, we mentioned above that in the `Newton's law' approximation, the force, ${\bf f}$, provides only the {\it infinitesimal} change in the electron momentum, $\Delta{\bf p}={\bf f}\Delta t\rightarrow 0$, over an infinitely short time interval, as $\Delta t\rightarrow 0$. For radiation processes in QED-strong fields this point is in contradiction with a small {\it probability}, $\Delta t\cdot dW_{fi}/dt\rightarrow0$, for an electron to acquire a {\it finite} change in momentum, $|\delta{\bf p}|\sim|{\bf p}|$, in the course of emission. \begin{figure} \includegraphics[scale=0.4]{Fig5.eps} \caption{Expectation of the emitted photon energy. In dimensional units $\Delta{\cal E}=\hbar\omega^\prime$ and ${\cal E}$ is the dimensional energy of an electron prior to emission.} \label{fig_5} \end{figure} A more quantitative, though more cumbersome, description may be achieved within the QED Monte-Carlo approach. It is convenient to relate the emission probability to an interval of proper time, $\Delta\tau=\Delta t/{\cal E}_i$. From (\ref{eq:probabf}) it follows that: \begin{equation}\label{eq:probabtau} \frac{dW_{fi}}{dr_0d\tau}=\frac{I_{\rm QED}}{m_ec^2}\frac{Q(r_0,\chi)}{\chi r_0}. \end{equation} (Note that on multiplying Eq.(\ref{eq:probabtau}) by $m_ec^2(\omega^\prime/{\cal E}_i)\approx m_ec^2\chi r_0$, one obtains again the formula for the spectral distribution of energy emitted per interval of time.)
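Eq.(\ref{eq:probabtau}) already contains everything needed for a Monte-Carlo implementation. The sketch below (names and numerical cut-offs are ours; it reuses \texttt{Q\_spectrum} from the earlier sketch and expects the loss rate $I_{\rm QED}/(m_ec^2)$ as an input) implements the gambling procedure that is spelled out in what follows: it computes the total emission probability for the interval $\Delta\tau$, decides whether an emission occurs, and, if it does, inverts the incomplete probability integral for $\omega^\prime/{\cal E}_i$ by root finding:
\begin{verbatim}
# Monte-Carlo emission gamble based on Eq. (probabtau).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def emission_gamble(chi, loss_rate, dtau, rng=np.random.default_rng()):
    eps = 1e-10
    # <omega'/E_i> = chi / int_0^{1/chi} Q(r0,chi) dr0/r0
    norm, _ = quad(lambda r0: Q_spectrum(r0, chi) / r0,
                   eps, (1.0 - eps) / chi, limit=200)
    W = dtau * loss_rate * norm / chi      # total probability, must be < 1
    rnd = rng.random()
    if rnd > W:
        return None                        # no emission in this interval
    # invert int_0^{r0*} Q(r0,chi) dr0/r0 = (rnd/W) * norm for r0*
    def f(x):
        val, _ = quad(lambda r0: Q_spectrum(r0, chi) / r0, eps, x, limit=200)
        return val - (rnd / W) * norm
    r0_star = brentq(f, eps, (1.0 - eps) / chi)
    return chi * r0_star                   # sampled omega'/E_i
\end{verbatim}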
As long as Eq.(\ref{eq:probabtau}) for the differential probability is available, one can find the expected energy of the emitted photon: $$\frac{1}{{\cal E}_i}\left<\omega^\prime\right>= \frac\chi{\int_0^{1/\chi}{Q(r_0,\chi)(dr_0/r_0)}}. $$ A plot of the expectation of the ratio, $\left<\omega^\prime/{\cal E}_i\right>$, vs $\chi$ is given in Fig.\ref{fig_5}, with the energy of emitted photons denoted as $\Delta{\cal E}$. The total probability of emission within the interval of proper time is given by the complete probability integral: $$ W_{fi}= \Delta\tau \int{\frac{dW_{fi}}{dr_0d\tau} dr_0}=\Delta\tau\frac{I_{\rm QED}}{m_ec^2}\left<\frac{\omega^\prime}{{\cal E}_i}\right>^{-1}. $$ Both within the QED perturbation theory and within the Monte-Carlo scheme $W_{fi}$ is assumed to be small, $W_{fi}<1$. The probability of no emission equals $1-W_{fi}\ge0$. The partial probability, $W_{fi} (\omega^\prime<\omega^\prime_0)$, for the emission with the photon energy not exceeding the given value, $\omega^\prime_0$, is given by the incomplete probability integral: $$ W_{fi} (\omega^\prime<\omega^\prime_0)= W_{fi}\left<\frac{\omega^\prime}{{\cal E}_i}\right>\int_0^{ \omega^\prime_0/({\cal E}_i\chi)}{ Q(r_0,\chi)\frac{dr_0} {\chi r_0} }. $$ Therefore, for a given interval of proper time and calculated $\chi$, $\left<\omega^\prime\right>/{\cal E}_i$ and $W_{fi}<1$, the expression for the only scalar to gamble, $\omega^\prime/{\cal E}_i$, in terms of a random number, $0\le {\rm rnd}<1$, is implicitly given by the following integral equation: $$ \int^{ \omega^\prime_0/({\cal E}_i\chi)}_0{ Q(r_0,\chi)\frac{dr_0} {\chi r_0} }=\frac{\rm rnd}{W_{fi}}\left<\frac{\omega^\prime}{ {\cal E}_i }\right>^{-1}, $$ if the gambled value of ${\rm rnd}$ does not exceed $W_{fi}$: $0\le{\rm rnd}\le W_{fi}$. Otherwise, i.e.\ if $W_{fi}<{\rm rnd}\le1$, the emission within this interval does not occur. Once the value of $\omega^\prime/{\cal E}$ is found, the change in the electron 4-momentum due to single-photon emission during the time interval, $\Delta\tau$, may be determined as follows: $$ p^\mu_f-p^\mu_i=\left\{k^\mu\frac{(p_i\cdot p_i)[1-\omega^\prime/(2{\cal E})]}{(k\cdot p_i)[1-\omega^\prime/{\cal E}]}-p^\mu_i\right\}\frac{\omega^\prime}{\cal E}. $$ It is easy to see that the identity $(p_f\cdot p_f)=1$ is maintained. To achieve this, a correction factor, $1-\omega^\prime/(2{\cal E})$, is applied to the momentum exchange with the wave field as present in Eq.(\ref{eq:cons}). The implementation of this method for 3D realistic laser fields, together with simulation results and an account for pair production, will be described in detail in a forthcoming publication. \section{Conclusion} QED-strong fields in the focus of an ultra-bright laser may be realized, if desired, using technologies which already exist. In any case, these effects will come into play when laser-plasma interactions are explored with the next generation of lasers. It is demonstrated that electron motion in very strong laser fields with pronounced QED effects may be successfully described within the radiation force approximation. The necessary corrections in the radiation force and the emission spectra to account for the QED effects are parameterized by the {\it sole} parameter, $I_{\rm cl}$. {\bf We acknowledge} the invaluable help and fruitful advice we received from S.S. Bulanov. V.T. Tikhonchuk kindly pointed out some effects we missed. We are grateful to J. Rafelski, R.F. O'Connel and V. Hnizdo for critical comments and to V. Tenishev for discussing the Monte-Carlo method.
The work of one of the authors (I.S.) on high energy density physics is supported by the DOE NNSA under the Predictive Science Academic Alliances Program by grant DE-FC52-08NA28616.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} As an extension of crossed modules (Whitehead) and $2$-crossed modules (Conduch\'{e}), Arvasi, Kuzp\i nar\i\ and Uslu \cite{arkaus} defined $3$-crossed modules as a model for homotopy $4$-types. Kan, in \cite{kan}, also proved that simplicial groups are algebraic models for homotopy types. It is known from \cite{patron3, grand, portervascon} that simplicial algebras with Moore complex of length $1,$ $2$ lead to crossed modules and $2$-crossed modules, which are related to the Koszul complex and Andre-Quillen homology constructions for use in homotopical and homological algebra. \bigskip J.L. Doncel, A.R. Grandjean and M.J. Vale in \cite{doncel} extend the $2$-crossed modules of groups to commutative algebras. As indicated in \cite{doncel}, they defined a homology theory and obtained the relation with Andre-Quillen homology for $n=0,1,2$. This homology theory includes the projective $2$-crossed resolution and the homotopy operator given in \cite{vale1}. Of course these results are based on the work of T. Porter \cite{portervascon}, which involves the relation between the Koszul complex and the Andre-Quillen homology by means of free crossed modules of commutative algebras. In this vein, we hope that it would be possible to generalise these results by using the commutative algebra case of higher-dimensional crossed algebraic gadgets. \bigskip The present work involves the relation between $3$-crossed modules and simplicial algebras, without details, since most of the calculations are the same as in the group case given in \cite{arkaus}. Furthermore, the work involves the existence of a projective $3$-crossed resolution of a $\mathbf{k}$-algebra, to obtain higher-dimensional homological information about commutative algebras. Here the construction is a bit different from the $2$-crossed resolution given in \cite{doncel} because of the number of Peiffer liftings. At the end of the work we give $3$-crossed modules of Lie algebras. \bigskip The main results of this work are: \begin{enumerate} \item Introduce the notion of $3$-crossed modules of commutative algebras and Lie algebras; \item Construct the passage from $3$-crossed modules of algebras to simplicial algebras and the converse passage, as an analogue of the result given in \cite{arkaus}; \item Define the projective $3$-crossed resolution to investigate higher-dimensional homological information, and show the existence of this resolution for an arbitrary $\mathbf{k}$-algebra, as was shown for $2$-crossed modules in \cite{doncel}. \end{enumerate} \section{Preliminaries} In this work $\mathbf{k}$ will be a fixed commutative ring with identity $1\neq 0$, and all algebras will be commutative $\mathbf{k}$-algebras; they are not required to have an identity. \subsection{\textbf{Simplicial Algebras}} See \cite{may}, \cite{curtis} for most of the basic properties of simplicial structures. A simplicial algebra $\mathbf{E}$ consists of a family of algebras $\left\{ E_{n}\right\} $ together with face and degeneracy maps $d_{i}^{n}:E_{n}\rightarrow E_{n-1}$, $0\leq i\leq n$, $(n\neq 0)$ and $s_{i}^{n}:E_{n-1}\rightarrow E_{n},$ $0\leq i\leq n$, satisfying the usual simplicial identities given in \cite{andre}, \cite{ill}. The category of simplicial algebras will be denoted by $\mathbf{SimpAlg}$. Let $\Delta $ denote the category of finite ordinals. For each $k\geq 0$ we obtain a subcategory $\Delta _{\leq k}$ determined by the objects $\left[ i\right] $ of $\Delta $ with $i\leq k$.
A $k$-truncated simplicial algebra is a functor from $\Delta _{\leq k}^{op}$ to $\mathbf{Alg}$ (the category of algebras). We will denote the category of $k$-truncated simplicial algebras by $\mathbf{Tr}_{k}\mathbf{SimpAlg}$. By a $k$-\textit{truncation of a simplicial algebra}, we mean a $k$-truncated simplicial algebra $\mathbf{tr}_{k}\mathbf{E}$ obtained by forgetting dimensions of order $>k$ in a simplicial algebra $\mathbf{E}$. Then we have the adjoint situations \begin{center} $\xymatrix@R=60pt@C=60pt{\ \mathbf{SimpAlg} \ar@{->}@<2pt>[r]^-{\mathbf{tr}_{k}} & \mathbf{Tr}_{k} \mathbf{SimpAlg} \ar@{->}@<2pt>[l]^-{\mathbf{st}_{k}} \ar@{->}@<2pt>[r]^-{\mathbf{cost}_{k}} & \mathbf{SimpAlg} \ar@{->}@<2pt>[l]^-{\mathbf{tr}_{k}} }$ \end{center} \noindent where $\mathbf{st}_{k}$ and $\mathbf{cost}_{k}$ are called the $k$-skeleton and the $k$-coskeleton functors respectively. For detailed definitions see \cite{duskin}. \subsection{\textbf{The Moore Complex}} The Moore complex $\mathbf{NE}$ of a simplicial algebra $\mathbf{E}$ is defined to be the normal chain complex $(\mathbf{NE,\partial })$ with \begin{equation*} NE_{n}=\bigcap\limits_{i=0}^{n-1}kerd_{i} \end{equation*} and with differential $\partial _{n}:NE_{n}\rightarrow NE_{n-1}$ induced from $d_{n}$ by restriction. We say that the Moore complex $\mathbf{NE}$ of a simplicial algebra $\mathbf{E}$ is of \textit{length $k$} if $\mathbf{NE}_{n}=0$ for all $n\geq k+1$. We denote the category of simplicial algebras with Moore complex of length $k$ by $\mathbf{SimpAlg}_{\leq k}.$ The Moore complex, $\mathbf{NE}$, carries a hypercrossed complex structure (see Carrasco \cite{c}) from which $\mathbf{E}$ can be rebuilt. We now briefly review this construction; the details can be found in \cite{c}. \subsection{\textbf{The Poset of Surjective Maps}} The following notation and terminology is derived from \cite{cc}. For the ordered set $[n]=\{0<1<\dots <n\}$, let $\alpha _{i}^{n}:[n+1]\rightarrow \lbrack n]$ be the increasing surjective map given by: \begin{equation*} \alpha _{i}^{n}(j)=\left\{ \begin{array}{ll} j & \text{if }j\leq i, \\ j-1 & \text{if }j>i.\end{array}\right. \end{equation*} Let $S(n,n-r)$ be the set of all monotone increasing surjective maps from $[n]$ to $[n-r]$. This can be generated from the various $\alpha _{i}^{n}$ by composition. The composition of these generating maps is subject to the following rule: $\alpha _{j}\alpha _{i}=\alpha _{i-1}\alpha _{j}$ for $j<i$. This implies that every element $\alpha \in S(n,n-r)$ has a unique expression as $\alpha =\alpha _{i_{1}}\circ \alpha _{i_{2}}\circ \dots \circ \alpha _{i_{r}} $ with $0\leq i_{1}<i_{2}<\dots <i_{r}\leq n-1$, where the indices $i_{k}$ are the elements of $[n]$ such that $\{i_{1},\dots ,i_{r}\}=\{i:\alpha (i)=\alpha (i+1)\}$. We thus can identify $S(n,n-r)$ with the set $\{(i_{r},\dots ,i_{1}):0\leq i_{1}<i_{2}<\dots <i_{r}\leq n-1\} $. In particular, the single element of $S(n,n)$, defined by the identity map on $[n]$, corresponds to the empty 0-tuple ( ) denoted by $\emptyset _{n} $. Similarly the only element of $S(n,0)$ is $(n-1,n-2,\dots ,0)$. For all $n\geq 0$, let \begin{equation*} S(n)=\bigcup_{0\leq r\leq n}S(n,n-r). \end{equation*} We say that $\alpha =(i_{r},\dots ,i_{1})<\beta =(j_{s},\dots ,j_{1})$ in $S(n)$ if $i_{1}=j_{1},\dots ,i_{k}=j_{k}$ but $i_{k+1}>j_{k+1},$ $(k\geq 0)$ or if $i_{1}=j_{1},\dots ,i_{r}=j_{r}$ and $r<s$. This makes $S(n)$ an ordered set.
For example \begin{eqnarray*} S(2) &=&\{\emptyset _{2}<(1)<(0)<(1,0)\} \\ S(3) &=&\{\emptyset _{3}<(2)<(1)<(2,1)<(0)<(2,0)<(1,0)<(2,1,0)\} \\ S(4) &=&\{\emptyset _{4}<(3)<(2)<(3,2)<(1)<(3,1)<(2,1)<(3,2,1)<(0)<(3,0)<(2,0) \\ &<&(3,2,0)<(1,0)<(3,1,0)<(2,1,0)<(3,2,1,0)\} \end{eqnarray*} \subsection{The Semidirect Decomposition of a Simplicial Algebra} The fundamental idea behind this can be found in Conduch\'{e} \cite{conduche}. A detailed investigation of this for the case of simplicial groups is given in Carrasco and Cegarra \cite{cc}. The algebra case of the structure is also given in \cite{c}. \begin{proposition} If \textbf{E} is a simplicial algebra, then for any $n\geq 0$ \begin{equation*} \begin{array}{lll} E_n & \cong & (\ldots (NE_n \rtimes s_{n-1}NE_{n-1})\rtimes \ldots \rtimes s_{n-2}\ldots s_0NE_1)\rtimes \\ & & \qquad (\ldots (s_{n-2}NE_{n-1}\rtimes s_{n-1}s_{n-2}NE_{n-2})\rtimes \ldots \rtimes s_{n-1}s_{n-2}\dots s_0NE_0). ~~\end{array} \end{equation*} \end{proposition} \begin{proof} This is by repeated use of the following lemma. \end{proof} \begin{lemma} Let \textbf{E} be a simplicial algebra. Then $E_{n}$ can be decomposed as a semidirect product: \begin{equation*} E_{n}\cong \mathrm{ker}d_{n}^{n}\rtimes s_{n-1}^{n-1}(E_{n-1}). \end{equation*} \end{lemma} The bracketing and the order of terms in this multiple semidirect product are generated by the sequence: \begin{equation*} \begin{array}{lll} E_{1} & \cong & NE_{1}\rtimes s_{0}NE_{0} \\ E_{2} & \cong & (NE_{2}\rtimes s_{1}NE_{1})\rtimes (s_{0}NE_{1}\rtimes s_{1}s_{0}NE_{0}) \\ E_{3} & \cong & ((NE_{3}\rtimes s_{2}NE_{2})\rtimes (s_{1}NE_{2}\rtimes s_{2}s_{1}NE_{1}))\rtimes \\ & & \qquad \qquad \qquad ((s_{0}NE_{2}\rtimes s_{2}s_{0}NE_{1})\rtimes (s_{1}s_{0}NE_{1}\rtimes s_{2}s_{1}s_{0}NE_{0})).\end{array} \end{equation*} and \begin{equation*} \begin{array}{lll} E_{4} & \cong & (((NE_{4}\rtimes s_{3}NE_{3})\rtimes (s_{2}NE_{3}\rtimes s_{3}s_{2}NE_{2}))\rtimes \\ & & \qquad \ ((s_{1}NE_{3}\rtimes s_{3}s_{1}NE_{2})\rtimes (s_{2}s_{1}NE_{2}\rtimes s_{3}s_{2}s_{1}NE_{1})))\rtimes \\ & & \qquad \qquad s_{0}(\text{decomposition of }E_{3}).\end{array} \end{equation*} Note that the term corresponding to $\alpha =(i_{r},\ldots ,i_{1})\in S(n)$ is \begin{equation*} s_{\alpha }(NE_{n-\#\alpha })=s_{i_{r}...i_{1}}(NE_{n-\#\alpha })=s_{i_{r}}...s_{i_{1}}(NE_{n-\#\alpha }), \end{equation*} where $\#\alpha =r.$ Hence any element $x\in E_{n}$ can be written in the form \begin{equation*} x=y+\sum\limits_{\alpha \in S(n){\backslash }\left\{ \emptyset _{n}\right\} }s_{\alpha }(x_{\alpha })\ \text{with }y\in NE_{n}\ \text{and }x_{\alpha }\in NE_{n-\#\alpha }. \end{equation*}
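For instance, for $n=2$ this expression can be made completely explicit; a direct check with the simplicial identities (using $s_{1}s_{0}=s_{0}s_{0}$ and $d_{0}d_{1}=d_{0}d_{0}$) gives \begin{equation*} x=\underbrace{\left( x-s_{0}d_{0}x-s_{1}d_{1}x+s_{1}d_{0}x\right) }_{\in NE_{2}}+s_{1}\underbrace{\left( d_{1}x-d_{0}x\right) }_{\in NE_{1}}+s_{0}\underbrace{\left( d_{0}x-s_{0}d_{0}d_{0}x\right) }_{\in NE_{1}}+s_{1}s_{0}\underbrace{\left( d_{0}d_{0}x\right) }_{\in NE_{0}}, \end{equation*} the first term being exactly the projection $p(x)=(1-s_{1}d_{1})(1-s_{0}d_{0})(x)$ used in the next subsection.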
\subsection{\textbf{Hypercrossed Complex Pairings}} In the following we recall from \cite{patron3} hypercrossed complex pairings for commutative algebras. The fundamental idea behind this can be found in Carrasco and Cegarra (cf. \cite{cc}). The construction depends on a variety of sources, mainly Conduch\'{e} \cite{conduche} and Z. Arvasi and T. Porter \cite{patron3}. Define a set $P(n)$ consisting of pairs of elements $(\alpha ,\beta )$ from $S(n)$ with $\alpha \cap \beta =\emptyset $ and $\beta <\alpha $ with respect to the lexicographic ordering in $S(n)$, where $\alpha =(i_{r},\dots ,i_{1})$, $\beta =(j_{s},\dots ,j_{1})\in S(n)$. The pairings that we will need, \begin{equation*} \{C_{\alpha ,\beta }:NE_{n-\sharp \alpha }\otimes NE_{n-\sharp \beta }\rightarrow NE_{n}:(\alpha ,\beta )\in P(n),n\geq 0\} \end{equation*} are given as composites by the diagram \begin{center} $\xymatrix@R=40pt@C=60pt{\ NE_{n-\#\alpha}\otimes NE_{n-\#\beta} \ar[d]_{s_{\alpha}\otimes s_{\beta}} \ar[r]^-{C_{\alpha ,\beta}} & NE_n \\ E_n \otimes E_n \ar[r]_{\mu} & E_n \ar[u]_{p} }$ \end{center} where $s_{\alpha }=s_{i_{r}},\dots ,s_{i_{1}}:NE_{n-\sharp \alpha }\rightarrow E_{n},$\quad $s_{\beta }=s_{j_{s}},\dots ,s_{j_{1}}:NE_{n-\sharp \beta }\rightarrow E_{n},$ \newline $p:E_{n}\rightarrow NE_{n}$ is defined by the composite projections $p(x)=p_{n-1}\dots p_{0}(x),$ where \newline $p_{j}(z)=z-s_{j}d_{j}(z)$ with $j=0,1,\dots ,n-1,$ $\mu :E_{n}\otimes E_{n}\rightarrow E_{n}$ is the multiplication map, and $\sharp \alpha $ is the number of elements in $\alpha ;$ similarly for $\sharp \beta .$ Thus \begin{eqnarray*} C_{\alpha ,\beta }(x_{\alpha }\otimes y_{\beta }) &=&p\mu (s_{\alpha }\otimes s_{\beta })(x_{\alpha }\otimes y_{\beta }) \\ &=&p(s_{\alpha }(x_{\alpha })s_{\beta }(y_{\beta })) \\ &=&(1-s_{n-1}d_{n-1})\dots (1-s_{0}d_{0})(s_{\alpha }(x_{\alpha })s_{\beta }(y_{\beta })) \end{eqnarray*} Let $I_{n}$ be the ideal in $E_{n}$ generated by elements of the form \begin{equation*} C_{\alpha ,\beta }(x_{\alpha }\otimes y_{\beta }) \end{equation*} where $x_{\alpha }\in NE_{n-\sharp \alpha }$ and $y_{\beta }\in NE_{n-\sharp \beta }.$ We illustrate this for $n=3$ and $n=4$ as follows: For $n=3$, the possible Peiffer pairings are the following \begin{center} $C_{(1,0)(2)}$, $C_{(2,0)(1)}$, $C_{(0)(2,1)}$, $C_{(2)(0)}$, $C_{(2)(1)}$, $C_{(1)(0)}$ \end{center} For $x_{1},y_{1}\in NE_{1}$ and $x_{2},y_{2}\in NE_{2}$, the corresponding generators of $I_{3}$ are: \begin{align*} C_{(1,0)(2)}(x_{1}\otimes y_{2})& =(s_{1}s_{0}x_{1}-s_{2}s_{0}x_{1})s_{2}y_{2}, \\ C_{(2,0)(1)}(x_{1}\otimes y_{2})& =(s_{2}s_{0}x_{1}-s_{2}s_{1}x_{1})(s_{1}y_{2}-s_{2}y_{2}) \\ C_{(0)(2,1)}(x_{2}\otimes y_{1})& =s_{2}s_{1}x_{2}(s_{0}y_{1}-s_{1}y_{1}+s_{2}y_{1}) \\ C_{(1)(0)}(x_{2}\otimes y_{2})& =s_{1}x_{2}(s_{0}y_{2}-s_{1}y_{2})+s_{2}(x_{2}y_{2}), \\ C_{(2)(0)}(x_{2}\otimes y_{2})& =(s_{2}x_{2})(s_{0}y_{2}), \\ C_{(2)(1)}(x_{2}\otimes y_{2})& =s_{2}x_{2}(s_{1}y_{2}-s_{2}y_{2}).
\end{align*} For $n=4$, the key pairings are thus the following \begin{center} \begin{tabular}{lllll} $C_{(3,2,1)(0)},$ & $C_{(3,2,0)(1)},$ & $C_{(3,1,0)(2)},$ & $C_{(2,1,0)(3)},$ & $C_{(3,0)(2,1)},$ \\ $C_{(3,1)(2,0)},$ & $C_{(3,2)(1,0)},$ & $C_{(3,2)(1)},$ & $C_{(3,2)(0)},$ & $C_{(3,1)(0)},$ \\ $C_{(0)(2,1)},$ & $C_{(3,1)(2)},$ & $C_{(2,1)(3)},$ & $C_{(3,0)(2)},$ & $C_{(3,0)(1)},$ \\ $C_{(2,0)(3)},$ & $C_{(2,0)(1)},$ & $C_{(1,0)(3)},$ & $C_{(1,0)(2)},$ & $C_{(3)(2)},$ \\ $C_{(3)(1)},$ & $C_{(3)(0)},$ & $C_{(2)(1)},$ & $C_{(2)(0)},$ & $C_{(1)(0)},$\end{tabular} \end{center} \begin{theorem} (\cite{patron3}) Let $\mathbf{E}$ be a simplicial algebra with Moore complex $\mathbf{NE}$ in which $E_{n}=D_{n}$, where $D_{n}$ is the ideal of $E_{n}$ generated by the degenerate elements in dimension $n$. Then \begin{equation*} \begin{tabular}{l} $\partial _{n}(NE_{n})=\sum\limits_{I,J}\left[ K_{I},K_{J}\right] $\end{tabular} \end{equation*} for $I,J\subseteq \lbrack n-1]$ with $I\cup J=[n-1]$, $I=[n-1]-\{\alpha \}$, $J=[n-1]-\{\beta \}$, where $(\alpha ,\beta )\in P(n)$, for $n=2,3$ and $4$. \end{theorem} \bigskip \begin{remark} Briefly: in \cite{amut4} the normal subgroup $\partial _{n}(NG_{n}\cap D_{n})$ was described in terms of the $F_{\alpha ,\beta }$ elements, which were first defined by Carrasco in \cite{c}. Castiglioni and Ladra generalised this inclusion in \cite{ladra}. \end{remark} Following \cite{patron3} we have \begin{lemma} Let $\mathbf{E}$ be a simplicial algebra with Moore complex $\mathbf{NE}$ of length $3$. Then for $n=4$ the images of the $C_{\alpha ,\beta }$ elements under $\partial _{4}$, given in Table 1, are trivial. \end{lemma} \begin{proof} Since $NE_{4}=0$, the result follows from \cite{patron3}. \end{proof} \newpage \begin{tabular}{|l|l|l|l|} \hline 1 & $d_{4}[C_{(3,2,1)(0)}(x_{1}\otimes y_{3})]$ & $=$ & $s_{2}s_{1}x_{1}(s_{0}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{2}d_{3}y_{3}-y_{3})$ \\ \hline 2 & $d_{4}[C_{(3,2,0)(1)}(x_{1}\otimes y_{3})]$ & $=$ & $(s_{2}s_{0}x_{1}-s_{2}s_{1}x_{1})(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline 3 & $d_{4}[C_{(3,1,0)(2)}(x_{1}\otimes y_{3})]$ & $=$ & $(s_{1}s_{0}x_{1}-s_{2}s_{0}x_{1})(s_{2}d_{3}y_{3}-y_{3})$ \\ \hline 4 & $d_{4}[C_{(2,1,0)(3)}(x_{1}\otimes y_{3})]$ & $=$ & $(s_{2}s_{1}s_{0}d_{1}x_{1}-s_{1}s_{0}x_{1})y_{3}$ \\ \hline 5 & $d_{4}[C_{(3,2)(1,0)}(x_{2}\otimes y_{2})]$ & $=$ & $(s_{1}s_{0}d_{2}x_{2}-s_{2}s_{0}d_{2}x_{2}-s_{0}x_{2})s_{2}y_{2}$ \\ \hline 6 & $d_{4}[C_{(3,1)(2,0)}(x_{2}\otimes y_{2})]$ & $=$ & $(s_{1}x_{2}-s_{0}x_{2}+s_{2}s_{0}d_{2}x_{2}-s_{2}s_{1}d_{2}x_{2})(s_{1}y_{2}-s_{2}y_{2}) $ \\ \hline 7 & $d_{4}[C_{(3,0)(2,1)}(x_{2}\otimes y_{2})]$ & $=$ & $(s_{2}s_{1}d_{2}x_{2}-s_{1}x_{2})(s_{0}y_{2}-s_{1}y_{2}+s_{2}y_{2})$ \\ \hline 8 & $d_{4}[C_{(3,2)(1)}(x_{2}\otimes y_{3})]$ & $=$ & $s_{2}x_{2}(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline 9 & $d_{4}[C_{(3,2)(0)}(x_{2}\otimes y_{3})]$ & $=$ & $s_{2}x_{2}(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3})$ \\ \hline 10 & $d_{4}[C_{(3,1)(2)}(x_{2}\otimes y_{3})]$ & $=$ & $(s_{1}x_{2}-s_{2}x_{2})(s_{2}d_{3}y_{3}-y_{3})$ \\ \hline 11 & $d_{4}[C_{(3,1)(0)}(x_{2}\otimes y_{3})]$ & $=$ & $(s_{1}x_{2}-s_{2}x_{2})(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3}) $ \\ \hline 12 & $d_{4}[C_{(3,0)(2)}(x_{2}\otimes y_{3})]$ & $=$ & $(s_{0}x_{2}-s_{1}x_{2}+s_{2}x_{2})(s_{2}d_{3}y_{3}-y_{3})$ \\ \hline 13 & $d_{4}[C_{(3,0)(1)}(x_{2}\otimes y_{3})]$ & $=$ & $(s_{0}x_{2}-s_{1}x_{2}+s_{2}x_{2})(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline
14 & $d_{4}[C_{(2,1)(3)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{2}s_{1}d_{2}x_{2}-s_{1}x_{2})y_{3}$ \\ \hline 15 & $d_{4}[C_{(0)(2,1)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{2}s_{1}d_{2}x_{2}-s_{1}x_{2})(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3}) $ \\ \hline 16 & $d_{4}[C_{(2,0)(3)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{2}s_{0}d_{2}x_{2}-s_{0}x_{2}+s_{1}x_{2}-s_{1}s_{1}d_{2}x_{2})y_{3}$ \\ \hline 17 & $d_{4}[C_{(2,0)(1)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{2}s_{0}d_{2}x_{2}-s_{0}x_{2}+s_{1}x_{2}-s_{2}s_{1}d_{2}x_{2})$ \\ \hline & & & $(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline 18 & $d_{4}[C_{(1,0)(3)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{2}s_{0}d_{2}x_{2}-s_{0}x_{2}-s_{1}s_{0}d_{0}x_{2})y_{3}$ \\ \hline 19 & $d_{4}[C_{(1,0)(2)}(x_{2}\otimes y_{3})]$ & $=$ & $% (s_{1}s_{0}d_{2}x_{2}-s_{2}s_{0}d_{2}x_{2}+s_{0}x_{2})(s_{2}d_{3}y_{3}-y_{3}) $ \\ \hline 20 & $d_{4}[C_{(3)(2)}(x_{3}\otimes y_{3})]$ & $=$ & $% x_{3}(s_{2}d_{3}y_{3}-y_{3})$ \\ \hline 21 & $d_{4}[C_{(3)(1)}(x_{3}\otimes y_{3})]$ & $=$ & $% x_{3}(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline 22 & $d_{4}[C_{(3)(0)}(x_{3}\otimes y_{3})]$ & $=$ & $% x_{3}(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3})$ \\ \hline 23 & $d_{4}[C_{(2)(1)}(x_{3}\otimes y_{3})]$ & $=$ & $% (s_{2}d_{3}x_{3}-x_{3})(s_{1}d_{3}y_{3}-s_{2}d_{3}y_{3}+y_{3})$ \\ \hline 24 & $d_{4}[C_{(2)(0)}(x_{3}\otimes y_{3})]$ & $=$ & $% (s_{2}d_{3}x_{3}-x_{3})(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3}) $ \\ \hline 25 & $d_{4}[C_{(1)(0)}(x_{3}\otimes y_{3})]$ & $=$ & $% (s_{1}d_{3}x_{3}-s_{2}d_{3}x_{3}+x_{3})(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}+s_{0}d_{3}y_{3}-y_{3}) $ \\ \hline \end{tabular} \begin{center} Table 1 \end{center} where $x_{3},y_{3}\in NG_{3},x_{2},y_{2}\in NG_{2},x_{1}\in NG_{1}$. \newpage \subsection{Crossed Modules} Here we will recall the notion of crossed modules of commutative algebras given in \cite{portervascon} and \cite{e1} Let $R$ be a $\mathbf{k}$-algebra with identity. A $crossed$ $module$ of commutative algebra is an $R$-algebra $C$, together with a commutative action of $R$ on $C$ and $R$-algebra morphism $\partial :C\rightarrow R$ together with an action of $R$ on $C$, written $r\cdot c$ for $r\in R$ and $% c\in C$, satisfying the conditions. \textbf{CM1)} for all $r\in R$ , $c\in C $% \begin{equation*} \partial (r\cdot c)=r\partial c \end{equation*} \textbf{CM2) }(Peiffer Identity) for all $c,c^{\prime }\in C$% \begin{equation*} {\partial c}\cdot c^{\prime }=cc^{\prime } \end{equation*}% We will denote such a crossed module by $(C,R,\partial )$. A \textit{morphism of crossed module} from $(C,R,\partial )$ to $(C^{\prime },R^{\prime },\partial ^{\prime })$ is a pair of $\mathbf{k}$-algebra morphisms% \begin{equation*} \phi :C\longrightarrow C^{\prime }\text{ , \ }\psi :R\longrightarrow R^{\prime }\text{ } \end{equation*}% such that $\phi (r\cdot c)={\psi (r)}\cdot \phi (r)$ and $\partial ^{\prime }\phi (c)=\psi \partial (c)$. We thus get a category $\mathbf{XMod}$ of crossed modules. \newline \textit{Examples of Crossed Modules} (i) Any ideal, $I$, in $R$ gives an inclusion map $I\longrightarrow R,$ which is a crossed module then we will say $\left( I,R,i\right) $ is \ an ideal pair. In this case, of course, $R$ acts on $I$ by multiplication and the inclusion homomorphism $i$ makes $\left( I,R,i\right) $ into a crossed module, an \textquotedblleft inclusion crossed modules\textquotedblright . 
Conversely, \begin{lemma} If $(C,R,\partial )$ is a crossed module, $\partial (C)$ is an ideal of $R.$ (Indeed, $r\,\partial (c)=\partial (r\cdot c)\in \partial (C)$ by \textbf{CM1}.) \end{lemma} (ii) Any $R$-module $M$ can be considered as an $R$-algebra with zero multiplication and hence the zero morphism $0:M\rightarrow R$ sending everything in $M$ to the zero element of $R$ is a crossed module. Again conversely: \begin{lemma} If $(C,R,\partial )$ is a crossed module, $\ker \partial $ is an ideal in $C$ and inherits a natural $R$-module structure from the $R$-action on $C.$ Moreover, $\partial (C)$ acts trivially on $\ker \partial ,$ hence $\ker \partial $ has a natural $R/\partial (C)$-module structure. \end{lemma} As these two examples suggest, general crossed modules lie between the two extremes of ideals and modules. Both aspects are important.\bigskip (iii) In the category of algebras, the appropriate replacement for automorphism groups is the multiplication algebra defined by Mac Lane \cite{[m]}. The automorphism crossed module then corresponds to the multiplication crossed module $\left( R,M\left( R\right) ,\mu \right) $. To see this crossed module, we need to assume $Ann\left( R\right) =0$ or $R^{2}=R$ and let $M\left( R\right) $ be the set of all multipliers $\delta :R\rightarrow R$ such that for all $r,r^{\prime }\in R$, $\delta \left( rr^{\prime }\right) =\delta \left( r\right) r^{\prime }.$ $M\left( R\right) $ acts on $R$ by \[ \begin{array}{rcl} M\left( R\right) \times R & \longrightarrow & R \\ \left( \delta ,r\right) & \longmapsto & \delta \left( r\right) \end{array} \] and there is a morphism $\mu :R\rightarrow M\left( R\right) $ defined by $\mu \left( r\right) =\delta _{r}$ with $\delta _{r}\left( r^{\prime }\right) =rr^{\prime }$ for all $r,r^{\prime }\in R.$ \subsection{$2$-Crossed Modules} Now we recall the commutative algebra case of $2$-crossed modules due to A.R. Grandjean and Vale, \cite{grand}. A 2-crossed module of $k$-algebras is a complex \begin{equation*} C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} of $C_{0}$-algebras with $\partial _{2},\partial _{1}$ morphisms of $C_{0}$-algebras, where $C_{0}$ acts on itself by multiplication, with a bilinear function \begin{equation*} \{\quad \otimes \quad \}:C_{1}\otimes _{C_{0}}C_{1}\longrightarrow C_{2} \end{equation*} called the \textit{Peiffer lifting}, which satisfies the following axioms: \begin{equation*} \begin{array}{lrrl} \mathbf{2CM1)} & \partial _{2}\{y_{0}\otimes y_{1}\} & = & y_{0}y_{1}-\text{ }^{\partial _{1}(y_{1})}y_{0} \\ \mathbf{2CM2)} & \{\partial _{2}(x_{1})\otimes \partial _{2}(x_{2})\} & = & x_{1}x_{2} \\ \mathbf{2CM3)} & \{y_{0}\otimes y_{1}y_{2}\} & = & \{y_{0}y_{1}\otimes y_{2}\}+\text{ }^{\partial _{1}y_{2}}\{{y_{0}\otimes y_{1}}\} \\ \mathbf{2CM4)} & (i)\quad \{\partial _{2}(x)\otimes y\} & = & y\cdot x-\text{ }^{\partial _{1}(y)}x \\ & (ii)\quad \{y\otimes {\partial _{2}(x)}\} & = & y\cdot x\newline \\ \mathbf{2CM5)} & \text{ }^{z}\{y_{0}\otimes y_{1}\} & = & \{^{z}y_{0}\otimes y_{1}\}=\{y_{0}\text{ }\otimes \text{ }^{z}y_{1}\}\newline \end{array} \end{equation*} \newline for all $x,x_{1},x_{2}\in C_{2}$, $y,y_{0},y_{1},y_{2}\in C_{1}$ and $z\in C_{0}$. A morphism of $2$-crossed modules can be defined in an obvious way. We thus define the category of $2$-crossed modules, denoting it by $\mathbf{X}_{2}\mathbf{Mod}$. The proof of the following theorem can be found in \cite{patron3}.
\begin{theorem} The category of $2$-crossed modules is equivalent to the category of simplicial algebras with Moore complex of length $2$. \end{theorem} Now we give some remarks on $2$-crossed modules; the group case can be found in \cite{timcrossed}. 1) Let $C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}$ be a crossed module. If we take $C_{2}$ trivial then \begin{equation*} C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} is a $2$-crossed module with the Peiffer lifting defined by $\left\{ x\otimes y\right\} =0$ for $x,y\in C_{1}$. 2) If \begin{equation*} C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} is a $2$-crossed module then \begin{equation*} \frac{C_{1}}{{Im}\partial _{2}}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} is a crossed module. 3) Let \begin{equation*} C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} be a $2$-crossed module with trivial Peiffer lifting; then $C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}$ will be a crossed module. Also in this situation we have the trivial action of $C_{0}$ on $C_{2}$. \section{Three Crossed Modules} \qquad Following \cite{arkaus}, we now define $3$-crossed modules of commutative algebras. The approach is similar, but some of the conditions are different. Let $\mathbf{E}$ be a simplicial algebra with Moore complex of length $3$ and $NE_{0}=C_{0},$ $NE_{1}=C_{1},$ $NE_{2}=C_{2},$ $NE_{3}=C_{3}$. Thus we have a $\mathbf{k}$-algebra complex \begin{equation*} C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} Let the actions of $C_{0}$ on $C_{3}$, $C_{2}$, $C_{1}$, of $C_{1}$ on $C_{2}$, $C_{3}$ and of $C_{2}$ on $C_{3}$ be as follows: \begin{equation*} \begin{array}{ll} ^{x_{0}}x_{1} & =s_{0}x_{0}x_{1} \\ ^{x_{0}}x_{2} & =s_{1}s_{0}x_{0}x_{2} \\ ^{x_{0}}x_{3} & =s_{2}s_{1}s_{0}x_{0}x_{3} \\ ^{x_{1}}x_{2} & =s_{1}x_{1}x_{2} \\ ^{x_{1}}x_{3} & =s_{2}s_{1}x_{1}x_{3} \\ x_{2}\cdot x_{3} & =s_{2}x_{2}x_{3}\end{array} \end{equation*} \bigskip Then, since \begin{equation*} \begin{array}{rr} (s_{2}s_{1}s_{0}\partial _{1}x_{1}-s_{1}s_{0}x_{1})y_{3} & =0 \\ (s_{2}s_{1}\partial _{2}x_{2}-s_{1}x_{2})y_{3} & =0 \\ x_{3}(s_{2}\partial _{3}y_{3}-y_{3}) & =0\end{array} \end{equation*} we get \begin{equation*} \begin{array}[t]{ll} ^{\partial _{1}x_{1}}\text{ }y_{3} & =s_{1}s_{0}x_{1}y_{3} \\ ^{\partial _{2}x_{2}}\text{ }y_{3} & =s_{1}x_{2}y_{3} \\ \partial _{3}x_{3}\cdot y_{3} & =x_{3}y_{3}\end{array} \end{equation*} and using the simplicial identities we get \begin{equation*} \partial _{3}(x_{2}\cdot x_{3})=\partial _{3}(s_{2}x_{2}x_{3})=\partial _{3}(s_{2}x_{2})\partial _{3}(x_{3})=x_{2}\partial _{3}(x_{3}) \end{equation*} Thus $\partial _{3}:C_{3}\rightarrow C_{2}$ is a crossed module. \begin{definition} Let $C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}$ be the complex of $\mathbf{k}$-algebras defined above.
We define Peiffer liftings as follows: \begin{equation*} \begin{array}{lllll} \left\{ ~\otimes ~\right\} & : & C_{1}\otimes C_{1} & \rightarrow & C_{2} \\ & & \left\{ x_{1}\otimes y_{1}\right\} & = & s_{1}x_{1}(s_{0}y_{1}-s_{1}y_{1}) \\ \left\{ ~\otimes ~\right\} _{(1,0)(2)} & : & C_{1}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)} & = & (s_{1}s_{0}x_{1}-s_{2}s_{0}x_{1})s_{2}y_{2} \\ \left\{ ~\otimes ~\right\} _{(2,0)(1)} & : & C_{1}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)} & = & (s_{2}s_{0}x_{1}-s_{2}s_{1}x_{1})(s_{1}y_{2}-s_{2}y_{2}) \\ \left\{ ~\otimes ~\right\} _{(0)(2,1)} & : & C_{1}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)} & = & s_{2}s_{1}x_{1}(s_{0}y_{2}-s_{1}y_{2}+s_{2}y_{2}) \\ \left\{ ~\otimes ~\right\} _{(1)(0)} & : & C_{2}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)} & = & (s_{0}x_{2}-s_{1}x_{2})s_{1}y_{2}+s_{2}(x_{2}y_{2}) \\ \left\{ ~\otimes ~\right\} _{(2)(0)} & : & C_{2}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{2}\otimes y_{2}\right\} _{(2)(0)} & = & s_{2}x_{2}s_{0}y_{2} \\ \left\{ ~\otimes ~\right\} _{(2)(1)} & : & C_{2}\otimes C_{2} & \rightarrow & C_{3} \\ & & \left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)} & = & s_{2}x_{2}(s_{1}y_{2}-s_{2}y_{2})\end{array} \end{equation*} where $x_{1},y_{1}\in C_{1}$ and $x_{2},y_{2}\in C_{2}$. \end{definition} Then using Table 1 we get the following identities. \begin{center} \begin{tabular}{|l|l|l|} \hline $\left\{ x_{2}\otimes \partial _{2}y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)}+\left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}$ \\ \hline $\left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(2,0)(1)}$ & $=$ & $\left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(0)(2,1)}+\left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(1,0)(2)}-\text{ }^{\partial _{1}x_{1}}y_{3}$ \\ \hline $\left\{ \partial _{2}x_{2}\otimes y_{2}\right\} _{(1,0)(2)}$ & $=$ & $-\left\{ x_{2}\otimes y_{2}\right\} _{(0)(2)}$ \\ \hline $\left\{ \partial _{3}x_{3}\otimes \partial _{3}y_{3}\right\} _{(1)(0)}$ & $= $ & $x_{3}y_{3}$ \\ \hline $\left\{ \partial _{2}x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2,1)}$ & $=$ & $^{\partial _{2}x_{2}}y_{3}$ \\ \hline $\left\{ \partial _{2}x_{2}\otimes \partial _{3}y_{3}\right\} _{(1,0)(2)}$ & $=$ & $-\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2)}$ \\ \hline $\left\{ \partial _{2}x_{2}\otimes \partial _{3}y_{3}\right\} _{(2,0)(1)}$ & $=$ & $\partial _{2}x_{2}\cdot y_{3}-\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2)}$ \\ \hline $\left\{ x_{2}\otimes y_{2}y\prime _{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ x_{2}\otimes \partial _{2}(y_{2}y\prime _{2})\right\} _{(0)(2,1)}-\left\{ x_{2}\otimes (y_{2}y\prime _{2})\right\} _{(2)(1)}$ \\ \hline $\left\{ x\prime _{2}x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ x\prime _{2}x_{2}\otimes \partial _{2}y_{2}\right\} _{(0)(2,1)}-\left\{ x\prime _{2}x_{2}\otimes y_{2}\right\} _{(2)(1)}$ \\ \hline $\left\{ x_{2}\otimes y_{2}y\prime _{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ x_{2}\otimes \partial _{2}(y_{2}y\prime _{2})\right\} _{(1)(2,0)}+\left\{ x_{2}\otimes y_{2}y\prime _{2}\right\} _{(2)(0)}-\left\{ x_{2}\otimes y_{2}y\prime _{2}\right\} _{(1)(0)}$ \\ \hline $\left\{ x_{2}x\prime _{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ x_{2}x\prime _{2}\otimes \partial _{2}y_{2}\right\} _{(1)(2,0)}+\left\{ x_{2}x\prime _{2}\otimes y_{2}\right\} _{(2)(0)}-\left\{ x_{2}x\prime _{2}\otimes y_{2}\right\} _{(1)(0)}$ \\ \hline
$\left\{ x_{2}\otimes y_{2}y\prime _{2}\right\} _{(2)(0)}$ & $=$ & $-\left\{ x_{2}\otimes \partial _{2}(y_{2}y\prime _{2})\right\} _{(2)(1,0)}$ \\ \hline $\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(2)(1)}$ & $=$ & $x_{2}\cdot y_{3}$ \\ \hline $\left\{ \partial _{3}x_{3}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $^{\partial _{2}y_{2}}x_{3}+x_{3}\cdot y_{2}$ \\ \hline $\left\{ \partial _{3}x_{3}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $y_{2}\cdot x_{3}$ \\ \hline $\left\{ \partial _{3}x_{3}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $0$ \\ \hline $\partial _{3}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $-\partial _{3}\left\{ \partial _{2}x_{2}\otimes y_{2}\right\} _{(1,0)(2)}$ \\ \hline $\partial _{3}\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ \partial _{2}x_{2}\otimes \partial _{2}y_{2}\right\} _{(1)(0)}+x_{2}y_{2}$ \\ \hline $\partial _{3}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $^{\partial _{2}y_{2}}x_{2}-x_{2}y_{2}$ \\ \hline $\partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)}$ & $=$ & $\partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)}+\left\{ x_{1}\otimes \partial _{2}y_{2}\right\} -^{\partial _{1}x_{1}}y_{2}+^{x_{1}}y_{2}$ \\ \hline $\partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ x_{1}\otimes \partial _{2}y_{2}\right\} +^{x_{1}}y_{2}$ \\ \hline \end{tabular} Table 2 \begin{tabular}{|l|l|l|l|l|} \hline $^{x_{0}}\left\{ x_{1}\otimes y_{1}\right\} $ & $=$ & $\left\{ ^{x_{0}}x_{1}\otimes y_{1}\right\} $ & $=$ & $\left\{ x_{1}\otimes ^{x_{0}}y_{1}\right\} $ \\ \hline $^{x_{0}}\left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)}$ & $=$ & $\left\{ ^{x_{0}}x_{1}\otimes y_{2}\right\} _{(1,0)(2)}$ & $=$ & $\left\{ x_{1}\otimes ^{x_{0}}y_{2}\right\} _{(1,0)(2)}$ \\ \hline $^{x_{0}}\left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ ^{x_{0}}x_{1}\otimes y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ x_{1}\otimes ^{x_{0}}y_{2}\right\} _{(0)(2,1)}$ \\ \hline $^{x_{0}}\left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)}$ & $=$ & $\left\{ ^{x_{0}}x_{1}\otimes y_{2}\right\} _{(2,0)(1)}$ & $=$ & $\left\{ x_{1}\otimes ^{x_{0}}y_{2}\right\} _{(2,0)(1)}$ \\ \hline $^{x_{0}}\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ ^{x_{0}}x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ x_{2}\otimes ^{x_{0}}y_{2}\right\} _{(1)(0)}$ \\ \hline $^{x_{0}}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $\left\{ ^{x_{0}}x_{2}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $\left\{ x_{2}\otimes ^{x_{0}}y_{2}\right\} _{(2)(0)}$ \\ \hline $^{x_{0}}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ ^{x_{0}}x_{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ x_{2}\otimes ^{x_{0}}y_{2}\right\} _{(2)(1)}$ \\ \hline \end{tabular} Table 3 \begin{tabular}{|l|l|l|l|l|} \hline $^{z_{1}}\left\{ x_{1}\otimes y_{1}\right\} $ & $=$ & $\left\{ ^{z_{1}}x_{1}\otimes y_{1}\right\} $ & $=$ & $\left\{ x_{1}\otimes ^{z_{1}}y_{1}\right\} $ \\ $^{z_{1}}\left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)}$ & $=$ & $\left\{ ^{z_{1}}x_{1}\otimes y_{2}\right\} _{(1,0)(2)}$ & $=$ & $\left\{ x_{1}\otimes ^{z_{1}}y_{2}\right\} _{(1,0)(2)}$ \\ $^{z_{1}}\left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ ^{z_{1}}x_{1}\otimes y_{2}\right\} _{(0)(2,1)}$ & $=$ & $\left\{ x_{1}\otimes ^{z_{1}}y_{2}\right\} _{(0)(2,1)}$ \\
$^{z_{1}}\left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)}$ & $=$ & $\left\{ ^{z_{1}}x_{1}\otimes y_{2}\right\} _{(2,0)(1)}$ & $=$ & $\left\{ x_{1}\otimes ^{z_{1}}y_{2}\right\} _{(2,0)(1)}$ \\ $^{z_{1}}\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ ^{z_{1}}x_{2}\otimes y_{2}\right\} _{(1)(0)}$ & $=$ & $\left\{ x_{2}\otimes ^{z_{1}}y_{2}\right\} _{(1)(0)}$ \\ $^{z_{1}}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $\left\{ ^{z_{1}}x_{2}\otimes y_{2}\right\} _{(2)(0)}$ & $=$ & $\left\{ x_{2}\otimes ^{z_{1}}y_{2}\right\} _{(2)(0)}$ \\ $^{z_{1}}\left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ ^{z_{1}}x_{2}\otimes y_{2}\right\} _{(2)(1)}$ & $=$ & $\left\{ x_{2}\otimes ^{z_{1}}y_{2}\right\} _{(2)(1)}$ \\ \hline \end{tabular} Table 4 \end{center} \noindent where $x_{0}\in C_{0}$, $x_{1},y_{1},z_{1}\in C_{1},$ $x_{2},y_{2}\in C_{2},$ $x_{3},y_{3}\in C_{3}$. From these results, all liftings given in Definition 1 are $C_{0}$- and $C_{1}$-bilinear maps. \begin{definition} A \textit{3-crossed module} consists of a complex \begin{equation*} C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0} \end{equation*} together with $\partial _{3}$, $\partial _{2}$, $\partial _{1}$, which are $C_{0},C_{1}$-algebra morphisms, an action of $C_{0}$ on $C_{3},C_{2},C_{1}$, an action of $C_{1}$ on $C_{2},C_{3}$ and an action of $C_{2}$ on $C_{3},$ together with $C_{0},C_{1}$-bilinear maps \begin{equation*} \begin{tabular}{lll} $\{$ $\otimes $ $\}_{(1)(0)}:C_{2}\otimes C_{2}\longrightarrow C_{3},$ & $\{$ $\otimes $ $\}_{(0)(2)}:C_{2}\otimes C_{2}\longrightarrow C_{3},$ & $\{$ $\otimes $ $\}_{(2)(1)}:C_{2}\otimes C_{2}\longrightarrow C_{3},$ \\ & & \\ $\{$ $\otimes $ $\}_{(1,0)(2)}:C_{1}\otimes C_{2}\longrightarrow C_{3},$ & $\{$ $\otimes $ $\}_{(2,0)(1)}:C_{1}\otimes C_{2}\longrightarrow C_{3},$ & \\ & & \\ $\{$ $\otimes $ $\}_{(0)(2,1)}:C_{2}\otimes C_{1}\longrightarrow C_{3},$ & $\{$ $\otimes $ $\}:C_{1}\otimes C_{1}\longrightarrow C_{2}$ & \end{tabular} \end{equation*} called \textit{Peiffer liftings}, which satisfy the following axioms for all $x_{1},y_{1}\in C_{1},$ $x_{2},y_{2}\in C_{2},$ and $x_{3},y_{3}\in C_{3}$: \begin{equation*} \begin{array}{lrrl} \mathbf{3CM1)} & & C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1} & \text{is a }2\text{-crossed module with the Peiffer lifting }\{\text{ }\otimes \text{ }\}_{(2)(1)} \\ \mathbf{3CM2)} & & \partial _{2}\left\{ x_{1}\otimes y_{1}\right\} & =\text{ }^{\text{ }\partial _{1}y_{1}}x_{1}-x_{1}y_{1} \\ \mathbf{3CM3)} & & \left\{ x_{2}\otimes \partial _{2}y_{2}\right\} _{(0)(2,1)} & =\left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}-\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)} \\ \mathbf{3CM4)} & & \partial _{3}\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)} & =\left\{ \partial _{2}x_{2}\otimes \partial _{2}y_{2}\right\} +x_{2}y_{2} \\ \mathbf{3CM5)} & & \left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(2,0)(1)} & =\left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(0)(2,1)}+\left\{ x_{1}\otimes \partial _{3}y_{3}\right\} _{(1,0)(2)}-\text{ }^{\partial _{1}x_{1}}y_{3} \\ \mathbf{3CM6)} & & \left\{ \partial _{2}x_{2}\otimes y_{2}\right\} _{(2,0)(1)} & =-\left\{ x_{2}\otimes y_{2}\right\} _{(0)(2)}+\left( x_{2}y_{2}\right) \cdot \left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)}+\left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)} \\ \mathbf{3CM7)} & & \left\{ \partial _{3}x_{3}\otimes \partial _{3}y_{3}\right\} _{(1)(0)} & =y_{3}x_{3} \\ \mathbf{3CM8}) & &
\left\{ \partial _{3}y_{3}\otimes \partial _{2}x_{2}\right\} _{(0)(2,1)} & =\text{ }^{-\partial _{2}x_{2}}y_{3} \\
\mathbf{3CM9)} &  & \left\{ \partial _{2}x_{2}\otimes \partial _{3}y_{3}\right\} _{(1,0)(2)} & =-\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2)} \\
\mathbf{3CM10)} &  & \left\{ \partial _{2}x_{2}\otimes \partial _{3}y_{3}\right\} _{(2,0)(1)} & =\text{ }^{\partial _{2}x_{2}}y_{3}-\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2)} \\
\mathbf{3CM11)} &  & \left\{ \partial _{3}y_{3}\otimes x_{1}\right\} _{(0)(2,1)} & =\text{ }^{-x_{1}}y_{3} \\
\mathbf{3CM12)} &  & \left\{ y_{2}\otimes \partial _{3}x_{3}\right\} _{(1)(0)} & =-y_{2}\cdot x_{3} \\
\mathbf{3CM13)} &  & \left\{ \partial _{3}x_{3}\otimes y_{2}\right\} _{(1)(0)} & =y_{2}\cdot x_{3} \\
\mathbf{3CM14)} &  & \left\{ \partial _{3}x_{3}\otimes y_{2}\right\} _{(2)(0)} & =0 \\
\mathbf{3CM15)} &  & \partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)} & =\partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)}+\left\{ x_{1}\otimes \partial _{2}y_{2}\right\} -\text{ }^{\partial _{1}x_{1}}y_{2}+\text{ }^{x_{1}}y_{2} \\
\mathbf{3CM16)} &  & \partial _{3}\left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)} & =\left\{ x_{1}\otimes \partial _{2}y_{2}\right\} -\text{ }^{x_{1}}y_{2}
\end{array}
\end{equation*}
\end{definition}

We denote such a 3-crossed module by $(C_{3},C_{2},C_{1},C_{0},\partial _{3},\partial _{2},\partial _{1})$. A \textit{morphism of $3$-crossed modules} of algebras may be pictured by the diagram
\begin{center}
$\xymatrix@R=40pt@C=40pt{\ C_{3} \ar[d]_-{f_{3}} \ar@{->}@<0pt>[r]^-{\partial_{3}} & C_{2} \ar[d]_-{f_{2}} \ar@{->}@<0pt>[r]^-{\partial_{2}} & C_{1} \ar[d]_-{f_{1}} \ar@{->}@<0pt>[r]^-{\partial_{1}} & C_{0} \ar[d]_-{f_{0}} \\ C^{\prime }_{3} \ar@{->}@<0pt>[r]_-{\partial^{\prime }_{3}} & C^{\prime }_{2} \ar@{->}@<0pt>[r]_-{\partial^{\prime }_{2}} & C^{\prime }_{1} \ar@{->}@<0pt>[r]_-{\partial^{\prime }_{1}} & C^{\prime }_{0} } $
\end{center}
\noindent in which the squares commute, i.e. $f_{0}\partial _{1}=\partial _{1}^{\prime }f_{1}$, $f_{1}\partial _{2}=\partial _{2}^{\prime }f_{2}$ and $f_{2}\partial _{3}=\partial _{3}^{\prime }f_{3}$, where
\begin{equation*}
f_{1}(^{c_{0}}c_{1})=\text{ }^{(f_{0}(c_{0}))}f_{1}(c_{1}),\quad f_{2}(^{c_{0}}c_{2})=\text{ }^{(f_{0}(c_{0}))}f_{2}(c_{2}),\quad f_{3}(^{c_{0}}c_{3})=\text{ }^{(f_{0}(c_{0}))}f_{3}(c_{3})
\end{equation*}
for all $c_{0}\in C_{0}$, $c_{1}\in C_{1}$, $c_{2}\in C_{2}$, $c_{3}\in C_{3}$; for $\left\{ \text{ }\otimes \text{ }\right\} _{(0)(2)},\left\{ \text{ }\otimes \text{ }\right\} _{(2)(1)}$, $\left\{ \text{ }\otimes \text{ }\right\} _{(1)(0)}$
\begin{equation*}
\left\{ \text{ }\otimes \text{ }\right\} \circ (f_{2}\otimes f_{2})=f_{3}\circ \left\{ \text{ }\otimes \text{ }\right\} ;
\end{equation*}
for $\left\{ \text{ }\otimes \text{ }\right\} _{(1,0)(2)}$ and $\left\{ \text{ }\otimes \text{ }\right\} _{(2,0)(1)}$
\begin{equation*}
\left\{ \text{ }\otimes \text{ }\right\} \circ (f_{1}\otimes f_{2})=f_{3}\circ \left\{ \text{ }\otimes \text{ }\right\} ,
\end{equation*}
and similarly $\left\{ \text{ }\otimes \text{ }\right\} \circ (f_{2}\otimes f_{1})=f_{3}\circ \left\{ \text{ }\otimes \text{ }\right\} $ for $\left\{ \text{ }\otimes \text{ }\right\} _{(0)(2,1)}$;
and for $\left\{ \text{ }\otimes \text{ }\right\} $
\begin{equation*}
\left\{ \text{ }\otimes \text{ }\right\} \circ (f_{1}\otimes f_{1})=f_{2}\circ \left\{ \text{ }\otimes \text{ }\right\} .
\end{equation*}
Such morphisms compose in the obvious way, so we obtain the category of $3$-crossed modules of commutative algebras, which we denote by $\mathbf{X}_{3}\mathbf{ModAlg}$.

\section{Applications}

\subsection{Simplicial Algebras}

As an application we consider the relation between simplicial algebras and $3$-crossed modules, which was given in \cite{arkaus} for the group case.
The proofs in this section are therefore omitted, since they can be checked easily by using the proofs given in \cite{arkaus}.

\begin{proposition}
Let $\mathbf{E}$ be a simplicial algebra with Moore complex $\mathbf{NE}$. Then the complex
\begin{equation*}
NE_{3}/\partial _{4}(NE_{4}\cap D_{4})\overset{\overline{\partial }_{3}}{\longrightarrow }NE_{2}\overset{\partial _{2}}{\longrightarrow }NE_{1}\overset{\partial _{1}}{\longrightarrow }NE_{0}
\end{equation*}
is a $3$-crossed module with the Peiffer liftings defined below:
\begin{equation*}
\begin{array}{lllll}
\left\{ ~\otimes ~\right\} & : & NE_{1}\otimes NE_{1} & \longrightarrow & NE_{2} \\
&  & \left\{ x_{1}\otimes y_{1}\right\} & \longmapsto & \overline{s_{1}x_{1}(s_{1}y_{1}-s_{0}y_{1})} \\
\left\{ ~\otimes ~\right\} _{(1,0)(2)} & : & NE_{1}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{1}\otimes y_{2}\right\} _{(1,0)(2)} & \longmapsto & \overline{(s_{2}s_{0}x_{1}-s_{1}s_{0}x_{1})s_{2}y_{2}} \\
\left\{ ~\otimes ~\right\} _{(2,0)(1)} & : & NE_{1}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{1}\otimes y_{2}\right\} _{(2,0)(1)} & \longmapsto & \overline{(s_{2}s_{1}x_{1}-s_{2}s_{0}x_{1})(s_{1}y_{2}-s_{2}y_{2})} \\
\left\{ ~\otimes ~\right\} _{(0)(2,1)} & : & NE_{1}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{1}\otimes y_{2}\right\} _{(0)(2,1)} & \longmapsto & \overline{s_{2}s_{1}x_{1}(s_{1}y_{2}-s_{0}y_{2}-s_{2}y_{2})} \\
\left\{ ~\otimes ~\right\} _{(1)(0)} & : & NE_{2}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{2}\otimes y_{2}\right\} _{(1)(0)} & \longmapsto & \overline{(s_{1}x_{2}-s_{2}x_{2})s_{1}y_{2}+s_{2}(x_{2}y_{2})} \\
\left\{ ~\otimes ~\right\} _{(2)(0)} & : & NE_{2}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{2}\otimes y_{2}\right\} _{(2)(0)} & \longmapsto & \overline{-s_{2}x_{2}s_{0}y_{2}} \\
\left\{ ~\otimes ~\right\} _{(2)(1)} & : & NE_{2}\otimes NE_{2} & \longrightarrow & NE_{3}/\partial _{4}(NE_{4}\cap D_{4}) \\
&  & \left\{ x_{2}\otimes y_{2}\right\} _{(2)(1)} & \longmapsto & \overline{s_{2}x_{2}(s_{2}y_{2}-s_{1}y_{2})}
\end{array}
\end{equation*}
(The overlines denote cosets in $NE_{3}/\partial _{4}(NE_{4}\cap D_{4})$, represented by the indicated elements of $NE_{3}$.)
\end{proposition}

\begin{proof}
We check two of the conditions; the others can be verified similarly.

\textbf{3CM9)} Since
\begin{equation*}
\partial _{4}\left( C_{\left( 3,2\right) \left( 1,0\right) }(x_{2}\otimes y_{3})\right) =\left( s_{2}s_{0}d_{2}x_{2}-s_{1}s_{0}d_{2}x_{2}-s_{0}x_{2}\right) s_{2}d_{3}y_{3},
\end{equation*}
we find
\begin{equation*}
\begin{array}[b]{lll}
\left\{ \overline{\partial }_{2}x_{2}\otimes \overline{\partial }_{3}y_{3}\right\} _{(1,0)(2)}^{3} & = & \left( s_{2}s_{0}d_{2}x_{2}-s_{1}s_{0}d_{2}x_{2}\right) s_{2}d_{3}y_{3} \\
& \equiv & s_{0}x_{2}s_{2}d_{3}y_{3}\quad \mathrm{mod}\ \partial _{4}(NE_{4}\cap D_{4}) \\
& = & -\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(0)(2)}^{3}.
\end{array}
\end{equation*}

\textbf{3CM13)} Since
\begin{equation*}
\partial _{4}\left( C_{\left( 3,2\right) \left( 1\right) }(x_{2}\otimes y_{3})\right) =s_{2}x_{2}(s_{2}y_{3}-s_{1}y_{3}-y_{3}),
\end{equation*}
we find
\begin{equation}
\begin{array}[b]{lll}
\left\{ x_{2}\otimes \partial _{3}y_{3}\right\} _{(2)(1)}^{3} & = & s_{2}x_{2}(s_{2}d_{3}y_{3}-s_{1}d_{3}y_{3}) \\
& \equiv & s_{2}x_{2}y_{3}\quad \mathrm{mod}\ \partial _{4}(NE_{4}\cap D_{4}) \\
& = & x_{2}\cdot y_{3}.
\end{array}
\tag{3.13}
\end{equation}
\end{proof}

\begin{theorem}
The category of $3$-crossed modules is equivalent to the category of simplicial algebras with Moore complex of length $3$.
\end{theorem}

\subsection{Projective 3-crossed Resolution}

As a further application we define projective $3$-crossed resolutions of commutative algebras. This construction was given by P.J.L. Doncel, A.R. Grandjean and M.J. Vale in \cite{doncel} for $2$-crossed modules.
\begin{definition}
A projective $\mathit{3}$\textit{-crossed resolution} of a $\mathbf{k}$-algebra $E$ is an exact sequence
\begin{equation*}
...\longrightarrow C_{k+1}\overset{\partial _{k+1}}{\longrightarrow }C_{k}\longrightarrow ...\longrightarrow C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}\overset{\partial _{0}}{\longrightarrow }E\longrightarrow 0
\end{equation*}
of $\mathbf{k}$-modules such that\newline
$1)$ $C_{0}$ is projective in the category of $\mathbf{k}$-algebras;\newline
$2)$ $C_{i}$ is a $C_{i-1}$-algebra and projective in the category of $C_{i-1}$-algebras for $i=1,2$;\newline
$3)$ for any epimorphism $F=(f,id,id,id):(C_{3}^{\prime },C_{2},C_{1},C_{0},\partial _{3}^{\prime },\partial _{2},\partial _{1})\longrightarrow (C_{3}^{\prime \prime },C_{2},C_{1},C_{0},\partial _{3}^{\prime \prime },\partial _{2},\partial _{1})$ and any morphism $H=(h,id,id,id):(C_{3},C_{2},C_{1},C_{0},\partial _{3},\partial _{2},\partial _{1})\longrightarrow (C_{3}^{\prime \prime },C_{2},C_{1},C_{0},\partial _{3}^{\prime \prime },\partial _{2},\partial _{1})$ there exists a morphism $Q=(q,id,id,id):(C_{3},C_{2},C_{1},C_{0},\partial _{3},\partial _{2},\partial _{1})\longrightarrow (C_{3}^{\prime },C_{2},C_{1},C_{0},\partial _{3}^{\prime },\partial _{2},\partial _{1})$ such that $FQ=H$;\newline
$4)$ for $k\geq 4$, $C_{k}$ is a projective $\mathbf{k}$-module;\newline
$5)$ $\partial _{4}$ is a homomorphism of $C_{0}$-modules, where the action of $C_{0}$ on $C_{4}$ is defined via $\partial _{0}$;\newline
$6)$ for $k\geq 5$, $\partial _{k}$ is a homomorphism of $\mathbf{k}$-modules.
\end{definition}

\begin{proposition}
Any commutative $\mathbf{k}$-algebra with a unit has a projective $3$-crossed resolution.
\end{proposition}

\begin{proof}
Let $E$ be a $\mathbf{k}$-algebra and let $C_{0}=\mathbf{k}[X_{i}]$ be a polynomial ring admitting an epimorphism $\partial _{0}:C_{0}\rightarrow E$.\newline
Define $C_{1}=C_{0}[\ker \partial _{0}]^{+}$, the positively graded part of the polynomial $C_{0}$-algebra on $\ker \partial _{0}$, and let $\partial _{1}:C_{1}\rightarrow C_{0}$ be induced by the inclusion $i:\ker \partial _{0}\rightarrow C_{0}$.\newline
Next let $K_{2}=C_{0}\big((C_{1}\times C_{1})\cup \ker \partial _{1}\big)$ be the free $C_{0}$-module on the disjoint union $(C_{1}\times C_{1})\cup \ker \partial _{1}$, and define
\begin{equation*}
\partial _{2}^{\prime }:K_{2}\rightarrow \ker \partial _{1}
\end{equation*}
by
\begin{equation*}
\begin{array}{ll}
\partial _{2}^{\prime }(x_{1},y_{1})=x_{1}y_{1}-\partial _{1}(y_{1})x_{1}, & (x_{1},y_{1})\in C_{1}\times C_{1}, \\
\partial _{2}^{\prime }(x)=x, & x\in \ker \partial _{1}.
\end{array}
\end{equation*}
Let $R^{\prime }$ be the $C_{0}$-submodule generated by the relations
\begin{equation*}
\begin{array}{l}
(\alpha x_{1}+\beta y_{1},z_{1})-\alpha (x_{1},z_{1})-\beta (y_{1},z_{1}), \\
(x_{1},\alpha y_{1}+\beta z_{1})-\alpha (x_{1},y_{1})-\beta (x_{1},z_{1}),
\end{array}
\end{equation*}
where $\alpha ,\beta \in C_{0}$ and $x_{1},y_{1},z_{1}\in C_{1}$. Set $C_{2}=K_{2}/R^{\prime }$ and define $\partial _{2}:C_{2}\rightarrow \ker \partial _{1}$ by $\partial _{2}\pi =\partial _{2}^{\prime }$, where $\pi :K_{2}\rightarrow C_{2}=K_{2}/R^{\prime }$ is the projection.\newline
We now define $C_{3}$.
Let $K_{3}$ be the free $C_{0}$-module on the disjoint union
\begin{equation*}
A_{(1)(0)}\cup A_{(0)(2)}\cup A_{(2)(1)}\cup A_{(1,0)(2)}\cup A_{(2,0)(1)}\cup A_{(0)(2,1)}\cup \ker \partial _{2},
\end{equation*}
where $A_{(1)(0)}=A_{(0)(2)}=A_{(2)(1)}=C_{2}\times C_{2}$, $A_{(1,0)(2)}=A_{(2,0)(1)}=C_{1}\times C_{2}$ and $A_{(0)(2,1)}=C_{2}\times C_{1}$.\newline
Define $\partial _{3}^{\prime }:K_{3}\rightarrow \ker \partial _{2}$ by
\begin{equation*}
\begin{array}{ll}
\partial _{3}^{\prime }(x,y)=xy-\partial _{2}yx, & (x,y)\in A_{(2)(1)}, \\
\partial _{3}^{\prime }(x,y)=(\partial _{2}x,\partial _{2}y)+xy, & (x,y)\in A_{(1)(0)}, \\
\partial _{3}^{\prime }(x,y)=(\partial _{2}x,y)+yx, & (x,y)\in A_{(0)(2,1)}, \\
\partial _{3}^{\prime }(x,y)=(x,\partial _{2}y)-\partial _{1}xy+xy, & (x,y)\in A_{(2,0)(1)}, \\
\partial _{3}^{\prime }(x,y)=0, & (x,y)\in A_{(0)(2)}, \\
\partial _{3}^{\prime }(x,y)=0, & (x,y)\in A_{(1,0)(2)}, \\
\partial _{3}^{\prime }(x)=x, & x\in \ker \partial _{2}.
\end{array}
\end{equation*}
Let $R$ be the $C_{0}$-submodule of $K_{3}$ generated by the bilinearity relations analogous to those defining $R^{\prime }$, with $\alpha ,\beta \in C_{0}$. Set $C_{3}=K_{3}/R$ and define $\partial _{3}:C_{3}\rightarrow \ker \partial _{2}$ by $\partial _{3}\pi =\partial _{3}^{\prime }$, where $\pi :K_{3}\rightarrow C_{3}=K_{3}/R$ is the projection. With these constructions
\begin{equation*}
C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}
\end{equation*}
is a projective $3$-crossed module. If we define $C_{4}$ via a projective resolution of the $E$-module $\ker \partial _{3}$, then we obtain the projective $3$-crossed resolution
\begin{equation*}
\cdots C_{k}{\longrightarrow }C_{k-1}\cdots C_{4}\overset{q}{\longrightarrow }C_{3}\overset{\partial _{3}}{\longrightarrow }C_{2}\overset{\partial _{2}}{\longrightarrow }C_{1}\overset{\partial _{1}}{\longrightarrow }C_{0}\overset{\partial _{0}}{\longrightarrow }E{\longrightarrow }0,
\end{equation*}
where $q$ is the induced map.
\end{proof}

\subsection{Lie Algebra Case}

The Lie algebra version of crossed modules was introduced by Kassel and Loday in \cite{kas}, and that of $2$-crossed modules by G. J. Ellis in \cite{ellis1}. Higher dimensional Peiffer elements in simplicial Lie algebras were introduced in \cite{aa}. As a consequence of the commutative algebra version of $3$-crossed modules, in this section we define $3$-crossed modules of Lie algebras, using the results given in \cite{aa} in the same way as in the commutative algebra case: products $xy$ are replaced by brackets $[x,y]$ and the algebra actions by Lie actions, with some slight differences in the proofs.
\begin{definition}
A $\mathit{3}$\textit{-crossed module} over Lie algebras consists of a complex of Lie algebras
\begin{equation*}
L_{3}\overset{\partial _{3}}{\longrightarrow }L_{2}\overset{\partial _{2}}{\longrightarrow }L_{1}\overset{\partial _{1}}{\longrightarrow }L_{0}
\end{equation*}
together with an action of $L_{0}$ on $L_{3},L_{2},L_{1}$, an action of $L_{1}$ on $L_{3},L_{2}$ and an action of $L_{2}$ on $L_{3}$, so that $\partial _{3}$, $\partial _{2}$, $\partial _{1}$ are morphisms of $L_{0},L_{1}$-Lie algebras, and the $L_{1},L_{0}$-equivariant liftings
\begin{equation*}
\begin{tabular}{lll}
$\{$ $,$ $\}_{(1)(0)}:L_{2}\times L_{2}\longrightarrow L_{3},$ & $\{$ $,$ $\}_{(0)(2)}:L_{2}\times L_{2}\longrightarrow L_{3},$ & $\{$ $,$ $\}_{(2)(1)}:L_{2}\times L_{2}\longrightarrow L_{3},$ \\
&  &  \\
$\{$ $,$ $\}_{(1,0)(2)}:L_{1}\times L_{2}\longrightarrow L_{3},$ & $\{$ $,$ $\}_{(2,0)(1)}:L_{1}\times L_{2}\longrightarrow L_{3},$ &  \\
&  &  \\
$\{$ $,$ $\}_{(0)(2,1)}:L_{2}\times L_{1}\longrightarrow L_{3},$ & $\{$ $,$ $\}:L_{1}\times L_{1}\longrightarrow L_{2}$ &
\end{tabular}
\end{equation*}
called \textit{$3$-dimensional Peiffer liftings}. This data must satisfy the following axioms:
\begin{equation*}
\begin{array}{lrrl}
\mathbf{3CM1)} &  & L_{3}\overset{\partial _{3}}{\longrightarrow }L_{2}\overset{\partial _{2}}{\longrightarrow }L_{1} & \text{is a }2\text{-crossed module with the Peiffer lifting }\{\text{ }\otimes \text{ }\}_{(2)(1)} \\
\mathbf{3CM2)} &  & \partial _{2}\left\{ l_{1}\otimes m_{1}\right\} & =\text{ }^{\partial _{1}m_{1}}l_{1}-\left[ l_{1},m_{1}\right] \\
\mathbf{3CM3)} &  & \left\{ l_{2}\otimes \partial _{2}m_{2}\right\} _{(0)(2,1)} & =\left\{ l_{2}\otimes m_{2}\right\} _{(2)(1)}-\left\{ l_{2}\otimes m_{2}\right\} _{(1)(0)} \\
\mathbf{3CM4)} &  & \partial _{3}\left\{ l_{2}\otimes m_{2}\right\} _{(1)(0)} & =\left\{ \partial _{2}l_{2}\otimes \partial _{2}m_{2}\right\} +\left[ l_{2},m_{2}\right] \\
\mathbf{3CM5)} &  & \left\{ l_{1}\otimes \partial _{3}l_{3}\right\} _{(2,0)(1)} & =\left\{ l_{1}\otimes \partial _{3}l_{3}\right\} _{(0)(2,1)}+\left\{ l_{1}\otimes \partial _{3}l_{3}\right\} _{(1,0)(2)}-\text{ }^{\partial _{1}l_{1}}l_{3} \\
\mathbf{3CM6)} &  & \left\{ \partial _{2}l_{2}\otimes m_{2}\right\} _{(2,0)(1)} & =-\left\{ l_{2}\otimes m_{2}\right\} _{(0)(2)}+\left[ l_{2},m_{2}\right] \cdot \left\{ l_{2}\otimes m_{2}\right\} _{(2)(1)}+\left\{ l_{2}\otimes m_{2}\right\} _{(1)(0)} \\
\mathbf{3CM7)} &  & \left\{ \partial _{3}l_{3}\otimes \partial _{3}m_{3}\right\} _{(1)(0)} & =\left[ m_{3},l_{3}\right] \\
\mathbf{3CM8)} &  & \left\{ \partial _{3}l_{3}\otimes \partial _{2}l_{2}\right\} _{(0)(2,1)} & =\text{ }^{-\partial _{2}l_{2}}l_{3} \\
\mathbf{3CM9)} &  & \left\{ \partial _{2}l_{2}\otimes \partial _{3}l_{3}\right\} _{(1,0)(2)} & =-\left\{ l_{2}\otimes \partial _{3}l_{3}\right\} _{(0)(2)} \\
\mathbf{3CM10)} &  & \left\{ \partial _{2}l_{2}\otimes \partial _{3}l_{3}\right\} _{(2,0)(1)} & =\text{ }^{\partial _{2}l_{2}}l_{3}-\left\{ l_{2}\otimes \partial _{3}l_{3}\right\} _{(0)(2)} \\
\mathbf{3CM11)} &  & \left\{ \partial _{3}l_{3}\otimes l_{1}\right\} _{(0)(2,1)} & =\text{ }^{-l_{1}}l_{3} \\
\mathbf{3CM12)} &  & \left\{ l_{2}\otimes \partial _{3}l_{3}\right\} _{(1)(0)} & =-l_{2}\cdot l_{3} \\
\mathbf{3CM13)} &  & \left\{ \partial _{3}l_{3}\otimes l_{2}\right\} _{(1)(0)} & =l_{2}\cdot l_{3} \\
\mathbf{3CM14)} &  & \left\{ \partial _{3}l_{3}\otimes l_{2}\right\} _{(2)(0)} & =0 \\
\mathbf{3CM15)} &  & \partial _{3}\left\{ l_{1}\otimes l_{2}\right\} _{(2,0)(1)} &
=\partial _{3}\left\{ l_{1}\otimes l_{2}\right\} _{(1,0)(2)}+\left\{ l_{1}\otimes \partial _{2}l_{2}\right\} -\text{ }^{\partial _{1}l_{1}}l_{2}+\text{ }^{l_{1}}l_{2} \\
\mathbf{3CM16)} &  & \partial _{3}\left\{ l_{1}\otimes l_{2}\right\} _{(0)(2,1)} & =\left\{ l_{1}\otimes \partial _{2}l_{2}\right\} -\text{ }^{l_{1}}l_{2}
\end{array}
\end{equation*}
\end{definition}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} \subsection{KPZ/Stochastic Heat Equation/Continuum Random Polymer}\label{KPZ_SHE_Poly} Despite its popularity as perhaps {\it the} default model of stochastic growth of a one dimensional interface, we are still far from a satisfactory theory of the Kardar-Parisi-Zhang (KPZ) equation, \begin{equation}\label{KPZ0} \partial_T h = -\frac12(\partial_Xh)^2 + \frac12 \partial_X^2 h + \dot{\mathscr{W}} \end{equation} where $\dot{\mathscr{W}}(T,X)$\footnote{We attempt to use capital letters for all variables (such as $X$, $T$) on the macroscopic level of the stochastic PDEs and polymers. Lower case letters (such as $x$, $t$) will denote WASEP variables, the microscopic discretization of these SPDEs.} is Gaussian space-time white noise, \begin{equation*} E[ \dot{\mathscr{W}}(T,X) \dot{\mathscr{W}}(S,Y)] = \delta(T-S)\delta(Y-X). \end{equation*} The reason is that, even for nice initial data, the solution at time $T>0$ will look locally like a Brownian motion in $X$. Hence the nonlinear term is ill-defined. Naturally, one expects that an appropriate Wick ordering of the non-linearity can lead to well defined solutions. However, numerous attempts have led to non-physical answers \cite{TChan}. By a physical answer one means that the solution should be close to discrete growth models. In particular, for a large class of initial data, the solution $h(T,X)$ should look like \begin{equation}\label{five} h(T,X) \sim C(T) + T^{1/3} \zeta(X) \end{equation} where $C(T)$ is deterministic and where the statistics of $\zeta$ fits into various universality classes depending on the regime of initial data one is looking at. More precisely, one expects that the variance scales as \begin{equation}\label{five2} {\rm Var}(h(T,X)) \sim CT^{2/3}. \end{equation} The scaling exponent is the result of extensive Monte Carlo simulations and a few theoretical arguments \cite{fns,vBKS,KPZ,K}. The correct interpretation of (\ref{KPZ0}) appears to be that of \cite{BG}, where $h(T,X)$ is simply {\it defined} by the Hopf-Cole transform: \begin{equation}\label{hc} h(T,X) = -\log \mathcal{Z} (T,X) \end{equation} where $\mathcal{Z} (T,X)$ is the well-defined \cite{W} solution of the stochastic heat equation, \begin{equation}\label{she} \partial_T \mathcal{Z} = \frac12 \partial_X^2 \mathcal{Z} - \mathcal{Z} \dot{\mathscr{W}}. \end{equation} Recently \cite{bqs} proved upper and lower bounds of the type (\ref{five2}) for this {\it Hopf-Cole solution} $h$ {\it of KPZ} defined through (\ref{hc}), in the {\it equilibrium regime}, corresponding to starting (\ref{KPZ0}) with a two sided Brownian motion. Strictly speaking, this is not an equilibrium solution for KPZ, but for the stochastic Burgers equation \begin{equation*} \partial_T u = -\frac12 \partial_X u^2 + \frac12 \partial_X^2 u + \partial_X\dot{\mathscr{W}}, \end{equation*} formally satisfied by its derivative $ u(T,X)=\partial_X h(T,X)$. See also \cite{Timo} for a similar bound for the free energy of a particular discrete polymer model. In this article, we will be interested in a very different regime, far from equilibrium. It is most convenient to state in terms of the stochastic heat equation (\ref{she}) for which we will have as initial condition a delta function, \begin{equation}\label{sheinit} \mathcal{Z} (T=0,X)= \delta_{X=0}. \end{equation} This initial condition is natural for the interpretation in terms of random polymers, where it corresponds to the point-to-point free energy. 
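It is worth recording, at a purely formal level, why (\ref{hc}) is the natural definition. If $\mathcal{Z}$ solves (\ref{she}) and we set $h=-\log \mathcal{Z}$, then $\partial_X^2 \mathcal{Z} = \left((\partial_X h)^2-\partial_X^2 h\right)\mathcal{Z}$, so that, ignoring the It\^o correction term (whose divergence is precisely the renormalization problem described above),
\begin{equation*}
\partial_T h = -\frac{\partial_T \mathcal{Z}}{\mathcal{Z}} = -\frac12(\partial_X h)^2 + \frac12 \partial_X^2 h + \dot{\mathscr{W}},
\end{equation*}
which is (\ref{KPZ0}).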
The free energy of the continuum directed random polymer in $1+1$ dimensions is
\begin{equation}\label{FK}
\mathcal{F}(T,X) = \log E_{T,X}\left[ :\!\exp\!: \left\{-\int_0^T \dot{\mathscr{W}}(t,b(t)) dt\right\}\right]
\end{equation}
where $E_{T,X}$ denotes expectation over the Brownian bridge $b(t)$, $0\le t\le T$ with $b(0)=0$ and $b(T)=X$. The expectation of the Wick ordered exponential $:\!\exp\!:$ is defined using the $n$ step probability densities $p_{ t_1, \ldots, t_n}(x_1,\ldots,x_n)$ of the bridge in terms of a series of multiple It\^o integrals;
\begin{eqnarray}\label{nine}
&& E_{T,X}\left[ :\!\exp\!: \left\{-\int_0^T \dot{\mathscr{W}}(t,b(t)) dt \right\}\right] \\
&& = \sum_{n=0}^\infty \int_{\Delta_n(T)} \int_{\mathbb R^n}(-1)^n p_{ t_1, \ldots, t_n}(x_1,\ldots,x_n) \mathscr{W} (dt_1 dx_1) \cdots \mathscr{W} (dt_n dx_n),\nonumber
\end{eqnarray}
where $\Delta_n(T)=\{(t_1,\ldots,t_n):0\le t_1\le \cdots\le t_n \le T\}$. Note that the series is convergent in $\mathscr{L}^2(\mathscr{W})$ as one can check that
\begin{equation*}
\int_{\Delta_n(T)} \int_{\mathbb R^n} p^2_{ t_1, \ldots, t_n}(x_1,\ldots,x_n) dt_1 dx_1 \cdots dt_n dx_n \le C^n (n!)^{-1/2}
\end{equation*}
for a constant $C=C(T,X)$, and hence the square of the norm, $ \sum_{n=0}^\infty \int_{\Delta_n(T)} \int_{\mathbb R^n} p^2_{ t_1, \ldots, t_n}(x_1,\ldots,x_n) dt_1 dx_1 \cdots dt_n dx_n $, is finite. (Indeed, integrating out the spatial variables leaves $\frac{p(T/2,X)}{p(T,X)^{2}}(4\pi)^{-(n+1)/2}\prod_{i=0}^{n}(t_{i+1}-t_{i})^{-1/2}$ with $t_{0}=0$, $t_{n+1}=T$, and the Dirichlet integral formula gives $\int_{\Delta_n(T)}\prod_{i=0}^{n}(t_{i+1}-t_i)^{-1/2}\,dt_1\cdots dt_n=T^{(n-1)/2}\,\Gamma(\tfrac12)^{n+1}/\Gamma(\tfrac{n+1}{2})$.) Let
\begin{equation}\label{heat_kernel}
p(T,X) = \frac{1}{\sqrt{2\pi T}} e^{ -X^2/2T }
\end{equation}
denote the heat kernel. Then we have
\begin{equation}\label{zed}
\mathcal{Z}(T,X) = p(T,X) \exp\{ \mathcal{F}(T,X) \}
\end{equation}
as can be seen by writing the integral equation for $\mathcal{Z}(T,X)$;
\begin{equation}\label{intshe}
\mathcal{Z}(T,X) = p(T,X) + \int_0^T\int_{-\infty}^\infty p(T-S,X-Y)\mathcal{Z}(S,Y) \mathscr{W} (dY,dS)
\end{equation}
and then iterating. The factor $p(T,X)$ in (\ref{zed}) represents the difference between conditioning on the bridge going to $X$, as in (\ref{nine}), and having a delta function initial condition, as in (\ref{sheinit}). The initial condition corresponds to
\begin{equation*}
\mathcal{F}(0,X) = 0, \qquad X\in \mathbb{R}.
\end{equation*}
In terms of KPZ (\ref{KPZ0}), there is no precise mathematical statement of the initial conditions; what one sees as $T\searrow 0$ is a narrowing parabola. In the physics literature this is referred to as the {\it narrow wedge initial conditions}.

We can now state our main result which is an exact formula for the probability distribution for the free energy of the continuum directed random polymer in 1+1 dimensions, or, equivalently, the one-point distribution for the stochastic heat equation with delta initial condition, or the KPZ equation with narrow wedge initial conditions.

For a function $\sigma(t)$, define the operator $K_{\sigma}$ through its kernel,
\begin{equation}\label{sigma_K_def}
K_{\sigma}(x,y) = \int_{-\infty}^{\infty} \sigma(t)\ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt,
\end{equation}
where $\mathrm{Ai}(x) = \frac{1}{\pi} \int_0^\infty \cos\left(\tfrac13t^3 + xt\right)\, dt$ is the Airy function.

\begin{theorem}\label{main_result_thm}
The {\em crossover distributions},
\begin{equation}\label{defofF}
F_T(s) \stackrel{\rm def}{=} P(\mathcal{F}(T,X)+\tfrac{T}{4!} \le s)
\end{equation}
are given by the following equivalent formulas,
\begin{enumerate}[i.]
\item The {\em crossover Airy kernel formula},
\begin{equation}\label{sigma_Airy_kernel_formula}
F_{T}(s) =\int_{\mathcal{\tilde C}}\frac{d\tilde\mu}{\tilde\mu} e^{-\tilde\mu} \det(I-K_{\sigma_{T,\tilde\mu}})_{L^2(\kappa_T^{-1}a,\infty)},
\end{equation}
where $\mathcal{\tilde C}$ is defined in Definition \ref{thm_definitions}, and $K_{\sigma_{T,\tilde\mu}}$ is as above with
\begin{equation}\label{airy_like_kernel}
\sigma_{T,\tilde\mu}(t) = \frac{\tilde\mu}{\tilde\mu-e^{-\kappa_T t }},
\end{equation}
and
\begin{equation*}
a=a(s)=s-\log\sqrt{2\pi T}, \quad \textrm{and}\quad \kappa_T=2^{-1/3}T^{1/3}.
\end{equation*}
Alternatively, if $\sqrt{z}$ is defined by taking the branch cut of the logarithm on the negative real axis, then
\begin{eqnarray}\label{sym_F_eqn}
F_{T}(s) &=&\int_{\mathcal{\tilde C}}\frac{d\tilde\mu}{\tilde\mu} e^{-\tilde\mu} \det(I-\hat{K}_{\sigma_{T,\tilde\mu}})_{L^2(-\infty,\infty)}\\
\hat{K}_{\sigma_{T,\tilde\mu}}(x,y) &=& \sqrt{\sigma_{T,\tilde\mu}(x-s)}\,K_{\ensuremath{\mathrm{Ai}}}(x,y)\,\sqrt{\sigma_{T,\tilde\mu}(y-s)}
\end{eqnarray}
where $K_{\ensuremath{\mathrm{Ai}}}(x,y)$ is the Airy kernel, i.e. $K_{\ensuremath{\mathrm{Ai}}}=K_{\sigma}$ with $\sigma(t)=\mathbf{1}_{[0,\infty)}(t)$.
\mbox{}
\item The {\em Gumbel convolution formula},
\begin{equation*}
F_T(s) =1- \int_{-\infty}^{\infty} G(r) f(a-r) dr,
\end{equation*}
where $G(r)$ is given by $G(r) =e^{-e^{-r}}$ and where
\begin{equation*}
f(r) =\kappa_T^{-1} \det(I-K_{\sigma_T})\mathrm{tr}\left((I-K_{\sigma_T})^{-1}\rm{P}_{\ensuremath{\mathrm{Ai}}}\right),
\end{equation*}
where the operators $K_{\sigma_T}$ and $\rm{P}_{\ensuremath{\mathrm{Ai}}}$ act on $L^2(\kappa_T^{-1}r,\infty)$ and are given by their kernels with
\begin{equation*}
\rm{P}_{\ensuremath{\mathrm{Ai}}}(x,y) = \ensuremath{\mathrm{Ai}}(x)\ensuremath{\mathrm{Ai}}(y),\qquad \sigma_T(t) = \frac{1}{1-e^{-\kappa_T t}}.
\end{equation*}
For $\sigma_T$ above, the integral in (\ref{sigma_K_def}) should be interpreted as a principal value integral. The operator $K_{\sigma_T}$ contains a Hilbert transform of the product of Airy functions which can be partially computed with the result that
\begin{equation*}
K_{\sigma_T}(x,y) = \int_{-\infty}^{\infty} \tilde\sigma_T(t)\ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt + \kappa_T^{-1} \pi G_{\frac{x-y}{2}}(\frac{x+y}{2})
\end{equation*}
where
\begin{eqnarray}\label{tilde_sigma_form}
\tilde\sigma_T(t) &=& \frac{1}{1-e^{-\kappa_T t}} -\frac{1}{\kappa_T t}\\
\nonumber G_a(x) &=& \frac{1}{2\pi^{3/2}}\int_0^{\infty} \frac{\sin(x\xi+\tfrac{\xi^3}{12}-\tfrac{a^2}{\xi}+\tfrac{\pi}{4})}{\sqrt{\xi}}d\xi.
\end{eqnarray}
\mbox{}
\item The {\em cosecant kernel formula},
\begin{equation*}
F_T(s)= \int_{\mathcal{\tilde C}} e^{-\tilde\mu}\det(I-K^{\csc}_{a})_{L^2(\tilde\Gamma_{\eta})} \frac{d\tilde\mu}{\tilde\mu},
\end{equation*}
where the contour $\mathcal{\tilde C}$, the contour $\tilde\Gamma_{\eta}$ and the operator $K_a^{\csc}$ are defined in Definition \ref{thm_definitions}.
\end{enumerate}
\end{theorem}

The proof of the theorem relies on the explicit limit calculation for the weakly asymmetric simple exclusion process (WASEP) contained in Theorem \ref{epsilon_to_zero_theorem}, as well as the relationship between WASEP and the stochastic heat equation stated in Theorem \ref{BG_thm}. Combining these two theorems proves the cosecant kernel formula.
The alternative versions of the formula are proved in Section \ref{kernelmanipulations}. We also have the following representation for the Fredholm determinant involved in the above theorem. One should compare this result to the formula for the GUE Tracy-Widom distribution given in terms of the Painlev\'{e} II equation (see \cite{TW0,TWAiry} or the discussion of Section \ref{integro_differential}).
\begin{proposition}\label{prop2}
Let $\sigma_{T,\tilde\mu}$ be as in (\ref{airy_like_kernel}). Then
\begin{eqnarray*}
\nonumber\frac{d^2}{dr^2} \log{\det}(I - K_{\sigma_{T,\tilde\mu}})_{L^2(r,\infty)} &=& -\int_{-\infty}^{\infty} \sigma^{\prime}_{T,\tilde\mu}(t) q_t^2(r) dt\\
\det(I - K_{\sigma_{T,\tilde\mu}})_{L^2(r,\infty)} &=& \exp\left(-\int_r^{\infty}(x-r)\int_{-\infty}^{\infty} \sigma^{\prime}_{T,\tilde\mu}(t) q_t^2(x)dtdx\right)
\end{eqnarray*}
where
\begin{equation*}
\frac{d^2}{d r^2}q_t(r) = \left(r + t + 2\int_{-\infty}^{\infty} \sigma^{\prime}_{T,\tilde\mu}(u) q_u^2(r)du \right) q_t(r)
\end{equation*}
with $q_t(r) \sim \ensuremath{\mathrm{Ai}}(t+r)$ as $r\to \infty$, and where $\sigma^{\prime}_{T,\tilde\mu}$ is the derivative of the function in (\ref{airy_like_kernel}).
\end{proposition}
This proposition is proved in Section \ref{integro_differential} and follows from a more general theory developed in Section \ref{int_int_op_sec} about a class of generalized integrable integral operators.

It is not hard to show from the formulas in Theorem \ref{main_result_thm} that $\lim_{s\to\infty}F_T(s)=1$, but we do not at the present time know how to show directly from the determinantal formulas that $\lim_{s\to-\infty}F_T(s)=0$, or even that $F_T$ is non-decreasing in $s$. However, for each $T$, $\mathcal{F}(T,X)$ is an almost surely finite random variable, and hence we know from the definition (\ref{defofF}) that $F_T$ is indeed a non-degenerate distribution function.

The formulas in Theorem \ref{main_result_thm} suggest that in the limit as $T$ goes to infinity, under $T^{1/3}$ scaling, we recover the celebrated $F_{\mathrm{GUE}}$ distribution (sometimes written as $F_2$) which is the GUE Tracy-Widom distribution, i.e., the limiting distribution of the scaled and centered largest eigenvalue in the Gaussian unitary ensemble.
\begin{corollary}\label{TW}
$\lim_{T\nearrow\infty} F_T\left(T^{1/3} s\right)=F_{\mathrm{GUE}}(2^{1/3} s)$.
\end{corollary}
This is most easily seen from the cosecant kernel formula for $F_T(s)$. Formally, as $T$ goes to infinity, the kernel $K_{a}^{\csc}$ behaves as $K_{T^{1/3}s}^{\csc}$ and, after a change of variables to remove the $T$ from the exponential argument of the kernel, this approaches the Airy kernel on a complex contour, as given in \cite{TW3}, equation (33). The full proof is given in Section \ref{twasymp}.

An inspection of the formula for $F_T$ given in Theorem \ref{main_result_thm} immediately reveals that there is no dependence on $X$. In fact, one can check directly from (\ref{nine}) that
\begin{proposition}
For each $T\ge 0$, $\mathcal{F}(T, X)$ is stationary in $X$.
\end{proposition}
This is simply because the Brownian bridge transition probabilities are affine transformations of each other. Performing the change of variables, the white noise remains invariant in distribution.
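\begin{remark}
As a purely illustrative complement to Corollary \ref{TW}, the limiting distribution $F_{\mathrm{GUE}}(s)=\det(I-K_{\ensuremath{\mathrm{Ai}}})_{L^2(s,\infty)}$ is straightforward to evaluate numerically from the representation $K_{\ensuremath{\mathrm{Ai}}}=K_{\sigma}$ with $\sigma=\mathbf{1}_{[0,\infty)}$ used in Theorem \ref{main_result_thm}. The sketch below is ours (and is no substitute for the analysis of Section \ref{twasymp}); it discretizes the Fredholm determinant by Gauss--Legendre quadrature, truncating both the domain and the $t$ integral in (\ref{sigma_K_def}) at length $L$:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def f_gue(s, n=60, L=12.0):
    """det(I - K_Ai) on L^2(s, infinity), via a Nystrom-type
    discretization of K(x,y) = int_0^inf Ai(x+t) Ai(y+t) dt."""
    u, wu = leggauss(n)
    x = s + 0.5 * L * (u + 1.0); wx = 0.5 * L * wu   # nodes in x
    t = 0.5 * L * (u + 1.0);     wt = 0.5 * L * wu   # nodes in t
    A = airy(np.add.outer(x, t))[0]     # A[i,k] = Ai(x_i + t_k)
    K = (A * wt) @ A.T                  # quadrature in t
    sq = np.sqrt(wx)
    return np.linalg.det(np.eye(n) - np.outer(sq, sq) * K)

# F_GUE is a distribution function, increasing from 0 to 1:
for s in (-4.0, -2.0, 0.0, 2.0):
    print(s, f_gue(s))
\end{verbatim}
\end{remark}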
The following conjecture is thus natural:
\begin{conj}
As $T\nearrow \infty$,
\begin{equation*}
2^{1/3}T^{-1/3}\left(\mathcal{F}(T,T^{2/3}X) + \frac{T}{4!}\right)\to{\mathscr{A}}_2(X)
\end{equation*}
where ${\mathscr{A}}_2(X)$ is the ${\rm Airy}_2$ process (see \cite{PS}).
\end{conj}
Unfortunately, the very recent extensions of the Tracy-Widom formula for ASEP (\ref{TW_prob_equation}) to multipoint distributions \cite{TWmulti} appear not to be conducive to the asymptotic analysis necessary to obtain this conjecture following the program of this article. Corollary \ref{TW}, together with the stationarity of $\mathcal{F}(T,X)$ in $X$, immediately implies the convergence of one point distributions,
\begin{corollary}
$\lim_{T\nearrow \infty} P\left(\frac{\mathcal{F}(T,T^{2/3}X) + \frac{T}{4!}}{T^{1/3}} \leq s\right) = F_{\mathrm{GUE}}(2^{1/3}s)$.
\end{corollary}
It is elementary to add a temperature $\beta^{-1}$ into the model. Let
\begin{equation*}
\mathcal{F}_\beta(T,X) = \log E_{T,X}\left[ :\!\exp\!: \left\{-\beta\int_0^T \dot{\mathscr{W}}(t,b(t)) dt\right\}\right].
\end{equation*}
The corresponding function $\mathcal{Z}_\beta(T,X) = p(T,X) \exp\{ \mathcal{F}_\beta(T,X) \}$ is the solution of $\partial_T \mathcal{Z}_\beta =\frac12\partial^2_X \mathcal{Z}_\beta -\beta \dot{\mathscr{W}} \mathcal{Z}_\beta $ with $\mathcal{Z}_\beta(0,X) = \delta_0(X)$ and hence
\begin{equation*}
\mathcal{Z}_\beta(T,X)\stackrel{\rm distr.}{=} \beta^{2} \mathcal{Z}(\beta^{4} T, \beta^{2} X),
\end{equation*}
giving the relationship
\begin{equation*}
\beta\sim T^{1/4}
\end{equation*}
between the time $T$ and the temperature $\beta^{-1}$. Now, just as in (\ref{zed}), we define $\mathcal{F}_{\beta}(T,X)$ in terms of $\mathcal{Z}_{\beta}(T,X)$ and $p(T,X)$. From this we see that
\begin{equation*}
\mathcal{F}_{\beta}(T,X)\stackrel{\rm distr.}{=} \mathcal{F}(\beta^4 T,\beta^2 X).
\end{equation*}
Hence the following result about the low temperature limit is, just like Corollary \ref{TW}, a consequence of Theorem \ref{main_result_thm}:
\begin{corollary}
For each fixed $X\in \mathbb R$ and $T>0$,
\begin{equation*}
\lim_{\beta\rightarrow \infty} P\left( \frac{\mathcal{F}_{\beta}(T,\beta^{2/3}T^{2/3}X) + \frac{\beta^4 T}{4!}}{\beta^{4/3} T^{1/3}}\leq s\right) = F_{\mathrm{GUE}}(2^{1/3}s).
\end{equation*}
\end{corollary}
Now we turn to the behavior as $T$ or $\beta\searrow0$.
\begin{proposition}\label{gauss_limit_thm}
As $T\beta^{4} \searrow0$,
\begin{equation*}
2^{1/2}\pi^{-1/4}\beta^{-1}T^{-1/4}\mathcal{F}_\beta(T,X)
\end{equation*}
converges in distribution to a standard Gaussian.
\end{proposition}
This proposition is proved in Section \ref{Gaussian_asymptotics}. As an application, taking $\beta=1$, the above proposition shows that
\begin{equation*}
\lim_{T\searrow 0} F_T (2^{-1/2}\pi^{1/4} T^{1/4} s) = \int_{-\infty}^s \frac{ e^{ - x^2/2} }{\sqrt{2\pi}} dx.
\end{equation*}
Proposition \ref{gauss_limit_thm} and Corollary \ref{TW} show that, under appropriate scalings, the family of distributions $F_T$ {\it crosses over} from the Gaussian distribution for small $T$ to the GUE Tracy-Widom distribution for large $T$.

The main physical prediction (\ref{five2}) is based on the exact computation \cite{K},
\begin{equation}\label{phys101}
\lim_{T\to \infty} T^{-1} \log E[ \mathcal{Z}^n(T,0) ] = \frac{1}{4!} n(n^2-1),
\end{equation}
which can be performed rigorously \cite{BC} by expanding the Feynman-Kac formula (\ref{FK}) for $\mathcal{Z}(T,0)$ into an expectation over $n$ independent copies (replicas) of the Brownian bridge.
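As a quick consistency check of (\ref{phys101}) at $n=1$ (not needed in the sequel): taking expectations in (\ref{intshe}) and using that the It\^o integral has mean zero gives
\begin{equation*}
E[ \mathcal{Z}(T,X) ] = p(T,X), \qquad\text{so that}\qquad \lim_{T\to\infty} T^{-1}\log E[ \mathcal{Z}(T,0) ] = 0,
\end{equation*}
in agreement with $\frac{1}{4!}n(n^2-1)$ vanishing at $n=1$.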
In the physics literature, the computation is done by noting that the correlation functions
\begin{equation}
E[ \mathcal{Z}(T,X_1)\cdots\mathcal{Z}(T,X_n) ]
\end{equation}
can be computed using the Bethe ansatz \cite{LL} for a system of particles on the line interacting via an attractive delta function potential. Equation (\ref{phys101}) suggests, but does not imply, the scaling (\ref{five2}). The problem is that the moments in (\ref{phys101}) grow far too quickly to uniquely determine the underlying distribution: they grow like $e^{cn^3}$, far beyond the range in which Carleman-type criteria guarantee that the moment problem is determinate. It is interesting to note that the Tracy-Widom formula for ASEP (\ref{TW_prob_equation}), which is our main tool, is also based on the same idea that hard core interacting systems in one dimension can be rigorously solved via the Bethe ansatz. H. Spohn has pointed out, however, that the analogy is at best partial because the interaction is attractive in the case of the $\delta$-Bose gas.

The probability distribution for the free energy of the continuum directed random polymer, as well as for the solution to the stochastic heat equation and the KPZ equation, has been a subject of interest for many years, with a large physics literature (see, for example, \cite{Calabrese-et-al}, \cite{Dotsenko}, \cite{KolKor} and references therein). The reason why we can now compute the distribution is the exact formula of Tracy and Widom for the asymmetric simple exclusion process (ASEP) with step initial condition. Once we observe that the weakly asymmetric simple exclusion process (WASEP) with these initial conditions converges to the solution of the stochastic heat equation with delta initial conditions, the calculation of the probability distribution boils down to a careful asymptotic analysis of the Tracy-Widom ASEP formula. This connection is made in Theorem \ref{BG_thm} and the WASEP asymptotic analysis is recorded by Theorem \ref{epsilon_to_zero_theorem}.

\begin{remark}
During the preparation of this article, we learned that T. Sasamoto and H. Spohn \cite{SaSp1,SaSp2,SaSp3} independently obtained a formula equivalent to (\ref{intintop}) for the distribution function $F_T$. They also use a steepest descent analysis on the Tracy-Widom ASEP formula. Note that their argument is at the level of formal asymptotics of operator kernels and they have not attempted a mathematical proof. Very recently, two groups of physicists (\cite{Calabrese-et-al}, \cite{Dotsenko}) have successfully employed the Bethe Ansatz for the attractive $\delta$-Bose gas and the replica trick to rederive the formulas for $F_T$. These methods are non-rigorous, employing divergent series. However, they suggest a deeper relationship between the work of Tracy and Widom for ASEP and the traditional methods of the Bethe Ansatz for the $\delta$-Bose gas.
\end{remark}

\subsubsection{Outline}
There are three main results in this paper. The first pertains to the KPZ / stochastic heat equation / continuum directed polymer and is contained in the theorems and corollaries above in Section \ref{KPZ_SHE_Poly}. The proof of the equivalence of the formulas of Theorem \ref{main_result_thm} is given in Section \ref{kernelmanipulations}. The Painlev\'{e} II like formula of Proposition \ref{prop2} is proved in Section \ref{integro_differential} along with the formulation of a general theory about a class of generalized integrable integral operators. The other results of the above section are proved in Section \ref{corollary_sec}. The second result is about the WASEP.
In Section \ref{asepscalingth} we introduce the fluctuation scaling theory of the ASEP and motivate the second main result which is contained in Section \ref{WASEP_crossover}. The Tracy-Widom ASEP formula is reviewed in Section \ref{TW_ASEP_formula_sec} and then a formal explanation of the result is given in Section \ref{formal_calc_subsec}. A full proof of this result is contained in Section \ref{epsilon_section} and its various subsections. The third result is about the connection between the first (stochastic heat equation, etc.) and second (WASEP) and is stated in Section \ref{WASEP_SHE} and proved in Section \ref{BG}.

\subsection{ASEP scaling theory}\label{asepscalingth}
The simple exclusion process with parameters $p,q\geq0$ (such that $p+q=1$) is a continuous time Markov process on the discrete lattice $\ensuremath{\mathbb{Z}}$ with state space $\{0,1\}^{\ensuremath{\mathbb{Z}}}$. The 1's are thought of as particles and the 0's as holes. The dynamics for this process are given as follows: Each particle has an independent exponential alarm clock which rings at rate one. When the alarm goes off the particle flips a coin and with probability $p$ attempts to jump one site to the right and with probability $q$ attempts to jump one site to the left. If there is a particle at the destination, the jump is suppressed and the alarm is reset (see \cite{Liggett} for a rigorous construction of the process). If $q=1,p=0$ this process is the totally asymmetric simple exclusion process (TASEP); if $q>p$ it is the asymmetric simple exclusion process (ASEP); if $q=p$ it is the symmetric simple exclusion process (SSEP). Finally, if we introduce a parameter into the model and let $q-p$ go to zero with that parameter, the resulting class of processes is known as the weakly asymmetric simple exclusion process (WASEP). It is the WASEP that is of central interest to us. ASEP is often thought of as a discretization of KPZ (for the height function) or stochastic Burgers (for the particle density). For WASEP the connection can be made precise (see Sections \ref{WASEP_SHE} and \ref{BG}).

There are many ways to initialize these exclusion processes (such as stationary, flat, two-sided Bernoulli, etc.) analogous to the various initial conditions for KPZ/Stochastic Burgers. We consider a very simple initial condition known as {\it step initial condition} where every positive integer lattice site (i.e. $\{1,2,3,\ldots\}$) is initially occupied by a particle and every other site is empty. Associated to the ASEP are occupation variables $\eta(t,x)$ which equal 1 if there is a particle at position $x$ at time $t$ and 0 otherwise. From these we define $\hat{\eta}=2\eta-1$, which take values $\pm1$, and define the height function for WASEP with asymmetry $\gamma=q-p$ by
\begin{equation}\label{defofheight}
h_{\gamma}(t,x) = \begin{cases} 2N(t) + \sum_{0<y\leq x}\hat{\eta}(t,y), & x>0,\\ 2N(t), & x=0,\\ 2N(t)- \sum_{x<y\leq 0}\hat{\eta}(t,y), & x<0, \end{cases}
\end{equation}
where $N(t)$ is equal to the net number of particles which crossed from the site 1 to the site 0 in time $t$. Since we are dealing with step initial conditions, $h_{\gamma}$ is initially given by $h_{\gamma}(0,x)=|x|$ (connecting the points by lines of slope $\pm 1$).
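\begin{remark}
For the reader who wants to see these dynamics concretely, the following is a minimal simulation sketch (purely illustrative and used nowhere in the arguments; the truncation of the step initial data to finitely many particles, the window size and all names are our own choices). It generates the occupation variables and the height function (\ref{defofheight}).
\begin{verbatim}
import numpy as np

def asep_height(q=0.75, T=50.0, n_particles=200, seed=0):
    """ASEP with (truncated) step initial data: particles start on
    sites 1..n_particles.  Each particle carries a rate-one clock;
    when it rings, the particle attempts a jump one site to the
    left with probability q, to the right with probability p=1-q,
    suppressed if the target site is occupied.  The truncation and
    finite window are immaterial near the origin when T is much
    smaller than n_particles, by finite speed of propagation.
    Returns h(T,x) of (defofheight) for x = 0,...,W."""
    rng = np.random.default_rng(seed)
    W = 3 * n_particles                    # sites -W..W
    occ = np.zeros(2 * W + 1, dtype=bool)
    part = np.arange(1, n_particles + 1)   # particle positions
    occ[part + W] = True
    t, N = 0.0, 0                          # N = net flux 1 -> 0
    while True:
        t += rng.exponential(1.0 / n_particles)  # next clock ring
        if t > T:
            break
        i = rng.integers(n_particles)      # whose clock rang
        x = part[i]
        y = x - 1 if rng.random() < q else x + 1
        if abs(y) < W and not occ[y + W]:
            occ[x + W], occ[y + W] = False, True
            part[i] = y
            if (x, y) == (1, 0): N += 1
            if (x, y) == (0, 1): N -= 1
    eta_hat = 2 * occ.astype(int) - 1      # +1 particles, -1 holes
    return 2 * N + np.concatenate(([0], np.cumsum(eta_hat[W + 1:])))

print(asep_height()[:8])
\end{verbatim}
\end{remark}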
It is easy to show that because of step initial conditions, the following three events are equivalent:
\begin{equation*}
\left\{h_{\gamma}(t,x)\geq 2m-x\right\} =\{\tilde J_{\gamma}(t,x)\geq m\} = \{\mathbf{x}_{\gamma}(t,m)\leq x\}
\end{equation*}
where $\mathbf{x}_{\gamma}(t,m)$ is the location at time $t$ of the particle which started at $m>0$ and where $\tilde J_{\gamma}(t,x)$ is a random variable which records the number of particles which started to the right of the origin at time 0 and ended to the left of or at $x$ at time $t$. For this particular initial condition $\tilde J_{\gamma}(t,x) = J_{\gamma}(t,x) + x\vee 0$ where $J_{\gamma}(t,x)$ is the usual time integrated current which measures the signed number of particles which cross the bond $(x,x+1)$ up to time $t$ (positive sign for jumps from $x+1$ to $x$ and negative for jumps from $x$ to $x+1$). The subscript $\gamma$ throughout emphasizes the strength of the asymmetry.

In the case of the ASEP ($q>p$, $\gamma\in (0,1)$) and the TASEP ($q=1,p=0$, $\gamma=1$) there is a well developed fluctuation theory for the height function. We briefly review this, since it motivates the time/space/fluctuation scale we will use throughout the paper, and also since we are ultimately interested in understanding the transition in behaviour from WASEP to ASEP. The following result was proved for $\gamma=1$ (TASEP) by Johansson \cite{J} and for $0<\gamma<1$ (ASEP) by Tracy and Widom \cite{TW3}:
\begin{equation*}
\lim_{t\rightarrow \infty} P\left(\frac{h_{\gamma}(\frac{t}{\gamma},0)-\frac{1}{2}t}{t^{1/3}} \geq -s\right) =F_{\mathrm{GUE}}(2^{1/3}s).
\end{equation*}
In the case of TASEP, the one point distribution limit has been extended to a process level limit. Consider a time $t$, a space scale of order $t^{2/3}$ and a fluctuation scale of order $t^{1/3}$. Then, as $t$ goes to infinity, the spatial fluctuation process, scaled by $t^{1/3}$, converges to the $\textrm{Airy}_2$ process (see \cite{Borodin:Ferrari,Corwin:Ferrari:Peche} for this result for TASEP, \cite{JDTASEP} for DTASEP and \cite{PS} for the closely related PNG model). Precisely, for $m\geq 1$ and real numbers $x_1,\ldots,x_m$ and $s_1,\ldots,s_m$:
\begin{equation*}
\lim_{t\rightarrow \infty} P\left(h_{\gamma}(t,x_k t^{2/3})\geq \frac{1}{2}t +(\frac{x_k^2}{2}-s_k)t^{1/3}, ~k\in [m]\right) = P\left(\mathcal{A}_2(x_k)\leq 2^{1/3}s_k, ~k\in [m]\right)
\end{equation*}
where $[m]=\{1,\ldots, m\}$ and where $\mathcal{A}_2$ is the $\textrm{Airy}_2$ process (see, for example, \cite{Borodin:Ferrari,Corwin:Ferrari:Peche}) and has one-point marginals $F_{\mathrm{GUE}}$. In \cite{JDTASEP}, it is proved that this process has a continuous version and that (for DTASEP) the above convergence can be strengthened due to tightness. Notice that in order to get this process limit, we needed to deal with the parabolic curvature of the height function above the origin by including $(\frac{x_k^2}{2}-s_k)$ rather than just $-s_k$. In fact, if one were to replace $t$ by $t T$ for some fixed $T$, then the parabola would become $\frac{x_k^2}{2T}$.

An important takeaway from the result above is the relationship between the exponents for time, space and fluctuations --- their $3:2:1$ ratio. It is only with this ratio that we encounter a non-trivial limiting spatial process. For the purposes of this paper, it is more convenient for us to introduce a parameter $\epsilon$ which goes to zero, instead of the parameter $t$ which goes to infinity.
Keeping in mind the $3:2:1$ ratio of time, space and fluctuations we define scaling variables
\begin{equation*}
t=\epsilon^{-3/2}T, \qquad x=\epsilon^{-1}X,
\end{equation*}
where $T>0$ and $X\in \ensuremath{\mathbb{R}}$. With these variables the height function fluctuations around the origin are written as
\begin{equation*}
\epsilon^{1/2}\left(h_{\gamma}(\tfrac{t}{\gamma},x)-\tfrac{1}{2}t\right).
\end{equation*}
Motivated by the relationship we will establish in Section \ref{WASEP_SHE}, we are interested in studying the Hopf-Cole transformation of the height function fluctuations given by
\begin{equation*}
\exp\left\{-\epsilon^{1/2}\left(h_{\gamma}(\tfrac{t}{\gamma},x)-\tfrac{1}{2}t\right)\right\}.
\end{equation*}
When $T=0$ we would like this transformed object to become, in some sense, a delta function at $X=0$. Plugging in $T=0$ we see that the height function is given by $|\epsilon^{-1}X|$ and so the exponential becomes $\exp\{-\epsilon^{-1/2}|X|\}$. If we introduce a factor of $\epsilon^{-1/2}/2$ in front of this, then the total integral in $X$ is 1 and this does approach a delta function as $\epsilon$ goes to zero. Thus we consider
\begin{equation}\label{above_eqn}
\frac{\epsilon^{-1/2}}{2} \exp\left\{-\epsilon^{1/2}\left(h_{\gamma}(\tfrac{t}{\gamma},x)-\tfrac{1}{2}t\right)\right\}.
\end{equation}
As we shall explain in Section \ref{WASEP_crossover}, the correct scaling of $\gamma$ to see the crossover behaviour between ASEP and SSEP is $\gamma=b\epsilon^{1/2}$. We can set $b=1$, as other values of $b$ can be recovered by scaling. This corresponds with setting
\begin{equation*}
\gamma=\epsilon^{1/2},\qquad p=\tfrac{1}{2}-\tfrac{1}{2}\epsilon^{1/2}, \qquad q=\tfrac{1}{2}+\tfrac{1}{2}\epsilon^{1/2}.
\end{equation*}
Under this scaling, the WASEP is related to the KPZ equation and stochastic heat equation. To facilitate this connection, define
\begin{eqnarray*}
\label{nu} \nu_\epsilon =& p+q-2\sqrt{qp} &= \tfrac{1}{2} \epsilon + \tfrac{1}{8}\epsilon^2+ O(\epsilon^3),\\
\label{lambda} \lambda_\epsilon =& \tfrac{1}{2} \log (q/p) &= \epsilon^{1/2} + \tfrac{1}{3}\epsilon^{3/2}+O(\epsilon^{5/2}),
\end{eqnarray*}
and the discrete Hopf-Cole transformed height function
\begin{equation}\label{rescaledheight}
Z_\epsilon(T,X) ={\scriptstyle\frac12} \epsilon^{-1/2}\exp\left \{ - \lambda_\epsilon h_{\gamma}(\tfrac{t}{\gamma},x) + \nu_\epsilon \epsilon^{-1/2}t\right\}.
\end{equation}
Observe that this differs from the expression in (\ref{above_eqn}) only to second order in $\epsilon$. This second order difference, however, introduces a shift of $T/4!$, as we will see presently. Note that the same factor appears in \cite{BG}. With the connection to the polymer free energy in mind, write
\begin{equation*}
Z_{\epsilon}(T,X) = p(T,X)\exp\{F_{\epsilon}(T,X)\},
\end{equation*}
where $p(T,X)$ is the heat kernel defined in (\ref{heat_kernel}). This implies that the field should be defined by
\begin{equation*}
F_{\epsilon}(T,X) =\log(\epsilon^{-1/2}/2) -\lambda_{\epsilon}h_{\gamma}(\tfrac{t}{\gamma},x)+\nu_\epsilon \epsilon^{-1/2}t+\frac{X^2}{2T}+\log\sqrt{2\pi T}.
\end{equation*}
We are interested in understanding the behavior of $P(F_{\epsilon}(T,X)\le s)$ as $\epsilon$ goes to zero.
This probability can be translated into a probability for the height function, the current and finally the position of a tagged particle:
\begin{eqnarray}\label{string_of_eqns}
&& \qquad P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\leq s)= \\
\nonumber && P\left(\log(\epsilon^{-1/2}/2) -\lambda_{\epsilon}h_{\gamma}(\tfrac{t}{\gamma},x)+\nu_\epsilon \epsilon^{-1/2}t+\frac{X^2}{2T}+\log\sqrt{2\pi T} + \tfrac{T}{4!}\leq s\right)=\\
\nonumber&& P\left(h_{\gamma}(\tfrac{t}{\gamma},x) \geq \lambda_{\epsilon}^{-1}[-s+\log\sqrt{2\pi T}+\log(\epsilon^{-1/2}/2)+\frac{X^2}{2T}+\nu_\epsilon \epsilon^{-1/2}t+\tfrac{T}{4!}]\right)=\\
\nonumber&& P\left(h_{\gamma}(\tfrac{t}{\gamma},x) \geq \epsilon^{-1/2}\left[-a+\log(\epsilon^{-1/2}/2)+\frac{X^2}{2T}\right]+\frac{t}{2}\right)=\\
\nonumber&& P(\tilde J_{\gamma}(\tfrac{t}{\gamma},x) \geq m) = P(\mathbf{x}_{\gamma}(\tfrac{t}{\gamma},m)\leq x),
\end{eqnarray}
where $m$ is defined as
\begin{eqnarray}\label{m_eqn}
m&=&\frac{1}{2}\left[\epsilon^{-1/2}\left(-a+\log(\epsilon^{-1/2}/2)+\frac{X^2}{2T}\right)+\frac{1}{2}t+x\right]\\
\nonumber a &=& s-\log\sqrt{2\pi T}.
\end{eqnarray}

\subsection{WASEP crossover regime}\label{WASEP_crossover}
We now turn to the question of how $\gamma$ should vary with $\epsilon$. The simplest heuristic argument is to use the KPZ equation
\begin{equation*}
\partial_T h_\gamma = -\frac{\gamma}2(\partial_Xh_\gamma)^2 + \frac12 \partial_X^2 h_\gamma + \dot{\mathscr{W}}
\end{equation*}
as a proxy for its discretization ASEP, and rescale
\begin{equation*}
h_{\epsilon,\gamma}(t,x) = \epsilon^{1/2}\, h_\gamma(\epsilon^{-3/2}t/\gamma,\,\epsilon^{-1}x)
\end{equation*}
to obtain
\begin{equation*}
\partial_t h_{\epsilon,\gamma} = -\frac{1}2(\partial_xh_{\epsilon,\gamma})^2 + \frac{\epsilon^{1/2}\gamma^{-1}}2 \partial_x^2 h_{\epsilon,\gamma} + \epsilon^{1/4}\gamma^{-1/2} \dot{\mathscr{W}},
\end{equation*}
from which we conclude that we want $\gamma =b\epsilon^{1/2}$ for some $b\in (0,\infty)$. We expect Gaussian behavior as $b\searrow 0$ and $F_{GUE}$ behavior as $b\nearrow \infty$. On the other hand, a simple rescaling reduces everything to the case $b=1$. Thus it suffices to consider
\begin{equation*}
\gamma:=\epsilon^{1/2}.
\end{equation*}
From now on we will assume that $\gamma=\epsilon^{1/2}$ unless we state explicitly otherwise. In particular, $F_{\epsilon}(T,X)$ should be considered with respect to $\gamma$ as defined above. The following theorem is proved in Section \ref{epsilon_section}, though an informative but non-rigorous derivation is given in Section \ref{formal_calc_subsec}.
\begin{theorem}\label{epsilon_to_zero_theorem}
For all $s\in \ensuremath{\mathbb{R}}$, $T>0$ and $X\in\ensuremath{\mathbb{R}}$ we have the following convergence:
\begin{equation}\label{intintop}
F_T(s):=\lim_{\epsilon\rightarrow 0}P(F_\epsilon(T,X)+\tfrac{T}{4!}\leq s) = \int_{\mathcal{\tilde C}} e^{-\tilde\mu}\det(I-K^{\csc}_{a})_{L^2(\tilde\Gamma_{\eta})} \frac{d\tilde\mu}{\tilde\mu},
\end{equation}
where $a=a(s)$ is given as in the statement of Theorem \ref{main_result_thm} and where the contour $\mathcal{\tilde C}$, the contour $\tilde\Gamma_{\eta}$ and the operator $K_a^{\csc}$ are defined below in Definition \ref{thm_definitions}.
\end{theorem}
\begin{remark}
The limiting distribution function $F_T(s)$ above is, a priori, unrelated to the crossover distribution function (notated suggestively as $F_T(s)$ too) defined in Theorem \ref{main_result_thm}, which pertains to KPZ, etc., and not to WASEP.
Theorem \ref{BG_thm} below, however, establishes that these two distribution function definitions are, in fact, equivalent.
\end{remark}

\begin{definition}\label{thm_definitions}
The contour $\mathcal{\tilde C}$ is defined as
\begin{equation*}
\mathcal{\tilde C}=\{e^{i\theta}\}_{\pi/2\leq \theta\leq 3\pi/2} \cup \{x\pm i\}_{x>0}.
\end{equation*}
The contours $\tilde\Gamma_{\eta}$, $\tilde\Gamma_{\zeta}$ are defined as
\begin{eqnarray*}
\tilde\Gamma_{\eta}&=&\{\frac{c_3}{2}+ir: r\in (-\infty,\infty)\}\\
\tilde\Gamma_{\zeta}&=&\{-\frac{c_3}{2}+ir: r\in (-\infty,\infty)\},
\end{eqnarray*}
where the constant $c_3$ is defined henceforth as
\begin{equation*}
c_3=2^{-4/3}.
\end{equation*}
The kernel $K_a^{\csc}$ acts on the function space $L^2(\tilde\Gamma_{\eta})$ through its kernel:
\begin{equation}\label{k_csc_definition}
K_a^{\csc}(\tilde\eta,\tilde\eta') = \int_{\tilde\Gamma_{\zeta}} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}a(\tilde\zeta-\tilde\eta')} \left(2^{1/3}\int_{-\infty}^{\infty} \frac{\tilde\mu e^{-2^{1/3}t(\tilde\zeta-\tilde\eta')}}{e^{t}-\tilde\mu}dt\right) \frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}.
\end{equation}
\end{definition}

\begin{remark}
It is very important to observe that our choice of contours for $\tilde\zeta$ and $\tilde\eta'$ ensures that $\ensuremath{\mathrm{Re}}(-2^{1/3}(\tilde\zeta-\tilde\eta'))=1/2$. This ensures that the integral in $t$ above converges for all $\tilde\zeta$ and $\tilde\eta'$. In fact, the convergence holds as long as we keep $\ensuremath{\mathrm{Re}}(-2^{1/3}(\tilde\zeta-\tilde\eta'))$ in a closed subset of $(0,1)$. The inner integral in (\ref{k_csc_definition}) can be evaluated, and we find the following equivalent expression:
\begin{equation*}
K_a^{\csc}(\tilde\eta,\tilde\eta') = \int_{\tilde\Gamma_{\zeta}} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}a(\tilde\zeta-\tilde\eta')} \frac{\pi 2^{1/3} (-\tilde\mu)^{-2^{1/3}(\tilde\zeta-\tilde\eta')}}{ \sin(\pi 2^{1/3}(\tilde\zeta-\tilde\eta'))} \frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}.
\end{equation*}
This serves as an analytic extension of the first kernel to a larger domain of $\tilde\eta$, $\tilde\eta'$ and $\tilde\zeta$. We do not, however, make use of this analytic extension, and simply record it as a matter of interest.
\end{remark}

\subsection{The connection between WASEP and the stochastic heat equation}\label{WASEP_SHE}
We now state a result about the convergence of the $Z_{\epsilon}(T,X)$ from (\ref{rescaledheight}) to the solution $\mathcal{Z}(T,X)$ of the stochastic heat equation (\ref{she}) with delta initial data (\ref{sheinit}). First we take the opportunity to state (\ref{she}) precisely: $\mathscr{W}(T)$, $T \ge 0$ is the cylindrical Wiener process, i.e. the continuous Gaussian process taking values in $H^{-1/2-}_{\rm loc} (\mathbb R)=\cap_{\alpha<-1/2} H^{\alpha}_{\rm loc}(\mathbb R)$ with
\begin{equation*}
E[ \langle \varphi ,\mathscr{W}(T)\rangle \langle \psi, \mathscr{W}(S)\rangle ] =\min(T,S) \langle \varphi, \psi\rangle
\end{equation*}
for any $\varphi,\psi\in C_c^\infty(\mathbb R)$, the smooth functions with compact support in $\mathbb R$. Here $H^{\alpha}_{\rm loc}(\mathbb R)$, $\alpha<0$, consists of distributions $f$ such that for any $\varphi\in C_c^\infty(\mathbb R)$, $\varphi f$ is in the standard Sobolev space $H^{\alpha}(\mathbb R)$, i.e. the dual of $H^{-\alpha}(\mathbb R)$ under the $L^2$ pairing.
$H^{-\alpha}(\mathbb R)$ is the closure of $C_c^\infty(\mathbb R)$ under the norm $\int_{-\infty}^{\infty} (1+ |t|^{-2\alpha}) |\hat f(t)|^2dt$ where $\hat f$ denotes the Fourier transform. The distributional time derivative $\dot{\mathscr{W}}(T,X)$ is the space-time white noise,
\begin{equation*}
E[ \dot{\mathscr{W}}(T,X) \dot{\mathscr{W}}(S,Y)] = \delta(T-S)\delta(Y-X).
\end{equation*}
Note the mild abuse of notation for the sake of clarity; we write $\dot{\mathscr{W}}(T,X)$ even though it is a distribution on $(T,X)\in [0,\infty)\times \mathbb R$ as opposed to a classical function of $T$ and $X$. Let $\mathscr{F}(T)$, $T\ge 0$, be the natural filtration, i.e. the smallest $\sigma$-field with respect to which $\mathscr{W}(S)$ are measurable for all $0\le S\le T$.

The stochastic heat equation is then shorthand for its integrated version (\ref{intshe}) where the stochastic integral is interpreted in the It\^o sense \cite{W}, so that, in particular, if $f(T,X)$ is any non-anticipating integrand,
\begin{equation*}
E\left[ \Big(\int_0^T \int_{-\infty}^\infty f(S,Y)\mathscr{W}(dY, dS) \Big)^2\right]=E\left[ \int_0^T \int_{-\infty}^\infty f^2(S,Y)dY dS\right].
\end{equation*}
The awkward notation is inherited from stochastic partial differential equations: $\mathscr{W}$ for (cylindrical) Wiener process, $\dot{\mathscr{W}}$ for white noise, and stochastic integrals are taken with respect to white noise $\mathscr{W}(dY, dS)$. Note that the solution can be written explicitly as a series of multiple Wiener integrals. With $X_0=0$ and $X_{n+1}=X$,
\begin{equation}\label{soln}
\mathcal{Z}(T,X)= \sum_{n=0}^\infty (-1)^n \int_{\Delta'_n(T)} \int_{\mathbb R^n}\prod_{i=0}^n p(T_{i+1}-T_{i}, X_{i+1}-X_{i}) \prod_{i=1}^n\mathscr{W} (dT_i dX_i)
\end{equation}
where $\Delta'_n(T)= \{(T_1,\ldots,T_n) : 0=T_0 \le T_1\le \cdots \le T_n\le T_{n+1}=T\}$.

Returning now to the WASEP, the random functions $Z_{\epsilon}(T,X)$ from (\ref{rescaledheight}) have discontinuities both in space and in time. If desired, one can linearly interpolate in space so that they become a jump process taking values in the space of continuous functions. But it does not really make things easier. The key point is that the jumps are small, so we use instead the space $D_u([0,\infty); D_u(\mathbb R))$ where $D_u$ refers to right continuous paths with left limits with the topology of uniform convergence on compact sets. Let $\mathscr{P}_\epsilon$ denote the probability measure on $D_u([0,\infty); D_u(\mathbb R))$ corresponding to the process $Z_{\epsilon}(T,X)$.
\begin{theorem}\label{BG_thm}
$\mathscr{P}_\epsilon$, $\epsilon\in (0,1/4)$, form a tight family of measures and the unique limit point is supported on $C([0,\infty); C(\mathbb R))$ and corresponds to the solution (\ref{soln}) of the stochastic heat equation (\ref{she}) with delta function initial conditions (\ref{sheinit}).
\end{theorem}
In particular, for each fixed $X,T$ and $s$,
\begin{equation*}
\lim_{\epsilon\searrow 0} P( F_{\epsilon}(T,X)\le s) = P(\mathcal{F}(T,X)\le s) .
\end{equation*}
The result is motivated by, but does not follow directly from, the results of \cite{BG}. This is because of the delta function initial conditions, and the consequent difference in the scaling. It requires a certain amount of work to show that their basic computations are applicable to the present case. This is done in Section \ref{BG}.
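\begin{remark}
For intuition only (nothing in the paper depends on it), the convergence described in Theorem \ref{BG_thm} is easy to visualize with a naive explicit finite-difference/Euler--Maruyama discretization of (\ref{she}) started from a discrete delta function. The sketch below is ours: all grid parameters are arbitrary choices, the boundary conditions are periodic for simplicity, and the scheme is not claimed to preserve positivity.
\begin{verbatim}
import numpy as np

def she_euler(T=1.0, L=8.0, dx=0.05, seed=42):
    """Explicit scheme for dZ = (1/2) Z_xx dT - Z dW on [-L, L],
    Z(0,.) = delta_0 (approximated by mass 1/dx at the origin).
    White noise on a space-time cell of area dt*dx is replaced by
    sqrt(dt*dx)*N(0,1); dividing by dx gives the factor below."""
    rng = np.random.default_rng(seed)
    x = np.arange(-L, L + dx, dx)
    Z = np.zeros_like(x)
    Z[len(x) // 2] = 1.0 / dx           # discrete delta at x = 0
    dt = 0.25 * dx**2                   # explicit-scheme stability
    for _ in range(int(T / dt)):
        lap = (np.roll(Z, 1) - 2.0 * Z + np.roll(Z, -1)) / dx**2
        xi = rng.standard_normal(Z.size)
        Z = Z + 0.5 * lap * dt - Z * np.sqrt(dt / dx) * xi
    return x, Z

x, Z = she_euler()
print("mass:", np.trapz(Z, x))   # its expectation equals 1
\end{verbatim}
\end{remark}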
\subsection{The Tracy-Widom Step Initial Condition ASEP formula}\label{TW_ASEP_formula_sec} Due to the process-level convergence of WASEP to the stochastic heat equation, exact information about WASEP can be, with care, translated into information about the stochastic heat equation. Until recently, very little exact information was known about ASEP or WASEP. The work of Tracy and Widom in the past few years, however, has changed the situation significantly. The key tool in determining the limit as $\epsilon$ goes to zero of $P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\leq s)$ is their exact formula for the transition probability for a tagged particle in ASEP started from step initial conditions. This formula was stated in \cite{TW3} in the form below, and was developed in the three papers \cite{TW1,TW2,TW3}. We will apply it to the last line of (\ref{string_of_eqns}) to give us an exact formula for $P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\le s)$. Recall that $\mathbf{x}(t,m)$ is the location at time $t$ of the particle which started at $m>0$. Consider $q>p$ such that $q+p=1$ and let $\gamma=q-p$ and $\tau=p/q$. For $m>0$, $t\geq 0$ and $x\in \ensuremath{\mathbb{Z}}$, it is shown in \cite{TW3} that, \begin{equation}\label{TW_prob_equation} P(\mathbf{x}(\gamma^{-1}t,m)\leq x) = \int_{S_{\tau^+}}\frac{d\mu}{\mu} \prod_{k=0}^{\infty} (1-\mu\tau^k)\det(I+\mu J_{t,m,x,\mu})_{L^2(\Gamma_{\eta})} \end{equation} where $S_{\tau^+}$ is a circle centered at zero of radius strictly between $\tau$ and 1, and where the kernel of the Fredholm determinant (see Section \ref{pre_lem_ineq_sec}) is given by \begin{equation}\label{J_eqn_def} J_{t,m,x,\mu}(\eta,\eta')=\int_{\Gamma_{\zeta}} \exp\{\Psi_{t,m,x}(\zeta)-\Psi_{t,m,x}(\eta')\}\frac{f(\mu,\zeta/\eta')}{\eta'(\zeta-\eta)}d\zeta \end{equation} where $\eta$ and $\eta'$ are on $\Gamma_{\eta}$, a circle centered at zero of radius $\rho$ strictly between $\tau$ and $1$, and the $\zeta$ integral is on $\Gamma_{\zeta}$, a circle centered at zero of radius strictly between $1$ and $\rho\tau^{-1}$ (so as to ensure that $|\zeta/\eta'|\in (1,\tau^{-1})$), and where, for fixed $\xi$, \begin{eqnarray*} \nonumber f(\mu,z)&=&\sum_{k=-\infty}^{\infty} \frac{\tau^k}{1-\tau^k\mu}z^k,\\ \Psi_{t,m,x}(\zeta) &=& \Lambda_{t,m,x}(\zeta)-\Lambda_{t,m,x}(\xi),\\ \nonumber \Lambda_{t,m,x}(\zeta) &=& -x\log(1-\zeta) + \frac{t\zeta}{1-\zeta}+m\log\zeta . \end{eqnarray*} \begin{remark} Throughout the rest of the paper we will only include the subscripts on $J$, $\Psi$ and $\Lambda$ when we want to emphasize their dependence on a given variable. \end{remark} \subsection{The weakly asymmetric limit of the Tracy-Widom ASEP formula}\label{formal_calc_subsec} The Tracy-Widom ASEP formula (\ref{TW_prob_equation}) provides an exact expression for the probability $P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\leq s)$ by interpreting it in terms of a probability of the location of a tagged particle (\ref{string_of_eqns}). It is of great interest to understand the $\epsilon\to 0$ limit of this expression (denoted $F_T(s)$) since, as we have seen, it describes a number of interesting limiting objects. We will now present a formal computation of the expressions given in Theorem \ref{epsilon_to_zero_theorem} (see Section \ref{WASEP_crossover}) for $F_T(s)$. After presenting the formal argument, we will stress that there are a number of very important technical points which arise during this argument, many of which require serious work to resolve.
In Section \ref{epsilon_section} we will provide a rigorous proof of Theorem \ref{epsilon_to_zero_theorem} in which we deal with all of the possible pitfalls. \begin{definition}\label{quantity_definitions} Recall the definitions for the relevant quantities in this limit: \begin{eqnarray*} && p=\frac{1}{2}-\frac{1}{2}\epsilon^{1/2},\qquad q=\frac{1}{2}+\frac{1}{2}\epsilon^{1/2}\\ && \gamma=\epsilon^{1/2},\qquad \tau=\frac{1-\epsilon^{1/2}}{1+\epsilon^{1/2}}\\ && x=\epsilon^{-1}X,\qquad t=\epsilon^{-3/2}T\\ && m=\frac{1}{2}\left[\epsilon^{-1/2}\left(-a+\log(\epsilon^{-1/2}/2)+\frac{X^2}{2T}\right)+\frac{1}{2}t+x\right]\\ && \left\{F_{\epsilon}(T,X)+\frac{T}{4!}\leq s\right\} = \left\{\mathbf{x}(\frac{t}{\gamma},m)\leq x\right\}, \end{eqnarray*} where $a=a(s)$ is defined in the statement of Theorem \ref{main_result_thm}. We also define the contours $\Gamma_{\eta}$ and $\Gamma_{\zeta}$ to be \begin{equation*} \Gamma_{\eta}=\{z:|z|=1-\tfrac{1}{2}\epsilon^{1/2}\} \qquad \textrm{and} \qquad \Gamma_{\zeta}=\{z:|z|=1+\tfrac{1}{2}\epsilon^{1/2}\}. \end{equation*} \end{definition} The first term in the integrand of (\ref{TW_prob_equation}) is the infinite product $\prod_{k=0}^{\infty}(1-\mu \tau^k)$. Observe that $\tau\approx 1-2\epsilon^{1/2}$ and that $S_{\tau^+}$, the contour on which $\mu$ lies, is a circle centered at zero of radius between $\tau$ and 1. The infinite product is not well behaved along most of this contour, so we will deform the contour to one along which the product is not highly oscillatory. Care must be taken, however, since the Fredholm determinant has poles at every $\mu=\tau^k$. The deformation must avoid passing through them. Observe now that \begin{equation*} \prod_{k=0}^{\infty}(1-\mu \tau^k) = \exp\{\sum_{k=0}^{\infty} \log(1-\mu \tau^k)\}, \end{equation*} and that for small $|\mu|$, \begin{eqnarray}\nonumber \sum_{k=0}^{\infty} \log(1-\mu (1-2\epsilon^{1/2})^k)& \approx& \epsilon^{-1/2} \int_0^{\infty} \log(1-\mu e^{-2 r}) dr \\ & \approx & -\epsilon^{-1/2}\mu \int_0^{\infty} e^{-2 r} dr = -\frac{\epsilon^{-1/2}\mu}{2}. \end{eqnarray} With this in mind define \begin{equation*} \tilde\mu = \epsilon^{-1/2}\mu, \end{equation*} from which we see that if the Riemann sum approximation is reasonable then the infinite product converges to $e^{-\tilde\mu/2}$. We make the change of variables $\mu = \epsilon^{1/2}\tilde\mu$ and find that the above approximations are reasonable if we consider a $\tilde\mu$ contour \begin{equation*} \mathcal{\tilde C}_{\epsilon}=\{e^{i\theta}\}_{\pi/2\leq \theta\leq 3\pi/2} \cup \{x\pm i\}_{0<x<\epsilon^{-1/2}-1}. \end{equation*} Thus the infinite product goes to $e^{-\tilde\mu/2}$. Now we turn to the Fredholm determinant. We determine a candidate for the pointwise limit of the kernel. That the combination of these two pointwise limits gives the actual limiting formula as $\epsilon$ goes to zero is, of course, completely unjustified at this point. Also, the pointwise limits here disregard the existence of a number of singularities encountered during the argument. The kernel $J(\eta,\eta')$ is given by an integral whose integrand has three main components: An exponential term \begin{equation*} \exp\{\Lambda(\zeta)-\Lambda(\eta')\}, \end{equation*} a rational function term (we include the differential with this term for scaling purposes) \begin{equation*} \frac{d\zeta}{\eta'(\zeta-\eta)}, \end{equation*} and the term \begin{equation*} \mu f(\mu,\zeta/\eta').
\end{equation*} We will proceed by the method of steepest descent, so in order to determine the region along the $\zeta$ and $\eta$ contours which affects the asymptotics we consider the exponential term first. The argument of the exponential is given by $\Lambda(\zeta)-\Lambda(\eta')$ where \begin{equation*} \Lambda(\zeta)=-x\log(1-\zeta) + \frac{t\zeta}{1-\zeta}+m\log(\zeta), \end{equation*} and where, for the moment, we take $m=\frac{1}{2}\left[\epsilon^{-1/2}(-a+\frac{X^2}{2T})+\frac{1}{2}t+x\right]$. The real expression for $m$ has a $\log(\epsilon^{-1/2}/2)$ term which we absorb into $a$ for the moment (recall that $a$ is defined in the statement of Theorem \ref{main_result_thm}). Recall that $x, t$ and $m$ all depend on $\epsilon$. For small $\epsilon$, $\Lambda(\zeta)$ has a critical point in an $\epsilon^{1/2}$ neighborhood of $-1$. For purposes of having a nice ultimate answer, we choose to center on \begin{equation*} \xi=-1-2\epsilon^{1/2}\frac{X}{T}. \end{equation*} We can rewrite the argument of the exponential as $(\Lambda(\zeta)-\Lambda(\xi))-(\Lambda(\eta')-\Lambda(\xi))=\Psi(\zeta)-\Psi(\eta')$. The idea in \cite{TW3} for extracting asymptotics of this term is to deform the $\zeta$ and $\eta$ contours to lie along curves such that outside the scale $\epsilon^{1/2}$ around $\xi$, $\ensuremath{\mathrm{Re}}\Psi(\zeta)$ is large and negative, and $\ensuremath{\mathrm{Re}}\Psi(\eta')$ is large and positive. Hence we can ignore those parts of the contours. Then, rescaling around $\xi$ to blow up this $\epsilon^{1/2}$ scale, we obtain the asymptotic exponential term. This final change of variables then sets the scale at which we should analyze the other two terms in the integrand for the $J$ kernel. Returning to $\Psi(\zeta)$, we make a Taylor expansion around $\xi$ and find that in a neighborhood of $\xi$, \begin{equation*} \Psi(\zeta) \approx -\frac{T}{48} \epsilon^{-3/2}(\zeta-\xi)^3 + \frac{a}{2}\epsilon^{-1/2}(\zeta-\xi). \end{equation*} This suggests the change of variables, \begin{equation}\label{change_of_var_eqn} \tilde\zeta = 2^{-4/3}\epsilon^{-1/2}(\zeta-\xi) \qquad \tilde\eta' = 2^{-4/3}\epsilon^{-1/2}(\eta'-\xi), \end{equation} and likewise for $\tilde\eta$. After this our Taylor expansion takes the form \begin{equation}\label{taylor_expansion_term} \Psi(\tilde\zeta) \approx -\frac{T}{3} \tilde\zeta^3 +2^{1/3}a\tilde\zeta. \end{equation} In the spirit of steepest descent analysis, we would like the $\zeta$ contour to leave $\xi$ in a direction where this Taylor expansion is decreasing rapidly. This is accomplished by leaving at an angle of $\pm 2\pi/3$. Likewise, since $\Psi(\eta)$ should increase rapidly, $\eta$ should leave $\xi$ at an angle of $\pm\pi/3$. The $\zeta$ contour was originally centered at zero and of radius $1+\epsilon^{1/2}/2$ and the $\eta$ contour of radius $1-\epsilon^{1/2}/2$. In order to deform these contours without changing the value of the determinant, care must be taken since there are poles of $f$ whenever $\zeta/\eta'=\tau^k$, $k\in \ensuremath{\mathbb{Z}}$. We ignore this issue for the formal calculation, and deal with it carefully in Section \ref{epsilon_section} by using different contours. Let us now assume that we can deform our contours to curves along which $\Psi$ rapidly decays in $\zeta$ and increases in $\eta$, as we move along them away from $\xi$.
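Before proceeding, let us record the elementary arithmetic behind the coefficients in (\ref{taylor_expansion_term}): writing $\zeta-\xi=2^{4/3}\epsilon^{1/2}\tilde\zeta$ as dictated by (\ref{change_of_var_eqn}), \begin{equation*} -\frac{T}{48} \epsilon^{-3/2}(\zeta-\xi)^3 = -\frac{T}{48} \epsilon^{-3/2}\cdot 2^{4}\epsilon^{3/2}\tilde\zeta^3 = -\frac{T}{3}\tilde\zeta^3, \qquad \frac{a}{2}\epsilon^{-1/2}(\zeta-\xi) = \frac{a}{2}\cdot 2^{4/3}\tilde\zeta = 2^{1/3}a\tilde\zeta, \end{equation*} which explains both the appearance of $T/3$ and of the recurring factor $2^{1/3}$.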
If we apply the change of variables in (\ref{change_of_var_eqn}), the straight parts of our contours become infinite rays at angles $\pm2\pi/3$ and $\pm \pi/3$, which we call $\tilde\Gamma_{\zeta}$ and $\tilde\Gamma_{\eta}$ respectively. Note that this is {\em not} the actual definition of these contours which we use in the statement and proof of Theorem \ref{main_result_thm} because of the singularity problem mentioned above. Applying this change of variables to the kernel of the Fredholm determinant changes the $L^2$ space and hence we must multiply the kernel by the Jacobian term $2^{4/3}\epsilon^{1/2}$. We will include this term with the $\mu f(\mu,z)$ term and take the $\epsilon\to0$ limit of that product. As noted before, the term $2^{1/3}a\tilde\zeta$ should have been $2^{1/3}(a-\log(\epsilon^{-1/2}/2))\tilde\zeta$ in the Taylor expansion above, giving \begin{equation*} \Psi(\tilde\zeta) \approx -\frac{T}{3} \tilde\zeta^3 +2^{1/3}(a-\log(\epsilon^{-1/2}/2))\tilde\zeta, \end{equation*} which would appear to blow up as $\epsilon$ goes to zero. We now show how the extra $\log\epsilon$ in the exponent can be absorbed into the $2^{4/3}\epsilon^{1/2}\mu f(\mu,\zeta/\eta')$ term. Recall \begin{equation*} \mu f(\mu,z) = \sum_{k=-\infty}^{\infty} \frac{\mu \tau^k}{1-\tau^k \mu}z^k. \end{equation*} Let $n_0=\lfloor \log(\epsilon^{-1/2}) /\log(\tau)\rfloor$ and observe that for $1 < |z| < \tau^{-1}$, \begin{equation*} \mu f(\mu,z) = \sum_{k=-\infty}^{\infty} \frac{ \mu \tau^{k+n_0}}{1-\tau^{k+n_0}\mu}z^{k+n_0} =z^{n_0} \tau^{n_0}\mu \sum_{k=-\infty}^{\infty} \frac{ \tau^{k}}{1-\tau^{k}\tau^{n_0}\mu}z^{k}. \end{equation*} By the choice of $n_0$, $\tau^{n_0}\approx \epsilon^{-1/2}$ so \begin{equation*} \mu f(\mu,z) \approx z^{n_0} \tilde\mu f(\tilde\mu,z). \end{equation*} The discussion on the exponential term indicates that it suffices to understand the behaviour of this function when $\zeta$ and $\eta'$ are within $\epsilon^{1/2}$ of $\xi$. Equivalently, letting $z=\zeta/\eta'$, it suffices to understand $ \mu f(\mu,z) \approx z^{n_0} \tilde\mu f(\tilde\mu,z)$ for \begin{equation*} z= \frac{\zeta}{\eta'}=\frac{\xi + 2^{4/3}\epsilon^{1/2}\tilde\zeta}{\xi + 2^{4/3}\epsilon^{1/2}\tilde\eta'}\approx 1-\epsilon^{1/2}\tilde z, \qquad \tilde z=2^{4/3}(\tilde\zeta-\tilde\eta'). \end{equation*} Let us now consider $z^{n_0}$ using the fact that $\log(\tau)\approx -2\epsilon^{1/2}$: \begin{equation*} z^{n_0} \approx (1-\epsilon^{1/2}\tilde z)^{\epsilon^{-1/2}(\frac{1}{4}\log\epsilon)} \approx e^{-\frac{1}{4}\tilde z \log(\epsilon)}. \end{equation*} Plugging back in the value of $\tilde z$ in terms of $\tilde\zeta$ and $\tilde\eta'$ we see that this prefactor of $z^{n_0}$ exactly cancels the $\log\epsilon$ term which accompanies $a$ in the exponential. What remains is to determine the limit of $2^{4/3}\epsilon^{1/2}\tilde\mu f(\tilde\mu, z)$ as $\epsilon$ goes to zero, for $z\approx 1-\epsilon^{1/2} \tilde z$. This can be found by interpreting the infinite sum as a Riemann sum approximation for a certain integral. Define $t=k\epsilon^{1/2}$ and observe that \begin{equation}\label{Riemann_limit} \epsilon^{1/2}\tilde\mu f(\tilde\mu,z) = \sum_{k=-\infty}^{\infty} \frac{ \tilde\mu \tau^{t\epsilon^{-1/2}}z^{t\epsilon^{-1/2}}}{1-\tilde\mu \tau^{t\epsilon^{-1/2}}}\epsilon^{1/2} \rightarrow \int_{-\infty}^{\infty} \frac{\tilde\mu e^{-2t}e^{-\tilde z t}}{1-\tilde\mu e^{-2t}}dt.
\end{equation} This used the fact that $\tau^{t\epsilon^{-1/2}}\rightarrow e^{-2t}$ and that $z^{t\epsilon^{-1/2}}\rightarrow e^{-\tilde z t}$, which hold at least pointwise in $t$. For (\ref{Riemann_limit}) to hold, we must have $\ensuremath{\mathrm{Re}} \tilde z$ bounded inside $(0,2)$, but we disregard this difficulty for the heuristic proof. If we change variables of $t$ to $t/2$ and multiply the top and bottom by $e^{-t}$ then we find that \begin{equation*} 2^{4/3}\epsilon^{1/2}\mu f(\mu,\zeta/\eta') \rightarrow 2^{1/3} \int_{-\infty}^{\infty} \frac{\tilde\mu e^{-\tilde zt/2}}{e^{t}-\tilde\mu}dt. \end{equation*} As for the final term, the rational expression: under the change of variables, zooming in on $\xi$, the factor of $1/\eta'$ goes to $-1$ and $\frac{d\zeta}{\zeta-\eta}$ goes to $\frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}$. Thereby we formally obtain from $\mu J$ the kernel $-K_{a'}^{\csc}(\tilde\eta,\tilde\eta')$ acting on $L^2(\tilde\Gamma_{\eta})$, where \begin{equation*} K_{a'}^{\csc}(\tilde\eta,\tilde\eta') = \int_{\tilde\Gamma_{\zeta}} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}a'(\tilde\zeta-\tilde\eta')} \left(2^{1/3}\int_{-\infty}^{\infty} \frac{\tilde\mu e^{-2^{1/3}t(\tilde\zeta-\tilde\eta')}}{e^{t}-\tilde\mu}dt\right) \frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}, \end{equation*} with $a'=a+\log2$. Recall that the $\log2$ came from the $\log(\epsilon^{-1/2}/2)$ term. We have the identity \begin{equation}\label{cscid} \int_{-\infty}^{\infty} \frac{\tilde\mu e^{-\tilde zt/2}}{e^{t}-\tilde\mu}dt =(-\tilde\mu)^{-\tilde z/2}\pi \csc(\pi \tilde z/2), \end{equation} where the branch cut in $\tilde\mu$ is taken along the positive real axis, hence $(-\tilde\mu)^{-\tilde z/2} =e^{-\log(-\tilde\mu)\tilde z/2}$ where $\log$ is taken with the standard branch cut along the negative real axis. We may use the identity to rewrite the kernel as \begin{equation*} K_{a'}^{\csc}(\tilde\eta,\tilde\eta') = \int_{\tilde\Gamma_{\zeta}} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}a'(\tilde\zeta-\tilde\eta')} \frac{\pi 2^{1/3}(-\tilde\mu)^{-2^{1/3}(\tilde\zeta-\tilde\eta')}}{ \sin(\pi 2^{1/3}(\tilde\zeta-\tilde\eta'))} \frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}. \end{equation*} Therefore we have shown formally that \begin{equation*} \lim_{\epsilon\rightarrow 0} P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\leq s) := F_T(s) = \int_{\mathcal{\tilde C}}e^{-\tilde \mu/2}\frac{d\tilde\mu}{\tilde\mu}\det(I-K_{a'}^{\csc})_{L^2(\tilde\Gamma_{\eta})}, \end{equation*} where $a'=a+\log 2$. To make it cleaner we replace $\tilde\mu/2$ with $\tilde\mu$. This only affects the $\tilde\mu$ term above, which is now given by $(-2\tilde\mu)^{-\tilde z/2}=(-\tilde\mu)^{-2^{1/3}(\tilde\zeta-\tilde\eta')} e^{-2^{1/3}\log2\,(\tilde\zeta-\tilde\eta')}$. This can be absorbed and cancels the $\log2$ in $a'$ and thus we obtain, \begin{equation*} F_T(s) = \int_{\mathcal{\tilde C}}e^{-\tilde \mu}\frac{d\tilde\mu}{\tilde\mu}\det(I-K_{a}^{\csc})_{L^2(\tilde\Gamma_{\eta})}, \end{equation*} which, up to the definitions of the contours $\tilde\Gamma_{\eta}$ and $\tilde\Gamma_{\zeta}$, is the desired limiting formula. We now briefly note some of the problems and pitfalls of the preceding formal argument, all of which will be addressed in the real proof of Section \ref{epsilon_section}. Firstly, the pointwise convergence of both the prefactor infinite product and the Fredholm determinant is certainly not enough to prove convergence of the $\tilde\mu$ integral.
Estimates must be made to control this convergence or to show that we can cut off the tails of the $\tilde\mu$ contour at negligible cost and then show uniform convergence on the trimmed contour. Secondly, the deformation of the $\eta$ and $\zeta$ contours to the steepest descent curves is {\it entirely} illegal, as it involves passing through many poles of the kernel (coming from the $f$ term). In the case of \cite{TW3} this problem could be dealt with rather simply by just slightly modifying the descent curves. However, in our case, since $\tau$ tends to $1$ like $\epsilon^{1/2}$, such a patch is much harder and involves very fine estimates to show that there exist suitable contours which stay close enough together, yet along which $\Psi$ displays the necessary descent and ascent required to make the argument work. This issue also comes up in the convergence of (\ref{Riemann_limit}). In order to make sense of this we must ensure that $1 < |\zeta/\eta'| < \tau^{-1}$ or else the convergence and the resulting expression make no sense. Finally, one must make precise tail estimates to show that the kernel convergence holds in trace-class norm. The Riemann sum approximation argument can in fact be made rigorous (following the proof of Proposition \ref{originally_cut_mu_lemma}). We choose, however, to give an alternative proof of the validity of that limit in which we identify and prove the limit of $f$ via analysis of singularities and residues. \section{Proof of the weakly asymmetric limit of the Tracy-Widom ASEP formula}\label{epsilon_section} In this section we give a proof of Theorem \ref{epsilon_to_zero_theorem}, for which a formal derivation was presented in Section \ref{formal_calc_subsec}. The heart of the argument is Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition} which is proved in Section \ref{J_to_K_sec} and also relies on a number of technical lemmas. These lemmas as well as all of the other propositions are proved in Section \ref{props_and_lemmas_sec}. \subsection{Proof of Theorem \ref{epsilon_to_zero_theorem}}\label{proof_of_WASEP_thm_sec} The expression given in equation (\ref{TW_prob_equation}) for $P(F_{\epsilon}(T,X)+\tfrac{T}{4!}\leq s)$ contains an integral over a $\mu$ contour of a product of a prefactor infinite product and a Fredholm determinant. The first step towards taking the limit of this as $\epsilon$ goes to zero is to control the prefactor, $\prod_{k=0}^{\infty} (1-\mu\tau^k)$. Initially $\mu$ lies on a contour $S_{\tau^+}$ which is centered at zero and of radius between $\tau$ and 1. Along this contour the partial products (i.e., the product up to level $N$) form a highly oscillatory sequence and hence it is hard to control the convergence of the sequence. \begin{figure} \begin{center} \includegraphics[scale=.17]{Deform_to_C.eps} \caption{The $S_{\tau^+}$ contour is deformed to the $\mathcal{C}_{\epsilon}$ contour via Cauchy's theorem; a change of variables then leads to $\mathcal{\tilde C}_{\epsilon}$, with its infinite extension $\mathcal{\tilde C}$.}\label{deform_to_c} \end{center} \end{figure} The first step in our proof is to deform the $\mu$ contour $S_{\tau^+}$ to \begin{equation*} \mathcal{C}_{\epsilon} = \{\epsilon^{1/2}e^{i\theta}\}_{\pi/2\leq \theta\leq 3\pi/2}\cup \{x\pm i \epsilon^{1/2}\}_{0<x\leq 1-\epsilon^{1/2}}\cup \{1-\epsilon^{1/2}+\epsilon^{1/2}iy\}_{-1<y<1}, \end{equation*} a long, skinny cigar-shaped contour (see Fig.~\ref{deform_to_c}). We orient $ \mathcal{C}_{\epsilon}$ counter-clockwise.
Notice that this new contour still encloses all of the poles at $\mu=\tau^{k}$, $k\geq 1$, associated with the $f$ function in the $J$ kernel. In order to justify replacing $S_{\tau^+}$ by $\mathcal{C}_{\epsilon}$ we need the following (for the proof see Section \ref{proofs_sec}): \begin{lemma}\label{deform_mu_to_C} In equation (\ref{TW_prob_equation}) we can replace the contour $S_{\tau^+}$ with $\mathcal{C}_\epsilon$ as the contour of integration for $\mu$ without affecting the value of the integral. \end{lemma} Having made this deformation of the $\mu$ contour, we now observe that the natural scale for $\mu$ is of order $\epsilon^{1/2}$. With this in mind we make the change of variables \begin{equation*} \mu = \epsilon^{1/2}\tilde\mu. \end{equation*} \begin{remark} Throughout the proof of this theorem and its lemmas and propositions, we will use the tilde to denote variables which are $\epsilon^{1/2}$ rescaled versions of the original, untilded variables. \end{remark} The $\tilde\mu$ variable now lives on the contour \begin{equation*} \mathcal{\tilde C}_{\epsilon} = \{e^{i\theta}\}_{\pi/2\leq \theta\leq 3\pi/2}\cup \{x\pm i\}_{0<x\leq \epsilon^{-1/2}-1}\cup \{\epsilon^{-1/2}-1+iy\}_{-1<y<1}, \end{equation*} which grows as $\epsilon$ decreases and ultimately approaches \begin{equation*} \mathcal{\tilde C} = \{e^{i\theta}\}_{\pi/2\leq \theta\leq 3\pi/2}\cup \{x\pm i\}_{x>0}. \end{equation*} In order to show convergence of the integral as $\epsilon$ goes to zero, we must consider two things: the convergence of the integrand for $\tilde\mu$ in some compact region around the origin on $\mathcal{\tilde{C}}$, and the controlled decay of the integrand on $\mathcal{\tilde C}_{\epsilon}$ outside of that compact region. This second consideration will allow us to approximate the integral by a finite integral in $\tilde\mu$, while the first consideration will tell us what the limit of that integral is. When all is said and done, we will paste back in the remaining part of the $\tilde\mu$ integral and have our answer. With this in mind we give the following bound which is proved in Section \ref{proofs_sec}. \begin{lemma}\label{mu_inequalities_lemma} Define two regions, depending on a fixed parameter $r\geq 1$: \begin{eqnarray*} R_1 &=& \{\tilde\mu : |\tilde\mu|\leq \frac{r}{\sin(\pi/10)}\}\\ R_2 &=& \{\tilde\mu : \ensuremath{\mathrm{Re}}(\tilde\mu)\in [\frac{r}{\tan(\pi/10)},\epsilon^{-1/2}], \textrm{ and } \ensuremath{\mathrm{Im}}(\tilde\mu)\in [-2,2]\}. \end{eqnarray*} $R_1$ is compact and $R_1 \cup R_2$ contains all of the contour $\mathcal{\tilde{C}}_{\epsilon}$. Furthermore define the function (the infinite product after the change of variables) \begin{equation*} g_{\epsilon}(\tilde\mu) = \prod_{k=0}^{\infty} (1-\epsilon^{1/2}\tilde\mu \tau^k). \end{equation*} Then uniformly in $\tilde\mu\in R_1$, \begin{equation}\label{g_e_ineq1} g_{\epsilon}(\tilde\mu)\to e^{-\tilde\mu/2}. \end{equation} Also, for all $\epsilon<\epsilon_0$ (some positive constant) there exists a constant $c$ such that for all $\tilde\mu\in R_2$ we have the following tail bound: \begin{equation}\label{g_e_ineq2} |g_{\epsilon}(\tilde\mu)| \leq |e^{-\tilde\mu/2}| |e^{-c\epsilon^{1/2}\tilde\mu^2}|. \end{equation} (By the choice of $R_2$, for all $\tilde\mu\in R_2$, $\ensuremath{\mathrm{Re}}(\tilde\mu^2)>\delta>0$ for some fixed $\delta$. The constant $c$ can be taken to be $1/8$.) \end{lemma} We now turn our attention to the Fredholm determinant term in the integrand.
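After the change of variables $\mu=\epsilon^{1/2}\tilde\mu$, this term reads \begin{equation*} \det(I+\epsilon^{1/2}\tilde\mu J_{\epsilon^{1/2}\tilde\mu})_{L^2(\Gamma_{\eta})}, \end{equation*} regarded now as a function of $\tilde\mu$ along $\mathcal{\tilde C}_{\epsilon}$.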
Just as we did for the prefactor infinite product in Lemma \ref{mu_inequalities_lemma}, we must establish uniform convergence of the determinant for $\tilde\mu$ in a fixed compact region around the origin, and a suitable tail estimate valid outside that compact region. The tail estimate must be such that for each finite $\epsilon$, we can combine the two tail estimates (from the prefactor and from the determinant) and show that their integral over the tail part of $\mathcal{\tilde C}_{\epsilon}$ is small and goes to zero as we enlarge the original compact region. For this we have the following two propositions (the first is the most substantial and is proved in Section \ref{J_to_K_sec}, while the second is proved in Section \ref{proofs_sec}). \begin{proposition}\label{uniform_limit_det_J_to_Kcsc_proposition} Fix $s\in \ensuremath{\mathbb{R}}$, $T>0$ and $X\in \ensuremath{\mathbb{R}}$. Then for any compact subset of $\mathcal{\tilde C}$ we have that for all $\delta>0$ there exists an $\epsilon_0>0$ such that for all $\epsilon<\epsilon_0$ and all $\tilde\mu$ in the compact subset, \begin{equation*} \left|\det(I+\epsilon^{1/2}\tilde\mu J_{\epsilon^{1/2}\tilde\mu})_{L^2(\Gamma_{\eta})} - \det(I-K^{\csc}_{a'})_{L^2(\tilde\Gamma_{\eta})}\right|<\delta. \end{equation*} Here $a'=a+\log2$ and $K_{a'}^{\csc}$ is the operator defined in Definition \ref{thm_definitions} via the kernel (\ref{k_csc_definition}); it depends implicitly on $\tilde\mu$. \end{proposition} \begin{proposition}\label{originally_cut_mu_lemma} There exist $c,c'>0$ and $\epsilon_0>0$ such that for all $\epsilon<\epsilon_0$ and all $\tilde\mu\in\mathcal{\tilde C}_{\epsilon}$, \begin{equation*} \left|g_{\epsilon}(\tilde\mu)\det(I+\epsilon^{1/2}\tilde\mu J_{\epsilon^{1/2}\tilde\mu})_{L^2(\Gamma_{\eta})}\right| \leq c'e^{-c|\tilde\mu|}. \end{equation*} \end{proposition} This exponential decay bound on the integrand shows that, by choosing a suitably large (fixed) compact region around zero along the contour $\mathcal{\tilde C}_{\epsilon}$, it is possible to make the $\tilde\mu$ integral outside of this region arbitrarily small, uniformly in $\epsilon\in (0,\epsilon_0)$. This means that we may assume henceforth that $\tilde\mu$ lies in a compact subset of $\mathcal{\tilde C}$. Now that we are on a fixed compact set of $\tilde\mu$, the first part of Lemma \ref{mu_inequalities_lemma} and Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition} combine to show that the integrand converges uniformly to \begin{equation*} \frac{e^{-\tilde\mu/2}}{\tilde\mu} \det(I-K^{\csc}_{a'})_{L^2(\tilde\Gamma_{\eta})} \end{equation*} and hence the integral converges to the integral with this integrand. To finish the proof of the limit in Theorem \ref{epsilon_to_zero_theorem}, it is necessary that for any $\delta$ we can find a suitably small $\epsilon_0$ such that the two sides of the limit differ by less than $\delta$ for all $\epsilon<\epsilon_0$. Technically we are in the position of a $\delta/3$ argument. One portion of $\delta/3$ goes to the cost of cutting off the $\tilde\mu$ contour outside of some compact set. Another $\delta/3$ goes to the uniform convergence of the integrand. The final portion goes to repairing the $\tilde\mu$ contour. As $\delta$ gets smaller, the cut for the $\tilde\mu$ contour must occur further out. Therefore the limiting integral will be over the limit of the $\tilde\mu$ contours, which we called $\mathcal{\tilde C}$. The final $\delta/3$ is spent on the following Proposition, whose proof is given in Section \ref{proofs_sec}.
\begin{proposition}\label{reinclude_mu_lemma} There exist $c,c'>0$ such that for all $\tilde\mu\in \mathcal{\tilde C}$ with $|\tilde\mu|\geq 1$, \begin{equation*} \left|\frac{e^{-\tilde\mu/2}}{\tilde\mu} \det(I-K^{\csc}_{a})_{L^2(\tilde\Gamma_{\eta})}\right| \leq |c'e^{-c\tilde\mu}|. \end{equation*} \end{proposition} Recall that the kernel $K^{\csc}_{a}$ is a function of $\tilde\mu$. The argument used to prove this proposition immediately shows that $K_a^{\csc}$ is a trace class operator on $L^2(\tilde\Gamma_{\eta})$. It is an immediate corollary of this exponential tail bound that for sufficiently large compact sets of $\tilde\mu$, the cost to include the rest of the $\tilde\mu$ contour is less than $\delta/3$. This, along with the change of variables in $\tilde\mu$ described at the end of Section \ref{formal_calc_subsec}, finishes the proof of Theorem \ref{epsilon_to_zero_theorem}. \subsection{Proof of Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition}}\label{J_to_K_sec} In this section we provide all of the steps necessary to prove Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition}. To ease understanding of the argument we relegate the more technical points to lemmas whose proofs we delay to Section \ref{JK_proofs_sec}. During the proof of this proposition, it is important to keep in mind that we are assuming that $\tilde\mu$ lies in a fixed compact subset of $\mathcal{\tilde C}$. Recall that $\tilde\mu = \epsilon^{-1/2}\mu$. We proceed via the following strategy to find the limit of the Fredholm determinant as $\epsilon$ goes to zero. The first step is to deform the contours $\Gamma_{\eta}$ and $\Gamma_{\zeta}$ to suitable curves along which there exists a small region outside of which the kernel of our operator is exponentially small. This justifies cutting the contours off outside of this small region. We may then rescale everything so this small region becomes order one in size. Then we show uniform convergence of the kernel to the limiting kernel on the compact subset. Finally we need to show that we can complete the finite contour on which this limiting object is defined to an infinite contour without significantly changing the value of the determinant. Recall now that $\Gamma_{\zeta}$ is defined to be a circle centered at zero of radius $1+\epsilon^{1/2}/2$ and $\Gamma_{\eta}$ is a circle centered at zero of radius $1-\epsilon^{1/2}/2$ and that \begin{equation*} \xi = -1 - 2\epsilon^{1/2}\frac {X}{T}. \end{equation*} The function $f(\mu,\zeta/\eta')$ which shows up in the definition of the kernel for $J$ has poles at every point $\zeta/\eta'=z=\tau^k$ for $k\in \ensuremath{\mathbb{Z}}$. As long as we simultaneously deform the $\Gamma_{\zeta}$ contour as we deform $\Gamma_{\eta}$ so as to keep $\zeta/\eta'$ away from these poles, we may use Lemma \ref{TWprop1} (Proposition 1 of \cite{TW3}) to justify the fact that the determinant does not change under this deformation. In this way we may deform our contours to the following modified contours $\Gamma_{\eta,l},\Gamma_{\zeta,l}$: \begin{figure} \begin{center} \includegraphics[scale=.6]{kappa_contour.eps} \caption{$\Gamma_{\zeta,l}$ (the outermost curve) is composed of a small vertical section near $\xi$ labeled $\Gamma_{\zeta,l}^{vert}$ and a large almost circular (up to a small modification due to the function $\kappa(\theta)$) section labeled $\Gamma_{\zeta,l}^{circ}$. Likewise $\Gamma_{\eta,l}$ is the middle curve, and the inner curve is the unit circle.
These curves depend on $\epsilon$ in such a way that $|\zeta/\eta|$ is bounded between $1$ and $\tau^{-1}\approx 1+2\epsilon^{1/2}$.}\label{kappa_contour} \end{center} \end{figure} \begin{definition} Let $\Gamma_{\eta,l}$ and $\Gamma_{\zeta,l}$ be two families (indexed by $l>0$) of simple closed contours in $\ensuremath{\mathbb{C}}$ defined as follows. Let \begin{equation}\label{kappa_eqn} \kappa(\theta) = \frac{2X}{T} \tan^2\left(\frac{\theta}{2}\right)\log\left(\frac{2}{1-\cos\theta}\right). \end{equation} Both $\Gamma_{\eta,l}$ and $\Gamma_{\zeta,l}$ will be symmetric across the real axis, so we need only define them on the top half. $\Gamma_{\eta,l}$ begins at $\xi+\epsilon^{1/2}/2$ and moves along a straight vertical line for a distance $l\epsilon^{1/2}$ and then joins the curve \begin{equation}\label{kappa_param_eqn} \left[1+\epsilon^{1/2}(\kappa(\theta)+\alpha)\right]e^{i\theta} \end{equation} parametrized by $\theta$ from $\pi-l\epsilon^{1/2} + O(\epsilon)$ to $0$, and where $\alpha = -1/2 + O(\epsilon^{1/2})$ (see Figure \ref{kappa_contour} for an illustration of these contours). The small errors are necessary to make sure that the curves join up at the end of the vertical section of the curve. We extend this to a closed contour by reflection through the real axis and orient it clockwise. We denote the first, vertical part of the contour by $\Gamma_{\eta,l}^{vert}$ and the second, roughly circular part by $\Gamma_{\eta,l}^{circ}$. This means that $\Gamma_{\eta,l}=\Gamma_{\eta,l}^{vert}\cup \Gamma_{\eta,l}^{circ}$, and along this contour we can think of parametrizing $\eta$ by $\theta\in [0,\pi]$. We define $\Gamma_{\zeta,l}$ similarly, except that it starts out at $\xi-\epsilon^{1/2}/2$ and joins the curve given by equation (\ref{kappa_param_eqn}) where the value of $\theta$ ranges from $\theta=\pi-l\epsilon^{1/2} + O(\epsilon)$ to $\theta=0$ and where $\alpha = 1/2 + O(\epsilon^{1/2})$. We similarly denote this contour by the union of $\Gamma_{\zeta,l}^{vert}$ and $\Gamma_{\zeta,l}^{circ}$. \end{definition} By virtue of these definitions, it is clear that $\epsilon^{-1/2}|\zeta/\eta'-\tau^k|$ stays bounded away from zero for all $k$, and that $|\zeta/\eta'|$ is bounded in a closed set contained in $(1,\tau^{-1})$ for all $\zeta\in \Gamma_{\zeta,l}$ and $\eta'\in \Gamma_{\eta,l}$. Therefore, for any $l>0$ we may, by deforming both the $\eta$ and $\zeta$ contours simultaneously, assume that our operator acts on $L^2(\Gamma_{\eta,l})$ and that its kernel is defined via an integral along $\Gamma_{\zeta,l}$. It is critical that we now show that, due to our choice of contours, we are able to forget about everything except for the vertical parts of the contours. To formulate this we have the following: \begin{definition} Let $\chi_l^{vert}$ and $\chi_l^{circ}$ be projection operators acting on $L^2(\Gamma_{\eta,l})$ which project onto $L^2(\Gamma_{\eta,l}^{vert})$ and $L^2(\Gamma_{\eta,l}^{circ})$ respectively. Also define two operators $J_l^{vert}$ and $J_l^{circ}$ which act on $L^2(\Gamma_{\eta,l})$ and have kernels identical to $J$ (see equation (\ref{J_eqn_def})) except that the $\zeta$ integral is over $\Gamma_{\zeta,l}^{vert}$ and $\Gamma_{\zeta,l}^{circ}$ respectively. Thus we have a family (indexed by $l>0$) of decompositions of our operator $J$ as follows: \begin{equation*} J = J_l^{vert}\chi_{l}^{vert} +J_l^{vert}\chi_{l}^{circ}+J_l^{circ}\chi_{l}^{vert}+J_l^{circ}\chi_{l}^{circ}.
\end{equation*} \end{definition} We now show that it suffices to consider just the first part of this decomposition ($J_l^{vert}\chi_{l}^{vert}$) for sufficiently large $l$. \begin{proposition}\label{det_1_1_prop} Assume that $\tilde\mu$ is restricted to a bounded subset of the contour $\mathcal{\tilde C}$. For all $\delta>0$ there exist $\epsilon_0>0$ and $l_0>0$ such that for all $\epsilon<\epsilon_0$ and all $l>l_0$, \begin{equation*} |\det(I+\mu J)_{L^2(\Gamma_{\eta,l})} - \det(I+\mu J_{l}^{vert})_{L^2(\Gamma_{\eta,l}^{vert})}|<\delta. \end{equation*} \end{proposition} \begin{proof} As was explained in Section \ref{formal_calc_subsec}, if we let \begin{equation}\label{n_0_eqn} n_0=\lfloor \log(\epsilon^{-1/2}) /\log(\tau)\rfloor \end{equation} then it follows from the invariance of the doubly infinite sum for $f(\mu,z)$ under the shift $k\mapsto k+n_0$ that \begin{equation*} \mu f(\mu,z) = z^{n_0} (\tilde\mu f(\tilde\mu,z) + O(\epsilon^{1/2})). \end{equation*} Note that the $O(\epsilon^{1/2})$ does not play a significant role in what follows so we drop it. Using the above argument and the following two lemmas (which are proved in Section \ref{JK_proofs_sec}) we will be able to complete the proof of Proposition \ref{det_1_1_prop}. \begin{lemma}\label{kill_gamma_2_lemma} For all $c>0$ there exist $l_0>0$ and $\epsilon_0>0$ such that for all $l>l_0$, $\epsilon<\epsilon_0$ and $\eta\in \Gamma_{\eta,l}^{circ}$, \begin{equation*} \ensuremath{\mathrm{Re}}(\Psi(\eta)+n_0\log(\eta))\geq c|\xi-\eta|\epsilon^{-1/2}, \end{equation*} where $n_0$ is defined in (\ref{n_0_eqn}). Likewise, for all $\epsilon<\epsilon_0$ and $\zeta\in \Gamma_{\zeta,l}^{circ}$, \begin{equation*} \ensuremath{\mathrm{Re}}(\Psi(\zeta)+n_0\log(\zeta))\leq -c|\xi-\zeta|\epsilon^{-1/2}. \end{equation*} \end{lemma} \begin{lemma}\label{mu_f_polynomial_bound_lemma} For all $l>0$ there exist $\epsilon_0>0$ and $c>0$ such that for all $\epsilon<\epsilon_0$, $\eta'\in \Gamma_{\eta,l}$ and $\zeta\in \Gamma_{\zeta,l}$, \begin{equation*} |\tilde\mu f(\tilde\mu,\zeta/\eta')|\leq \frac{c}{|\zeta-\eta'|}. \end{equation*} \end{lemma} It now follows that for any $\delta>0$, we can find $l_0$ large enough that $||J_l^{vert}\chi_{l}^{circ}||_1$, $||J_l^{circ}\chi_{l}^{vert}||_1$ and $||J_l^{circ}\chi_{l}^{circ}||_1$ are all bounded by $\delta/3$. This is because we may factor these various operators into a product of Hilbert-Schmidt operators and then use the exponential decay of Lemma \ref{kill_gamma_2_lemma} along with the polynomial control of Lemma \ref{mu_f_polynomial_bound_lemma} and the remaining term $1/(\zeta-\eta)$ to prove that each of the Hilbert-Schmidt norms goes to zero (for a similar argument, see the bottom of page 27 of \cite{TW3}). Since $\mu$ stays bounded, the continuity of the determinant in trace norm (Lemma \ref{fredholm_continuity_lemma}) then completes the proof of Proposition \ref{det_1_1_prop}. \end{proof} We now return to the proof of Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition}. We have successfully restricted ourselves to considering $J_{l}^{vert}$ acting on $L^2(\Gamma_{\eta,l}^{vert})$. Having focused on the region of asymptotically non-trivial behavior, we can now rescale and show that the kernel converges to its limit, uniformly on the compact contour. \begin{definition}\label{change_of_var_tilde_definitions} Recall $c_3=2^{-4/3}$ and let \begin{equation*} \eta = \xi + c_3^{-1}\epsilon^{1/2}\tilde\eta, \qquad \eta' = \xi + c_3^{-1}\epsilon^{1/2}\tilde\eta', \qquad \zeta = \xi + c_3^{-1}\epsilon^{1/2}\tilde\zeta.
\end{equation*} Under this change of variables the contours $\Gamma_{\eta,l}^{vert}$ and $\Gamma_{\zeta,l}^{vert}$ become \begin{eqnarray*} \tilde\Gamma_{\eta,l} = \{c_3/2+ir:r\in (-c_3l,c_3l)\},\\ \tilde\Gamma_{\zeta,l} = \{-c_3/2+ir:r\in (-c_3l,c_3l)\}. \end{eqnarray*} As $l$ increases to infinity, these contours approach their infinite versions, \begin{eqnarray*} \tilde\Gamma_{\eta} = \{c_3/2+ir:r\in (-\infty,\infty)\},\\ \tilde\Gamma_{\zeta} = \{-c_3/2+ir:r\in (-\infty,\infty)\}. \end{eqnarray*} With respect to the change of variables define an operator $\tilde J_l$ acting on $L^2(\tilde\Gamma_{\eta})$ via the kernel: \begin{equation*} \mu \tilde J_l (\tilde\eta,\tilde\eta') = c_{3}^{-1}\epsilon^{1/2} \int_{\tilde\Gamma_{\zeta,l}} e^{\Psi(\xi+c_3^{-1} \epsilon^{1/2}\tilde\zeta)-\Psi(\xi+c_3^{-1} \epsilon^{1/2}\tilde\eta')} \frac{\mu f(\mu,\frac{\xi+c_3^{-1}\epsilon^{1/2}\tilde\zeta}{\xi+c_3^{-1}\epsilon^{1/2}\tilde\eta'})}{(\xi+c_3^{-1}\epsilon^{1/2}\tilde\eta')(\tilde\zeta-\tilde\eta)}d\tilde\zeta. \end{equation*} Lastly, define the operator $\tilde\chi_l$ which projects $L^2(\tilde\Gamma_{\eta})$ onto $L^2(\tilde\Gamma_{\eta,l})$. \end{definition} It is clear that, applying the change of variables, the Fredholm determinant $\det(I+\mu J_{l}^{vert})_{L^2(\Gamma_{\eta,l}^{vert})}$ becomes $\det(I+\tilde\chi_l \mu\tilde J_l \tilde\chi_l)_{L^2(\tilde\Gamma_{\eta,l})}$. We now state a proposition which gives, with respect to these fixed contours $\tilde\Gamma_{\eta,l}$ and $\tilde\Gamma_{\zeta,l}$, the limit of the determinant in terms of the uniform limit of the kernel. Since all contours in question are finite, uniform convergence of the kernel suffices to show trace-class convergence of the operators and hence convergence of the determinant. Recall the definition of the operator $K_a^{\csc}$ given in Definition \ref{thm_definitions}. For the purposes of this proposition, modify the kernel so that the integration in $\zeta$ occurs now only over $\tilde\Gamma_{\zeta,l}$ and not all of $\tilde\Gamma_{\zeta}$. Call this modified operator $K_{a',l}^{\csc}$. \begin{proposition}\label{converges_to_kcsc_proposition} For all $\delta>0$ there exist $\epsilon_0>0$ and $l_0>0$ such that for all $\epsilon<\epsilon_0$, $l>l_0$, and $\tilde\mu$ in our fixed compact subset of $\mathcal{\tilde C}$, \begin{equation*} \left|\det(I+\tilde\chi_l \mu\tilde J_l \tilde\chi_l)_{L^2(\tilde\Gamma_{\eta,l})} - \det(I-\tilde\chi_l K_{a',l}^{\csc}\tilde\chi_l)_{L^2(\tilde\Gamma_{\eta,l})}\right|< \delta, \end{equation*} where $a'=a+\log2$. \end{proposition} \begin{proof} The proof of this proposition relies on showing the uniform convergence of the kernel of $\mu\tilde J_l$ to the kernel of $K_{a',l}^{\csc}$, which suffices because of the compact contour. Furthermore, since the $\zeta$ integration is itself over a compact set, it suffices to show uniform convergence of this integrand. The two lemmas stated below will imply such uniform convergence and hence complete this proof. First, however, recall that $\mu f(\mu,z) = z^{n_0} (\tilde\mu f(\tilde\mu,z) + O(\epsilon^{1/2}))$ where $n_0$ is defined in equation (\ref{n_0_eqn}). We are interested in having $z=\zeta/\eta'$, which, under the change of variables can be written as \begin{equation*} z=1-\epsilon^{1/2}\tilde z +O(\epsilon), \qquad \tilde z = c_3^{-1}(\tilde\zeta-\tilde\eta')=2^{4/3}(\tilde\zeta-\tilde\eta').
\end{equation*} Therefore, since $n_0= -\frac{1}{2}\log(\epsilon^{-1/2})\epsilon^{-1/2}+ O(1)$ it follows that \begin{equation*} z^{n_0} = \exp\{2^{1/3}(\tilde\zeta-\tilde\eta')\log(\epsilon^{-1/2})\}(1+o(1)). \end{equation*} This expansion still contains an $\epsilon$ and hence the argument blows up as $\epsilon$ goes to zero. However, this exactly counteracts the $\log(\epsilon^{-1/2})$ term in the definition of $m$ which goes into the argument of the exponential of the integrand. We make use of this cancellation in the proof of this first lemma and hence include the $n_0\log(\zeta/\eta')$ term into the exponential argument. The following two lemmas are proved in Section \ref{JK_proofs_sec}. \begin{lemma}\label{compact_eta_zeta_taylor_lemma} For all $l>0$ and all $\delta>0$ there exists $\epsilon_0>0$ such that for all $\tilde\eta'\in \tilde\Gamma_{\eta,l}$ and $\tilde\zeta\in \tilde\Gamma_{\zeta,l}$ we have for $0<\epsilon\le\epsilon_0$, \begin{equation*} \left|\left(\Psi(\tilde\zeta)-\Psi(\tilde\eta') + n_0\log(\zeta/\eta')\right) - \left(-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3) + 2^{1/3}a'(\tilde\zeta-\tilde\eta')\right)\right|<\delta, \end{equation*} where $a'=a+\log2$. Similarly we have \begin{equation*} \left|e^{\Psi(\tilde\zeta)-\Psi(\tilde\eta') + n_0\log(\zeta/\eta')} - e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3) + 2^{1/3}a'(\tilde\zeta-\tilde\eta')}\right|<\delta. \end{equation*} \end{lemma} \begin{lemma}\label{muf_compact_sets_csc_limit_lemma} For all $l>0$ and all $\delta>0$ there exists $\epsilon_0>0$ such that for all $\tilde\eta'\in \tilde\Gamma_{\eta,l}$ and $\tilde\zeta\in \tilde\Gamma_{\zeta,l}$ we have for $0<\epsilon\le\epsilon_0$, \begin{equation*} \left|\epsilon^{1/2}\tilde\mu f\left(\tilde\mu, \frac{\xi+c_3^{-1}\epsilon^{1/2}\tilde\zeta}{\xi+c_3^{-1}\epsilon^{1/2}\tilde\eta'}\right) - \int_{-\infty}^{\infty} \frac{\tilde\mu e^{-2^{1/3}t(\tilde\zeta-\tilde\eta')}}{e^{t}-\tilde\mu}dt\right|<\delta. \end{equation*} \end{lemma} As explained in Definition \ref{thm_definitions}, the final integral converges since our choice of $\tilde\zeta$ and $\tilde\eta'$ contours ensures that $\ensuremath{\mathrm{Re}}(-2^{1/3}(\tilde\zeta-\tilde\eta'))=1/2$. Note that the above integral also has a representation (\ref{cscid}) in terms of the $\csc$ function. This gives the analytic extension of the integral to all $\tilde z\notin 2\ensuremath{\mathbb{Z}}$ where $\tilde{z}=2^{4/3}(\tilde\zeta-\tilde\eta')$. Finally, the sign change in front of the kernel of the Fredholm determinant comes from the $1/\eta'$ term which, under the change of variables, converges uniformly to $-1$. \end{proof} Having successfully taken the $\epsilon$ to zero limit, all that now remains is to paste the rest of the contours $\tilde\Gamma_{\eta}$ and $\tilde\Gamma_{\zeta}$ onto their abbreviated versions $\tilde\Gamma_{\eta,l}$ and $\tilde\Gamma_{\zeta,l}$. To justify this we must show that the inclusion of the rest of these contours does not significantly affect the Fredholm determinant. Just as in the proof of Proposition \ref{det_1_1_prop} we have three operators which we must re-include at provably small cost.
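(In the notation of that proof, these three operators are the analogues of $J_l^{vert}\chi_{l}^{circ}$, $J_l^{circ}\chi_{l}^{vert}$ and $J_l^{circ}\chi_{l}^{circ}$: they are the pieces of $K_{a'}^{\csc}$ in which at least one of the variables $\tilde\eta$, $\tilde\zeta$ ranges over the infinite tails $\tilde\Gamma_{\eta}\setminus \tilde\Gamma_{\eta,l}$ or $\tilde\Gamma_{\zeta}\setminus \tilde\Gamma_{\zeta,l}$.)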
Each of these operators, however, can be factored into the product of Hilbert-Schmidt operators and then an analysis similar to that done following Lemma \ref{mu_f_polynomial_bound_lemma} (see in particular pages 27-28 of \cite{TW3}) shows that because $\ensuremath{\mathrm{Re}}(\tilde\zeta^3)$ grows like $|\tilde\zeta|^2$ along $\tilde\Gamma_{\zeta}$ (and likewise but opposite for $\tilde\eta'$) there is sufficiently strong exponential decay to show that the trace norms of these three additional kernels can be made arbitrarily small by taking $l$ large enough. This last estimate completes the proof of Proposition \ref{uniform_limit_det_J_to_Kcsc_proposition}. \subsection{Technical lemmas, propositions and proofs}\label{props_and_lemmas_sec} \subsubsection{Properties of Fredholm determinants}\label{pre_lem_ineq_sec} Before beginning the proofs of the propositions and lemmas, we give the definitions and some important properties of Fredholm determinants, trace class operators and Hilbert-Schmidt operators. For a more complete treatment of this theory see, for example, \cite{BS:book}. Consider a (separable) Hilbert space $\Hi$ with bounded linear operators $\mathcal{L}(\Hi)$. If $A\in \mathcal{L}(\Hi)$, let $|A|=\sqrt{A^*A}$ be the unique positive square-root. We say that $A\in\mathcal{B}_1(\Hi)$, the trace class operators, if the trace norm $||A||_1<\infty$. Recall that this norm is defined relative to an orthonormal basis of $\Hi$ as $||A||_1:= \sum_{n=1}^{\infty} (e_n,|A|e_n)$. This norm is well defined as it does not depend on the choice of orthonormal basis $\{e_n\}_{n\geq 1}$. For $A\in\mathcal{B}_1(\Hi)$, one can then define the trace $\tr A :=\sum_{n=1}^{\infty} (e_n,A e_n)$. We say that $A\in \mathcal{B}_{2}(\Hi)$, the Hilbert-Schmidt operators, if the Hilbert-Schmidt norm $||A||_2 := \sqrt{\tr(|A|^2)}<\infty$. \begin{lemma}[Pg. 40 of \cite{BOO}, from Theorem 2.20 from \cite{BS:book}]\label{trace_convergence_lemma} \mbox{} The following conditions are equivalent: \begin{enumerate} \item $||K_n-K||_1\to 0$; \item $\tr K_n\to \tr K$ and $K_n\to K$ in the weak operator topology. \end{enumerate} \end{lemma} For $A\in\mathcal{B}_1(\Hi)$ we can also define a Fredholm determinant $\det(I+A)_{\Hi}$. Consider $u_i\in \Hi$ and define the tensor product $u_1\otimes \cdots \otimes u_n$ by its action on $v_1,\ldots, v_n \in\Hi$ as \begin{equation*} u_1\otimes \cdots \otimes u_n (v_1,\ldots, v_n) = \prod_{i=1}^{n} (u_i,v_i). \end{equation*} Then $\bigotimes_{i=1}^{n}\Hi$ is the span of all such tensor products. There is a vector subspace of this space which is known as the alternating product: \begin{equation*} \bigwedge^n(\Hi) = \{h\in\bigotimes_{i=1}^{n} \Hi : \forall \sigma\in S_n, \sigma h =\mathrm{sgn}(\sigma)h\}, \end{equation*} where $\sigma u_1\otimes \cdots \otimes u_n = u_{\sigma(1)}\otimes \cdots \otimes u_{\sigma(n)}$. If $\{e_i\}_{i\geq 1}$ is a basis for $\Hi$ then $e_{i_1}\wedge \cdots \wedge e_{i_n}$ for $i_1<i_2<\cdots<i_n$ form a basis of $\bigwedge^n(\Hi)$. Given an operator $A\in \mathcal{L}(\Hi)$, define \begin{equation*} \Gamma^{(n)}(A)(u_1\otimes \cdots \otimes u_n) := Au_1\otimes \cdots \otimes Au_n. \end{equation*} Note that any element in $\bigwedge^n(\Hi)$ can be written as an antisymmetrization of tensor products. Then it follows that $\Gamma^{(n)}(A)$ restricts to an operator from $\bigwedge^n(\Hi)$ into $\bigwedge^n(\Hi)$. If $A\in\mathcal{B}_1(\Hi)$, then $\tr \Gamma^{(n)}(A)\leq ||A||_1^n/n!$, and we can define \begin{equation*} \det(I+A)= 1 + \sum_{k=1}^{\infty} \tr(\Gamma^{(k)}(A)).
\end{equation*} As one expects, $\det(I+A)=\prod_j (1+\lambda_j)$ where $\lambda_j$ are the eigenvalues of $A$ counted with algebraic multiplicity (Thm XIII.106, \cite{RS:book}). \begin{lemma}[Ch. 3 \cite{BS:book}]\label{fredholm_continuity_lemma} $A\mapsto \det(I+A)$ is a continuous function on $\mathcal{B}_1(\Hi)$. Explicitly, \begin{equation*} |\det(I+A)-\det(I+B)|\leq ||A-B||_{1}\exp(||A||_1+||B||_1+1). \end{equation*} If $A\in \mathcal{B}_1(\Hi)$ and $A=BC$ with $B,C\in \mathcal{B}_2(\Hi)$ then \begin{equation*} ||A||_1\leq ||B||_2||C||_2. \end{equation*} For $A\in \mathcal{B}_1(\Hi)$, \begin{equation*} |\det(I+A)|\leq e^{||A||_1}. \end{equation*} If $A\in \mathcal{B}_2(\Hi)$ with kernel $A(x,y)$ then \begin{equation*} ||A||_2 = \left(\int |A(x,y)|^2 dx dy\right)^{1/2}. \end{equation*} \end{lemma} \begin{lemma}\label{projection_pre_lemma} If $K$ is an operator acting on a contour $\Sigma$ and $\chi$ is a projection operator onto a subinterval of $\Sigma$ then \begin{equation*} \det(I+K\chi)_{L^2(\Sigma,\mu)}=\det(I+\chi K\chi)_{L^2(\Sigma,\mu)}. \end{equation*} \end{lemma} In performing steepest descent analysis on Fredholm determinants, the following lemma allows one to deform contours to descent curves. \begin{lemma}[Proposition 1 of \cite{TW3}]\label{TWprop1} Suppose $s\to \Gamma_s$ is a deformation of closed curves and a kernel $L(\eta,\eta')$ is analytic in a neighborhood of $\Gamma_s\times \Gamma_s\subset \ensuremath{\mathbb{C}}^2$ for each $s$. Then the Fredholm determinant of $L$ acting on $\Gamma_s$ is independent of $s$. \end{lemma} The following lemma, provided to us by Percy Deift, with proof provided in Appendix \ref{PD_appendix}, allows us to use Cauchy's theorem when manipulating integrals which involve Fredholm determinants in the integrand. \begin{lemma}\label{Analytic_fredholm_det_lemma} Suppose $A(z)$ is an analytic map from a region $D\subset \ensuremath{\mathbb{C}}$ into the trace-class operators on a (separable) Hilbert space $\Hi$. Then $z\mapsto \det(I+A(z))$ is analytic on $D$. \end{lemma} \subsubsection{Proofs from Section \ref{proof_of_WASEP_thm_sec}}\label{proofs_sec} We now turn to the proofs of the previously stated lemmas and propositions. \begin{proof}[Proof of Lemma \ref{deform_mu_to_C}] The lemma follows from Cauchy's theorem once we show that for fixed $\epsilon$, the integrand $\mu^{-1} \prod_{k=0}^{\infty} (1-\mu\tau^k)\det(I+\mu J_{\mu})$ is analytic in $\mu$ between $S_{\tau^+}$ and $\mathcal{C}_\epsilon$ (note that we now include a subscript $\mu$ on $J$ to emphasize the dependence of the kernel on $\mu$). It is clear that the infinite product and the $\mu^{-1}$ term are analytic in this region. In order to show that $\det(I+\mu J_{\mu})$ is analytic in the desired region we may appeal to Lemma \ref{Analytic_fredholm_det_lemma}. Therefore it suffices to show that the map $\mu\mapsto J_{\mu}$ is an analytic map from this region of $\mu$ between $S_{\tau^+}$ and $\mathcal{C}_\epsilon$ into the trace class operators (this suffices since the multiplication by $\mu$ is clearly analytic). The rest of this proof is devoted to the proof of this fact. In order to prove this, we need to show that $J_{\mu}^h=\frac{J_{\mu+h}-J_{\mu}}{h}$ converges to some trace class operator as $h\in \ensuremath{\mathbb{C}}$ goes to zero. By the criteria of Lemma \ref{trace_convergence_lemma} it suffices to prove that the kernel associated to $J_{\mu}^h$ converges uniformly in $\eta,\eta'\in\Gamma_{\eta}$ to the kernel of $J'_{\mu}$.
This will prove both the convergence of traces and the weak convergence of operators necessary to prove trace norm convergence, and complete this proof. The operator $J'_{\mu}$ acts on $\Gamma_{\eta}$, the circle centered at zero and of radius $1-\tfrac{1}{2}\epsilon^{1/2}$, as \begin{equation*} J'_{\mu}(\eta,\eta')=\int_{\Gamma_{\zeta}} \exp\{\Psi(\zeta)-\Psi(\eta')\}\frac{f'(\mu,\zeta/\eta')}{\eta'(\zeta-\eta)}d\zeta \end{equation*} where \begin{equation*} f'(\mu,z)=\sum_{k=-\infty}^{\infty} \frac{\tau^{2k}}{(1-\tau^k\mu)^2}z^k. \end{equation*} Our desired convergence will follow if we can show that \begin{equation*} \left|h^{-1}\left(f(\mu+h,\zeta/\eta')-f(\mu,\zeta/\eta')\right) - f'(\mu,\zeta/\eta')\right| \end{equation*} tends to zero uniformly in $\zeta\in \Gamma_{\zeta}$ and $\eta'\in \Gamma_{\eta}$ as $|h|$ tends to zero. Expanding this out and taking the absolute value inside of the infinite sum we have \begin{equation}\label{sum_eqn} \sum_{k=-\infty}^{\infty} \left| h^{-1} \left(\frac{\tau^k}{1-\tau^k(\mu+h)}-\frac{\tau^k}{1-\tau^k\mu}\right) - \frac{\tau^{2k}}{(1-\tau^k\mu)^2}\right| z^k \end{equation} where $z=|\zeta/\eta'|\in (1,\tau^{-1})$. For $\epsilon$ and $\mu$ fixed there is a $k^*$ such that for $k\ge k^*$, \begin{equation*} \left|\frac{\tau^k h}{1-\tau^k \mu}\right|<1. \end{equation*} Furthermore, by choosing $|h|$ small enough we can make $k^*$ negative. As a result we also have that for small enough $|h|$, for all $k<k^*$, \begin{equation*} \left|\frac{h}{\tau^{-1}-\mu}\right|<1. \end{equation*} Splitting our sum into $k<k^*$ and $k\ge k^*$, and using the fact that $1/(1-w)=1+w+O(w^2)$ for $|w|<1$ we can Taylor expand as follows: For $k\geq k^*$ \begin{equation*} \frac{\tau^k}{1-\tau^k(\mu+h)} = \frac{\tau^k}{1-\tau^k\mu}\frac{1}{1-\frac{\tau^k h}{1-\tau^k \mu}} = \frac{\tau^k \left(1+\frac{\tau^k h}{1-\tau^k \mu} + \left(\frac{\tau^k}{1-\tau^k\mu}\right)^2O(h^2)\right)}{1-\tau^k \mu}. \end{equation*} Similarly, expanding the second term inside the absolute value in equation (\ref{sum_eqn}) and canceling with the third term we are left with \begin{equation*} \sum_{k=k^*}^{\infty} \frac{\tau^{3k}}{(1-\tau^k \mu)^3} O(h) z^k. \end{equation*} The sum converges since $\tau^3 z <1$ and thus behaves like $O(h)$ as desired. Likewise for $k<k^*$, by multiplying the numerator and denominator by $\tau^{-k}$, the same type of expansion works and we find that the error is given by the same summand as above but over $k$ from $-\infty$ to $k^*-1$. Again, however, the sum converges since the numerator and denominator cancel each other for $k$ large negative, and $z^k$ is a convergent series for $k$ going to negative infinity. Thus this error series also behaves like $O(h)$ as desired. This shows the needed uniform convergence and completes the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{mu_inequalities_lemma}] We prove this with the scaling parameter $r=1$ as the general case follows in a similar way. Consider \begin{equation*} \log(g_{\epsilon}(\tilde\mu))=\sum_{k=0}^{\infty} \log(1-\epsilon^{1/2}\tilde\mu \tau^k). \end{equation*} We have $\sum_{k=0}^{\infty} \epsilon^{1/2} \tau^k = \frac12(1 + \epsilon^{1/2}c_\epsilon )$ where $c_\epsilon=O(1)$.
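Indeed, summing the geometric series gives \begin{equation*} \sum_{k=0}^{\infty} \epsilon^{1/2}\tau^k = \frac{\epsilon^{1/2}}{1-\tau} = \epsilon^{1/2}\cdot\frac{1+\epsilon^{1/2}}{2\epsilon^{1/2}} = \frac{1+\epsilon^{1/2}}{2}, \end{equation*} so here one may simply take $c_\epsilon=1$.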
So for $\tilde\mu\in R_1$ we have \begin{eqnarray*} \nonumber|\log(g_{\epsilon}(\tilde\mu))+\frac{\tilde\mu}2(1+ \epsilon^{1/2}c_\epsilon )| &=& \left|\sum_{k=0}^{\infty}\left[\log(1-\epsilon^{1/2}\tilde\mu \tau^k)+ \epsilon^{1/2}\tilde\mu \tau^k\right]\right|\\ &\leq & \sum_{k=0}^{\infty} |\log(1-\epsilon^{1/2}\tilde\mu \tau^k)+ \epsilon^{1/2}\tilde\mu \tau^k|\\ \nonumber &\leq & \sum_{k=0}^{\infty} |\epsilon^{1/2}\tilde\mu \tau^k|^2 = \frac{\epsilon|\tilde\mu|^2}{1-\tau^2}= \frac{\epsilon^{1/2}(1+\epsilon^{1/2})^2 |\tilde\mu|^2}{4}\\ &\leq& c\epsilon^{1/2}|\tilde\mu|^2 \nonumber\leq c' \epsilon^{1/2}. \end{eqnarray*} The second inequality uses the fact that for $|z|\leq 1/2$, $|\log(1-z)+z|\leq |z|^2$. Since $\tilde\mu\in R_1$ it follows that $|z|=\epsilon^{1/2}|\tilde\mu|$ is bounded by $1/2$ for small enough $\epsilon$. The constants here are finite and do not depend on any of the parameters. This proves equation (\ref{g_e_ineq1}) and shows that the convergence is uniform in $\tilde\mu$ on $R_1$. We now turn to the second inequality, equation (\ref{g_e_ineq2}). Consider the region, \begin{equation*} D=\{z:\arg(z)\in [-{\scriptstyle\frac{\pi}{10}},{\scriptstyle\frac{\pi}{10}}]\}\cap \{z:\ensuremath{\mathrm{Im}}(z)\in (-{\scriptstyle\frac{1}{10}},{\scriptstyle\frac{1}{10}})\}\cap \{z:\ensuremath{\mathrm{Re}}(z)\leq 1\}. \end{equation*} For all $z\in D$, \begin{equation}\label{ineq2} \ensuremath{\mathrm{Re}}(\log(1-z))\leq \ensuremath{\mathrm{Re}}(-z-z^2/2). \end{equation} For $\tilde\mu\in R_2$, it is clear that $\epsilon^{1/2}\tilde\mu\in D$. Therefore, using (\ref{ineq2}), \begin{eqnarray*} \nonumber\ensuremath{\mathrm{Re}}(\log(g_{\epsilon}(\tilde\mu))) &=& \sum_{k=0}^{\infty}\ensuremath{\mathrm{Re}}[\log(1-\epsilon^{1/2}\tilde\mu \tau^k)]\\ &\leq & \sum_{k=0}^{\infty} \left(-\ensuremath{\mathrm{Re}}[\epsilon^{1/2}\tilde\mu \tau^k]-\ensuremath{\mathrm{Re}}[(\epsilon^{1/2}\tilde\mu \tau^k)^2/2]\right)\\ \nonumber&\leq & -\ensuremath{\mathrm{Re}}(\tilde\mu/2)-\frac{1}{8}\epsilon^{1/2}\ensuremath{\mathrm{Re}}(\tilde\mu^2). \end{eqnarray*} This proves equation (\ref{g_e_ineq2}). Note that from the definition of $R_2$ we can calculate the argument of $\tilde\mu$ and we see that $|\arg \tilde\mu|\leq \arctan(2\tan(\tfrac{\pi}{10}))<\tfrac{\pi}{4}$ and $|\tilde\mu|\geq r\geq 1$. Therefore $\ensuremath{\mathrm{Re}}(\tilde\mu^2)$ is positive and bounded away from zero for all $\tilde\mu\in R_2$. \end{proof} \begin{proof}[Proof of Proposition \ref{originally_cut_mu_lemma}] This proof proceeds in a similar manner to the proof of Proposition \ref{reinclude_mu_lemma}; however, since in this case we have to deal with $\epsilon$ going to zero and changing contours, it is, by necessity, a little more complicated. For this reason we encourage readers to first study the simpler proof of Proposition \ref{reinclude_mu_lemma}. In that proof we factor our operator into two pieces. Then, using the decay of the exponential term, and the control over the size of the $\csc$ term, we are able to show that the Hilbert-Schmidt norm of the first factor is finite and that for the second factor it is bounded by $|\tilde\mu|^{\alpha}$ for $\alpha<1$ (we show it for $\alpha=1/2$ though any $\alpha>0$ works, just with the constant getting large as $\alpha\searrow 0$). This gives an estimate on the trace norm of the operator, which, by exponentiating, gives an upper bound $e^{c|\tilde\mu|^{\alpha}}$ on the size of the determinant. This upper bound is beaten by the exponential decay in $\tilde\mu$ of the prefactor term $g_{\epsilon}$.
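To see how these two bounds will combine (a sketch only; the constants are not optimized), note that along the straight portions of $\mathcal{\tilde C}$ we have $\tilde\mu=x\pm i$ with $x>0$, so $\ensuremath{\mathrm{Re}}(\tilde\mu)\geq |\tilde\mu|/\sqrt{2}$ once $x\geq 1$, and therefore \begin{equation*} |e^{-\tilde\mu/2}|e^{c|\tilde\mu|^{\alpha}} = e^{-\ensuremath{\mathrm{Re}}(\tilde\mu)/2+c|\tilde\mu|^{\alpha}} \leq e^{-|\tilde\mu|/(2\sqrt{2})+c|\tilde\mu|^{\alpha}} \leq c'e^{-c''|\tilde\mu|} \end{equation*} for suitable $c',c''>0$, since $\alpha<1$. This is precisely the type of exponential decay claimed in Propositions \ref{originally_cut_mu_lemma} and \ref{reinclude_mu_lemma}.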
For the proof of Proposition \ref{originally_cut_mu_lemma}, we do the same sort of factorization of our operator into $AB$, where here, \begin{equation*} A(\zeta,\eta)=\frac{e^{c[\Psi(\zeta)+n_0\log(\zeta)]}}{\zeta-\eta} \end{equation*} with $n_0$ as explained before the statement of Lemma \ref{kill_gamma_2_lemma}, and $0<c<1$ fixed, and \begin{equation*} B(\eta,\zeta) = e^{-c[\Psi(\zeta)+n_0\log(\zeta)]}e^{\Psi(\zeta)-\Psi(\eta)}\mu f(\mu,\zeta/\eta)\frac{1}{\eta}. \end{equation*} We must be careful in keeping track of the contours on which these operators act. As we have seen we may assume that the $\eta$ variables are on $\Gamma_{\eta,l}$ and the $\zeta$ variables on $\Gamma_{\zeta,l}$ for any fixed choice of $l\geq 0$. Now using the estimates of Lemmas \ref{kill_gamma_2_lemma} and \ref{compact_eta_zeta_taylor_lemma}, we compute that $||A||_2<\infty$ (uniformly in $\epsilon<\epsilon_0$ and, trivially, also in $\tilde\mu$). Here we calculate the Hilbert-Schmidt norm using Lemma \ref{fredholm_continuity_lemma}. Intuitively this norm is uniformly bounded as $\epsilon$ goes to zero, because, while the denominator blows up as badly as $\epsilon^{-1/2}$, the numerator is roughly supported only on a region of measure $\epsilon^{1/2}$ (owing to the exponential decay of the exponential when $\zeta$ differs from $\xi$ by more than order $\epsilon^{1/2}$). We wish to control $||B||_2$ now. Using the discussion before Lemma \ref{kill_gamma_2_lemma} we may rewrite $B$ as \begin{equation*} B(\eta,\zeta) = e^{-c[\Psi(\zeta)+n_0\log(\zeta)]}e^{(\Psi(\zeta)+n_0\log(\zeta))-(\Psi(\eta)-n_0\log(\eta))}\tilde\mu f(\tilde\mu,\zeta/\eta)\frac{1}{\eta} \end{equation*} Lemmas \ref{kill_gamma_2_lemma} and \ref{compact_eta_zeta_taylor_lemma} apply and tell us that the exponential terms decay at least as fast as $\exp\{-\epsilon^{-1/2}c'|\zeta-\eta|\}$. So the final ingredient in proving our proposition is control of $|\tilde\mu f(\tilde\mu, z)|$ for $z=\zeta/\eta'$. We break it up into two regions of $\eta',\zeta$: The first (1) when $|\eta'-\zeta|\leq c$ for a very small constant $c$ and the second (2) when $|\eta'-\zeta|> c$. We will compute $||B||_2$ as the square root of \begin{equation}\label{case1case2} \int_{\eta,\zeta\in \textrm{Case (1)}} |B(\eta,\zeta)|^2 d\eta d\zeta + \int_{\eta,\zeta\in \textrm{Case (2)}} |B(\eta,\zeta)|^2 d\eta d\zeta. \end{equation} We will show that the first term can be bounded by $C|\tilde\mu|^{2\alpha}$ for any $\alpha<1$, while the second term can be bounded by a large constant. As a result $||B||_2\leq C|\tilde\mu|^{\alpha}$ which is exactly as desired since then $||AB||_1\leq e^{c|\tilde\mu|^{\alpha}}$. Consider case (1) where $|\eta'-\zeta|\leq c$ for a constant $c$ which is positive but small (depending on $T$). One may easily check from the defintion of the contours that $\epsilon^{-1/2}(|\zeta/\eta|-1)$ is contained in a compact subset of $(0,2)$. In fact, $\zeta/\eta'$ almost exactly lies along the curve $|z|=1+\epsilon^{1/2}$ and in particular (by taking $\epsilon_0$ and $c$ small enough) we can assume that $\zeta/\eta$ never leaves the region bounded by $|z|=1+(1\pm r)\epsilon^{1/2}$ for any fixed $r<1$. Let us call this region $R_{\epsilon,r}$. Then we have \begin{lemma}\label{final_estimate} Fix $\epsilon_0$ and $r\in (0,1)$. 
Then for all $\epsilon<\epsilon_0$, $\tilde\mu\in\mathcal{\tilde C}_{\epsilon}$ and $z\in R_{\epsilon,r}$, \begin{equation*} |\tilde\mu f(\tilde\mu,z)| \leq c|\tilde\mu|^{\alpha}/|1-z| \end{equation*} for some $\alpha\in (0,1)$, with $c=c(\alpha)$ independent of $z$, $\tilde\mu$ and $\epsilon$. \end{lemma} \begin{remark} By changing the value of $\alpha$ in the definition of $\kappa(\theta)$ (which then goes into the definition of $\Gamma_{\eta,l}$ and $\Gamma_{\zeta,l}$) and also focusing the region $R_{\epsilon,r}$ around $|z|=1+2\alpha \epsilon^{1/2}$, we can take $\alpha$ arbitrarily small in the above lemma at a cost of increasing the constant $c=c(\alpha)$ (the same also applies for Proposition \ref{reinclude_mu_lemma}). The $|\tilde\mu|^{\alpha}$ comes from the fact that $(1+2\alpha \epsilon^{1/2})^{\tfrac{1}{2}\epsilon^{-1/2}\log|\tilde\mu|} \approx |\tilde\mu|^{\alpha}$. Another remark is that the proof below can be used to provide an alternative proof of Lemma \ref{muf_compact_sets_csc_limit_lemma} by studying the convergence of the Riemann sum directly rather than by using functional equation properties of $f$ and the analytic continuations. \end{remark} We complete the ongoing proof of Proposition \ref{originally_cut_mu_lemma} and then return to the proof of the above lemma. Case (1) is now done since we can estimate the first integral in equation (\ref{case1case2}) using Lemma \ref{final_estimate} and the exponential decay of the exponential term outside of $|\eta'-\zeta|=O(\epsilon^{1/2})$. Therefore, just as with the $A$ operator, the $\epsilon^{-1/2}$ blowup of $|\tilde\mu f(\tilde\mu,\zeta/\eta')|$ is countered by the decay of the exponential and we are just left with a large constant time $|\tilde\mu|^{\alpha}$. Turing to case (2) we need to show that the second integral in equation (\ref{case1case2}) is bounded uniformly in $\epsilon$ and $\tilde\mu\in \tilde\ensuremath{\mathbb{C}}_{\epsilon}$. This case corresponds to $|\eta'-\zeta|>c$ for some fixed but small constant $c$. Since $\epsilon^{-1/2}(|\zeta/\eta|-1)$ stays bounded in a compact set, using an argument almost identical to the proof of Lemma \ref{mu_f_polynomial_bound_lemma} we can show that $|\tilde\mu f(\tilde\mu,\zeta/\eta)|$ can be bounded by $C|\tilde\mu|^{C'}$ for positive yet finite constants $C$ and $C'$. The important point here is that there is only a finite power of $|\tilde\mu|$. Since $|\tilde\mu|<\epsilon^{-1/2}$ this means that this term can blow up at most polynomially in $\epsilon^{-1/2}$. On the other hand we know that the exponential term decays exponentially fast like $e^{-\epsilon^{-1/2}c}$ and hence the second integral in equation (\ref{case1case2}) goes to zero. We now return to the proof of Lemma \ref{final_estimate} which will complete the proof of Proposition~\ref{originally_cut_mu_lemma}. \begin{proof}[Proof of Lemma \ref{final_estimate}] We will prove the desired estimate for $z:|z|=1+\epsilon^{1/2}$. The proof for general $z\in R_{\epsilon,r}$ follows similarly. Recall that \begin{equation*} \tilde\mu f(\tilde\mu,z) = \sum_{k=-\infty}^{\infty} \frac{\tilde\mu \tau^k}{1-\tilde\mu \tau^k} z^k. \end{equation*} Since $\tilde\mu$ has imaginary part 1, the denominator is smallest when $\tau^{k}=1/|\tilde\mu|$, corresponding to \begin{equation*} k=k^*=\lfloor \tfrac{1}{2}\epsilon^{-1/2} \log |\mu|\rfloor. 
\end{equation*} We start, therefore, by centering our doubly infinite sum at around this value, \begin{equation*} \tilde\mu f(\tilde\mu,z) = \sum_{k=-\infty}^{\infty} \frac{\tilde\mu \tau^{k^*}\tau^k}{1-\tilde\mu\tau^{k^*} \tau^k} z^{k^*}z^k. \end{equation*} By the definition of $k^*$, \begin{equation*} |z|^{k^*}= |\tilde\mu|^{1/2}(1+O(\epsilon^{1/2})) \end{equation*} thus we find that \begin{equation*} |\tilde\mu f(\tilde\mu,z)| = |\tilde\mu|^{1/2}\left|\sum_{k=-\infty}^{\infty} \frac{\omega\tau^k}{1-\omega \tau^k} z^k\right| \end{equation*} where \begin{equation*} \omega =\tilde\mu\tau^{k^*} \end{equation*} and is roughly on the unit circle except for a small dimple near 1. To be more precise, due to the rounding in the definition of $k^*$ the $\omega$ is not exactly on the unit circle, however we do have the following two properties: \begin{equation*} |1-\omega|>\epsilon^{1/2}, \qquad |\omega|-1=O(\epsilon^{1/2}). \end{equation*} The section of $\mathcal{\tilde C}_{\epsilon}$ in which $\tilde\mu=\epsilon^{-1/2}-1+iy$ for $y\in (-1,1)$ corresponds to $\omega$ lying along a small dimple around $1$ (and still respects $|1-\omega|>\epsilon^{1/2}$). We call the curve on which $\omega$ lies $\Omega$. We can bring the $|\tilde\mu|^{1/2}$ factor to the left and split the summation into three parts, so that $|\tilde\mu|^{-1/2}|\tilde\mu f(\tilde\mu,z)|$ equals \begin{equation}\label{three_term_eqn} \left|\sum_{k=-\infty}^{-\epsilon^{-1/2}} \frac{\omega\tau^k}{1-\omega \tau^k} z^k+ \sum_{k=-\epsilon^{-1/2}}^{\epsilon^{-1/2}} \frac{\omega\tau^k}{1-\omega \tau^k} z^k+ \sum_{k=\epsilon^{-1/2}}^{\infty} \frac{\omega\tau^k}{1-\omega \tau^k} z^k\right|. \end{equation} We will control each of these term separately. The first and the third are easiest. Consider \begin{equation*} \left|(z-1)\sum_{k=-\infty}^{-\epsilon^{-1/2}} \frac{\omega\tau^k}{1-\omega \tau^k} z^k\right|. \end{equation*} We wish to show this is bounded by a constant which is independent of $\tilde\mu$ and $\epsilon$. Summing by parts the argument of the absolute value can be written as \begin{equation}\label{eqn174} \frac{\omega\tau^{-\epsilon^{-1/2}+1}}{1-\omega\tau^{-\epsilon^{-1/2}+1}}z^{-\epsilon^{-1/2}+1}+(1-\tau)\sum_{k=-\infty}^{-\epsilon^{-1/2}}\frac{\omega\tau^k}{(1-\omega\tau^k)(1-\omega\tau^{k+1})}z^k. \end{equation} We have $\tau^{-\epsilon^{-1/2}+1} \approx e^2$ and $|z^{-\epsilon^{-1/2}+1}|\approx e^{-1}$ (where $e\sim 2.718$). The denominator of the first term is therefore bounded from zero. Thus the absolute value of this term is bounded by a constant. For the second term of (\ref{eqn174}) we can bring the absolute value inside of the summation to get \begin{equation*} (1-\tau)\sum_{k=-\infty}^{-\epsilon^{-1/2}}\left|\frac{\omega\tau^k}{(1-\omega\tau^k)(1-\omega\tau^{k+1})}\right||z|^k. \end{equation*} The first term in absolute values stays bounded above by a constant times the value at $k=-\epsilon^{-1/2}$. Therefore, replacing this by a constant, we can sum in $|z|$ and we get $\frac{|z|^{-\epsilon^{-1/2}}}{1-1/|z|}$. The numerator, as noted before, is like $e^{-1}$ but the denominator is like $\epsilon^{1/2}/2$. This is cancelled by the term $1-\tau=O(\epsilon^{1/2})$ in front. Thus the absolute value is bounded. The argument for the third term of equation (\ref{three_term_eqn}) works in the same way, except rather than multiplying by $|1-z|$ and showing the result is constant, we multiply by $|1-\tau z|$. 
This is, however, sufficient since $|1-\tau z|$ and $|1-z|$ are effectively the same for $z$ near 1 which is where our desired bound must be shown carefully. We now turn to the middle term in equation (\ref{three_term_eqn}) which is more difficult. We will show that \begin{equation*} \left|(1-z)\sum_{k=-\epsilon^{-1/2}}^{\epsilon^{-1/2}} \frac{\omega\tau^k}{1-\omega \tau^k} z^k\right|=O(\log|\tilde\mu|). \end{equation*} This is of smaller order than $|\tilde\mu|$ raised to any positive real power and thus finishes the proof. For the sake of simplicity we will first show this with $z=1+\epsilon^{1/2}$. The general argument for points $z$ of the same radius and non-zero angle is very similar as we will observe at the end of the proof. For the special choice of $z$, the prefactor $(1-z)=\epsilon^{1/2}$. The method of proof is to show that this sum is well approximated by a Riemann sum. This idea was mentioned in the formal proof of the $\epsilon$ goes to zero limit. In fact, the argument below can be used to make that formal observation rigorous, and thus provides an alternative method to the complex analytic approach we take in the proof of Lemma \ref{muf_compact_sets_csc_limit_lemma}. The sum we have is given by \begin{equation}\label{first_step_sum} \epsilon^{1/2} \sum_{k=-\epsilon^{-1/2}}^{\epsilon^{-1/2}}\frac{\omega\tau^k}{1-\omega \tau^k} z^k = \epsilon^{1/2} \sum_{k=-\epsilon^{-1/2}}^{\epsilon^{-1/2}}\frac{\omega(1-\epsilon^{1/2}+O(\epsilon))^k}{1-\omega (1-2\epsilon^{1/2}+O(\epsilon))^k} \end{equation} where we have used the fact that $\tau z = 1-\epsilon^{1/2} + O(\epsilon)$. Observe that if $k=t\epsilon^{-1/2}$ then this sum is close to a Riemann sum for \begin{equation}\label{integral_equation_sigma} \int_{-1}^{1}\frac{\omega e^{-t}}{1-\omega e^{-2t}} dt. \end{equation} We use this formal relationship to prove that the sum in equation (\ref{first_step_sum}) is $O(\log|\tilde\mu|)$. We do this in a few steps. The first step is to consider the difference between each term in our sum and the analogous term in a Riemann sum for the integral. After estimating the difference we show that this can be summed over $k$ and gives us a finite error. The second step is to estimate the error of this Riemann sum approximation to the actual integral. The final step is to note that \begin{equation*} \int_{-1}^{1} \frac{\omega e^{-t}}{1-\omega e^{-2t}}dt \sim |\log(1-\omega)|\sim \log|\tilde\mu| \end{equation*} for $\omega\in \Omega$ (in particular where $|1-\omega|>\epsilon^{1/2}$). Hence it is easy to check that it is smaller than any power of $|\tilde\mu|$. A single term in the Riemann sum for the integral looks like $ \epsilon^{1/2} \frac{\omega e^{-k\epsilon^{1/2}}}{1-\omega e^{-2k\epsilon^{1/2}}} $. Thus we are interested in estimating \begin{equation}\label{estimating_eqn} \epsilon^{1/2}\left|\frac{\omega(1-\epsilon^{1/2}+O(\epsilon))^k}{1-\omega (1-2\epsilon^{1/2}+O(\epsilon))^k} - \frac{\omega e^{-k\epsilon^{1/2}}}{1-\omega e^{-2k\epsilon^{1/2}}} \right|. \end{equation} We claim that there exists $C<\infty$, independent of $\epsilon$ and $k$ satisfying $k\epsilon^{1/2}\leq 1$, such that the previous line is bounded above by \begin{equation}\label{giac} \frac{Ck^2\epsilon^{3/2}}{(1-\omega+\omega 2k\epsilon^{1/2})}+\frac{Ck^3\epsilon^{2}}{(1-\omega+\omega 2k\epsilon^{1/2})^2}. \end{equation} To prove that (\ref{estimating_eqn}) $\le $(\ref{giac}) we expand the powers of $k$ and the exponentials. 
For the numerator and denominator of the first term inside of the absolute value in (\ref{estimating_eqn}) we have $\omega(1-\epsilon^{1/2}+O(\epsilon))^k= \omega-\omega k\epsilon^{1/2} + O(k^2\epsilon)$ and \begin{eqnarray*} \nonumber 1-\omega (1-2\epsilon^{1/2}+O(\epsilon))^k & = & 1-\omega+\omega 2k\epsilon^{1/2} -\omega 2k^2\epsilon+O(k\epsilon)+O(k^3\epsilon^{3/2})\\ & = & (1-\omega+\omega 2k\epsilon^{1/2})(1 - \frac{\omega 2k^2\epsilon +O(k\epsilon)+O(k^3\epsilon^{3/2})}{1-\omega+\omega 2k \epsilon^{1/2}}). \end{eqnarray*} Using $1/(1-z)=1+z+O(z^2)$ for $|z|<1$ we see that \begin{eqnarray*} && \frac{\omega(1-\epsilon^{1/2}+O(\epsilon))^k}{1-\omega (1-2\epsilon^{1/2}+O(\epsilon))^k}\\ \nonumber&=&\frac{\omega-\omega k\epsilon^{1/2} + O(k^2\epsilon)}{1-\omega+\omega 2k\epsilon^{1/2}}\left(1+\frac{\omega 2k^2\epsilon +O(k\epsilon)+O(k^3\epsilon^{3/2})}{1-\omega+\omega 2k \epsilon^{1/2}}\right)\\ \nonumber& =& \frac{\left(\omega-\omega k \epsilon^{1/2} + O(k^2\epsilon)\right)\left(1-\omega+\omega 2k \epsilon^{1/2} + \omega 2k^2\epsilon +O(k\epsilon)+O(k^3\epsilon^{3/2})\right)}{(1-\omega+\omega 2k\epsilon^{1/2})^2} \end{eqnarray*} Likewise, the second term from equation (\ref{estimating_eqn}) can be similarly estimated and shown to be \begin{equation*} \frac{\omega e^{-k\epsilon^{1/2}}}{1-\omega e^{-2k\epsilon^{1/2}}}= \frac{\left(\omega-\omega k\epsilon^{1/2} + O(k^2\epsilon)\right)\left(1-\omega+\omega 2k \epsilon^{1/2} + \omega 2k^2\epsilon +O(k^3\epsilon^{3/2})\right)}{(1-\omega+\omega 2k\epsilon^{1/2})^2}. \end{equation*} Taking the difference of these two terms, and noting the cancellation of a number of the terms in the numerator, gives (\ref{giac}). To see that the error in (\ref{giac}) is bounded after the summation over $k$ in the range $\{-\epsilon^{-1/2},\ldots, \epsilon^{-1/2}\}$, note that this gives \begin{eqnarray*} \epsilon^{1/2}\sum_{-\epsilon^{-1/2}}^{\epsilon^{1/2}}\frac{(2k\epsilon^{1/2})^2}{1-\omega+\omega (2k\epsilon^{1/2})}+\frac{(2k\epsilon^{1/2})^3}{(1-\omega+\omega (2k\epsilon^{1/2}))^2}\\ \nonumber\sim\int_{-1}^{1}\frac{(2t)^2}{1-\omega+\omega 2t} +\frac{(2t)^3}{(1-\omega+\omega 2t)^2} dt . \end{eqnarray*} The Riemann sums and integrals are easily shown to be convergent for our $\omega$ which lies on $\Omega$, which is roughly the unit circle, and avoids the point 1 by distance $\epsilon^{1/2}$. Having completed this first step, we now must show that the Riemann sum for the integral in equation (\ref{integral_equation_sigma}) converges to the integral. This uses the following estimate,\begin{equation}\label{riemann_approx_max} \sum_{k=-\epsilon^{-1/2}}^{\epsilon^{-1/2}} \epsilon^{1/2} \max_{(k-1/2)\epsilon^{1/2}\leq t\leq (k+1/2)\epsilon^{1/2}} \left| \frac{\omega e^{-k\epsilon^{1/2}}}{1-\omega e^{-2k\epsilon^{1/2}}} - \frac{\omega e^{-t}}{1-\omega e^{-2t}}\right|\le C \end{equation} To show this, observe that for $t\in \epsilon^{1/2}[k-1/2,k+1/2]$ we can expand the second fraction as \begin{equation}\label{sec_frac} \frac{\omega e^{-k\epsilon^{1/2}}(1+O(\epsilon^{1/2}))}{1-\omega e^{-2k\epsilon^{1/2}}(1-2l\epsilon^{1/2}+O(\epsilon))} \end{equation} where $l\in [-1/2,1/2]$. 
Factoring the denominator as \begin{equation} (1-\omega e^{-2k\epsilon^{1/2}})(1+ \frac{\omega e^{-2k\epsilon^{1/2}}(2l\epsilon^{1/2}+O(\epsilon))}{1-\omega e^{-2k\epsilon^{1/2}}}) \end{equation} we can use $1/(1+z)=1-z+O(z^2)$ (valid since $|1-\omega e^{-2k\epsilon^{1/2}}|>\epsilon^{1/2}$ and $|l|\leq 1$) to rewrite equation (\ref{sec_frac}) as \begin{equation*} \frac{\omega e^{-k\epsilon^{1/2}}(1+O(\epsilon^{1/2}))\left(1- \frac{\omega e^{-2k\epsilon^{1/2}}(2l\epsilon^{1/2}+O(\epsilon))}{1-\omega e^{-2k\epsilon^{1/2}}} \right)}{1-\omega e^{-2k\epsilon^{1/2}}}. \end{equation*} Canceling terms in this expression with the terms in the first part of equation (\ref{riemann_approx_max}) we find that we are left with terms bounded by \begin{equation*} \frac{O(\epsilon^{1/2})}{1-\omega e^{-2k\epsilon^{1/2}}} + \frac{O(\epsilon^{1/2})}{(1-\omega e^{-2k\epsilon^{1/2}})^2}. \end{equation*} These must be summed over $k$ and multiplied by the prefactor $\epsilon^{1/2}$. Summing over $k$ we find that these are approximated by the integrals \begin{equation*} \epsilon^{1/2}\int_{-1}^{1}\frac{1}{1-\omega +\omega 2t}dt,\qquad \epsilon^{1/2}\int_{-1}^{1}\frac{1}{(1-\omega +\omega 2t)^2}dt \end{equation*} where $|1-\omega|>\epsilon^{1/2}$. The first integral has a logarithmic singularity at $t=0$ which gives $|\log(1-\omega)|$ which is clearly bounded by a constant time $|\log\epsilon^{1/2}|$ for $\omega\in \Omega$. When multiplied by $\epsilon^{1/2}$ this term is clearly bounded in $\epsilon$. Likewise, the second integral diverges like $|1/(1-\omega)|$ which is bounded by $\epsilon^{-1/2}$ and again multiplying by the $\epsilon^{1/2}$ factor in front shows that this term is bounded. This proves the Riemann sum approximation. This estimate completes the proof of the desired bound when $z=1+\epsilon^{1/2}$. The general case of $|z|=1+\epsilon^{1/2}$ is proved along a similar line by letting $z= 1+\rho\epsilon^{1/2}$ for $\rho$ on a suitably defined contour such that $z$ lies on the circle of radius $1+\epsilon^{1/2}$. The prefactor is no longer $\epsilon^{1/2}$ but rather now $\rho\epsilon^{1/2}$ and all estimates must take into account $\rho$. However, going through this carefully one finds that the same sort of estimates as above hold and hence the theorem is proved in general. \end{proof} This lemma completes the proof of Proposition \ref{originally_cut_mu_lemma} \end{proof} \begin{proof}[Proof of Proposition \ref{reinclude_mu_lemma}] We will focus on the growth of the absolute value of the determinant. Recall that if $K$ is trace class then $|\det(I+K)|\leq e^{||K||_1}$. Furthermore, if $K$ can be factored into the product $K=AB$ where $A$ and $B$ are Hilbert-Schmidt, then $||K||_1\leq ||A||_2||B||_2$. We will demonstrate such a factorization and follow this approach to control the size of the determinant. Define $A:L^2(\tilde\Gamma_{\zeta})\rightarrow L^2(\tilde\Gamma_{\eta})$ and $B:L^2(\tilde\Gamma_{\eta})\rightarrow L^2(\tilde\Gamma_{\zeta})$ via the kernels \begin{eqnarray*} && A(\tilde\zeta,\tilde\eta) = \frac{e^{-|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}}{\tilde\zeta-\tilde\eta}, \\ B(\tilde\eta,\tilde\zeta)& =& \nonumber e^{|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta^3)+a\tilde z} 2^{1/3}\frac{\pi (-\tilde\mu)^{\tilde z}}{\sin(\pi \tilde z)}, \end{eqnarray*} where we let $\tilde z = 2^{1/3}(\tilde\zeta-\tilde\eta)$. Notice that we have put the factor $e^{-|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}$ into the $A$ kernel and removed it from the $B$ contour. 
The point of this is to help control the $A$ kernel, without significantly impacting the norm of the $B$ kernel. Consider first $||A||_2$ which is given by \begin{equation*} ||A||_2^2 = \int_{\tilde\Gamma_{\zeta}}\int_{\tilde\Gamma_{\eta}} d\tilde\zeta d\tilde\eta \frac{e^{-2|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}}{|\tilde\zeta-\tilde\eta|^2}. \end{equation*} The integral in $\tilde\eta$ converges and is independent of $\tilde\zeta$ (recall that $|\tilde\zeta-\tilde\eta|$ is bounded away from zero) while the remaining integral in $\tilde\zeta$ is clearly convergent (it is exponentially small as $\tilde\zeta$ goes away from zero along $\tilde\Gamma_{\zeta}$. Thus $||A||_{2}<c$ with no dependence on $\tilde\mu$ at all. We now turn to computing $||B||_2$. First consider the cubic term $\tilde\zeta^3$. The contour $\tilde\Gamma_{\zeta}$ is parametrized by $-\frac{c_3}{2} + c_3 i r$ for $r\in (-\infty,\infty)$, that is, a straight up and down line just to the left of the $y$ axis. By plugging this parametrization in and cubing it, we see that, $\ensuremath{\mathrm{Re}}(\tilde\zeta^3)$ behaves like $|\ensuremath{\mathrm{Im}}(\tilde\zeta)|^2$. This is crucial; even though our contours are parallel and only differ horizontally by a small distance, their relative locations lead to very different behavior for the real part of their cube. For $\tilde\eta$ on the right of the $y$ axis, the real part still grows quadratically, however with a negative sign. This is important because this implies that $|e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta^3)}|$ behaves like the exponential of the real part of the argument, which is to say, like \begin{equation*} e^{-\frac{T}{3}(|\ensuremath{\mathrm{Im}}(\tilde\zeta)|^2+|\ensuremath{\mathrm{Im}}(\tilde\eta)|^2)}. \end{equation*} Turning to the $\tilde\mu$ term, observe that \begin{eqnarray*} |(-\tilde\mu)^{-\tilde z}| &=& e^{\ensuremath{\mathrm{Re}}\left[(\log|\tilde\mu|+i\arg(-\tilde\mu))(-\ensuremath{\mathrm{Re}}(\tilde z)-i\ensuremath{\mathrm{Im}}(\tilde z))\right]}\\ &=& e^{-\log|\tilde\mu| \ensuremath{\mathrm{Re}}(\tilde z) +\arg(-\tilde\mu)\ensuremath{\mathrm{Im}}(\tilde z)}. \end{eqnarray*} The $\csc$ term behaves, for large $\ensuremath{\mathrm{Im}}(\tilde z)$ like $e^{-\pi |\ensuremath{\mathrm{Im}}(\tilde z)|}$, and putting all these estimates together gives that for $\tilde\zeta$ and $\tilde\eta$ far from the origin on their respective contours, $|B(\tilde\eta,\tilde\zeta)|$ behaves like the following product of exponentials: \begin{equation*} e^{|\ensuremath{\mathrm{Im}}(\tilde\zeta)|} e^{-\frac{T}{3}(|\ensuremath{\mathrm{Im}}(\tilde\zeta)|^2+|\ensuremath{\mathrm{Im}}(\tilde\eta)|^2)} e^{-\log|\tilde\mu|\ensuremath{\mathrm{Re}}(\tilde z) + \arg(-\tilde\mu)\ensuremath{\mathrm{Im}}(\tilde z) - \pi |\ensuremath{\mathrm{Im}}(\tilde z)|}. \end{equation*} Now observe that, due to the location of the contours, $-\ensuremath{\mathrm{Re}}(\tilde z)$ is constant and less than one (in fact equal to $1/2$ by our choice of contours). Therefore we may factor out the term $e^{-\log|\tilde\mu|\ensuremath{\mathrm{Re}}(\tilde z)} = |\tilde\mu|^\alpha$ for $\alpha=1/2<1$. The Hilbert-Schmidt norm of what remains is clearly finite and independent of $\tilde\mu$. This is just due to the strong exponential decay from the quadratic terms $-\ensuremath{\mathrm{Im}}(\zeta)^2$ and $-\ensuremath{\mathrm{Im}}(\eta)^2$ in the exponential. Therefore we find that $||B||_2\leq c|\tilde\mu|^\alpha$ for some constant $c$. 
This shows that $||K_a^{\csc}||_1$ behaves like $|\tilde\mu|^{\alpha}$ for $\alpha<1$. Using the bound that $|\det(I+K_a^{\csc})|\leq e^{||K_a^{\csc}||}$ we find that $|\det(I+K_a^{\csc})|\leq e^{|\tilde\mu|^{\alpha}}$. Comparing this to $e^{-\tilde\mu}$ we have our desired result. Note that the proof also shows that $K_a^{\csc}$ is trace class. \end{proof} \subsubsection{Proofs from Section \ref{J_to_K_sec}}\label{JK_proofs_sec} \begin{proof}[Proof of Lemma \ref{kill_gamma_2_lemma}] Before starting this proof, we remark that the choice (\ref{kappa_eqn}) of $\kappa(\theta)$ was specifically to make the calculations in this proof more tractable. Certainly other choices of contours would do, however, the estimates could be harder. As it is, we used Mathematica as a preliminary tool to assist us in computing the series expansions and simplifying the resulting expressions. Define $g(\eta)=\Psi(\eta)+n_0\log(\eta)$. We wish to control the real part of this function for both the $\eta$ contour and the $\zeta$ contour. Combining these estimates proves the lemma. We may expand $g(\eta)$ into powers of $\epsilon$ with the expression for $\eta$ in terms of $\kappa(\theta)$ from (\ref{kappa_eqn}) with $\alpha=-1/2$ (similarly $1/2$ for the $\zeta$ expansion). Doing this we see that the $n_0\log(\eta)$ term plays an important role in canceling the $\log(\epsilon)$ term in the $\Psi$ and we are left with \begin{equation}\label{psi_real_eqn} \ensuremath{\mathrm{Re}}(g(\eta)) =\epsilon^{-1/2}\left( -{\scriptstyle\frac14} \epsilon^{-1/2}T\alpha \cot^2(\tfrac{\theta}{2}) + {\scriptstyle\frac18}T\left[\alpha+\kappa(\theta)\right]^2 \cot^2(\tfrac{\theta}{2}) \right)+ O(1). \end{equation} We must show that everything in the parenthesis above is bounded below by a positive constant times $|\eta-\xi|$ for all $\eta$ which start at roughly angle $l\epsilon^{1/2}$. Equivalently we can show that the terms in the parenthesis behave bounded below by a positive constant times $|\pi-\theta|$, where $\theta$ is the polar angle of $\eta$. The second part of this expression is clearly positive regardless of the value of $\alpha$. What this suggests is that we must show (in order to also be able to deal with $\alpha=1/2$ corresponding to the $\zeta$ estimate) that for $\eta$ starting at angle $l\epsilon^{1/2}$ and going to zero, the first term dominates (if $l$ is large enough). To see this we first note that since $\alpha=-1/2$, the first term is clearly positive and dominates for $\theta$ bounded away from $\pi$. This proves the inequality for any range of $\eta$ with $\theta$ bounded from $\pi$. Now observe the following asymptotic behavior of the following three functions of $\theta$ as $\theta$ goes to $\pi$: \begin{eqnarray*} \cot^2(\tfrac{\theta}{2}) &\approx& {\scriptstyle\frac{1}{4}}(\pi-\theta)^2\\ \tan^2(\tfrac{\theta}{2})&\approx& {4}(\pi-\theta)^{-2}\\ \log^2\left({\scriptstyle\frac{2}{1-\cos(\theta)}}\right) &\approx&{\scriptstyle \frac{1}{16} }(\pi-\theta)^4. \end{eqnarray*} The behaviour expressed above is dominant for $\theta$ close to $\pi$. We may expand the square in the second term in (\ref{psi_real_eqn}) and use the above expressions to find that for some suitable constant $C>0$ (which depends on $X$ and $T$ only), we have \begin{equation*} \ensuremath{\mathrm{Re}}(g(\eta)) = \epsilon^{-1/2}\left(-{\scriptstyle\frac1{16} }\epsilon^{-1/2} T\alpha (\pi-\theta)^2 + C(\pi-\theta)^2\right) + O(1). 
\end{equation*} Now use the fact that $\pi-\theta\geq l\epsilon^{1/2}$ to give \begin{equation}\label{eqn_g} \ensuremath{\mathrm{Re}}(g(\eta)) = \epsilon^{-1/2}\left(-{\scriptstyle\frac1{16} }lT\alpha (\pi-\theta) +{\scriptstyle \frac{X^2}{8T}}(\pi-\theta)^2\right) + O(1). \end{equation} Since $\pi-\theta$ is bounded by $\pi$, we see that taking $l$ large enough, the first term always dominates for the entire range of $\theta\in [0,\pi-l\epsilon^{1/2}]$. Therefore since $\alpha=-1/2$, we find that we have have the desired lower bound in $\epsilon^{-1/2}$ and $|\pi-\theta|$. Turn now to the bound for $\ensuremath{\mathrm{Re}}(g(\zeta))$. In the case of the $\eta$ contour we took $\alpha=-1/2$, however since we now are dealing with the $\zeta$ contour we must take $\alpha=1/2$. This change in the sign of $\alpha$ and the argument above shows that equation (\ref{eqn_g}) implies the desired bound for $\ensuremath{\mathrm{Re}}(g(\zeta))$, for $l$ large enough. \end{proof} Before proving Lemma \ref{mu_f_polynomial_bound_lemma} we record the following key lemma on the meromorphic extension of $\mu f(\mu,z)$. Recall that $\mu f(\mu,z)$ has poles at $\mu= \tau^j$, $j\in \ensuremath{\mathbb{Z}}$. \begin{lemma}\label{f_functional_eqn_lemma} For $\mu\neq \tau^j$, $j\in \ensuremath{\mathbb{Z}}$, $\mu f(\mu,z)$ is analytic in $z$ for $1<|z|<\tau^{-1}$ and extends analytically to all $z\neq 0$ or $\tau^k$ for $k\in \ensuremath{\mathbb{Z}}$. This extension is given by first writing $\mu f(\mu,z) = g_+(z)+g_-(z)$ where \begin{equation*} g_+(z)=\sum_{k=0}^{\infty} \frac{\mu \tau^kz^k}{1-\tau^k\mu}\qquad\qquad g_-(z)=\sum_{k=1}^{\infty} \frac{\mu \tau^{-k}z^{-k}}{1-\tau^{-k}\mu}, \end{equation*} and where $g_+$ is now defined for $|z|<\tau^{-1}$ and $g_-$ is defined for $|z|>1$. These functions satisfy the following two functional equations which imply the analytic continuation: \begin{equation*} g_+(z)=\frac{\mu}{1-\tau z}+\mu g_+(\tau z),\qquad g_-(z)=\frac{1}{1-z}+\frac{1}{\mu}g_-(z/\tau). \end{equation*} By repeating this functional equation we find that \begin{equation*} g_+(z)=\sum_{k=1}^{N}\frac{\mu^k}{1-\tau^k z}+\mu^N g_+(\tau^N z),\qquad g_-(z)=\sum_{k=0}^{N-1}\frac{\mu^{-k}}{1-\tau^{-k}z}+\mu^{-N}g_-(z\tau^{-N}). \end{equation*} \end{lemma} \begin{proof} We prove the $g_+$ functional equation, since the $g_-$ one follows similarly. Observe that \begin{eqnarray*} \nonumber g_+(z) & =& \sum_{k=0}^{\infty} \mu(\tau z)^k ( 1+\frac{1}{1-\mu\tau^k} -1) \\&= &\frac{\mu}{1-\tau z}+\sum_{k=0}^{\infty} \frac{\mu^2\tau^k}{1-\mu\tau^k} (\tau z)^k = \frac{\mu}{1-\tau z} + \mu g_+(\tau z), \end{eqnarray*} which is the desired relation. \end{proof} \begin{proof}[Proof of Lemma \ref{mu_f_polynomial_bound_lemma}] Recall that $\tilde\mu$ lies on a compact subset of $\mathcal{\tilde C}$ and hence that $|1-\tilde\mu \tau^k|$ stays bounded from below as $k$ varies. Also observe that due to our choices of contours for $\eta'$ and $\zeta$, $|\zeta/\eta'|$ stays bounded in $(1,\tau^{-1})$. Write $z=\zeta/\eta'$. Split $\tilde\mu f(\tilde\mu ,z)$ as $g_+(z)+g_-(z)$ (see Lemma \ref{f_functional_eqn_lemma} above), we see that $g_+(z)$ is bounded by a constant time $1/(1-\tau z)$ and likewise $g_-(z)$ is bounded by a constant time $1/(1-z)$. Writing this in terms of $\zeta$ and $\eta'$ again we have our desired upperbound. 
\end{proof} \begin{proof}[Proof of Lemma \ref{compact_eta_zeta_taylor_lemma}] By the discussion preceding the statement of this lemma it suffices to consider the expansion without $n_0\log(\zeta/\eta')$ and without the $\log\epsilon$ term in $m$ since, as we will see, they exactly cancel out. Therefore, for the sake of this proof we modify the definition of $m$ given in equation (\ref{m_eqn}) to be \begin{equation*} m=\frac{1}{2}\left[\epsilon^{-1/2}(-a'+\frac{X^2}{2T})+\frac{1}{2}t+x\right]. \end{equation*} where $a'=a+\log2$. The argument now amounts to a Taylor series expansion with control over the remainder term. Let us start by recording the first four derivatives of $\Lambda(\zeta)$: \begin{eqnarray*} \Lambda(\zeta) &=& -x\log(1-\zeta)+\frac{t\zeta}{1-\zeta} + m\log \zeta\\ \Lambda'(\zeta) &=& \frac{x}{1-\zeta}+\frac{t}{(1-\zeta)^2}+\frac{m}{\zeta}\\ \Lambda''(\zeta) &=& \frac{x}{(1-\zeta)^2}+\frac{2t}{(1-\zeta)^3}-\frac{m}{\zeta^2}\\ \Lambda'''(\zeta) &=& \frac{2x}{(1-\zeta)^3}+\frac{6t}{(1-\zeta)^4}+\frac{2m}{\zeta^3}\\ \Lambda''''(\zeta) &=& \frac{6x}{(1-\zeta)^4}+\frac{24 t}{(1-\zeta)^5} -\frac{6m}{\zeta^4}. \end{eqnarray*} We Taylor expand $\Psi(\zeta)=\Lambda(\zeta)-\Lambda(\xi)$ around $\xi$ and then expand in $\epsilon$ as $\epsilon$ goes to zero and find that \begin{eqnarray*} \Lambda'(\xi) &=& \tfrac{a'}{2}\epsilon^{-1/2} + O(1)\\ \Lambda''(\xi) &=& O(\epsilon^{-1/2})\\ \Lambda'''(\xi) &=& \tfrac{-T}{8} \epsilon^{-3/2} +O(\epsilon^{-1})\\ \Lambda''''(\xi) &=& O(\epsilon^{-3/2}). \end{eqnarray*} A Taylor series remainder estimate shows then that \begin{eqnarray*} \nonumber&& \left|\Psi(\zeta) - \left[\Lambda'(\xi)(\zeta-\xi)+{\scriptstyle\frac1{2!}}\Lambda''(\xi)(\zeta-\xi)^2 + {\scriptstyle\frac1{3!}}\Lambda'''(\xi)(\zeta-\xi)^3\right]\right|\\&&\quad \leq \sup_{t\in B(\xi,|\zeta-\xi|)} {\scriptstyle\frac1{4!}}|\Lambda''''(t)| |\zeta-\xi|^4, \end{eqnarray*} where $B(\xi,|\zeta-\xi|)$ denotes the ball around $\xi$ of radius $|\zeta-\xi|$. Now considering the scaling we have that $\zeta-\xi = c_3^{-1}\epsilon^{1/2}\tilde\zeta$ so that when we plug this in along with the estimates on derivatives of $\Lambda$ at $\xi$, we find that the equation above becomes \begin{equation*} \left|\Psi(\zeta) - \left[2^{1/3}a'\tilde\zeta -\tfrac{T}{3}\tilde\zeta^3 \right]\right|=O(\epsilon^{1/2}). \end{equation*} From this we see that if we included the $\log\epsilon$ term in with $m$ it would, as claimed, exactly cancel the $n_0\log(\zeta/\eta')$ term. The above estimate therefore proves the desired first claimed result. The second result follows readily from $|e^z-e^w|\leq |z-w|\max\{|e^z|,|e^w|\}$ and the first result, as well as the boundedness of the limiting integrand. \end{proof} \begin{proof}[Proof of Lemma \ref{muf_compact_sets_csc_limit_lemma}] Expanding in $\epsilon$ we have that \begin{equation*} z=\frac{\xi+c_{3}^{-1}\epsilon^{1/2}\tilde\zeta}{\xi+c_{3}^{-1}\epsilon^{1/2}\tilde\eta'} = 1-\epsilon^{1/2}\tilde z + O(\epsilon) \end{equation*} where the error is uniform for our range of $\tilde\eta'$ and $\tilde\zeta$ and where \begin{equation*} \tilde z = c_{3}^{-1}(\tilde\zeta-\tilde\eta'). \end{equation*} We now appeal to the functional equation for $f$, explained in Lemma \ref{f_functional_eqn_lemma}. Therefore we wish to study $\epsilon^{1/2}g_{+}(z)$ and $\epsilon^{1/2}g_{-}(z)$ as $\epsilon$ goes to 0 and show that they converge uniformly to suitable integrals. First consider the $g_{+}$ case. Let us, for the moment, assume that $|\tilde\mu|<1$. 
We know that $|\tau z|<1$, thus for any $N\geq 0$, we have \begin{equation*} \epsilon^{1/2} g_{+}(z) = \epsilon^{1/2}\sum_{k=1}^{N} \frac{\tilde\mu^k}{1-\tau^k z} + \epsilon^{1/2}\tilde\mu^N g_{+}(\tau^N z). \end{equation*} Since, by assumption, $|\tilde\mu|<1$, the first sum is the partial sum of a convergent series. Each term may be expanded in $\epsilon$. Noting that \begin{equation*} 1-\tau^k z = 1-(1-2\epsilon^{1/2}+O(\epsilon))(1-\epsilon^{1/2}\tilde z +O(\epsilon)) = (2k+\tilde z)\epsilon^{1/2} +kO(\epsilon), \end{equation*} we find that \begin{equation*} \epsilon^{1/2}\frac{\tilde\mu^k}{1-\tau^k z} = \frac{\tilde\mu^k}{2k+\tilde z} + k O(\epsilon^{1/2}). \end{equation*} The last part of the expression for $g_{+}$ is bounded in $\epsilon$, thus we end up with the following asymptotics \begin{equation*} \epsilon^{1/2} g_{+}(z) = \sum_{k=1}^{N} \frac{\tilde\mu^k}{2k+\tilde z} + N^2 O(\epsilon^{1/2}) + \tilde\mu^N O(1). \end{equation*} It is possible to choose $N(\epsilon)$ which goes to infinity, such that $N^2 O(\epsilon^{1/2}) = o(1)$. Then for any fixed compact set contained in $\ensuremath{\mathbb{C}}\setminus \{-2,-4,-6,\ldots\}$ we have uniform convergence of this sequence of analytic functions to some function, which is necessarily analytic and equals \begin{equation*} \sum_{k=1}^{\infty} \frac{\tilde\mu^k}{2k+\tilde z}. \end{equation*} This expansion is valid for $|\tilde\mu|<1$ and for all $\tilde z\in \ensuremath{\mathbb{C}}\setminus\{-2,-4,-6,\ldots\}$. Likewise for $\epsilon^{1/2}g_{-}(z)$, for $|\tilde\mu|>1$ and for $\tilde z\in \ensuremath{\mathbb{C}}\setminus\{-2,-4,-6,\ldots\}$, we have uniform convergence to the analytic function \begin{equation*} \sum_{k=-\infty}^{0} \frac{\tilde\mu^k}{2k+\tilde z}. \end{equation*} We now introduce the Hurwitz Lerch transcendental function and relate some basic properties of it which can be found in \cite{SC:2001s}. \begin{equation*} \Phi(a,s,w) = \sum_{k=0}^{\infty} \frac{a^k}{(w+k)^s} \end{equation*} for $w>0$ real and either $|a|<1$ and $s\in \ensuremath{\mathbb{C}}$ or $|a|=1$ and $\ensuremath{\mathrm{Re}}(s)>1$. For $\ensuremath{\mathrm{Re}}(s)>0$ it is possible to analytically extend this function using the integral formula \begin{equation*} \Phi(a,s,w) = \frac{1}{\Gamma(s)} \int_0^{\infty} \frac{e^{-(w-1)t}}{e^t-a} t^{s-1} dt, \end{equation*} where additionally $a\in \ensuremath{\mathbb{C}}\setminus[1,\infty)$ and $\ensuremath{\mathrm{Re}}(w)>0$. Observe that we can express our series in terms of this function as \begin{eqnarray*} \sum_{k=1}^{\infty} \frac{\tilde\mu^k}{2k+\tilde z} = \frac{1}{2}\tilde \mu \Phi(\tilde \mu , 1, 1+\tilde z/2),\\ \sum_{k=-\infty}^{0} \frac{\tilde\mu^k}{2k-\tilde z} = -\frac{1}{2}\Phi(\tilde \mu^{-1} , 1, -\tilde z/2). \end{eqnarray*} These two functions can be analytically continued using the integral formula onto the same region where $\ensuremath{\mathrm{Re}}(1+\tilde z/2)>0$ and $\ensuremath{\mathrm{Re}}(-\tilde z/2)>0$ -- i.e. where $\ensuremath{\mathrm{Re}}(\tilde z/2)\in (-1,0)$. Additionally the analytic continuation is valid for all $\tilde\mu$ not along $\ensuremath{\mathbb{R}}^+$. We wish now to use Vitali's convergence theorem to conclude that $\tilde\mu f(\tilde\mu, z)$ converges uniformly for general $\tilde\mu$ to the sum of these two analytic continuations. In order to do that we need a priori boundedness of $\epsilon^{1/2}g_+$ and $\epsilon^{1/2}g_-$ for compact regions of $\tilde\mu$ away from $\ensuremath{\mathbb{R}}^+$. This, however, can be shown directly as follows. 
By assumption on $\tilde\mu$ we have that $|1-\tau^k \tilde\mu|>c^{-1}$ for some positive constant $c$. Consider $\epsilon^{1/2}g_+$ first. \begin{equation*} |\epsilon^{1/2}g_+(z)| \leq \epsilon^{1/2}\tilde\mu \sum_{k=0}^{\infty} \frac{|\tau z|^k}{|1-\tau^k \tilde\mu|} \leq c\epsilon^{1/2} \frac{1}{1-|\tau z|}. \end{equation*} We know that $|\tau z|$ is bounded to order $\epsilon^{1/2}$ away from $1$ and therefore this show that $|\epsilon^{1/2}g_+(z)|$ has an upperbound uniform in $\tilde\mu$. Likewise we can do a similar computation for $\epsilon^{1/2}g_-(z)$ and find the same result, this time using that $|z|$ is bounded to order $\epsilon^{1/2}$ away from $1$. As a result of this apriori boundedness, uniform in $\tilde\mu$, we have that for compact sets of $\tilde\mu$ away from $\ensuremath{\mathbb{R}}^+$, uniformly in $\epsilon$, $\epsilon^{1/2}g_+$ and $\epsilon^{1/2}g_-$ are uniformly bounded as $\epsilon$ goes to zero. Therefore Vitali's convergence theorem implies that they converge uniformly to their analytic continuation. Now observe that \begin{equation*} \frac{1}{2}\tilde \mu \Phi(\tilde \mu ,1, 1+\tilde z/2) = \frac{1}{2}\int_0^{\infty} \frac{\tilde\mu e^{-\tilde z t/2}}{e^t-\tilde \mu}dt, \end{equation*} and \begin{equation*} -\frac{1}{2}\Phi(\tilde\mu^{-1}, 1,-\tilde z/2)= -\frac{1}{2}\int_0^{\infty} \frac{e^{-(-\tilde z/2 -1)t}}{e^{t} -1/\tilde{\mu}}dt= \frac{1}{2}\int_{-\infty}^{0} \frac{\tilde\mu e^{-\tilde z t/2}}{e^{t}-\tilde \mu}dt. \end{equation*} Therefore, by a simple change of variables in the second integral, we can combine these as a single integral \begin{equation*} \frac{1}{2}\int_{-\infty}^{\infty} \frac{\tilde\mu e^{-\tilde z t/2}}{e^t-\tilde \mu}dt = \frac{1}{2}\int_0^{\infty} \frac{\tilde\mu s^{-\tilde z/2}}{s-\tilde\mu} \frac{ds}{s}. \end{equation*} The first of the above equations proves the lemma, and for an alternative expression we use the second of the integrals (which followed from the change of variables $e^t=s$) and thus, on the region $\ensuremath{\mathrm{Re}}(\tilde z/2)\in (-1,0)$ this integral converges and equals \begin{equation*} {\scriptstyle\frac{1}{2}}\pi (-\tilde\mu)^{-\tilde z} \csc(\pi \tilde z/2). \end{equation*} This function is, in fact, analytic for $\tilde\mu\in \ensuremath{\mathbb{C}}\setminus[0,\infty)$ and for all $\tilde z\in \ensuremath{\mathbb{C}} \setminus 2\ensuremath{\mathbb{Z}}$. Therefore it is the analytic continuation of our asymptotic series. \end{proof} \section{Weakly asymmetric limit of the corner growth model}\label{BG} Recall the definitions in Section \ref{asepscalingth} of WASEP, its height function (\ref{defofheight}), and, for $X\in \epsilon\mathbb Z$ and $T\ge 0$, \begin{equation}\label{scaledhgt} Z_\epsilon(T,X) =\tfrac{1}{2} \epsilon^{-1/2}\exp\left \{ - \lambda_\epsilon h_{\epsilon^{1/2}}(\epsilon^{-2}T,[\epsilon^{-1}X]) + \nu_\epsilon \epsilon^{-2}T\right\} \end{equation} where, for $\epsilon\in(0,1/4)$, let $p ={\scriptstyle\frac12} - {\scriptstyle\frac12} \epsilon^{1/2}$, $q ={\scriptstyle\frac12} + {\scriptstyle\frac12}\epsilon^{1/2}$ and $\nu_\epsilon$ and $\lambda_\epsilon$ are as in (\ref{nu}) and (\ref{lambda}), and the closest integer $[x]$ is given by \begin{equation*} [x] = \lfloor x+\tfrac12\, \rfloor. \end{equation*} Let us describe in simple terms the dynamics in $T$ of $Z_\epsilon(T,X)$ defined in (\ref{scaledhgt}). 
It grows continuously exponentially at rate $\epsilon^{-2}\nu_\epsilon $ and jumps at rates \begin{equation*} r_-(X)=\epsilon^{-2}q(1-\eta(x))\eta(x+1)= \frac14\epsilon^{-2}q(1-\hat\eta(x))(1+\hat\eta(x+1)) \end{equation*} to $e^{-2\lambda_\epsilon}Z_\epsilon$ and \begin{equation*}r_+(X)=\epsilon^{-2}p\eta(x)(1-\eta(x+1))=\frac14\epsilon^{-2}p(1+\hat\eta(x))(1-\hat\eta(x+1)) \end{equation*} to $e^{2\lambda_\epsilon}Z_\epsilon$, independently at each site $X=\epsilon x\in \epsilon\mathbb Z$ (recall that $\hat{\eta}=2\eta-1$). We write this as follows, \begin{eqnarray*} dZ_\epsilon(X) & = &\left\{ \epsilon^{-2}\nu_\epsilon + ( e^{-2\lambda_\epsilon }-1) r_-(X) + ( e^{2\lambda_\epsilon }-1)r_+(X) \right\} Z_\epsilon(X) dT\nonumber \\ && + ( e^{-2\lambda_\epsilon }-1) Z_\epsilon(X) dM_-(X) + ( e^{2\lambda_\epsilon }-1) Z_\epsilon(X) dM_+(X) \end{eqnarray*} where $dM_\pm(X) = dP_\pm(X) -r_\pm (X) dT$ where $P_-(X), P_+(X)$, $X\in \epsilon\mathbb{Z}$ are independent Poisson processes running at rates $r_-(X), r_+(X)$, and $d$ always refers to change in macroscopic time $T$. Let \begin{equation*}\mathcal{D}_\epsilon = 2\sqrt{pq} = 1-\frac12 \epsilon + O (\epsilon^2)\end{equation*} and $\Delta_\epsilon$ be the $\epsilon\mathbb Z$ Laplacian, $\Delta f(x) = \epsilon^{-2}(f(x+\epsilon) -2f(x) + f(x-\epsilon))$. We also have \begin{equation*} {\scriptstyle{\frac12}} \mathcal{D}_\epsilon \Delta_\epsilon Z_\epsilon (X) = {\scriptstyle{\frac12}} \epsilon^{-2} \mathcal{D}_\epsilon ( e^{-\lambda_\epsilon \hat\eta(x+1)} -2 + e^{\lambda_\epsilon \hat\eta(x)} ) Z_\epsilon(X). \end{equation*} The parameters have been carefully chosen so that \begin{equation*} {\scriptstyle{\frac12}} \epsilon^{-2} \mathcal{D}_\epsilon ( e^{-\lambda_\epsilon \hat\eta(x+1)} -2 + e^{\lambda_\epsilon \hat\eta(x)} )= \epsilon^{-2}\nu_\epsilon + ( e^{-2\lambda_\epsilon }-1) r_-(X) + ( e^{2\lambda_\epsilon }-1)r_+(X). \end{equation*} Hence \cite{G},\cite{BG}, \begin{equation}\label{sde7} dZ_\epsilon = {\scriptstyle{\frac12}} \mathcal{D}_\epsilon \Delta_\epsilon Z_\epsilon + Z_\epsilon dM_\epsilon \end{equation} where \begin{equation*} dM_\epsilon(X)= ( e^{-2\lambda_\epsilon }-1) dM_-(X) + ( e^{2\lambda_\epsilon }-1) dM_+(X) \end{equation*} are martingales in $T$ with \begin{equation*} d\langle M_\epsilon(X),M_\epsilon(Y)\rangle= \epsilon^{-1}\mathbf{1}(X=Y) b_\epsilon(\tau_{-[\epsilon^{-1}X]}\eta) dT. \end{equation*} Here $\tau_x\eta(y) = \eta(y-x)$ and \begin{equation*} b_\epsilon(\eta) =1- \hat\eta(1) \hat\eta(0)+ \hat{b}_\epsilon(\eta) \end{equation*} where \begin{eqnarray}\nonumber \hat{b}_\epsilon(\eta) & = & \epsilon^{-1}\{ [ p( (e^{-2\lambda_\epsilon}-1)^2-4\epsilon ) + q( (e^{2\lambda_\epsilon}-1)^2-4\epsilon )] \\ && + [q(e^{-2\lambda_\epsilon}-1)^2-p(e^{2\lambda_\epsilon}-1)^2 ](\hat\eta(1)- \hat\eta(0)) \\&& \nonumber - [q(e^{-2\lambda_\epsilon}-1)^2+p(e^{2\lambda_\epsilon}-1)^2 -\epsilon]\hat\eta(1) \hat\eta(0)\} .\end{eqnarray} Clearly $b_\epsilon,\hat{b}_\epsilon\ge 0$. It is easy to check that there is a $C<\infty$ such that \begin{equation*}\hat{b}_\epsilon\le C\epsilon^{1/2}\end{equation*} and, for sufficiently small $\epsilon>0$, \begin{equation}\label{bdona} b_\epsilon\le 3. 
\end{equation} Note that (\ref{sde7}) is equivalent to the integral equation, \begin{eqnarray}\label{sde3} {Z}_\epsilon(T,X) & = & \epsilon\sum_{Y\in \epsilon \mathbb{Z}} p_\epsilon(T,X-Y) Z_\epsilon(0,Y) \\ && \nonumber + \int_0^T \epsilon\sum_{Y\in \epsilon \mathbb{Z}} p_\epsilon(T-S,X-Y) Z_\epsilon(S,Y)d{M}_\epsilon(S,Y) \end{eqnarray} where $p_\epsilon(T,X) $ are the (normalized) transition probabilities for the continuous time random walk with generator $ {\scriptstyle{\frac12}} \mathcal{D}_\epsilon \Delta_\epsilon $. The normalization is multiplication of the actual transition probabilities by $\epsilon^{-1}$ so that \begin{equation*} p_\epsilon(T,X) \to p(T,X) = \frac{ e^{-X^2/ 2T} }{\sqrt{2\pi T}}. \end{equation*} We need some apriori bounds. \begin{lemma}\label{apriori} For $0< T\le T_0$, and for each $q=1,2,\ldots$, there is a $C_q= C_q(T_0)<\infty$ such that \begin{enumerate}[i.] \item $E [ Z_\epsilon^2(T,X) ] \le C_2p_\epsilon^2(T,X)$; \item $ E\left[ \left( Z_\epsilon(T,X)-\epsilon\sum_{Y\in \epsilon \mathbb{Z}} p_\epsilon(T,X-Y) Z_\epsilon(0,Y)\right)^2 \right] \le C_2 p_\epsilon^2(T,X)$; \item $E [ Z^{2q}_\epsilon(T,X) ] \le C_q p_\epsilon^{2q}(T,X)$. \end{enumerate} \end{lemma} \begin{proof} Within the proof, $C$ will denote a finite number which does not depend on any other parameters except $T$ and $q$, but may change from line to line. Also, for ease of notation, we identify functions on $\epsilon \mathbb{Z}$ with those on $\mathbb{R}$ by $f(x)=f([x])$. First, note that \begin{equation*} Z_\epsilon(0,Y) = {\scriptstyle\frac12}\epsilon^{-1/2} \exp\{ -\epsilon^{-1}\lambda_\epsilon |Y| \}= {\scriptstyle\frac12} \epsilon^{-1/2} \exp\{ -\epsilon^{-1/2} |Y| + O(\epsilon^{1/2}) \} \end{equation*} is an approximate delta function, from which we check that \begin{equation}\label{yy} \epsilon\sum_{Y\in \epsilon \mathbb{Z}} p_\epsilon(T,X-Y) Z_\epsilon(0,Y) \le C p_\epsilon(T,X). \end{equation} Let \begin{equation*} f_\epsilon(T,X)= E[ Z_\epsilon^2(T,X)] . \end{equation*} From (\ref{yy}), (\ref{sde3}) we get \begin{equation}\label{tt} f_\epsilon(T,X)\le C p^2_\epsilon(T,X) + C\int_0^T \int_{-\infty}^{\infty} p^2_\epsilon(T-S,X-Y) f_\epsilon(S,Y) dSdY. \end{equation} Iterating we obtain, \begin{equation}\label{tt2} f_\epsilon(T,X) \le \sum_{n=0}^\infty C^n I_{n,\epsilon}(T,X) \end{equation} where, for $\Delta_n=\Delta_n(T)=\{0=t_0\le T_1<\cdots< T_n<T\}$,$X_0=0$, \begin{equation*} I_{n,\epsilon}(T,X)= \int_{\Delta_n} \int_{\mathbb{R}^n} \prod_{i=1}^{n} p^2_\epsilon(T_i-T_{i-1},X_i-X_{i-1}) p_\epsilon^2(T-T_n, X-x_n)\prod_{i=1}^{n} dX_i dT_i. \end{equation*} One readily checks that \begin{equation*} I_{n,\epsilon}(T,X)\le C^n T^{n/2} (n!)^{-1/2}p^2_\epsilon(T,X). \end{equation*} From which we obtain $i$, \begin{equation*} f_\epsilon(T,X) \le C\sum_{n=0}^\infty (CT)^{n/2} (n!)^{-1/2} p^2_\epsilon(T,X)\le C' p^2_\epsilon(T,X). \end{equation*} Now we turn to $ii$. From (\ref{sde3}), the term on right hand side is bounded by a constant multiple of \begin{equation*} \int_0^T \int_{-\infty}^{\infty} p^2_\epsilon(T-S,X-Y) E[Z^2_\epsilon(S,Y)]dYdS. \end{equation*} Using $i$, this is in turn bounded by $C\sqrt{T} p^2_\epsilon(T,X) $, which proves $ii$. Finally we prove $iii$. Fix a $q\ge 2$. 
By standard methods of martingale analysis and (\ref{bdona}), we have \begin{eqnarray*} && E\Big[\Big(\int_0^T\epsilon\sum_{Y\in \epsilon \mathbb{Z}} p_\epsilon(T-S,X-Y) Z_\epsilon(S,Y)d{M}_\epsilon(S,Y) \Big)^{2q}\Big] \\&& \qquad \le C E\Big[\Big(\int_0^T \epsilon\sum_{Y\in \epsilon \mathbb{Z}} p^2_\epsilon(T-S,X-Y) Z^2_\epsilon(S,Y)dS \Big)^{q}\Big].\nonumber \end{eqnarray*} Let \begin{equation*} g_\epsilon(T,X) =E[Z^{2q}_\epsilon(T,X)]/ p_\epsilon^{2q}(T,X). \end{equation*} From the last inequality, and Schwarz's inequality, we have \begin{equation*} g_\epsilon(T,X) \le C (1+ \int_{\Delta'_q(T)}\int_{\mathbb{R}^q} \prod_{i=1}^q p^2_\epsilon(S_i-S_{i-1},X_i-X_{i-1})p_\epsilon^{2}(S_i,Y_i)g^{1/q}_\epsilon(S_i,Y_i)dY_idS_i ).\end{equation*} Now use the fact that \begin{equation*} \prod_{i=1}^q g^{1/q}_\epsilon(S_i,Y_i) \le C\sum_{i=1}^q \frac{\prod_{j\neq i} p^{2/(q-1)}_\epsilon(S_j,Y_j)}{p^{2}_\epsilon(S_i,Y_i)}g_\epsilon(S_i,Y_i) \end{equation*} and iterate the inequality to obtain $iii$. \end{proof} We now turn to the tightness. In fact, although we are in a different regime, the arguments of \cite{BG} actually extend to our case. For each $\delta>0$, let $\mathscr{P}^\delta_\epsilon$ be the distributions of the processes $\{Z_\epsilon(T,X)\}_{\delta\le T}$ on $D_u([\delta,\infty); D_u(\mathbb R))$ where $D_u$ refers to right continuous paths with left limits with the topology of uniform convergence on compact sets. Because the discontinuities of $Z_\epsilon(T,\cdot)$ are restricted to $\epsilon(1/2+\ensuremath{\mathbb{Z}})$, it is measurable as a $D_u(\ensuremath{\mathbb{R}})$-valued random function (see Sec. 18 of \cite{Bill}.) Since the jumps of $Z_\epsilon(T,\cdot)$ are uniformly small, local uniform convergence works for us just as well the standard Skhorohod topology. The following summarizes results which are contained \cite{BG} but not explicitly stated there in the form we need. \begin{theorem} \cite{BG} There is an explicit $p<\infty$ such that if there exist $C, c < \infty$ for which \begin{equation}\label{deltainit} \int_{-\infty}^{\infty} Z_\epsilon^p(\delta,X)d\mathscr{P}^\delta_\epsilon \le C e^{ c|X| } ,\qquad X \in \epsilon \mathbb{Z}, \end{equation} then $\{\mathscr{P}^\delta_\epsilon\}_{0\le \epsilon\le 1/4}$ is tight. Any limit point $\mathscr{P}^\delta$ is supported on $C([\delta,\infty); C(\mathbb R))$ and solves the martingale problem for the stochastic heat equation (\ref{she}) after time~$\delta$. \end{theorem} It appears that $p=10$ works in \cite{BG}, though it almost certainly can be improved to $p=4$. Note that the process level convergence is more than we need for the one-point function. However, it could be useful in the future. Although not explicitly stated there the theorem is proved in \cite{BG}. The key point is that all computations in \cite{BG} after the initial time are done using the equation (\ref{sde7}) for $Z_\epsilon$, which scales linearly in $Z_\epsilon$. So the only input is a bound like (\ref{deltainit}) on the initial data. In \cite{BG}, this is made as an assumption, which can easily be checked for initial data close to equilibrium. In the present case, it follows from $iii$ of Lemma \ref{apriori}. 
The measures $\mathscr{P}^{\delta_1}$ and $\mathscr{P}^{\delta_2}$ for $\delta_1<\delta_2$ can be chosen to be consistent on $C( [\delta_2,\infty), C(\mathbb{R}) ) $ and because of this there is a limit measure $\mathscr{P}$ on $C((0,\infty), C(\mathbb{R}) ) $ which is consistent with any $\mathscr{P}^{\delta}$ when restricted to $C( [\delta,\infty), C(\mathbb{R}) )$. From the uniqueness of the martingale problem for $t\ge \delta>0$ and the corresponding martingale representation theorem \cite{KS} there is a space-time white noise $\dot{\mathscr{W}}$, on a possibly enlarged probability space, $(\bar\Omega,\bar{\mathscr{F}}_T, \bar{\mathscr{P}})$ such that under $\bar{\mathscr{P}}$, for any $\delta>0$, \begin{eqnarray*} {Z}(T,X) & = & \int_{-\infty}^{\infty} p(T-\delta,X-Y) Z(\delta,Y) dY\nonumber \\ && + \int_\delta^{T} \int_{-\infty}^{\infty} p(T-S,X-Y) Z(S,Y)\bar{\mathcal{W}} (dY,dS). \end{eqnarray*} Finally $ii$ of Lemma \ref{apriori} shows that under $\bar{\mathscr{P}}$, \begin{equation*} \int_{-\infty}^{\infty} p(T-\delta,X-Y) Z(\delta,Y) dY \to p(T,X) \end{equation*} as $\delta\searrow 0$, which completes the proof. \section{Alternative forms of the crossover distribution function}\label{kernelmanipulations} We now demonstrate how the various alternative formulas for $F_{T}(s)$ given in Theorem \ref{main_result_thm} are derived from the cosecant kernel formula of Theorem \ref{epsilon_to_zero_theorem}. \subsection{Proof of the crossover Airy kernel formula}\label{cross_over_airy_sec} We prove this by showing that \begin{equation*}\det(I-K_a^{\csc})_{L^2(\tilde\Gamma_{\eta})} =\det(I - K_{\sigma_{T,\tilde\mu}})_{{L}^2(\kappa_T^{-1}a,\infty)} \end{equation*} where $K_{\sigma_{T,\tilde\mu}}$ and $\sigma_{T,\tilde\mu}$ are given in the statement of Theorem \ref{main_result_thm} and $\kappa_T = 2^{-1/3} T^{1/3}$. The kernel $K^{\csc}_{a}(\tilde\eta,\tilde\eta')$ is given by equation (\ref{k_csc_definition}) as \begin{equation*} \int_{\tilde\Gamma_{\zeta}} e^{-\tfrac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}a(\tilde\zeta-\tilde\eta')}\bigg(2^{1/3}\int_{-\infty}^{\infty} \frac{\tilde\mu e^{-2^{1/3}t(\tilde \zeta - \tilde \eta')}}{ e^t - \tilde \mu}dt \bigg)\frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}, \end{equation*} where we recall that the inner integral converges since $\ensuremath{\mathrm{Re}}(-2^{1/3}(\tilde\zeta-\tilde\eta'))=1/2$ (see the discussion in Definition \ref{thm_definitions}). For $\ensuremath{\mathrm{Re}}(z)<0$ we have the following nice identity: \begin{equation*} \int_a^{\infty} e^{xz} dx= -\frac{e^{az}}{z}, \end{equation*} which, noting that $\ensuremath{\mathrm{Re}}(\tilde\zeta-\tilde\eta')<0$, we may apply to the above kernel to get \begin{equation*} -2^{2/3}\int_{\tilde\Gamma_{\zeta}}\int_{-\infty}^{\infty}\int_{a}^{\infty} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta'^3)-2^{1/3}a\tilde\eta'} \frac{\tilde\mu e^{-2^{1/3}t(\tilde \zeta - \tilde \eta')}}{ e^t - \tilde \mu} e^{2^{1/3}(a-x)\tilde\eta} e^{2^{1/3}x\tilde \zeta } dx dt d\tilde \zeta. 
\end{equation*} This kernel can be factored as a product $ABC$ where \begin{eqnarray*} A:L^2(a,\infty)\rightarrow L^2(\tilde\Gamma_{\eta}),\qquad B:L^2(\tilde\Gamma_{\zeta})\rightarrow L^2(a,\infty),\qquad C:L^2(\tilde\Gamma_{\eta})\rightarrow L^2(\tilde\Gamma_{\zeta}), \end{eqnarray*} and the operators are given by their kernels \begin{eqnarray*} && A(\tilde\eta,x) = e^{2^{1/3}(a-x)\tilde\eta}, \qquad B(x,\tilde\zeta) = e^{2^{1/3}x \tilde\zeta}, \\ \nonumber&& C(\tilde\zeta,\tilde\eta) = -2^{2/3}\int_{-\infty}^{\infty} \exp\left\{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta^3)-2^{1/3}a\tilde\eta\right\}\frac{\tilde\mu e^{-2^{1/3}t(\tilde \zeta - \tilde \eta)}}{ e^t - \tilde \mu}dt. \end{eqnarray*} Since $\det(I-ABC) = \det(I-BCA)$ we consider $BCA$ acting on $L^2(a,\infty)$ with kernel \begin{equation*} -2^{2/3}\int_{-\infty}^{\infty}\int_{\Gamma_{\tilde\zeta}}\int_{\Gamma_{\tilde\eta}} e^{-\frac{T}{3}(\tilde\zeta^3-\tilde\eta^3)+2^{1/3}(x-t)\tilde\zeta -2^{1/3}(y-t)\tilde\eta}\frac{\tilde\mu}{e^t - \tilde\mu} d\tilde\eta d\tilde\zeta dt. \end{equation*} Using the formula for the Airy function given by \begin{equation*} \ensuremath{\mathrm{Ai}}(r) = \int_{\tilde\Gamma_{\zeta}} \exp\{-\frac{1}{3}z^3 +rz\} dz \end{equation*} and replacing $t$ with $-t$ we find that our kernel equals \begin{equation*} 2^{2/3}T^{-2/3}\int_{-\infty}^{\infty}\frac{\tilde\mu}{\tilde\mu - e^{-t}} \ensuremath{\mathrm{Ai}}\big(T^{-1/3}2^{1/3}(x+t)\big)\ensuremath{\mathrm{Ai}}\big(T^{-1/3}2^{1/3}(y+t)\big)dt. \end{equation*} We may now change variables in $t$ as well as in $x$ and $y$ to absorb the factor of $T^{-1/3}2^{1/3}$. To rescale $x$ and $y$ use $\det(I - K(x,y))_{L^2( ra,\infty)} = \det (I- rK(rx,ry))_{L^2(a,\infty)}$. This completes the proof. \subsection{Proof of the Gumbel convolution formula}\label{gumbel_convolution_sec} Before starting we remark that throughout this proof we will dispense with the tilde with respect to $\tilde\mu$ and $\mathcal{\tilde C}$. We choose to prove this formula directly from the form of the Fredholm determinant given in the crossover Airy kernel formula of Theorem \ref{main_result_thm}. However, we make note that it is possible, and in some ways simpler (though a little messier) to prove this directly from the $\csc$ form of the kernel. Our starting point is the formula for $F_T(s)$ given in equation (\ref{sigma_Airy_kernel_formula}). The integration in $\mu$ occurs along a complex contour and even though we haven't been writting it explicitly, the integral is divided by $2\pi i$. We now demonstrate how to squish this contour to the the positive real line (at which point we will start to write the $2\pi i$). The pole in the term $\sigma_{T,\mu}(t)$ for $\mu$ along $\ensuremath{\mathbb{R}}^+$ means that the integral along the positive real axis from above will not exactly cancel the integral from below. Define a family of contour $\mathcal{ C}_{\delta_1,\delta_2}$ parametrized by $\delta_1,\delta_2>0$ (small). The contours are defined in terms of three sections \begin{equation*} \mathcal{C}_{\delta_1,\delta_2} = \mathcal{ C}_{\delta_1,\delta_2}^{-}\cup \mathcal{C}_{\delta_1,\delta_2}^{circ}\cup \mathcal{C}_{\delta_1,\delta_2}^{+} \end{equation*} traversed counterclockwise, where \begin{equation*} \mathcal{C}_{\delta_1,\delta_2}^{circ} = \{\delta_2 e^{i\theta}:\delta_1\leq \theta\leq 2\pi -\delta_1\} \end{equation*} and where $\mathcal{C}_{\delta_1,\delta_2}^{\pm}$ are horizontal lines extending from $\delta_1 e^{\pm i\delta_2}$ to $+\infty$. 
We can deform the original $\mu$ contour $\mathcal{C}$ to any of these contours without changing the value of the integral (and hence of $F_T(s)$). To justify this we use Cauchy's theorem. However, this requires the knowledge that the determinant is an analytic function of $\mu$ away from $\ensuremath{\mathbb{R}}^+$. This may be proved similarly to the proof of Lemma \ref{deform_mu_to_C} and relies on Lemma \ref{Analytic_fredholm_det_lemma}. As such we do not include this computation here.

Fixing $\delta_2$ for the moment, we wish to consider the limit of the integrals over these contours as $\delta_1$ goes to zero. The resulting integral can be written as $I_{\delta_2}^{circ} + I_{\delta_2}^{line}$ where
\begin{eqnarray*}
I_{\delta_2}^{circ} &=& \frac{1}{2\pi i}\oint_{|\mu|=\delta_2} \frac{d\mu}{\mu} e^{-\mu} \det(I-K_{T,\mu})_{L^2(\kappa_T^{-1}a,\infty)},\\
I_{\delta_2}^{line} &=& -\frac{1}{2\pi i}\lim_{\delta_{1}\rightarrow 0} \int_{\delta_2}^{\infty} \frac{d\mu}{\mu} e^{-\mu} [\det(I-K_{T,\mu+i\delta_1})-\det(I-K_{T,\mu-i\delta_1})].
\end{eqnarray*}
\begin{claim}
$I_{\delta_2}^{circ}$ exists and $\lim_{\delta_{2}\rightarrow 0}I_{\delta_2}^{circ} = 1$.
\end{claim}
\begin{proof}
It is easiest, in fact, to prove this claim by replacing the determinant by the $\csc$ determinant: equation (\ref{k_csc_definition}). From that perspective the $\mu$ at angle $0$ and at $2\pi$ are on opposite sides of the branch cut for $\log(-\mu)$, but are still defined (hence $I_{\delta_2}^{circ}$ is well defined). As far as computing the limit, one can do the usual Hilbert-Schmidt estimate and show that, uniformly over the circle $|\mu|=\delta_2$, the trace norm goes to zero as $\delta_2$ goes to zero. Thus the determinant goes uniformly to 1 and the claim follows.
\end{proof}
Turning now to $I_{\delta_2}^{line}$, that this limit exists can be seen by going to the equivalent $\csc$ kernel (where this limit is trivially just the kernel on different levels of the $\log(-\mu)$ branch cut). Notice now that we can write the operator $K_{T,\mu+i\delta_1}=K_{\delta_1}^{\rm sym}+K_{\delta_1}^{\rm asym}$ and likewise $K_{T,\mu-i\delta_1}=K_{\delta_1}^{\rm sym}-K_{\delta_1}^{\rm asym}$ where $K_{\delta_1}^{\rm sym}$ and $K_{\delta_1}^{\rm asym}$ also act on $L^2(\kappa_T^{-1}a,\infty)$ and are given by their kernels
\begin{eqnarray*}
K_{\delta_1}^{{\rm sym}}(x,y) &=& \int_{-\infty}^{\infty}\frac{\mu(\mu-b)+\delta_1^2}{(\mu-b)^2+\delta_1^2} \ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt\\
K_{\delta_1}^{{\rm asym}}(x,y) &=& \int_{-\infty}^{\infty}\frac{-i\delta_1 b}{(\mu-b)^2+\delta_1^2} \ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt,
\end{eqnarray*}
where $b=b(t)=e^{-\kappa_T t}$. From this it follows that
\begin{equation*}
K^{\rm sym}(x,y):= \lim_{\delta_1\rightarrow 0} K_{\delta_1}^{\rm sym}(x,y) =\mathrm{P.V.}\int \frac{\mu}{\mu-e^{-\kappa_T t}} \ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt.
\end{equation*}
As far as $K_{\delta_1}^{{\rm asym}}$, since $\mu-b$ has a unique root at $t_0=-\kappa_T^{-1}\log\mu$, it follows from the Plemelj formula \cite{D} that
\begin{equation*}
\lim_{\delta_1\rightarrow 0}K_{\delta_1}^{\rm asym}(x,y) = -\frac{\pi i} {\kappa_T} \ensuremath{\mathrm{Ai}}(x+t_0)\ensuremath{\mathrm{Ai}}(y+t_0).
\end{equation*}
With this in mind we define
\begin{equation*}
K^{\rm asym}(x,y) = \frac{2\pi i} {\kappa_T} \ensuremath{\mathrm{Ai}}(x+t_0)\ensuremath{\mathrm{Ai}}(y+t_0).
\end{equation*}
We see that $K^{\rm asym}$ is a multiple of the projection operator onto the shifted Airy functions.
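As a quick numerical sanity check on the Plemelj computation above (an aside, not part of the proof), one can integrate the sharply peaked density of $K_{\delta_1}^{\rm asym}$ against a smooth test function and compare with $\frac{\pi}{\kappa_T}f(t_0)$. The following Python sketch, with ad hoc parameter values of our own choosing, does exactly that for the modulus of the kernel density:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Check: int delta1*b(t)/((mu - b(t))^2 + delta1^2) * f(t) dt
#          -> (pi/kappa) * f(t0)   as delta1 -> 0,
# with b(t) = exp(-kappa*t) and t0 = -log(mu)/kappa.
kappa, mu, delta1 = 1.3, 0.7, 1e-4
t0 = -np.log(mu) / kappa
f = lambda t: np.exp(-t ** 2)          # any smooth decaying test function
b = lambda t: np.exp(-kappa * t)

integrand = lambda t: delta1 * b(t) / ((mu - b(t)) ** 2 + delta1 ** 2) * f(t)
val, _ = quad(integrand, -40.0, 40.0, points=[t0], limit=400)
print(val, np.pi / kappa * f(t0))      # agree up to O(delta1) corrections
\end{verbatim}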
We may now collect the calculations from above and we find that
\begin{eqnarray*}
\nonumber I_{\delta_2}^{line} &=& -\frac{1}{2\pi i}\int_{\delta_2}^{\infty} \frac{d\mu}{\mu}e^{-\mu} [\det(I-K^{\rm sym}+\tfrac{1}{2}K^{\rm asym})-\det(I-K^{\rm sym}-\tfrac{1}{2}K^{\rm asym})]\\
&=& -\frac{1}{2\pi i}\int_{\delta_2}^{\infty} \frac{d\mu}{\mu}e^{-\mu} \det(I-K^{\rm sym})\mathrm{tr}\left((I-K^{\rm sym})^{-1}K^{\rm asym}\right)
\end{eqnarray*}
where both $K^{\rm sym}$ and $K^{\rm asym}$ act on $L^2(\kappa_T^{-1}a,\infty)$ and where we have used the fact that $K^{\rm asym}$ is rank one, together with the fact that for operators $A$ and $B$ with $B$ rank one,
\begin{equation*}
\det(I-A+B) = \det(I-A)\det(I+ (I-A)^{-1}B) = \det(I-A)\left[1+\mathrm{tr}\left((I-A)^{-1}B\right)\right].
\end{equation*}
As stated above, we have only shown the pointwise convergence of the kernels to $K^{\rm sym}$ and $K^{\rm asym}$. However, using the decay properties of the Airy function and the exponential decay of $\sigma$ this can be strengthened to trace-class convergence. We may now take $\delta_2$ to zero and find that
\begin{eqnarray*}\nonumber
F_T(s) & = & \lim_{\delta_2\rightarrow 0 } (I_{\delta_2}^{circ} + I_{\delta_2}^{line}) \\
&= &1-\frac{1}{2\pi i}\int_0^{\infty} \frac{d\mu}{\mu}e^{-\mu} \det(I-K^{\rm sym})\mathrm{tr}\left((I-K^{\rm sym})^{-1}K^{\rm asym}\right)
\end{eqnarray*}
with $K^{\rm sym}$ and $K^{\rm asym}$ as above acting on $L^2(\kappa_T^{-1}a,\infty)$ and where the integral is improper at zero. We can simplify our operators by changing variables, replacing $x$ by $x+t_0$ and $y$ by $y+t_0$. We can also change variables from $\mu$ to $e^{-r}$. With this in mind we redefine the operators $K^{\rm sym}$ and $K^{\rm asym}$ to act on $L^2(\kappa_T^{-1}(a-r),\infty)$ with kernels
\begin{eqnarray*}
K^{\rm sym}(x,y) &=& \mathrm{P.V.} \int \sigma(t)\ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt\\
\nonumber K^{\rm asym}(x,y) &=& \ensuremath{\mathrm{Ai}}(x)\ensuremath{\mathrm{Ai}}(y),
\end{eqnarray*}
where $\sigma(t) = \frac{1}{1-e^{-\kappa_T t}}$. In terms of these operators we have
\begin{equation*}
F_T(s) = 1-\int_{-\infty}^{\infty} e^{-e^{-r}} f(a-r) dr
\end{equation*}
where
\begin{equation*}
f(r) =\kappa_T^{-1} \det(I-K^{\rm sym})_{L^2(\kappa_T^{-1}r,\infty)}\mathrm{tr}\left((I-K^{\rm sym})^{-1}K^{\rm asym}\right)_{L^2(\kappa_T^{-1}r,\infty)}.
\end{equation*}
Calling $G(r)=e^{-e^{-r}}$ and observing that $K^{\rm sym}=K_{\sigma_T}$ and $K^{\rm asym}=\mathrm{P}_{\ensuremath{\mathrm{Ai}}}$, this completes the proof of the first part of the Gumbel convolution formula.

Turning now to the Hilbert transform formula, we may isolate the singularity of $\sigma_T(t)$ from the above kernel $K^{\rm sym}$ (or $K_{\sigma_T}$) as follows. Observe that we may write $\sigma_T(t)$ as
\begin{equation*}
\sigma_T(t) = \tilde\sigma_T(t) + \frac{1}{\kappa_T t}
\end{equation*}
where $\tilde\sigma_T(t)$ (given in equation (\ref{tilde_sigma_form})) is a smooth function, non-decreasing on the real line, with $\tilde\sigma_T(-\infty)=0$ and $\tilde\sigma_T(+\infty)=1$. Moreover, $\tilde\sigma_T^{\prime}$ is an approximate delta function with width $\kappa_T^{-1}= 2^{1/3}T^{-1/3}$. The principal value integral of the $\tilde\sigma_T(t)$ term can be replaced by a simple integral. The new term gives
\begin{equation*}
\mathrm{P.V.}\int \frac{1}{\kappa_T t}\ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)\,dt.
\end{equation*}
This is $\kappa_T^{-1}$ times the Hilbert transform of the product of Airy functions, which is explicitly computable \cite{Varlamov} with the result being
\begin{equation*}
\mathrm{P.V.}\int \frac{1}{\kappa_T t}\ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)\,dt = \kappa_T^{-1} \pi G_{\frac{x-y}{2}}(\tfrac{x+y}{2})
\end{equation*}
where $G_a(x)$ is given in equation (\ref{tilde_sigma_form}).
\section{Formulas for a class of generalized integrable integral operators}\label{int_int_op_sec}
Presently we will consider a certain class of Fredholm determinants and make two computations involving these determinants. The second of these computations closely follows the work of Tracy and Widom and is based on a similar calculation done in \cite{TWAiry}. In that case the operator in question is the Airy operator. We deal with the family of operators which arise in considering $F_{T}(s)$.

Consider the class of Fredholm determinants $\det(I-K)_{L^2(s,\infty)}$ with operator $K$ acting on $L^2(s,\infty)$ with kernel
\begin{equation}\label{gen_int_int_ker}
K(x,y) =\int_{-\infty}^{\infty} \sigma(t) \ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt,
\end{equation}
where $\sigma(t)$ is a function which is smooth except at a finite number of points at which it has bounded jumps and which approaches $0$ at $-\infty$ and $1$ at $\infty$, exponentially fast. These operators are, in a certain sense, generalizations of the class of integrable integral operators (see \cite{BorodinDeift}). The kernel can be expressed alternatively as
\begin{equation}\label{kernel}
K(x,y) = \int_{-\infty}^{\infty}\sigma'(t) \frac{\varphi(x+t) \psi(y+t) - \psi(x+t) \varphi(y+t)}{ x-y} dt,
\end{equation}
with $\varphi(x)=\textrm{Ai}(x)$ and $\psi(x)= \textrm{Ai}^{\prime}(x)$ and $\ensuremath{\mathrm{Ai}}(x)$ the Airy function. This, and the entire generalization we will now develop, is analogous to what is known for the Airy operator which is defined by its kernel $K_{\ensuremath{\mathrm{Ai}}}(x,y)$ on $L^2(-\infty,\infty)$
\begin{equation*}
K_{\ensuremath{\mathrm{Ai}}}(x,y) = \int_{-\infty}^{\infty} \chi(t) \ensuremath{\mathrm{Ai}}(x+t)\ensuremath{\mathrm{Ai}}(y+t)dt = \frac{\ensuremath{\mathrm{Ai}}(x)\ensuremath{\mathrm{Ai}}^\prime(y)-\ensuremath{\mathrm{Ai}}^\prime(x)\ensuremath{\mathrm{Ai}}(y)}{x-y},
\end{equation*}
where presently $\chi(t)=\mathbf{1}_{\{t\geq 0\}}$. Note that the $\sigma(t)$ in our main result is not exactly of this type. However, one can smooth out the $\sigma$, and apply the results of this section to obtain formulas, which then can be shown to converge to the desired formulas as the smoothing is removed. It is straightforward to control the convergence in terms of trace norms, so we will not provide further details here.
\subsection{Symmetrized determinant expression}\label{symmetrized_sec}
It is well known that
\begin{equation*}
\det(I-K_{\ensuremath{\mathrm{Ai}}})_{L^2(s,\infty)} = \det(I-\sqrt{\chi_{s}}K_{\ensuremath{\mathrm{Ai}}}\sqrt{\chi_{s}})_{L^2(-\infty,\infty)}
\end{equation*}
where $\chi_{s}$ is the multiplication operator by $\mathbf{1}_{\{\bullet\geq s\}}$ (i.e., $(\chi_s f)(x) = \mathbf{1}(x\geq s)f(x)$). The following proposition shows that for our class of determinants the same relation holds, and provides the proof of formula (\ref{sym_F_eqn}) of Theorem \ref{main_result_thm}.
\begin{proposition}
For $K$ in the class of operators with kernel as in (\ref{gen_int_int_ker}),
\begin{equation*}
\det(I-K)_{L^2(s,\infty)} = \det(I-\hat{K}_s)_{L^2(-\infty,\infty)},
\end{equation*}
where the kernel for $\hat{K}_s$ is given by
\begin{equation*}
\hat{K}_s(x,y) = \sqrt{\sigma(x-s)}\,K_{\ensuremath{\mathrm{Ai}}}(x,y)\, \sqrt{\sigma(y-s)}.
\end{equation*}
\end{proposition}
\begin{proof}
Define $L_s: L^2(s,\infty)\to L^2(-\infty,\infty)$ by
\begin{equation*}
(L_sf)(x) = \int_s^\infty {\rm Ai}(x+y)f(y)dy.
\end{equation*}
Also define $\sigma: L^2(-\infty,\infty)\to L^2(-\infty,\infty)$ by $(\sigma f)(x) = \sigma(x) f(x)$, and similarly $\chi_s: L^2(-\infty,\infty)\to L^2(s,\infty)$ by $(\chi_s f)(x) = \mathbf{1}(x\ge s) f(x) $. Then
\begin{equation*}
K = \chi_s L_{-\infty}\sigma L_s.
\end{equation*}
We have
\begin{equation*}
\det(I - K)_{L^2(s,\infty)} = \det(I - \tilde K_s)_{L^2(-\infty,\infty)}
\end{equation*}
where
\begin{equation*}
\tilde K_s = \sqrt{\sigma} L_s \chi_s L_{-\infty}\sqrt{\sigma}.
\end{equation*}
The key point is that
\begin{equation*}
(L_s \chi_sL_{-\infty}) (x,y) = K_{\ensuremath{\mathrm{Ai}}}(x+s,y+s)
\end{equation*}
where $K_{\ensuremath{\mathrm{Ai}}}$ is the Airy kernel, so that shifting both arguments by $s$ turns $\tilde K_s$ into $\hat K_s$. One can also see now that this operator is self-adjoint on the real line.
\end{proof}
\subsection{Painlev\'{e} II type integro-differential equation}\label{integro_differential}
We now develop an integro-differential equation expression for $\det(I-K)_{L^2(s,\infty)}$. This provides the proof of Proposition \ref{prop2}. Recall that $F_{\mathrm{GUE}}(s)=\det(I-K_{\ensuremath{\mathrm{Ai}}})_{L^2(s,\infty)}$ can be expressed in terms of a non-linear version of the Airy function, known as Painlev\'{e} II, as follows. Let $q$ be the unique (Hastings-McLeod) solution to Painlev\'{e} II:
\begin{equation*}
\frac{d^2}{ds^2}q(s) = (s+2q^2(s))q(s)
\end{equation*}
subject to $q(s)\sim \ensuremath{\mathrm{Ai}}(s)$ as $s\rightarrow \infty$. Then
\begin{equation*}
\frac{d^2}{ds^2} \log \det(I-K_{\ensuremath{\mathrm{Ai}}})_{L^2(s,\infty)} = -q^2(s).
\end{equation*}
From this one shows that
\begin{equation*}
F_{\mathrm{GUE}}(s) = \exp \left( -\int_{s}^{\infty} (x-s) q^2(x)dx\right).
\end{equation*}
See \cite{TWAiry} for details. We now show that an analogous expression exists for the class of operators described in (\ref{gen_int_int_ker}).
\begin{proposition}\label{pII_prop}
For $K$ in the class of operators with kernel as in (\ref{gen_int_int_ker}), let $q_t(s)$ be the solution to
\begin{equation}\label{5.15}
\frac{d^2}{ds^2} q_t(s) = \left(s+t + 2\int_{-\infty}^{\infty} \sigma^\prime(r)q_r^2(s)dr\right) q_t(s)
\end{equation}
subject to $q_t(s)\sim \ensuremath{\mathrm{Ai}}(t+s)$ as $s\rightarrow \infty$. Then we have
\begin{eqnarray}\label{sicks}
\frac{d^2}{ds^2} \log\det(I-K)_{L^2(s,\infty)} &=& -\int_{-\infty}^{\infty} \sigma^\prime(t)q_t^2(s)dt,\\
\nonumber \det(I-K)_{L^2(s,\infty)} &=& \exp\left(-\int_s^{\infty}(x-s)\int_{-\infty}^{\infty} \sigma^\prime(t)q_t^2(x)dtdx\right).
\end{eqnarray}
\end{proposition}
\begin{proof}
As mentioned, we follow the work of Tracy and Widom \cite{TWAiry} very closely, and make the necessary modifications to our present setting. Consider an operator $K$ of the type described in (\ref{gen_int_int_ker}). We use the notation $K\doteq K(x,y)$ to indicate that operator $K$ has kernel $K(x,y)$.
It will be convenient to think of our operator $K$ as acting, not on $(s,\infty)$, but on $(-\infty,\infty)$ and to have kernel
\begin{equation*}
K(x,y) \chi_s(y)
\end{equation*}
where $\chi$ is the characteristic function of $(s,\infty)$. Since the integral operator $K$ is trace class and depends smoothly on the parameter $s$, we have the well-known formula
\begin{equation}\label{dLog}
\frac{d}{ds}\log\det\left(I-K\right)=-\textrm{tr}\left(\left(I-K\right)^{-1} \frac{\partial K}{\partial s}\right).
\end{equation}
By calculus
\begin{equation}\label{Kderiv}
\frac{\partial K}{\partial s}\doteq -K(x,s)\delta(y-s).
\end{equation}
Substituting this into the above expression gives
\begin{equation*}
\frac{d}{ds} \log\det\left(I-K\right)= R(s,s)
\end{equation*}
where $R(x,y)$ is the resolvent kernel of $K$, i.e.\ $R=(I-K)^{-1}K\doteq R(x,y)$. The resolvent kernel $R(x,y)$ is smooth in $x$ but discontinuous in $y$ at $y=s$. The quantity $R(s,s)$ is interpreted to mean the limit of $R(s,y)$ as $y$ goes to $s$ from above:
\begin{equation*}
\lim_{y\rightarrow s^+}R(s,y).
\end{equation*}
\subsubsection{Representation for $R(x,y)$}
If $M$ denotes the multiplication operator, $(Mf)(x)=x f(x)$, then
\begin{eqnarray*}
\nonumber\left[M,K\right] & \doteq & x K(x,y)- K(x,y) y = (x-y) K(x,y) \\
&=& \int_{-\infty}^{\infty} \sigma'(t)\{ \varphi(x+t) \psi(y+t) - \psi(x+t) \varphi(y+t)\} dt
\end{eqnarray*}
where $\varphi(x) = {\rm Ai}(x)$ and $\psi(x)= {\rm Ai}'(x)$. As an operator equation this is
\begin{equation*}
\left[M,K\right]=\int_{-\infty}^{\infty} \sigma'(t) \{\tau_t\varphi\otimes \tau_t\psi - \tau_t\psi\otimes \tau_t\varphi\} dt,
\end{equation*}
where $a\otimes b\doteq a(x) b(y)$ and $\left[\cdot,\cdot\right]$ denotes the commutator. The operator $\tau_{t}$ acts as $(\tau_{t}f)(x) = f(x+t)$. Thus
\begin{eqnarray}\label{comm1}
\left[M,\left(I-K\right)^{-1}\right]&=&\left(I-K\right)^{-1} \left[M,K\right] \left(I-K\right)^{-1} \nonumber \\
&=&\int\sigma'(t)\{ \left(I-K\right)^{-1}\left(\tau_t\varphi\otimes \tau_t\psi - \tau_t\psi\otimes \tau_t\varphi\right) \left(I-K\right)^{-1}\} dt\nonumber\\
&=&\int\sigma'(t)\{ Q_t\otimes P_t - P_t\otimes Q_t \} dt,
\end{eqnarray}
where we have introduced
\begin{equation*}
Q_t(x;s)=Q_t(x)= \left(I-K\right)^{-1} \tau_t\varphi \ \ \ \textrm{and} \ \ \ P_t(x;s)=P_t(x)= \left(I-K\right)^{-1} \tau_t\psi.
\end{equation*}
Note the important point here that, as $K$ is self-adjoint, we can use the identity $(\tau_t\varphi\otimes \tau_t\psi)(I-K)^{-1}= \tau_t\varphi\otimes (I-K)^{-1}\tau_t\psi$. On the other hand since $(I-K)^{-1}\doteq \rho(x,y)=\delta(x-y)+R(x,y)$,
\begin{equation}\label{comm2}
\left[M,\left(I-K\right)^{-1}\right]\doteq (x-y)\rho(x,y)=(x-y) R(x,y).
\end{equation}
Comparing (\ref{comm1}) and (\ref{comm2}) we see that
\begin{equation*}
R(x,y) = \int_{-\infty}^{\infty} \sigma'(t) \{ \frac{Q_t(x) P_t(y) - P_t(x) Q_t(y)}{x- y}\} dt, \ \ x,y\in (s,\infty).
\end{equation*}
Taking $y\rightarrow x$ gives
\begin{equation*}
R(x,x)= \int_{-\infty}^{\infty} \sigma'(t) \{Q_t^\prime(x) P_t(x) - P_t^\prime(x) Q_t(x)\} dt
\end{equation*}
where the ${}^\prime$ denotes differentiation with respect to $x$. Introducing
\begin{equation*}
q_t(s)=Q_t(s;s) \ \ \ \textrm{and} \ \ \ p_t(s) = P_t(s;s),
\end{equation*}
we have
\begin{equation}\label{RDiag}
R(s,s) = \int_{-\infty}^{\infty} \sigma'(t) \{ Q_t^\prime(s;s) p_t(s) - P_t^\prime(s;s) q_t(s)\}\, dt.
\end{equation}
\subsubsection{Formulas for $Q_t^\prime(x)$ and $P_t^\prime(x)$}
As we just saw, we need expressions for $Q_t^\prime(x)$ and $P_t^\prime(x)$. If $D$ denotes the differentiation operator, $d/dx$, then
\begin{eqnarray}\label{Qderiv1}
Q_t^\prime(x;s)&=& D \left(I-K\right)^{-1} \tau_t\varphi = \left(I-K\right)^{-1} D\tau_t\varphi + \left[D,\left(I-K\right)^{-1}\right]\tau_t\varphi\nonumber\\
&=& \left(I-K\right)^{-1} \tau_t\psi + \left[D,\left(I-K\right)^{-1}\right]\tau_t\varphi\nonumber\\
&=& P_t(x) + \left[D,\left(I-K\right)^{-1}\right]\tau_t\varphi.
\end{eqnarray}
We need the commutator
\begin{equation*}
\left[D,\left(I-K\right)^{-1}\right]=\left(I-K\right)^{-1} \left[D,K\right] \left(I-K\right)^{-1}.
\end{equation*}
Integration by parts shows
\begin{equation*}
\left[D,K\right] \doteq \left( \frac{\partial K}{\partial x} + \frac{\partial K}{\partial y}\right) + K(x,s) \delta(y-s).
\end{equation*}
The $\delta$ function comes from differentiating the characteristic function $\chi$. Using the specific form for $\varphi$ and $\psi$ ($\varphi^\prime=\psi$, $\psi^\prime=x\varphi$),
\begin{equation*}
\left( \frac{\partial K}{\partial x} + \frac{\partial K}{\partial y}\right) = -\int_{-\infty}^{\infty}\sigma'(t)\tau_{t}\varphi(x) \tau_{t}\varphi(y)dt.
\end{equation*}
Thus
\begin{equation}\label{DComm}
\left[D,\left(I-K\right)^{-1}\right]\doteq - \int_{-\infty}^{\infty}\sigma'(t) Q_t(x) Q_t(y)dt + R(x,s) \rho(s,y).
\end{equation}
(Recall $(I-K)^{-1}\doteq \rho(x,y)$.) We now use this in (\ref{Qderiv1})
\begin{eqnarray*}
Q_t^\prime(x;s)&=&P_t(x) -\int_{-\infty}^{\infty} \sigma'(\tilde t) Q_{\tilde t}(x) \left(Q_{\tilde t},\tau_t\varphi\right) d\tilde t+ R(x,s) q_t(s) \\
\nonumber&=& P_t(x) - \int_{-\infty}^{\infty} \sigma'(\tilde t)Q_{\tilde t}(x) u_{t,\tilde t}(s)\, d\tilde t + R(x,s) q_t(s)
\end{eqnarray*}
where the inner product $\left(Q_{\tilde t},\tau_t\varphi\right)$ is denoted by $u_{t,\tilde t}(s)$ and $u_{t,\tilde t}(s)=u_{\tilde t,t}(s)$. Evaluating at $x=s$ gives
\begin{equation*}
Q_t^\prime(s;s) = p_t(s) - \int_{-\infty}^{\infty} \sigma'(\tilde t)q_{\tilde t}(s) u_{t,\tilde t}(s)\,d\tilde t +R(s,s) q_t(s).
\end{equation*}
We now apply the same procedure to compute $P_t^\prime$, encountering the one new feature that, since $\psi^\prime(x)=x\varphi(x)$, we need to introduce an additional commutator term.
\begin{eqnarray*}
\nonumber P_t^\prime(x;s)&=& D \left(I-K\right)^{-1} \tau_t\psi= \left(I-K\right)^{-1} D\tau_t\psi + \left[D,\left(I-K\right)^{-1}\right]\tau_t\psi\\
&=& (M+t) \left(I-K\right)^{-1} \tau_t\varphi + \left[\left(I-K\right)^{-1},M\right]\tau_t\varphi+ \left[D,\left(I-K\right)^{-1}\right]\tau_t\psi.
\end{eqnarray*}
Writing it explicitly, we get $(x+t) Q_t(x) +R(x,s) p_t(s) + \Xi$ where
\begin{eqnarray*}
\nonumber \Xi &=& \int_{-\infty}^{\infty} \sigma'(\tilde t)\left(P_{\tilde t}\otimes Q_{\tilde t}-Q_{\tilde t}\otimes P_{\tilde t}\right)\tau_t\varphi d\tilde t-\int_{-\infty}^{\infty}\sigma'(\tilde t) (Q_{\tilde t}\otimes Q_{\tilde t})\tau_t\psi d\tilde t\\
&=& \int_{-\infty}^{\infty} \sigma'(\tilde t)\left\{ P_{\tilde t}(x)\left(Q_{\tilde t},\tau_t\varphi\right) - Q_{\tilde t}(x) \left(P_{\tilde t},\tau_t\varphi\right)- Q_{\tilde t}(x) \left(Q_{\tilde t},\tau_t\psi\right)\right\}d\tilde t\\
\nonumber&=& \int_{-\infty}^{\infty} \sigma'(\tilde t)\left\{ P_{\tilde t}(x)u_{t,\tilde t}(s)- Q_{\tilde t}(x) v_{t,\tilde t}(s)- Q_{\tilde t}(x) v_{\tilde t,t}(s)\right\}d\tilde t,
\end{eqnarray*}
with the notation $v_{t,\tilde t}(s)=\left(P_{\tilde t},\tau_t\varphi\right)=\left(\tau_{\tilde t}\psi,Q_t\right)$. Evaluating at $x=s$ gives
\begin{eqnarray*}
P_t^{\prime}(s;s) & = & (s+t) q_t(s) + \int_{-\infty}^{\infty} \sigma'(\tilde t)\left\{ p_{\tilde t}(s)u_{t,\tilde t}(s)- q_{\tilde t}(s) v_{t,\tilde t}(s)- q_{\tilde t}(s) v_{\tilde t,t}(s)\right\}d\tilde t \nonumber \\&& +R(s,s) p_t(s).
\end{eqnarray*}
Using this and the expression for $Q_t^\prime(s;s)$ in (\ref{RDiag}) gives,
\begin{equation*}
R(s,s)= \int_{-\infty}^{\infty} \sigma'(t) \Big\{ p_t^2-(s+t) q_t^2 -\int_{-\infty}^{\infty} \sigma'(\tilde t) \big\{ [q_{\tilde t} p_t+p_{\tilde t} q_t] u_{t,\tilde t} -q_{\tilde t}q_{ t}[ v_{t,\tilde t}+ v_{\tilde t, t}]\big\}\,d\tilde t \Big\}\,dt.
\end{equation*}
\subsubsection{First order equations for $q$, $p$, $u$ and $v$}
By the chain rule
\begin{equation}\label{qDeriv}
\frac{dq_t}{ds} = \left( \frac{\partial}{\partial x}+\frac{\partial}{\partial s}\right) Q_t(x;s)\bigg\vert_{x=s}.
\end{equation}
We have already computed the partial of $Q(x;s)$ with respect to $x$. The partial with respect to $s$ is, using (\ref{Kderiv}),
\begin{equation*}
\frac{\partial}{\partial s} Q_t(x;s)= \left(I-K\right)^{-1} \frac{\partial K}{\partial s} \left(I-K\right)^{-1}\tau_t \varphi= - R(x,s) q_t(s).
\end{equation*}
Adding the two partial derivatives and evaluating at $x=s$ gives,
\begin{equation}\label{qEqn}
\frac{dq_t}{ds} = p_t - \int_{-\infty}^{\infty} \sigma'(\tilde t)q_{\tilde t} u_{t,\tilde t} d\tilde t.
\end{equation}
A similar calculation gives,
\begin{equation*}
\frac{dp_t}{ds}= (s+t) q_t + \int_{-\infty}^{\infty} \sigma'(\tilde t)\left\{ p_{\tilde t}u_{t,\tilde t}- q_{\tilde t} [v_{t,\tilde t}+ v_{\tilde t,t}]\right\}d\tilde t.
\end{equation*}
We derive first order differential equations for $u$ and $v$ by differentiating the inner products. Since $u_{t,\tilde t}(s) = \int_s^\infty \tau_t\varphi(x) Q_{\tilde t}(x;s)\, dx$, we get
\begin{eqnarray*}
\frac{du_{t,\tilde t}}{ds}&=& -\tau_t\varphi(s) q_{\tilde t}(s) + \int_s^\infty \tau_t\varphi(x) \frac{\partial Q_{\tilde t}(x;s)}{\partial s}\, dx \\
&=& -\left(\tau_t\varphi(s)+\int_s^\infty R(s,x) \tau_t\varphi(x)\,dx\right) q_{\tilde t}(s)\\
&=& -\left[\left(I-K\right)^{-1} \tau_t\varphi\right](s) \, q_{\tilde t}(s)= - q_tq_{\tilde t}.
\end{eqnarray*}
Similarly, $\frac{dv_{t,\tilde t}}{ds} = - q_tp_{\tilde t}.$
\subsubsection{Integro-differential equation for $q_t$}
From the first order differential equations for $q_t$, $u_{t,\tilde t}$ and $v_{t,\tilde t}$ it follows that the derivative in $s$ of
$\int_{-\infty}^{\infty} \sigma'(t') u_{t,t'}u_{t',\tilde t} dt' - [ v_{t,\tilde t}+v_{\tilde t, t}] -q_tq_{\tilde t} $ is zero.
Examining the behavior near $s=\infty$ to check that the constant of integration is zero then gives,
\begin{equation*}
\int_{-\infty}^{\infty} \sigma'(t') u_{t,t'}u_{t',\tilde t} dt' - [ v_{t,\tilde t}+v_{\tilde t, t}]=q_tq_{\tilde t} ,
\end{equation*}
a \textit{first integral}. Differentiate (\ref{qEqn}) with respect to $s$, to get
\begin{equation*}
q_t''= (s+t)q_t + \int_{-\infty}^{\infty} \sigma'(\tilde t) \Big\{ \int_{-\infty}^{\infty} \sigma'(t') q_{t'} u_{\tilde t, t'} dt' u_{t,\tilde t} - q_{\tilde t}[ v_{t,\tilde t} + v_{\tilde t, t} ] + q_t q_{\tilde t}^2 \Big\} d\tilde t
\end{equation*}
and then use the first integral to deduce that $q_t$ satisfies (\ref{5.15}). Since the kernel of $[D,(I-K)^{-1}]$ is $(\partial/\partial x+\partial/\partial y)R(x,y)$, (\ref{DComm}) says
\begin{equation*}
\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)R(x,y)=- \int_{-\infty}^{\infty}\sigma'(t) Q_t(x) Q_t(y)dt + R(x,s) \rho(s,y).
\end{equation*}
In computing $\partial Q(x;s)/\partial s$ we showed that
\begin{equation*}
\frac{\partial}{\partial s} \left(I-K\right)^{-1}\doteq \frac{\partial}{\partial s}R(x,y) = -R(x,s)\rho(s,y).
\end{equation*}
Adding these two expressions,
\begin{equation*}
\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}+ \frac{\partial}{\partial s}\right)R(x,y)=- \int_{-\infty}^{\infty}\sigma'(t) Q_t(x) Q_t(y)dt ,
\end{equation*}
and then evaluating at $x=y=s$ gives
\begin{equation*}
\frac{d}{ds}R(s,s)=- \int_{-\infty}^{\infty}\sigma'(t)q_t^2(s) dt.
\end{equation*}
Hence $ q_t^{\prime\prime}=\Big\{ s+t - 2R'\Big\} q_t$, where $R'=\frac{d}{ds}R(s,s)$. Integrating and recalling (\ref{dLog}) gives,
\begin{equation*}
\frac{d}{ds}\log\det\left(I-K\right)=\int_s^\infty \int_{-\infty}^{\infty}\sigma'(t)q_t^2(x) dt \, dx;
\end{equation*}
and hence,
\begin{equation*}
\log\det\left(I-K\right)=-\int_s^\infty\left(\int_y^\infty \int_{-\infty}^{\infty}\sigma'(t)q_t^2(x) dt\,dx\right)\, dy.
\end{equation*}
Rearranging gives (\ref{sicks}). This completes the proof of Proposition \ref{pII_prop}.
\end{proof}
\section{Proofs of Corollaries to Theorem \ref{main_result_thm}}\label{corollary_sec}
\subsection{Large time $F_{GUE}$ asymptotics (Proof of Corollary \ref{TW})}\label{twasymp}
We describe how to turn the idea described after Corollary \ref{TW} into a rigorous proof. The first step is to cut the $\tilde\mu$ contour off outside of a compact region around the origin. Proposition \ref{reinclude_mu_lemma} shows that for a fixed $T$, the tail of the $\tilde\mu$ integrand is exponentially decaying in $\tilde\mu$. A quick inspection of the proof shows that increasing $T$ only further speeds up the decay. This justifies our ability to cut the contour at minimal cost. Of course, the larger the compact region, the smaller the cost (which goes to zero). We may now assume that $\tilde\mu$ is on a compact region. We will show the following critical point: that $\det(I-K_{a}^{\csc})_{L^2(\tilde\Gamma_{\eta})}$ converges (uniformly in $\tilde\mu$) to the Fredholm determinant with kernel
\begin{equation}\label{limiting_kernel}
\int_{\Gamma_{\tilde\zeta}}e^{-\frac{1}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}s (\tilde\zeta-\tilde\eta')} \frac{d\tilde\zeta}{(\tilde\zeta-\tilde\eta')(\tilde\zeta-\tilde\eta)}.
\end{equation}
This claim shows that we approach, uniformly, a limit which is independent of $\tilde\mu$.
Therefore, for large enough $T$ we may make the integral arbitrarily close to the integral of $\frac{e^{-\tilde\mu}}{\tilde\mu}$ times the above determinant (which is independent of $\tilde\mu$), over the cutoff $\tilde\mu$ contour. The $\tilde\mu$ integral approaches $1$ as the contour cutoff moves towards infinity, and the determinant is equal to $F_{\mathrm{GUE}}(2^{1/3}s)$ which proves the corollary. A remark worth making is that the complex contours on which we are dealing are not the same as those of \cite{TW3}; however, owing to the decay of the kernel and the integrand (in the kernel definition), changing the contours to those of \cite{TW3} has no effect on the determinant.

All that remains, then, is to prove the uniform convergence of the Fredholm determinant claimed above. The proof of the claim follows in a rather standard manner. We start by taking a change of variables in the equation for $K_{a}^{\csc}$ in which we replace $\tilde\zeta$ by $T^{-1/3}\tilde\zeta$ and likewise for $\tilde\eta$ and $\tilde\eta'$. The resulting kernel is then given by
\begin{equation*}
T^{-1/3} \int_{\tilde\Gamma_{\zeta}} e^{-\frac{1}{3}(\tilde\zeta^3-\tilde\eta'^3)+2^{1/3}(s+a')(\tilde\zeta-\tilde\eta')} \frac{ \pi 2^{1/3}(-\tilde\mu)^{-2^{1/3}T^{-1/3}(\tilde\zeta-\tilde\eta')}}{\sin(\pi 2^{1/3}T^{-1/3}(\tilde\zeta-\tilde\eta'))} \frac{d\tilde\zeta}{\tilde\zeta-\tilde\eta}.
\end{equation*}
Notice that the $L^2$ space as well as the contour of $\tilde\zeta$ integration should have been dilated by a factor of $T^{1/3}$. However, it is possible (using Lemma \ref{TWprop1}) to show that we may deform these contours back to their original positions without changing the value of the determinant. We have also used the fact that $a=T^{1/3}s-\log\sqrt{2\pi T}$ and hence $T^{-1/3}a = s+a'$ where $a'=-T^{-1/3}\log\sqrt{2\pi T}$.

We may now factor this, just as in Proposition \ref{reinclude_mu_lemma}, as $AB$ and likewise we may factor our limiting kernel (\ref{limiting_kernel}) as $K'=A'B'$ where
\begin{eqnarray*}
&& A(\tilde\zeta,\tilde\eta) = \frac{e^{-|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}}{\tilde\zeta-\tilde\eta}\\
\nonumber B(\tilde\eta,\tilde\zeta) &= &e^{|\ensuremath{\mathrm{Im}}(\tilde\zeta)|} e^{-\frac{1}{3}(\tilde\zeta^3-\tilde\eta^3)+2^{1/3}(s+a')(\tilde\zeta-\tilde\eta)} \frac{\pi2^{1/3}T^{-1/3} (-\tilde\mu)^{-2^{1/3}T^{-1/3}(\tilde\zeta-\tilde\eta)}}{ \sin(\pi 2^{1/3}T^{-1/3}(\tilde\zeta-\tilde\eta))}
\end{eqnarray*}
and similarly
\begin{eqnarray*}
A'(\tilde\zeta,\tilde\eta) &=& \frac{e^{-|\ensuremath{\mathrm{Im}}(\tilde\zeta)|}}{\tilde\zeta-\tilde\eta}\\
\nonumber B'(\tilde\eta,\tilde\zeta) &=& e^{|\ensuremath{\mathrm{Im}}(\tilde\zeta)|} e^{-\frac{1}{3}(\tilde\zeta^3-\tilde\eta^3)+2^{1/3}s(\tilde\zeta-\tilde\eta)} \frac{1}{\tilde\zeta-\tilde\eta}.
\end{eqnarray*}
Notice that $A=A'$. Now we use the estimate
\begin{equation*}
|\det(I-K_a^{\csc})-\det(I-K')|\leq ||K_a^{\csc}-K'||_1 \exp\{1+||K_a^{\csc}||_1+||K'||_1\}.
\end{equation*}
Observe that $||K_a^{\csc}-K'||_1\leq ||AB-AB'||_1\leq ||A||_2||B-B'||_2$. Therefore it suffices to show that $||B-B'||_2$ goes to zero (the boundedness of the trace norms in the exponential also follows from this). This is an explicit calculation and is easily made by taking into account the decay of the exponential terms, and the fact that $a'$ goes to zero. The uniformity of this estimate for compact sets of $\tilde\mu$ follows as well. This completes the proof of Corollary \ref{TW}.
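The limit $F_{\mathrm{GUE}}$ appearing in the corollary is also easy to evaluate numerically via the Painlev\'{e} II representation recalled in Section \ref{integro_differential}. The following Python sketch is purely illustrative (the starting point, integration range and tolerances are ad hoc choices of ours): it integrates the Hastings-McLeod solution down from its Airy asymptotics and evaluates $F_{\mathrm{GUE}}(s)=\exp(-\int_s^\infty(x-s)q^2(x)dx)$.
\begin{verbatim}
import numpy as np
from scipy.special import airy
from scipy.integrate import solve_ivp, quad

# Hastings-McLeod solution of Painleve II: q'' = s*q + 2*q^3,
# with q(s) ~ Ai(s) as s -> +infinity.
s0 = 8.0                              # start far right, where q ~ Ai
y0 = [airy(s0)[0], airy(s0)[1]]       # (Ai(s0), Ai'(s0))

def painleve_ii(s, y):
    q, qp = y
    return [qp, s * q + 2.0 * q ** 3]

sol = solve_ivp(painleve_ii, [s0, -5.0], y0, dense_output=True,
                rtol=1e-10, atol=1e-12)

def F_gue(s):
    # F_GUE(s) = exp(-int_s^infty (x-s) q(x)^2 dx); beyond s0 we
    # replace q by Ai, which it matches there to high accuracy.
    main, _ = quad(lambda x: (x - s) * sol.sol(x)[0] ** 2, s, s0)
    tail, _ = quad(lambda x: (x - s) * airy(x)[0] ** 2, s0, np.inf)
    return np.exp(-(main + tail))

print(F_gue(-2.0), F_gue(0.0))        # increasing in s, values in (0,1)
\end{verbatim}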
\subsection{Small time Gaussian asymptotics }\label{Gaussian_asymptotics} \begin{proposition} As $T\beta^{4} \searrow0$, $2^{1/2}\pi^{-1/4}\beta^{-1}T^{-1/4}\mathcal{F}_\beta(T,X)$ converges in distribution to a standard Gaussian. \end{proposition} \begin{proof} We have from (\ref{nine}), \begin{equation*} \mathcal{F}_\beta(T,X) = \log \left(1+ \beta T^{1/4} G(T,X) + \beta^2 T^{1/2} \Omega(\beta,T,X)\right) \end{equation*} where \begin{equation*}G(T,X) = T^{-1/4}\int_0^T \int_{-\infty}^\infty \frac{ p(T-S,X-Y)p(S,Y)}{p(T,X)} \mathscr{W}(dY,dS) \end{equation*} and \begin{equation*} \Omega(\beta,T,X) = T^{-1/2} \sum_{n=2}^\infty \int_{\Delta_n(T)} \int_{\mathbb R^n}(-\beta)^{n-2} p_{ t_1, \ldots, t_n}(x_1,\ldots,x_n) \mathscr{W} (dt_1 dx_1) \cdots \mathscr{W} (dt_n dx_n). \end{equation*} It is elementary to show that for each $T_0<\infty$ there is a $C=C(T_0)<\infty$ such that, for $T<T_0$ \begin{equation*} E[\Omega^2(\beta,T,X)] \le C. \end{equation*} $G(T,X)$ is Gaussian and \begin{equation*} E[ G^2(T,X)] = T^{-1/2}\int_0^T \int_{-\infty}^\infty \frac{ p^2(T-S,X-Y)p^2(S,Y)}{p^2(T,X)} dYdS = \frac12 \sqrt{\pi }. \end{equation*} Hence by Chebyshev's inequality, \begin{eqnarray*} F_T(2^{-1/2}\pi^{1/4} \beta T^{1/4} s) & = & P( \beta T^{1/4} G(T,X) + \beta^2 T^{1/2} \Omega(\beta,T,X)\le e^{2^{-1/2}\pi^{1/4} \beta T^{1/4} s}-1) \nonumber \\ & = & \int_{-\infty}^s \frac{e^{-x^2/2}}{\sqrt{2\pi}} dx + O( \beta T^{1/4}).\end{eqnarray*} \end{proof}
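The constant $\frac12\sqrt{\pi}$ in the variance computation above is easy to confirm by direct quadrature. A minimal Python sketch (our own illustration, assuming the heat kernel normalization $p(t,x)=(2\pi t)^{-1/2}e^{-x^2/2t}$ used here):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p(t, x):
    # Gaussian heat kernel for d/dt = (1/2) d^2/dx^2
    return np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

T, X = 1.0, 0.0

def inner(s):
    g = lambda y: (p(T - s, X - y) * p(s, y) / p(T, X)) ** 2
    val, _ = quad(g, -np.inf, np.inf)
    return val

# E[G^2] = T^{-1/2} * int_0^T inner(s) ds; the (s(T-s))^{-1/2}
# endpoint singularity is integrable, so quad can handle it.
val, _ = quad(inner, 0.0, T)
print(val / np.sqrt(T), 0.5 * np.sqrt(np.pi))   # both ~ 0.8862
\end{verbatim}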
\section{Introduction}
Autonomic Computing (AC) is an emerging field for the development of large-scale self-managing complex systems \cite{ac-book}. The idea behind it is that complex systems such as spacecraft can autonomously manage themselves and deal with dynamic requirements and unanticipated threats. NASA is currently approaching AC with interest, recognizing in its concepts a promising approach to developing a new class of space exploration missions, where spacecraft should be independent, autonomous, and smart. Here, AC software makes spacecraft autonomic systems capable of planning and executing many activities onboard the spacecraft to meet the requirements of changing objectives and harsh external conditions. Examples of such AC-based unmanned missions are the Autonomous Nano-Technology Swarm (ANTS) concept mission \cite{nasa-swrm-msns} and the Deep Space One mission \cite{ac-book}. It is important to mention, though, that the development of systems with such large-scale automation calls for sophisticated software, which requires new development approaches and new verification techniques. Practice has shown that traditional development methods cannot guarantee software reliability and prevent software failures, which is very important in complex systems such as spacecraft where software errors can be very expensive and even catastrophic (e.g., the malfunction in the control software of Ariane-5 \cite{model-check}). Thus, formal methods need to be employed in the development of autonomic spacecraft systems. When employed correctly, formal methods have proven to be an important technique that ensures the quality of software \cite{ten-cmmndmnts}. In the course of this research, to develop models for autonomic space exploration missions, we employ the ASSL (Autonomic System Specification Language) formal method \cite{assl-book}, \cite{assl-computer}. Conceptually, ASSL is an AC-dedicated framework providing a powerful formal notation and computational tools that help developers with problem formation, system design, system analysis and evaluation, and system implementation.
\section{Preliminaries}
\subsection{Targeted NASA Missions}
Both the NASA ANTS and NASA Voyager missions are targeted by this research. Other space-exploration missions, such as the NASA Mars Rover and ESA's Herschel, are of interest as well.
\subsubsection{NASA ANTS}
The ANTS (Autonomous Nano-Technology Swarm) concept sub-mission PAM (Prospecting Asteroids Mission) is a novel approach to asteroid-belt resource exploration. ANTS provides extremely high autonomy, minimal communication requirements to Earth, and a set of very small explorers with a few consumables \cite{nasa-swrm-msns}. The explorers forming the swarm are pico-class, low-power, and low-weight spacecraft, yet capable of operating as fully autonomous and adaptable agents. There are three classes of ANTS spacecraft: \textit{rulers}, \textit{messengers} and \textit{workers}. By grouping them in certain ways, ANTS forms teams that explore particular asteroids. The internal organization of a team depends on the task to be performed and on the current environmental conditions. In general, each team has a group leader (ruler), one or more messengers, and a number of workers carrying a specialized instrument. The messengers are needed to connect the team members when they cannot connect directly.
\subsubsection{NASA Voyager}
The NASA Voyager Mission \cite{nasa-voyager} was designed for exploration of the Solar System.
The mission started in 1977, when the twin spacecraft Voyager I and Voyager II were launched (cf. Figure \ref{fig:voyager}). The original mission objectives were to explore the outer planets of the Solar System. As the Voyagers flew across the Solar System, they took pictures of planets and their satellites and performed close-up studies of Jupiter, Saturn, Uranus, and Neptune.
\begin{figure}[htp]
\begin{centering}
\includegraphics[width=0.6\textwidth]{Images/voyager.png}
\caption{Voyager Spacecraft \cite{nasa-voyager}}
\label{fig:voyager}
\end{centering}
\end{figure}
After successfully accomplishing their initial mission, both Voyagers are now on an extended mission, dubbed the ``Voyager Interstellar Mission''. This mission is an attempt to chart the heliopause boundary, where the solar winds and solar magnetic fields meet the so-called \textit{interstellar medium} \cite{nasa-voyager-intrstllr}.
\subsection{ASSL}
The ASSL framework \cite{assl-book}, \cite{assl-computer} provides a powerful formal notation and suitable mature tool support that allow ASSL specifications to be edited and validated and Java code to be generated from any valid specification. ASSL is based on a specification model exposed over hierarchically organized formalization tiers. This specification model is intended to provide both infrastructure elements and mechanisms needed by an autonomic system (AS). The latter is considered as being composed of special \textit{autonomic elements} (AEs) interacting over \textit{interaction protocols}, whose specification is distributed among the ASSL tiers. Note that each tier is intended to describe different aspects of the AS in question, such as \textit{service-level objectives}, \textit{policies}, \textit{interaction protocols}, \textit{events}, \textit{actions}, etc. This helps to specify an AS at different levels of abstraction imposed by the ASSL tiers. The following elements represent the major tiers and sub-tiers in ASSL.
\begin{flushleft}
I. Autonomic System (AS)
\end{flushleft}
\begin{itemize}
\setlength{\itemsep}{0pt}%
\setlength{\parskip}{0pt}%
\item AS Service-level Objectives
\item AS Self-managing Policies
\item AS Architecture
\item AS Actions
\item AS Events
\item AS Metrics
\end{itemize}
\begin{flushleft}
II. AS Interaction Protocol (ASIP)
\end{flushleft}
\begin{itemize}
\setlength{\itemsep}{0pt}%
\setlength{\parskip}{0pt}%
\item AS Messages
\item AS Communication Channels
\item AS Communication Functions
\end{itemize}
\begin{flushleft}
III. Autonomic Element (AE)
\end{flushleft}
\begin{itemize}
\setlength{\itemsep}{0pt}%
\setlength{\parskip}{0pt}%
\item AE Service-level Objectives
\item AE Self-managing Policies
\item AE Friends
\item AE Interaction Protocol (AEIP)
\begin{itemize}
\setlength{\itemsep}{0pt}%
\setlength{\parskip}{0pt}%
\item AE Messages
\item AE Communication Channels
\item AE Communication Functions
\item AE Managed Elements
\end{itemize}
\item AE Recovery Protocol
\item AE Behavior Models
\item AE Outcomes
\item AE Actions
\item AE Events
\item AE Metrics
\end{itemize}
As shown, the ASSL multi-tier specification model decomposes an AS in two directions:
\begin{enumerate}
\item into levels of functional abstraction;
\item into functionally related tiers (sub-tiers).
\end{enumerate}
With the first decomposition, an AS is presented from three different perspectives, these depicted as three main tiers (main concepts):
\begin{itemize}
\item AS Tier forms a general and global AS perspective exposing the architecture topology, general system behaviour rules, and global \textit{actions}, \textit{events} and \textit{metrics} applied to these rules.
\item ASIP Tier (AS interaction protocol) forms a communication perspective exposing a means of communication for the AS under consideration.
\item AE Tier forms a unit-level perspective, where an interacting set of the AS's individual components is specified. These components are specified as AEs with their own behaviour, which must be synchronized with the behaviour rules from the global AS perspective.
\end{itemize}
\section{Research}
In this section, we present our research objectives and current trends.
\subsection{Objectives}
This research emphasizes the ASSL formal development approach to autonomic systems (ASs). We believe that ASSL may be successfully applied to the development of experimental models for space-exploration missions integrating autonomic features. Thus, we use ASSL to develop experimental models for NASA missions in a stepwise manner (feature by feature) and generate a series of prototypes, which we evaluate in simulated conditions. Here, it is our understanding that both prototyping and formal modeling, which will aid in the design and implementation of real space-exploration missions, are becoming increasingly necessary and important as the urgent need emerges for higher levels of assurance regarding correctness.
\subsection{Benefits for Space Systems}
Experimental modeling of space-exploration missions can be extremely useful for the design and implementation of such systems. The ability to compare features and issues with actual missions and with hypothesized possible autonomic approaches gives significant benefit. In our approach, we develop space-mission models in a series of incremental and iterative steps where each model includes new autonomic features. This helps to evaluate the performance of each feature and gradually construct a model of more realistic space exploration missions. Different prototypes can be tried, tested, and benchmarked, and we can gather valuable feedback before implementing the real system. Moreover, this approach helps to discover potential design flaws in existing missions and in the prototype models.
\subsection{Modeling NASA ANTS and NASA Voyager with ASSL}
ASSL has been successfully used to specify autonomic features and generate prototype models for two NASA projects: the ANTS (Autonomous Nano-Technology Swarm) concept mission (cf. Section 2.1.1) and the Voyager mission (cf. Section 2.1.2). In both cases the generated prototype models helped to simulate space-exploration missions and validate features through simulated experimental results. In our endeavor to develop NASA missions with ASSL, we emphasized modeling ANTS self-managing policies \cite{twrds-assl-ants} of self-configuring, self-healing and self-scheduling and the Voyager image-processing autonomic behavior \cite{assl-voyager}. In addition, we proposed a specification model for the ANTS safety requirements. In general, a complete specification of these autonomic properties requires a two-level approach. They need to be specified at the individual spacecraft level (AE tier) and at the level of the entire system (AS tier).
Here, to specify the self-managing policies we used four base ASSL elements: \begin{itemize} \item \textit{a self-managing policy structure} - which describes the self-managing policy under consideration. We use a set of special ASSL constructs such as \textit{fluents} and \textit{mappings} to specify such a policy \cite{assl-book}. With fluents we express specific situations, in which the policy is interested, and with mappings we map those situations to actions. \item \textit{actions} - a set of actions that can be undertaken by ANTS in response to certain conditions, and according to that policy. \item \textit{events} - a set of events that initiate fluents and are prompted by the actions according to the policies. \item \textit{metrics} - a set of metrics \cite{assl-book} needed by the events and actions. \end{itemize} \begin{figure}[t] \begin{minipage}[t]{0.480\textwidth} \begin{alltt} \scriptsize \hrulefill \textbf{SELF_PROTECTING} \{ \textbf{FLUENT} inSecurityCheck \{ \textbf{INITIATED_BY} \{ \textbf{EVENTS}.privateMessageIsComming \} \textbf{TERMINATED_BY} \{ \textbf{EVENTS}.privateMessageSecure, \textbf{EVENTS}.privateMessageInsecure \}\} \textbf{MAPPING} \{ \textbf{CONDITIONS} \{ inSecurityCheck\} \textbf{DO_ACTIONS} \{ \textbf{ACTIONS}.checkPrivateMessage \}\}\} \hrulefill \end{alltt} \vspace{-6mm} \caption{Self-managing Policy} \vspace{-2mm} \label{fig:assl-policy-example} \end{minipage} \hfill \begin{minipage}[t]{0.500\textwidth} \begin{alltt} \scriptsize \hrulefill \textbf{EVENT} privateMessageIsComming \{ \textbf{ACTIVATION} \{ \textbf{SENT} \{ \textbf{AEIP.MESSAGES}.privateMessage \}\}\} \textbf{EVENT} privateMessageInsecure \{ \textbf{GUARDS} \{ \textbf{NOT METRICS}.thereIsInsecureMsg \} \textbf{ACTIVATION} \{ \textbf{CHANGED} \{ \textbf{METRICS}.thereIsInsecureMsg \}\}\} \textbf{EVENT} privateMessageSecure \{ \textbf{GUARDS} \{ \textbf{METRICS}.thereIsInsecureMsg \} \textbf{ACTIVATION} \{ \textbf{CHANGED} \{ \textbf{METRICS}.thereIsInsecureMsg \}\}\} \hrulefill \end{alltt} \vspace{-6mm} \caption{Policy Events} \vspace{-2mm} \label{fig:policy_events} \end{minipage} \begin{minipage}[t]{0.900\textwidth} \begin{alltt} \scriptsize \hrulefill \textbf{ACTION} checkPrivateMessage \{ \textbf{GUARDS} \{ .... \} \textbf{ENSURES} \{ .... \} \textbf{DOES} \{ senderIdentified = \textbf{call ACTIONS}.checkSenderCertificate; .... \} \textbf{ONERR_DOES} \{ .... \} \textbf{TRIGGERS} \{ .... \} \textbf{ONERR_TRIGGERS} \{ .... \} \} \hrulefill \end{alltt} \vspace{-6mm} \caption{Action} \vspace{-3mm} \label{fig:policy_action} \end{minipage} \end{figure} Figure \ref{fig:assl-policy-example}, Figure \ref{fig:policy_events}, and Figure \ref{fig:policy_action} present a partial specification of a \textit{self-protecting policy} employed by one of the prototype models we built for the NASA ANTS concept mission. Note that ASSL events (cf. Figure \ref{fig:policy_events}) and actions (cf. Figure \ref{fig:policy_action}) may be specified with a special \textit{GUARDS} clause stating preconditions that must be met before an event may be raised or an action may be undertaken. In addition, events (cf. Figure \ref{fig:policy_events}) are specified with a special \textit{ACTIVATION} clause and actions may be specified with an \textit{ENSURES} clause to state post-conditions that must be met after the action execution. Actions may call other actions in their \textit{DOES} or \textit{ONERR\_DOES} clauses. Finally, actions (cf. 
Figure \ref{fig:policy_action}) may trigger events specified in special \textit{TRIGGERS} and \textit{ONERR\_TRIGGERS} clauses. Note that the \textit{ONERR\_DOES} and \textit{ONERR\_TRIGGERS} clauses specify the action execution path in case of an error \cite{assl-book}.
\subsection{Formal Verification}
Safety is a major concern in NASA missions, where both reliability and maintainability form an essential part. In that context, the ASSL framework toolset provides verification mechanisms for automatic reasoning about a specified AS. The base validation approach in ASSL comes in the form of consistency checking. The latter is a mechanism for verifying ASSL specifications by performing exhaustive traversal to check for both syntax and consistency errors (type consistency, ambiguous definitions, etc.). In addition, this mechanism checks whether a specification conforms to special correctness properties, defined as ASSL semantic definitions. Although considered efficient, the ASSL consistency checking mechanism cannot handle logical errors (specification flaws) and thus it is not able to assert safety (e.g., freedom from deadlock) or liveness properties. Currently, a model-checking validation mechanism able to handle such errors is under development \cite{assl-modelcheck}. The ASSL model-checking mechanism is the next generation of the ASSL consistency checker based on automated reasoning. By allowing for automated system analysis and evaluation, this mechanism completes the AS development process with ASSL. The ability to verify the ASSL specifications for design flaws can lead to significant improvements in both specification models and generated ASs. Subsequently, ASSL can be used to specify, validate and generate better prototype models for current and future space-exploration systems. In general, model checking provides an automated method for verifying finite-state systems by relying on efficient graph-search algorithms. The latter help to determine whether or not system behavior described with temporal correctness properties holds for the system's state graph. In ASSL, the model-checking problem is: given autonomic system $A$ and its ASSL specification $a$, determine in the system's state graph $g$ whether or not the behavior of $A$, expressed with the correctness properties $p$, meets the specification $a$. Formally, this can be presented as a triple $(a,p,g)$ where: $a$ is the ASSL specification of the autonomic system $A$, $p$ presents the correctness properties specified in FOLTL, and $g$ is the state graph constructed from the ASSL specification in a labeled transition system (LTS) \cite{model-check} format. However, due to the so-called state-explosion problem \cite{model-check}, model checking cannot efficiently handle logical errors in large ASSL specifications. Therefore, to improve the detection of errors introduced not only with the ASSL specifications, but also with the supplementary coding, the automatic verification provided by the ASSL tools is augmented by appropriate testing. Currently, a novel test generator tool based on change-impact analysis is under development. This tool helps the ASSL framework automatically generate high-quality test suites for self-managing policies. The test generator tool accepts as input an ASSL specification comprising sets of policies that need to be tested and generates a set of tests, each testing a single execution path of a policy.
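To give a concrete feel for how the fluent/mapping semantics of Figure \ref{fig:assl-policy-example} drives execution, the following Python sketch mimics the \textit{SELF\_PROTECTING} policy as a tiny event loop. It is only an illustrative analogue written by us: the class and function names are our own inventions, and it is not the Java code that the ASSL framework actually generates.
\begin{verbatim}
# Minimal Python analogue of the SELF_PROTECTING policy (Figures 2-4).
class Fluent:
    def __init__(self, name, initiated_by, terminated_by):
        self.name, self.active = name, False
        self.initiated_by, self.terminated_by = initiated_by, terminated_by

    def notify(self, event):
        if event in self.initiated_by:
            self.active = True      # INITIATED_BY
        elif event in self.terminated_by:
            self.active = False     # TERMINATED_BY

class Policy:
    def __init__(self, fluents, mappings):
        self.fluents = fluents      # name -> Fluent
        self.mappings = mappings    # fluent name -> action (DO_ACTIONS)

    def on_event(self, event):
        for fluent in self.fluents.values():
            fluent.notify(event)
        for name, action in self.mappings.items():
            if self.fluents[name].active:
                action()            # MAPPING: condition -> action

def check_private_message():
    print("ACTION checkPrivateMessage: verifying sender certificate ...")

in_security_check = Fluent(
    "inSecurityCheck",
    initiated_by={"privateMessageIsComming"},
    terminated_by={"privateMessageSecure", "privateMessageInsecure"})

policy = Policy({"inSecurityCheck": in_security_check},
                {"inSecurityCheck": check_private_message})

policy.on_event("privateMessageIsComming")  # fluent initiated -> action
policy.on_event("privateMessageSecure")     # fluent terminated
\end{verbatim}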
\section{Conclusion}
In this research, we place emphasis on modeling autonomic properties of space-exploration missions with ASSL. With ASSL we model such properties through the specification of self-managing policies and service-level objectives. Formal verification handles consistency flaws, and operational Java code is generated for valid models. The generated code forms the basis for functional prototypes of space-exploration missions employing self-managing features. Those prototypes may be extremely useful when undertaking further investigation based on practical results, and they help developers test the autonomic behavior under simulated conditions.
\bibliographystyle{eptcs}
\section{Introduction}
Survival analysis is one of the oldest fields of statistics, going back to the beginning of the development of actuarial science and demography in the 17th century. The first life table was presented by John Graunt in 1662 (Kreager, 1988). Until well after the Second World War the field was dominated by the classical approaches developed by the early actuaries (Andersen and Keiding, 1998). As the name indicates, survival analysis may be about the analysis of actual survival in the true sense of the word, that is death rates, or mortality. However, survival analysis today has a much broader meaning, as the analysis of the time of occurrence of any kind of event one might want to study.

A problem with survival data, which does not generally arise with other types of data, is the occurrence of censoring. By this one means that the event to be studied may not necessarily happen in the time window of observation. So observation of survival data is typically incomplete; the event is observed for some individuals and not for others. This mixture of complete and incomplete data is a major characteristic of survival data, and it is a main reason why special methods have been developed to analyse this type of data.

A major advance in the field of survival analysis took place from the 1950's. The inauguration of this new phase is represented by the paper by Kaplan and Meier (1958) where they propose their famous estimator of the survival curve. This is one of the most cited papers in the history of statistics with more than 33,000 citations in the ISI Web of Knowledge (by April 2009). While the classical life table method was based on a coarse division of time into fixed intervals, e.g.\ one-year or five-year intervals, Kaplan and Meier realized that the method worked quite as well for short intervals, and actually for intervals of infinitesimal length. Hence they proposed what one might call a continuous-time version of the old life table. Their proposal corresponded to the development of a new type of survival data, namely those arising in clinical trials where individual patients were followed on a day to day basis and times of events could be registered precisely. Also, for such clinical research the number of individual subjects was generally much smaller than in the actuarial or demographic studies. So, the development of the Kaplan-Meier method was a response to a new situation creating new types of data.

The 1958 Kaplan-Meier paper opened a new area, but also raised a number of new questions. How, for instance, does one compare survival curves? A literature of tests for survival curves for two or more samples blossomed in the 1960's and 1970's, but it was rather confusing. The more general issue of how to adjust for covariates was first resolved by the introduction of the proportional hazards model by David Cox in 1972 (Cox, 1972). This was a major advance, and the more than 24,000 citations that Cox's paper has attracted in the ISI Web of Knowledge (by April 2009) is a proof of its huge impact. However, with this development the theory lagged behind. Why did the Cox model work? How should one understand the plethora of tests? What were the asymptotic properties of the Kaplan-Meier estimator? In order to understand this, one had to take seriously the stochastic process character of the data, and the martingale concept turned out to be very useful in the quest for a general theory.
The present authors were involved in pioneering work in this area from the mid-seventies and we shall describe the development of these ideas. It turned out that the martingale concept had an important role to play in statistics. In the 35 years gone by since the start of this development, there is now an elaborate theory, and recently it has started to penetrate into the general theory of longitudinal data (Diggle, Farewell and Henderson, 2007). However, martingales are not really entrenched in statistics in the sense that statistics students are routinely taught about martingales. While almost every statistician will know the concept of a Markov process, far fewer will have a clear understanding of the concept of a martingale. We hope that this historical account will help statisticians, and probabilists, understand why martingales are so valuable in survival analysis.

The introduction of martingales into survival analysis started with the 1975 Berkeley Ph.D.\ thesis of one of us (Aalen, 1975) and was then followed up by the Copenhagen-based cooperation between several of the present authors. The first journal presentation of the theory was Aalen (1978b). General textbook introductions from our group have been given by Andersen, Borgan, Gill and Keiding (1993), and by Aalen, Borgan and Gjessing (2008). An earlier textbook was the one by Fleming and Harrington (1991). In a sense, martingales were latent in the survival field prior to the formal introduction. With hindsight there is a lot of martingale intuition in the famous Mantel-Haenszel test (Mantel and Haenszel, 1959) and in the fundamental partial likelihood paper by Cox (1975), but martingales were not mentioned in these papers. Interestingly, Tarone and Ware (1977) use dependent central limit theory, which is really of a martingale nature. The present authors were all strongly involved in the developments we describe here, and so our views represent the subjective perspective of active participants.

\section{The hazard rate and a martingale\\ estimator}
In order to understand the events leading to the introduction of martingales in survival analysis, one must take a look at an estimator which is connected to the Kaplan-Meier estimator, and which today is called the Nelson-Aalen estimator. This estimation procedure focuses on the concept of a hazard rate. While the survival curve simply tells us how many have survived up to a certain time, the hazard rate gives us the risk of the event happening as a function of time, conditional on not having happened previously. Mathematically, let the random variable $T$ denote the survival time of an individual. The survival curve is then given by $S(t)=P(T>t)$. The hazard rate is defined by means of a conditional probability. Assuming that $T$ is absolutely continuous (i.e., has a probability density), one looks at those who have survived up to some time $t$, and considers the probability of the event happening in a small time interval $[t,t+\Delta t)$. The hazard rate is defined as the following limit:%
\begin{equation*}
\alpha(t)=\lim_{\Delta t\rightarrow0}\frac{1}{\Delta t}P(t\leq T<t+\Delta t \mid T\geq t).
\label{hazard-def}%
\end{equation*}
Notice that, while the survival curve is a function that starts at 1 and then declines (or is partly constant) over time, the hazard function can be essentially any non-negative function. While it is simple to estimate the survival curve, it is more difficult to estimate the hazard rate as an arbitrary function of time.
What, however, is quite easy is to estimate the cumulative hazard rate defined as%
\[
A(t)=\int_{0}^{t}\alpha(s)\,ds.
\]
A non-parametric estimator of $A(t)$ was first suggested by Wayne Nelson (Nelson, 1969, 1972) as a graphical tool to obtain engineering information on the form of the survival distribution in reliability studies; see also Nelson (1982). The same estimator was independently suggested by Altshuler (1970) and by Aalen in his 1972 master thesis, which was partly published as a statistical research report from the University of Oslo (Aalen, 1972) and later in Aalen (1976a). The mathematical definition of the estimator is given in (\ref{naa}) below.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[width=5cm,angle=-90]{fig1.eps}\\[0.2cm]
\caption{\emph{Transition in a subset of a Markov chain}}%
\label{Markov}%
\end{center}
\end{figure}
In the 1970's there were close connections between Norwegian statisticians and the Department of Statistics at Berkeley, with the Berkeley professors Kjell Doksum (originally Norwegian) and Erich Lehmann playing particularly important roles. Several Norwegian statisticians went to Berkeley in order to take a Ph.D. The main reason for this was to get into a larger setting, which could give more impulses than what could be offered in a small country like Norway. Also, Berkeley offered a regular Ph.D.\ program that was an alternative to the independent type doctoral dissertation in the old European tradition, which was common in Norway at the time. Odd Aalen also went there with the intention to follow up on his work in his master thesis. The introduction of martingales in survival analysis was first presented in his 1975 Berkeley Ph.D.\ thesis (Aalen, 1975) and was in a sense a continuation of his master thesis. Aalen was influenced by his master thesis supervisor Jan M. Hoem, who emphasized the importance of continuous-time Markov chains as a tool in the analysis when several events may occur to each individual (e.g., first the occurrence of an illness, and then maybe death; or the occurrence of several births for a woman). A subset of a state space for such a Markov chain may be illustrated as in Figure 1. Consider two states $i$ and $j$ in the state space, with $Y(t)$ the number of individuals in state $i$ at time $t$, and with $N(t)$ denoting the number of transitions from $i$ to $j$ in the time interval $[0,t]$. The rate of a new event, i.e., a new transition occurring, is then seen to be $\lambda(t)=\alpha(t)Y(t)$. Censoring is easily incorporated in this setup, and the setup covers the usual survival situation if the two states $i$ and $j$ are the only states in the system with one possible transition, namely the one from $i$ to $j$.

The idea of Aalen was to abstract from the above a general model, later termed the multiplicative intensity model; namely one where the rate $\lambda(t)$ of a counting process $N(t)$ can be written as the product of an observed process $Y(t)$ and an unknown rate function $\alpha(t)$, i.e.%
\begin{equation}
\lambda(t)=\alpha(t)Y(t).
\label{multiplint}%
\end{equation}
This gives approximately%
\[
dN(t)\approx\lambda(t)dt=\alpha(t)Y(t)dt,
\]
that is
\[
\frac{dN(t)}{Y(t)}\approx\alpha(t)dt,
\]
and hence a reasonable estimate of $A(t)=\int_{0}^{t}\alpha(s)\,ds$ would be:%
\begin{equation}
\label{naa}
\widehat{A}(t)=\int_{0}^{t}\frac{dN(s)}{Y(s)}\,.
\end{equation}
This is precisely the Nelson-Aalen estimator.
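For readers who wish to try the estimator on data, a small Python sketch of (\ref{naa}) follows. It is purely illustrative (the data are invented, and ties between events and censorings get no special treatment): the estimate simply accumulates the increments $dN(s)/Y(s)$ over the ordered follow-up times.
\begin{verbatim}
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard A(t).
    times  : follow-up time of each subject
    events : 1 if the event was observed, 0 if censored
    """
    times, events = np.asarray(times), np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    A, estimate = 0.0, []
    for i, t in enumerate(times):
        at_risk = np.sum(times >= t)      # Y(t)
        if events[i]:
            A += 1.0 / at_risk            # increment dN(t)/Y(t)
        estimate.append((t, A))
    return estimate

# invented data: follow-up times with censoring indicators
print(nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
\end{verbatim}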
Although a general formulation of the multiplicative intensity model can be based within the Markov chain framework as defined above, it is clear that this really has nothing to do with the Markov property. Rather, the correct setting would be a general point process, or counting process, $N(t)$ where the rate, or intensity process as a function of past occurrences, $\lambda(t)$, satisfies the property (\ref{multiplint}). This was clear to Aalen before entering the Ph.D.\ study at the University of California at Berkeley in 1973. The trouble was that no proper mathematical theory for counting processes with intensity processes dependent on the past had been published in the general literature by that time. Hence there was no possibility of formulating general results for the Nelson-Aalen estimator and related quantities. On arrival in Berkeley, Aalen was checking the literature, and at one time in 1974 he asked Professor David Brillinger at the Department of Statistics whether he knew about any such theory. Brillinger had then recently received the Ph.D.\ thesis of Pierre Bremaud (Bremaud, 1973), who had been a student at the Electronics Research Laboratory in Berkeley, as well as preprints of papers by Boel, Varaiya and Wong (1975a, 1975b) from the same department. Aalen received those papers and it was immediately clear to him that this was precisely the right tool for giving a proper theory for the Nelson-Aalen estimator. Soon it turned out that the theory led to a much wider reformulation of the mathematical basis of the whole of survival and event history analysis, the latter meaning the extension to transitions between several different possible states. The papers mentioned were apparently the first to give a proper mathematical theory for counting processes with a general intensity process. As explained in this historical account, it turned out that martingale theory was of fundamental importance. With hindsight, it is easy to see why this is so. Let us start with a natural heuristic definition of an intensity process, formulated as follows:
\begin{equation}
\lambda(t)=\frac{1}{dt}P(dN(t)=1 \mid \text{past}), \label{intensdef}
\end{equation}
where $dN(t)$ denotes the number of jumps (essentially 0 or 1) in $[t,t+dt)$. We can rewrite the above as
\[
\lambda(t)=\frac{1}{dt}E(dN(t) \mid \text{past}),
\]
that is,
\begin{equation}
E(dN(t)-\lambda(t)dt \mid \text{past})=0, \label{count1}
\end{equation}
where $\lambda(t)$ can be moved inside the conditional expectation since it is a function of the past. Let us now introduce the following process:
\begin{equation}
M(t)=N(t)-\int_{0}^{t}\lambda(s)ds. \label{count3}
\end{equation}
Note that (\ref{count1}) can be rewritten as
\[
E(dM(t) \mid \text{past})=0.
\]
This is of course a (heuristic) definition of a martingale. Hence the natural intuitive concept of an intensity process (\ref{intensdef}) is equivalent to asserting that the counting process minus the integrated intensity process is a martingale. The Nelson-Aalen estimator is now derived as follows. Using the multiplicative intensity model of formula (\ref{multiplint}) we can write:
\begin{equation}
dN(t)=\alpha(t)\,Y(t)\,dt+dM(t). \label{differential}
\end{equation}
For simplicity, we shall assume $Y(t)>0$ (this may be modified; see e.g.\ Andersen et al., 1993). Dividing (\ref{differential}) by $Y(t)$ yields
\[
\frac{1}{Y(t)}dN(t)=\alpha(t)+\frac{1}{Y(t)}dM(t).
\]
By integration we get
\begin{equation}
\int_{0}^{t}\frac{dN(s)}{Y(s)}=\int_{0}^{t}\alpha(s)\,ds+\int_{0}^{t}\frac{dM(s)}{Y(s)}.
\label{nelson}
\end{equation}
The right-most integral is recognized as a stochastic integral with respect to a martingale, and is therefore itself a zero-mean martingale. This represents noise in our setting, and therefore $\widehat{A}(t)$ is an unbiased estimator of $A(t)$, with the difference $\widehat{A}(t)-A(t)$ being a martingale. Usually there is some probability that $Y(t)$ may become zero, which gives a slight bias. The focus of the Nelson-Aalen estimator is the hazard $\alpha(t)$, where $\alpha(t)dt$ is the instantaneous probability that an individual at risk at time $t$ has an event in the next little time interval $[t,t+dt)$. In the special case of survival analysis we study the distribution function $F(t)$ of a nonnegative random variable, which we for simplicity assume has density $f(t)=F^{\prime}(t)$, which implies $\alpha(t)=f(t)/(1-F(t))$, $t>0$. Rather than studying the hazard $\alpha(t)$, interest is often on the survival function $S(t)=1-F(t)$, relevant to calculating the probability of an event happening over some finite time interval $(s,t]$. To transform the Nelson-Aalen estimator into an estimator of $S(t)$ it is useful to consider the \emph{product-integral} transformation (Gill and Johansen, 1990; Gill, 2005):
\begin{equation*}
S(t)= \mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(0,t]} {}\left\{ 1-dA(s)\right\} .
\end{equation*}
Since $A(t)=\int_{0}^{t}\alpha(s)ds$ is the cumulative intensity corresponding to the hazard function $\alpha(t)$, we have
\[
\mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(0,t]} {}\left\{ 1-dA(s)\right\} =\exp\left(-\int_{0}^{t}\alpha(s)ds\right),
\]
while if $A(t)=\sum_{s_{j}\leq t}h_{j}$ is the cumulative intensity corresponding to a discrete measure with jump $h_{j}$ at time $s_{j}$ ($s_{1}<s_{2}<\cdots$) then
\[
\mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(0,t]} {}\left\{ 1-dA(s)\right\} = \prod_{s_{j}\leq t} {}\left\{ 1-h_{j}\right\} .
\]
The plug-in estimator
\begin{equation}
\label{km}
\widehat{S}(t)= \mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(0,t]} {}\left\{ 1-d\widehat{A}(s)\right\}
\end{equation}
is the Kaplan-Meier estimator (Kaplan and Meier, 1958). It is a finite product of the factors $1-1/Y(t_{j})$ for $t_{j}\le t$, where $t_{1}<t_{2}<\cdots$ are the times of the observed events. A basic martingale representation is available for the Kaplan-Meier estimator as follows. Still assuming $Y(t)>0$ (see Andersen et al., 1993, for how to relax this assumption), it may be shown by Duhamel's equation that
\begin{equation}
\label{duhamel-u}
\frac{\widehat{S}(t)}{S(t)}-1=-\int_{0}^{t}\frac{\widehat{S}(s-)}{S(s)Y(s)}dM(s),
\end{equation}
where the right-hand side is a stochastic integral of a predictable process with respect to a zero-mean martingale, that is, itself a martingale. \textquotedblleft Predictable\textquotedblright\ is a mathematical formulation of the idea that the value is determined by the past; in our context it is sufficient that the process is adapted and has left-continuous sample paths. This representation is very useful for proving properties of the Kaplan-Meier estimator, as shown by Gill (1980).
\section{Stochastic integration and statistical\\ estimation}
The discussion in the previous section shows that the martingale property arises naturally in the modelling of counting processes. It is not a modelling assumption imposed from the outside, but is an integral part of an approach where one considers how the past affects the future.
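As a brief computational aside, the plug-in product-integral (\ref{km}) is equally direct to evaluate. A minimal sketch under the same conventions as the Nelson-Aalen code above (our own illustration, no tied event times assumed):
\begin{verbatim}
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t): the finite product of the
    factors 1 - dN(t_j)/Y(t_j) over observed event times t_j <= t."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))  # Y(t_i)
    factors = 1.0 - events / at_risk              # equals 1 at censorings
    S_hat = np.cumprod(factors)
    keep = events == 1
    return times[keep], S_hat[keep]
\end{verbatim}
The martingale representation (\ref{duhamel-u}) is what makes the statistical properties of this simple computation tractable.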
This dynamic view of stochastic processes, where one considers how the past affects the future, represents what is often termed the French probability school. A central concept is the local characteristic, examples of which are transition intensities of a Markov chain, the intensity process of a counting process, drift and volatility of a diffusion process, and the generator of an Ornstein-Uhlenbeck process. The same concept is valid for discrete time processes; see Diggle et al.\ (2007) for a statistical application of discrete time local characteristics. It is clearly important in this context to have a formal definition of what we mean by the \textquotedblleft past\textquotedblright. In stochastic process theory the past is formulated as a $\sigma$-algebra $\mathcal{F}_{t}$ of events, that is, the family of events that can be decided to have happened or not happened by observing the past. We call $\mathcal{F}_{t}$ the \emph{history at time }$t$, so that the entire history (or filtration) is represented by the increasing family of $\sigma$-algebras $\{\mathcal{F}_{t}\}$. Unless otherwise specified, processes will be adapted to $\{\mathcal{F}_{t}\}$, i.e., measurable with respect to $\mathcal{F}_{t}$ at any time $t$. The definition of a martingale $M(t)$ in this setting is that it fulfils the relation
\[
\mathrm{E}(M(t)\,|\,\mathcal{F}_{s})=M(s)\quad\text{for all }t>s.
\]
In the present setting there are certain concepts from martingale theory that are of particular interest. Firstly, equation (\ref{count3}) can be rewritten as
\[
N(t)=M(t)+\int_{0}^{t}\lambda(s)ds.
\]
This is a special case of the \emph{Doob-Meyer decomposition}. This is a very general result, stating, under a certain uniform integrability assumption, that any submartingale can be decomposed into the sum of a martingale and a predictable process, which is often called a \emph{compensator}. The compensator in our case is the stochastic process $\int_{0}^{t}\lambda(s)ds$. Two important variation processes for martingales are defined, namely the predictable variation process $\left\langle M\right\rangle$ and the optional variation process $\left[ M\right]$. Assume that the time interval $[0,t]$ is divided into $n$ equally long intervals, and define $\Delta M_{k}=M(kt/n)-M((k-1)t/n)$. Then
\[
\left\langle M\right\rangle _{t}=\lim_{n\rightarrow\infty} {\displaystyle\sum\limits_{k=1}^{n}} \mathrm{Var}(\Delta M_{k}\mid\mathcal{F}_{(k-1)t/n})\quad \mbox{and} \quad\left[ M\right] _{t}=\lim_{n\rightarrow\infty} {\displaystyle\sum\limits_{k=1}^{n}} (\Delta M_{k})^{2},
\]
where the limits are in probability. A second concept of great importance is stochastic integration. There is a general theory of stochastic integration with respect to martingales. Under certain assumptions, the central results are of the following kind:
\begin{enumerate}
\item A stochastic integral $\int_{0}^{t}H(s)\,dM(s)$ of a predictable process $H(t)$ with respect to a martingale $M(t)$ is itself a martingale.
\item The variation processes satisfy:
\begin{equation}
\left\langle\int H\,dM\right\rangle=\int H^{2}\,d\langle M\rangle \,\,\, \mbox{and} \,\,\, \left[ \int H\,dM\right] =\int H^{2}\,d\left[ M\right] . \label{varproc}
\end{equation}
\end{enumerate}
These formulas can be used to immediately derive variance formulas for estimators and tests in survival and event history analysis. The general mathematical theory of stochastic integration is quite complex. What is needed for our application, however, is relatively simple.
Firstly, one should note that the stochastic integral in equation (\ref{nelson}) (the right-most integral) is simply the difference between an integral with respect to a counting process and an ordinary Riemann integral. The integral with respect to a counting process is of course just the sum of the integrand over the jump times of the process. Hence, the stochastic integral in our context is really quite simple compared to the more general theory of martingales, where the martingales may have sample paths of infinite total variation on any interval, and where the It\={o} integral is the relevant theory. Still, the above rules 1 and 2 are very useful in organizing and simplifying calculations and proofs.
\section{Stopping times, unbiasedness and\\ independent censoring}
The concepts of martingale and stopping time in probability theory are both connected to the notion of a fair game and originate in the work of Ville (1936, 1939). In fact, one of the older (non-mathematical) meanings of martingale is a betting system for fair coin tosses which is supposed to give a guaranteed payoff. The requirement of unbiasedness in statistics can be viewed as essentially the same concept as a fair game. This is particularly relevant in connection with the concept of censoring, which pervades survival and event history analysis. As mentioned above, censoring simply means that the observation of an individual process stops at a certain time, and after this time there is no more knowledge about what happened. In the 1960's and 1970's survival analysis methods were studied within reliability theory and the biostatistical literature assuming specific censoring schemes. The most important of these censoring schemes were the following:
\begin{itemize}
\item For \textit{type I censoring}, the survival time $T_{i}$ for individual $i$ is observed if it is no larger than a fixed censoring time $c_{i}$, otherwise we only know that $T_{i}$ exceeds $c_{i}$.
\item For \textit{type II censoring}, observation is continued until a given number of events $r$ is observed, and then the remaining units are censored.
\item \textit{Random censoring} is similar to type I censoring, but the censoring times $c_{i}$ are here the observed values of random variables $C_{i}$ that are independent of the $T_{i}$'s.
\end{itemize}
However, by adopting the counting process formulation, Aalen noted in his Ph.D. thesis and later journal publications (e.g.\ Aalen, 1978b) that if censoring takes place at a stopping time, as is the case for the specific censoring schemes mentioned above, then the martingale property will be preserved, and no further assumptions on the form of censoring are needed to obtain unbiased estimators and tests. Aalen's argument assumed a specific form of the history, or filtration, $\{\mathcal{F}_{t}\}$: namely, that it is given as $\mathcal{F}_{t}=\mathcal{F}_{0}\vee\mathcal{N}_{t}$, where $\{\mathcal{N}_{t}\}$ is the filtration generated by the uncensored individual counting processes, and $\mathcal{F}_{0}$ represents information available to the researcher at the outset of the study. However, censoring may induce additional variation not described by a filtration of the above form, so one may have to consider a larger filtration $\{\mathcal{G}_{t}\}$ also describing this additional randomness. The fact that we have to consider a larger filtration may have the consequence that the intensity processes of the counting processes may change.
However, if this is not the case, so that the intensity processes with respect to $\{\mathcal{G}_{t}\}$ are the same as the $\{\mathcal{F}_{t}\}$-intensity processes, censoring is said to be \emph{independent}. Intuitively this means that the additional knowledge of censoring times up to time $t$ does not carry any information on an individual's risk of experiencing an event at time $t$. A careful study of independent censoring for marked point process models along these lines was first carried out by Arjas and Haara (1984). The ideas of Arjas and Haara were taken up and further developed by Per Kragh Andersen, {\O}rnulf Borgan, Richard Gill, and Niels Keiding as part of their work on the monograph \emph{Statistical Models Based on Counting Processes}; cf. Section 11 below. Discussions with Martin Jacobsen were also useful in this connection (see also Jacobsen, 1989). Their results were published in Andersen et al. (1988) and later in Chapter 3 of their monograph. It should be noted that there is a close connection between drop-outs in longitudinal data and censoring for survival data. In fact, independent censoring in survival analysis is essentially the same as \emph{sequential missingness at random} in longitudinal data analysis (e.g., Hogan, Roy and Korkontzelou, 2004). In many standard statistical models there is an intrinsic assumption of independence between outcome variables. While, in event history analysis, such an assumption may well be reasonable for the basic, uncensored observations, censoring may destroy this independence. An example is survival data in an industrial setting subject to type II censoring; that is, the situation where items are put on test simultaneously and the experiment is terminated at the time of the $r$-th failure (cf.\ above). However, for such situations martingale properties may be preserved; in fact, for type II censoring $\{\mathcal{G}_{t}\}=\{\mathcal{F}_{t}\}$ and censoring is trivially independent according to the definition just given. This suggests that, for event history data, the counting process and martingale framework is indeed the natural one, and that the martingale property replaces the traditional independence assumption, also in the sense that it forms the basis of central limit theorems, which will be discussed next.
\section{Martingale central limit theorems}
As mentioned, the martingale property replaces the common independence assumption. One reason for the ubiquitous assumption of independence in statistics is to get some asymptotic distributional results of use in estimation and testing, and the martingale assumption can fulfil this need as well. Central limit theorems for martingales can be traced back at least to the beginning of the 1970's (Brown, 1971; Dvoretsky, 1972). Of particular importance for the development of the present theory was the paper by McLeish (1974). The potential usefulness of this paper was pointed out to Aalen by his Ph.D.\ supervisor Lucien Le Cam. In fact this happened before the connection had been made to Bremaud's new theory of counting processes, and it was only after the discovery of this theory that the real usefulness of McLeish's paper became apparent. The application of counting processes to survival analysis, including the application of McLeish's paper, was done by Aalen during 1974--75. The theory of McLeish was developed for the discrete-time case, and had to be further developed to cover the continuous-time setting of the counting process theory.
What was presumably the first central limit theorem for continuous-time martingales was published in Aalen (1977). A far more elegant and complete result was given by Rebolledo (1980), and this formed the basis for further developments of the statistical theory; see Andersen et al.\ (1993) for an overview. A nice early result was also given by Helland (1982). The central limit theorem for martingales is related to the fact that a martingale with continuous sample paths and a deterministic predictable variation process is a Gaussian martingale, i.e., with normal finite-dimensional distributions. Hence one would expect a central limit theorem for counting process associated martingales to depend on two conditions:
\begin{enumerate}
\item[(i)] the sizes of the jumps go to zero (i.e., approximating continuity of sample paths);
\item[(ii)] either the predictable or the optional variation process converges to a deterministic function.
\end{enumerate}
In fact, the conditions in Aalen (1977) and Rebolledo (1980) are precisely of this nature. Without giving the precise formulations of these conditions, let us look informally at how they work out for the Nelson-Aalen estimator. We saw in formula (\ref{nelson}) that the difference between estimator and estimand of the cumulative hazard up to time $t$ could be expressed as $\int_{0}^{t}dM(s)/Y(s)$, the stochastic integral of the process $1/Y$ with respect to the counting process martingale $M$. Considered as a stochastic process (i.e., indexed by time $t$), this \textquotedblleft estimation-error process\textquotedblright\ is therefore itself a martingale. Using the rules (\ref{varproc}) we can compute its optional variation process to be $\int_{0}^{t}dN(s)/Y(s)^{2}$ and its predictable variation process to be $\int_{0}^{t}\alpha(s)ds/Y(s)$. The error process only has jumps where $N$ does, and at a jump time $s$, the size of the jump is $1/Y(s)$. As a first attempt to get some large sample information about the Nelson-Aalen estimator, let us consider what the martingale central limit theorem could say about the Nelson-Aalen estimation-error process. Clearly we would need the number at risk process $Y$ to get uniformly large, in order for the jumps to get small. In that case, the predictable variation process $\int_{0}^{t} \alpha(s) ds /Y(s)$ is forced to become smaller and smaller. Going to the limit, we will have convergence to a continuous Gaussian martingale with zero predictable variation process. But the only such process is the constant process, equal to zero at all times. Thus in fact we obtain a consistency result: if the number at risk process gets uniformly large, in probability, the estimation error converges uniformly to zero, in probability. (Actually there are martingale inequalities of Chebyshev type which allow one to draw this kind of conclusion without going via central limit theory.) In order to get nondegenerate asymptotic normality results, we should zoom in on the estimation error. A quite natural assumption in many applications is that there is some index $n$, standing perhaps for sample size, such that for each $t$, $Y(t)/n$ is roughly constant (non-random) when $n$ is large. Taking our cue from classical statistics, let us take a look at $\sqrt{n}$ times the estimation error process $\int_{0}^{t}dM(s)/Y(s)$. This has jumps of size $(1/\sqrt{n})(Y(s)/n)^{-1}$. The predictable variation process of the rescaled estimation error is $n$ times what it was before: it becomes $\int_{0}^{t}(Y(s)/n)^{-1}\alpha(s)ds$.
So, the convergence of $Y/n$ to a deterministic function ensures simultaneously that the jumps of the rescaled estimation error process become vanishingly small and that its predictable variation process converges to a deterministic function. The martingale central limit theorem turns out to be extremely effective in allowing us to guess the kind of results which might be true. Technicalities are reduced to a minimum; results are essentially optimal, i.e., the assumptions are minimal. Why is that so? In probability theory, the 1960's and 1970's were the heyday of the study of martingale central limit theorems. The outcome of all this work was that the martingale central limit theorem was not only a generalization of the classical Lindeberg central limit theorem, but that the proof was the same: it was simply a question of judicious insertion of conditional expectations, and taking expectations by repeated conditioning, so that the same line of proof worked exactly. In other words, the classical Lindeberg proof of the central limit theorem (see e.g.\ Feller, 1967) already is the proof of the martingale central limit theorem. The difficult extension, taking place from the 1970's into the 1980's, was in going from discrete-time to continuous-time processes. This required a major technical investigation of which continuous-time processes we are able to study effectively. This is quite different from research into central limit theorems for other kinds of processes, e.g., for stationary time series. In that field, one splits the process under study into many blocks, and tries to show that the separate blocks are almost independent if the distance between the blocks is large enough. At the same time, the distance between the blocks should be small enough that one can forget about what goes on between them. The central limit theorem comes from looking for approximately independent summands hidden somewhere inside the process of interest. However, in the martingale case one is already studying exactly the kind of process to which the best (sharpest, strongest) proofs are already attuned. No approximations are involved. At the time martingales made their entry into survival analysis, statisticians were using many different tools to get large sample approximations in statistics. One had different classes of statistics for which special tools had been developed. Each time something was generalized from classical data to survival data, the inventors first showed that the old tools still worked to get some information about large sample properties (e.g.\ U-statistics, rank tests). Just occasionally, researchers saw a glimmering of martingales behind the scenes, as when Tarone and Ware (1977) used Dvoretsky's (1972) martingale central limit theorem in the study of their class of non-parametric tests. Another important example of work where martingale-type arguments were used is Cox's (1975) paper on partial likelihood; cf.\ Section~\ref{sec:cox}.
\section{Two-sample tests for counting processes}
During the 1960's and early 1970's a plethora of tests for comparing two or more survival functions was suggested (Gehan, 1965; Mantel, 1966; Efron, 1967; Breslow, 1970; Peto and Peto, 1972). The big challenge was to handle the censoring, and various simplified censoring mechanisms were proposed, with different versions of the tests fitted to the particular censoring scheme. The whole setting was rather confusing, with an absence of a theory connecting the various specific cases.
The first connection to counting processes was made by Aalen in his Ph.D.\ thesis, where it was shown that a generalized Savage test (which is equivalent to the logrank test) could be given a martingale formulation. In a Copenhagen research report (Aalen, 1976b) this was extended to a general martingale formulation of two-sample tests which turned out to encompass a number of previous proposals as special cases. The very simple idea was to write the test statistic as a weighted stochastic integral over the difference between two Nelson-Aalen estimators. Let the processes to be compared be indexed by $i=1,2$. A class of tests for comparing the two rate functions $\alpha_{1}(t)$ and $\alpha_{2}(t)$ is then defined by
\[
X(t)=\int_{0}^{t}L(s)d(\widehat{A}_{1}(s)-\widehat{A}_{2}(s)) =\int_{0}^{t}L(s)\left( \frac{dN_{1}(s)}{Y_{1}(s)}-\frac{dN_{2}(s)}{Y_{2}(s)}\right) .
\]
Under the null hypothesis $\alpha_{1}(s)\equiv\alpha_{2}(s)$ it follows that $X(t)$ is a martingale, since it is a stochastic integral. An estimator of the variance can be derived from the rules for the variation processes, and the asymptotics are taken care of by the martingale central limit theorem. It was found by Aalen (1978b) and detailed by Gill (1980) that almost all previous proposals for censored two-sample tests in the literature were special cases that could be arrived at by judicious choice of the weight function $L(t)$. A thorough study of two-sample tests from this point of view was first given by Richard Gill in his Ph.D.\ thesis from Amsterdam (Gill, 1980). The inspiration for Gill's work was a talk given by Odd Aalen at the European Meeting of Statisticians in Grenoble in 1976. At that time Gill was about to decide on the topic for his Ph.D.\ thesis, one option being two-sample censored data rank tests. He was very inspired by Aalen's talk and the uniform way to treat all the different two-sample statistics offered by the counting process formulation, and this decided the topic for his thesis work. At that time, Gill had no experience with martingales in continuous time. But by reading Aalen's thesis and other relevant publications, he soon mastered the theory. To that end it also helped him that a study group on counting processes was organized in Amsterdam, with Piet Groeneboom as a key contributor.
\section{The Copenhagen environment}
\noindent Much of the further application of counting process theory to statistical issues sprang out of the statistics group at the University of Copenhagen. After his Ph.D.\ study in Berkeley, Aalen was invited by his former master thesis supervisor, Jan M. Hoem, to visit the University of Copenhagen, where Hoem had taken a position as professor in actuarial mathematics. Aalen spent 8 months there (November 1975 to June 1976) and his work immediately caught the attention of Niels Keiding, S\o ren Johansen, and Martin Jacobsen, among others. The Danish statistical tradition at the time had a strong mathematical basis combined with a growing interest in applications. Internationally, this combination was not so common; mostly the good theoreticians tended to do only theory, while the applied statisticians were less interested in the mathematical aspects. Copenhagen provided fertile soil for the further development of the theory. It was characteristic that, for such a new paradigm, it took time to generate an intuition for what was obvious and what really required detailed study.
For example, when Keiding gave graduate lectures on the approach in 1976/77 and 1977/78, he followed Aalen's thesis closely and elaborated on the mathematical prerequisites [stochastic processes in the French way, counting processes (Jacod, 1975), square integrable martingales, martingale central limit theorem (McLeish, 1974)]. This was done in more mathematical generality than became the standard later. For example, he patiently went through the Doob-Meyer decompositions following Meyer's \emph{Probabilit\'{e}s et Potentiel} (Meyer, 1966), and he quoted the derivation by Courr\`{e}ge and Priouret (1965) of the following result: If $(N_{t})$ is a stochastic process, $\{\mathcal{N}_{t}\}$ is the family of $\sigma$-algebras generated by $(N_{t})$, and $T$ is a stopping time (i.e.\ $\{T\leq t\}\in\mathcal{N}_{t}$ for all $t$), then the conventional definition of the $\sigma$-algebra $\mathcal{N}_{T}$ of events happening before $T$ is
\[
A\in\mathcal{N}_{T}\iff\forall t:A\cap\{T\leq t\}\in\mathcal{N}_{t}.
\]
A more intuitive way of defining this $\sigma$-algebra is
\[
\mathcal{N}_{T}^{\ast}=\sigma\{N_{T\wedge u},\,\, u\geq0\}.
\]
Courr\`{e}ge and Priouret (1965) proved that $\mathcal{N}_{T}=\mathcal{N}_{T}^{\ast}$ through a delicate analysis of the path properties of $(N_{t})$. Keiding quoted the general definition of predictability, valid for measures with both discrete and continuous components, not satisfying himself with the ``essential equivalence to left-continuous sample paths'' that we work with nowadays. Keiding had many discussions with his colleague, the probabilist Martin Jacobsen, who had long focused on path properties of stochastic processes. Jacobsen developed his own independent version of the course in 1980 and wrote his lecture notes up in the \emph{Springer Lecture Notes in Statistics} series (Jacobsen, 1982). Among those who happened to be around in the initial phase was Niels Becker from Melbourne, Australia, already then well established with his work in infectious disease modelling. For many years to come martingale arguments were used as important tools in Becker's further work on statistical models for infectious disease data; see Becker (1993) for an overview of this work. A parallel development was the interesting work of Arjas and coauthors on statistical models for marked point processes; see e.g.\ Arjas and Haara (1984) and Arjas (1989).
\section{From Kaplan-Meier to the empirical \\ transition matrix}
A central effort initiated in Copenhagen in 1976 was the generalization of the Kaplan-Meier estimator from scalar to matrix values. This started out with the estimation of transition probabilities in the competing risks model developed by Aalen (1972); a journal publication of this work first came in Aalen (1978a). This work was done prior to the introduction of martingale theory, and just like the treatment of the cumulative hazard estimator in Aalen (1976a), it demonstrates the complications that arose before the martingale tools had been introduced. In 1973 Aalen had found a matrix version of the Kaplan-Meier estimator for Markov chains, but did not attempt a mathematical treatment because this seemed too complex. It was the martingale theory that allowed an elegant and compact treatment of these attempts to generalize the Kaplan-Meier estimator, and the breakthrough here was made by S\o ren Johansen in 1975--76.
It turned out that martingale theory could be combined with the product-integral approach to non-homogeneous Markov chains via an application of Duhamel's equality; cf.\ (\ref{duhamel-m}) below. The theory of stochastic integrals could then be used in a simple and elegant way. This was written down in a research report (Aalen and Johansen, 1977) and published in Aalen and Johansen (1978). Independently of this, the same estimator was developed by Fleming and published in Fleming (1978a, 1978b) just prior to the publication of Aalen and Johansen (and duly acknowledged in their paper). Tom Fleming and David Harrington were Ph.D. students of Grace Yang at the University of Maryland, and they have later often told us that they learned about Aalen's counting process theory from Grace Yang's contact with her own former Ph.D. advisor, Lucien Le Cam. Fleming also based his work on the martingale counting process approach. He had a more complex presentation of the estimator, presenting it as a recursive solution of equations; he did not have the simple matrix product version of the estimator, nor the compact presentation through the Duhamel equality which allowed for general censoring and very compact formulas for covariances. The estimator is named the empirical transition matrix; see e.g.\ Aalen et al.\ (2008). The compact matrix product version of the estimator presented in Aalen and Johansen (1978) is often called the Aalen-Johansen estimator, and we are going to explain the role of martingales in this estimator. More specifically, consider an inhomogeneous continuous-time Markov process with finite state space $\{1,\ldots,k\}$ and transition intensities $\alpha_{hj}(t)$ between states $h$ and $j$, where in addition we define $\alpha_{hh}(t)=-\sum_{j\neq h}\alpha_{hj}(t)$ and denote the matrix of all $A_{hj}(t)=\int_{0}^{t}\alpha_{hj}(s)ds$ as $\mathbf{A}(t)$. Nelson-Aalen estimators $\widehat{A}_{hj}(t)$ of the cumulative transition intensities $A_{hj}(t)$ may be collected in the matrix $\widehat{\mathbf{A}}(t)=\{\widehat{A}_{hj}(t)\}$. To derive an estimator of the transition probability matrix $\mathbf{P}(s,t)=\{P_{hj}(s,t)\}$ it is useful to represent it as the matrix product-integral
\begin{equation*}
\mathbf{P}(s,t)=\mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(s,t]} {}\left\{ \mathbf{I}+d\mathbf{A}(u)\right\},
\end{equation*}
which may be defined as
\[
\mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(s,t]} {}\left\{ \mathbf{I}+d\mathbf{A}(u)\right\} =\lim_{\max|u_{i}-u_{i-1}|\rightarrow0} \prod_{i} {}\left\{ \mathbf{I}+\mathbf{A}(u_{i})-\mathbf{A}(u_{i-1})\right\},
\]
where $s=u_{0}<u_{1}<\cdots<u_{n}=t$ is a partition of $(s,t]$ and the matrix product is taken in its natural order from left to right. The empirical transition matrix or Aalen-Johansen estimator is the plug-in estimator
\begin{equation}
\label{aj}
\widehat{\mathbf{P}}(s,t)=\mathop{\lower 2pt\hbox{\bbbigsym\char'031}}_{(s,t]} {}\left\{ \mathbf{I}+d\widehat{\mathbf{A}}(u)\right\},
\end{equation}
which may be evaluated as a finite matrix product over the times in $(s,t]$ when transitions are observed. Note that (\ref{aj}) is a multivariate version of the Kaplan-Meier estimator (\ref{km}). A matrix martingale relation may be derived from a matrix version of the Duhamel equation (\ref{duhamel-u}).
For the case where all numbers at risk in the various states, $Y_{h}(t)$, are positive, this reads
\begin{equation}
\label{duhamel-m}
\widehat{\mathbf{P}}(s,t)\mathbf{P}(s,t)^{-1}-\mathbf{I}=\int_{s}^{t} \widehat{\mathbf{P}}(s,u-)d(\widehat{\mathbf{A}}-\mathbf{A})(u)\mathbf{P}(s,u)^{-1}.
\end{equation}
This is a stochastic integral representation from which covariances and asymptotic properties can be deduced directly. This particular formulation is from Aalen and Johansen (1978).
\section{Pustulosis palmo-plantaris and $k$-sample\\ tests}
One of the projects that were started when Aalen visited the University of Copenhagen was an epidemiological study of the skin disease pustulosis palmo-plantaris, with Aalen, Keiding and the medical doctor Jens Thormann as collaborators. Pustulosis palmo-plantaris is mainly a disease among women, and the question was whether the risk of the disease was related to the occurrence of menopause. Consecutive patients from a hospital out-patient clinic were recruited, so the data could be considered a random sample from the prevalent population. At the initiative of Jan M.\ Hoem, another of his former master students from Oslo, {\O}rnulf Borgan, was asked to work out the details. Borgan had since 1977 been assistant professor in Copenhagen, and he had learnt the counting process approach to survival analysis from the above mentioned series of lectures by Niels Keiding. The cooperation resulted in the paper Aalen et al.\ (1980). In order to be able to compare patients without menopause with patients with natural menopause and with patients with induced menopause, the statistical analysis required an extension of Aalen's work on two-sample tests to more than two samples. (The work of Richard Gill on two-sample tests was not known in Copenhagen at that time.) The framework for such an extension is $k$ counting processes $N_{1},\ldots,N_{k}$, with intensity processes $\lambda_{1},\ldots,\lambda_{k}$ of the multiplicative form $\lambda_{j}(t)=\alpha_{j}(t)Y_{j}(t)$; $j=1,2,\ldots,k$; and where the aim is to test the hypothesis that all the $\alpha_{j}$ are identical. Such a test may be based on the processes
\[
X_{j}(t)=\int_{0}^{t}K_{j}(s)d(\widehat{A}_{j}(s)-\widehat{A}(s)),\qquad j=1,2,\ldots,k,
\]
where $\widehat{A}_{j}$ is the Nelson-Aalen estimator based on the $j$-th counting process, and $\widehat{A}$ is the Nelson-Aalen estimator based on the aggregated counting process $N=\sum_{j=1}^{k}N_{j}$. This experience inspired a decision to give a careful presentation of the $k$-sample tests for counting processes and how they gave a unified formulation of most rank-based tests for censored survival data, and Per K. Andersen (who also had followed Keiding's lectures), {\O}rnulf Borgan, and Niels Keiding embarked on this task in the fall of 1979. During the work on this project, Keiding was made aware (by Terry Speed) of Richard Gill's work on two-sample tests. (Speed, who was then on sabbatical in Copenhagen, was on a visit to Delft, where he came across an abstract book for the Dutch statistical association's annual gathering with a talk by Gill about the counting process approach to censored data rank tests.) Gill was invited to spend the fall of 1980 in Copenhagen. There he got a draft manuscript by Andersen, Borgan and Keiding on $k$-sample tests, and as he made a number of substantial comments on the manuscript, he was invited to co-author the paper (Andersen, Borgan, Gill, and Keiding, 1982).
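To fix ideas before turning to the Cox model, here is a small sketch of the processes just described, evaluated at the largest observation time with the weight $K_{j}(s)=Y_{j}(s)$, the choice that reproduces the logrank (generalized Savage) test discussed earlier. This is our own Python illustration (not code from any of the cited works), assuming no tied event times; the covariance matrix needed to turn $X$ into a chi-squared statistic follows from the variation-process rules (\ref{varproc}).
\begin{verbatim}
import numpy as np

def logrank_numerators(times, events, group, k):
    """The vector X_j = int K_j d(A_j^ - A^) with K_j(s) = Y_j(s),
    i.e. 'observed minus expected' numbers of events per group.
    group takes values 1..k; events is 0/1."""
    order = np.argsort(times)
    times, events, group = times[order], events[order], group[order]
    X = np.zeros(k)
    for i in range(len(times)):
        if events[i] == 0:
            continue
        at_risk = times >= times[i]
        Y = np.array([np.sum(at_risk & (group == j + 1)) for j in range(k)])
        dN = np.zeros(k)
        dN[group[i] - 1] = 1.0
        X += dN - Y / Y.sum()   # increment dN_j(s) - Y_j(s) dN(s)/Y(s)
    return X
\end{verbatim}
For $k=2$, the component $X_{1}$ coincides exactly with the two-sample statistic $X(t)$ of the earlier section with weight $L(s)=Y_{1}(s)Y_{2}(s)/(Y_{1}(s)+Y_{2}(s))$.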
\section{The Cox model}
\label{sec:cox}
With the development of clinical trials in the 1950's and 1960's, the need to analyze censored survival data dramatically increased, and a major breakthrough in this direction was the Cox proportional hazards model published in 1972 (Cox, 1972). Now, regression analysis of survival data was possible. Specifically, the Cox model describes the hazard rate for a subject $i$ with covariates $\mathbf{Z}_{i}=(Z_{i1},\dots,Z_{ip})^{\mathsf{T}}$ as
\[
\alpha(t\mid\mathbf{Z}_{i})=\alpha_{0}(t)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{i}).
\]
This is a product of a \emph{baseline} hazard rate $\alpha_{0}(t)$, common to all subjects, and the exponential function of the linear predictor, $\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{i}=\sum_{j}\beta_{j}Z_{ij}$. With this specification, hazard rates for all subjects are proportional, and $\exp(\beta_{j})$ is the hazard rate ratio associated with an increase of 1 unit for the $j$th covariate $Z_{j}$, that is, the ratio
\[
\exp(\beta_{j})=\frac{\alpha(t\mid Z_{1},Z_{2},...,Z_{j-1},Z_{j}+1,Z_{j+1},...,Z_{p})}{\alpha(t\mid Z_{1},Z_{2},...,Z_{j-1},Z_{j},Z_{j+1},...,Z_{p})},
\]
where $Z_{\ell}$ for $\ell\neq j$ are the same in numerator and denominator. The model formulation of Cox (1972) allowed for covariates to be time-dependent, and it was suggested to estimate $\mbox{\boldsymbol{$\beta$}}$ by the value $\widehat{\mbox{\boldsymbol{$\beta$}}}$ maximizing the \emph{partial likelihood}
\begin{equation}
\label{partlik}
L(\mbox{\boldsymbol{$\beta$}})=\prod_{i:D_{i}=1}\frac{\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{i}(T_{i}))}{\sum_{j\in R_{i}}\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(T_{i}))}.
\end{equation}
Here, $D_{i}=I(i\mbox{ was observed to fail})$ and $R_{i}$ is the \emph{risk set}, i.e., the set of subjects still at risk at the time, $T_{i}$, of failure for subject $i$. The cumulative baseline hazard rate $A_{0}(t)=\int_{0}^{t}\alpha_{0}(u)du$ was estimated by the Breslow (1972, 1974) estimator
\begin{equation}
\label{breslow}
\widehat{A}_{0}(t)=\sum_{i:T_{i}\leq t}\frac{D_{i}}{\sum_{j\in R_{i}}\exp(\widehat{\mbox{\boldsymbol{$\beta$}}}^{\mathsf{T}}\mathbf{Z}_{j}(T_{i}))}.
\end{equation}
Cox's work triggered a number of methodological questions concerning inference in the Cox model. In what respect could the partial likelihood (\ref{partlik}) be interpreted as a proper likelihood function? How could the large sample properties of the resulting estimators be established? Cox himself used repeated conditional expectations (which essentially was a martingale argument) to show informally that his partial likelihood (\ref{partlik}) had similar properties to an ordinary likelihood, while Tsiatis (1981) used classical methods to provide a thorough treatment of large sample properties of the estimators $(\widehat{\mbox{\boldsymbol{$\beta$}}},\widehat{A}_{0}(t))$ when only time-fixed covariates were considered. The study of large sample properties, however, was particularly intriguing when time-dependent covariates were allowed in the model. At the Statistical Research Unit in Copenhagen, established in 1978, analysis of survival data was one of the key research areas, and several applied medical projects using the Cox model were conducted.
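As a concrete illustration of (\ref{partlik}) and (\ref{breslow}), the following bare-bones sketch (our own, for time-fixed covariates and untied event times; in practice one would maximize the log partial likelihood numerically, e.g.\ by Newton's method on the score) shows how little computation the two quantities require:
\begin{verbatim}
import numpy as np

def cox_log_partial_likelihood(beta, times, events, Z):
    """log L(beta) for an (n x p) covariate matrix Z, no tied events."""
    eta = Z @ beta
    logL = 0.0
    for i in np.flatnonzero(events):
        risk_set = times >= times[i]      # R_i = {j : T_j >= T_i}
        logL += eta[i] - np.log(np.sum(np.exp(eta[risk_set])))
    return logL

def breslow_estimator(beta_hat, times, events, Z, t):
    """Breslow estimate of A_0(t): for each event time T_i <= t,
    add 1 over the sum of exp(beta_hat' Z_j) across the risk set."""
    eta = Z @ beta_hat
    A0 = 0.0
    for i in np.flatnonzero(events):
        if times[i] <= t:
            A0 += 1.0 / np.sum(np.exp(eta[times >= times[i]]))
    return A0
\end{verbatim}
The function and variable names here are ours, chosen for readability; they do not refer to any particular software package.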
One of these applied projects, initiated in 1978 and published by Andersen and Rasmussen (1986), dealt with recurrent events: admissions to psychiatric hospitals among pregnant women and among women having given birth or having had an induced abortion. Here, a model for the intensity of admissions was needed, and since previous admissions were strongly predictive of new admissions, time-dependent covariates should be accounted for. Counting processes provided a natural framework in which to study the phenomenon, and research activities in this area were already on the agenda, as exemplified above. It soon became apparent that the Cox model could be immediately applied to the recurrent event intensity, and Johansen's (1983) derivation of Cox's partial likelihood as a profile likelihood also generalized quite easily. The individual counting processes, $N_{i}(t)$, counting admissions for woman $i$ could then be \textquotedblleft Doob-Meyer decomposed\textquotedblright\ as
\begin{equation}
\label{cox-decomp}
N_{i}(t)=\int_{0}^{t}Y_{i}(u)\alpha_{0}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{i}(u))du+M_{i}(t).
\end{equation}
Here, $Y_{i}(t)$ is the at-risk indicator process for woman $i$ (indicating that she is still in the study and out of hospital at time $t$), $\mathbf{Z}_{i}(t)$ is the, possibly time-dependent, covariate vector including information on admissions before $t$, and $\alpha_{0}(t)$ the unspecified baseline hazard. Finally, $M_{i}(t)$ is the corresponding martingale. We may write the sum over event times in the score $\mathbf{U}(\mbox{\boldsymbol{$\beta$}})=\partial\log L(\mbox{\boldsymbol{$\beta$}})/\partial\mbox{\boldsymbol{$\beta$}}$, derived from Cox's partial likelihood (\ref{partlik}), as the counting process integral
\[
\mathbf{U}(\mbox{\boldsymbol{$\beta$}})=\sum_{i}\int_{0}^{\infty}\left(\mathbf{Z}_{i}(u)-\frac{\sum_{j}Y_{j}(u)\mathbf{Z}_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))}{\sum_{j}Y_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))}\right)\, dN_{i}(u).
\]
Then using (\ref{cox-decomp}), the score can be re-written as $\mathbf{U}_{\infty}(\mbox{\boldsymbol{$\beta$}})$, where
\[
\mathbf{U}_{t}(\mbox{\boldsymbol{$\beta$}})=\sum_{i}\int_{0}^{t}\left(\mathbf{Z}_{i}(u)-\frac{\sum_{j}Y_{j}(u)\mathbf{Z}_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))}{\sum_{j}Y_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))}\right)\, dM_{i}(u).
\]
Thus, evaluated at the true parameter values, the Cox score, considered as a process in $t$, is a martingale stochastic integral, provided the time-dependent covariates (and $Y_{i}(t)$) are predictable. Large sample properties for the score could then be established using the martingale central limit theorem and transformed into a large sample result for $\widehat{\mbox{\boldsymbol{$\beta$}}}$ by standard Taylor expansions. Also, asymptotic properties of the Breslow estimator (\ref{breslow}) could be established using martingale methods. This is because we may write the estimator as $\widehat{A}_{0}(t)=\widehat{A}_{0}(t\, |\, \widehat{\mbox{\boldsymbol{$\beta$}}})$, where for the true value of $\mbox{\boldsymbol{$\beta$}}$ we have
\[
\widehat{A}_{0}(t\, |\, \mbox{\boldsymbol{$\beta$}})=\int_{0}^{t}\frac{\sum_{i}d N_{i}(u)}{\sum_{j}Y_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))} =A_{0}(t)+\int_{0}^{t}\frac{\sum_{i}dM_{i}(u)}{\sum_{j}Y_{j}(u)\exp(\mbox{\boldsymbol{$\beta$}}^{\mathsf{T}}\mathbf{Z}_{j}(u))}.
\]
That is, $\widehat{A}_{0}(t\, | \, \mbox{\boldsymbol{$\beta$}})-A_{0}(t)$ is a martingale stochastic integral. These results were obtained by Per Kragh Andersen in 1979--80, but a number of technicalities remained to get proper proofs. As mentioned above, Richard Gill visited Copenhagen in 1980 and he was able to provide the proof of consistency and work out the detailed verifications of the general conditions for the asymptotic results in Andersen and Gill's (1982) \emph{Annals of Statistics} paper. It should be noted that N{\ae}s (1982) independently published similar results under somewhat more restrictive conditions, using discrete-time martingale results. Obviously, the results mentioned above also hold for counting processes $N_{i}(t)=I(T_{i}\leq t, D_i=1)$ derived from censored survival times and censoring indicators, $(T_{i}, D_i)$, but historically the result was first derived for the \textquotedblleft Andersen-Gill\textquotedblright\ recurrent events process. Andersen and Borgan (1985), see also Andersen et al.\ (1993, Chapter VII), extended these results to multivariate counting processes modelling the occurrence of several types of events in the same subjects. Later, Barlow and Prentice (1988) and Therneau, Grambsch and Fleming (1990) used the Doob-Meyer decomposition (\ref{cox-decomp}) to define \emph{martingale residuals}
\begin{equation}
\label{martres}
\widehat{M}_{i}(t)=N_{i}(t)-\int_{0}^{t}Y_{i}(u)\exp(\widehat{\mbox{\boldsymbol{$\beta$}}}^{\mathsf{T}}\mathbf{Z}_{i}(u))\, d\widehat{A}_{0}(u).
\end{equation}
Note how $N_{i}(t)$ plays the role of the observed data while the compensator term estimates the expectation. We are then left with the martingale noise term. The martingale residuals (\ref{martres}) provide the basis for a number of goodness-of-fit techniques for the Cox model. First, they were used to study whether the functional form of a quantitative covariate was modelled in a sensible way. Later, cumulative sums of martingale residuals have proven useful for examining several features of hazard based models for survival and event history data, including both the Cox model, Aalen's additive hazards model and others (e.g., Lin, Wei and Ying, 1993; Martinussen and Scheike, 2006). The additive hazards model was proposed by Aalen (1980) as a tool for analyzing survival data with changing effects of covariates. It is also useful for recurrent event data and dynamic path analysis; see e.g.\ Aalen et al.\ (2008).
\section{The monograph \emph{Statistical models based on counting processes}}
As the new approach spread, publishers became interested, and as early as 1982 Martin Jacobsen had published his exposition in the Springer Lecture Notes in Statistics (Jacobsen, 1982). In 1982 Niels Keiding gave an invited talk ``Statistical applications of the theory of martingales on point processes'' at the Bernoulli Society conference on Stochastic Processes in Clermont-Ferrand. (One slide showed a graph of a simulated sample function of a compensator, which prompted the leading French probabilist Michel M\'{e}tivier to exclaim ``This is the first time I have seen a compensator''.) At that conference Klaus Krickeberg, himself a pioneer in martingale theory and an advisor to the Springer Series in Statistics, invited Keiding to write a monograph on this topic. Keiding floated this idea in the well-established collaboration with Andersen, Borgan and Gill.
Aalen was asked to participate, but had just started to build up a medical statistics group in Oslo and wanted to give priority to that. So the remaining four embarked upon what became an intense 10-year collaboration resulting in the monograph \emph{Statistical Models Based on Counting Processes} (Andersen et al., 1993). The monograph combines concrete practical examples, almost all of the authors' own experience, with an exposition of the mathematical background, several detailed chapters on non- and semiparametric models as well as parametric models, and chapters giving preliminary glimpses into topics to come: semiparametric efficiency, frailty models (for more elaborate introductions to frailty models see Hougaard, 2002, or Aalen et al., 2008) and multiple time-scales. Fleming and Harrington had published their monograph \emph{Counting Processes and Survival Analysis} with Wiley in 1991 (Fleming and Harrington, 1991). It gives a more textbook-type presentation of the mathematical background and covers survival analysis up to and including the proportional hazards model for survival data.
\section{Limitations of martingales}
Martingale tools do not cover all areas where survival and event history analysis may be used. In more complex situations one can see the need to use a variety of tools, alongside what martingale theory provides. For staggered entry, for the Cox frailty model, and for Markov renewal process/semi-Markov models (see e.g.\ Andersen et al., 1993, Chapters IX and X, for references on this work), martingale methods give transparent derivations of mean values and covariances, likelihoods, and maximum likelihood estimators; however, to derive large sample theory one needs input from the theory of empirical processes. Thus in these situations the martingale approach helps at the modelling stage and the stage of constructing promising statistical methodology, but one needs different tools for the asymptotic theory. The reason for this in a number of these examples is that the martingale structure corresponds to the dynamics of the model seen in real (calendar) time, while the principal time scales of statistical interest correspond to time since an event which is repeated many times. In the case of frailty models, the problem is that there is an unobserved covariate associated with each individual; observing that individual at late times gives information about the value of the covariate at earlier times. In all these situations, the natural statistical quantities to study can no longer be directly expressed as sums over contributions from each (calendar) time point, weighted by information only from the (calendar time) past. More complex kinds of missing data (frailty models can be seen as an example of missing data) and biased sampling also lead to new levels of complexity, in which the original dynamical time scale becomes just one feature of the problem at hand; other features, which do not mesh well with this time scale, become dominant with regard to the technical investigation of large sample behaviour. A difficulty with the empirical process theory is the return to a basis of independent processes, and so a lot of the niceness of the martingale theory is lost. Martingales allow for very general dependence between processes. However, the martingale ideas also enter into new fields. Lok (2008) used martingale theory to understand the continuous-time version of James Robins' theory of causality.
Similarly, Didelez (2007) used martingales to understand the modern formulation of local dependence and Granger causality. Connected to this are the work of Arjas and Parner (2004) on posterior predictive distributions for marked point process models and the dynamic path analysis of Fosen et al.\ (2006); see also Aalen et al.\ (2008). Hence, there is a new lease of life for the theory. Fundamentally, the idea of modelling how the past influences the present and the future is inherent to the martingale formulation, and this must of necessity be of importance in understanding causality. The martingale concepts from the French probability school may be theoretical and difficult for many statisticians. Jacobsen (1982) and Helland (1982) are nice examples of how the counting process work stimulated probabilists to reappraise the basic probability theory. Both authors succeeded in giving a much more compact and elementary derivation of (different parts of) the basic theory from probability needed for the statistics. This certainly had a big impact at the time in making the field more accessible to more statisticians, especially while the fundamental results from probability were still in the course of reaching their definitive forms and were often not published in the most accessible places or languages. Later these results became the material of standard textbooks. In the long run, statisticians tend to use standard results from probability without bothering too much about how one can prove them from scratch. Once the martingale theory became well established, people were more confident in just citing the results they needed. Biostatistical papers have hardly ever cited papers or even books in theoretical probability. However, at some point it became almost obligatory to cite Andersen and Gill (1982), Andersen et al.\ (1993), and other such works. What was being cited then were worked-out examples of applying the counting process approach to various more or less familiar applied statistical tools like the Cox regression model, especially when being used in a somewhat non-standard context, e.g., with repeated events. It helped that some software packages also refer to such counting process extensions as the basic biostatistical tool. The historical overview presented here shows that the elegant theory of martingales has been used fruitfully in statistics. This is another example showing that mathematical theory developed on its own terms may produce very useful practical tools.
\subsubsection*{Acknowledgement}
Niels Keiding and Per Kragh Andersen were supported by the National Cancer Institute (grant R01-54706-13) and the Danish Natural Science Research Council (grant 272-06-0442).
\section{Introduction}
\setcounter{footnote}{0}
The most massive dark matter halos to have formed so far have characteristic masses of $10^{14}$ to $10^{15}$ solar masses. Although dark matter makes up the vast majority of the mass of these objects, most observational signatures result from baryons. A small fraction of the baryons in these massive halos eventually cools to form stars and galaxies, and it was through the light from these galaxies that the most massive halos were first identified. Because of this, we generally refer to these objects as galaxy clusters, despite the small contribution of galaxies to their total mass. Galaxy clusters are tracers of the highest peaks in the matter density field and, as such, their abundance is exponentially sensitive to the growth of structure over cosmic time. A measurement of the abundance of galaxy clusters as a function of mass and redshift has the power to constrain cosmological parameters to unprecedented levels \citep{wang98,haiman01,holder01b,battye03,molnar04,wang04,lima07}, assuming that the selection criteria are well understood. To usefully constrain the growth history of large scale structure, a sample of galaxy clusters must cover a wide redshift range. Furthermore, the observable property with which the clusters are selected should correlate strongly with halo mass, which is the fundamental quantity predicted from theory and simulations. The thermal Sunyaev-Zel'dovich \citep[SZ;][]{sunyaev72} signatures of galaxy clusters come close to providing exactly these selection criteria. Surveys of galaxy clusters based on the SZ effect have consequently been eagerly anticipated for over a decade. This paper presents the first cosmologically meaningful catalog of galaxy clusters selected via the thermal SZ effect.
\subsection{The Thermal SZ Effect}
The vast majority of known galaxy clusters have been identified by their optical properties or from their X-ray emission. Clusters of galaxies contain anywhere from several tens to many hundreds of galaxies, but these galaxies account for a small fraction of the total baryonic mass in a cluster. Most of the baryons in clusters are contained in the intra-cluster medium (ICM), the hot ($\sim 10^7-10^8\,$K) X-ray-emitting plasma that pervades cluster environments. \citet{sunyaev72} noted that this same plasma should also interact with cosmic microwave background (CMB) photons via inverse Compton scattering, causing a small spectral distortion of the CMB along the line of sight to a cluster. The thermal SZ effect has been observed in dozens of known clusters (clusters previously identified in the optical or X-ray) over the last few decades \citep{birkinshaw99, carlstrom02}. However, it was not until very recently that the first previously unknown clusters were identified through their thermal SZ effect \citep{staniszewski09}. This is mostly due to the small amplitude of the effect. The magnitude of the temperature distortion at a given position on the sky is proportional to the integrated electron pressure along the line of sight. At the position of a massive galaxy cluster, this fluctuation is only on the order of a part in $10^4$, or a few hundred \mbox{$\mu \mbox{K}$}\footnote{Throughout this work, the unit $\textrm{K}$ refers to equivalent fluctuations in the CMB temperature, i.e.,~the level of temperature fluctuation of a 2.73$\,$K blackbody that would be required to produce the same power fluctuation.
The conversion factor is given by the derivative of the blackbody spectrum, $\frac{dB}{dT}$, evaluated at 2.73$\,$K.}. It is only with the current generation of large ($\sim$ kilopixel) detector arrays on 6--12~m telescopes \citep{fowler07, carlstrom09} that large areas of sky are being surveyed to depths sufficient to detect signals of this amplitude. A key feature of the SZ effect is that the SZ surface brightness is insensitive to the redshift of the cluster. As a spectral distortion of the CMB (rather than an intrinsic emission feature), SZ signals redshift along with the CMB. A given parcel of gas will imprint the same spectral distortion on the CMB regardless of its cosmological redshift, depending only on the electron density $n_e$ and temperature $T_e$. This makes the SZ effect an excellent tool for discovering clusters over a wide redshift range. Another aspect of the thermal SZ effect that makes it especially attractive for cluster surveys is that the integrated thermal SZ flux is a direct measure of the total thermal energy of the ICM. The SZ flux is thus expected to be a robust proxy for total cluster mass \citep{barbosa96, holder01a, motl05}. A mass-limited cluster survey across a wide redshift range provides a growth-based test of dark energy to complement the distance-based tests provided by supernovae \citep{perlmutter99a, schmidt98}. Recent results \citep[e.g.,][]{vikhlinin09, mantz10b} have demonstrated the power of such tests to constrain cosmological models and parameters. \subsection{The SPT SZ Cluster Survey} The South Pole Telescope (SPT) \citep{carlstrom09} is a 10-meter off-axis telescope optimized for arcminute-resolution studies of the microwave sky. It is currently conducting a survey of a large fraction of the southern sky with the principal aim of detecting galaxy clusters via the SZ effect. In 2008, the SPT surveyed $\sim 200\,$\ensuremath{\mathrm{deg}^2} \ of the microwave sky with an array of 960 bolometers operating at 95, 150, and $220\,$GHz. Using 40 \ensuremath{\mathrm{deg}^2} of these data (and a small amount of overlapping data from 2007), \citet{staniszewski09} (hereafter S09) presented the first discovery of previously unknown clusters by their SZ signature. \citet{lueker10} (hereafter L10) used $\sim100\,$deg$^2$ \ of the 2008 survey to measure the power spectrum of small-scale temperature anisotropies in the CMB, including the first significant detection of the contribution from the SZ secondary anisotropy. In this paper we expand upon the results in S09 and present an SZ-detection-significance-limited catalog of galaxy clusters identified in the 2008 SPT survey. Redshifts for 21 of these objects have been obtained from follow-up optical imaging, the details of which are discussed in a companion paper \citep{high10}. Using simulated observations we characterize the SPT cluster selection function --- the detectability of galaxy clusters in the survey as a function of mass and redshift --- for the 2008 fields. A simulation-based mass scaling relation allows us to compare the catalog to theoretical predictions and place constraints on the normalization of the matter power spectrum on small scales, $\sigma_8$, and the dark energy equation of state parameter $w$.
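For later reference, the thermal SZ distortion introduced above is conventionally quantified by the Compton $y$ parameter; in the standard non-relativistic formulation (not spelled out elsewhere in this paper),
\begin{equation*}
\frac{\Delta T}{T_{\mathrm{CMB}}} = f(x)\, y, \qquad y = \frac{\sigma_T}{m_e c^2} \int n_e k_B T_e \, dl, \qquad f(x) = x \coth(x/2) - 4,
\end{equation*}
where $x = h\nu / k_B T_{\mathrm{CMB}}$ and $\sigma_T$ is the Thomson cross-section. At $150\,$GHz, $f(x) \approx -0.95$, so clusters appear in these maps as temperature decrements of order a few hundred \mbox{$\mu \mbox{K}$}.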
This paper is organized as follows: \S\ref{sec:obs-reduc} discusses the observations, including data reduction, mapmaking, filtering, cluster-finding, optical follow-up, and cluster redshift estimation; \S\ref{sec:results} presents the resulting cluster catalog; \S\ref{sec:selection} provides a description of our estimate of the selection function; \S\ref{sec:cosmology} investigates the sample in the context of our current cosmological understanding and derives parameter constraints; we discuss limitations and possible contaminants in \S\ref{sec:systematics}, and we close with a discussion in \S\ref{sec:discussion}. For our fiducial cosmology we assume a spatially flat \mbox{$\Lambda$CDM} \ model (parameterized by $\Omega_b h^2$, $\Omega_c h^2$, $H_0$, $n_s$, $\tau$, and $A_{002}$) with parameters consistent with the WMAP 5-year \mbox{$\Lambda$CDM} \ best-fit results \citep{dunkley09}\footnote{These parameters are sufficiently similar to the WMAP 7-year preferred cosmology \citep{larson10} that a re-analysis based on that newer work is not warranted.}, namely $\Omega_M=0.264$, $\Omega_b=0.044$, $h=0.71$, $\sigma_8=0.80$. All references to cluster mass refer to $M_{200}$, the mass enclosed within a spherical region whose mean density is $200\times\rho_{mean}$, where $\rho_{mean}$ is the mean matter density on large scales at the redshift of the cluster. \section{Observations, Data Reduction, Cluster Extraction, and Optical Follow-up} \label{sec:obs-reduc} \subsection{Observations} \label{sec:obs} The results presented in this work are based on observations performed by the SPT in 2008. \citet{carlstrom09} and S09 describe the details of these observations; we briefly summarize them here. Two fields were mapped to the nominal survey depth in 2008: one centered at right ascension (R.A.) $5^\mathrm{h} 30^\mathrm{m}$, declination (decl.) $-55^\circ$ (J2000), hereafter the $5^\mathrm{h}$ \ field; and one centered at R.A. $23^\mathrm{h} 30^\mathrm{m}$, decl.~$-55^\circ$, hereafter the $23^\mathrm{h}$ \ field. Results in this paper are based on roughly 1500 hours of observing time split between the two fields. The areas mapped with near uniform coverage were $91\,\ensuremath{\mathrm{deg}^2}$ in the $5^\mathrm{h}$ \ field and $105\,\ensuremath{\mathrm{deg}^2}$ in the $23^\mathrm{h}$ \ field. This work considers only the $150\,$GHz data from the uniformly covered portions of the 2008 fields. The noise in the $95\,$GHz detectors was very high for the 2008 season, and the $220\,$GHz observations were contaminated by the atmosphere at large scales where they would be useful for removing CMB fluctuations. Including these bands did not significantly improve the efficiency of cluster detections, so they were not used in the analysis presented here. The final depth of the $150\,$GHz \ maps of the two fields is very similar, with the white noise level in each map equal to $18\,\mbox{$\mu \mbox{K}$}$-arcmin. The two fields were observed using slightly different scan strategies. For the $5^\mathrm{h}$ \ field, the telescope was swept in azimuth at a constant velocity ($\sim$0.25$^\circ$/s on the sky at the field center) across the entire field and then stepped in elevation, with this pattern continuing until the whole field was covered. The $23^\mathrm{h}$ \ field was observed using a similar strategy, except that the azimuth scans covered only one half of the field at any one time, switching halves each time one was completed.
One consequence of this observing strategy was that a narrow strip in the middle of the $23^\mathrm{h}$ \ field received twice as much detector time as the rest of the map. The effect of this strip on our catalog is minimal and is discussed in \S\ref{sec:deepstrip}. A single observation of either field lasted $\sim2\,$hours. Between individual observations, several short calibration measurements were performed, including measurements of a chopped thermal source, $2$ degree elevation nods, and scans across the galactic HII regions RCW38 and MAT5a. This series of regular calibration measurements was used to identify detectors with good performance, assess relative detector gains, monitor atmospheric opacity and beam parameters, and model pointing variations. \subsection{Data processing and mapmaking} \label{sec:processing} The data reduction pipeline applied to SPT data in this work is very similar to that used in S09. Broadly, the pipeline for both fields consists of filtering the time-ordered data from each individual detector, reconstructing the pointing for each detector, and combining data from all detectors in a given observing band into a map by simple inverse-variance-weighted binning and averaging. The small differences between the data reduction used in this work and that of S09 are: \begin{itemize} \item In S09, a 19th-order polynomial was fit and removed from each detector's timestream on each scan across the field. Samples in the timestream which mapped to positions on the sky near bright point sources were excluded from the fit. A similar subtraction was performed here, except that a first-order polynomial was removed, supplemented by sines and cosines (Fourier modes). Frequencies for the Fourier modes were evenly spaced from $0.025\,$Hz to $0.25\,$Hz. This acts approximately as a high-pass filter in the R.A. direction with a characteristic scale of $\sim 1 ^\circ$ on the sky. \item In S09, a mean across functioning detectors was calculated at each snapshot in time and subtracted from each sample. Here, both a mean and a slope across the two-dimensional array were calculated at each time and subtracted. This acts as a roughly isotropic spatial high-pass filter, with a characteristic scale of $\sim 0.5^\circ$. \item As in \citet{vieira10}, a small pointing correction ($\sim 5^{\prime\prime}$ on the sky) was applied, based on comparisons of radio source positions derived from SPT maps and positions of those sources in the AT20G catalog \citep{murphy10}. \end{itemize} The relative and absolute calibrations of detector response were performed as in L10. The relative gains of the detectors and their gain variations over time were estimated using measurements of their response to a chopped thermal source. These relative calibrations were then tied to an absolute scale through direct comparison of WMAP 5-year maps \citep{hinshaw09} to dedicated large-area SPT scans. This calibration is discussed in detail in L10, and is accurate to $3.6\%$ in the $150\,$GHz data. \subsection{Cluster Extraction} \label{sec:clusterfind} The cluster extraction procedure used in this work for both fields is identical to the procedure used in S09, where more details can be found. The SPT maps were filtered to optimize detection of objects with morphologies similar to the SZ signatures expected from galaxy clusters, through the application of spatial matched filters \citep{haehnelt96,herranz02a,herranz02b,melin06}. 
In the spatial Fourier domain, the map was multiplied by \begin{equation*} \psi(k_x,k_y) = \frac{B(k_x,k_y) S(|\vec{k}|)}{B(k_x,k_y)^2 N_{astro}(|\vec{k}|) + N_{noise}(k_x,k_y)} \end{equation*} where $\psi$ is the matched filter, $B$ is the response of the SPT instrument after timestream processing to signals on the sky, $S$ is the assumed source template, and the total power has been broken into astrophysical ($N_{astro}$) and instrumental-plus-atmospheric noise ($N_{noise}$) components. For the source template, a projected spherical $\beta$-model, with $\beta$ fixed to 1, was used: $$ \Delta T = \Delta T_0 (1+\theta^2/\theta_c^2)^{-1}, $$ where the normalization $\Delta T_0$ and the core radius $\theta_c$ are free parameters. The noise power spectrum $N_{noise}$ includes contributions from atmospheric and instrumental noise, while $N_{astro}$ includes power from primary and lensed CMB fluctuations, an SZ background, and point sources. The atmospheric and instrumental noise were estimated from jackknife maps as in S09, the CMB power spectrum was updated to the lensed CMB spectrum from the WMAP5 best-fit \mbox{$\Lambda$CDM} \ cosmology \citep{dunkley09}, the SZ background level was assumed to be flat in $\ell(\ell+1)C_\ell$ with the amplitude taken from L10, and the point source power was assumed to be flat in $C_\ell$ at the level given in \citet{hall10}. To avoid spurious decrements from the wings of bright point sources, all positive sources above a given flux (roughly $7\,$mJy, or $5\,\sigma$ in a version of the map filtered to optimize point-source signal-to-noise) were masked to a radius of $4^\prime$ before the matched filter was applied. Roughly 150 sources were masked in each field, of which $90$--$95\%$ are radio sources. The final sky areas considered after source masking were 82.4 and $95.1\,$\ensuremath{\mathrm{deg}^2} \ for the $5^\mathrm{h}$ \ and $23^\mathrm{h}$ \ fields, respectively. The maps were filtered for twelve different cluster scales, constructed using source templates with core radii $\theta_c$ evenly spaced from $0.25^\prime$ to $3.0^\prime$. Each filtered map $M_{ij}$, where $i$ refers to the filter scale and $j$ to the field, was then divided into strips corresponding to distinct $90^\prime$ ranges in elevation. The noise was estimated independently in each strip in order to account for the weak elevation dependence of the survey depth. The noise in the $k^{th}$ strip of map $M_{ij}$, $\sigma_{ijk}$, was estimated as the standard deviation of the map within that strip. Signal-to-noise maps $\widetilde{M}_{ij}$ were then constructed by dividing each strip $k$ in map $M_{ij}$ by $\sigma_{ijk}$. SZ cluster decrements were identified in each map $\widetilde{M}_{ij}$ by a simple (negative) peak detection algorithm similar to SExtractor \citep{bertin96}. The highest signal-to-noise value associated with a decrement, across all filter scales, was defined as \ensuremath{\xi}, and taken as the significance of a detection. Candidate clusters were identified in the data down to \ensuremath{\xi} \ of 3.5, though this work considers only the subset with \ensuremath{\xi}$\geq 5$. These detection significances are robust against the choice of source template: the use of Nagai \citep{nagai07}, Arnaud \citep{arnaud10}, or Gaussian templates in place of $\beta$-models was found to be free of bias and to introduce negligible ($\sim 2\%$) scatter on recovered \ensuremath{\xi}.
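To make the construction above concrete, the following sketch applies the filter in the Fourier domain and takes the maximum signal-to-noise over filter scales, as in the definition of \ensuremath{\xi}. It is an illustration rather than the SPT pipeline: all array names are hypothetical, the inputs $B$, $N_{astro}$, and $N_{noise}$ are assumed to be precomputed on the same two-dimensional Fourier grid as the map, and the per-strip noise estimation is collapsed to a single map-wide standard deviation.
\begin{verbatim}
import numpy as np

def beta_template_ft(n, pix_arcmin, theta_c_arcmin):
    # Fourier transform of the projected beta-model (beta = 1):
    # Delta T(theta) = (1 + theta^2 / theta_c^2)^(-1), unit amplitude.
    x = (np.arange(n) - n // 2) * pix_arcmin
    xx, yy = np.meshgrid(x, x, indexing="ij")
    profile = 1.0 / (1.0 + (xx**2 + yy**2) / theta_c_arcmin**2)
    return np.fft.fft2(np.fft.ifftshift(profile))

def matched_filter(sky_map, B, S, N_astro, N_noise):
    # psi = B S / (B^2 N_astro + N_noise), applied in the Fourier domain.
    psi = B * S / (B**2 * N_astro + N_noise)
    return np.fft.ifft2(np.fft.fft2(sky_map) * psi).real

def max_significance(sky_map, B, N_astro, N_noise, pix_arcmin, theta_cs):
    # Filter at each cluster scale (e.g. 0.25' to 3.0') and keep, per
    # pixel, the highest signal-to-noise over scales.  Decrements are
    # negative, so the sign is flipped to make clusters positive peaks.
    best = np.full(sky_map.shape, -np.inf)
    for theta_c in theta_cs:
        S = beta_template_ft(sky_map.shape[0], pix_arcmin, theta_c)
        filtered = matched_filter(sky_map, B, S, N_astro, N_noise)
        snr = -filtered / filtered.std()  # per-strip noise in the real analysis
        best = np.maximum(best, snr)
    return best
\end{verbatim}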
\subsection{Optical Imaging and Spectroscopy} \label{sec:optical} Optical imaging was used for confirmation of candidates, for photometric redshift estimation, and for cluster richness characterization. A detailed description of the coordinated optical effort is presented in \citet{high10} (hereafter H10) and is summarized here. The $5^\mathrm{h}$ \ and $23^\mathrm{h}$ \ fields were selected in part for overlap with the Blanco Cosmology Survey \citep[BCS, see][]{ngeow09}, which consists of deep optical images using $g$, $r$, $i$ and $z$ filters. These images, obtained from the Blanco 4m telescope at CTIO with the MOSAIC-II imager in 2005--2007, were used when available. The co-added BCS images have 5$\,\sigma$ galaxy detection thresholds of 24.75, 24.65, 24.35 and 23.05 magnitude in $griz$, respectively. For clusters that fell outside the BCS coverage region, as well as for five that fell within, and for the unconfirmed candidate, images were obtained using the twin $6.5\,$m Magellan telescopes at Las Campanas, Chile. The Magellan imaging data were obtained by taking successively deeper images until a detection of the early-type cluster galaxies was achieved, complete to between $L^*$ and $0.4L^*$ in at least one band. The Magellan images were obtained under a variety of conditions, and the Stellar Locus Regression technique \citep{high09} was used to obtain precise colors and magnitudes. Spectroscopic data were obtained for a subset of the sample using the Low Dispersion Survey Spectrograph on Magellan \citep[LDSS3, see][]{osip08} in longslit mode. Typical exposures were $20$--$60$ minutes, with slit orientations that contained the brightest cluster galaxy (BCG) and as many additional red cluster members as possible. Photometric redshifts were estimated using standard red sequence techniques and verified using the spectroscopic subsample. A red sequence model was derived from the work of \citet{bruzual03}, and local overdensities of red galaxies were searched for near the cluster candidate positions. By comparing the resulting photo-$z$'s to spectroscopic redshifts within the subsample of $10$ clusters for which spectroscopic data are available, H10 estimates the photo-$z$ uncertainty $\sigma_z$ as given in Table \ref{tab:catalog}, of order $3\%$ in $(1+z)$. Completeness in red sequence cluster finding was estimated from mock catalogs. At BCS depths, galaxy cluster completeness for the masses relevant for this sample is nearly unity up to a redshift of one, above which the completeness falls rapidly, reaching 50\% at about redshift 1.2, and 0\% at redshift 1.25. At depths about a magnitude brighter (corresponding to the depth of the Magellan observations of the unconfirmed candidate, \S\ref{sec:cand_notes}), the completeness deviates from unity at redshift $\sim 0.8$. \section{Catalog} \label{sec:results} \begin{table*} \begin{minipage}{\textwidth} \centering \caption{The SPT Cluster Catalog for the 2008 Observing Season} \small \begin{tabular}{l cc cc cccc} \hline\hline \rule[-2mm]{0mm}{6mm} Object Name & R.A. & decl.
& \ensuremath{\xi} & $\theta_c$ & Photo-z & $\sigma_z$ & Spec-z & Opt \\ \hline SPT-CL J0509-5342 \tablenotemark{\ensuremath{\dagger}}\tablenotemark{\ensuremath{\ddagger}} &77.336 &-53.705 & 6.61 & 0.50 & 0.47 & 0.04 & 0.4626 &BCS+Mag\\ SPT-CL J0511-5154 \tablenotemark{a} &77.920 &-51.904 & 5.63 & 0.50 & 0.74 & 0.05 & - &Mag\\ SPT-CL J0516-5430 \tablenotemark{\ensuremath{\dagger}}\tablenotemark{\ensuremath{\ddagger}}\tablenotemark{b} &79.148 &-54.506 & 9.42 & 0.75 & 0.25 & 0.03 & 0.2952 &BCS+Mag\\ SPT-CL J0521-5104 \tablenotemark{\ensuremath{\ddagger}}\tablenotemark{c} &80.298 &-51.081 & 5.45 & 1.00 & 0.72 & 0.05 & - &BCS\\ SPT-CL J0528-5300 \tablenotemark{\ensuremath{\dagger}}\tablenotemark{d} &82.017 &-53.000 & 5.45 & 0.25 & 0.75 & 0.05 & 0.7648 &BCS+Mag\\ SPT-CL J0533-5005 &83.398 &-50.092 & 5.59 & 0.25 & 0.83 & 0.05 & 0.8810 &Mag\\ SPT-CL J0539-5744 \tablenotemark{\ensuremath{\ddagger}} &85.000 &-57.743 & 5.12 & 0.25 & 0.77 & 0.05 & - &Mag\\ SPT-CL J0546-5345 \tablenotemark{\ensuremath{\dagger}}\tablenotemark{\ensuremath{\ddagger}} &86.654 &-53.761 & 7.69 & 0.50 & 1.16 & 0.06 & - &BCS\\ SPT-CL J0551-5709 \tablenotemark{\ensuremath{\ddagger}}\tablenotemark{e} &87.902 &-57.156 & 6.13 & 1.00 & 0.41 & 0.04 & 0.4230 &Mag\\ SPT-CL J0559-5249 \tablenotemark{\ensuremath{\ddagger}} &89.925 &-52.826 & 9.28 & 1.00 & 0.66 & 0.04 & 0.6112 &Mag\\ SPT-CL J2259-5617 \tablenotemark{\ensuremath{\ddagger}}\tablenotemark{f} &344.997 &-56.288 & 5.29 & 0.25 & 0.16 & 0.03 &0.1528 &Mag\\ SPT-CL J2300-5331 \tablenotemark{\ensuremath{\ddagger}}\tablenotemark{g} &345.176 &-53.517 & 5.29 & 0.25 & 0.29 & 0.03 & - &Mag\\ SPT-CL J2301-5546 &345.469 &-55.776 & 5.19 & 0.50 & 0.78 & 0.05 & - &Mag\\ SPT-CL J2331-5051 &352.958 &-50.864 & 8.04 & 0.25 & 0.55 & 0.04 & 0.5707 &Mag\\ SPT-CL J2332-5358 \tablenotemark{\ensuremath{\ddagger}}\tablenotemark{h} &353.104 &-53.973 & 7.30 & 1.50 & 0.32 & 0.03 & - &BCS+Mag\\ SPT-CL J2337-5942 \tablenotemark{\ensuremath{\ddagger}} &354.354 &-59.705 & 14.94 & 0.25 & 0.77 & 0.05 & 0.7814 &Mag\\ SPT-CL J2341-5119 &355.299 &-51.333 & 9.65 & 0.75 & 1.03 & 0.05 & 0.9983 &Mag\\ SPT-CL J2342-5411 &355.690 &-54.189 & 6.18 & 0.50 & 1.08 & 0.06 & - &BCS\\ SPT-CL J2343-5521 &355.757 &-55.364 & 5.74 & 2.50 & - & - & - &BCS+Mag\\ SPT-CL J2355-5056 &358.955 &-50.937 & 5.89 & 0.75 & 0.35 & 0.04 & - &Mag\\ SPT-CL J2359-5009 &359.921 &-50.160 & 6.35 & 1.25 & 0.76 & 0.05 & - &Mag\\ SPT-CL J0000-5748 &0.250 &-57.807 & 5.48 & 0.50 & 0.74 & 0.05 & - &Mag\\ \hline \end{tabular} \label{tab:catalog} \begin{@twocolumnfalse} \tablecomments{Recall that \ensuremath{\xi} \ is the maximum signal-to-noise obtained over the set of filter scales for each cluster. The $\theta_c$ (given in arcminutes) refer to the preferred filter scale, that at which \ensuremath{\xi} \ was found. Cluster positions in R.A. and decl. are given in degrees, and refer to the center of SZ brightness in the map filtered at the preferred scale, calculated as the mean position of all pixels associated with the detection, weighted by their SZ brightness. The four rightmost columns refer to optical follow-up observations, giving the photometric redshift measurements and uncertainties of the optical counterpart, spectroscopic redshifts where available, and the source (BCS or Magellan) of the follow-up data.} \tablenotetext{\ensuremath{\dagger}}{Clusters identified in S09.
The cluster names have been updated (clusters were identified as SPT-CL 0509-5342, SPT-CL 0517-5430, SPT-CL 0528-5300, and SPT-CL 0547-5345 in S09) in response to an IAU naming convention request, an improved pointing model, and the updated data processing.} \tablenotetext{\ensuremath{\ddagger}}{Clusters within $2^\prime$ of RASS sources \citep[RASS-FSC, RASS-BSC;][]{voges00, voges99}.} \tablenotetext{a}{SCSO J051145-515430 \citep{menanteau10}.} \tablenotetext{b}{Abell S0520 \citep{abell89}, RXCJ0516.6-5430 \citep{boehringer04}, SCSO J051637-543001 \citep{menanteau10}.} \tablenotetext{c}{SCSO J052113-510418 \citep{menanteau10}.} \tablenotetext{d}{SCSO J052803-525945 \citep{menanteau10}.} \tablenotetext{e}{Abell S0552 \citep{abell89} is in the foreground, $5\arcmin$ away at z=0.09 (this redshift not previously measured).} \tablenotetext{f}{Abell 3950 \citep{abell89}. Spectroscopic redshift from \citet{jones05b}.} \tablenotetext{g}{Abell S1079 \citep[][redshift shown not previously measured]{abell89}.} \tablenotetext{h}{SCSO J233227-535827 \citep{menanteau10}.} \end{@twocolumnfalse} \normalsize \end{minipage} \end{table*} The resulting catalog of galaxy clusters, complete for \ensuremath{\xi} $\geq5$, is presented in Table \ref{tab:catalog}. Simulations (\S\ref{sec:sims}) suggest that this catalog should be highly complete above limiting mass and redshift thresholds, with relatively low contamination. A total of 22 candidates were identified; optical follow-up confirmed all but one and provided redshift information for the confirmed clusters. Three clusters were previously known from X-ray and optical surveys, three were previously reported from this survey by S09, three were first identified in a recent analysis of BCS data by \citet{menanteau10}, and the remainder are new discoveries. Detailed comparisons of the SPT and \citet{menanteau10} cluster catalogs and selection will be the subject of future work. Thumbnail images of the signal-to-noise maps, $\widetilde{M}$, at the preferred filter scale for each cluster are provided in Appendix \ref{app:thumbnails}, Figure \ref{fig:paper_thumbs}. Signal-to-noise as a function of filter scale for each cluster is shown in Appendix \ref{app:thumbnails}, Figure \ref{fig:paper_rcores}. Estimates of cluster masses are possible with the aid of a scaling relation (below, \S\ref{sec:scaling_relation}), and are discussed in Appendix \ref{app:mass_estim}. \subsection{Noteworthy Clusters} \label{sec:cand_notes} \paragraph{SPT-CL J2259-5617} The SZ signal from this cluster is anomalously compact for such a low-redshift object. The cosmological analysis (\S\ref{sec:cosmology}) explicitly excludes all $z<0.3$ clusters, so this cluster is not used in parameter estimation. \paragraph{SPT-CL J2331-5051} This cluster appears to be one of a pair of clusters at comparable redshift, likely undergoing a merger. It will be discussed in detail in a future publication. The fainter partner is not included in this catalog as its significance ($\ensuremath{\xi}=4.81$) falls below the detection threshold. \paragraph{SPT-CL J2332-5358} This cluster is coincident with a bright dusty point source which we identify in the $220\,$GHz data of the $23^\mathrm{h}$ \ field. Although the $150\,$GHz flux from this source could be removed with the aid of the $220\,$GHz map, a multi-frequency analysis is outside the scope of the present work. The impact of point sources on the resulting cluster catalog is discussed in \S\ref{sec:corr_ps}. \paragraph{SPT-CL J2343-5521} No optical counterpart was found for this candidate.
The field was imaged with both BCS and Magellan, and no cluster of galaxies was found down to a $5\,\sigma$ point-source detection depth. The simulated optical completeness suggests that this candidate is either a false positive in the SPT catalog or a cluster at high redshift ($z \gtrsim 1.2$). While the relatively high $\ensuremath{\xi}=5.74$ indicates a $\sim7\%$ chance of a false detection in the SPT survey area (see discussion of contamination below, \S\ref{sec:sf}), the signal-to-noise of this detection exhibits peculiar behavior with $\theta_c$ (see Figure \ref{fig:paper_rcores}), preferring significantly larger scales than any other candidate, consistent with a CMB decrement. Further multi-wavelength follow-up observations are underway on this candidate, and preliminary results indicate it is likely a false detection. \subsection{Recovering Integrated SZ Flux} \label{sec:sz_flux} The optimal filter described in \S\ref{sec:clusterfind} provides an estimate of the $\beta$-model normalization, $\Delta T_0$, and core size for each cluster, based on the filter scale $\theta_c$ at which the significance \ensuremath{\xi} \ was maximized. Assuming prior knowledge of the ratio $\theta_{200}/\theta_c$ (where $\theta_{200}$ is the angle subtending the physical radius $R_{200}$ at the redshift of the cluster), one can integrate the $\beta$-profile to obtain an estimate of the integrated SZ flux, $Y$. Basic physical arguments and hydrodynamical simulations of clusters have demonstrated $Y$ to be a tight (low intrinsic scatter) proxy for cluster mass \citep{barbosa96, holder01a, motl05, nagai07, stanek10}. In single-frequency SZ surveys, the primary CMB temperature anisotropies provide a source of astrophysical contamination that greatly inhibits an accurate measure of $\theta_c$ \citep{melin06}. The modes at which the primary CMB dominates must be filtered out of the map, significantly reducing the range of angular scales that can be used by the optimal filter to constrain $\theta_c$. This range is already limited by the $\sim1^\prime$ instrument beam, which only resolves $\theta_c$ for the larger clusters. Any integrated quantity will thus be poorly measured. \citet{melin06} demonstrated that if the value of $\theta_c$ can be provided by external observations, e.g., X-ray, $Y$ can be accurately measured. The inability to constrain $\theta_c$ can be seen in Figure \ref{fig:paper_rcores}, where the highest signal-to-noise associated with a peak is plotted against each filter scale. For several clusters (for example, SPT-CL J0516-5430, SPT-CL J0551-5709, and SPT-CL J2332-5358), the peak in signal-to-noise associated with a cluster is very broad in $\theta_c$. Because of this confusion, we do not report $Y$ in this work. Instead, as described below (\S\ref{sec:scaling_relation}), we use detection significance as a proxy for mass. Multi-frequency surveys are not in principle subject to this limitation as the different frequencies can be combined to eliminate sources of noise that are correlated between bands, thus increasing the range of angular scales available for constraining cluster profiles. \section{SZ Selection Function} \label{sec:selection} In this section, we characterize the SPT cluster sample identified in Table \ref{tab:catalog}. Specifically, we describe the SPT cluster selection function in terms of the catalog completeness as a function of mass and redshift, and the contamination rate.
This selection function was determined by applying the cluster detection algorithm described in \S\ref{sec:clusterfind} to a large number of simulated SPT observations. These simulations included the dominant astrophysical components (primary and lensed CMB, cluster thermal SZ, and two families of point sources), accounted for the effects of the SPT instrument and data processing (the ``transfer function''), and contained realistic atmospheric and detector noise. \subsection{Simulated Thermal SZ Cluster Maps} \label{sec:sims} Simulated SZ maps were generated using the method of \citet{shaw09}, where a detailed description of the procedure can be found. In brief, the semi-analytic gas model of \cite{bode07} was applied to halos identified in the output of a large dark matter lightcone simulation. The cosmological parameters for this simulation were chosen to be consistent with those measured from the WMAP 5-year data combined with large-scale structure observations \citep{dunkley09}, namely $\Omega_M = 0.264$, $\Omega_b = 0.044$, and $\sigma_8 = 0.8$. The simulated volume was a periodic box of size $1\,h^{-1}\,$Gpc. The matter distribution in 421 time slices was arranged into a lightcone covering a single octant of the sky from $0 < z \leq 3$. Dark matter halos were identified and gas distributions were calculated for each halo using the semi-analytic model of \citet{bode07}. This model assumes that intra-cluster gas resides in hydrostatic equilibrium in the gravitational potential of the host dark matter halo with a polytropic equation of state. As discussed in \citet{bode07}, the most important free parameter is the energy input into the cluster gas via non-thermal feedback processes, such as supernovae and outflows from active galactic nuclei (AGN). This is set through the parameter $\epsilon_f$ such that the feedback energy is $E_{f} = \epsilon_f M_* c^2$, where $M_*$ is the total stellar mass in the cluster. \citet{bode07} calibrate $\epsilon_f$ by comparing the model against observed X-ray scaling relations for low-redshift ($z<0.25$) group- and cluster-mass objects. We note that the redshift range in which the model has been calibrated and that encompassed by the cluster sample presented here barely overlap; comparison of the model to the SPT sample (as, for example, in \S\ref{sec:cosmology}) thus provides a test of the predicted cluster and SZ signal evolution at high redshift. For our fiducial model we adopt $\epsilon_f = 5\times10^{-5}$; however, for comparison, we also generate maps using the `standard' and `star-formation only' versions of this model described in \citet{bode09}. There are two principal differences between these models. First, the stellar mass fraction $M_*/M_{\rm gas}$ is constant with total cluster mass in the fiducial model, but mass-dependent in the `standard' and `star-formation' models. Second, the amount of energy feedback is significantly lower in the `standard' model than in the fiducial one, and zero in the `star-formation' model. From the output of each model, a 2D image of SZ intensity for each cluster with mass $M > 5 \times 10^{13}\,\ensuremath{M_\odot} h^{-1}$ was produced by summing up the electron pressure along the line of sight. SZ cluster sky maps were constructed by projecting down the lightcone, summing up the contribution of all the clusters along the line of sight. Individual SZ sky maps were $10 \times 10$ degrees in size, resulting in a total of 40 independent maps. For each map, the mass, redshift, and position of each cluster was recorded.
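Schematically, turning the recorded lightcone catalog into an SZ sky map amounts to accumulating each cluster's projected signal into a pixel grid, as in the sketch below; the catalog entries are hypothetical, and the $\beta=1$ profile of \S\ref{sec:clusterfind} stands in for the pressure-model projection used in the actual simulations.
\begin{verbatim}
import numpy as np

def paint_sz_map(npix, pix_arcmin, catalog):
    # Accumulate per-cluster SZ decrements into a map (schematic).  In the
    # actual simulations each cluster's signal comes from integrating the
    # model electron pressure along the line of sight; here a beta = 1
    # projected profile stands in for that step.
    sky = np.zeros((npix, npix))
    yy, xx = np.indices((npix, npix), dtype=float)
    for x0, y0, dT0_uK, theta_c_arcmin in catalog:
        theta2 = ((xx - x0)**2 + (yy - y0)**2) * pix_arcmin**2
        sky += dT0_uK / (1.0 + theta2 / theta_c_arcmin**2)
    return sky

# A 10 x 10 degree map with 0.25' pixels has npix = 2400, e.g.:
# sky = paint_sz_map(2400, 0.25, [(1200.0, 1200.0, -300.0, 0.5)])
\end{verbatim}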
From SPT pointed observations of X-ray-selected clusters, \citet{plagge10} have demonstrated that cluster radial SZ profiles match the form of the ``universal'' electron pressure profile measured by \citet{arnaud10} from X-ray observations of massive, low-redshift clusters. To complement the set of maps generated using the semi-analytic gas model, SZ sky maps were generated in which the projected form of the \citet{arnaud10} pressure profile was used to generate the individual cluster SZ signals. \subsection{Point Source Model} At $150\,$GHz and at the flux levels of interest to this analysis ($\sim 1$ to $\sim 10$~mJy), the extragalactic source population is expected to be primarily composed of two broad classes: sources dominated by thermal emission from dust heated by a burst of star formation, and sources dominated by synchrotron emission from AGN. We refer to these two families as ``dusty sources'' and ``radio sources'' and include models for both in our simulated observations. For dusty sources, the source count model of \citet{negrello07} at $350\,$GHz was used. These counts are based on the physical model of \citet{granato04} for high-redshift SCUBA-like sources and on a more phenomenological approach for late-type galaxies (starburst plus normal spirals). Source counts were estimated at $150\,$GHz by assuming a scaling for the flux densities of $S_\nu \propto \nu^\alpha$, with $\alpha=3$ for high-redshift protospheroidal galaxies and $\alpha=2$ for late-type galaxies. For radio sources, the \citet{dezotti05} model for counts at $150\,$GHz was used. This model is consistent with the measurements of \citet{vieira10} for the radio source population at $150\,$GHz. Realizations of source populations were generated by sampling from Poisson distributions for each population in bins with fluxes from $0.01\,$mJy to $1000\,$mJy. Sources were distributed randomly across the map. Correlations between sources or with galaxy clusters were not modeled, and we discuss this potential contamination in \S\ref{sec:corr_ps}. \subsection{CMB Realizations} Simulated CMB anisotropies were produced by generating sky realizations based on the gravitationally lensed WMAP 5-year \mbox{$\Lambda$CDM} \ CMB power spectrum. Non-Gaussianity in the lensed power was not modeled. \subsection{Transfer Function} \label{sec:transfunc} The transfer function, i.e., the effect of the instrument beam and the data processing on sky signal, was emulated by producing synthetic SPT timestreams from simulated skies sampled using the same scans employed in the observations. The sky signal was convolved with the measured SPT $150\,$GHz beam, timestream samples were convolved with detector time constants, and the SPT data processing (\S\ref{sec:processing}) was performed on the simulated timestreams to produce maps. Full emulation of the transfer function is a computationally intensive process; to make a large number of simulated observations, the transfer function was modeled as a 2D Fourier filter. The accuracy of this approximation was measured by comparing recovered \ensuremath{\xi} \ of simulated clusters in skies passed through the full transfer function against the \ensuremath{\xi} \ of the same clusters when the transfer function was approximated as a Fourier filter applied to the map. Systematic differences were found to be less than 1\%, and on an object-by-object basis the two methods produced measured \ensuremath{\xi} \ that agreed to better than 3\%.
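As a rough illustration of this approximation, the sketch below builds a two-dimensional Fourier filter from a Gaussian beam and two high-pass terms standing in for the scan-direction filtering ($\sim1^\circ$) and the array-wide mean/slope removal ($\sim0.5^\circ$) of \S\ref{sec:processing}; the functional forms and cutoffs are illustrative assumptions, not the measured SPT transfer function.
\begin{verbatim}
import numpy as np

def transfer_model(n, pix_arcmin, fwhm_arcmin=1.0,
                   ra_cut_deg=1.0, iso_cut_deg=0.5):
    # Schematic 2D Fourier model of beam + processing: a Gaussian beam
    # times a high-pass along the scan (R.A.) direction and a roughly
    # isotropic high-pass.  Functional forms and cutoffs are illustrative.
    k = np.fft.fftfreq(n, d=pix_arcmin / 60.0)  # cycles per degree
    kx, ky = np.meshgrid(k, k, indexing="ij")
    sigma_deg = fwhm_arcmin / 60.0 / np.sqrt(8.0 * np.log(2.0))
    beam = np.exp(-2.0 * (np.pi * sigma_deg)**2 * (kx**2 + ky**2))
    hp_ra = 1.0 - np.exp(-(kx * ra_cut_deg)**2)
    hp_iso = 1.0 - np.exp(-(kx**2 + ky**2) * iso_cut_deg**2)
    return beam * hp_ra * hp_iso

def apply_transfer(sky, transfer):
    # Filter a simulated sky map with the Fourier-domain transfer model.
    return np.fft.ifft2(np.fft.fft2(sky) * transfer).real
\end{verbatim}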
\subsection{Instrumental \& Atmospheric Noise} \label{sec:noisesim} Noise maps were created from SPT data by subtracting one half of each observation from the other half. Within each observation, one direction (azimuth either increasing or decreasing) was chosen at random, and all data taken while the telescope was moving in that direction were multiplied by $-1$. The data were then processed and combined as usual to produce a ``jackknife'' map which contained the full noise properties of the final field map, but with all sky signal removed. \subsection{The $23^\mathrm{h}$ \ Deep Strip} \label{sec:deepstrip} Due to the observing strategy employed on the $23^\mathrm{h}$ \ field, a $\sim 1.5^\circ$ strip in the middle of that map contains significantly lower atmospheric and instrumental noise than the rest of the map. The jackknife noise maps (\S\ref{sec:noisesim}) used in simulated observations naturally include this deep strip, so any effects due to this feature are taken into account in the simulation-based estimation of the average selection function (\S\ref{sec:sf}) and scaling relation (\S\ref{sec:scaling_relation}) across the whole survey region. The cosmological analysis (\S\ref{sec:cosmology}) uses these averaged quantities; simulated observations performed with and without a deep strip demonstrated that any bias or additional scatter from using the averaged quantities is negligible compared to the statistical errors. \subsection{Completeness and Contamination} \label{sec:sf} Forty realizations of the 2008 SPT survey (two fields each) were simulated, from which clusters were extracted and matched against input catalogs. Figure \ref{fig:sz_completeness} shows the completeness of the simulated SPT sample, i.e., the fraction of clusters in simulated SPT maps that were detected with $\ensuremath{\xi} \geq 5$, as a function of mass and redshift. The exact shape and location of the curves in this figure depend on the detailed modeling of intra-cluster physics, which remains uncertain. The increase in SZ brightness (and cluster detectability) with increasing redshift at fixed mass is due to the increased density and temperature of high-redshift clusters, and is in keeping with self-similar evolution. At low redshifts ($z\lesssim0.3$), CMB confusion suppresses cluster detection significances and drives a strong low-redshift evolution in the selection function. These completeness curves were not used in the cosmological analysis (\S\ref{sec:cosmology}), where uncertainties on the mass scaling relation (\S\ref{sec:scaling_relation}) account for uncertainties in the modeling of intra-cluster physics. \begin{figure}[] \centering \includegraphics[scale=0.58]{sz_completeness.pdf} \caption[]{Simulated catalog completeness as a function of mass and redshift for a significance cut of $\ensuremath{\xi} \geq 5$. The contours show lines of constant completeness. From left to right, the lines represent 30, 50, 80 and 99\% completeness. The temperature and density of clusters at a given mass tend to increase with redshift, leading to the increased SZ flux and improved detectability of high-redshift clusters. The strong evolution below $z\sim0.3$ arises from reduced \ensuremath{\xi} \ on nearby clusters due to CMB confusion. Note that these contours are based on the fiducial simulations used in this work. Uncertainties in modeling (discussed in \S\ref{sec:scaling_relation}) can shift the position and shape of these contours coherently but significantly (of order $30\%$ in mass).
\\ } \label{fig:sz_completeness} \end{figure} The SZ sky was removed from simulations to estimate the rate of false positives in the SPT sample. Figure \ref{fig:false_rate} shows this contamination rate as a function of lower \ensuremath{\xi} \ threshold, averaged across the survey area. A $\ensuremath{\xi}~\geq~5$ threshold leads to approximately one false detection within the survey area. To test for biases introduced by an SZ background composed of low-mass systems, a simulation was run including only SZ sources well below the SPT threshold, with masses $M < 10^{14}\,\ensuremath{M_\odot} h^{-1}$. This background was found to have negligible effect on the detection rate as compared to the SZ-free false detection simulation. \begin{figure}[] \includegraphics[scale=0.88]{false_rate.pdf} \caption[]{Simulated false detection rate, averaged across the survey area. The left axis shows the number density of false detections above a given \ensuremath{\xi}; the right axis shows the equivalent number of false detections within the combined $5^\mathrm{h}$ \ and $23^\mathrm{h}$ \ survey fields. The dotted lines show the $\ensuremath{\xi}\geq5$ threshold applied to the catalog, and the false detection rate at that threshold, $\sim1.2$ across the full survey area.\\} \label{fig:false_rate} \end{figure} \subsection{Mass Scaling Relation} \label{sec:scaling_relation} As discussed in \S\ref{sec:sz_flux}, the integrated SZ flux $Y$ is poorly estimated in this analysis and so is not used as a mass proxy. However, the noise $\sigma_{ijk}$ measured in each elevation strip is relatively even across the SPT maps, so it is possible to work in the native space of the SPT selection function and use detection significance \ensuremath{\xi} \ as a proxy for mass. Additional uncertainty and bias introduced by use of such a relation (in place of, for example, a $Y$-based scaling relation) are small compared to the Poisson noise of the sample and the uncertainties in modeling intra-cluster physics. The steepness of the cluster mass function in the presence of noise will result in a number of detections that have boosted significance. Explicitly, \ensuremath{\xi} \ is a biased estimator for $\langle\ensuremath{\xi}\rangle$, the average detection significance of a given cluster across many noise realizations. An additional bias on \ensuremath{\xi} \ comes from the choice to maximize signal-to-noise across three free parameters, R.A., decl., and $\theta_c$. These biases make the relation between \ensuremath{\xi} \ and mass complex and difficult to characterize. In order to produce a mass scaling relation with a simple form, the unbiased significance \ensuremath{\zeta} \ is introduced. It is defined as the average detection signal-to-noise of a simulated cluster, measured across many noise realizations, evaluated at the preferred position and filter scale of that cluster as determined by fitting the cluster in the absence of noise. Relating \ensuremath{\zeta} \ and \ensuremath{\xi} \ is a two-step process. The expected relation between \ensuremath{\zeta} \ and $\langle\ensuremath{\xi}\rangle$ is derived and compared to simulated observations in Appendix \ref{app:nsn_deriv}, and found to be $\ensuremath{\zeta} = \sqrt{\langle\ensuremath{\xi}\rangle^2-3}$. Given a known $\langle\ensuremath{\xi}\rangle$, the expected distribution in \ensuremath{\xi} \ is derived by convolution with a Gaussian of unit width.
The relation between \ensuremath{\zeta} \ and $\langle\ensuremath{\xi}\rangle$ is taken to be exact, and was verified through simulations to introduce negligible additional scatter; i.e., the scatter in the \ensuremath{\zeta}-\ensuremath{\xi} \ relation is the same as the scatter in the $\langle\ensuremath{\xi}\rangle$-\ensuremath{\xi} \ relation, namely a Gaussian of unit width. The scaling between $\ensuremath{\zeta}$ and $M$ is assumed to take the form of power-law relations with both mass and redshift: \begin{equation} \label{eq:mass_scaling} \ensuremath{\zeta} = A \left(\frac{M}{5\times10^{14}\,\ensuremath{M_\odot} h^{-1}}\right)^B \left(\frac{1+z}{1.6}\right)^C, \end{equation} parameterized by the normalization $A$, the slope $B$, and the redshift evolution $C$. Appendix \ref{app:szscaling} presents a physical argument for the form of this relation, along with the ranges in which the values of the parameters $B$ and $C$ are expected to reside based on self-similar scaling arguments. Values for the parameters $A$, $B$, and $C$ were determined by fitting Eq.~\ref{eq:mass_scaling} to a catalog of $\ensuremath{\zeta} > 1$ clusters detected in simulated maps, using clusters with mass $M > 2\times 10^{14}\,\ensuremath{M_\odot} h^{-1}$ and in the redshift range $0.3 \leq z \leq 1.2$. This redshift range was chosen to match the SPT sample, while the mass limit was chosen to be as low as possible without the sample being significantly cut off by the $\ensuremath{\zeta} > 1$ threshold. The best fit was defined as the combination of parameters that minimized the intrinsic fractional scatter around the mean relation. Figure \ref{fig:mass_scaling} shows the best-fit scaling relation obtained for our fiducial simulated SZ maps, where $A=6.01$, $B=1.31$, and $C=1.6$. The intrinsic scatter was measured to be 21\% (0.21 in $\ln(\ensuremath{\zeta})$) and the relation was found to adhere to a power law well below the limiting mass threshold. Over the three gas model realizations (\S\ref{sec:sims}), the best-fit values of $A$ and $B$ and the intrinsic scatter were all found to vary by less than 10\%, while the values of $C$ predicted by the `standard' and `star-formation' models drop to $\sim1.2$. For maps generated using the electron pressure profile of \citet{arnaud10}, best-fit values of $A=6.89$, $B=1.38$, $C = 0.6$ were found, with a 19\% intrinsic scatter. The values of $A$ and $B$ remain within 15\% of the fiducial model, although $C$ is significantly lower. \citet{arnaud10} measured the pressure profile using a low-redshift ($z<0.2$) cluster sample and assumed that the profile normalization will evolve in a self-similar fashion. The mass dependence of their pressure profile was determined using cluster mass estimates derived from the equation of hydrostatic equilibrium; simulations suggest that this method may underestimate the true mass by $10$--$20\%$ \citep{rasia04, meneghetti10, lau09}. We do not take this effect into account in our simulations -- doing so would reduce the value of $A$ by approximately 10\%. The \citet{bode09} gas model is calibrated against X-ray scaling relations measured from low-redshift cluster samples \citep{vikhlinin06, sun09}, but assumes an evolving stellar-mass fraction which may drive the stronger redshift evolution.
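The pieces of this significance-mass mapping are simple to evaluate; the sketch below collects Eq.~\ref{eq:mass_scaling} and the $\ensuremath{\zeta} = \sqrt{\langle\ensuremath{\xi}\rangle^2-3}$ relation using the fiducial best-fit values quoted above. The inversion in the last function is a naive point estimate only; the debiased mass estimates are treated in Appendix \ref{app:mass_estim}.
\begin{verbatim}
import numpy as np

A, B, C = 6.01, 1.31, 1.60  # fiducial best-fit values quoted above

def zeta_of_mass(M_msun_h, z):
    # Eq. (1): zeta = A (M / 5e14 Msun/h)^B ((1 + z) / 1.6)^C.
    return A * (M_msun_h / 5e14)**B * ((1.0 + z) / 1.6)**C

def zeta_of_xi_mean(xi_mean):
    # Unbiased significance from the ensemble-averaged significance:
    # zeta = sqrt(<xi>^2 - 3), valid for <xi>^2 > 3.
    return np.sqrt(np.asarray(xi_mean)**2 - 3.0)

def mass_of_zeta(zeta, z):
    # Naive inversion of Eq. (1); a point estimate only, with no
    # correction for noise bias, intrinsic scatter, or the steep
    # mass function.
    return 5e14 * (zeta / (A * ((1.0 + z) / 1.6)**C))**(1.0 / B)

# Example: treating an observed xi = 9.4 at z = 0.3 as <xi> gives
# mass_of_zeta(zeta_of_xi_mean(9.4), 0.3) ~ 9e14 Msun/h before debiasing.
\end{verbatim}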
Based on these simulations, priors on the scaling relation parameters ($A$, $B$, $C$, scatter) were adopted, with conservative $1\,\sigma$ Gaussian uncertainties of (30\%, 20\%, 50\%, 20\%) about mean values measured from the fiducial simulation model. These large uncertainties in scaling relation parameters are the dominant source of uncertainty in the cosmological analysis (\S\ref{sec:cosmology}) and mass estimation (Appendix \ref{app:szscaling}). Furthermore, although the weakest prior is on the redshift evolution, $C$, it is the uncertainty on the amplitude $A$ that dominates the error budget on the measurement of $\sigma_8$ (see \S\ref{sec:low_sz}). \begin{figure}[] \includegraphics[scale=0.8]{scaling_relation.pdf} \caption[]{Mass-significance relation plotted over clusters identified in simulated maps. The relation was fit to points with $M>2\times10^{14}\,\ensuremath{M_\odot} h^{-1}$, shown by the dotted line, and across a redshift range $0.3<z<1.2$. Simulated clusters outside this redshift range are not included in this plot. The approximate lower mass threshold of the high-redshift end of the SPT sample ($M=4\times10^{14}\,\ensuremath{M_\odot} h^{-1}$) is shown by the dashed line.\\} \label{fig:mass_scaling} \end{figure} It should be noted that at low redshift ($z\lesssim0.3$), such a power-law scaling relation fails to fully capture the behavior of the CMB-confused selection function. The cosmological analysis below therefore excludes this region during likelihood calculation. The mass estimates presented in Appendix \ref{app:mass_estim} may be biased low for low-redshift objects, although this effect is expected to be small compared to existing systematic errors. \section{Cosmological Analysis} \label{sec:cosmology} The 2008 SPT cluster catalog is an SZ-detection-significance-limited catalog. Simulated maps were used to calibrate the statistics of the relation between cluster mass and detection significance, as well as the impact of noise-bias and selection effects. This relation was combined with theoretical mass functions to construct estimates of the number density of galaxy clusters as a function of the significance \ensuremath{\xi} \ and redshift, to be compared to the SPT catalog. Cosmological information from the SPT cluster catalog was combined with information from existing data sets, providing improved parameter constraints. \subsection{Cosmological Likelihood Evaluation} \label{sec:likelihood} Evaluation of cosmological models in the context of the SPT catalog requires a theoretical model that is capable of predicting the number density of dark matter halos as a function of both redshift and input cosmology. For a given set of cosmological parameters, the simulation-based mass function of \citet{tinker08} was used in conjunction with matter power spectra computed by CAMB \citep{lewis00} to construct a grid of cluster number densities in the native $\ensuremath{\xi}$-z space of the SPT catalog: \begin{itemize} \item A 2D grid of the number of clusters as a function of redshift and mass was constructed by multiplying the \citet{tinker08} mass function by the comoving volume element. The gridding was set to be very fine in both mass and redshift, with $\Delta z=0.01$ and the mass binning set so that $\Delta \ensuremath{\zeta}=0.0025$ (see below). The grids were constructed to extend beyond the sensitivity range of SPT, $0.1 < z < 2.6$ and $1.8 < \ensuremath{\zeta} < 23$. 
Extending the upper limits was found not to impact cosmological results, as predicted number counts have dropped to negligible levels above those thresholds. \item The parameterized scaling relation (\S\ref{sec:scaling_relation}) was used to convert the mass for each bin to unbiased significance \ensuremath{\zeta} \ for assumed values of $A$, $B$, and $C$. \item This grid of number counts (in \ensuremath{\zeta}$-z$ space) was convolved with a Gaussian in ln(\ensuremath{\zeta}) with width set by the assumed intrinsic scatter in the scaling relation ($21\%$ in the fiducial relation). \item The unbiased significance $\ensuremath{\zeta}$ of each bin was converted to an ensemble-averaged significance $\langle \ensuremath{\xi} \rangle$. \item This grid was convolved with a unit-width Gaussian in \ensuremath{\xi} \ to account for noise, yielding a grid in the native SPT catalog space, \ensuremath{\xi}$-z$. \item Each row (fixed \ensuremath{\xi}) of the \ensuremath{\xi}$-z$ grid which contained a cluster was convolved with a Gaussian with width set by the redshift uncertainty for that cluster. Photometric redshift uncertainties are given in Table \ref{tab:catalog}, and are described briefly in \S\ref{sec:optical} and in detail in \cite{high10}; spectroscopic redshifts were taken to be exact. \item A hard cut in \ensuremath{\xi} \ was applied, corresponding to the catalog selection threshold of $\ensuremath{\xi} \geq 5$. \item An additional cut was applied, requiring $z\geq0.3$, to avoid low-redshift regions where the power-law scaling relation fails to capture the behavior of the CMB-confused selection function. This cut excludes three low-redshift clusters from the cosmological analysis, leaving 18 clusters plus the unconfirmed candidate, whose treatment is described below. \end{itemize} The likelihood ratio for the SPT catalog was then constructed, as outlined in \citet{cash79}, using the Poisson probability, $$ \mathcal{L}=\prod_{i=1}^N P_i=\prod_{i=1}^N \frac{e_i^{n_i}e^{-e_i}}{n_i !}, $$ where the product is across bins in \ensuremath{\xi}$-z$ space, $N$ is the number of bins, $P_i$ is the Poisson probability in bin $i$, and $e_i$ and $n_i$ are the fractional expected and integer observed number counts for that bin, respectively. The unconfirmed candidate was accounted for by simultaneously allowing it to be either at high redshift or a false detection. Its contribution to the total likelihood was calculated as the union of the likelihoods for $n=0$ and $n=1$ within a large $z>1.0$ bin, the redshift range corresponding to where the optical completeness for this candidate's follow-up deviates from unity. Ultimately, two sources of mass-observable scatter -- the intrinsic scatter in the scaling relation, and the $1\,\sigma$ measurement noise -- were included in this analysis, along with redshift errors and systematic uncertainties on scaling relation parameters. Other sources of bias and noise (such as point source contamination, \S\ref{sec:corr_ps}, and the mass function normalization described below) are thought to be subdominant to these and were disregarded. While \citet{tinker08} claim a very small ($<5\%$) uncertainty in the mass function normalization, \citet{stanek10} have demonstrated that the inclusion of non-gravitational baryon physics in cosmological simulations can modify cluster masses in the range of the SPT sample by $\sim \pm10\%$ relative to gravity-only hydrodynamical simulations.
The large $30\%$ uncertainty on the amplitude $A$ of the scaling relation effectively subsumes such uncertainties in the mass function normalization. This analysis does not account for the effects of sample variance \citep{hu03a}; for the mass and redshift range of the SPT sample, this is not expected to be a problem \citep{hu06}. The SPT survey fields span of order 100 $h^{-1}$ Mpc, where the galaxy cluster correlation function would be expected to be a few percent or less \citep{bahcall03, estrada09}. This leads to clustering corrections to the uncertainty on number counts on the order of a few percent or less of the Poisson variance. \subsection{Application to MCMC Chains} \label{sec:mcmc} The present SPT sample only meaningfully constrains a subset of cosmological parameters, so to explore the cosmological implications of the SPT catalog it is necessary to include information from other experiments. Existing analyses of other cosmological data in the form of Markov Chain Monte Carlo (MCMC) chains provide fully informative priors. These were importance sampled by weighting each set of cosmological parameters in the MCMC chain by the likelihood of the SPT cluster catalog given that set of parameters. In this analysis, four MCMC chains were used to explore constraints on parameters: the first two use only the 7-year data set from the WMAP experiment to explore the standard spatially flat \mbox{$\Lambda$CDM} \ and \mbox{wCDM} \ cosmologies, while the third explores \mbox{wCDM} \ while adding data from baryon acoustic oscillations (BAO) \citep{percival10} and supernovae (SNe) \citep{hicken09}. These three chains were taken from the official WMAP analysis\footnote{Chains available at http://lambda.gsfc.nasa.gov} \citep{komatsu10}. The fourth chain was computed by L10 and allows for a direct comparison with that work. It explores a spatially flat \mbox{$\Lambda$CDM} \ parameter space based on the ``CMBall'' data set: WMAP 5-year + QUaD \citep{brown09} + ACBAR \citep{reichardt09a} + SPT (power spectrum measurements with $A_{SZ}$ as a free parameter; L10). \begin{figure}[h] \centering \includegraphics[scale=0.45]{wcdm_sanity.pdf} \caption[]{The SPT catalog, binned into 3 redshift bins (z=0.1-0.5, 0.5-0.9, 0.9-1.3), with number counts derived from 100 randomly selected points in the WMAP7 \mbox{wCDM} \ MCMC chain overplotted. The SPT data are well covered by the chain and provide improved constraining power. The unconfirmed candidate is not included in this plot, and the binning is much coarser for display purposes than that used in the likelihood calculation (\S\ref{sec:likelihood}).\\} \label{fig:sanity_plot} \end{figure} Figure \ref{fig:sanity_plot} shows the number density of clusters in the SPT catalog, plotted over theoretical predictions calculated using the method described in \S\ref{sec:likelihood}, for 100 random positions in the WMAP7-only MCMC chain. The SPT data are adequately described by many cosmological models that are allowed by this data set, and the MCMC chains are well-sampled within the region of high probability. Uncertainties in the scaling relation parameters were accounted for by marginalizing over them: at each step in the chain, the likelihood was maximized across $A$, $B$, $C$ and scatter, subject to the priors applied to each parameter, using a Newton-Raphson method. The parameter values selected in this way at the highest likelihood point in each MCMC chain are given in Table \ref{tab:marg_scaling_params}. 
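The numerical core of this procedure is compact; the sketch below shows the Cash-style Poisson log-likelihood over grid cells and the importance re-weighting of chain samples, with the grid construction and the Newton-Raphson maximization over scaling-relation parameters omitted. Array names and shapes are hypothetical.
\begin{verbatim}
import numpy as np

def cash_loglike(expected, observed):
    # Poisson log-likelihood over xi-z grid cells (Cash 1979).  The
    # ln(n_i!) term is constant for a fixed catalog and is dropped,
    # since only likelihood ratios between parameter sets matter.
    expected = np.clip(expected, 1e-300, None)  # guard log(0) in empty cells
    return np.sum(observed * np.log(expected) - expected)

def importance_weights(loglikes):
    # Re-weight MCMC samples by the SPT catalog likelihood, subtracting
    # the maximum before exponentiating for numerical stability.
    w = np.exp(loglikes - np.max(loglikes))
    return w / np.sum(w)

# Sketch of use: for each chain sample, build the expected-counts grid
# (mass function -> zeta via the scaling relation -> lognormal intrinsic
# scatter -> <xi> -> unit-width Gaussian noise -> xi >= 5 and z >= 0.3
# cuts), evaluate cash_loglike against the binned catalog, then form
# weighted parameter estimates with importance_weights.
\end{verbatim}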
The fiducial values of $B$ and the scatter appear consistent with those preferred by the chains, while the preferred values of the normalization $A$ and redshift evolution $C$ are both approximately 10\% lower than their fiducial values. This weaker-than-fiducial redshift evolution could come from a variety of sources, and is consistent with other simulated models, e.g., the `standard' and `star-formation' models (see \S\ref{sec:scaling_relation}). Uncertainties in the redshift evolution are not a significant source of error in this analysis: recovered parameter values and uncertainties (\S\ref{sec:cosmo_results}) are found to be essentially unaffected by widely varying priors on $C$. \subsection{Cosmological Parameter Constraints} \label{sec:cosmo_results} \begin{figure*}[]\centering \includegraphics[scale=0.75]{cont_s8w.pdf} \caption[]{Likelihood contour plot of $w$ versus $\sigma_8$ showing $1\,\sigma$ and $2\,\sigma$ contours for several data sets. The left panel shows the constraints from WMAP7 alone (blue) and with the SPT cluster catalog included (red). The right panel shows the full cosmological data set of WMAP7+SN+BAO (blue), and this plus the SPT catalog (red). The ability to constrain cosmological parameters is severely impacted by the uncertainties in the mass scaling relation, though some increase in precision is still evident.\\} \label{fig:cont_s8w} \end{figure*} \begin{table*}[] \begin{minipage}{\textwidth} \centering \caption{Cosmological Parameter Constraints} \small \begin{tabular}{l cc} \hline\hline \rule[-2mm]{0mm}{6mm} Chain & $\sigma_8$ & $w$ \\ \hline \mbox{$\Lambda$CDM} \ WMAP7 & $0.801\pm0.030$ & $-1$ \\ \mbox{$\Lambda$CDM} \ WMAP7+SPT & $0.791\pm0.027$ & $-1$ \\ \\ \mbox{$\Lambda$CDM} \ CMBall & $0.794\pm0.029$ & $-1$ \\ \mbox{$\Lambda$CDM} \ CMBall+SPT & $0.788\pm0.026$ & $-1$ \\ \\ \mbox{wCDM} \ WMAP7 & $0.832\pm0.134$ & $-1.118\pm0.394$ \\ \mbox{wCDM} \ WMAP7+SPT & $0.810\pm0.090$ & $-1.066\pm0.288$ \\ \\ \mbox{wCDM} \ WMAP7+BAO+SNe & $0.802\pm0.038$ & $-0.980\pm0.053$ \\ \mbox{wCDM} \ WMAP7+BAO+SNe+SPT & $0.790\pm0.034$ & $-0.968\pm0.049$ \\ \hline \end{tabular} \label{tab:cosmo_params} \tablecomments{Mean values and symmetrized $1\,\sigma$ range for $\sigma_8$ and $w$, as found from each of the four data sets considered, shown with and without the weighting by likelihoods derived from the SPT cluster catalog. The parameter best constrained by the SPT cluster catalog is $\sigma_8$. CMB power spectrum measurements alone have a large degeneracy between the dark energy equation of state, $w$, and $\sigma_8$. Adding the SPT cluster catalog breaks this degeneracy and leads to an improved constraint on $w$. The SPT catalog has negligible effect on other parameters in these chains ($\Omega_b h^2$, $\Omega_c h^2$, $H_0$, $\tau$ and $n_s$). } \end{minipage} \end{table*} The resulting constraints on $\sigma_8$ and $w$ are given for all chains in Table \ref{tab:cosmo_params}. The parameter best constrained by the SPT cluster catalog is $\sigma_8$. CMB power spectrum measurements alone have a large degeneracy between the dark energy equation of state, $w$, and $\sigma_8$. Figure \ref{fig:cont_s8w} shows this degeneracy, along with the added constraints from the SPT cluster catalog. Including the cluster results tightens the $\sigma_8$ contours and leads to an improved constraint on $w$.
This is a growth-based determination of the dark energy equation of state, and is therefore complementary to dark energy measurements based on distances, such as those based on SNe and BAO. When combined with the \mbox{wCDM} \ WMAP7 chain, the SPT data provide roughly a factor of 1.5 improvement in the precision of $\sigma_8$ and $w$, finding $0.81 \pm 0.09$ and $-1.07 \pm 0.29$, respectively. Including data from BAO and SNe, these constraints tighten to $\sigma_8 = 0.79 \pm 0.03$ and $w = -0.97 \pm 0.05$. The dominant sources of uncertainty limiting these constraints are the Poisson error due to the relatively modest size of the current catalog and the uncertainty in the normalization $A$ of the mass scaling relation. With weak-lensing- and X-ray-derived mass estimates of SPT clusters, along with an order of magnitude larger sample expected from the full survey, cosmological constraints from the SPT galaxy cluster survey will markedly improve.

\subsection{Amplitude of the SZ Effect}
\label{sec:low_sz}

The value of the normalization parameter $A$ (which can be thought of as an ``SZ amplitude'') preferred by the likelihood analysis was found to be lower than the fiducial value, as shown in Figure \ref{fig:s8A_prior}. The prior assumed on this parameter is sufficiently large that it is not a highly significant shift; however, in light of the recent report by L10 of lower-than-expected SZ flux, it is worth addressing. The SPT cluster catalog results are complementary to the results of the power spectrum analysis, in that the majority of the SZ power at the angular scales probed by L10 comes from clusters below the mass threshold of the cluster catalog. Figure \ref{fig:s8A_prior} shows that the amplitude $A$ is strongly degenerate with $\sigma_8$. The constraints provided by the SPT cluster catalog indicate either a value of $\sigma_8$ that is at the low end of the CMB-allowed distribution (or equivalently an erroneously high mass function normalization), or an over-prediction of SZ flux by the fiducial simulations. If the fiducial amplitude is assumed, the best-fit $\sigma_8$ drops from the WMAP5+CMBall value of $0.794\pm0.029$ to $0.775 \pm 0.015$. This value is anomalously low compared to recent results \citep[e.g.][]{vikhlinin09, mantz10b}, and in slight tension with the results of the power spectrum analysis of L10, where a still lower value of $\sigma_8=0.746 \pm 0.017$ was obtained for similar simulation models.\footnote{ The fiducial thermal SZ simulation model used in this paper predicts a power spectrum that is in very close agreement with the fiducial model of L10, which was measured from the simulations of \citet{sehgal10}.} The SZ amplitude parameter used in L10, $A_{sz}$, is roughly analogous to $A^2$ in the current notation. When including the expected contribution from homogeneous reionization, L10 found $A_{sz}=0.42 \pm 0.21$, in mild tension (at the $\sim 1 \sigma$ level) with the marginalized value of $(A/A_{fid})^2 = 0.79 \pm 0.30$ found in this analysis. The fiducial simulations in this work use the semi-analytic gas model of \citet{bode07,bode09}, which is calibrated against low-redshift ($z<0.25$) X-ray observations but has not previously been compared to higher redshift systems.
One interpretation of these results is that this model may over-predict the thermal electron pressure in high-redshift ($z>0.3$) systems; this is not in conflict with the low-redshift calibration of the model and suggests a weaker redshift evolution in the SZ signal than predicted by the model. Alternatively, a combination of mass function normalization and point source contamination could potentially account for the difference.

\begin{figure}[] \includegraphics[scale=0.45]{sigma8_scale_A.pdf} \caption[]{Degeneracy between $\sigma_8$ and SZ scaling relation amplitude $A$, plotted without prior (green) and with a 30\% Gaussian prior (red) on $A$, for the \mbox{$\Lambda$CDM} \ WMAP5+CMBall MCMC chain. The Gaussian prior is shown ($\pm1\,\sigma$) by the gray band, with the fiducial relation amplitude shown by the blue line. This figure is analogous to Fig. 9 of L10, although that work dealt with SZ power, which is roughly proportional to the square of the amplitude being considered here. The prior is slightly higher than the preferred value; these results suggest that simulations may over-estimate the SZ flux in the high-mass, high-redshift systems contained in this catalog.\\} \label{fig:s8A_prior} \end{figure}

\section{Sources of Systematic Uncertainties}
\label{sec:systematics}

There are several systematic effects that might affect the utility of the SPT cluster sample. For example, there remains large uncertainty in the mapping between detection significance and cluster mass. It is also possible that strong correlations (or anti-correlations) between galaxy clusters and mm-bright point sources are significant. We address these issues in this section.

\subsection{Relation between SZ signal and Mass}

Theoretical arguments \citep{barbosa96, holder01a, motl05} suggest that the SZ flux of galaxy clusters is relatively well understood. However, there is very little high-precision empirical evidence to confirm these arguments, and there are physical mechanisms that could lead to suppressed SZ flux, such as non-thermal pressure support from turbulence \citep{lau09} or non-equilibrium between protons and electrons \citep{fox97, rudd09}. Cluster SZ mass proxies (such as $Y$ and $y_0$, the integrated SZ flux and amplitude of the SZ decrement, respectively) depend linearly on the gas fraction and the gas temperature. There remain theoretical and observational uncertainties in both of these quantities. Estimates of gas fractions for individual clusters can disagree by nearly 20\% \citep[e.g.,][]{allen08, vikhlinin06}, while theoretical and observed estimates of the mass-temperature relation currently agree at the level of 10-20\% \citep{nagai07}. Adding these in quadrature leads to uncertainties slightly below our assumed prior uncertainty of 30\%. With the number counts as a function of mass, $dN/d\ln M$, scaling as $M^{-2}$ or $M^{-3}$ for typical SPT clusters \citep{shaw10a}, a $10\%$ offset in mass would lead to a 20-30\% shift in the number of galaxy clusters. With a catalog of 22 clusters, counting statistics lead to an uncertainty of at least 20\%. Therefore, systematic offsets in the mass scale of order 10\% will have a significant effect on cosmological constraints, and the current $30\%$ prior on $A$ will dominate Poisson errors. A follow-up campaign using optical and X-ray observations will buttress our current theory/simulation-driven understanding of the SPT SZ-selected galaxy cluster catalog.
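As an illustrative order-of-magnitude check (ours, not part of the original analysis): for cumulative counts $N(>M)\propto M^{-\alpha}$ with $\alpha \simeq 2$--$3$, a systematic mass offset propagates as
\[
\frac{\Delta N}{N} \simeq \alpha\, \frac{\Delta M}{M} \approx (2\mbox{--}3) \times 10\% = 20\mbox{--}30\% ,
\qquad
\sigma_{\rm Poisson} \simeq \frac{1}{\sqrt{22}} \approx 21\% ,
\]
which is the comparison underlying the statements above.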
\subsection{Clusters Obscured by Point Sources}
\label{sec:corr_ps}

The sky density of bright point sources at $150\,$GHz is low enough --- on the order of 1 $\mathrm{deg}^{-2}$ \citep{vieira10} --- that the probability of a galaxy cluster being missed due to a chance superposition with a bright source is negligible. However, sources associated with clusters will preferentially fill in cluster SZ decrements. Characterizing the contamination of cluster SZ measurements by member galaxies will be necessary to realize the full potential of the upcoming much larger SPT cluster catalog, but the systematic uncertainty predicted here and in the literature is well below the statistical precision of the current sample; it is disregarded in the current cosmological analysis (\S\ref{sec:cosmology}).

\subsubsection{Dusty Source Contamination}

Star formation is expected to be suppressed in cluster environments \citep[e.g.,][]{hashimoto98}. \citet{bai07} measure the abundance of infrared-luminous star-forming galaxies in a massive ($\gtrsim 10^{15} \ensuremath{M_\odot}$) cluster at $z=0.8$ to be far lower relative to the field abundance than a simple mass scaling would predict: the cluster volume that is hundreds of times overdense in mass is only 20 times overdense in infrared luminosity. A sphere at $z=1$ with a 1 Mpc radius and infrared luminosity that is 20 times larger than the field would produce $<0.1\,$mJy of emission at $150\,$GHz, according to the sub-mm luminosity measurements of BLAST \citep{pascale09}. Even if the IR overdensity evolves strongly with mass and redshift, we can expect $\ll1\,$mJy of contamination for the highest-redshift ($z \sim 1$) clusters at the SPT mass threshold. This corresponds to $\ll10\%$ of the cluster SZ signal, which is far less than the uncertainty in the normalization of cluster masses presented in this work. Additionally, \citet[][in prep.]{keisler10} measures the average $100$~$\mu \mathrm{m}$ flux of cluster members from a sample of clusters at $\langle z\rangle=0.2$ and with masses similar to those selected by SPT and, after extrapolating to $150\,$GHz and allowing for strong redshift evolution in the infrared luminosity function, constrains this contamination to be less than $10\%$ of the cluster SZ signal. Again, this level of contamination is subdominant to the uncertainty in the normalization of cluster masses presented in this work.

\subsubsection{Gravitational Lensing}

Galaxy clusters can gravitationally lens sources located behind them. Because gravitational lensing conserves surface brightness, this process cannot alter the mean flux due to the background sources when averaged over many clusters. The background of sources is composed of both overdensities and underdensities, leading to both positive and negative fluctuations, relative to the mean, which will be gravitationally lensed. We do not explicitly account for this effect in this work. The unlensed fluctuating background of sources at $150\,$GHz is expected to be small \citep{hall10} compared to both the experimental noise and intrinsic scatter on the mass scaling relation, and lensing only marginally increases the noise associated with these background sources \citep{lima10}. Within the context of the cosmological analysis, this additional noise term is expected to be small compared to the intrinsic scatter on the mass scaling relation.
\subsubsection{Radio Source Contamination}

Galaxy clusters are known to host radio sources, but these correlated sources are not expected to be a major contaminant at $150\,$GHz. Calculations \citep{lin09} and explicit simulations \citep{sehgal10} demonstrate that, even taking into account the expected correlation between clusters and radio sources, these sources are not expected to significantly affect the SZ flux in more than 1\% of galaxy clusters above $2 \times 10^{14} M_\odot$ at a redshift of $z \sim 0.5$ (where ``significantly'' here means at the $\ge 20\%$ level). Simulations were also performed using knowledge of the radio source population at $150\,$GHz from \citet{vieira10} and the cluster profiles that maximize the significance for the SPT clusters presented here. Each profile between $\mbox{$r_\mathrm{core}$} = 0.25^\prime$ and $\mbox{$r_\mathrm{core}$} = 1.5^\prime$ (a range which encompasses all of the optically confirmed clusters in Table \ref{tab:catalog}) was scaled so that the filtered version of that profile would result in a $\ensuremath{\xi}=5$ detection in the 2008 SPT maps. Point sources of a given flux were then added at a given radius from the profile center. These point-source-contaminated profiles were then convolved with the transfer function, the matched filter was applied, and the resulting central value was compared to the central value of the filter-convolved, uncontaminated profile. Clusters were found to suffer a systematic $\Delta\ensuremath{\xi} = 1$ reduction in significance from a $2\,$mJy ($5\,$mJy) source at $0.5^\prime$ ($1^\prime$) from the profile center. This effect is nearly independent of core radius in the range of core radii probed. The \citet{vieira10} radio source counts at $150\,$GHz indicate roughly $1.5$ per \ensuremath{\mathrm{deg}^2} \ above $5\,$mJy, while the \citet{dezotti05} $150\,$GHz model predicts roughly $3$ radio sources per \ensuremath{\mathrm{deg}^2} \ above $2\,$mJy.\footnote{The \citet{vieira10} counts do not cover a low enough flux range to predict counts at $2\,$mJy, but the \citet{dezotti05} model is consistent with the \citet{vieira10} counts at all fluxes above $5\,$mJy, so counts from this model can confidently be extrapolated down a factor of $2.5$ in flux.} If there were no correlation between clusters and radio sources, the clusters contained in the SPT catalog should have a $0.14\%$ $(0.03\%)$ chance of incurring an error of $\Delta\ensuremath{\xi}\geq1$ from a $\ge 5\,$mJy ($2-5\,$mJy) source. Furthermore, using $30\,$GHz observations of a sample of clusters ranging from $0.14 < z < 1.0$, \citet{coble07} measure the probability of finding a radio source near a cluster to be $8.9^{+4.3}_{-2.8}$ ($3.3^{+4.1}_{-1.8}$) times the background rate when using a $0.5^\prime$ ($5^\prime$) radius. From these results, it can be estimated that roughly $1\%$ of SPT-detected clusters would suffer an error of $\Delta\ensuremath{\xi}\geq1$ from radio source contamination. This is in very close agreement with the predictions from \citet{lin09} and \citet{sehgal10}.
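The quoted chance-superposition probabilities follow from simple Poisson statistics. A short script (our illustration, using the source densities quoted above) reproduces them to within rounding:

\begin{verbatim}
import numpy as np

# Probability that a randomly placed source falls within radius_arcmin
# of a given cluster position, for a Poisson field of surface density
# n_per_deg2 (valid while the product is << 1).
def chance_superposition(n_per_deg2, radius_arcmin):
    return n_per_deg2 * np.pi * (radius_arcmin / 60.0) ** 2

p_bright = chance_superposition(1.5, 1.0)        # >= 5 mJy within 1'
p_faint  = chance_superposition(3.0 - 1.5, 0.5)  # 2-5 mJy within 0.5'
print(f"{p_bright:.2%}  {p_faint:.2%}")  # ~0.13% and ~0.03%, cf. the
                                         # quoted 0.14% and 0.03%
\end{verbatim}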
\section{Discussion}
\label{sec:discussion}

We have presented the first cosmologically significant SZ-selected galaxy cluster catalog, characterized the selection function, and performed a preliminary cosmological analysis to both demonstrate the general consistency of the catalog with current understanding of cosmology and provide improved constraints on cosmological parameters. This is an important step toward exploiting the potential of SZ-selected galaxy clusters as a powerful cosmological tool. Using single-frequency data taken in 2008 with SPT, a total of 22 candidates were identified, all but one of which were optically confirmed as galaxy clusters \citep{high10}. Of these 21 clusters, three were previously known from optical and/or X-ray surveys, three were new SPT detections reported in S09, three were first identified from BCS data by \citet{menanteau10}, and 12 are new discoveries. Simulations were used to calibrate the selection function of the survey and measure a scaling between SPT detection significance and mass. These simulations indicate that SZ detection significance traces mass with little ($\sim20\%$) intrinsic scatter, making SZ surveys well suited to selecting mass-limited catalogs of galaxy clusters. As a demonstration of the constraining power of the survey, the SPT cluster catalog was used to refine estimates of cosmological parameters, including the dark energy equation of state, $w$, and the normalization of the matter power spectrum on small scales, $\sigma_8$. Using \mbox{wCDM} \ MCMC chains derived from the WMAP 7-year data combined with the SPT cluster catalog, the best-fit values were $w=-1.07 \pm 0.29$ and $\sigma_8=0.81 \pm 0.09$, a factor of roughly 1.5 improvement in precision compared to the WMAP7 constraints alone. When combined with other cosmological data sets (baryon acoustic oscillations and supernovae), the SPT cluster catalog improves precision on these parameters by $\sim 10\%$. These results can be compared to those of \citet{vikhlinin09} and \citet{mantz10b}, who performed a similar analysis using large samples of clusters drawn from X-ray surveys. The SPT results are less precise: in combination with various cosmological data sets, \citet{vikhlinin09} find nearly 4 times tighter constraints on $\sigma_8$, while \citet{mantz10b} are more precise by nearly a factor of 2. This is not surprising: both X-ray analyses had significantly larger cluster samples and smaller stated uncertainty in the mass scaling relation. The weaker parameter constraints found from the SPT cluster catalog are a direct result of uncertainties in the mass scaling relation, which derive from uncertainties in modeling intra-cluster physics. The fiducial thermal SZ simulation model assumed here was shown to produce some tension between the analysis presented here and contemporary cosmological results. This may be explained by a variety of factors. The value of $\sigma_8$ may be lower than currently favored, or equivalently the normalization of the \citet{tinker08} mass function may be erroneously high. Alternatively, current simulations, while reproducing low-redshift X-ray observations, may over-estimate SZ flux in higher redshift systems, implying missing physics in the semi-analytic gas modeling \citep[for example, a non-negligible amount of non-thermal pressure support at higher redshifts, ][]{shaw10b}. The observed SZ signal could potentially be contaminated by an increasing incidence of point-source emission at high redshift, although the arguments presented in \S\ref{sec:corr_ps} suggest point-source contamination is unlikely to be wholly responsible. L10 also found lower-than-anticipated power from the SPT measurement of the SZ power spectrum, consistent with many of these scenarios. A concerted, multi-wavelength program aimed at studying high redshift clusters should help to resolve these issues.
The SPT catalog presented here is based on less than 1/3 of the current data and roughly 1/10 of the full SPT survey. The large multifrequency SPT survey, combined with X-ray and/or weak lensing mass estimates of a subsample of SZ-selected galaxy clusters, should allow an order of magnitude improvement in the precision of $\sigma_8$ and $w$ measurements.

\acknowledgments The SPT team gratefully acknowledges the contributions to the design and construction of the telescope by S.\ Busetti, E.\ Chauvin, T.\ Hughes, P.\ Huntley, and E.\ Nichols and his team of iron workers. We also thank the National Science Foundation (NSF) Office of Polar Programs, the United States Antarctic Program and the Raytheon Polar Services Company for their support of the project. We are grateful for professional support from the staff of the South Pole station. We thank H.-M.\ Cho, T.\ Lanting, J.\ Leong, W.\ Lu, M.\ Runyan, D.\ Schwan, M.\ Sharp, and C.\ Greer for their early contributions to the SPT project and J.\ Joseph and C.\ Vu for their contributions to the electronics. We acknowledge S.\ Alam, W.\ Barkhouse, S.\ Bhattacharya, L.\ Buckley-Greer, S.\ Hansen, H.\ Lin, Y-T Lin, C.\ Smith and D.\ Tucker for their contribution to BCS data acquisition, and we acknowledge the DESDM team, which has developed the tools we used to process and calibrate the BCS data. We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA). Support for LAMBDA is provided by the NASA Office of Space Science. This research was facilitated in part by allocations of time on the COSMOS supercomputer at DAMTP in Cambridge, a UK-CCC facility supported by HEFCE and PPARC. This work is based in part on observations obtained at the Cerro Tololo Inter-American Observatory and the Las Campanas Observatory. CTIO is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation (NSF). The South Pole Telescope is supported by the National Science Foundation through grants ANT-0638937 and ANT-0130612. Partial support is also provided by the NSF Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics at the University of Chicago, the Kavli Foundation and the Gordon and Betty Moore Foundation. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work is supported in part by the Director, Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The McGill group acknowledges funding from the Natural Sciences and Engineering Research Council of Canada, the Quebec Fonds de recherche sur la nature et les technologies, and the Canadian Institute for Advanced Research. Partial support was provided by NSF grant MRI-0723073. The following individuals acknowledge additional support: A. Loehr and B. Stalder from the Brinson Foundation, B. Benson and K. Schaffer from KICP Fellowships, J. McMahon from a Fermi Fellowship, R. Foley from a Clay Fellowship, D. Marrone from Hubble Fellowship grant HF-51259.01-A, M. Brodwin from the Keck Foundation, Z. Staniszewski from a GAAN Fellowship, and A.T. Lee from the Miller Institute for Basic Research in Science, University of California Berkeley. Facilities: Blanco (MOSAIC II), Magellan:Baade (IMACS), Magellan:Clay (LDSS2)
\section{Introduction} \begin{figure*} \centering \plotone{./p1.eps} \caption{Simulated gas properties projected on top of \HI kinematic data by \citet{HibbardEtAl2001AJ} at the time of best match. Simulated gas particles are displayed in blue (NGC 4038) and red (NGC 4039). Yellow points represent the observational data. {\it Upper left panel:} Projected positions in the plane-of-the-sky (x$^\prime$-y$^\prime$ plane). {\it Upper right and lower left panel:} Declination (y$^\prime$) and Right Ascension (x$^\prime$) against line-of-sight velocity. Similarly to the observations, we apply a column density threshold of $N_\mathrm{gas} = 10^{20} \cm^{-2}$ to the simulated gas distribution.} \label{pic1:PV} \end{figure*} \label{Intro} In the local universe ($z < 0.3$) about $\sim5-10\%$ of all galaxies are interacting and merging (e.g. \citealp{2008ApJ...672..177L,2010ApJ...709.1067B}). Mass assembly via this mechanism was more important at earlier cosmic times when major mergers were more frequent \citep[e.g.][]{2002ApJ...565..208P,ConseliceEtAl2003AJ....126.1183C} and also more gas-rich \citep[e.g.][]{2010Natur.463..781T}. Major mergers dramatically affect the formation and evolution of galaxies. By inducing tidal torques they can efficiently transport gas to the centers of the galaxies \citep{BarnesHernquist1996ApJ, 2006MNRAS.372..839N}, trigger star formation \citep{Mihos&Hernquist1996ApJ, 2000MNRAS.312..859S, 2008MNRAS.384..386C}, feed super-massive black holes \citep{2005ApJ...630..705H, SpringelDiMatteoHernquist2005MNRAS, JohanssonEtAl2009ApJ} and convert spiral galaxies into intermediate-mass ellipticals \citep{1992ApJ...393..484B, NaabBurkert2003ApJ,2004AJ....128.2098R,2009ApJ...690.1452N}. The \object{Antennae} galaxies (NGC 4038/39) are the nearest and best-studied example of an on-going major merger of two gas-rich spiral galaxies. The system sports a beautiful pair of elongated tidal tails extending to a projected size of $\sim20 \arcmin$ (i.e. $106 \kpc$ at an assumed distance of 22 Mpc), together with two clearly visible, still distinct galactic disks. The latter has been assumed to be an indication of an early merger state, putting the system in the first place of the \citet{Toomre1977egsp.conf..401T} merger sequence of 11 prototypical mergers. Due to their proximity and the ample number of high-quality observations covering the spectrum from radio to X-ray \citep[e.g.][]{NeffUlvestad2000AJ, WangEtAl2004ApJS, WhitmoreEtAl1999AJ, 2005ApJ...619L..87H, ZezasEtAl2006ApJS..166..211Z} the Antennae provide an ideal laboratory for understanding the physics of merger-induced starbursts through comparison with high-resolution simulations. At the center of the Antennae galaxies, HST imaging has revealed a large number of bright young star clusters ($\gtrsim 1000$) which plausibly have formed in several bursts of star formation induced by the interaction \citep{WhitmoreEtAl1999AJ}. The spatial distribution and the age of these clusters are correlated: the youngest clusters are found in the overlap region ($\tau < 5 \Myr$), while the young starburst is generally located in the overlap and a ring-like configuration in the disk of NGC 4038 ($\tau \lesssim 30 \Myr$). An intermediate-age population ($\tau = 500-600 \Myr$) is distributed throughout the disk of NGC 4038 \citep{WhitmoreEtAl1999AJ,ZhangFallWhitmore2001ApJ}. 
Of particular interest is the spectacular nature of an extra-nuclear starburst observed in the dusty overlap region between the merging galactic disks \citep{MirabelEtAl1998A&A,WangEtAl2004ApJS}. The Antennae seem to be the only interacting system where an off-center starburst outshines the galactic nuclei in the mid-IR \citep{XuEtAl2000ApJ} and among only a few systems which show enhanced inter-nuclear gas concentrations \citep{1999ApJ...524..732T}. To date, this prominent feature has not been reproduced in any simulation of the Antennae system \citep[see][]{BarnesHibbard2009AJ}. Thus, the question remains whether this feature cannot be captured by current sub-grid modeling of star formation or whether the previous dynamical models (e.g. initial conditions) were not accurate enough. A first simulation of the Antennae galaxies was presented by \citet{Toomre&Toomre1972ApJ}, reproducing the correct trends in the morphology of the tidal tails. \citet{Barnes1988ApJ} repeated the analysis with a self-consistent multi-component model consisting of a bulge, disk and dark halo component. \citet{MihosBothunRichstone1993ApJ} included gas and star formation in their model and found the star formation to be concentrated in the nuclei of the disks, thus not reproducing the overlap star formation. In this Letter, we present the first high-resolution merger simulation of NGC 4038/39 with cosmologically motivated progenitor disk galaxy models. We are able to match the large-scale morphology and line-of-sight kinematics, as well as key aspects of the distribution and ages of newly formed stars at the center of the Antennae, as a direct consequence of the improved merger orbit.

\section{Simulations}
\label{modelling}

The simulation presented here is the best-fitting model of a larger parameter study and was performed using \Gadget2 \citep{Springel2005MNRAS}. We include primordial radiative cooling and a local extra-galactic UV background. Star formation and associated SNII feedback are modeled following the sub-grid multi-phase prescription of \citet{Springel&Hernquist2003MNRAS}, but excluding supernovae-driven galactic winds. For densities $n > n_\mathrm{crit} = 0.128 \cm^{-3}$ the ISM is treated as a two-phase medium with cold clouds embedded in pressure equilibrium in a hot ambient medium. We deploy a fiducial set of parameters governing the multi-phase feedback model resulting in a star formation rate (SFR) of $\sim1 \Msun \yr^{-1}$ for a Milky Way-type galaxy. We adopt a softened equation of state (EQS) with $q_\mathrm{EQS} = 0.5$, where the parameter $q_\mathrm{EQS}$ interpolates the star formation model between the full feedback model ($q_\mathrm{EQS}=1.0$) and an isothermal EQS with $T=10^{4} \ \rm K$ ($q_\mathrm{EQS}=0$) (see \citealp{SpringelDiMatteoHernquist2005MNRAS} for further details).
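Schematically (our paraphrase of the softened equation of state, with $P_{\rm SH}$ the effective pressure of the full multi-phase model and $P_{\rm iso}$ the pressure of an isothermal gas at $T=10^4\,\rm K$), the effective pressure is interpolated linearly between the two limits,
\[
P_{\rm eff}(\rho) = q_\mathrm{EQS}\, P_{\rm SH}(\rho) + \left(1-q_\mathrm{EQS}\right) P_{\rm iso}(\rho) \,,
\]
so that our choice $q_\mathrm{EQS}=0.5$ corresponds to an ISM stiffness halfway between the two.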
\begin{table} \caption{Model parameters of the best-fit merger configuration} \label{Tab:RunParameters} \centering \begin{tabular}{ c | c | c } \hline \hline Property & \object{NGC 4038} & \object{NGC 4039} \\ \hline $M_\mathrm{vir}$\footnote{Mass in $10^{10}\Msun$} & $55.2$ & $55.2$ \\ $M_{\mathrm{disk, stellar}}$ & $3.3$ & $3.3$ \\ $M_{\mathrm{disk, gas}}$ & $0.8$ & $0.8$ \\ $M_{\mathrm{bulge}}$ & $1.4$ & $1.4$ \\ $r_{\mathrm{disk}}$\footnote{Disk and bulge lengths ($r_{\mathrm{disk}}$, $r_{\mathrm{bulge}}$) and disk scale height ($z_0$) are given in $\kpc$} & $6.28$ & $4.12$ \\ $z_0$ & $1.26$ & $0.82$ \\ $r_{\mathrm{bulge}}$ & $1.26$ & $0.82$ \\ c\footnote{Halo concentration parameter} & $15$ & $15$ \\ $\lambda$\footnote{Halo spin parameter} & $0.10$ & $0.07$ \\ $v_{\mathrm{rot}}^{\mathrm{max}}$\footnote{Maximum rotational velocity in $\kms$} & $189$ & $198$\\ \hline \hline \end{tabular} \end{table} The progenitor galaxies are set up in equilibrium according to the method of \citet{SpringelDiMatteoHernquist2005MNRAS} with a total virial mass of $M_{\mathrm{vir}} = 5.52 \times 10^{11} \Msun$ for each galaxy. The dark matter halos are constructed using a \citet{Hernquist1990ApJ} density profile. They are populated with exponential stellar disks comprising a constant disk mass fraction $m_{\mathrm{d}} = 0.075$ of the total virial mass and a stellar Hernquist bulge with a bulge mass fraction of $m_{\mathrm{b}} = 0.025$ $(m_{\mathrm{b}}=1/3 m_{\mathrm{d}})$. The gas mass fraction of the disk component is $f_{\mathrm{g}} = 0.2$ with the rest of the disk mass remaining in stars. The disk and bulge scale lengths are determined using the \citet*{MoMaoWhite1998MNRAS} formalism. A summary of the most relevant model parameters is given in Table \ref{Tab:RunParameters}. Each galaxy is realized with $N_\mathrm{tot} = 1.2 \times 10^6$ particles, i.e. 400,000 halo particles, 200,000 bulge particles, 480,000 disk particles and 120,000 SPH particles. In order to avoid spurious two-body effects we ensured that all baryonic particles have the same mass and only {\it one} stellar particle is spawned per SPH particle. This yields a total baryon fraction of $f_\mathrm{bary} = 10\%$ with particle masses for the baryonic components (bulge, disk, formed stars, and gas) of $m_\mathrm{bary} = 6.9 \times 10^4 \Msun$ and $m_\mathrm{DM} = 1.2 \times 10^6 \Msun$ for the dark matter particles. The gravitational softening lengths are set to $\epsilon_\mathrm{bary} = 0.035 \kpc$ for baryons, and $\epsilon_\mathrm{DM} = 0.15 \kpc$ for dark matter particles, scaled according to $\epsilon_\mathrm{DM} = \epsilon_\mathrm{bary} \thinspace(m_\mathrm{DM}/m_\mathrm{bary})^{1/2}$. We adopt an initially nearly-parabolic, prograde orbit geometry (the orbital plane lies in the x-y plane) with a pericenter distance of $r_{\mathrm{p}} = r_{\mathrm{d,4038}} + r_{\mathrm{d,4039}} = 10.4 \kpc$ and an initial separation of $r_{\mathrm{sep}} = r_{\mathrm{vir}} = 168 \kpc$. For the orientation of the progenitor disks we found the best match to the Antennae system with inclinations $i_{\mathrm{4038}} = 60\degr,\, i_{\mathrm{4039}} = 60\degr$ and arguments of pericenter $\omega_{\mathrm{4038}}= 30\degr,\, \omega_{\mathrm{4039}}= 60\degr$ \citep[see][]{Toomre&Toomre1972ApJ}. 
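The resolution bookkeeping behind these numbers is easily reproduced (a minimal sketch using only the quantities quoted above; masses in $\Msun$, lengths in kpc):

\begin{verbatim}
import numpy as np

M_vir, f_bary = 5.52e11, 0.10
N_bulge, N_disk, N_sph, N_halo = 200_000, 480_000, 120_000, 400_000

m_bary = f_bary * M_vir / (N_bulge + N_disk + N_sph)  # ~6.9e4 Msun
m_dm   = (1.0 - f_bary) * M_vir / N_halo              # ~1.2e6 Msun

eps_bary = 0.035                                      # kpc
eps_dm   = eps_bary * np.sqrt(m_dm / m_bary)          # ~0.15 kpc
print(m_bary, m_dm, eps_dm)
\end{verbatim}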
\section{Results}
\label{results}

\subsection{The morphological and kinematical match}
\label{subsec:KinModel}

We determine the time when the simulation best matches the Antennae together with the viewing angles ($\theta$,$\psi$,$\phi$) which specify a series of subsequent rotations around the x-, y-, and z-axis. In the further analysis we will use the rotated 3D position-velocity subspace, i.e. the plane-of-the-sky (x$^\prime$-y$^\prime$ plane) and the line-of-sight velocity $v_{\mathrm{los}}$, for comparison with the observations. Finally, we apply a distance scale ($\mathcal{L}$) relative to a fiducial distance of 22 \Mpc \citep{SchweizerEtAl2008AJ}\footnote{Note that the distance to the Antennae is a matter of recent debate. The systemic recession velocity yields a distance of $19.2 \Mpc$ (assuming $\mathrm{H}_0=75\,\kms\,\Mpc^{-1}$) while photometry of the tip of the red giant branch suggests a much shorter distance of only $13.3 \Mpc$ \citep{SavianeEtAl2008ApJ...678..179S}. Recently, \citet{SchweizerEtAl2008AJ} have used three independent methods to determine a distance of $22\pm 3 \Mpc$.} and assume a systemic helio-centric velocity of $1630 \kms$ to fit the observational data to the physical scales in the simulation. We find our best match to the observed large- and small-scale properties of the system with viewing angles of $(93,69,253.5)$ and $\mathcal{L} = 1.4$, yielding a distance of $D = 30.8 \Mpc$ to the system. The ``best fit'' ($t=1.24 \Gyr$ after the beginning of the simulation) is reached only $\sim40 \Myr$ after the second encounter ($t = 1.20 \Gyr$), and approximately $50 \Myr$ before the final merging of the galaxy centers ($t = 1.29 \Gyr$). From our larger parameter study we found this exact timing to be a mandatory requirement for reproducing the overlap starburst. The first close passage of the two progenitor disk galaxies occurred $\sim600 \Myr$ ago, considerably earlier than the $\sim200-400 \Myr$ suggested in earlier models \citep{Barnes1988ApJ, MihosBothunRichstone1993ApJ}, and in much better agreement with observed ``intermediate-age'' star clusters ($\sim500-600 \Myr$).

\begin{figure*} \centering \includegraphics[width=18cm]{./p2.eps} \caption{Line-of-sight velocity fields inside $18 \kpc$ of the simulated and observed central disks. Isovelocity contours are drawn at $10 \kms$ intervals ranging from $-150 \kms$ to $150 \kms$. {\it Left:} density-weighted velocity map. {\it Right:} intensity-weighted \HI velocity field of the high-resolution data cube. A column density threshold is applied as in Fig.\ref{pic1:PV}.} \label{pic2:LoSHIVelfield} \end{figure*}

In Fig.\ref{pic1:PV} we show three large-scale projections of the PV cube of our simulated gas particles (NGC 4038: blue, NGC 4039: red) together with a direct comparison to \HI observations (yellow) by \citet{HibbardEtAl2001AJ}. The \HI gas phase is used here as a tracer for the smooth underlying morphological and kinematical structure of the gas in the Antennae and we apply, similarly to the \HI observations, a column density threshold of $N_{\mathrm{\HI}} \leq 1\times 10^{20}\cm^{-2}$ in the simulation. The top left panel displays the plane-of-the-sky projection, while in the top right and bottom left panels we show two orthogonal position-velocity profiles, Declination versus $v_\mathrm{los}$ (upper right) and $v_\mathrm{los}$ versus Right Ascension (lower left).
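The projection itself is compact to write down (our sketch, assuming the viewing angles are applied as successive rotations about the fixed x-, y- and z-axes, in that order; \texttt{pos} and \texttt{vel} are $(N,3)$ arrays of simulated particle coordinates):

\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def project(pos, vel, theta=93.0, psi=69.0, phi=253.5):
    # successive rotations about x, y and z by the best-fit angles
    R = Rotation.from_euler("xyz", [theta, psi, phi], degrees=True)
    pos_r, vel_r = R.apply(pos), R.apply(vel)
    # x'-y' is the plane of the sky; v_z' is the line-of-sight velocity
    return pos_r[:, :2], vel_r[:, 2]
\end{verbatim}

The distance scale $\mathcal{L}$ and the systemic velocity then only enter when converting the observed angular coordinates and velocities to the physical scales of the simulation.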
The simulation matches the morphology and kinematics of the observed system very closely, especially for the southern arm, including the prominent kink in the velocities at the tip of the tidal arm (see Fig.\ref{pic1:PV}, upper right and lower left panels). Due to the different initial orientations of the progenitor disks, the gas distribution in the northern arm is more diffuse than in the southern arm. The assumed column density cut-off therefore results in a characteristic stubby geometry similar to that observed (Fig.\ref{pic1:PV}, upper left). A closeup of the simulated and observed line-of-sight gas velocity fields in the central $18 \kpc$ of NGC 4038/39 is shown in the left and right panels of Fig.\ref{pic2:LoSHIVelfield}. Gas particles are binned on an SPH-kernel-weighted $256^3$ grid and summed up along the line-of-sight to produce a density-weighted velocity map \citep[see][]{HibbardEtAl2001AJ}. The grid is smoothed with the observed beam profile and displayed using the same projected pixel sizes ($\Delta_\mathrm{RA} = 2.64 \arcsec$ and $\Delta_\mathrm{Decl} = 2.5 \arcsec$) as in \citet{HibbardEtAl2001AJ}. We overlay isovelocity contours spaced by $10 \kms$ and apply the same column density threshold as in Fig.\ref{pic1:PV}. The simulation agrees well with the observed velocity field of the disk of NGC 4038. The northern part is approaching and the southern part is receding at similar velocities. Similarly, the simulated disk of NGC 4039 is approaching in the northern part and receding in the southern part, as in the observations. In the simulation we have significantly more gas in the overlap region and the southern disk than in the observed $\HI$ velocity field. This is due to the fact that we do not distinguish between molecular and atomic gas in our simulation whereas most of the gas in the central regions of the Antennae seems to be in molecular form \citep{GaoEtAl2001ApJ}.

\subsection{The recent starburst}

In Fig.\ref{pic3:RemnantDisks} we show a color-coded map of the total gas surface density in the central $18 \kpc$ of the simulation (upper panel). The nuclei of the progenitor disks are still distinct and connected by a bridge of high-density gas. In the lower panel of Fig.\ref{pic3:RemnantDisks} we show the corresponding iso-density contours and overplot in color all stellar particles (with individual masses of $m_\mathrm{star} = 6.9 \times 10^4 M_{\odot}$) formed in the last $ \tau < 15 \Myr$ (blue), $15 \Myr < \tau < 50 \Myr$ (green), and $50 \Myr < \tau < 100 \Myr$ (red). In regions of currently high gas densities the very young stars (blue) form predominantly at the centers, in the overlap region, as well as in the spiral features around the disks similar to the observed system \citep{WhitmoreEtAl1999AJ,WangEtAl2004ApJS}, except that the star formation in the centers seems to be much more pronounced in our simulation (see below). However, the overlap region is almost devoid of stars older than $50 \Myr$ (red), indicating that the overlap starburst is a very recent phenomenon. Simulating the system further in time we find that the total duration of the off-center starburst is no longer than $\approx 20 \Myr$. \citet{2009ApJ...699.1982B} derived SFRs in the nuclei of NGC 4038 ($0.63 \Msun \yr^{-1}$) and NGC 4039 ($0.33 \Msun \yr^{-1}$), and a total SFR of $5.4 \Msun \yr^{-1}$ for 5 infrared peaks in the overlap region.
Comparing these values to simulated SFRs of $2.9 \Msun \yr^{-1}$ (NGC 4038) and $2.8 \Msun \yr^{-1}$ (NGC 4039) in the galactic nuclei (defined as the central $\kpc$), together with $1.0 \Msun \yr^{-1}$ in the overlap region, we find that our simulation still falls short of producing the most intense starburst in the overlap compared to only modest star formation in the nuclei. We note, however, that we find a ratio $(\mathrm{SFR}_\mathrm{overlap}/\mathrm{SFR}_\mathrm{nuclei})$ that is a factor of $\sim60$ (!) higher than reported in a previous Antennae model \citep{MihosBothunRichstone1993ApJ}. The total SFR of $8.1 \Msun \yr^{-1}$ measured from the SPH particles is in good agreement with the range of observed values between $5 - 20 \Msun \yr^{-1}$ \citep[e.g.][]{ZhangFallWhitmore2001ApJ}. In Fig.\ref{pic4:SFH} we plot the formation rate of stellar particles within $18 \kpc$ against their age (solid line). We find a significant increase of the SFR after the first and in particular after the second pericenter (dotted and dashed vertical lines). Assuming the simulated SFR to be directly proportional to the cluster formation rate we compare the simulated SFR to observations of the age distribution of young star clusters \citep{FallChandarWhitmore2005ApJ, WhitmoreChandarFall2007AJ}, using the same time binning of $0.5$ dex in $\log(\tau\,\mathrm{[yr]})$. We find that our simulated data are in very good agreement, exhibiting a similarly good match to the observed cluster formation rate as found by \citet{BastianEtAl2009ApJ...701..607B} who compared to the \citet{MihosBothunRichstone1993ApJ} Antennae model. This model predicted a nearly constant formation rate for ages $\tau < 100 \Myr$. However, in contrast to the \citet{MihosBothunRichstone1993ApJ} model, we find an additional significant increase in the formation rate of young stellar populations at ages $\tau \lesssim 10\Myr$ induced by the recent second encounter. Despite the increase, the predicted formation rates of young clusters are still an order of magnitude lower than observed. Further investigations will have to show whether this discrepancy originates from still uncertain details of the star formation model or explicit effects of the early disruption and evolution of massive clusters \citep[``infant mortality'', see e.g.][]{WhitmoreChandarFall2007AJ,BastianEtAl2009ApJ...701..607B} which were not included in our model. In this simulation, and other simulations in our parameter study with similar central properties, we find prominent star formation in the overlap region only for a very short period after the second encounter, lasting $\sim20 \Myr$. In addition, stellar feedback is required to prevent the rapid consumption of gas by star formation at earlier times. In a comparison run without stellar feedback most of the gas is rapidly consumed after the first pericenter and not enough gas is left over to form the overlap starburst after the second encounter. Thus, a central conclusion of our study is that the strong, localized off-center starburst observed in the overlap region stems from a short-lived transient phase in the merging process associated with the recent second encounter.

\begin{figure} \centering \includegraphics[width=9cm]{./p3.eps} \caption{{\it Top:} Gas surface density in the central $18 \kpc$ of the simulation. There are clear concentrations of gas at the two centers of the galaxies and the overlap region (red contours). {\it Bottom:} Recently formed stellar particles color-coded by their ages.
The youngest stars (blue: $\tau < 15 \Myr$) have formed predominantly in the overlap region and at the centers, associated with the peaks in the gas surface density, as well as in tidal features around the disks (see upper panel). Older stars (green: $15 \Myr < \tau < 50 \Myr$; red: $50 \Myr < \tau < 100 \Myr$) have formed throughout the galactic disks and the tidal arcs. } \label{pic3:RemnantDisks} \end{figure}

\begin{figure} \centering \epsscale{1.} \plotone{./p4.eps} \caption{Formation rate of stellar particles versus age for our simulation (stars and solid line). Vertical lines indicate the time of first (dotted) and second (dashed) pericenter. The observed cluster formation rate from \citet{WhitmoreChandarFall2007AJ} is given as filled circles.} \label{pic4:SFH} \end{figure}

\section{Discussion}
\label{discussion}

The new numerical model for the Antennae galaxies presented in this Letter improves on previous models in several key aspects. We find an excellent morphological and kinematical match to the observed large-scale morphology and \HI velocity fields \citep{HibbardEtAl2001AJ}. In addition, our model produces a fair morphological and kinematical representation of the observed central region. A strong off-center starburst naturally develops in the simulation, in good qualitative and quantitative agreement with the observed extra-nuclear star-forming sites \citep[e.g.][]{MirabelEtAl1998A&A,WangEtAl2004ApJS}. This is a direct consequence of our improved merger orbit. All previous studies using traditional orbits failed to reproduce the overlap starburst \citep[see e.g.][]{KarlEtAl2008AN....329.1042K}. The exact timing after the second encounter shortly before the final merger ensures that the galaxies are close enough for the efficient tidally-induced formation of the overlap region. The formation of the extra-nuclear starburst is likely to be supported by compressive tidal forces which can dominate the overlap region in Antennae-like galaxy mergers during close encounters \citep{2008MNRAS.391L..98R,2009ApJ...706...67R}. Energetic feedback from supernovae prevents the depletion of gas by star formation at earlier merger stages and ensures that by the time of the second encounter enough gas is left over to fuel the starburst. Simulating the system with an identical orbit, but now employing an isothermal EQS without feedback from supernovae ($q_\mathrm{EQS}=0$), resulted in most of the gas being depleted by star formation at earlier phases of the merger, i.e. during the first encounter. Our model predicts that the observed off-center starburst is a transient feature with a very short lifetime ($ \approx 20$ \Myr) compared to the full merger process ($\approx 650 \Myr$ from first encounter to final merger). This fact serves as a plausible explanation for why such features are rarely observed in interacting galaxies \citep{XuEtAl2000ApJ}. However, the puzzling gas concentration observed between the two nuclei of the \object{NGC 6240} merger system might be of a similar origin (\citealp{1999ApJ...524..732T}; Engel et al., in prep.), suggesting that the Antennae overlap region, although rare, is not a unique feature. In addition, our improved model can serve as a solid basis and testbed for further theoretical studies of the enigmatic interacting NGC 4038/39 system. For example, the overlap region in the Antennae is dominated by molecular gas, which we do not model in the simulation presented here.
Given that we now have a dynamically viable method for forming the overlap region, detailed investigations of the molecular gas formation process can be undertaken using improved theoretical models \citep[e.g.][]{2008ApJ...680.1083R,2009ApJ...707..954P}. In a first application using this new orbital configuration we have been able to qualitatively and quantitatively reproduce the magnetic field morphology of the Antennae galaxies \citep{2009arXiv0911.3327K}. Finally, accurate modeling of nearby interacting systems also provides unique insights into the merger dynamics and timing of observed merger systems. The Antennae galaxies traditionally occupy the first place in the classical Toomre sequence, which orders galaxies according to their apparent merger stage \citep{Toomre1977egsp.conf..401T}, with the Mice (\object{NGC 4676}) being between their first and second pericenter \citep{Barnes2004MNRAS} and thus in second place behind the Antennae. According to our proposed model the Antennae galaxies are in a later merger phase, after the second pericenter. As a consequence, the Antennae would lose their first place, requiring a revision of the classical Toomre sequence.

\begin{acknowledgements} This work was supported by the DFG priority program 1177 and the DFG Cluster of Excellence ``Origin and Structure of the Universe''. We would like to thank J. Hibbard, N. Bastian, B. Whitmore and M. Fall for valuable discussions on the manuscript. F. Renaud is a member of the IK I033-N Cosmic Matter Circuit at the University of Vienna. \end{acknowledgements}
\section{Introduction}

Fluctuations appearing when counting the atoms in a given sub-volume of a quantum system are a fundamental feature determined by the interplay between the atomic interactions and quantum statistics. They can be used to investigate many-body properties of the system, and in particular non-local properties of the $g^{(2)}$ pair-correlation function. These fluctuations were studied at zero temperature for quantum gases in different regimes and spatial dimensions in \cite{Molmer,Houches2003,VarennaYvan,Combescot,Brustein}. Sub-poissonian fluctuations appear for non-interacting fermions and for interacting bosons. A related issue in condensed matter physics is that of partition noise in electron systems \cite{Levitov}. In cold-atom experiments it is now possible to directly measure the fluctuations in atom number within a given region, as done for example in \cite{Bouchoule} for a quasi-one-dimensional system. However, finite temperature plays a major role in experiments. Very recently, experiments were done on an atom chip where a cold gas of Rb atoms, initially trapped in a single harmonic potential well, is split into two parts by raising a potential barrier. Accurate statistics of the particle number difference $N_L-N_R$ between the left and right wells are then collected in the modified potential. By varying the initial temperature of the sample across the transition for Bose-Einstein condensation, they observe a marked peak in the fluctuations of the particle number difference below the transition temperature $T_c$, while shot-noise fluctuations are recovered for $T>T_c$. For $T \ll T_c$ they finally obtain sub-shot-noise fluctuations due to the repulsive interactions between atoms \cite{Kenneth}. Here we show that the peak of fluctuations for $T<T_c$ is in fact a general feature already appearing in a single harmonic well if we look at the fluctuations of the atom number difference $N_L-N_R$ between the left half and the right half of the trap along one axis. We show that, contrary to what happens for the total number fluctuations, fluctuations in the particle number difference $N_L-N_R$ can be computed within the grand canonical ensemble without any pathology. In the first part of the paper we address the ideal gas case, for which we find the complete analytical solution in the grand canonical ensemble. We derive the asymptotic behaviors for $T \ll T_c$ and $T \gg T_c$ and we explain the physical origin of the ``bump'' in fluctuations of the particle number difference for $T<T_c$. In the second part of our paper we then address the interacting case.

\section{Ideal gas: exact solution}

We consider an ideal gas of bosons in a three-dimensional harmonic potential. The signal we are interested in is the particle number difference $N_L-N_R$ between the left and right halves of the harmonic potential along one direction, as shown in Fig.\ref{fig:parab}.

\begin{figure}[hob] \centerline{\includegraphics[width=7cm,clip=]{fig1.eps}} \caption{We consider fluctuations of the particle number difference $N_L-N_R$ between the left and the right halves of a three-dimensional harmonic potential. \label{fig:parab}} \end{figure}

In terms of the atomic field operators: \begin{equation} N_L-N_R=\int_{\mathbf{r}\in L} {\psi}^\dagger \psi - \int_{\mathbf{r}\in R} {\psi}^\dagger \psi \,. \end{equation} Due to the symmetry of the problem, $N_L-N_R$ has a zero mean value.
It is convenient to express its variance in terms of the unnormalized pair correlation function \begin{equation} g^{(2)}(\mathbf{r},\mathbf{r'})=\langle \psi^\dagger(\mathbf{r}) \psi^\dagger(\mathbf{r'}) \psi(\mathbf{r'}) \psi(\mathbf{r}) \rangle \,. \end{equation} Normally ordering the field operators with the help of the bosonic commutation relation pulls out a term equal to the mean total number of particles: \begin{eqnarray} \label{eq:one} \mbox{Var}(N_L-N_R) &=& \langle N \rangle + 2 \left[ \int_{\mathbf{r}\in L} \int_{\mathbf{r'}\in L} g^{(2)}(\mathbf{r},\mathbf{r'}) \right. \nonumber \\ &-& \left. \int_{\mathbf{r}\in L} \int_{\mathbf{r'}\in R} g^{(2)}(\mathbf{r},\mathbf{r'}) \right] \,. \label{eq:prima} \end{eqnarray} We assume that the system is in thermal equilibrium in the grand canonical ensemble with $\beta=1/k_BT$ the inverse temperature and $\mu$ the chemical potential. Since the density operator is Gaussian, we can use Wick's theorem and express $g^{(2)}(\mathbf{r},\mathbf{r'})$ in terms of the first-order coherence function $g^{(1)}(\mathbf{r},\mathbf{r'}) =\langle \psi^\dagger(\mathbf{r}) \psi(\mathbf{r'}) \rangle$: \begin{equation} g^{(2)}(\mathbf{r},\mathbf{r'})=g^{(1)}(\mathbf{r},\mathbf{r'})g^{(1)}(\mathbf{r'},\mathbf{r})+g^{(1)}(\mathbf{r},\mathbf{r})g^{(1)}(\mathbf{r'},\mathbf{r'}) \,. \end{equation} The function $g^{(1)}(\mathbf{r},\mathbf{r'})$ is a matrix element of the one-body density operator \begin{equation} g^{(1)}(\mathbf{r},\mathbf{r'})=\langle \mathbf{r'}| \frac{1}{z^{-1} e^{\beta h_1}-1} |\mathbf{r} \rangle \end{equation} where $h_1$ is the single particle Hamiltonian \begin{equation} h_1=\frac{\mathbf{p}^2}{2m}+ \sum_{\alpha=x,y,z} \frac{1}{2} m \omega_\alpha^2 r_\alpha^2 \,. \end{equation} To compute $g^{(1)}$, it is convenient to expand the one-body density operator in powers of the fugacity $z=e^{\beta \mu}$ \cite{LesHouches}: \begin{equation} g^{(1)}(\mathbf{r},\mathbf{r'})=\langle \mathbf{r'}| \sum_{l=1}^{\infty} z^l e^{-l\beta h_1} |\mathbf{r} \rangle \,. \end{equation} On the other hand, for a harmonic potential the matrix elements of $e^{-\beta h_1}$ are known \cite{Landau}. We then have: \begin{eqnarray} \label{eq:g1} g^{(1)}(\mathbf{r},\mathbf{r'}) = \sum_{l=1}^{\infty} z^l \left( \frac{m \bar{\omega}}{2\pi \hbar} \right)^{3/2} \prod_{\alpha=x,y,z} \left[ \sinh (l \eta_\alpha) \right]^{-1/2} &\times& \nonumber \\ \exp \left\{ - \frac{m \omega_\alpha}{4\hbar} \left[ (r_\alpha+r'_\alpha)^2 \tanh \left(\frac{l\eta_\alpha}{2}\right) \right. \right. &+& \nonumber \\ \left. \left. (r_\alpha-r'_\alpha)^2 \coth \left(\frac{l\eta_\alpha}{2}\right) \right] \right\} \end{eqnarray} where we introduced the geometric mean of the oscillation frequencies $\bar{\omega}=(\omega_x \omega_y \omega_z)^{1/3}$ and $\eta_\alpha=\beta \hbar \omega_\alpha$. It is convenient to renormalize the fugacity introducing $\tilde{z}=z \exp(-\sum_{\alpha} \eta_\alpha/2)$ that spans the interval $(0,1)$. After some algebra \cite{Integrales}, the variance of $N_L-N_R$ is expressed as a double sum that we reorder as \begin{equation} \mbox{Var}(N_L-N_R) = \langle N \rangle + \sum_{s=1}^{\infty} c_s \tilde{z}^s \label{eq:result} \end{equation} with \begin{multline} c_s = \displaystyle{\sum_{l=1}^{s-1}} \frac{ 1-\frac{4}{\pi} \arctan \sqrt{\tanh (\frac{1}{2}l\eta_x) \tanh \left[ \frac{1}{2}(s-l)\eta_x \right] }}{\displaystyle{\prod_{\alpha=x,y,z}} \left[ 1 - e^{-\eta_\alpha s} \right] }, \end{multline} with $c_1=0$.
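The series is straightforward to evaluate numerically (our sketch; \texttt{eta} holds the three $\eta_\alpha=\beta\hbar\omega_\alpha$ for a splitting along $x$, and the truncation \texttt{smax} must be increased as $\tilde z \to 1$, where convergence becomes slow):

\begin{verbatim}
import numpy as np

# Excess variance: the sum over s in Eq. (eq:result), to be added to
# the shot-noise term <N>.
def excess_var(ztil, eta, smax=4000):
    ex = eta[0]                       # eta along the splitting axis
    total = 0.0
    for s in range(2, smax + 1):      # c_1 = 0
        l = np.arange(1, s)
        num = np.sum(1 - (4/np.pi) * np.arctan(np.sqrt(
            np.tanh(l*ex/2) * np.tanh((s - l)*ex/2))))
        den = np.prod([1 - np.exp(-e*s) for e in eta])
        total += ztil**s * num / den  # c_s * ztil^s
    return total

# e.g. a cigar-shaped trap: eta = (eta_x, 2*eta_x, 2*eta_x)
\end{verbatim}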
Correspondingly, the mean atom number is expressed as \begin{equation} \langle N \rangle = \sum_{l=1}^{\infty} \tilde{z}^l \prod_\alpha \frac{1+\coth(l\eta_\alpha/2)}{2}. \label{eq:nat} \end{equation} This constitutes our analytical solution of the problem in the grand canonical ensemble. In practice, the forms (\ref{eq:result}) and (\ref{eq:nat}) are difficult to handle in the degenerate regime, since the series converge very slowly when $\tilde{z}\to 1$. A useful exact rewriting is obtained by pulling out the asymptotic behaviors of the summands. For the signal we obtain the operational form \begin{equation} \mbox{Var}(N_L-N_R) = \langle N \rangle + c_\infty \langle N_0 \rangle + \sum_{s=1}^{\infty} ( c_s -c_\infty) \tilde{z}^s \label{eq:split} \end{equation} where \begin{equation} \label{eq:clim} c_\infty = \lim_{s\to \infty} c_s = 2 \sum_{l=1}^\infty \left( 1 - \frac{4}{\pi} \arctan \sqrt{\tanh \frac{l \eta_x}{2} }\, \right) \,, \end{equation} and $\langle N_0 \rangle=\tilde{z}/(1-\tilde{z})$ is the mean number of condensate particles. The mean atom number is rewritten as \begin{equation} \label{eq:split_nat} \langle N\rangle = \langle N_0\rangle + \sum_{l=1}^{\infty} \tilde{z}^l \left[-1 + \prod_\alpha \frac{1+\coth(l\eta_\alpha/2)}{2}\right]. \end{equation} In Fig. \ref{fig:varn} we show an example of fluctuations of the particle number difference for realistic parameters of an atom-chip experiment. It is apparent that fluctuations are weakly super-poissonian above $T_c$ and a marked peak of fluctuations occurs for $T<T_c$. In the following sections we perform some approximations or transformations in order to obtain explicit formulas and get some physical insight.

\begin{figure}[hob] \centerline{\includegraphics[width=7cm,clip=]{fig2.eps}} \caption{(Color online) Normalized variance of the particle number difference $N_L-N_R$ as a function of temperature in a cigar-shaped trap with $\omega_y=\omega_z=2 \omega_x$. The number of particles is $\langle N\rangle=6000$ (black lines) and $\langle N\rangle=13000$ (red lines). The inset is a magnification of the $T>T_c$ region. Solid lines: exact result (\ref{eq:result}) together with (\ref{eq:nat}). Dashed line for $T>T_c$: approximate result (\ref{eq:ht}) together with (\ref{eq:nat_ht}). Dashed line for $T<T_c$: approximate result obtained from the improved estimates (\ref{eq:euler}) and (\ref{eq:ltb_nat}). The temperature $T$ is expressed in units of the critical temperature $T_c$ defined in Eq. (\ref{eq:Tc}). \label{fig:varn}} \end{figure}

\section{Approximate formulas for $k_BT \gg \hbar \omega$}

In this section, we consider the limit of a large atom number and a high temperature $k_B T \gg \hbar \omega_\alpha$ for all $\alpha$. {\it Non-condensed regime:} Taking the limit $\eta_\alpha \ll 1$ in Eq. (\ref{eq:nat}) we get \begin{equation} \langle N \rangle \simeq \left( \frac{k_BT}{\hbar \bar{\omega}}\right)^3 g_3(\tilde{z})\, \label{eq:nat_ht} \end{equation} where $g_\alpha(z)=\sum_{l=1}^\infty z^l/l^\alpha$ is the Bose function. From this equation we recover the usual definition of the critical temperature $T_c$: \begin{equation} \label{eq:Tc} k_B T_c = [N/\zeta(3)]^{1/3} \hbar \bar{\omega} \,, \end{equation} where $\zeta(3)=g_3(1)$ with $\zeta$ the Riemann function. Taking the same limit in (\ref{eq:result}) gives \begin{equation} \mbox{Var}(N_L-N_R) \simeq \langle N \rangle \left[ 1+ \frac{g_2(\tilde{z})-g_3(\tilde{z})}{\zeta(3)} \frac{T^3}{T_c^3} \right].
\label{eq:ht} \end{equation} At $T=T_c$ this leads to \begin{equation} \mbox{Var}(N_L-N_R)(T_c) \simeq N\frac{\zeta(2)}{\zeta(3)}\simeq 1.37 N\,, \end{equation} showing that the non-condensed gas is weakly super-poissonian, as already observed in Fig.\ref{fig:varn}. Alternatively, one may directly take the limit $\eta_\alpha\to 0$ in Eq.(\ref{eq:g1}), yielding \begin{multline} g^{(1)}(\mathbf{r},\mathbf{r'}) \simeq \sum_{l=1}^{\infty} \frac{\tilde{z}^l}{l^{3/2} \lambda_{dB}^3} \prod_{\alpha=x,y,z} \\ \exp \left\{ - \frac{l m \omega_\alpha^2}{2 k_BT} \left( \frac{r_\alpha+r'_\alpha}{2} \right)^2 - \frac{\pi}{l \lambda_{dB}^2} (r_\alpha-r'_\alpha)^2 \right\} . \label{eq:g1lda} \end{multline} This semiclassical approximation coincides with the widely used local density approximation, and allows one to recover (\ref{eq:nat_ht}) and (\ref{eq:ht}). {\it Bose-condensed regime:} In this regime $\tilde{z}\to 1$ so we use the splittings (\ref{eq:split}) and (\ref{eq:split_nat}). Setting $\tilde{z}=1$ and taking the limit $\eta_\alpha\to 0$ in each term of the sum over $l$ in (\ref{eq:split_nat}), we obtain the usual condensate fraction \begin{equation} \frac{\langle N_0\rangle}{\langle N\rangle} \simeq 1 - \frac{T^3}{T_c^3}. \label{eq:lt_nat} \end{equation} The same procedure may be applied to the sum over $s$ in (\ref{eq:split}). The calculation of the small-$\eta$ limit of $c_\infty$ requires a different technique: Contrary to the previous cases, the sum in (\ref{eq:clim}) is not dominated by values of the summation index $\ll 1/\eta_\alpha$ and explores high values of $l\sim 1/\eta_x$. As a remarkable consequence, the local-density approximation fails in this case \cite{lda}. We find that one rather has to replace the sum over $l$ by an integral in (\ref{eq:clim}): \begin{equation} \label{eq:climint} c_\infty \simeq \int_0^{+\infty} dl\, f(l) = \frac{2\ln 2}{\eta_x} \end{equation} with $f(x)=2-(8/\pi)\arctan\sqrt{\tanh(x\eta_x/2)}$. This leads to the simple formula for $T<T_c$: \begin{multline} \label{eq:lt} \mbox{Var}(N_L-N_R) \simeq \langle N \rangle\left[1 + \frac{\zeta(2)-\zeta(3)}{\zeta(3)} \frac{T^3}{T_c^3} \right.\\ \left.+2\ln 2 \left( \frac{k_BT_c}{\hbar \omega_x} \right) \left(1 - \frac{T^3}{T_c^3}\right)\, \frac{T}{T_c} \right]. \end{multline} The second line of Eq.(\ref{eq:lt}) is a new contribution involving the macroscopic value of $\langle N_0\rangle$ below the critical temperature. Since $k_B T_c \gg \hbar \omega_x$ here, it is the dominant contribution to the fluctuations of the particle number difference. It clearly leads to the occurrence of a maximum of these fluctuations, at a temperature which remarkably is independent of the trap anisotropy: \begin{equation} \left( \frac{T}{T_c} \right)_{\rm max} \simeq 2^{-2/3} \simeq 0.63 \,. \label{eq:pos_max} \end{equation} The corresponding variance is strongly super-poissonian in the large atom-number limit: \begin{equation} \left[ \mbox{Var}(N_L-N_R)\right]_{\rm max} \simeq \langle N \rangle \left[ 1+ \frac{3}{4} \ln 2 \left( \frac{2 \langle N \rangle}{\zeta(3)} \right)^{1/3} \frac{\bar{\omega}}{\omega_x} \right] \label{eq:max}. \end{equation} For the parameters of the upper curve in Fig.~\ref{fig:varn}, these approximate formulas lead to a maximal variance over $\langle N\rangle$ equal to $\simeq 27.5$, whereas the exact result is $\simeq 22.2$, located at $T/T_c\simeq 0.61$. We thus see that finite-size corrections remain important even for the large atom number $\langle N\rangle=13000$.
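These statements are easy to check numerically: the exact sum (\ref{eq:clim}) for $c_\infty$ approaches the limit $2\ln 2/\eta_x$ of Eq. (\ref{eq:climint}) only slowly as $\eta_x\to 0$ (our sketch; the residual, which tends to a constant $\simeq -1$, is precisely the finite-size correction computed next):

\begin{verbatim}
import numpy as np

def c_inf(eta_x, lmax=20000):
    l = np.arange(1, lmax + 1)
    term = 1 - (4/np.pi) * np.arctan(np.sqrt(np.tanh(l * eta_x / 2)))
    return 2.0 * term.sum()

for eta in (0.2, 0.05, 0.01):
    print(eta, c_inf(eta), 2*np.log(2)/eta)  # exact sum vs leading order
\end{verbatim}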
Fortunately, it is straightforward to calculate the next-order correction. For the condensate atom number we simply expand the summand in (\ref{eq:split_nat}) up to order $\eta_\alpha^2$, and we recover the known result \cite{Qui_citer}: \begin{equation} \frac{\langle N_0\rangle}{\langle N\rangle} \simeq 1 - \frac{T^3}{T_c^3} -\frac{T^2}{T_c^2} \frac{3\zeta(2)}{2\zeta(3)} \frac{\hbar\omega_m}{k_B T_c} \label{eq:ltb_nat} \end{equation} with the arithmetic mean $\omega_m=\sum_\alpha \omega_\alpha/3$. For $c_\infty$ we use the Euler-Maclaurin summation formula, applied to the previously defined function $f(x)$ over the interval $(1,+\infty)$, and we obtain \begin{equation} \label{eq:euler} c_\infty = \frac{2\ln 2}{\eta_x} -1 + A \eta_x^{1/2} + O(\eta_x^{3/2}) \end{equation} with $A = - 2^{5/2} \zeta(-1/2)/\pi \simeq 0.374$ \cite{details}. Using these more accurate formulas for $\langle N_0\rangle$ and $c_\infty$ in the second term of the right-hand side of (\ref{eq:split}) leads to an excellent agreement with the exact result, see the dashed lines in Fig.~\ref{fig:varn}, which are practically indistinguishable from the solid lines. Note that the effect of the $-1$ correction in (\ref{eq:euler}) is to change the shot noise term $1$ in the square brackets of (\ref{eq:lt}) and (\ref{eq:max}) into $1-\langle N_0\rangle/\langle N \rangle$.
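As a minimal numerical illustration (assuming NumPy and SciPy are available; this is a check added here, not part of the derivation), one can verify the level $\zeta(2)/\zeta(3)\simeq 1.37$ at $T_c$, the peak position $2^{-2/3}$, and the accuracy of (\ref{eq:climint}) and (\ref{eq:euler}) against the exact sum (\ref{eq:clim}):

\begin{verbatim}
# Minimal check of the closed forms above (assumes numpy and scipy).
import numpy as np
from scipy.special import zeta

print(zeta(2)/zeta(3), 2**(-2/3))  # ~1.368 (level at T_c), ~0.63 (peak)

eta_x = 0.02                       # hbar*omega_x/(k_B T), chosen small
l = np.arange(1, 200001)
f_l = 2 - (8/np.pi)*np.arctan(np.sqrt(np.tanh(l*eta_x/2)))
c_exact = f_l.sum()                # exact sum defining c_infty
c_int = 2*np.log(2)/eta_x          # integral approximation
A = -2**2.5*(-0.2078862)/np.pi     # A, with zeta(-1/2) ~ -0.2078862
print(c_exact, c_int, c_int - 1 + A*np.sqrt(eta_x))
# ~68.37 vs 69.31 vs 68.37: the -1 correction matters
\end{verbatim}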
\section{A physical analysis singling out the condensate mode} To investigate the contribution of different physical effects to our observable, it is convenient to go back to the expression (\ref{eq:one}) and split the field operator into the condensate and the non-condensed part: \begin{equation} \psi(\mathbf{r})=\phi(\mathbf{r})a_0 + \delta \psi(\mathbf{r}) \,, \label{eq:sepa} \end{equation} where $\phi(\mathbf{r})$ is the ground mode wavefunction of the harmonic potential. The pair correlation function $g^{(2)}$ is then expressed as the sum of three contributions, $g^{(2)}(\mathbf{r},\mathbf{r'})= g_I^{(2)}+g_{II}^{(2)}+g_{III}^{(2)}$, sorted by increasing powers of $\delta\psi$: \begin{eqnarray} g_I^{(2)}&=& \phi^2(\mathbf{r})\phi^2(\mathbf{r'}) \langle a_0^\dagger a_0^\dagger a_0 a_0 \rangle \\ g_{II}^{(2)} &=& \left[ \phi(\mathbf{r})\phi(\mathbf{r'}) \langle a_0^\dagger a_0 \, \delta \psi^\dagger(\mathbf{r'})\delta \psi(\mathbf{r}) \rangle + \mathbf{r} \leftrightarrow \mathbf{r'} \right] \nonumber \\ &+& \left[ \phi^2(\mathbf{r}) \langle a_0^\dagger a_0 \, \delta \psi^\dagger(\mathbf{r'})\delta \psi(\mathbf{r'}) \rangle + \mathbf{r} \leftrightarrow \mathbf{r'} \right] \label{eq:gIIid} \\ g_{III}^{(2)} &=& \langle \delta \psi^\dagger(\mathbf{r})\delta \psi^\dagger(\mathbf{r'}) \delta \psi(\mathbf{r'})\delta \psi(\mathbf{r}) \rangle \,. \end{eqnarray} Averages involving different numbers of operators $a_0$ and $a_0^\dagger$ vanish since the system is in a statistical mixture of Fock states in the harmonic oscillator eigenbasis. The term $g_I^{(2)}$ originates from the condensate mode only. Its contribution to $\mbox{Var}(N_L-N_R)$ is zero for symmetry reasons. This is a crucial advantage, because it makes our observable immune to the non-physical fluctuations of the number of condensate particles in the grand canonical ensemble, and legitimizes the use of that ensemble. For the same symmetry reasons, the second line of $g_{II}^{(2)}$ has a zero contribution to $\mbox{Var}(N_L-N_R)$. The term $g_{III}^{(2)}$ originates from the non-condensed gas only. Below $T_c$ this gas is saturated ($\tilde{z}\simeq 1$) to a number of particles scaling as $T^3$, see (\ref{eq:lt_nat}), and its contribution to $\mbox{Var}(N_L-N_R)$, maximal at $T=T_c$, makes the fluctuations in the particle number difference only weakly super-Poissonian, as already discussed. Below $T_c$, the first line of $g_{II}^{(2)}$ is thus the important term. It originates from a beating between the condensate and the non-condensed fields. Its contribution to $\mbox{Var}(N_L-N_R)$ can be evaluated from Wick's theorem \cite{note}: \begin{multline} \label{eq:contII} \mbox{Var}_{II}(N_L-N_R) = 4 \langle N_0 \rangle \left[\int_{\mathbf{r}\in L} \int_{\mathbf{r'} \in L} \phi(\mathbf{r})\phi(\mathbf{r'}) g^{(1)}(\mathbf{r},\mathbf{r'}) \right. \\ \left. -\int_{\mathbf{r}\in L} \int_{\mathbf{r'} \in R} \phi(\mathbf{r})\phi(\mathbf{r'}) g^{(1)}(\mathbf{r},\mathbf{r'}) \right] \,. \end{multline} Using Eq. (\ref{eq:g1}) and setting $\tilde{z}\simeq 1$ in $g^{(1)}$, after some algebra, we obtain for $T<T_c$ \begin{equation} \mbox{Var}_{II}(N_L-N_R) \simeq \langle N_0\rangle c_\infty. \label{eq:beat} \end{equation} We can thus give a physical meaning to the mathematical splitting (\ref{eq:split}) for $T<T_c$: The second term and the sum over $s$ in the right-hand side of (\ref{eq:split}) respectively correspond to the condensate-non-condensed beating contribution $\mbox{Var}_{II}(N_L-N_R)$ and to the purely non-condensed contribution $\mbox{Var}_{III}(N_L-N_R)$. \section{Classical field approximation and interacting case} In this section we show for the ideal gas that the classical field approximation \cite{Kagan,Sachdev,Rzazewski0,Burnett} exactly gives the high temperature limit ($k_BT \gg \hbar \omega_x$) of the amplitude $c_\infty$ in the condensate-non-condensed beating term of $\mbox{Var}(N_L-N_R)$ (\ref{eq:beat}). We then use the classical field approximation to extend our analysis to the interacting case. \subsection{Ideal gas: test of the classical field approximation} It is useful to rewrite (\ref{eq:contII}) as an integral over the whole space by introducing the sign function $s(x)$. One then recognizes two closure relations on $\mathbf{r}$ and $\mathbf{r'}$ and obtains: \begin{equation} \label{eq:cinfcomp} c_\infty = 2 \langle \phi | s(x) \, \frac{1}{z^{-1} e^{\beta h_1}-1} \, s(x)|\phi \rangle \,. \end{equation} Correspondingly, in the classical field limit: \begin{equation} c_\infty^{\rm class} = 2 \langle \phi | s(x) \, \frac{k_BT }{h_1-\sum_\alpha \frac{\hbar \omega_\alpha}{2}} \, s(x)|\phi \rangle \,. \label{eq:cinf_class_0} \end{equation} Inserting a closure relation on the eigenstates of the harmonic oscillator $|n\rangle$ in (\ref{eq:cinfcomp}), we are then led to calculate the matrix elements in one dimension \begin{equation} {}_x\langle 0| s(x) |n\rangle_x = \left( \frac{2\hbar} {m \omega_xn}\right)^{1/2} \phi_{0}^x(0) \phi_{n-1}^x(0) \,. \end{equation} To obtain this result, we introduced the raising operator $a_x^\dagger$ of the harmonic oscillator along $x$ and we evaluated the matrix element ${}_x \langle 0| [s(x),a_x^\dagger ] |n-1\rangle_x$ in two different ways. First, it is equal to $n^{1/2} {}_x \langle 0 |s(x)|n\rangle_x$ since $a_x^\dagger |n-1\rangle_x = n^{1/2} |n\rangle_x$. Second, it can be deduced from the commutator $[Y(x),a_x^\dagger]=[\hbar/(2m\omega_x)]^{1/2} \delta(x)$ where $Y(x)$ is the Heaviside distribution. From the known values of $\phi_n^x(0)$ (see e.g.
\cite{VarennaYvan}), we thus obtain: \begin{equation} c_\infty = \frac{4}{\pi} \sum_{m \in \mathbb{N}} \left[ e^{(2m+1)\eta_x}-1\right]^{-1} \frac{(2m)!}{2^{2m}(2m+1)(m!)^2} \,. \label{eq:clim_occ} \end{equation} The equivalent for the classical field is readily computed and we obtain: \begin{equation} c_\infty^{\rm class} = \frac{2 \ln 2}{\eta_x} \end{equation} showing that the classical field approximation gives the right answer for the dominant contribution to our observable. Moreover, going to the first order beyond the classical field approximation, that is, including the $-1/2$ term in the expansion $1/[\exp(u)-1]=u^{-1} -1/2 + O(u)$, equation (\ref{eq:cinfcomp}) readily gives the term $-1$ in (\ref{eq:euler}). In what follows we will use the classical field approximation to treat the interacting case. \subsection{Classical field simulations for the interacting gas} In Fig.~\ref{fig:int} we show results of a classical field simulation in the presence of interactions for two different atom numbers (blue circles and black triangles). The non-interacting case for one atom number (red circles and red curve) is shown for comparison. We note that the assumption of an ideal Bose gas is nowadays realistic: Recently, the use of a Feshbach resonance has made it possible to reach a scattering length of $a=0.06$ Bohr radii \cite{Inguscio}. We estimate from the Gross-Pitaevskii equation for a pure condensate that interactions are negligible if $\frac{1}{2} N g \int |\phi|^4 \ll \hbar \omega_{\rm min}$ where $\omega_{\rm min}$ is the smallest of the three oscillation frequencies $\omega_\alpha$. For the parameters of Fig.~\ref{fig:int} (red curve) this results in the well-satisfied condition $N \ll 10^5$. In the presence of repulsive interactions, the peak of fluctuations in the particle number difference at $T<T_c$ is still present, approximately in the same position, but its amplitude is strongly suppressed with respect to the ideal gas case. Another notable effect is that the dependence of the curve on the atom number is almost suppressed in the interacting case. To compute the normally ordered contribution in (\ref{eq:prima}) we generate 800 stochastic fields in the canonical ensemble by sampling the Glauber-P function, which we approximate by the classical distribution $P\propto \delta\left( N - \int |\psi|^2 \right) \exp\{-\beta E[\psi,\psi^*]\}$ where $E[\psi,\psi^*]$ is the Gross-Pitaevskii energy functional \begin{equation} E[\psi,\psi^*]=\int \psi^* h_1 \psi + \frac{g}{2} |\psi|^4 \,, \end{equation} where the coupling constant $g=4\pi \hbar^2 a/m$ is proportional to the $s$-wave scattering length $a$. The approximate Glauber-P function is sampled by a Brownian motion simulation in imaginary time \cite{Mandonnet}. The simulation results are plotted as a function of $T/T_c^{\rm class}$ where the transition temperature in the classical field simulations, extracted by diagonalization of the one-body density matrix, slightly differs from $T_c$ given by (\ref{eq:Tc}): $T_c^{\rm class} = 1.15 \, T_c$. \begin{figure}[htb] \centerline{\includegraphics[width=7cm,clip=]{fig3.eps}} \caption{(Color online) Symbols: Classical field simulations for $N=17000$ (circles) and $N=6000$ (triangles) ${}^{87}$Rb atoms with interactions (blue and black), and without interactions for comparison (red). Red solid line: analytical prediction (\ref{eq:result}) in the non-interacting case. Blue and black solid lines: analytical prediction (\ref{eq:analy_int}) in the interacting case.
The oscillation frequencies along $x,y,z$ are $\omega_\alpha /2\pi =(234,1120,1473)$~Hz. For the interacting case the $s$-wave scattering length is $a=100.4$ Bohr radii. \label{fig:int} } \end{figure} \subsection{Analytical treatment for the interacting gas} In the interacting case, we perform the same splitting as in (\ref{eq:sepa}) except that now the condensate field $\psi_0=\langle N_0 \rangle^{1/2} \phi$ solves the Gross-Pitaevskii equation \begin{equation} (h_1 + g \psi_0^2) \psi_0 = \mu \psi_0 \,. \label{eq:gpe} \end{equation} For the dominant contribution of $g^{(2)}$ to the signal, for $T<T_c$, we then obtain \begin{eqnarray} g_{II}^{(2)}(\mathbf{r},\mathbf{r'})&=& 2 \langle N_0 \rangle \phi(\mathbf{r})\phi(\mathbf{r'}) \left[ \langle \Lambda^\dagger(\mathbf{r'})\Lambda(\mathbf{r}) \rangle + \langle \Lambda(\mathbf{r'})\Lambda(\mathbf{r}) \rangle \right] \nonumber \\ &+& \langle N_0 \rangle \left[ \phi^2(\mathbf{r}) \langle \Lambda^\dagger(\mathbf{r'})\Lambda(\mathbf{r'}) \rangle + \mathbf{r} \leftrightarrow \mathbf{r'} \right] \label{eq:g2int} \,, \end{eqnarray} where we have neglected fluctuations of $N_0$, set $a_0 = \langle N_0 \rangle^{1/2} e^{i\theta}$ and introduced $\Lambda(\mathbf{r}) = e^{-i\theta} \delta \psi(\mathbf{r})$ \cite{CastinDum}. The second line in (\ref{eq:g2int}) brings no contribution to the signal for symmetry reasons. Note that the terms in $g^{(2)}$ that are cubic in the condensate field vanish since $\langle \Lambda(\mathbf{r}) \rangle=0$. In the Bogoliubov \cite{TestQMC} and classical field approximation, the non-condensed field $\Lambda$ has an equilibrium probability distribution \begin{equation} P(\Lambda,\Lambda^\ast) \propto \exp \left\{-\frac{\beta}{2} \int (\Lambda^*, \Lambda) \eta \mathcal{L} \left( \begin{array}{l} \Lambda \\ \Lambda^* \end{array}\right) \right\} \end{equation} where the matrix $\eta \mathcal{L}$ is given in \cite{cartago}. Splitting $\Lambda$ into its real and imaginary parts $\Lambda_R$ and $\Lambda_I$, which turn out to be independent random variables, we then obtain the probability distribution for $\Lambda_R$: \begin{equation} P(\Lambda_R) \propto \exp \left\{-\beta \int \Lambda_R \mathcal{H} \Lambda_R \right\} \end{equation} with \begin{equation} \mathcal{H} = h_1 + 3 g \psi_0^2 -\mu \,. \end{equation} The relevant part of $g_{II}^{(2)}$, first line in (\ref{eq:g2int}), can then be expressed in terms of $\Lambda_R$ only. Inserting a closure relation on the eigenstates of $\mathcal{H}$ and proceeding as we have done to obtain (\ref{eq:cinfcomp})-(\ref{eq:cinf_class_0}), we then obtain \begin{equation} c_\infty^{\rm class} = 2 k_BT \langle \phi |s(x) \frac{1}{\mathcal{H}} s(x)|\phi \rangle \label{eq:cinf_class_int} \end{equation} which is the equivalent of (\ref{eq:cinf_class_0}) for the interacting gas. We then have to solve the equation \begin{equation} \mathcal{H} \chi = s(x) \phi \,. \label{eq:surchi} \end{equation} In the absence of the sign function the solution $\phi_a$ of (\ref{eq:surchi}) is known \cite{Lewenstein,CastinDum}: \begin{equation} \phi_a(\mathbf{r})= \frac{\partial_{\langle N_0 \rangle} \psi_0(\mathbf{r}) }{{\langle N_0 \rangle}^{1/2} \mu'(\langle N_0 \rangle)} \end{equation} where $\mu'$ is the derivative of the chemical potential with respect to $\langle N_0 \rangle$. This can be obtained by taking the derivative of the Gross-Pitaevskii equation (\ref{eq:gpe}) with respect to $\langle N_0 \rangle$. In the presence of $s(x)$, the spatially homogeneous case may be solved exactly.
One finds that the solution $\chi$ differs from $s(x) \phi_a$ only in a layer around the plane $x=0$ of width given by the healing length $\xi$. In the trapped case, in the Thomas-Fermi limit where the radius of the condensate is much larger than $\xi$, we reach the same conclusion and we use the separation of length scales to calculate $\chi$ approximately. Setting \begin{equation} \chi(\mathbf{r}) = f(\mathbf{r}) \phi_a(\mathbf{r}) \end{equation} we obtain the (still exact) equation \begin{equation} -\frac{\hbar^2}{2m} \frac{\phi_a}{\phi} \Delta f - \frac{\hbar^2}{m} \frac{{\bf grad}\,\phi_a}{\phi}\cdot {\bf grad}\, f + f = s(x). \end{equation} We expect that $f$ varies rapidly, that is over a length scale $\xi$, in the direction $x$ only, so that we take $\Delta f\simeq \partial_x^2 f$ and we neglect the term involving $ {\bf grad}\, f$. Also $f$ deviates significantly from $s(x)$ only at a distance $\lesssim \xi$ from $x=0$, so that $\phi_a/\phi$ may be evaluated at $x=0$ only. This leads to the approximate equation \begin{equation} -\frac{1}{\kappa^2(y,z)} \partial_x^2 f + f = s(x), \label{eq:f} \end{equation} with \begin{equation} \frac{\hbar^2 \kappa^2(y,z)}{2 m} = \frac{\phi(0,y,z)}{\phi_a(0,y,z)} \simeq 2(\mu - U) \end{equation} where $U$ is the trapping potential and the approximation of $\kappa$ was obtained in the Thomas-Fermi approximation $g \psi_0^2 \simeq \mu - U$. Equation (\ref{eq:f}) may be integrated to give \begin{equation} \chi(\mathbf{r}) \simeq s(x) \phi_a(\mathbf{r}) \left[1-e^{-\kappa(y,z)|x|}\right]. \end{equation} Neglecting the deviation of $\chi$ from $s(x) \phi_a$ in the thin layer around $x=0$, we finally set \begin{equation} \chi(\mathbf{r}) \simeq s(x) \phi_a(\mathbf{r}) \end{equation} and obtain the simple result \begin{equation} c_\infty^{\rm class} \simeq \frac{k_B T}{\langle N_0\rangle \mu'(\langle N_0\rangle)}. \end{equation} For harmonic trapping where $\mu$ scales as $\langle N_0\rangle^{2/5}$, and assuming that $\langle N_0\rangle$ is well approximated by the ideal gas formula (\ref{eq:lt_nat}), we then obtain: \begin{equation} \label{eq:analy_int} \mbox{Var}\, (N_L - N_R) \simeq \langle N\rangle \left[1 + \frac{k_B T_c}{\frac{2}{5} \mu(\langle N\rangle)} \left(1-\frac{T^3}{T_c^3}\right)^{3/5} \frac{T}{T_c} \right]. \end{equation} The analytic prediction (\ref{eq:analy_int}) is plotted as a full line (black and blue for two different atom numbers) in Fig.~\ref{fig:int}. We note good agreement with the numerical simulations. From (\ref{eq:analy_int}) we can extract the position of the maximum in the normalized fluctuations: \begin{equation} \left(\frac{T}{T_c}\right)_{\rm max} = \frac{980^{1/3}}{14} \simeq 0.709 \end{equation} as well as their amplitude: \begin{equation} [\mbox{Var}\, (N_L - N_R)]_{\rm max} \simeq \langle N\rangle \left[1+ 1.36 \frac{k_B T_c}{ \mu(\langle N\rangle)}\right]. \label{eq:result_int} \end{equation} Note the very weak dependence of $k_B T_c/\mu(\langle N\rangle)$ on the atom number, scaling as $\langle N\rangle^{-1/15}$ \cite{hydro}. We point out that our analysis in the interacting case is quite general and can be applied to atoms in any even trapping potential provided that the potential does not introduce a length scale smaller than the healing length $\xi$.
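As a minimal numerical illustration (assuming SciPy; not part of the derivation), the position and amplitude of this maximum follow from maximizing $(1-t^3)^{3/5}\,t$ with $t=T/T_c$:

\begin{verbatim}
# Sketch: maximum of Var/<N> = 1 + C*(1-t^3)^(3/5)*t,
# with C = k_B*T_c/((2/5)*mu), so the prefactor is 2.5*max.
import numpy as np
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda t: -(1 - t**3)**0.6*t,
                      bounds=(0, 1), method='bounded')
print(res.x, (5/14)**(1/3))   # both ~0.709, the quoted (T/T_c)_max
print(2.5*(-res.fun))         # ~1.36, the quoted amplitude prefactor
\end{verbatim}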
\subsection{First quantum corrections to the classical field} \label{sec:quantcorr} Expanding the non-condensed fields $\Lambda(\mathbf{r})$ and $\Lambda^\dagger(\mathbf{r})$ over Bogoliubov modes: \begin{equation} \left(\begin{array}{c}{\Lambda(\mathbf{r})} \\ {\Lambda}^\dagger(\mathbf{r})\end{array}\right)= \sum_{j} \: b_j \left(\begin{array}{c}{u_j(\mathbf{r})} \\ {v_j(\mathbf{r})}\end{array}\right)+ b_j^\dagger \left(\begin{array}{c}{v_j^*(\mathbf{r})} \\ {u_j^*(\mathbf{r})}\end{array}\right) \label{eq:dev} \end{equation} and using (\ref{eq:g2int}), the coefficient $c_\infty$ giving the dominant contribution (\ref{eq:beat}) to the signal for $T<T_c$ can be split as $c_\infty=c_\infty^{\rm th}+c_\infty^{0}$, with the thermal contribution \begin{equation} c_\infty^{\rm th}= 2 \sum_j \bar{n}_j |\langle \phi | s(x) (|u_j \rangle + |v_j \rangle)|^2 \geq 0 \label{eq:cinfth_uv} \end{equation} where $\bar{n}_j=\langle b_j^\dagger b_j \rangle =1/[\exp(\beta \epsilon_j) -1]$, $\epsilon_j$ being the energy of the Bogoliubov mode $j$, and the zero-temperature contribution \begin{equation} c_\infty^{0}= 2 \sum_j \left( |\langle \phi | s(x) |v_j \rangle|^2 + \langle \phi |s(x)| u_j \rangle \langle v_j |s(x) |\phi \rangle \right) \,. \end{equation} Performing the classical field approximation, that is, setting $\bar{n}_j=k_B T/\epsilon_j$ in (\ref{eq:cinfth_uv}), exactly gives the expression (\ref{eq:cinf_class_int}) of $c_\infty^{\rm class}$ \cite{equivalence}. Introducing the first quantum correction $-1/2$ to the occupation number $\bar{n}_j$ and including the quantum contribution, we obtain the first quantum correction to $c_\infty^{\rm class}$ in the interacting case, $\delta c_\infty = c_\infty^0 + \delta c_\infty^{\rm th}$: \begin{equation} \delta c_\infty = \sum_j \left( \langle \phi | s(x) |v_j \rangle \langle v_j | s(x) |\phi \rangle - \langle \phi | s(x) |u_j \rangle \langle u_j | s(x) |\phi \rangle \right) \,. \end{equation} Using the closure relation $\sum_j |u_j\rangle \langle u_j| - |v_j\rangle \langle v_j| = 1-|\phi \rangle \langle \phi|$ \cite{CastinDum}, together with $s^2(x)=1$ and the fact that $\langle \phi | s(x)|\phi\rangle=0$ by parity, we obtain \begin{equation} \delta c_\infty = -1 \,. \end{equation} This exactly corresponds to the first correction in (\ref{eq:euler}) for the ideal gas. We conclude that also in the interacting case, the first quantum correction to $c_\infty^{\rm class}$ has the effect of changing the shot noise term (the first term equal to one) in the square brackets of (\ref{eq:analy_int}) into $1-\langle N_0\rangle/\langle N \rangle$.
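This result can be checked explicitly for the ideal gas, where $g=0$ implies $v_j=0$ and the $u_j$ are the excited oscillator states, so that $\delta c_\infty$ reduces to $-\sum_n |\langle \phi|s(x)|n\rangle|^2$; the series can be summed numerically using the one-dimensional matrix element calculated above. A minimal sketch (assuming NumPy and SciPy):

\begin{verbatim}
# Ideal-gas check of delta_c_infty = -1 (hbar = m = omega_x = 1).
# From <0|s(x)|n> = sqrt(2/n)*phi_0(0)*phi_{n-1}(0), one gets
# <0|s(x)|n>^2 = (2/pi)*C(2m,m)/(4^m*(2m+1)) for n = 2m+1,
# and zero for even n by parity.
import numpy as np
from scipy.special import gammaln

m = np.arange(0, 400000)
log_ratio = gammaln(2*m + 1) - 2*gammaln(m + 1) - m*np.log(4.0)
terms = (2/np.pi)*np.exp(log_ratio)/(2*m + 1)
print(-terms.sum())   # -> -1 from above, slowly (terms decay as m**-1.5)
\end{verbatim}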
\section{Conclusion} We have studied the fluctuations of the difference in the number of particles in the left ($x<0$) and right ($x>0$) halves of a three-dimensional harmonically trapped Bose gas as a function of the temperature, across the critical temperature for condensation. Both for the ideal gas and the interacting gas, the fluctuations are weakly super-Poissonian for $T>T_c$. If one lowers the temperature from $T_c$ down to zero, the fluctuations increase, reach a maximum and then decrease again as the non-condensed fraction vanishes. We have solved this problem analytically for the ideal gas case, and we have found an approximate solution in the interacting case in the Thomas-Fermi limit when the temperature is larger than the quantum of oscillation in the trap. Remarkably, the local density approximation fails for this problem for the ideal gas. On the contrary, we show that the classical field approximation correctly gives the high temperature contribution to the fluctuations in the particle number difference both for the ideal gas and the interacting gas. For a large ideal gas, the maximum of the normalized fluctuations $\mbox{Var}(N_L-N_R)/\langle N\rangle$ is located at $T/T_c=2^{-2/3}$ independently of the trap oscillation frequencies, and its amplitude approximately scales as $N^{1/3}\bar{\omega}/\omega_x$. For the interacting case in the Thomas-Fermi regime, the maximum of the fluctuations in the relative particle number persists at approximately the same value of $T/T_c$, but its amplitude is strongly reduced, as is its dependence on $N$, which scales as $N^{-1/15}$. \begin{figure}[t!b] \centerline{\includegraphics[width=7cm,clip=]{fig4.eps}} \caption{The measurement can be seen as a balanced homodyne detection of the non-condensed field where the condensate acts as a local oscillator. \label{fig:homodyne} } \end{figure} Finally, we give a physical interpretation of the ``bump'' in fluctuations for $T<T_c$ as due to a beating between the condensate mode $\phi(\mathbf{r})$ and the non-condensed modes $\delta \psi(\mathbf{r})$, as is apparent in Eqs.~(\ref{eq:gIIid},\ref{eq:g2int}). For symmetry reasons, only the antisymmetric component of the non-condensed field $\delta \psi_A(x)=-\delta \psi_A(-x)$ contributes to $\mbox{Var}(N_L-N_R)$. The measurement can then be seen in a pictorial way as a balanced homodyne detection of the non-condensed field where the condensate field acts as a local oscillator (Fig.~\ref{fig:homodyne}). Since one or the other of these two fields vanishes for $T$ tending to $0$ or $T_c$, the beating effect, and thus $\mbox{Var}(N_L-N_R)$, are obviously maximal at some intermediate temperature. \acknowledgments We thank J. Reichel, J. Est\`eve, K. Maussang and R. Long for stimulating discussions. The teams of A.S. and Y.C. are parts of IFRAF. L. Y. acknowledges financial support from the National Basic Research Program of China (973 Program) under Grant No. 2006CB921104.
\section{Introduction} The Jahn-Teller effect of impurity ions with a degenerate ground state in semiconductors has been well known for a long time. Most of its theoretical treatments were based on crystal field theory. Currently, the class of diluted magnetic semiconductors is attracting renewed interest in connection with the search for new materials for spintronics applications. For example, high Curie temperatures were predicted in GaN:Mn. \cite{Dietl01} Corresponding experimental \cite{Hidenobu02} or ab-initio studies \cite{Sato03} seemed to confirm these predictions. However, the experimental results are strongly disputed, since they might be caused by small inclusions of secondary phases. \cite{Sarigiannidou06} Moreover, most of the previous ab-initio studies obtained a partially filled band of only one spin direction at the Fermi level (half-metallic behavior), which will be shown to be an artifact of those calculations. In fact, the Mn ion changes its valence in the chemical series from GaAs:Mn, via GaP:Mn, to GaN:Mn. \cite{Schulthess06} It is Mn$^{2+}$ in GaAs:Mn (for a sufficiently high Mn concentration),\cite{Jungwirth07} which leads to hole doping, but it remains Mn$^{3+}$ in GaN:Mn. The Jahn-Teller effect is crucial to stabilize Mn$^{3+}$. As will be shown below, only the combined treatment of the Jahn-Teller effect and strong electron correlations leads to the correct electronic structure. The electron correlations turn out to be the leading interaction, but the Jahn-Teller effect is necessary to break the symmetry. We present here a combined ab-initio and crystal field theory of magnetic ions in II-VI or III-V semiconductors. As representative examples we treat the 3$d^4$ ions Mn$^{3+}$ in GaN and Cr$^{2+}$ in ZnS. The first one is chosen because of its current interest for spintronics applications, and the second one since it is a very well studied model system for the Jahn-Teller effect of $d^4$ ions. \cite{Vallin70} Our ab-initio results are in good agreement with the experimental data, but only if we properly include the effects of the electron correlations in the 3$d$ shell. For that purpose we use the LSDA+$U$ method. The resulting electronic structure corrects previous electronic structure calculations (which did not take into account the combined effect of Jahn-Teller distortion and Coulomb correlation, neither for Mn-doped GaN \cite{Sanyal03,Kulatov02} nor for Cr-doped ZnS \cite{McNorton08,Tablero06}) in a dramatic way: instead of half-metallic behavior we obtain an insulating ground state with a considerable excitation gap. Similar results for GaN:Mn were reported earlier, but using different methods than in our study. \cite{Schulthess06,Stroppa09} In a second step, to make contact with the traditional literature on that subject, we connect our ab-initio results with crystal field theory. We obtain the complete set of crystal field parameters in good agreement with previous optical measurements.\cite{Vallin70,Kaminska79,Wolos04} We treat the host crystals in the zinc-blende phase. Both magnetic ions are in the 3$d^4$ configuration. The electronic level is split by a cubic crystal field created by the first nearest neighbors (ligands) of the transition metal ion. But it remains partially filled, and the Jahn-Teller effect may occur, which induces a splitting of the partially filled level due to displacements of the ligands around the transition metal ion. The local symmetry of the crystal is reduced and the total energy of the supercell is minimized. The energy gain is denoted as $E_{JT}$.
\section{Method of calculation} Our calculations are performed using the full potential local orbital (FPLO)\cite{Koepernik99} method with the LSDA+$U$ (local spin density approximation with strong Coulomb interaction)\cite{Anisimov91} approximation in the atomic limit scheme. The lattice constants are optimized for the pure semiconductors using the LSDA method, $a_0=4.48$ \AA \ for GaN and $a_0=5.32$ \AA \ for ZnS. To study the Jahn-Teller effect, we use a supercell of 64 atoms in the zinc-blende phase, and a $4\times4\times4$ k-point mesh. The LSDA+$U$ parameters are introduced as Slater parameters: for Mn (see Ref.\ \onlinecite{Anisimov91}), $F^2=7.41$ eV, $F^4=4.63$ eV and for Cr (see Ref.\ \onlinecite{Korotin98}), $F^2=7.49$ eV, $F^4=4.68$ eV. The $F^0$ parameter is equal to $U$, and is chosen to be 4 eV. This value is in good agreement with other works and it corresponds to the value which gives the maximum splitting of the triplet state in $D_{2d}$ symmetry. We have verified that our results are not very sensitive to the actual choice of the $U$ parameter. The value $U=4$ eV gives representative results for a rather large range of $U$ parameters, ranging from 3 to 8 eV. \begin{figure}[h] \includegraphics[scale=0.12]{fig1} \caption{(Color online) Schematic drawing of a tetragonal distortion from $T_d$ (red spheres) to $D_{2d}$ (green spheres).}\label{f1} \end{figure} The transition metal ions (Mn or Cr) substitute for one atom in the center of the crystal, so that there is a complete tetrahedron of ligands around the magnetic ion. Without distortion, the tetrahedron is in the $T_{d}$ symmetry group (cubic). By symmetry, there are two tetragonal (point groups $C_{2v}$ and $D_{2d}$) and one trigonal ($C_{3v}$) possible Jahn-Teller distortions. We have verified that the largest energy gain is obtained with the pure tetragonal distortion where the symmetry reduces from $T_{d}$ to $D_{2d}$. That is in agreement with a previous study of the Jahn-Teller effect in GaN:Mn where, however, the electron correlation had not been taken into account. \cite{Luo05} The schematic displacement of the ligands is represented in Fig.~\ref{f1}; it is defined by $\delta_x=\delta_y \neq \delta_z$. \section{Ab-initio results} \begin{figure}[h] \subfloat[GaN:Mn]{% \includegraphics[scale=0.30]{fig2a}} \hspace{1cm} \subfloat[ZnS:Cr]{% \includegraphics[scale=0.30]{fig2b} } \caption{Partial density of states (DOS) of the two DMS. The results were obtained with the LSDA+$U$ method ($U=4$ eV). The dashed line represents the partial $3d$ DOS of the transition metal ion and the solid line represents the total DOS. The Fermi level is set to 0 eV.}\label{f2} \end{figure} To study the Jahn-Teller distortion we use a supercell of 64 atoms. We performed a series of calculations with different displacements of the ligands around the impurity in order to find the configuration which minimizes the total energy of the supercell. The preferred configuration is of $D_{2d}$ symmetry and the results for the density of states (DOS) are presented in Fig. \ref{f2}. For Mn-doped GaN, the distance between Mn and N is $1.942$ \AA \ (with cubic symmetry, it is $1.937$ \AA) with $\delta_x=1.68$ pm and $\delta_z=1.76$ pm. The Jahn-Teller effect induces a lowering of the total energy by 38.13 meV and a splitting of the triplet state by 0.81 eV. For Cr-doped ZnS, the results are similar: the distance Cr-S is $2.314$ \AA \ ($2.304$ \AA \ without Jahn-Teller effect), with $\delta_x=3.36$ pm and $\delta_z=-0.32$ pm.
The total energy decreases by $58.1$ meV and the splitting of the triplet level is $1.44$ eV. Without $U$, in the LSDA method, we also find a Jahn-Teller effect for GaN:Mn, but the energy gain is smaller by nearly two orders of magnitude, only 0.71 meV. In the LSDA method all the 3$d$ levels are located in the gap with a rather small admixture of the $2p$ orbitals. Without the Jahn-Teller effect, there is a small band of $t_{2g}$ character which is partially filled and one finds half-metallic behavior. The Hubbard correlation opens up a considerable gap. The empty state (singlet) is mainly of 3$d$ character for the two compounds, and the doublet just below the Fermi level is mainly of 2$p$ character originating from ligand orbitals. That change of orbital character arises because the LSDA+$U$ method pushes the occupied 3$d$ levels much lower in energy than in the LSDA method. As a consequence, the transition to the first excited state has a considerable $p$-$d$ character and should be visible as an optical interband transition. That allows a reinterpretation of the optical transition at 1.4 eV (for GaN:Mn) which is usually considered a pure $d$-$d$ transition. \cite{Wolos04} In agreement with a proposal of Dietl, it corresponds to a transition from the $d^4$ configuration to $d^5$ plus a ligand hole. \cite{Dietl08} There are two peaks in the unoccupied, minority DOS at about 2.5 eV for GaN:Mn. They correspond to crystal-field-split 3$d$ levels, which were clearly seen in X-ray absorption spectroscopy. Up to now, they were interpreted by means of an LSDA calculation, \cite{Titov05} but our DOS shows that these peaks occur also in the more realistic LSDA+$U$ approach where, however, a detailed calculation of the matrix element effects (which was performed in Ref.\ \onlinecite{Titov05}) is still lacking. For GaN:Mn, the total magnetic moment is equal to $4 \mu_B$, corresponding to $S=2$. That fits well with the $3+$ valence of manganese. The local magnetic moment at the manganese site is slightly enhanced, $M_{Mn}=4.042 \mu_B$, which is compensated by small induced magnetic moments of opposite sign at the neighboring ligands and further neighbors. Another interesting result is presented in Fig. \ref{f3}. In this case we introduced the local Coulomb correlation but no lattice deformation. The local cubic symmetry was, however, broken by the occupation of the triplet state. The calculation still shows an insulating state with nearly the same splitting of the triplet state. This effect is absent within the LSDA method. Therefore, we can say that the splitting of the triplet state is mainly due to the strong correlation of the 3$d$ electrons. The Jahn-Teller distortion introduces a small additional splitting of the triplet state: $60$ meV for Mn-doped GaN and $-11.4$ meV for Cr-doped ZnS. This interpretation is confirmed by a pure LSDA calculation, because it gives an identical value of the splitting with the same ligand coordinates. A negative value of the splitting corresponds to a stretching of the tetrahedron along the z axis (Fig. \ref{f3}(b)). \begin{figure}[h] \subfloat[GaN:Mn]{% \includegraphics[scale=0.30]{fig3a}} \hspace{1cm} \subfloat[ZnS:Cr]{% \includegraphics[scale=0.30]{fig3b} } \caption{DOS resulting from the LSDA+$U$ calculation ($U=4$ eV). The dashed line represents the DOS without Jahn-Teller distortion, and the solid line represents the DOS with the Jahn-Teller splitting obtained by merely breaking the local cubic symmetry (without lattice distortion). The Fermi level is set to 0 eV.
}\label{f3} \end{figure} \section{Ligand field theory} For a deeper understanding of the Jahn-Teller effect we treat it also in ligand field theory. In that theory, the degeneracy of the impurity 3$d$ level is lifted by hybridization with the neighboring ligands. In the following we neglect the electrostatic contributions, which will be shown to be justified for GaN:Mn and to a lesser extent for ZnS:Cr. When the 3$d$ ion is in the center of an ideal tetrahedron, the cubic crystal field splits this level into a triplet state and a doublet state. The splitting between these two levels is denoted $\Delta_{q}$. Then, the Jahn-Teller effect splits the doublet state into two singlet states, and the triplet state into a doublet state and a singlet state. The splitting between the latter is denoted $\Delta_{^5T_2}$ (shown in Fig. \ref{f4}). The local Hamiltonian in ligand field or crystal field theory can be expressed as \begin{equation} \begin{split} H_{CF} & =H_{cub}+H_{tetra} \\ & =B_{4}(O^{0}_{4}+5O^{4}_{4})+(B^{0}_{2}O^{0}_{2}+B^{0}_{4}O^{0}_{4}) \end{split} \label{eq1} \end{equation} where $B_k^q$ and $O_k^q$ are Stevens parameters and Stevens operators, respectively. \cite{Abragam70} The first part represents the Hamiltonian of an ideal tetrahedron and the second describes the linear Jahn-Teller effect. The eigenvalues of $H_{CF}$ correspond to the 3$d$ electronic levels of the magnetic ion (they represent the spectrum). $B_4$ is the parameter of the cubic crystal field, and it is equal to $\Delta_{q}/120$. \begin{figure}[h] \includegraphics[scale=0.17]{fig4} \caption{Schematic multiplet spectrum for a transition metal ion in $d^4$ configuration. From left to right, $^5D$: isolated ion, cubic crystal field splitting, and level splitting due to the Jahn-Teller distortion. }\label{f4} \end{figure} The crystal field Hamiltonian (\ref{eq1}) has the same form for the one-particle problem (with parameters $B_4$, $B_2^0$, and $B_4^0$) or for the 3$d^4$ multiplet (with parameters $\tilde{B}_4$, $\tilde{B}_2^0$, and $\tilde{B}_4^0$). \cite{Abragam70,Kuzian06} The fundamental multiplet state is $^5D$ ($L=2$ and $S=2$). In this fundamental state, the orbital moment for one electron ($l=2$) is equal to the total orbital moment. This particular case implies that the one-electron spectrum is opposite to the multiplet spectrum (e.g., $\tilde{B}_4=-B_4$). In the superposition model \cite{Kuzian06,Bradbury67,Kuzmin91} the crystal field Hamiltonian (\ref{eq1}) can be calculated by adding up the hybridization contributions of all the ligands. Then, the matrix elements of $H_{CF}$ with respect to the projection $m_L$ of the total orbital momentum $L=2$ are given by \begin{equation} \begin{split} V_{m_{L},m'_{L}} =& \sum_{i}\left[ A_{m'_{L},m_{L}}b_{4}(R_{i})Y_{4}^{m_{L}-m'_{L}}(\theta_{i},\phi_{i})\right]\\ &+\sum_{i}\left[B_{m'_{L},m_{L}}b_{2}(R_{i})Y_{2}^{m_{L}-m'_{L}}(\theta_{i},\phi_{i})\right]\\ &+\sum_{i}\left[b_{0}(R_{i})\delta_{m'_{L},m_{L}} \right] \\ \end{split} \end{equation} with \begin{equation} \begin{split} & A_{m'_{L},m_{L}}=\frac{(-1)^{m'_L}5\sqrt{4\pi}}{27}C^{224}_{-m'_{L},m_{L}}C^{224}_{0} \\ & B_{m'_{L},m_{L}}=\frac{(-1)^{m'_L} \sqrt{4\pi}}{\sqrt{5}}C^{222}_{-m'_{L},m_{L}}C^{222}_{0}\\ \end{split} \end{equation} and where $C_{m_{1}m_{2}}^{j_{1}j_{2}J}$ are Clebsch-Gordan coefficients and $Y_{k}^{q}$ are spherical harmonics. The index $i$ labels the ligands. The axis system is defined in Fig. \ref{f5}.
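The cubic part of Eq.~(\ref{eq1}) can be cross-checked with a minimal numerical sketch (assuming NumPy; an illustration added here, not part of the original work): building the Stevens operators for $L=2$ as $5\times5$ matrices shows that $B_4(O_4^0+5O_4^4)$ indeed yields a triplet and a doublet separated by $\Delta_q=120\,B_4$.

\begin{verbatim}
# Stevens operators for L = 2 in the |m> basis (m = -2..2).
import numpy as np

L = 2
m = np.arange(-L, L + 1).astype(float)
I = np.eye(2*L + 1)
Lz = np.diag(m)
Lp = np.diag(np.sqrt(L*(L + 1) - m[:-1]*(m[:-1] + 1)), -1)  # raising op.
Lm = Lp.T                                                   # lowering op.
mp = np.linalg.matrix_power

O40 = (35*mp(Lz, 4) - 30*L*(L + 1)*mp(Lz, 2) + 25*mp(Lz, 2)
       - 6*L*(L + 1)*I + 3*L**2*(L + 1)**2*I)
O44 = 0.5*(mp(Lp, 4) + mp(Lm, 4))
w = np.linalg.eigvalsh(O40 + 5*O44)
print(w)              # [-48 -48 -48 72 72]: a triplet and a doublet
print(w[-1] - w[0])   # 120, i.e. Delta_q = 120*B4 as stated above
\end{verbatim}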
\begin{figure}[h] \includegraphics[scale=0.1]{fig5a} \includegraphics[scale=0.1]{fig5b} \caption{(Color online) Definition of coordinate axes and lattice displacements used in the present work. Example of MnN$_4$. }\label{f5} \end{figure} By identifying the eigenvalues of $H_{CF}$ with those of $V_{m_{L},m'_{L}}$, we obtain the Stevens parameters as functions of the ligand coordinates \begin{equation} \begin{split} &\tilde{B}_4=\frac{-b_4(R)\tilde{Y}_{4}^{4}(\theta_2,\theta_3)}{120}\\ &\tilde{B}_{4}^{0}=\frac{b_4(R)}{84}\left(\frac{\tilde{Y}_{4}^{0}(\theta_2,\theta_3)}{12} +\frac{7\tilde{Y}_{4}^{4}(\theta_2,\theta_3)}{12}\right)\\ &\tilde{B}_2^0=\frac{-2b_2(R)\tilde{Y}_{2}^{0}(\theta_2,\theta_3)}{49} \end{split} \end{equation} with \begin{equation} \begin{split} &\tilde{Y}_{4}^{0}(\theta_2,\theta_3)=6-30(\cos^2\theta_2+\cos^2\theta_3)+35(\cos^4\theta_2+\cos^4\theta_3)\\ &\tilde{Y}_{4}^{4}(\theta_2,\theta_3)=\sin^4\theta_2+\sin^4\theta_3\\ &\tilde{Y}_{2}^{0}(\theta_2,\theta_3)=-2+3\cos^2\theta_2+3\cos^2\theta_3\\ &b_2(R)=\frac{\hbar^4R^3_d}{ \Delta_{pd}m^2R^7}(\eta^2_{pd\sigma}+\eta^2_{pd\pi})\\ &b_4(R)=\frac{9\hbar^4R^3_d}{5 \Delta_{pd}m^2R^7}(\eta^2_{pd\sigma}-\frac{4}{3}\eta^2_{pd\pi})\\ \end{split} \end{equation} where $b_2(R)$ and $b_4(R)$ are expressed by means of the Harrison parametrization of the $p$-$d$ hopping. The values $\eta_{pd\sigma}=-2.95$ and $\eta_{pd\pi}=1.36$ are extracted from Harrison's book (first edition). \cite{Harrison80} In the case of $D_{2d}$ symmetry, the distances $R$ between the magnetic ion and the four ligands are identical. $\Delta_{pd}$ is the charge transfer energy. It is treated as an adjustable parameter in this theory. \begin{table}[h] \caption{Summary of parameters.}\label{t1} \begin{tabular}{ccccc} \hline \hline &\multicolumn{2}{c}{GaN:Mn}&\multicolumn{2}{c}{ZnS:Cr}\\ &our work&experiment\footnotemark[1]&our work&experiment \\ \hline $\Delta_{^5T_2}$ (meV)&187&111&167.1&213.9\footnotemark[2] , 111.6\footnotemark[3] \\ $E_{JT}$ (meV)&59.9&37&58.4&71.3\footnotemark[2] , 37.2\footnotemark[3] \\ $\tilde{B}_4^0$ (meV)&-1.88&-1.05&-1.26&\\ $\tilde{B}_2^0$ (meV)&-8.24&-3.98&-10.15&\\ $\Delta_q $(eV)&1.4&1.37&0.57&0.58\footnotemark[2] , 0.59\footnotemark[3] \\ $R$ (\AA)&1.932&&2.303&\\ $\delta_x$ (pm)&1.28&&1.65&\\ $\delta_z$ (pm)&3.79&&3.46&\\ \hline $\Delta_{pd}$ (eV)&2.31&&& \\ $b_2/b_4$&&&3.3&\\ \hline \hline \end{tabular} \footnotetext[1]{optical measurements for wurtzite GaN:Mn, Wolos {\em et al.}, Ref.~\onlinecite{Wolos04}.} \footnotetext[2]{optical measurements, Vallin {\em et al.}, Ref.~\onlinecite{Vallin74}.} \footnotetext[3]{optical measurements, Kaminska {\em et al.}, Ref.~\onlinecite{Kaminska79}.} \end{table} Hamiltonian (\ref{eq1}) concerns the electronic degrees of freedom only. The tetragonal distortion results from the coupling to the lattice. As shown in Fig.~\ref{f4}, any tetragonal distortion leads to a splitting of the lowest triplet $^5T_2$ and to an energy gain. This energy gain is linear in the lattice displacements, whereas the vibronic energy loss $E_{vibronic}$ is a quadratic term. One has to minimize the total energy \begin{equation} \Delta_{total}=\frac{2}{3}\left(E(^5B_2)-E(^5E_2)\right)+E_{vibronic} \; . \end{equation} For the sake of simplicity we approximate $E_{vibronic}$ by the breathing mode energy, extracted from the LSDA+$U$ calculation.
We find the lattice contribution for GaN:Mn to be $E_{vibronic} [\mbox{eV}]=36.7708 R'^2$ and for ZnS:Cr $E_{vibronic} [\mbox{eV}]=30.4887 R'^2$ with $R'=\sqrt{\delta_x^2+\delta_y^2+\delta_z^2}$ in \AA. One should note that there is no difference between the LSDA and LSDA+$U$ methods for the lattice energy. Also, we have probed the lattice energy for the more specific tetragonal mode, with no essential difference. The energy gain induced by the Jahn-Teller effect corresponds to: \begin{equation} E_{JT}= min(\Delta_{total}) \; . \end{equation} \begin{table*} \caption{Comparison of parameters. When possible, the complete set of parameters was calculated from the literature values by the relations given in Sec.~V. HSE: Heyd-Scuseria-Ernzerhof hybrid functional, GGA: Generalized Gradient Approximation, GFC: Green-function calculation.} \label{t2} \begin{ruledtabular} \begin{tabular}{ccccccc} &&method&$E_{JT}$ [meV]&$Q_{\theta}$ [\AA]&$V$ [eV/\AA]&$\hbar \omega$ [$cm^{-1}$] \\ \hline GaN:Mn&present work&ligand field theory&59.9&-0.0828&-1.44&579 \\ &&ab-initio, LSDA+$U$&38.13&-0.0562&-1.35&680.5 \\ &literature values&ab-initio, GGA\footnotemark[1]&100&-0.1365&-1.46&454\\ &&ab-initio, HSE\footnotemark[2]&184&&&\\ &&experiment, optics\footnotemark[3]&37&&&\\ \hline ZnS:Cr&present work&ligand field theory&58.4&-0.0834&-1.4&375.8\\ &&ab-initio, LSDA+$U$&58.4&-0.0496&-2.35&631.5\\ &literature values&ab-initio, GFC\footnotemark[4]&185.98&-0.16&-2.32&349.4\\ &&experiment, optics\footnotemark[5]&37.2&-0.279&-0.266&90\\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Luo {\em et al.}, Ref.~\onlinecite{Luo05}.} \footnotetext[2]{Stroppa {\em et al.}, Ref.~\onlinecite{Stroppa09}.} \footnotetext[3]{Wolos {\em et al.}, Ref.~\onlinecite{Wolos04}, wurtzite GaN:Mn.} \footnotetext[4]{Oshiyama {\em et al.}, Ref.~\onlinecite{Oshiyama88}.} \footnotetext[5]{Kaminska {\em et al.}, Ref.~\onlinecite{Kaminska79}.} \end{table*} The results of the ligand field model are presented in Table \ref{t1}. For GaN:Mn, we have exclusively used the ligand hybridization as the microscopic origin for the level splitting. The value of $\Delta_{pd}$ is adjusted such that $\Delta_q$ equals the experimental value. The results are very convincing, which proves that the exceptionally large value of $\Delta_q=1.4$ eV in the case of GaN:Mn is dominantly caused by the hybridization energy with the ligands. On the other hand, the neglect of the electrostatic contribution is certainly an approximation which shows its limits for ZnS:Cr. The procedure described above gives no satisfactory results. We interpret this deficiency as indicating that the electrostatic corrections become more important for ZnS:Cr, which has a much smaller value of $\Delta_q$ indicating a smaller hybridization. As discussed in Ref.~\onlinecite{Savoyant09}, the higher-order crystal field parameters, especially $\tilde{B}_4^0$, are certainly more influenced by farther-reaching neighbors than $\tilde{B}_4$. Therefore, we determine the $b_4$ parameter from the experimentally known value of $\Delta_q$ and introduce a second free parameter ($b_2/b_4$) which is fitted to the LSDA+$U$ energy gain. The parameter set for ZnS:Cr which was found in this manner (Table \ref{t1}) is now in good agreement with previous optical measurements.
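The fitting step for GaN:Mn can be illustrated by a rough numerical sketch (added here; the $d$-state radius $R_d\simeq 0.86$~\AA\ is an assumed input from Harrison's tables, and an ideal tetrahedron with $\cos^2\theta_i=1/3$ is used, for which the geometric factors above give $\Delta_q=(8/9)\,b_4$; the outcome is therefore only indicative):

\begin{verbatim}
# Rough sketch: Delta_pd adjusted so that Delta_q = (8/9)*b4(R) matches
# the experimental 1.4 eV for GaN:Mn. R_d ~ 0.86 A is an assumed value.
hbar2_over_m = 7.62                 # eV*A^2, hbar^2/m for the electron
eta_s, eta_p = -2.95, 1.36          # Harrison p-d hopping coefficients
R, R_d, Dq = 1.932, 0.86, 1.4       # A, A, eV

b4_times_Dpd = (9/5)*hbar2_over_m**2*R_d**3/R**7*(eta_s**2 - (4/3)*eta_p**2)
print((8/9)*b4_times_Dpd/Dq)        # Delta_pd ~ 2.6 eV, the same order as
                                    # the fitted value 2.31 eV of Table I
\end{verbatim}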
\section{Comparison to other works} The lattice displacements obtained from the ligand field theory (Table \ref{t1}) fulfill approximately the relationship $\delta_x=\delta_y=\frac{1}{2}\delta_z$. That is not accidental, since this relationship characterizes a pure tetragonal mode $Q_{\theta}$. In general, up to now, we considered $\delta_x=\delta_y\neq \delta_z$, which corresponds to a mixture of the tetragonal mode and the breathing mode $Q_b$ ($\delta_x=\delta_y=- \delta_z$). The normal coordinates of the two modes are defined as $Q_{\theta}= - 2\sqrt{\frac{2}{3}}(\delta_x+\delta_z)$ and $Q_b=\frac{2}{\sqrt{3}}(2\delta_x-\delta_z)$, respectively. \cite{Sturge67} But it is only the tetragonal mode which leads to a splitting of the $^5T_2$ level. That is described by the Hamiltonian which was used by Vallin {\em et al.} \cite{Vallin70, Vallin74,Kaminska79} to analyze their data obtained by optical measurements and electron paramagnetic resonance (EPR) \begin{equation} H_D=VQ_\theta \epsilon_{\theta}+\frac{\kappa}{2}Q^2_{\theta} \; . \label{eq8} \end{equation} Here, the $V$ parameter is the Jahn-Teller coupling coefficient, and $\epsilon_{\theta}$ is a diagonal $3\times 3$ matrix which describes the splitting of the $^5T_2$ triplet into the upper $^5E_2$ doublet and the lower $^5B_2$ singlet. Its diagonal elements are 1/2 (corresponding to $^5E_2$) and -1 (corresponding to $^5B_2$). The parameter $\kappa$ describes the lattice stiffness and is connected with the phonon frequency by $\omega = \sqrt{\kappa/m}$ where $m$ is the mass of one ligand. Minimizing the energy (\ref{eq8}) we find the relationship $V=\kappa Q_{\theta}$ and the Jahn-Teller energy \begin{equation} E_{JT}=\frac{V}{2}Q_{\theta} \; . \label{for1} \end{equation} Therefore, if we know the Jahn-Teller energy $E_{JT}$ and the lattice distortion $Q_{\theta}$, we may calculate the coupling coefficient $V$ and the phonon frequency $\omega$ and compare them with other works.
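As an illustration (a sketch added here for convenience, using the LSDA+$U$ entries for GaN:Mn from Table~\ref{t2}, with $m$ the mass of one nitrogen ligand):

\begin{verbatim}
# From E_JT = V*Q_theta/2, kappa = V/Q_theta and omega = sqrt(kappa/m):
import numpy as np

E_JT, Q = 0.03813, -0.0562           # eV, A (LSDA+U row for GaN:Mn)
V = 2*E_JT/Q                         # eV/A
kappa = V/Q*1.602176634e-19/1e-20    # J/m^2
m = 14.007*1.66053907e-27            # kg, one N ligand
omega = np.sqrt(kappa/m)             # rad/s
print(V, omega/(2*np.pi*2.99792458e10))  # ~ -1.36 eV/A and ~680 cm^-1
\end{verbatim}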
The comparison is not an ideal one since our LSDA+$U$ results do not correspond to a pure tetragonal distortion. They contain an important part of the breathing mode which comes about due to recharging effects around the impurity, which are not treated in the ligand field theory or in the analysis of the optical data in Refs.\ \onlinecite{Vallin70} and \onlinecite{Kaminska79}. Nevertheless, we compare in Table \ref{t2} our ab-initio and ligand field results with other theoretical or experimental work from the literature. We note a rather good agreement for the Jahn-Teller energy $E_{JT}$ between our work and the optical data of Vallin {\em et al.} for ZnS:Cr. \cite{Vallin70} However, our estimate of the phonon frequency is much larger. A detailed analysis of this discrepancy is beyond the scope of the present work. We just remark that higher phonon frequencies would mean a more profound tendency towards the dynamic Jahn-Teller effect. And there are indeed discussions (not for ZnS:Cr, however) about whether the experimental data of GaN:Mn should be interpreted as a static \cite{Wolos04} or a dynamic \cite{Marcet06} Jahn-Teller effect. Such nonadiabatic effects cannot, however, be treated at all in density-functional-based methods. \section{Discussion and Conclusion} Many ab-initio studies of Mn-doped GaN or Cr-doped ZnS, mostly based on the LSDA approximation, result in a half-metallic behavior. In those calculations, the Fermi level is located in a small 3$d$ band of majority spin within the middle of the gap of the host semiconductor. \cite{Sanyal03,Kulatov02,McNorton08,Tablero06} We have shown that such a result is an artifact of the LSDA method and can be repaired by taking into account the strong Coulomb correlation in the $3d$ shell and the Jahn-Teller effect simultaneously. Allowing for a Jahn-Teller distortion only, but neglecting the Coulomb correlation, leads to a tiny gap at the Fermi level.\cite{Luo05} The opposite procedure of taking into account the Coulomb correlation by the LSDA+$U$ method (i.e.\ the same method which we have used) while remaining in cubic symmetry is insufficient as well.\cite{Sandratskii04} The origin of this deficiency in the given case of a $d^4$ impurity is a threefold degenerate level of $t_{2g}$ symmetry at the Fermi level which is occupied with one electron only. It is sufficient to break the local cubic symmetry in the presence of a strong electron correlation to obtain insulating behavior with a gap of the order of 1 eV. Such an electronic structure is in agreement with the known experimental data for both compounds. Using the LSDA+$U$ method we found the Jahn-Teller energy gain $E_{JT}$ in good agreement with known optical data.\cite{Wolos04,Kaminska79} In addition to the ab-initio calculations, we developed a ligand field theory to reach a deeper understanding of the Jahn-Teller effect. It uses the $p$-$d$ hybridization between the 3$d$ impurity and the ligands as the principal origin of the crystal field splitting.\cite{Kuzian06} This hybridization is parametrized by the Harrison scheme\cite{Harrison80} and the lattice energy as obtained from our ab-initio calculation is added. The resulting energy gain is very close to the ab-initio results, but the ligand field theory in addition allows the determination of the complete set of crystal field parameters. We find good agreement with the experimental parameter set for GaN:Mn.\cite{Wolos04} The comparison is a little speculative since the experimental data were obtained for Mn in wurtzite GaN. It turns out that the Mn impurity leads to an additional tetragonal Jahn-Teller distortion in addition to the intrinsic trigonal deformation of the host lattice. And we find the parameters of this tetragonal distortion close to our results for zinc-blende GaN (which can be synthesized, but for which no measurements exist up to now). We have observed that our ligand field method works better for GaN:Mn than for ZnS:Cr due to the very large cubic crystal field splitting $\Delta_q=1.4$ eV in the former case (in contrast to $\Delta_q=0.58$ eV for ZnS:Cr). Our combined ab-initio and analytical study also allows a comparison with the ``classical'' work on ZnS:Cr. The Jahn-Teller effect of that model compound was already studied in the seventies in great detail by optical and EPR measurements. \cite{Vallin70,Vallin74, Kaminska79} We obtain an energy gain $E_{JT}$ very close to the experimental data but higher phonon frequencies. Finally, we would like to discuss the importance of our results for spintronics applications. First of all, the Jahn-Teller mechanism which we describe is not restricted to the two compounds of our study. It may occur in all cases where the impurity level is well separated from the valence band but only partially filled. The Jahn-Teller effect leads to insulating behavior, which calls into question many previous ab-initio studies in the literature. In the case of GaN:Mn all available information points to the stability of Mn$^{3+}$, which hinders an intrinsic hole doping by Mn.
(Experimentally, one may find Mn$^{2+}$ in electron-doped samples, which is not well suited for spintronics applications either.) One still has the possibility of reaching hole doping via a second impurity besides Mn (co-doping). That would enable the classical mechanism of ferromagnetism in diluted magnetic semiconductors with $S=2$ local moments at the Mn sites. Our work was supported by a PICS project (No. 4767) and we thank Anatoli Stepanov and Andrey Titov for useful discussions.